Coherent Stokes Raman scattering microscopy (CSRS)

Sandro Heuke and Hervé Rigneault
Institut Fresnel, Aix Marseille Univ, CNRS, Centrale Marseille, Marseille, France

Article: 10.1038/s41467-023-38941-4

We report the first implementation of laser scanning coherent Stokes Raman scattering (CSRS) microscopy. To overcome the major challenge in CSRS imaging, we show how to suppress the fluorescence background by narrow bandpass filters and lock-in based demodulation. Near background-free CSRS imaging of polymer beads, human skin, onion cells, avocado flesh and the wing disc of a Drosophila larva is presented. Finally, we explain and demonstrate numerically that CSRS solves a major obstacle of other coherent Raman techniques by sending a significant part (up to 100%) of the CSRS photons into the backward direction under tight focusing conditions. We believe that this discovery will pave the way for numerous technological advances, e.g., in epi-detected coherent Raman multi-focus imaging, real-time laser scanning based spectroscopy or efficient endoscopy.
Conventional bright-field microscopy provides information about the refractive index and absorption properties but cannot elucidate the sample's chemical composition. Infrared absorption and linear Raman scattering can provide the sample's chemical composition 1,2, but they are incompatible with high spatial resolution or real-time imaging. Coherent Raman scattering (CRS) imaging fills this gap by combining chemical sensitivity with signal levels that permit video-rate image acquisition. Well-established CRS microscopy techniques are coherent anti-Stokes Raman scattering (CARS) 3,4 and stimulated Raman scattering (SRS) 5-7. CARS owes its wide-range application to the blue-shifted anti-Stokes radiation, which greatly facilitates its separation from linear fluorescence. When working with near-infrared excitation wavelengths, the blue-shifted CARS radiation is readily detected using the photo-electron multiplier tubes (PMTs) of standard laser scanning microscopes. SRS's popularity arises from the heterodyne signal amplification that frees SRS images from the omnipresent non-resonant four-wave-mixing background present in CARS images 8. SRS also allows for measurements under daylight conditions owing to its modulation and signal detection scheme.
Overshadowed by CARS and SRS until now, there exists a third four-wave-mixing process that can be resonant with vibrational levels, termed coherent Stokes Raman scattering (CSRS, pronounced "SCiSsoRS") 8-12. CSRS, like CARS and SRS, is always present in CRS experiments and also provides chemical sensitivity 13 (see Fig. 1). In analogy to the Stokes emission in linear Raman microscopy, the CSRS radiation (2ω_S − ω_p) is red-shifted with respect to the excitation frequencies of the pump (ω_p) and Stokes (ω_S) beams. Surprisingly, CSRS imaging has never been implemented for laser scanning microscopy (LSM). Presumably, this is due to the high degree of resemblance of CARS and CSRS spectra 13, rendering CSRS, prima facie, either CARS with an added fluorescence background when working with visible light sources, or CARS with a radiation wavelength outside the high-quantum-yield range of common detectors when working with near-infrared (NIR) excitation. CSRS provides, however, some unique properties that are of high interest for imaging.
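As a quick plausibility check of these frequency relations, the following sketch computes the CARS and CSRS radiation wavelengths and the pump-Stokes detuning from photon-energy conservation. The wavelengths are the ones used later in this work; the helper function names are ours.

```python
# Radiated wavelengths for CARS (2*w_p - w_S) and CSRS (2*w_S - w_p),
# obtained from photon-energy conservation, i.e., from inverse wavelengths.

def cars_wavelength_nm(lam_p, lam_s):
    return 1.0 / (2.0 / lam_p - 1.0 / lam_s)

def csrs_wavelength_nm(lam_p, lam_s):
    return 1.0 / (2.0 / lam_s - 1.0 / lam_p)

def raman_shift_cm1(lam_p, lam_s):
    # pump-Stokes detuning in wavenumbers (1/nm = 1e7 1/cm)
    return (1.0 / lam_p - 1.0 / lam_s) * 1e7

# Visible excitation used in the experiments below:
print(csrs_wavelength_nm(445.0, 515.0))   # ~611 nm, matching the 605/620 nm filters
print(raman_shift_cm1(445.0, 515.0))      # ~3055 cm^-1

# NIR excitation used in the simulations:
print(csrs_wavelength_nm(797.0, 1030.0))  # ~1455 nm (quoted as ~1450 nm)
print(cars_wavelength_nm(797.0, 1030.0))  # ~650 nm
```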
First, the CSRS spectrum differs from CARS in the presence of accessible electronic resonances. For example, pre-resonant CSRS will offer complementary information in the application of alkyne-labeled dyes 14 and standard dyes used in microbiology 15. Second, the red-shifted radiation of CSRS becomes an advantage for UV or near-UV excitation, where CARS photons 16 would be too far blue-shifted to be detected efficiently, while any SRS image 17 is likely to be compromised by various artifacts such as multi-photon absorption 18,19. Thus, UV-excited CSRS holds the potential to achieve the highest possible spatial resolution (λ_Stokes/(√8 NA)) in coherent Raman imaging. Third, NIR excitation wavelengths combined with CSRS may allow for deeper tissue imaging due to the reduced scattering and absorption of its radiation 20. Last but most important: due to a modified phase-matching geometry, CSRS microscopy can be configured to radiate more light in the backward direction. This game changer would benefit the investigation of thick samples, real-time spectroscopy, multi-focus imaging, and endoscopy 21. Within this contribution, we want to open up the field of laser scanning CSRS imaging by demonstrating CSRS microscopy within the visible excitation spectrum. To remove the major fluorescence background obstacle, we show how linear fluorescence can be suppressed by combining a set of bandpass filters with a lock-in-based detection scheme. Furthermore, we investigate numerically the spatial radiation behavior of CSRS under NIR excitation, paving the way towards CSRS experiments with efficient epi-detection.
Results and discussion
Experiments
The CSRS signal of biomedical samples is often overwhelmed by linear fluorescence, as the CSRS radiation is red-shifted with respect to the excitation lasers. Time-gating 22, time-resolved detection using streak cameras 23, or polarization filtering can be used to reduce or suppress fluorescence. However, these methods require either a substantial modification of standard coherent Raman microscopes or do not work in the presence of strong fluorescence backgrounds. Here, we exploit the fact that the CSRS signal is spectrally narrow under ps excitation. Thus, the majority of the fluorescence can be readily suppressed by narrow-band filters. Filters with a spectral width below 1 nm are commercially available, but the selection of a specific center wavelength requires expensive custom solutions. Instead, we use a combination of two inexpensive bandpass filters with a spectral width of about 15 nm but with different center wavelengths. In addition, we fine-tune the filter transmissions by tilting them (<20°) with respect to the incident beam, see Fig. 2b. Thus, two tilt-adjusted bandpass filters create a sharp transmission line (FWHM < 3 nm) for CSRS signal collection while rejecting a significant part of the autofluorescence.
As a second method for fluorescence background rejection, we take advantage of the CSRS intensity dependence on both pump and Stokes excitation colors, while linear fluorescence follows the intensity of either the pump or the Stokes laser alone, see Fig. 2c. Consequently, modulating the pump and Stokes beams at f1 and f2 while demodulating the signal at f1-f2 (or f1+f2) yields exclusively nonlinear signals that depend on both excitation colors. The f1-f2 demodulation, therefore, also discriminates the CSRS signal against two-photon excited fluorescence (2PEF) under single-color excitation. It should be noted that the f1-f2 modulation is also sensitive to two-color two-photon fluorescence (2C-2PEF). Nevertheless, we find experimentally that the emission strength of native 2C-2PEF is negligible within our CSRS implementation using visible beams.
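The effect of this double-modulation scheme can be illustrated with a toy numerical model: a signal proportional to I_S²·I_p carries a component at the difference frequency, whereas linear fluorescence terms that follow only one beam do not. This is a minimal sketch with scaled-down (kHz) modulation frequencies standing in for the MHz values used in the experiment.

```python
import numpy as np

fs, T = 200_000, 1.0                    # sample rate (Hz) and duration (s)
t = np.arange(int(fs * T)) / fs
f1, f2 = 2280.0, 3750.0                 # stand-ins for the MHz AOM frequencies

I_s = 0.5 * (1 + np.cos(2 * np.pi * f1 * t))   # modulated Stokes intensity
I_p = 0.5 * (1 + np.cos(2 * np.pi * f2 * t))   # modulated pump intensity

csrs = I_s**2 * I_p             # CSRS scales as I_S^2 * I_p: needs both colors
fluo = 3.0 * I_s + 1.0 * I_p    # linear fluorescence follows one beam or the other

def lockin(sig, f):
    """Magnitude of the lock-in component of sig at frequency f."""
    return abs(np.mean(sig * np.exp(-2j * np.pi * f * t)))

for f, label in [(0.0, "DC"), (f1, "f1"), (f2, "f2"), (abs(f1 - f2), "|f1-f2|")]:
    print(f"{label:>7}: CSRS={lockin(csrs, f):.4f}  fluo={lockin(fluo, f):.4f}")
# Only the difference-frequency channel rejects the fluorescence completely.
```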
For the experimental implementation of CSRS LSM, we chose visible excitation wavelengths at 445 nm (pump) and 515 nm (Stokes) for the following reasons. First, CSRS under near-UV excitation is a potentially important application area, since CARS and SRS encounter experimental difficulties within this spectral range: the CARS signal falls into the UV range, while SRS artifacts are increased due to the possibly high concentration of endogenous chromophores. Second, the red-shifted CSRS radiation can be readily detected by ordinary PMTs. Third, as a stress test: fluorescence artifacts are enhanced compared to near-infrared (NIR) excitation. Thus, our approach will be viable as well for CSRS under NIR excitation if pure CSRS signals can be obtained under VIS excitation.
The experimental setup, the spectral filtering, and the double modulation are schematically shown in Fig. 2a. Our implementation resembles a standard SRS setup with the differences that we use visible excitation wavelengths, we modulate not one but both beams, and the photo-diode is replaced by a PMT connected to a lock-in amplifier. More information about the setup can be found in "Methods: Experimental setup". To quantify the level of fluorescence rejection, we investigated the signal of native olive oil at 2850 cm−1 when blocking the pump or Stokes beams or when the temporal pulse overlap is removed. The output signal of the lock-in is plotted for the demodulation frequencies 0 Hz (DC), f1, f2 and f1-f2 in Fig. 2d. It can be observed that the DC channel contains significant amounts of fluorescence, while this artifact is already reduced within the f1 and f2 channels. Nevertheless, only the difference-frequency channel at f1-f2 approaches zero when the excitation pulses do not overlap in time (Δt ≫ 3 ps). In a second experiment, we imaged with CSRS (demodulated at f1-f2) the interface between olive oil and a 20 μm sized Plexiglas (PMMA) bead to obtain an estimation of the lateral resolution with an excitation objective of NA = 1.45 (see Fig. 2e). From this "knife-edge" CSRS intensity profile, we can infer a lateral resolution below 400 nm. The difference from the expected λ_Stokes/(√8 NA) = 515 nm/[√8 × 1.49] = 120 nm can be attributed to the underfilling of the excitation objective back aperture and the bent oil/bead interface. Having confirmed a highly resolved, fluorescence-free CSRS image contrast, we investigated the suitability of LSM-CSRS for vibrational imaging of various objects featuring non-negligible fluorescence backgrounds. Within Fig. 3, we show the CSRS images of test and biomedical samples demodulated at the DC and f1-f2 frequencies for (non-)overlapping pump and Stokes pulses. The images are organized by the ratio of the CSRS to the fluorescence signal, starting from the highest at the top. Comparing the DC and f1-f2 images in Fig. 3a, it is obvious that narrow spectral filtering is already sufficient for CSRS imaging of polymer beads in oil (see CSRS at DC). The first artifacts become visible in the DC CSRS images of the epithelium and dermis of a 20 μm thick section of human skin (see Fig. 3b, c). For the epithelium, a pronounced fluorescence artifact arises from melanin within the epidermis-dermis junction. Artifacts within the dermis can be attributed to the autofluorescence of collagen and elastin 24. The quantity of fluorescence observed within the DC channel increases stepwise further for CSRS imaging of onion cells, lipid droplets within the flesh of an avocado, and the wing disc of a Drosophila larva. From the second row of Fig. 3, we validate that almost no fluorescence leaks into the f1-f2 CSRS channel.
In the next section, we address a non-intuitive but key feature of CSRS microscopy: the possibility to dramatically increase the backward CSRS radiation, opening the road to effective epi-CSRS detection.
Momentum conservation and simulations
Before entering into the calculations, we want to consider CSRS from a heuristic viewpoint, investigating the momentum conservation laws for CSRS and comparing them to CARS. Under plane-wave illumination, the momentum conservation laws can be written as K = k_p − k_S + k_p − k_aS for CARS 25 and K = k_S − k_p + k_S − k_cS for CSRS, with K, k_p, k_S, k_aS and k_cS representing the wavevectors of the object, the pump (probe) and Stokes beams as well as the anti-Stokes and coherent Stokes radiation, respectively. Note that for homogeneous samples (K = 0) these laws are also referred to as phase-matching conditions and simplify to k_p + k_p = k_S + k_aS (CARS) and k_S + k_S = k_p + k_cS (CSRS). Under focusing conditions, the single wavevectors are replaced by distributions of incident wavevectors over a cap of a sphere. To identify those object frequencies (K) that are effectively probed, every combination of excitation and emission wavevectors must be identified. This operation is equivalent to the convolution of the caps of the illumination and detection Ewald spheres 26. Neglecting polarization effects, the result of this convolution (simplified to 3 points per arc) is shown in 2D within Fig. 4a.
Evidently, there exists no vector combination for epi-scattered CARS photons that covers the origin K(0,0,0) of the object space. Thus, a homogeneous sample, such as olive oil, does not provide any backward CARS radiation. On the contrary, structures that feature high object frequencies, such as small polymer beads or layered materials, generate Epi-CARS radiation. In the past, Epi-CARS was occasionally considered to be a size-selective contrast that would highlight exclusively small objects 27. While this statement holds for the majority of biomedical samples, there do exist large structures, e.g., multi-layered lipids in vesicles, that also emit strong CARS radiation in the backward direction. Hence, it is more appropriate to refer to Epi-CARS as a technique that probes high object frequencies along the z-axis rather than as being size-selective.
Switching the detection wavelength to the red-shifted CSRS radiation changes the covered object support significantly, now including the origin K(0,0,0). Due to the reduced size of the detection wavevector (|k_cS| ≪ |k_aS|), steep incident-angle Stokes vectors, and the pump vector entering as complex conjugated, it is now possible to find vector combinations that cover the origin K(0,0,0). Consequently, even a homogeneous object will radiate considerable amounts of Epi-CSRS. Nevertheless, since the centroid of the Epi-CSRS object support, i.e., the gray cloud within Fig. 4a, does not coincide with the K-space origin K(0,0,0), Epi-CSRS images will also highlight objects containing higher frequencies.
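To make the heuristic argument concrete, the following sketch solves the backward phase-matching condition k_S + k_S = k_p + k_cS for a homogeneous object, under the simplifying assumptions of a constant refractive index and two Stokes rays incident symmetrically at ±θ_S so that their transverse components cancel:

```python
import numpy as np

# Backward phase matching k_S + k_S = k_p + k_cS for a homogeneous object
# (K = 0), assuming a constant refractive index so that |k| is proportional
# to 1/lambda. Two Stokes rays at +/- theta_S cancel transversally, the pump
# propagates on axis, and the CSRS photon is emitted along -z.
lam_p, lam_s = 797.0, 1030.0                   # excitation wavelengths (nm)
lam_cs = 1.0 / (2.0 / lam_s - 1.0 / lam_p)     # ~1455 nm

k_p, k_s, k_cs = 1.0 / lam_p, 1.0 / lam_s, 1.0 / lam_cs
# z-components: 2 k_S cos(theta_S) = k_p - k_cS  (emission along -z)
cos_theta = (k_p - k_cs) / (2.0 * k_s)
print(f"required Stokes angle: {np.degrees(np.arccos(cos_theta)):.0f} deg")  # ~73 deg
# An NA = 1.49 oil objective accepts up to ~80 deg, so this geometry is
# reachable, consistent with the 56.5-80 deg annular Stokes mask used below.
```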
To address the question of how to increase the ratio of backward versus forward CSRS and which object frequencies are most efficiently probed using Epi-CSRS, we performed finite element simulations whose results are summarized in Fig. 4b-e. The equations implemented numerically, as well as the parameters, are found in "Methods: Numerical calculation". From the momentum conservation law and the vector diagrams in Fig. 4a, it is readily comprehensible that a larger wavelength difference between the pump and CSRS wavelength greatly relaxes the necessity for extreme incident illumination angles of the Stokes beam. The wavelength difference between the pump and CSRS radiation is enhanced using NIR instead of VIS excitation wavelengths, which is why we used in our simulations the wavelengths λ_p = 797 nm and λ_S = 1030 nm, which match the 2850 cm−1 Raman shift. For these conditions, the coherent Stokes radiation is observed at λ_cS = 1450 nm. It should be noted that our results equally apply to visible excitation wavelengths using higher excitation angles (or thinner annular masks; see below).
To start with, we computed the radiation patterns of CSRS and CARS of a homogeneous object using an NA of 1.49 (oil immersion), corresponding to a maximum illumination angle of 80°. From Fig. 4b, it is evident that both CARS and CSRS are predominantly forward directed, though the CSRS radiation distribution features a larger radiation cone. Considering the ratio of backward- versus forward-directed photons R_b/f, we find numerically that less than 1 photon in 10^5 is backward-directed for CARS. Note that the momentum conservation law actually predicts R_b/f = 0 for CARS. Thus, the deviation observed must be attributed to the finite number of voxels of the numerical model. For CSRS, R_b/f increases dramatically to about 1 in 100 photons. Since common surfaces within biomedical samples scatter more than 1%, it is still likely that in this high-NA illumination scheme, epi-detected CSRS is dominated by forward-generated CSRS that is back-scattered by linear scattering at interfaces (as in the CARS case). To find an approach that increases the proportion of epi-CSRS radiation, we consider the CSRS vector diagram matching K(0,0,0) on the top left of Fig. 4a. The ratio of backward versus forward radiation is readily increased by reducing the impact of vector combinations probing higher frequencies and favoring those that cover the origin by satisfying k_S + k_S = k_p + k_cS. This enhancement of epi-CSRS radiation can be achieved using an annular illumination of the Stokes beam. Experimentally, such an annular illumination can be generated, without power loss, using two axicons within the Stokes beam path 28,29. Numerically, we restricted the incident angles of the Stokes beam to θ_min = 56.5° and θ_max = 80°, which corresponds to covering 50% of the area of the objective lens' back-focal plane. The pump beam remains a normally focused beam and covers the full lens' back-focal plane. With this Stokes pupil filtering, the ratio of backward to forward radiation increased for CARS to 2 in 10^4 photons, while most of the CSRS radiation is backward directed (R_b/f = 1.5) when the object is homogeneous (see Fig. 4c). For the CARS calculation we considered the annular illumination applied to the pump beam, whereas the Stokes is a conventional focused beam.
As a second important result from the heuristic derivation of the CSRS object support, we found that the presence of high spatial frequencies along K_z increases the amount of backward radiation. To confirm this prediction, we investigated in Fig. 4d, e an object whose nonlinear scatterer density, i.e., the concentration of molecular groups, is modulated along the optical axis as 1 + cos(K_z z), with K_z = 2π/λ_o being the object frequency. We now consider a conventional illumination scheme where both the Stokes and the pump are tightly focused and cover the full back aperture of the objective lens. Figure 4d outlines the radiation behavior of such a z-structured object with K_z = 2π/1 μm. It is found that R_b/f increases to one-fourth for Epi-CSRS while Epi-CARS remains negligibly weak. To identify those object frequencies which are most efficiently probed by Epi-CSRS, we computed R_b/f as a function of K_z. From Fig. 4e, we find that Epi-CSRS peaks at K_z = 2π/1 μm, whereas the Epi-CARS R_b/f still increases at K_z = 2π/0.5 μm, confirming that CARS requires larger K_z, i.e., objects with higher-frequency modulation of the scatterer density, to generate strong Epi radiation.
Thus, we have found that CSRS features non-negligible backward radiation from a homogeneous sample under tight-focusing conditions, while this is not the case for CARS. The amount of backward-radiated CSRS can be further enhanced using a Stokes annular illumination, to the point where it surpasses the forward CSRS radiation.
In conclusion, we have demonstrated the first LSM-CSRS experiment. As the major challenge, we were able to reduce the fluorescence background significantly using a pair of tilted bandpass filters. The remaining fluorescence contribution was removed by intensity modulating the Stokes and pump beams at the frequencies f1 and f2 and a lock-in-based demodulation of the CSRS signal. Taking advantage of the characteristic dependence of CSRS on both excitation colors, near fluorescence-free CSRS images were obtained when demodulating the CSRS signal at f1-f2. Fluorescence-free LSM-CSRS imaging was demonstrated on a variety of samples showing different fluorescence levels, such as polymer beads, epithelium and dermis of human skin, onion cells, avocado flesh, and the wing disc of a Drosophila larva.
Having demonstrated the viability of CSRS imaging, we introduced and quantified numerically how CSRS can be implemented to generate a strong backward-radiated signal with high-NA objective lenses. The unique backward radiation ability of CSRS can be understood by considering the momentum conservation laws for all combinations of the contributing k-vectors, and it cannot be achieved with CARS or SRS. With efficient backward radiation at hand, various coherent Raman experiments become feasible which were impossible before. For example, this is the case for epi-detected confocal multi-focus CSRS, epi-detected LSM-CSRS with a spectrometer at the descanned position, epi-detected CSRS image scanning microscopy, or efficient endoscopy. Thus, we believe that this discovery will open new directions for coherent Raman developments and applications.
Methods
Experimental setup
A Yb-based fiber laser (APE Emerald Engine, 80 MHz, 2-3 ps) is frequency doubled, yielding 7 W of 515 nm output power. Part of this emission is used directly as the Stokes beam to drive the CSRS process. The major part (4 W) of the 515 nm light is employed to pump an optical parametric oscillator (OPO, APE Emerald). The OPO's signal beam is tunable within 660-950 nm and coupled into an external SHG unit (APE HarmoniXX). The latter generates up to 50 mW within the spectral range of 330-475 nm and serves as the pump beam for CSRS. Thus, the 330-475 nm pump combined with the 515 nm Stokes beam allows addressing a Raman shift range from 1630 to 11,000 cm−1. The pump and Stokes beams are superimposed in space and time using a dichroic beam splitter (Semrock, FF470-Di01-25x36) and a delay stage. Both beams are coupled into a home-built laser scanning microscope and focused by a 40× water-immersion objective lens (Nikon, Plan, NA = 1.15) into the sample. The excitation objective lens was replaced by a 60× objective (Nikon, Plan Apo TIRF, NA = 1.45, oil immersion) to generate the bead-oil interface image within Fig. 2. The CSRS radiation is collected by a condenser lens (Nikon, Achr-Apl, NA = 1.4) in the forward direction, spectrally separated from the broadband fluorescence background by means of two tilted bandpass filters (Semrock FF01-620/14-25 + FF01-605/15-25), and detected by a photo-electron multiplier (PMT, Thorlabs, PMT1001). We measured the CSRS and CARS (at 398 nm) radiation strengths for olive oil one after another and found comparable signal levels. To avoid detector saturation during the acquisition of CSRS images, we applied the lowest possible PMT gain, corresponding to an amplification of only 5 × 10^3. For an enhanced suppression of the linear fluorescence background, two acousto-optic modulators (AOM, AA, MT200-A0.5-VIS) were applied to modulate the intensity of the Stokes and pump beams at the frequencies f1 = 2.28 MHz and f2 = 3.75 MHz, respectively. The PMT output was demodulated simultaneously at DC, f1, f2, and f1-f2 = 1.47 MHz using a lock-in amplifier (Zurich Instruments, HF2LI). The lock-in time constant was set to 30 μs. All CSRS images shown were recorded with a pixel dwell time of 40 μs. All samples were investigated in live image acquisition mode; thus, some areas were scanned more than 100 times. We noticed that the fluorescence background signal within the DC channel was reduced over time as a result of photo-bleaching, though the f1-f2 channel remained unaffected, which indicates that our experimental conditions are below the damage threshold of ex vivo samples. Note that the demodulation at f1-f2 only removes the fluorescence background, while the non-resonant four-wave-mixing background 30 of CSRS, which is inherent to all coherent Raman techniques, is still present. Nevertheless, the removal of this non-resonant background could be achieved by heterodyne interference of the CSRS signal with a reference beam at the same wavelength 31, or by Kramers-Kronig or maximum-entropy-based algorithms applied to CSRS spectra 12.
Numerical calculation
In the following, we shall summarize the equations used to generate Fig. 4b-e. The meaning of the variables is summarized in Fig. 5.
The focused field at the sample is given by the angular spectrum representation 32:

(E_x(ρ,φ,z), E_y(ρ,φ,z), E_z(ρ,φ,z))^T = (ikf/2) exp(−ikf) (I_00 + I_02 cos(2φ), I_02 sin(2φ), −2i I_01 cos(φ))^T    (1)

Here f denotes the focal length of the objective lens, and the integrals I_0m are provided by

I_0m = ∫_{θ_min}^{θ_max} E_inc(θ) sin(θ) [cos(θ)]^{1/2} g_m(θ) J_m[kρ sin(θ)] dθ    (2)

where g_m equals 1 + cos(θ), sin(θ) and 1 − cos(θ) for m = 0, 1, 2, respectively. J_m is the mth-order Bessel function, while E_inc is the incoming electric field, which we assumed to be x-polarized and constant within the (annular) aperture angles θ_min ≤ θ ≤ θ_max. The nonlinear polarization at the anti-Stokes and coherent Stokes wavelengths is given by:

P^(3)_aS,a(r) = 3 χ^(3)_abcd(r) E_p,b E*_S,c E_p,d,    P^(3)_cS,a(r) = 3 χ^(3)_abcd(r) E_S,b E*_p,c E_S,d    (3)

where a, b, c, d represent the polarization coordinates x, y, or z. Using an x-polarized excitation, it was noticed that χ^(3)_xxxx dominates all other tensor components even under tight focusing conditions while filling the objective lens homogeneously 32. Nevertheless, for the generation of Fig. 4c, an annular mask with θ_min = 56.5° and θ_max = 80° was applied, which does necessitate the inclusion of other tensor elements. For simplicity, we consider here only isotropic samples, reducing the 81 susceptibility tensor elements to the 21 that are nonzero 30. Within isotropic media, these nonzero elements follow certain symmetry rules, namely χ_1111 = χ_2222 = χ_3333, χ_1122 = χ_1133 = χ_2211 = χ_2233 = χ_3311 = χ_3322, χ_1212 = χ_1313 = χ_2323 = χ_2121 = χ_3131 = χ_3232, and χ_1221 = χ_1331 = χ_2112 = χ_2332 = χ_3113 = χ_3223. Further, χ_1111 = χ_1122 + χ_1212 + χ_1221 holds 30. Within our simulations we set χ_1122 = χ_1212 = χ_1221 = 1 and, hence, χ_1111 = 3. The nonlinear far-field radiation distributions (E_q,R(R,Θ,Φ), E_q,Θ(R,Θ,Φ), E_q,Φ(R,Θ,Φ)) are obtained using a dyadic Green function approach, where q is replaced by aS or cS to calculate either the anti-Stokes or the coherent Stokes radiation. Within the simulations, we segmented the focal volume into (121 × 121 × 121 ≈) 1.77 million elements of a width of 50 nm, equally spaced in the x, y, and z directions. The far-field radiation sphere was discretized into (ΔΘ = 1°, ΔΦ = 2°) 32,400 elements. The coherent (anti-)Stokes radiation was qualified as forward- or backward-directed if falling into the range Θ = 0-80° or Θ = 100-180°, respectively.
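For illustration, the following sketch evaluates the three integrals I_00, I_01, I_02 of Eq. (2) by direct quadrature for a uniform, x-polarized pupil. It covers only the focal-field part of the model (not the Green-function propagation); the n = 1.5 immersion index and the sampling positions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def I0m(m, k, rho, theta_min, theta_max, E_inc=lambda th: 1.0):
    """Focal-field integral I_0m of Eq. (2) for a uniform (annular) pupil."""
    g = {0: lambda th: 1 + np.cos(th),
         1: np.sin,
         2: lambda th: 1 - np.cos(th)}[m]
    integrand = lambda th: (E_inc(th) * np.sin(th) * np.sqrt(np.cos(th))
                            * g(th) * jv(m, k * rho * np.sin(th)))
    value, _ = quad(integrand, theta_min, theta_max)
    return value

lam = 1030e-9                            # Stokes wavelength (m)
k = 2 * np.pi * 1.5 / lam                # wavenumber in an n = 1.5 medium
theta_max = np.radians(80.0)             # NA = 1.49 oil objective
for rho in (0.0, 50e-9, 200e-9):         # radial positions in the focal plane
    print(rho, [round(I0m(m, k, rho, 0.0, theta_max), 4) for m in (0, 1, 2)])
```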
Fig. 1 | Overview of coherent Raman imaging techniques: energy diagrams 33, relative radiation wavelengths, and energy conservation under plane-wave illumination.
Fig. 2 | CSRS experimental implementation and characterization. a Scheme of the CSRS experiment. 1. Yb-fiber laser, 2. optical parametric oscillator (OPO), 3. second harmonic generation (SHG), 4. acousto-optic modulator (AOM), 5. laser scanning microscope (LSM), 6. photo-electron multiplier (PMT), 7. lock-in amplifier. b The CSRS signal is separated from fluorescence by means of two angle-tuned narrow bandpass filters. c Additional suppression of fluorescence is achieved by intensity modulating the Stokes and pump beam at the frequencies f1 and f2, respectively. Fluorescence-free CSRS signal is obtained at f1-f2. d Measured CSRS signal at DC, f1, f2, and f1-f2 frequencies when the pump or Stokes beam is blocked (Ip = 0 or Is = 0) or when their temporal overlap is removed (Δt ≫ 3 ps). The fluorescence is strongly rejected on the f1-f2 time trace and mainly comes from the Stokes beam. e The CSRS intensity profile obtained at f1-f2 at the interface of a PMMA bead and olive oil indicates a lateral resolution of <400 nm.
Fig. 3 | Laser scanning CSRS at 2850 cm−1. The left and right columns show the CSRS image demodulated at the frequencies f1-f2 = 1.47 MHz and 0 Hz (DC), respectively. To estimate the remaining fluorescence level, images without temporal overlap of the pump and Stokes pulses are displayed to the right (Δt ≫ 3 ps). a Mixture of polystyrene (PS, 30 μm) and poly-methyl-methacrylate (PMMA, 20 μm) beads in olive oil. b, c Epithelium and dermis of a 20 μm thick human skin section. d Cells of an onion. e Lipid droplets within the flesh of an avocado. f Wing disc of a Drosophila larva. The insets displayed in the second column are the zoomed "CSRS at f1-f2" regions of interest shown in the first column in (b, d). Pixel dwell time: 40 μs. Image acquisition time: 40 s (1000 × 1000 pixels). The white and green scale bars equal 20 and 5 μm, respectively.
Fig. 4 | Object frequency support and radiation behavior of CSRS versus CARS. a The object spatial frequency K-support for Epi-CSRS(CARS) is found by convolving the illumination Ewald spheres of the Stokes (pump), pump (Stokes), and Stokes (probe) with the cap of the detection Ewald sphere at the coherent Stokes (anti-Stokes) frequency. Note that vector combinations covering the frequency of a homogeneous sample K(0,0,0) are only found for CSRS but not for CARS. A single wavevector combination that phase-matches K(0,0,0) is highlighted to the left, while a similar approach for CARS leads to a large phase-mismatch (ΔK). b CSRS and CARS radiation behavior of a homogeneous sample under standard illumination conditions, i.e., the pump and Stokes beams fill the objective aperture homogeneously (θ_max = 80°). c Same as in b but with an annular pupil filter applied to the Stokes beam for CSRS, covering 50% of the area of the objective back-aperture. For an equitable comparison with CARS, the same pupil filter was applied to the pump beam. d Same as for b (conventionally focused beams), but the homogeneous sample was replaced by a frequency object whose scatterer density is described as 1 + cos(2πz/λ_o) and λ_o = 1 μm. e Plot of the ratio of backward/forward radiation (R_b/f) as a function of the object frequency λ_o. Calculations were performed with λ_p = 797 nm and λ_S = 1030 nm.
Fig. 5 | Declaration of variables.
Acknowledgements We acknowledge financial support from the Center National.

Data availability The data that support the findings of this study are available from the corresponding author upon request.

Author contributions S.H. conceived the idea, and performed the experiments and numerical calculations. H.R. conceived the idea and discussed the results. S.H. and H.R. wrote the paper.

Competing interests The authors declare no competing interests.

Correspondence and requests for materials should be addressed to Sandro Heuke or Hervé Rigneault.
References

1. Bunaciu, A., Fleschin, S. & Aboul-Enein, H. Infrared microspectroscopy applications-review. Curr. Anal. Chem. 10, 132-139 (2013).
2. Antonio, K. A. & Schultz, Z. D. Advances in biomedical raman microscopy. Anal. Chem. 86, 30-46 (2013).
3. Duncan, M. D., Reintjes, J. & Manuccia, T. J. Scanning coherent anti-stokes raman microscope. Opt. Lett. 7, 350 (1982).
4. Zumbusch, A., Holtom, G. R. & Xie, X. S. Three-dimensional vibrational imaging by coherent anti-stokes raman scattering. Phys. Rev. Lett. 82, 4142-4145 (1999).
5. Nandakumar, P., Kovalev, A. & Volkmer, A. Vibrational imaging based on stimulated raman scattering microscopy. N. J. Phys. 11, 033026 (2009).
6. Freudiger, C. W. et al. Label-free biomedical imaging with high sensitivity by stimulated raman scattering microscopy. Science 322, 1857-1861 (2008).
7. Ozeki, Y., Dake, F., Kajiyama, S., Fukui, K. & Itoh, K. Analysis and experimental assessment of the sensitivity of stimulated raman scattering microscopy. Opt. Express 17, 3651 (2009).
8. Rigneault, H. & Berto, P. Tutorial: coherent raman light matter interaction processes. APL Photonics 3, 091101 (2018).
9. Maker, P. D. & Terhune, R. W. Study of optical effects due to an induced polarization third order in the electric field strength. Phys. Rev. 137, A801-A818 (1965).
10. Zheng, J. B., Leipertz, A., Snow, J. B. & Chang, R. K. Simultaneous observation of rotational coherent stokes raman scattering and coherent anti-stokes raman scattering in air and nitrogen. Opt. Lett. 8, 350-352 (1983).
11. Cui, M., Bachler, B. R. & Ogilvie, J. P. Comparing coherent and spontaneous Raman scattering under biological imaging conditions. Opt. Lett. 34, 773-775 (2009).
12. Bito, K. et al. Three-pulse multiplex coherent anti-stokes/stokes raman scattering (CARS/CSRS) microspectroscopy using a white-light laser source. Chem. Phys. 419, 156-162 (2013).
13. Druet, S. A. & Taran, J.-P. E. Cars spectroscopy. Prog. Quantum Electron. 7, 1-72 (1981).
14. Wei, L. et al. Super-multiplex vibrational imaging. Nature 544, 465-470 (2017).
15. Wei, L. & Min, W. Electronic preresonance stimulated Raman scattering microscopy. J. Phys. Chem. Lett. 9, 4294-4301 (2018).
16. Prince, R. C. & Potma, E. O. Going visible: high-resolution coherent raman imaging of cells and tissues. Light 8, 10 (2019).
17. Bi, Y. et al. Near-resonance enhanced label-free stimulated Raman scattering microscopy with spatial resolution near 130 nm. Light 7, 81 (2018).
18. Berto, P., Andresen, E. R. & Rigneault, H. Background-free stimulated Raman spectroscopy and microscopy. Phys. Rev. Lett. 112, 053905 (2014).
19. Heuke, S., Lombardini, A., Büttner, E. & Rigneault, H. Simultaneous stimulated Raman gain and loss detection (SRGAL). Opt. Express 28, 29619-29630 (2020).
20. Kobat, D. et al. Deep tissue multiphoton microscopy using longer wavelength excitation. Opt. Express 17, 13354-13364 (2009).
21. Lombardini, A. et al. High-resolution multimodal flexible coherent raman endoscope. Light 7, 10 (2018).
22. Kögler, M. & Heilala, B. Time-gated Raman spectroscopy-a review. Meas. Sci. Technol. 32, 012002 (2020).
23. Tahara, T. & Hamaguchi, H.-O. Picosecond Raman spectroscopy using a streak camera. Appl. Spectrosc. 47, 391-398 (1993).
24. Heuke, S. et al. Multimodal mapping of human skin. Br. J. Dermatol. 169, 794-803 (2013).
25. Heuke, S. et al. Coherent anti-stokes Raman Fourier ptychography. Opt. Express 27, 23497-23514 (2019).
26. Heuke, S. & Rigneault, H. Laser scanning dark-field coherent anti-stokes raman scattering (df-cars): a numerical study. Opt. Express 29, 3985-3995 (2021).
27. Volkmer, A., Cheng, J.-X. & Xie, X. S. Vibrational imaging with high sensitivity via epidetected coherent anti-stokes raman scattering microscopy. Phys. Rev. Lett. 87, 023901 (2001).
28. Heuke, S. et al. Bessel beam CARS of axially structured samples. Sci. Rep. 5, 10991 (2015).
29. Heuke, S. et al. Bessel beam coherent anti-stokes raman scattering microscopy. J. Opt. Soc. Am. B 32, 1773-1779 (2015).
30. Cheng, J.-X. & Xie, X. S. (eds.) Coherent Raman Scattering Microscopy (Series in Cellular and Clinical Imaging) (CRC Press, 2016).
31. Potma, E. O., Evans, C. L. & Xie, X. S. Heterodyne coherent anti-stokes Raman scattering (CARS) imaging. Opt. Lett. 31, 241-243 (2006).
32. Cheng, J.-X., Volkmer, A. & Xie, X. S. Theoretical and experimental characterization of coherent anti-stokes raman scattering microscopy. J. Opt. Soc. Am. B 19, 1363-1375 (2002).
33. Cheng, J.-X., Min, W., Ozeki, Y. & Polli, D. Stimulated Raman Scattering Microscopy Techniques and Applications (Elsevier, 2021).
APPROXIMATION BY EGYPTIAN FRACTIONS AND THE WEAK GREEDY ALGORITHM

Hùng Việt Chu

30 May 2023

10.1016/j.indag.2023.05.008
Let 0 < θ ≤ 1. A sequence of positive integers (b_n)_{n=1}^∞ is called a weak greedy approximation of θ if ∑_{n=1}^∞ 1/b_n = θ. We introduce the weak greedy approximation algorithm (WGAA), which, for each θ, produces two sequences of positive integers (a_n) and (b_n) such that: a) ∑_{n=1}^∞ 1/b_n = θ; b) 1/a_{n+1} < θ − ∑_{i=1}^n 1/b_i ≤ 1/(a_{n+1} − 1) for all n; c) there exists t ≥ 1 such that b_n/a_n ≤ t infinitely often. We then investigate when a given weak greedy approximation (b_n) can be produced by the WGAA. Furthermore, we show that for any non-decreasing (a_n) with a_1 ≥ 2 and a_n → ∞, there exist θ and (b_n) such that a) and b) are satisfied; whether c) is also satisfied depends on the sequence (a_n). Finally, we address the uniqueness of θ and (b_n) and apply our framework to specific sequences.

2020 Mathematics Subject Classification. 11A67, 11B99.
INTRODUCTION
Throughout this paper, let θ denote a number in (0, 1] and let G : (0, 1] → N_{≥2} be the function

G(θ) = ⌊1/θ⌋ + 1;

that is, G(θ) gives the unique positive integer a ≥ 2 such that 1/a < θ ≤ 1/(a − 1).
An Egyptian fraction is a fraction of the form 1/n for some positive integer n. We consider the problem of representing θ as an infinite sum of Egyptian fractions. One natural method is the greedy underapproximation algorithm (GUA), which constructs a sequence of positive integers (a_n)_{n=1}^∞ recursively as follows: a_1 = G(θ) ≥ 2; supposing that a_1, ..., a_n have been constructed, let

a_{n+1} = G(θ − ∑_{i=1}^n 1/a_i).
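In code, the GUA can be stated in a few lines of exact rational arithmetic; the following is a minimal sketch (the function names are ours):

```python
from fractions import Fraction
from math import floor

def G(theta):
    """G(theta) = floor(1/theta) + 1, the unique a >= 2 with 1/a < theta <= 1/(a-1)."""
    return floor(1 / theta) + 1

def gua(theta, n_terms):
    """First n_terms denominators produced by the greedy underapproximation algorithm."""
    a, remainder = [], Fraction(theta)
    for _ in range(n_terms):
        a.append(G(remainder))
        remainder -= Fraction(1, a[-1])
    return a

print(gua(Fraction(19, 48), 4))   # [3, 17, 273, 74257]: rapid growth, cf. (1.1)
```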
By [Na23, (3)], the sequence (a_n) is strictly increasing and, in particular, satisfies

a_1 ≥ 2 and a_{n+1} ≥ a_n² − a_n + 1.    (1.1)

Since, by construction,

0 < θ − ∑_{i=1}^n 1/a_i ≤ 1/(a_{n+1} − 1) → 0,

we have ∑_{n=1}^∞ 1/a_n = θ.
According to [Na23, Theorem 5], if θ = p/q, where p, q are positive integers such that p divides q + 1, then the GUA produces the best approximations; i.e., the n-term approximation ∑_{i=1}^n 1/a_i outperforms any other n-term underapproximation using Egyptian fractions. This generalizes a result in [Cu22, So05, Ta21]. The proof involves a useful inequality established in [AB15] (see also [Na22]). However, such optimality does not hold for general θ (see [Na23, Section 5]).
The goal of this paper is to investigate a weak version of the GUA, which is inspired by the so-called (weak) thresholding greedy algorithm (TGA) in the area of functional analysis. We describe the (weak) TGA briefly. Let X be an infinite-dimensional, complete, normed vector space. Assume further that X has a basis B = (e_n)_{n=1}^∞ so that every vector x ∈ X can be represented by a formal series ∑_{n=1}^∞ a_n e_n, where the a_n are scalars. (The series converges to a vector when our basis is Schauder; however, for general Markushevich bases, the series may only be formal.) In order to form an m-term approximation of x, the TGA chooses the m largest coefficients a_n in modulus. Formally, let A ⊂ N verify |A| = m and

min_{n∈A} |a_n| ≥ max_{n∉A} |a_n|.    (1.2)

Then the TGA produces the m-term approximation ∑_{n∈A} a_n e_n. It is not always true that approximations produced by this method converge to the original vector x as m grows; Konyagin and Temlyakov called bases for which convergence holds for every x quasi-greedy. In the weak TGA (WTGA), condition (1.2) is relaxed by a weakness parameter c ∈ (0, 1]: one only requires min_{n∈A} |a_n| ≥ c · max_{n∉A} |a_n|, and convergence is retained for quasi-greedy bases.

Inspired by the aforementioned interactions between the TGA and the WTGA, we introduce the weak greedy approximation algorithm (WGAA) as a companion of the GUA. The idea is that at the nth step of our weak algorithm, we pick the nth term based on the "greedy choice up to a constant". Specifically, fix a real t ≥ 1 and an infinite set Λ ⊂ N. For each θ ∈ (0, 1], we define the (t, Λ)-WGAA as follows: let a_1 = G(θ). Choose b_1 ≥ a_1. Additionally, we require b_1 ≤ ta_1 if 1 ∈ Λ. Assuming that a_1, b_1, ..., a_n, b_n have been defined, we let
a_{n+1} = G(θ − ∑_{i=1}^n 1/b_i).    (1.4)
Choose b_{n+1} ≥ a_{n+1}. Additionally, we require b_{n+1} ≤ ta_{n+1} if n + 1 ∈ Λ. We see that the (t, Λ)-WGAA generalizes the GUA by simply setting t = 1 and Λ = N.
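A minimal sketch of the (t, Λ)-WGAA follows. The WGAA only constrains b_n (b_n ≥ a_n, and b_n ≤ ta_n on Λ); the particular selection rule below, namely b_n = ⌈ta_n⌉ on Λ and the greedy choice elsewhere, is an arbitrary illustrative choice, which for Λ = N coincides with the algorithm G(t) introduced in Section 2.

```python
from fractions import Fraction
from math import floor, ceil

def G(theta):
    return floor(1 / theta) + 1

def wgaa(theta, n_terms, t=Fraction(4, 3), in_Lambda=lambda n: True):
    """Sketch of the (t, Lambda)-WGAA: on Lambda take the largest allowed
    b_n = ceil(t * a_n); off Lambda take the greedy b_n = a_n (any b_n >= a_n works)."""
    a_seq, b_seq, remainder = [], [], Fraction(theta)
    for n in range(1, n_terms + 1):
        a = G(remainder)
        b = ceil(t * a) if in_Lambda(n) else a
        a_seq.append(a)
        b_seq.append(b)
        remainder -= Fraction(1, b)
    return a_seq, b_seq

print(wgaa(Fraction(19, 48), 3))   # a = [3, 7, 22], b = [4, 10, 30]
```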
Definition 1.1. An infinite sequence of positive integers (b_n)_{n=1}^∞ is called a weak greedy approximation of θ if ∑_{n=1}^∞ 1/b_n = θ and, for all n ≥ 1,

G(θ − ∑_{i=1}^{n−1} 1/b_i) ≤ b_n.    (1.5)

Inequality (1.5) indicates that a term b_n is not necessarily picked by the greedy algorithm. Attentive readers may notice that (1.5) is superfluous. Indeed, suppose that for some N,

b_N < G(θ − ∑_{i=1}^{N−1} 1/b_i) =: a_N.

Then

∑_{i=1}^N 1/b_i = ∑_{i=1}^{N−1} 1/b_i + 1/b_N ≥ ∑_{i=1}^{N−1} 1/b_i + 1/(a_N − 1) ≥ θ,
which contradicts ∑_{n=1}^∞ 1/b_n = θ.

We describe the paper's structure. In Section 2, we show that the WGAA satisfies the minimal requirement for an algorithm to be sensible; that is, for every θ, the sequence (b_n) produced by the WGAA satisfies

∑_{n=1}^∞ 1/b_n = θ.    (1.6)
This is the analog of the relation between the TGA and the WTGA. Moreover, we compute the growth rate of the sequence (b_n) produced by the (t, Λ)-WGAA when Λ = N and b_n = ⌈ta_n⌉ (Proposition 2.2). In Section 3, we carry out a deeper study of the two sequences (a_n) and (b_n) produced by the WGAA. According to Section 2, if (b_n) is produced by the WGAA applied to θ, then (b_n) is a weak greedy approximation of θ. We shall show that the converse is not true: there exist θ and (b_n) such that ∑_{n=1}^∞ 1/b_n = θ, but (b_n) cannot be produced by the WGAA. To do so, we observe that θ, (a_n), (b_n) produced by the WGAA have three properties: a) ∑_{n=1}^∞ 1/b_n = θ (see Section 2); b) (1.4) holds for all n; c) there exists t ≥ 1 such that b_n/a_n ≤ t infinitely often. As we shall see, condition c) guarantees the convergence (1.6). However, even when θ, (a_n), and (b_n) verify a) and b), they do not necessarily satisfy c). As a result, in such cases, (b_n) cannot be produced by the WGAA. We then go further to characterize the situation when c) does not hold (see Proposition 3.2). Next, we consider the following question: given a non-decreasing sequence (a_n) with a_1 ≥ 2 and a_n → ∞, are there θ ∈ (0, 1] and (b_n) such that a) and b) hold? According to [Na23, Corollary 3], the answer is positive if a_{n+1} ≥ a_n² − a_n + 1, in which case θ = ∑_{n=1}^∞ 1/a_n and b_n = a_n for all n ≥ 1. By explicit construction, we answer the aforementioned question in the affirmative for any non-decreasing sequence (a_n) with a_1 ≥ 2 and a_n → ∞ (see Theorem 3.5 and its Corollary 3.6).
Section 4 gives necessary and sufficient conditions for when a sequence (a_n) gives unique θ and (b_n) (Corollary 4.2 and Proposition 4.3). Finally, Section 5 applies the framework from previous sections to particular sequences, including geometric progressions, arithmetic progressions, and the Fibonacci sequence.
CONVERGENCE OF THE WGAA
The minimal requirement we want the WGAA to satisfy is convergence, which is confirmed by the following proposition.

Proposition 2.1. If (b_n)_{n=1}^∞ is obtained from the (t, Λ)-WGAA applied to θ, then

∑_{n=1}^∞ 1/b_n = θ.
Proof. Let (a_n), (b_n) be the two sequences produced by the (t, Λ)-WGAA applied to θ: for each n ≥ 1,

a_n = G(θ − ∑_{i=1}^{n−1} 1/b_i); equivalently, 0 < 1/a_n < θ − ∑_{i=1}^{n−1} 1/b_i ≤ 1/(a_n − 1).    (2.1)

Hence, (a_n) is non-decreasing. It suffices to prove that (a_n) is unbounded. Suppose otherwise that there is some M such that a_n ≤ M for all n. Then b_n ≤ Mt infinitely often, which implies that ∑_{n=1}^∞ 1/b_n = ∞, contradicting (2.1).

Next, we consider a special case of the general (t, Λ)-WGAA by requiring that Λ = N and, for all n, b_n = ⌈ta_n⌉. Let us denote this algorithm by G(t). Suppose that we use G(t) to obtain an n-term approximation ∑_{i=1}^n 1/c_i of θ. Then a logical choice is to have

c_i = b_i = ⌈ta_i⌉ for all 1 ≤ i ≤ n − 1, while c_n = a_n.
(It makes no sense if we do not choose the last term c_n greedily.) An approximation by G(4/3) may outperform the GUA. We borrow an example from [Na23]. The GUA gives 1/3 + 1/17 as a 2-term underapproximation of 19/48, while G(4/3) gives 1/4 + 1/7. We have

1/3 + 1/17 < 1/4 + 1/7 < 19/48.
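This comparison is easy to verify in exact arithmetic:

```python
from fractions import Fraction as F

gua_2term = F(1, 3) + F(1, 17)   # greedy underapproximation
g43_2term = F(1, 4) + F(1, 7)    # G(4/3), with the last term chosen greedily
target = F(19, 48)
assert gua_2term < g43_2term < target
print(target - gua_2term, target - g43_2term)   # errors 1/272 vs 1/336
```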
By definition, G(1) is the greedy underapproximation algorithm. There is an interesting difference between t = 1 and t > 1. If (b_n) is obtained by G(1) applied to θ, then [Na23, (3)] gives

b_{n+1}/b_n ≥ b_n − 1 + 1/b_n.

Since lim_{n→∞} b_n = ∞, we get lim_{n→∞} b_{n+1}/b_n = ∞. However, the limit is finite when t > 1, as the following proposition shows.
Proposition 2.2. If (b_n)_{n=1}^∞ is the sequence from G(t) applied to θ, then

lim_{n→∞} b_{n+1}/b_n = t/(t − 1) if t > 1, and lim_{n→∞} b_{n+1}/b_n = ∞ if t = 1.
Before proving Proposition 2.2, we record an important inequality addressing the relation between (a_n) and (b_n) produced by the WGAA. For each n ≥ 1, we have

1/a_{n+1} < θ − ∑_{i=1}^n 1/b_i = θ − ∑_{i=1}^{n−1} 1/b_i − 1/b_n ≤ 1/(a_n − 1) − 1/b_n,

and

1/(a_{n+1} − 1) ≥ θ − ∑_{i=1}^n 1/b_i = θ − ∑_{i=1}^{n−1} 1/b_i − 1/b_n > 1/a_n − 1/b_n.

Hence,

1/a_n − 1/(a_{n+1} − 1) < 1/b_n < 1/(a_n − 1) − 1/a_{n+1}, ∀n ∈ N.    (2.2)
Proof of Proposition 2.2. The case t = 1 is explained right before Proposition 2.2. Let t > 1. The right side of (2.2) yields

1/a_{n+1} < 1/(a_n − 1) − 1/b_n = 1/(a_n − 1) − 1/⌈ta_n⌉ < 1/(a_n − 1) − 1/(ta_n + 1), ∀n ∈ N.

Therefore,

1/a_{n+1} < ((t − 1)a_n + 2) / ((ta_n + 1)(a_n − 1)) ⟹ a_{n+1}/a_n > (t + 1/a_n)(1 − 1/a_n) / (t − 1 + 2/a_n).    (2.3)

The left side of (2.2) yields

1/(a_{n+1} − 1) > 1/a_n − 1/b_n = 1/a_n − 1/⌈ta_n⌉ ≥ 1/a_n − 1/(ta_n).

Hence,

a_{n+1}/a_n < t/(t − 1) + 1/a_n.    (2.4)

From (2.3) and (2.4), we obtain that lim_{n→∞} a_{n+1}/a_n = t/(t − 1). Since b_n = ⌈ta_n⌉, we have the desired conclusion.
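A quick numerical check of Proposition 2.2, with the illustrative choices θ = 19/48 and t = 4/3, so that t/(t − 1) = 4:

```python
from fractions import Fraction
from math import floor, ceil

def G(theta):
    return floor(1 / theta) + 1

def g_t(theta, t, n_terms):
    """Denominators b_n = ceil(t * a_n) produced by G(t)."""
    b, remainder = [], Fraction(theta)
    for _ in range(n_terms):
        b.append(ceil(t * G(remainder)))
        remainder -= Fraction(1, b[-1])
    return b

b = g_t(Fraction(19, 48), Fraction(4, 3), 12)
print([round(b[i + 1] / b[i], 3) for i in range(len(b) - 1)])
# the ratios approach t/(t-1) = 4, as Proposition 2.2 predicts
```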
THE RANGE OF THE WGAA
In this section, we address the question of whether every weak greedy approximation can be obtained from the WGAA. The boundedness condition on the WGAA requires that for some t ≥ 1, b_n/a_n ≤ t infinitely often, which guarantees the convergence of ∑_{n=1}^∞ 1/b_n to the desired θ (see the proof of Proposition 2.1). However, there exist θ and (b_n) such that if (a_n) satisfies (1.4), then lim_{n→∞} b_n/a_n = ∞. By studying such a situation, we learn more about the sequence (a_n) (see Corollary 3.3). First, consider the following example.
Example 3.1. For n ∈ N, let b_n = n(n + 2) and θ = 3/4. It is easy to check that ∑_{n=1}^∞ 1/b_n = θ. We claim that if (a_n) satisfies (1.4), then a_n = n + 1. Indeed, it suffices to show that

⌊(3/4 − ∑_{i=1}^{n−1} 1/(i(i + 2)))^{−1}⌋ = n, ∀n ∈ N.

We have

(3/4 − ∑_{i=1}^{n−1} 1/(i(i + 2)))^{−1} = (∑_{i=1}^∞ 1/(i(i + 2)) − ∑_{i=1}^{n−1} 1/(i(i + 2)))^{−1} = (∑_{i=n}^∞ 1/(i(i + 2)))^{−1} = ((1/2)(1/n + 1/(n + 1)))^{−1} (by telescoping) = n + n/(2n + 1),

whose floor is n. Hence, a_n = n + 1 and b_n/a_n → ∞.
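The claim of Example 3.1 can be checked mechanically with exact arithmetic:

```python
from fractions import Fraction
from math import floor

theta, partial = Fraction(3, 4), Fraction(0)
for n in range(1, 50):
    a_n = floor(1 / (theta - partial)) + 1    # a_n = G(theta - sum_{i<n} 1/b_i)
    assert a_n == n + 1
    partial += Fraction(1, n * (n + 2))       # b_n = n(n + 2)
print("a_n = n + 1, so b_n / a_n = n(n + 2)/(n + 1) -> infinity")
```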
The sequences (a_n) and (b_n) in Example 3.1 do not have b_n/a_n infinitely often bounded. In other words, a weak greedy approximation does not necessarily come from the WGAA. The next proposition provides a characterization of this situation.

Proposition 3.2. Let (b_n)_{n=1}^∞ be a weak greedy approximation of θ and let (a_n)_{n=1}^∞ satisfy (1.4). The following are equivalent:
i) for all t ≥ 1, the set {n : b_n/a_n ≤ t} is finite;
ii) lim_{n→∞} a_{n+1}/a_n = 1.

Corollary 3.3. Let (b_n)_{n=1}^∞ be a weak greedy approximation of θ and let (a_n)_{n=1}^∞ satisfy (1.4). Then (a_n)_{n=1}^∞ and (b_n)_{n=1}^∞ are obtained from the WGAA if and only if for some ε > 0, a_{n+1} > (1 + ε)a_n infinitely often.
Proof of Proposition 3.2. i) ⟹ ii): Since (a_n) is non-decreasing, it suffices to show that for all ε > 0, there exists N such that a_{n+1}/a_n < 1 + ε for all n > N. Choose M sufficiently large such that M/(M − 1) < 1 + ε/2. By i), there exists N such that for all n > N, b_n > Ma_n and 1/a_n < ε/2. By (2.2),

1/(a_{n+1} − 1) > 1/a_n − 1/b_n > 1/a_n − 1/(Ma_n) = ((M − 1)/M)(1/a_n), ∀n > N,

which gives

a_{n+1}/a_n < M/(M − 1) + 1/a_n < 1 + ε, ∀n > N.
ii) ⟹ i): We prove by contrapositive. Choose t ≥ 1 and suppose that b_n/a_n ≤ t infinitely often. Let A be the infinite set {n : b_n/a_n ≤ t}. By (2.2), we have

1/a_{n+1} < 1/(a_n − 1) − 1/b_n ≤ 1/(a_n − 1) − 1/(ta_n), ∀n ∈ A.

Trivial calculations give

a_{n+1}/a_n > t(a_n − 1)/(ta_n − (a_n − 1)) = (a_n − 1)/((a_n − 1) − (a_n − 1)/t + 1) = 1/(1 − 1/t + 1/(a_n − 1)), ∀n ∈ A.

If t = 1, then a_{n+1}/a_n > a_n − 1 for all n ∈ A. That a_n → ∞ implies that a_{n+1}/a_n ≥ 2 infinitely often, making ii) fail. If t > 1, choose N sufficiently large such that for n > N, a_n > 2t + 1. Then for all n ∈ A with n > N,

a_{n+1}/a_n > 1/(1 − 1/t + 1/(2t)) = 1/(1 − 1/(2t)),

which contradicts ii).
Remark 3.4. If we replace the hypothesis "∑_{n=1}^∞ 1/b_n = θ" in Proposition 3.2 by "∑_{n=1}^∞ 1/b_n < θ", both i) and ii) in Proposition 3.2 hold. Indeed, if θ − ∑_{n=1}^∞ 1/b_n =: c > 0, then

a_n := G(θ − ∑_{i=1}^{n−1} 1/b_i) ≤ G(c),

so (a_n) is bounded.
We state and prove the last result in this section.
Theorem 3.5. Let (a_n)_{n=1}^∞ ⊂ N be non-decreasing such that a_1 ≥ 2 and a_n → ∞. There exist θ ∈ (0, 1) and (b_n)_{n=1}^∞ such that

∑_{n=1}^∞ 1/b_n = θ,

and for every n ≥ 1,

a_n = G(θ − ∑_{i=1}^{n−1} 1/b_i).
Proof. Since a_n → ∞, we can form the infinite set A ⊂ N such that n ∈ A if and only if a_{n+1} − a_n ≥ 1. In other words, A contains all the indices immediately before a jump in (a_n). Write A = {n_1, n_2, n_3, ...}, where n_1 < n_2 < n_3 < ⋯. Note that a_{n_j} < a_{n_j+1} for all j. We obtain the sequence (b_n) by first constructing all the b_n for n ∈ A and then constructing the rest.
Step 1: for each j ≥ 1, choose b_{n_j} such that

a_{n_j}a_{n_j+1}/(a_{n_j+1} − a_{n_j}) − 1 − (2a_{n_j} − 1)/(a_{n_j+1} − a_{n_j}) < b_{n_j} < a_{n_j}a_{n_j+1}/(a_{n_j+1} − a_{n_j}),    (3.1)

which can be done since the distance between the two ends is greater than 1. Note that (3.1) is equivalent to

1/a_{n_j} − 1/a_{n_j+1} < 1/b_{n_j} < 1/(a_{n_j} − 1) − 1/(a_{n_j+1} − 1).    (3.2)
It follows (by summing (3.2) over i ≥ j and telescoping, since a_{n_i+1} = a_{n_{i+1}}) that for each j ≥ 1,

1/a_{n_j} < ∑_{i=j}^∞ 1/b_{n_i} < 1/(a_{n_j} − 1).    (3.3)
Step 2: Due to (3.3), we can choose a sequence of positive numbers (θ_j)_{j=1}^∞ satisfying

1/a_{n_j} < ∑_{i=j}^∞ 1/b_{n_i} + θ_j < 1/(a_{n_j} − 1).

Let n_0 = 0. For each j ≥ 1, set b_{n_{j−1}+1} = b_{n_{j−1}+2} = ⋯ = b_{n_j−1} = N_j, where N_j is sufficiently large such that

(n_j − n_{j−1} − 1)/N_j < min{θ_1/2^j, θ_2/2^{j−1}, ..., θ_j/2}.
Step 3: Set $\theta := \sum_{n=1}^{\infty} \frac{1}{b_n}$. We claim that θ ∈ (0, 1). We have
$$\sum_{n=1}^{\infty}\frac{1}{b_n} = \sum_{j=1}^{\infty}\frac{1}{b_{n_j}} + \sum_{j=1}^{\infty}\sum_{i=n_{j-1}+1}^{n_j - 1}\frac{1}{b_i} = \sum_{j=1}^{\infty}\frac{1}{b_{n_j}} + \sum_{j=1}^{\infty}\frac{n_j - n_{j-1} - 1}{N_j} < \sum_{j=1}^{\infty}\frac{1}{b_{n_j}} + \sum_{j=1}^{\infty}\frac{\theta_1}{2^j} = \sum_{j=1}^{\infty}\frac{1}{b_{n_j}} + \theta_1 < \frac{1}{a_{n_1} - 1} \le 1.$$
Step 4: Finally, we need to verify that
$$\frac{1}{a_n} < \sum_{i=n}^{\infty}\frac{1}{b_i} \le \frac{1}{a_n - 1}, \quad \forall n \ge 1.$$
Fix n ≥ 1 and choose j such that n_{j-1} < n ≤ n_j. By (3.3), we have
$$\sum_{i=n}^{\infty}\frac{1}{b_i} \ge \sum_{i=n_j}^{\infty}\frac{1}{b_i} \ge \sum_{i=j}^{\infty}\frac{1}{b_{n_i}} > \frac{1}{a_{n_j}} = \frac{1}{a_n}.$$
On the other hand,
$$\sum_{i=n}^{\infty}\frac{1}{b_i} \le \sum_{i=j}^{\infty}\frac{1}{b_{n_i}} + \sum_{i=j}^{\infty}\sum_{n=n_{i-1}+1}^{n_i - 1}\frac{1}{b_n} = \sum_{i=j}^{\infty}\frac{1}{b_{n_i}} + \sum_{i=j}^{\infty}\frac{n_i - n_{i-1} - 1}{N_i} < \sum_{i=j}^{\infty}\frac{1}{b_{n_i}} + \sum_{i=j}^{\infty}\frac{\theta_j}{2^{i+1-j}} = \sum_{i=j}^{\infty}\frac{1}{b_{n_i}} + \theta_j < \frac{1}{a_{n_j} - 1} = \frac{1}{a_n - 1}.$$
This completes our proof.
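The construction in Steps 1-4 is explicit enough to run. Below is a minimal Python sketch of Step 1 for a strictly increasing (a_n), so that A = ℕ and Step 2 is vacuous; the particular integer picked inside (3.1) is our own choice, and any admissible choice works:

```python
from fractions import Fraction
import math

def choose_b(a, a_next):
    # Step 1: pick an integer b strictly inside the interval (3.1):
    #   a*a'/(a'-a) - 1 - (2a-1)/(a'-a) < b < a*a'/(a'-a).
    upper = Fraction(a * a_next, a_next - a)
    lower = upper - 1 - Fraction(2 * a - 1, a_next - a)
    b = math.ceil(upper) - 1  # largest integer strictly below the upper end
    assert lower < b < upper
    # Sanity check: (3.1) is equivalent to (3.2).
    assert Fraction(1, a) - Fraction(1, a_next) < Fraction(1, b)
    assert Fraction(1, b) < Fraction(1, a - 1) - Fraction(1, a_next - 1)
    return b

a = [2, 5, 11, 23, 47, 95]          # a strictly increasing example sequence
print([choose_b(a[j], a[j + 1]) for j in range(len(a) - 1)])
```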
Corollary 3.6. Let (a_n)_{n=1}^∞ ⊂ ℕ be non-decreasing with a_1 ≥ 2 and a_n → ∞. Then lim_{n→∞} a_{n+1}/a_n ≠ 1 is equivalent to the existence of θ ∈ (0, 1) and (b_n)_{n=1}^∞ such that (a_n)_{n=1}^∞ and (b_n)_{n=1}^∞ are the sequences obtained from the WGAA applied to θ. Proof. Use Proposition 3.2 and Theorem 3.5.
Remark 3.7. Observe that (3.2) is stronger than (2.2). This observation is important in studying the uniqueness of θ and (b_n) in the next section.
UNIQUENESS OF θ AND (b_n)
Thanks to Theorem 3.5, we know the existence of θ and (b_n) given any non-decreasing sequence (a_n) with a_1 ≥ 2 and a_n → ∞. We now give sufficient and necessary conditions for when (a_n) determines θ and (b_n) uniquely. By Step 2 in the proof of Theorem 3.5, a necessary condition is that (a_n) must be strictly increasing. We can then eliminate Step 2 in constructing the sequence (b_n) because A = ℕ. We claim further that a_{n+1} − a_n ≥ 2 for all n ∈ ℕ. Indeed, suppose a_{N+1} − a_N = 1 for some N. We rewrite (3.1) as
$$a_N a_{N+1} - 2a_N \le b_N \le a_N a_{N+1}. \quad (4.1)$$
There are at least 2a_N + 1 choices of b_N, so θ and (b_n) are not unique. (Note that we allow equalities in (4.1) because the construction in the proof of Theorem 3.5 still works if we allow equalities in finitely many (3.1).)
Moreover, (b_n) must satisfy (2.2). The following proposition tells us precisely when (2.2) determines (b_n) unequivocally.

Proposition 4.1. Let (a_n)_{n=1}^∞ be non-decreasing such that a_1 ≥ 2 and a_n → ∞. Then (b_n)_{n=1}^∞ is uniquely determined by (2.2) if and only if a_{n+1} − 2 ≥ a_n ≥ 2 for all n ≥ 1, and for each n, one of the following holds:
i) a_{n+1} − a_n − 1 divides a_n², and
$$a_{n+1} \ge \frac{\sqrt{3}}{2}\sqrt{4a_n^2 - 4a_n + 3} + 2a_n - \frac{1}{2};$$
ii) a_{n+1} − a_n − 1 does not divide a_n², and
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n - 1}\right\rfloor \le \frac{(a_n - 1)^2}{a_{n+1} - a_n + 1}.$$
Proof of Proposition 4.1. By (2.2), (b_n) is uniquely determined if and only if each of the intervals
$$I_n := \left(\frac{(a_n - 1)\,a_{n+1}}{a_{n+1} - a_n + 1},\ \frac{a_n(a_{n+1} - 1)}{a_{n+1} - a_n - 1}\right)$$
contains exactly one positive integer. It is easy to verify that there always exists one largest integer in I_n, called k_n. In order that I_n contain no other integers, we need
$$k_n - \frac{(a_n - 1)\,a_{n+1}}{a_{n+1} - a_n + 1} \le 1. \quad (4.2)$$
We obtain a formula for k_n depending on whether a_n(a_{n+1} − 1)/(a_{n+1} − a_n − 1) is an integer or not.

Case 1: if a_n(a_{n+1} − 1)/(a_{n+1} − a_n − 1) ∈ ℕ, then
$$k_n = \frac{a_n(a_{n+1} - 1)}{a_{n+1} - a_n - 1} - 1.$$
Hence, (4.2) is equivalent to
$$\frac{a_n(a_{n+1} - 1)}{a_{n+1} - a_n - 1} - \frac{(a_n - 1)\,a_{n+1}}{a_{n+1} - a_n + 1} \le 2.$$
Equivalently, $a_{n+1}^2 - (4a_n - 1)a_{n+1} + (a_n^2 + a_n - 2) \ge 0$, giving
$$a_{n+1} \ge 2a_n + \frac{\sqrt{3}}{2}\sqrt{4a_n^2 - 4a_n + 3} - \frac{1}{2}.$$

Case 2: if a_n(a_{n+1} − 1)/(a_{n+1} − a_n − 1) ∉ ℕ, then
$$k_n = \left\lfloor\frac{a_n(a_{n+1} - 1)}{a_{n+1} - a_n - 1}\right\rfloor = a_n + \left\lfloor\frac{a_n^2}{a_{n+1} - a_n - 1}\right\rfloor.$$
Hence, (4.2) is equivalent to
$$a_n + \left\lfloor\frac{a_n^2}{a_{n+1} - a_n - 1}\right\rfloor - \left(\frac{(a_n - 1)^2}{a_{n+1} - a_n + 1} + (a_n - 1)\right) \le 1,$$
giving
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n - 1}\right\rfloor \le \frac{(a_n - 1)^2}{a_{n+1} - a_n + 1}.$$
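The case analysis above reduces to counting integers in I_n. A small sketch that enumerates the admissible b_n directly, under the open-interval reading of I_n used in the proof (illustrative only):

```python
from fractions import Fraction

def admissible_b(a, a_next):
    # Integers strictly inside I_n = ((a-1)a'/(a'-a+1), a(a'-1)/(a'-a-1)),
    # i.e., the b_n allowed by (2.2); uniqueness holds iff exactly one exists.
    lo = Fraction((a - 1) * a_next, a_next - a + 1)
    hi = Fraction(a * (a_next - 1), a_next - a - 1)
    return [b for b in range(int(lo) + 1, int(hi) + 1) if lo < b < hi]

print(admissible_b(2, 6))   # two candidates: b_n is not unique here
print(admissible_b(3, 20))  # a single candidate: b_n is unique here
```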
Corollary 4.2 (Sufficient condition for uniqueness). Let (a_n)_{n=1}^∞ be increasing with a_1 ≥ 2 and a_n → ∞. If
i) a_{n+1} − 2 ≥ a_n ≥ 2 for all n, and
ii) for each n ≥ 1, one of the following holds:
a) a_{n+1} − a_n − 1 divides a_n², and
$$a_{n+1} \ge \frac{\sqrt{3}}{2}\sqrt{4a_n^2 - 4a_n + 3} + 2a_n - \frac{1}{2};$$
b) a_{n+1} − a_n − 1 does not divide a_n², and
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n - 1}\right\rfloor \le \frac{(a_n - 1)^2}{a_{n+1} - a_n + 1},$$
then there exist unique θ ∈ (0, 1] and (b_n)_{n=1}^∞ such that
$$\sum_{n=1}^{\infty} \frac{1}{b_n} = \theta, \quad (4.3)$$
and for every n ≥ 1,
$$a_n = G\left(\theta - \sum_{i=1}^{n-1} \frac{1}{b_i}\right). \quad (4.4)$$
Proof. Theorem 3.5 guarantees the existence of θ and (b_n). Suppose that there exists another pair (θ′, (b′_n)) different from (θ, (b_n)). Then for some N, b_N ≠ b′_N, both of which must verify (2.2). This contradicts Proposition 4.1.
Next, we establish a necessary condition for the uniqueness of θ and (b_n) by requiring the inequalities
$$\frac{a_n a_{n+1}}{a_{n+1} - a_n} - 1 - \frac{2a_n - 1}{a_{n+1} - a_n} \le b_n \le \frac{a_n a_{n+1}}{a_{n+1} - a_n} \quad (4.5)$$
to determine exactly one solution b_n. Again, (4.5) is slightly different from (3.2) as we allow equalities, because the construction in the proof of Theorem 3.5 still works if equalities appear in finitely many (3.1).

Proposition 4.3 (Necessary condition for uniqueness). Let (a_n)_{n=1}^∞ be non-decreasing with a_1 ≥ 2 and a_n → ∞. Suppose that there exist unique θ ∈ (0, 1) and (b_n)_{n=1}^∞ that satisfy (4.3) and (4.4). Then for all n ≥ 1, we have a_{n+1} ≥ a_n + 2, (a_{n+1} − a_n) does not divide a_n a_{n+1}, and
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n}\right\rfloor < \frac{(a_n - 1)^2}{a_{n+1} - a_n}. \quad (4.6)$$
Proof. That a_{n+1} ≥ a_n + 2 is due to the discussion at the beginning of this section. We find a sufficient and necessary condition for (4.5) to have exactly one solution b_n. If a_{n+1} − a_n divides a_n a_{n+1}, then
$$I_n := \left[\frac{a_n a_{n+1}}{a_{n+1} - a_n} - 1 - \frac{2a_n - 1}{a_{n+1} - a_n},\ \frac{a_n a_{n+1}}{a_{n+1} - a_n}\right]$$
contains at least two integers because
$$\frac{a_n a_{n+1}}{a_{n+1} - a_n} - \left(\frac{a_n a_{n+1}}{a_{n+1} - a_n} - 1 - \frac{2a_n - 1}{a_{n+1} - a_n}\right) > 1.$$
If a_{n+1} − a_n does not divide a_n a_{n+1}, then the largest integer in I_n is $\left\lfloor\frac{a_n a_{n+1}}{a_{n+1} - a_n}\right\rfloor$, and I_n contains exactly one integer if and only if
$$\left\lfloor\frac{a_n a_{n+1}}{a_{n+1} - a_n}\right\rfloor - \left(\frac{a_n a_{n+1}}{a_{n+1} - a_n} - 1 - \frac{2a_n - 1}{a_{n+1} - a_n}\right) < 1.$$
Equivalently,
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n}\right\rfloor < \frac{(a_n - 1)^2}{a_{n+1} - a_n}.$$
This completes our proof.
Corollary 4.4. Let (a_n)_{n=1}^∞ be non-decreasing with a_1 ≥ 2 and a_n → ∞. Suppose that there exist unique θ ∈ (0, 1) and (b_n)_{n=1}^∞ that satisfy (4.3) and (4.4). Then for all n ≥ 1: i) a_{n+1} − a_n divides none of (a_n − 1)², a_n², a_n a_{n+1}; ii) 3a_n < a_{n+1}.
Proof. i) By Proposition 4.3, a_{n+1} − a_n does not divide a_n a_{n+1}. By (4.6), a_{n+1} − a_n does not divide a_n². Also by (4.6), a_{n+1} − a_n does not divide (a_n − 1)². Indeed, supposing otherwise, we have
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n}\right\rfloor = \left\lfloor\frac{(a_n - 1)^2 + 2a_n - 1}{a_{n+1} - a_n}\right\rfloor = \frac{(a_n - 1)^2}{a_{n+1} - a_n} + \left\lfloor\frac{2a_n - 1}{a_{n+1} - a_n}\right\rfloor \ge \frac{(a_n - 1)^2}{a_{n+1} - a_n},$$
contradicting (4.6).
ii) We write (4.6) as
$$\left\lfloor\frac{a_n^2}{a_{n+1} - a_n}\right\rfloor < \frac{a_n^2}{a_{n+1} - a_n} - \frac{2a_n - 1}{a_{n+1} - a_n}.$$
Hence, (2a_n − 1)/(a_{n+1} − a_n) < 1, which gives a_{n+1} ≥ 3a_n. However, a_{n+1} cannot be 3a_n. Otherwise, we obtain from (4.6) that
$$\left\lfloor\frac{a_n^2}{2a_n}\right\rfloor < \frac{(a_n - 1)^2}{2a_n}.$$
By i), ⌊a_n²/(2a_n)⌋ = (a_n − 1)/2. Hence,
$$\frac{a_n - 1}{2} < \frac{(a_n - 1)^2}{2a_n} \implies a_n < 1,$$
a contradiction.
APPLICATIONS TO PARTICULAR SEQUENCES
In this section, we look at sequences (a_n) of special forms and find (b_n) that satisfies (3.2). We use specific sequences in [Sl23] as examples.
5.1. Geometric progressions. Let a, r ∈ ℕ with a ≥ 2 and r ≥ 2. Let (a_n) be the sequence a, ar, ar², ar³, .... By Corollary 3.6, (a_n) can be obtained from the WGAA applied to some θ.

If r − 1 divides a, the sequence b_n = ar^n/(r − 1) − 1 satisfies (3.2), and
$$\theta = \sum_{n=1}^{\infty} \frac{1}{ar^n/(r-1) - 1}.$$
For example, take a = 2, r = 3 to have a_n = 2·3^{n−1} (A008776), b_n = 3^n − 1 (A024023), and θ ≈ 0.68215 (irrational due to [Er48]).
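A numerical check of this example with exact rationals (an illustrative sketch):

```python
from fractions import Fraction

# Geometric case a = 2, r = 3: a_n = 2*3^(n-1), b_n = 3^n - 1.
# Check (3.2) term by term and that theta = sum 1/b_n is about 0.68215.
theta = Fraction(0)
for n in range(1, 40):
    a_n, a_next = 2 * 3 ** (n - 1), 2 * 3 ** n
    b_n = 3 ** n - 1
    assert Fraction(1, a_n) - Fraction(1, a_next) < Fraction(1, b_n)
    assert Fraction(1, b_n) < Fraction(1, a_n - 1) - Fraction(1, a_next - 1)
    theta += Fraction(1, b_n)
print(float(theta))  # ~0.68215
```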
If r − 1 does not divide a, the sequence b_n = ⌊ar^n/(r − 1)⌋ satisfies (3.2), and
$$\theta = \sum_{n=1}^{\infty} \frac{1}{\lfloor ar^n/(r-1)\rfloor}.$$

5.2. Fibonacci numbers. For the Fibonacci numbers F_n, we have
$$\frac{1}{a_n} - \frac{1}{a_{n+1}} = \frac{1}{F_{n+1}} - \frac{1}{F_{n+2}} = \frac{F_n}{F_{n+1}F_{n+2}}.$$
Using (3.2), we choose b_1 = 3 and, for n > 1, choose
$$b_n = \left\lfloor\frac{F_{n+1}F_{n+2}}{F_n}\right\rfloor = \left\lfloor\frac{F_nF_{n+1} + F_{n-1}F_{n+1} + F_n^2 + F_{n-1}F_n}{F_n}\right\rfloor = \left\lfloor\frac{F_nF_{n+1} + F_n^2 + (-1)^n + F_n^2 + F_{n-1}F_n}{F_n}\right\rfloor \quad\text{by Cassini's identity}$$
$$= \begin{cases} F_{n+3} - 1 & \text{if } n \text{ is odd}, \\ F_{n+3} & \text{if } n \text{ is even}. \end{cases}$$
Konyagin and Temlyakov [KT99] called a basis quasi-greedy if these approximations converge to the desired x. Meanwhile, Temlyakov [Te98] introduced a weaker version of the TGA, called the weak TGA (WTGA), which is more flexible in forming approximating sums. In particular, fixing a number t ∈ (0, 1], the WTGA considers sets A satisfying |A| = m and
$$\min_{n\in A} |a_n| \ge t \max_{n\notin A} |a_n|. \quad (1.3)$$
Clearly, (1.3) is weaker than (1.2). In other words, the WTGA chooses the "largest coefficients up to a constant." Surprisingly, the flexibility of the WTGA does not affect convergence: a basis is quasi-greedy under the TGA if and only if it is quasi-greedy under the WTGA (see [Te08, Section 1.5]).
[AB15] F. Ambro and M. Barcǎu, On representations by Egyptian fractions, Rev. Roum. Math. Pures Appl. 60 (2015), 331-336.
[Cu22] D. R. Curtis, On Kellogg's diophantine problem, Am. Math. Mon. 29 (1922), 380-387.
[Er48] P. Erdős, On arithmetical properties of Lambert series, J. Indian Math. Soc. (N.S.) 12 (1948), 63-66.
[KT99] S. V. Konyagin and V. N. Temlyakov, A remark on greedy approximation in Banach spaces, East J. Approx. 5 (1999), 365-379.
[Na23] M. B. Nathanson, Underapproximation by Egyptian fractions, J. Number Theory 242 (2023), 208-234.
[Na22] M. B. Nathanson, The Muirhead-Rado inequality, 2: symmetric means and inequalities, preprint (2022). Available at: https://arxiv.org/abs/2201.01270.
[Sl23] N. J. A. Sloane et al., The On-Line Encyclopedia of Integer Sequences, 2023. Available at: https://oeis.org.
[So05] K. Soundararajan, Approximating 1 from below using Egyptian fractions, preprint (2005). Available at: https://arxiv.org/abs/math/0502247.
[Ta21] T. Takenouchi, On an indeterminate equation, Proc. Phys. Math. Soc. Jpn. 3 (1921), 78-92.
[Te08] V. N. Temlyakov, Greedy approximation, Acta Numer. 17 (2008), 235-409.
[Te98] V. N. Temlyakov, The best m-term approximation and greedy algorithms, Adv. Comput. Math. 8 (1998), 249-265.
| [] |
[
"The Classical Aharonov-Bohm Interaction as a Relativity Paradox",
"The Classical Aharonov-Bohm Interaction as a Relativity Paradox"
] | [
"Timothy H Boyer \nDepartment of Physics\nCity College of the City University of New York\n10031New YorkNew YorkUSA\n"
] | [
"Department of Physics\nCity College of the City University of New York\n10031New YorkNew YorkUSA"
] | [] | The situation of a charged particle passing down the symmetry axis through a magnetic toroid presents a relativity paradox; different inertial frames suggest different forces on the charge and on the toroid due to the unperturbed systems. We review the charge-toroid interaction and suggest that the magnetic Aharonov-Bohm situation is misunderstood because of unfamiliarity with the acceleration fields following from the Darwin Lagrangian, which go unmentioned in recent textbooks of classical electromagnetism. | 10.1088/1361-6404/acc0e6 | [
"https://export.arxiv.org/pdf/2302.01937v1.pdf"
] | 256,615,603 | 2302.01937 | 2b4a76141141eb4030e72d1f0b686c5891f1b7e2 |
The Classical Aharonov-Bohm Interaction as a Relativity Paradox

Timothy H. Boyer

Department of Physics, City College of the City University of New York, New York, New York 10031, USA

arXiv:2302.01937v1 [physics.class-ph] 3 Feb 2023
The situation of a charged particle passing down the symmetry axis through a magnetic toroid presents a relativity paradox; different inertial frames suggest different forces on the charge and on the toroid due to the unperturbed systems. We review the charge-toroid interaction and suggest that the magnetic Aharonov-Bohm situation is misunderstood because of unfamiliarity with the acceleration fields following from the Darwin Lagrangian, which go unmentioned in recent textbooks of classical electromagnetism.
I. INTRODUCTION
A. The Aharonov-Bohm Situation
The magnetic Aharonov-Bohm phase shift involving electrons passing a long solenoid has attracted great attention because it is claimed to be an effect of the vector potential involving no forces on the passing electrons and having no classical analogue. [1][2] This interpretation of the observed phenomenon is pervasive in the physics literature. [3][4][5] In contradiction [6] to such views, it is suggested here that the classical electromagnetic interaction of a charged particle and a solenoid should be regarded as yet another example of a relativity paradox where the outcome is easily understood in one inertial frame but is disguised in another.
B. Relativity Paradoxes
The appearance of relativity paradoxes is familiar to any instructor who has taught special relativity. Perhaps the most famous example is the pole-and-the-barn paradox where the barn has one open door and a sturdy back wall. [7] The description in the inertial frame of the barn is clear. The farmer claims that the fast-moving pole is Lorentz contracted and so easily fits inside the barn before he closes the door. The account in the rest-frame of the pole is misleading, because the physics in this frame requires new forces which are not mentioned in the original description of the unperturbed motions of the pole and of the barn. Similarly, the Aharonov-Bohm situation involves two unperturbed systems in relative motion, in this case a point charge and a solenoid. The description is misleading in the inertial frame where the solenoid is at rest. Conservation of energy in this inertial frame requires forces arising from particle accelerations which are not mentioned in the original description of the unperturbed motion of the moving charge and constant-current solenoid.
C. Aharonov-Bohm Situation as Relativity Paradox
The classical Aharonov-Bohm situation involves the electromagnetic interaction of a charged particle and a magnet at the relativistic 1/c²-level, though this relativity aspect is rarely mentioned in the literature. The interaction between the charged particle and the solenoid is calculated in the approximation that each continues its unperturbed behavior during the interaction. The interaction is much more easily understood in the inertial frame where the charged particle is at rest and the solenoid is moving, because in this inertial frame, the physics requires no new forces beyond those arising from the original descriptions of the unperturbed parts of the interacting system. Indeed, in this inertial frame where the charge is at rest, it is easy to verify the energy conservation law based upon the equal-and-opposite electric forces that the unperturbed charge and solenoid put on each other. On the other hand, in the inertial frame in which the solenoid is at rest, energy conservation is violated unless one introduces additional particle accelerations or external forces beyond those present in the unperturbed solenoid. If the charges of the solenoid are allowed to accelerate, they introduce back (Faraday) forces on the electron which were not included in the original description of an unperturbed solenoid. Alternatively, one may introduce external forces holding the solenoid particles at constant speed, and these external forces account for the required changes in energy, but such external forces were not part of the original description of the interaction of a charged particle and a solenoid as unperturbed systems.
D. Paradoxes Involving Particle-Magnet Interactions
The interaction of charges and magnets occurs at the relativistic 1/c 2 -level of energy and momentum. Because the interaction of charges and magnets at the relativistic level is poorly understood in classical electrodynamics, it has given rise to a whole class of "paradoxes," including the Aharonov-Bohm phase shift,[1] the Aharonov-Casher phase shift, [8] the Shockley-James paradox, [9] "hidden momentum in magnets," [10] and Mansuripur's erroneous claim. [11] All of these effects involve relativistic interactions where our familiar experience with nonrelativistic mechanics, or with electrostatics, or with magnetostatics may not be adequate. These interactions can all be treated at the level of the Darwin Lagrangian [12] which describes quasi-static classical electrodynamics which excludes radiation.
In this article, we will treat only the energy conservation aspects of the interaction between a point charge and a toroid. A more complete description of the interaction between a point charge and magnet will be published elsewhere. [13]
II. INTERACTION OF A POINT CHARGE AND A MAGNETIC MOMENT
A. Magnetic Dipole Moment
At its basic level, the problem of the classical Aharonov-Bohm situation involves the interaction of a magnetic moment m and a point charge e. We will picture the magnetic moment in its own S m rest frame as an electrically-neutral circular current loop of radius b and current I. The magnetic moment of this current loop (in Gaussian units) is
$$\mathbf{m} = \hat{\mathbf{n}}\,\pi b^2 I / c, \quad (1)$$
where the direction $\hat{\mathbf{n}}$ is normal to the plane of the current loop and is connected to the direction of the current I by the right-hand rule. If the center of the current loop is at r_m, we assume that the point charge e at r_e is sufficiently far away that the separation is large compared to the radius b, b << |r_e − r_m|, and so the magnetic dipole approximation is adequate.
B. Interaction in the Inertial Frame where the Magnetic Moment is at Rest
In the S_m inertial frame where the magnetic moment is at rest and the charge e is moving with constant velocity v_e = v, the charge e carries (through order 1/c²) both an electric field
$$\mathbf{E}_e(\mathbf{r},t) = \frac{e\,(\mathbf{r}-\mathbf{r}_e)}{|\mathbf{r}-\mathbf{r}_e|^3}\left\{1 + \frac{1}{2}\frac{v^2}{c^2} - \frac{3}{2}\left[\frac{(\mathbf{r}-\mathbf{r}_e)\cdot\mathbf{v}}{|\mathbf{r}-\mathbf{r}_e|\,c}\right]^2\right\} \quad (2)$$
and a magnetic field
$$\mathbf{B}_e(\mathbf{r},t) = e\,\frac{\mathbf{v}}{c}\times\frac{(\mathbf{r}-\mathbf{r}_e)}{|\mathbf{r}-\mathbf{r}_e|^3}, \quad (3)$$
so that the charge e has an interaction energy with the magnetic moment given by the magnetic field energy [14]
$$\Delta U^{(B)} = -\mathbf{m}\cdot\mathbf{B}_e(\mathbf{r}_m,t) \approx -\mathbf{m}\cdot\left[e\,\frac{\mathbf{v}}{c}\times\frac{(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^3}\right]. \quad (4)$$
In this inertial frame, the magnetic moment experiences a magnetic force
$$\mathbf{F}^{(B)}_{\text{on }m} = -\nabla_m[-\mathbf{m}\cdot\mathbf{B}_e(\mathbf{r}_m,t)] = \nabla_m\left\{\mathbf{m}\cdot\left[e\,\frac{\mathbf{v}}{c}\times\frac{(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^3}\right]\right\} = \left[\mathbf{m}\times e\,\frac{\mathbf{v}}{c}\right]\cdot\nabla_m\frac{(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^3} = -3\,\frac{\left\{\left[\mathbf{m}\times e\,(\mathbf{v}/c)\right]\cdot(\mathbf{r}_m-\mathbf{r}_e)\right\}(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^5} + \frac{\mathbf{m}\times e\,(\mathbf{v}/c)}{|\mathbf{r}_m-\mathbf{r}_e|^3}, \quad (5)$$
while the charge e experiences a (deflecting) magnetic force due to the magnetic dipole
$$\mathbf{F}^{(B)}_{\text{on }e} = e\,\frac{\mathbf{v}}{c}\times\mathbf{B}_m(\mathbf{r}_e,t) = e\,\frac{\mathbf{v}}{c}\times\left\{\frac{3\,[\mathbf{m}\cdot(\mathbf{r}_e-\mathbf{r}_m)]\,(\mathbf{r}_e-\mathbf{r}_m)}{|\mathbf{r}_e-\mathbf{r}_m|^5} - \frac{\mathbf{m}}{|\mathbf{r}_e-\mathbf{r}_m|^3}\right\}. \quad (6)$$
In this inertial frame, the forces between the magnetic moment and the charge are not equal in magnitude and opposite in direction; Eq. (5) involves a term in the direction (r_m − r_e), whereas Eq. (6) involves a term in the direction v × (r_e − r_m).
C. Interaction in the Inertial Frame where the Charge e is at Rest
On the other hand, in the S_e inertial frame where the charged particle e is at rest and the magnetic moment is moving with velocity v_m = −v, the interaction between the charge and the magnetic moment involves energy in the electric fields because, in this frame where it is moving, the magnetic moment has an electric dipole moment [15]
$$\mathbf{p}_m \cong -\frac{\mathbf{v}}{c}\times\mathbf{m}. \quad (7)$$
In this S_e inertial frame, the electric interaction energy is [16]
$$\Delta U^{(E)} = -\mathbf{p}_m\cdot\mathbf{E}_e(\mathbf{r}_m,t) = -\left(-\frac{\mathbf{v}}{c}\times\mathbf{m}\right)\cdot\frac{e\,(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^3}, \quad (8)$$
which is the same as the magnetic energy given in Eq. (4). The electric force on the magnetic moment is accordingly
$$\mathbf{F}^{(E)}_{\text{on }m} = -\nabla_m[-\mathbf{p}_m\cdot\mathbf{E}_e(\mathbf{r}_m,t)] = (\mathbf{p}_m\cdot\nabla_m)\,\mathbf{E}_e(\mathbf{r}_m,t) = \left[\left(-\frac{\mathbf{v}}{c}\right)\times\mathbf{m}\right]\cdot\nabla_m\frac{e\,(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^3} = -3\,e\,\frac{\left\{\left[\left(-\frac{\mathbf{v}}{c}\right)\times\mathbf{m}\right]\cdot(\mathbf{r}_m-\mathbf{r}_e)\right\}(\mathbf{r}_m-\mathbf{r}_e)}{|\mathbf{r}_m-\mathbf{r}_e|^5} + e\,\frac{\left(-\frac{\mathbf{v}}{c}\right)\times\mathbf{m}}{|\mathbf{r}_m-\mathbf{r}_e|^3}. \quad (9)$$
Noting the reversals of sign connected with the order in the cross products, one finds that this electric force on the magnetic dipole in Eq. (9) is the same as the magnetic force on the magnetic dipole appearing in Eq. (5). Also, the electric force on the charge e is just the negative of this expression,
$$\mathbf{F}^{(E)}_{\text{on }e} = e\,\mathbf{E}_m(\mathbf{r}_e,t) = e\left\{\frac{3\left\{\left[\left(-\frac{\mathbf{v}}{c}\right)\times\mathbf{m}\right]\cdot(\mathbf{r}_e-\mathbf{r}_m)\right\}(\mathbf{r}_e-\mathbf{r}_m)}{|\mathbf{r}_e-\mathbf{r}_m|^5} - \frac{\left(-\frac{\mathbf{v}}{c}\right)\times\mathbf{m}}{|\mathbf{r}_e-\mathbf{r}_m|^3}\right\}. \quad (10)$$
Through order 1/c², in this S_e inertial frame, the electric forces that the magnetic moment and charge place on each other are equal in magnitude and opposite in direction. The change in electric field energy is accounted for by the work done by the electric force $\mathbf{F}^{(E)}_{\text{on }m}$ on the moving magnetic moment.
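The equality of the interaction energies in Eqs. (4) and (8), noted above, follows from the cyclic property of the scalar triple product. Writing $\mathbf{u} = e\,(\mathbf{r}_m-\mathbf{r}_e)/|\mathbf{r}_m-\mathbf{r}_e|^3$ for brevity, the one-line check is
$$\Delta U^{(E)} = -\left(-\frac{\mathbf{v}}{c}\times\mathbf{m}\right)\cdot\mathbf{u} = \left(\frac{\mathbf{v}}{c}\times\mathbf{m}\right)\cdot\mathbf{u} = \mathbf{m}\cdot\left(\mathbf{u}\times\frac{\mathbf{v}}{c}\right) = -\,\mathbf{m}\cdot\left(\frac{\mathbf{v}}{c}\times\mathbf{u}\right) = \Delta U^{(B)}.$$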
III. TRANSITION TO A POINT CHARGE AND TOROID
A. Forming a Toroid from Magnetic Dipoles
Although the equations which we have listed already record the basic paradox, the situation becomes far more vivid, and also simpler calculationally, if we imagine many magnetic moments arranged so as to form a toroid. And indeed, a toroid can be pictured as a solenoid (a stack of current loops) which is bent into a circular shape and so brings us to the Aharonov-Bohm situation where electrons pass a long solenoid.
Thus, we picture the magnetic moments (which are simply circular current loops of radius b) arranged in a circular pattern of (average) radius R around the z-axis so as to form a toroid located along the z-axis at z_T. Each current loop lies in the plane formed by the z-axis and the displacement from the z-axis to the center of the current loop. We assume that there are N current loops, each carrying current I, and that the (average) radius R of the toroid is much larger than the radius b of each current loop, b << R. The average magnetic field inside the toroid is
$$\mathbf{B}_T = \hat{\boldsymbol{\phi}}\,\frac{4\pi}{c}\,\frac{NI}{2\pi R} = \hat{\boldsymbol{\phi}}\,\frac{2NI}{cR}, \quad (11)$$
and the magnetic flux through each current loop of the toroid is
$$\Phi = \pi b^2 B_T = \frac{2\pi b^2 NI}{cR}. \quad (12)$$
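For a sense of the magnitudes in Eqs. (11) and (12), here is a small numerical sketch; all parameter values below are hypothetical illustrations, and the units are Gaussian:

```python
import math

# Hypothetical toroid parameters, Gaussian units (illustrative only).
c = 2.998e10      # speed of light [cm/s]
N = 1000          # number of current loops
I = 3.0e9         # current per loop [statampere] (about 1 ampere)
R = 10.0          # average toroid radius [cm]
b = 0.5           # radius of each current loop [cm]

B_T = 2 * N * I / (c * R)        # Eq. (11): average field inside the toroid [gauss]
Phi = math.pi * b**2 * B_T       # Eq. (12): flux through each loop [gauss cm^2]
print(B_T, Phi)                  # about 20 gauss and 16 gauss cm^2
```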
For the electrically-neutral toroidal situation, there are no toroidal electric fields, and all the magnetic fields are confined to the interior of the toroid.
We consider a charged particle e moving with velocity $\mathbf{v}_e = \hat{\mathbf{z}}\,v$ along the z-axis, which is the axis of symmetry of the toroid. We want to obtain the lowest non-vanishing approximation for the interaction between the charge e and the toroid. This "lowest-nonvanishing-interaction" approximation suggests that we consider the toroid and the charge e as continuing their unperturbed motions despite their mutual interaction. Thus we consider the currents carried by the charge carriers of the toroid as constant. We also consider the velocity v_e of the charge e as constant. With these assumptions, we wish to determine the forces on the charge e and on the toroid due to the toroid and the charge e respectively, through order 1/c².
C. Toroid at Rest
In the S_T inertial frame where the toroid is at rest and the charge e is moving with velocity $\mathbf{v}_e = \hat{\mathbf{z}}\,v$, it appears that the passing charge puts a magnetic force (corresponding to Eq. (5)) on each magnetic dipole moment (circular current loop) of the toroid. By symmetry, the z-components of the forces add while the radial components cancel. The magnetic field of the charge e (assumed positive) is in the same circular pattern as that of the toroid. Taking the negative derivative of the −m·B contributions gives a total magnetic force on the toroid
$$\mathbf{F}^{(B)}_{\text{on }T} = -\hat{\mathbf{z}}\,\frac{\partial}{\partial z_T}\left[-N\,\frac{\pi b^2 I}{c}\,B_e(z_T,t)\right] = \hat{\mathbf{z}}\,\frac{\partial}{\partial z_T}\left[N\,\frac{\pi b^2 I}{c}\,\frac{ev}{c}\,\frac{R}{\left[(z_T - z_e)^2 + R^2\right]^{3/2}}\right] = \hat{\mathbf{z}}\,N\,\frac{\pi b^2 I}{c}\,\frac{ev}{c}\,\frac{R\,[-3(z_T - z_e)]}{\left[(z_T - z_e)^2 + R^2\right]^{5/2}}. \quad (13)$$
Since the toroid is electrically neutral and all the magnetic fields of the toroid are confined to the interior of the toroid, there appears to be no back force of the unperturbed toroid on the charge e.
Since the charged particle e experiences no forces, there is no change in its kinetic energy.
Since the toroid is electrically neutral, there is no change in the electric energy as the charge e and the toroid interact. However, there is a change in the system magnetic energy associated with the overlap of the magnetic field of the charge with the magnetic field of the toroid in the volume of the toroid,
$$\Delta U^{(B)}_{\text{overlap}} = \frac{1}{4\pi}\int d^3r\,\mathbf{B}_e\cdot\mathbf{B}_T = \frac{1}{4\pi}\,\frac{evR}{c\,\left[(z_T - z_e)^2 + R^2\right]^{3/2}}\,\frac{2NI}{cR}\,(2\pi R)(\pi b^2). \quad (14)$$
In this inertial frame, it may appear that the relativistic conservation law of energy is violated, since there is apparently no force on the moving charge e and the currents of the toroid are assumed unperturbed.
D. Charge e at Rest
On the other hand, in the S_e inertial frame in which the charge e is at rest while the unperturbed toroid is moving with velocity $\mathbf{v}_T = -\hat{\mathbf{z}}\,v$, there are electric forces between the charged particle and the toroid. In an inertial frame in which it is moving with velocity −v, an unperturbed magnetic moment m has an electric dipole moment p_m = (−v/c) × m, as given in Eq. (7). Thus in the S_e inertial frame, the unperturbed toroid has a ring of electric dipoles which produce a net z-component of electric force on the charge e which is N times larger than the z-component of force produced by a single electric dipole in Eq. (10),
$$\mathbf{F}^{(E)}_{\text{on }e} = e\,\mathbf{E}_T(\mathbf{r}_e,t) = \hat{\mathbf{z}}\,e\,N\,\frac{v}{c}\,\frac{\pi b^2 I}{c}\,\frac{3R\,(z_T - z_e)}{\left[(z_T - z_e)^2 + R^2\right]^{5/2}}. \quad (15)$$
Also, the charge e will place an electric force on each electric dipole of the moving toroid, giving a net force on the toroid
$$\mathbf{F}^{(E)}_{\text{on }T} = \hat{\mathbf{z}}\left\{N\,\hat{\mathbf{z}}\cdot\left[(\mathbf{p}_m\cdot\nabla_m)\,\mathbf{E}_e(\mathbf{r}_m,t)\right]\right\} = \hat{\mathbf{z}}\,N\left\{\hat{\mathbf{z}}\cdot p_m\,\frac{\partial}{\partial r}\left[e\,\frac{\hat{\mathbf{r}}\,r + \hat{\mathbf{z}}\,(z_T - z_e)}{\left[(z_T - z_e)^2 + r^2\right]^{3/2}}\right]\right\}_{r=R} = \hat{\mathbf{z}}\,N\,p_m\,e\left[\frac{-3r\,(z_T - z_e)}{\left[(z_T - z_e)^2 + r^2\right]^{5/2}}\right]_{r=R} = \hat{\mathbf{z}}\,N\,\frac{v}{c}\,\frac{\pi b^2 I}{c}\,e\,\frac{-3R\,(z_T - z_e)}{\left[(z_T - z_e)^2 + R^2\right]^{5/2}}. \quad (16)$$
This electric force on the toroid in Eq. (16) is exactly the same as the magnetic force found in Eq. (13) for the previous inertial frame where the toroid was at rest and the charge e was moving. However, here in the S_e inertial frame where the charge e is at rest and the toroid is moving, the electric forces on the charge e and on the toroid are equal in magnitude and opposite in direction.
During the interaction, there is no energy change in the magnetic field energy, since the charge e is at rest and so has no magnetic field. However, during the interaction, there is a change in the electric field energy given by
$$\Delta U^{(E)} = -N\,\mathbf{p}_m\cdot\mathbf{E}_e(\mathbf{r}_m,t) = -N\,\frac{v}{c}\,\frac{\pi b^2 I}{c}\,E_{er}(\mathbf{r}_m,t) = -N\,\frac{v}{c}\,\frac{\pi b^2 I}{c}\,\frac{e\,R}{\left[(z_T - z_e)^2 + R^2\right]^{3/2}}. \quad (17)$$
The electric energy change ΔU^{(E)} in Eq. (17) is accounted for by the electric force $\mathbf{F}^{(E)}_{\text{on }T}$ on the moving toroid,
$$\Delta U^{(E)} = -\int_{\infty}^{z_T}\mathbf{F}^{(E)}_{\text{on }T}\cdot\hat{\mathbf{z}}\;dz_T. \quad (18)$$
Thus energy conservation involving the unperturbed parts of the system indeed holds in the S e inertial frame in which the toroid is moving with velocity −v and the charge e is at rest.
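As a sanity check on Eq. (18), one can numerically integrate the force of Eq. (16) along the toroid's approach from far away and compare with Eq. (17). A minimal sketch, with the common prefactor N(v/c)(πb²I/c)e set to 1 (an arbitrary normalization chosen only for illustration):

```python
import numpy as np

R = 1.0
z_e = 0.0

def F_z(z_T):
    # z-component of Eq. (16) with the prefactor N(v/c)(pi b^2 I/c) e set to 1
    return -3 * R * (z_T - z_e) / ((z_T - z_e) ** 2 + R**2) ** 2.5

def dU_E(z_T):
    # Eq. (17) with the same prefactor
    return -R / ((z_T - z_e) ** 2 + R**2) ** 1.5

z_T = 0.7
zs = np.linspace(50.0, z_T, 200001)   # toroid moves in from far away
work = np.trapz(F_z(zs), zs)          # integral of F . z_hat dz_T from infinity to z_T
print(-work, dU_E(z_T))               # Eq. (18): -work matches Delta U^(E)
```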
On the other hand, the change in electric energy ΔU^{(E)} in Eq. (17) is exactly equal to the negative of the change in magnetic energy ΔU^{(B)}_{overlap} in Eq. (14) due to the overlap of the magnetic field of the charge e and the magnetic field of the toroid.
IV. DISCUSSION OF THE RELATIVITY PARADOX
A. Contrast in Forces Between Different Inertial Frames
Thus we have our relativity "paradox." In both inertial frames, all the forces are of order 1/c 2 and so the forces cannot change in leading order in v/c when viewed from a different inertial frame. Nevertheless, different inertial frames claim that different forces appear. When described in the S T rest frame of the toroid, there is a magnetic force on the toroid, but apparently no force on the moving charge e. However, when described in the S e rest frame of the charge e, there are electric forces on the charge e and also on the toroid. Indeed, in the rest frame of the charge e, one finds exactly the same force on the toroid (now an electric force) as was found as a magnetic force in the inertial frame where the toroid is at rest, but now one also finds its partner in the electric force of the toroid on the charge e.
B. The Inertial Frame with the Unreliable Description
Just as in the relativity paradox of the pole and the barn, one must make a choice. Which description should one trust as representing accurate physics? We suggest that in each case, the accurate description involves the inertial frame in which the physics does not require the introduction of external forces and/or accelerations which were not part of the original account of the unperturbed motion. For the pole and the barn, the unreliable description involves the inertial frame in which the barn is moving, and so is Lorentz contracted; this inertial frame requires the introduction of new forces when the front of the pole encounters the back wall of the barn, before the barn door is closed. [7] These external forces alter the account given for the unperturbed motion of the pole.
In the situation of the classical Aharonov-Bohm interaction of a charged particle and a magnet, the situation involves the same basic idea. In which inertial frame does the physics require the introduction of new forces and/or accelerations which were not part of the original account of unperturbed motion? The answer is that the S T inertial frame in which the toroid is at rest is less satisfactory; specifically, the changes in magnetic energy associated with the overlap of the magnetic field of the charge e and the magnetic field of the toroid have not been accounted for satisfactorily.
C. Problems Involving Magnetic Energy Changes
Indeed, changes in magnetic energy often present problems. They are the basis of the present paradox. Electric energy changes involve work done directly by the electric forces, as is evident in the second description given for our charge-magnetic interaction where the charge e is at rest and the toroid is moving. In contrast, magnetic forces do no work. Therefore magnetic energy changes require work being done by separate electric or external forces. Magnetic energy balance in quasistatic systems requires the existence of electric forces associated with the accelerations involving changing speeds of charge particles. Such accelerations are not contained in the description of the unperturbed toroid.
D. Balancing Magnetic Energy Changes for the Toroid at Rest
The energy balance for the system of the charge e and the toroid involves three different contributions: mechanical kinetic energy, electric energy, and magnetic energy,
$$\Delta U = \Delta U^{(M)} + \Delta U^{(E)} + \Delta U^{(B)}.$$
The troublesome aspect, as usual, involves the magnetic energy ΔU^{(B)}. Although the 1/c²-force on the toroid (given in Eqs. (13) and (16)) is exactly the same in either inertial frame, the 1/c²-energy change of the system given in Eqs. (14) and (17) is not the same, but indeed involves a relative minus sign. The difficulty here involves the same aspect which appears in any discussion of magnetic energy changes for quasistatic systems. [17][18] There is a sharp contrast between electric and magnetic energy changes. Electric energies involve only the relative positions between charged particles. However, quasistatic magnetic energies involve moving charges. Therefore magnetic energy changes can involve changes in 1) the relative positions of the current carriers and/or in 2) the speeds of the charge carriers.

For our charge-toroid example in the S_T inertial frame in which the toroid is at rest and the charge e is moving, we have both aspects of magnetic energy change,
$$\Delta U^{(B)} = \Delta U^{(B)}_{\text{overlap}} + \Delta U^{(B)}_{\text{toroid currents}}.$$
There is a positive magnetic energy change ΔU^{(B)}_{overlap} associated with the overlap of the magnetic field of the charge e with the magnetic field of the toroid. However, the electric fields of the charge e act on the current carriers of the toroid. The zero-order electrostatic field of the charge e has no emf and so does not deliver net energy to the toroid currents.
It is the terms of order v²/c² in Eq. (2) which do indeed produce an emf and deliver net energy to (or remove magnetic energy from) the toroid currents. The toroid responds to the effort to change the speeds of the current carriers [19] in the fashion typical of a solenoid. The (small) accelerations of the (many) toroid current carriers produce a back (Faraday) acceleration electric field acting on the agent causing the original emf, in this case on the charge e. The magnetic energy change due to the changing toroid currents involves B_T² and so is twice as large and of opposite sign as the overlap magnetic energy change, which involves only the first power of B_T. It is the back (Faraday) acceleration electric field of these accelerating charge carriers which places a force on the charge e in the S_T inertial frame where the toroid is at rest and the charge e is moving. The energy-balancing back force is of order 1/c². The electric force on the charge e appears immediately in the unperturbed-motion discussion in the S_e inertial frame in which the toroid is moving and so (according to the relativistic description of the unperturbed toroid motion) has an electric dipole moment. In the S_T rest frame of the toroid, the basis for the back field on the charge e involves particle accelerations which are not part of the description of the unperturbed toroid. Thus the unperturbed description in the S_T rest frame of the toroid, which does not mention the fields arising from the accelerations of the current carriers, is indeed the less reliable description of the relativity paradox.
E. Absence of Quasistatic Acceleration Terms in Recent Textbooks
The back (Faraday) acceleration fields (which are unfamiliar in the interaction of a charge e and a toroid) are thoroughly familiar in the case of a solenoid with increasing currents. The back emf appearing in a solenoid when the currents are increasing is caused by these same back (Faraday) acceleration fields of the accelerating current carriers of the solenoid. [19] However, in the current textbooks of classical electromagnetism, the solenoid's back emf is calculated from a changing magnetic flux for a highly-symmetric solenoid, not from the accelerations of the current carriers.
Acceleration electric fields appear immediately from the Darwin Lagrangian. Thus, at the quasistatic 1/c²-level, the electric field of an accelerating charge e is not that given in Eq. (2) for a constant-velocity charge e, but rather includes additional acceleration-dependent terms, [20][21]
$$\mathbf{E}(\mathbf{r},t) = \sum_a q_a\,\frac{(\mathbf{r}-\mathbf{r}_a)}{|\mathbf{r}-\mathbf{r}_a|^3}\left\{1 + \frac{1}{2}\left(\frac{\dot{\mathbf{r}}_a}{c}\right)^2 - \frac{3}{2}\left[\frac{\dot{\mathbf{r}}_a\cdot(\mathbf{r}-\mathbf{r}_a)}{|\mathbf{r}-\mathbf{r}_a|\,c}\right]^2\right\} - \sum_a \frac{q_a}{2c^2}\left\{\frac{\ddot{\mathbf{r}}_a}{|\mathbf{r}-\mathbf{r}_a|} + \frac{(\mathbf{r}-\mathbf{r}_a)\,[\ddot{\mathbf{r}}_a\cdot(\mathbf{r}-\mathbf{r}_a)]}{|\mathbf{r}-\mathbf{r}_a|^3}\right\}. \quad (19)$$
However, even as the Darwin Lagrangian is barely mentioned in the recent textbooks of classical electromagnetism, the local (Faraday) acceleration fields in Eq. (19) for an accelerating charge are never mentioned. Fields due to accelerating charges appear only in the sections on radiation leading to Larmor's formula.
F. Classical Counterpart to the Aharonov-Bohm Effect
The interaction between a charged particle and a magnet is a relativistic effect of order 1/c 2 to lowest order. Therefore the interaction is adequately described by the Darwin Lagrangian which reproduces classical electrodynamics through order 1/c 2 but excludes radiation. We expect that the same basic interaction continues to hold for full classical electrodynamics, where we have the additional complications of retarded times and (very small) radiation effects.
It seems widely accepted that there is "no classical analogue to the Aharonov-Bohm effect." Statements of this sort appear in many textbooks of quantum theory [22] and in some textbooks of classical electromagnetism. [4][5] The usual argument for this no-classical-analogue statement notes that the magnetic field vanishes outside a very long solenoid or toroid where the currents are constant, and hence concludes that there is no force on a passing charged particle. However, such unsophisticated views based upon magnetostatics do not do justice to the subtleties of classical electrodynamics. Because physicists are unfamiliar with the idea of quasistatic accelerating charges producing the electric fields associated with an emf, the claims associated with the classical Aharonov-Bohm situation have been rarely challenged. [23]

V. ACKNOWLEDGEMENT
The reanalysis here of the classical interaction of a charged particle and a magnet was stimulated by a manuscript of Dr. Hanno Essén, "A classical Aharonov-Bohm effect arises when one goes beyond the test particle approximation." I wish to thank Dr. Essén for alerting me to the work included in reference 21.

[1] Y. Aharonov and D. Bohm, "Significance of electromagnetic potentials in quantum theory," Phys. Rev. 115, 485-491 (1959).
[2] For reviews of the Aharonov-Bohm phase shift, see for example, S. Olariu and I. Iovitzu Popescu, "The Quantum Effects of Electromagnetic Fluxes," Rev. Mod. Phys. 57, 339-436 (1985), and H. Batelaan and A. Tonomura, "The Aharonov-Bohm effects of electromagnetic fluxes," Physics Today, September 2009, pp. 38-43.
[3] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics (Addison-Wesley, Reading, MA, 1964), Vol. II, Sect. 15-5. For a correction, see T. H. Boyer, "Misinterpretation of the Aharonov-Bohm Effect," Am. J. Phys. 40, 56-59 (1972). Feynman agreed that his description in the Lectures was in error. T. H. Boyer, private correspondence. The Lectures are now freely available to read on the internet. However, they are published without any changes, and so Section 15.5 still contains the same errors.
[4] A. Shadowitz, The Electromagnetic Field (Dover, New York, 1988), pp. 197, 208-209, 517-522.
[5] A. Garg, Classical Electromagnetism in a Nutshell (Princeton U. Press, Princeton, NJ 08450, 2012), pp. 107-108.
[6] A dissenting view is given by T. H. Boyer, "Classical electromagnetic deflections and lag effects associated with quantum interference pattern shifts: considerations related to the Aharonov-Bohm effect," Phys. Rev. D 8, 1679-1693 (1973); "The Aharonov-Bohm effect as a classical electromagnetic lag effect: an electrostatic analogue and possible experimental test," Il Nuovo Cimento 100, 685-701 (1987); "Does the Aharonov-Bohm effect exist?" Found. Phys. 30, 893-905 (2000); "Classical electromagnetism and the Aharonov-Bohm phase shift," Found. Phys. 30, 907-932 (2000); "Darwin-Lagrangian analysis for the interaction of a point charge and a magnet: Considerations related to the controversy regarding the Aharonov-Bohm and Aharonov-Casher phase shifts," J. Phys. A: Math. Gen. 39, 3455-3477 (2006).
[7] See, for example, D. J. Griffiths, Introduction to Electrodynamics 4th ed (Pearson, New York, 2013), pp. 516-517, or B. F. Schutz, A First Course in General Relativity (Cambridge U. Press, 1986), p. 34, or E. F. Taylor and J. A. Wheeler, Spacetime Physics: Introduction to Special Relativity, 2nd ed (Freeman, New York, 1992), p. 166. If a spacetime event occurs in one inertial frame, the spacetime event will occur in any other inertial frame. Only the time order of spacetime events with a spacelike separation can be different in two different inertial frames.
[8] Y. Aharonov and A. Casher, "Topological quantum effects for neutral particles," Phys. Rev. Lett. 53, 319-321 (1984).
[9] W. Shockley and R. P. James, "'Try simplest cases' discovery of 'hidden momentum' forces on 'magnetic currents,'" Phys. Rev. Lett. 18, 876-879 (1967).
[10] See for example, D. J. Griffiths, Introduction to Electrodynamics 3rd edn (Prentice-Hall, Upper Saddle River, NJ, 1999), pp. 357, 361, 520-521; J. D. Jackson, Classical Electrodynamics 3rd ed (John Wiley & Sons, New York, 1999), pp. 189, 618; A. Zangwill, Modern Electrodynamics (Cambridge U. Press, 2013), pp. 521-522. See also, T. H. Boyer, "Classical interaction of a magnet and a point charge: The Shockley-James Paradox," Phys. Rev. E 91, 013201(11) (2015); "Interaction of a magnet and a point charge: Unrecognized internal electromagnetic momentum," Am. J. Phys. 83, 433-442 (2015).
[11] M. Mansuripur, "Trouble with the Lorentz law of force: Incompatibility with special relativity and momentum conservation," Phys. Rev. Lett. 108, 193901 (2012). One reply to Mansuripur which does not invoke hidden momentum is given by T. H. Boyer, "Examples and comments related to relativity controversies," Am. J. Phys. 80, 962-971 (2012).
[12] J. D. Jackson, Classical Electrodynamics 2nd ed (John Wiley & Sons, New York, 1975), pp. 593-595.
[13] T. H. Boyer, "Concerning Classical Forces, Energies, and Potentials for Accelerated Point Charges," Am. J. Phys., to be published; and "A Classical Electromagnetic Basis for the Aharonov-Bohm Phase Shift," submitted for publication.
[14] See, for example, Griffiths in Ref. 7, p. 291.
[15] J. D. Jackson, Classical Electrodynamics (John Wiley & Sons, New York, 1962), p. 389.
[16] See, for example, ref. [12], p. 172.
[17] T. H. Boyer, "Concerning Classical Forces, Energies, and Potentials for Accelerating Point Charges," Am. J. Phys. 91, 74-78 (2023).
[18] T. H. Boyer, "Electric and magnetic forces and energies for a parallel-plate capacitor and a flattened, slip-joint solenoid," Am. J. Phys. 69, 1277-1279 (2001).
[19] T. H. Boyer, "Faraday induction and the current carriers in a circuit," Am. J. Phys. 83, 263-271 (2015).
[20] See the older text by L. Page and N. I. Adams, Electrodynamics (Van Nostrand, New York, 1940), p. 175. See also, L. Page and N. I. Adams, "Action and reaction between moving charges," Am. J. Phys. 13, 141-147 (1945).
[21] See also, B. Podolsky and K. S. Kunz, Fundamentals of Electrodynamics (Marcel Dekker, New York, 1969); and H. Primakoff and T. Holstein, "Many-Body Interactions in Atomic and Nuclear Systems," Phys. Rev. 55, 1218-1234 (1939).
[22] See for example, D. J. Griffiths, Introduction to Quantum Mechanics 2nd ed. (Pearson Prentice Hall, Upper Saddle River, NJ, 2005), pp. 384-391, or L. E. Ballentine, Quantum Mechanics (Prentice Hall, Englewood Cliffs, New Jersey 07632, 1990), pp. 220-223.
[23] A full discussion of the forces and energy changes is given in reference [13].
| [] |
[
"Controllability-Aware Unsupervised Skill Discovery",
"Controllability-Aware Unsupervised Skill Discovery"
] | [
"Seohong Park ",
"Kimin Lee ",
"Youngwoon Lee ",
"Pieter Abbeel "
] | [] | [] | One of the key capabilities of intelligent agents is the ability to discover useful skills without external supervision. However, the current unsupervised skill discovery methods are often limited to acquiring simple, easy-to-learn skills due to the lack of incentives to discover more complex, challenging behaviors. We introduce a novel unsupervised skill discovery method, Controllabilityaware Skill Discovery (CSD), which actively seeks complex, hard-to-control skills without supervision. The key component of CSD is a controllability-aware distance function, which assigns larger values to state transitions that are harder to achieve with the current skills. Combined with distance-maximizing skill discovery, CSD progressively learns more challenging skills over the course of training as our jointly trained distance function reduces rewards for easy-toachieve skills. Our experimental results in six robotic manipulation and locomotion environments demonstrate that CSD can discover diverse complex skills including object manipulation and locomotion skills with no supervision, significantly outperforming prior unsupervised skill discovery methods. Videos and code are available at | 10.48550/arxiv.2302.05103 | [
"https://export.arxiv.org/pdf/2302.05103v3.pdf"
] | 256,808,231 | 2302.05103 | e966cca871cef85f3bfb9a6c69cdcbec23357c1d |
Controllability-Aware Unsupervised Skill Discovery
Seohong Park
Kimin Lee
Youngwoon Lee
Pieter Abbeel
One of the key capabilities of intelligent agents is the ability to discover useful skills without external supervision. However, the current unsupervised skill discovery methods are often limited to acquiring simple, easy-to-learn skills due to the lack of incentives to discover more complex, challenging behaviors. We introduce a novel unsupervised skill discovery method, Controllabilityaware Skill Discovery (CSD), which actively seeks complex, hard-to-control skills without supervision. The key component of CSD is a controllability-aware distance function, which assigns larger values to state transitions that are harder to achieve with the current skills. Combined with distance-maximizing skill discovery, CSD progressively learns more challenging skills over the course of training as our jointly trained distance function reduces rewards for easy-toachieve skills. Our experimental results in six robotic manipulation and locomotion environments demonstrate that CSD can discover diverse complex skills including object manipulation and locomotion skills with no supervision, significantly outperforming prior unsupervised skill discovery methods. Videos and code are available at
Introduction
Humans are capable of autonomously learning skills, ranging from basic muscle control to complex acrobatic behaviors, which can be later combined to achieve highly complex tasks. Can machines similarly discover useful skills without any external supervision? Recently, many unsupervised skill discovery methods have been proposed to discover diverse behaviors in the absence of extrinsic rewards (Gregor et al., 2016; Eysenbach et al., 2019; Sharma et al., 2020; Achiam et al., 2018; Campos Camúñez et al., 2020; Hansen et al., 2020; Kim et al., 2021; Liu & Abbeel, 2021a; Laskin et al., 2022). These methods have also demonstrated efficient downstream reinforcement learning (RL) either by fine-tuning (Laskin et al., 2021) or sequentially combining (Eysenbach et al., 2019; Sharma et al., 2020) the discovered skills.

Figure 1. Object trajectories and gripper trajectories of 2-D continuous skills discovered by three unsupervised skill discovery methods, CSD (ours), LSD, and DIAYN (Eysenbach et al., 2019), in the FetchPush environment: (a) FetchPush, (b) Skill trajectories. Trajectories with different colors represent different skills. While previous methods focus only on maneuvering the gripper, CSD discovers object manipulation skills in the absence of supervision.
However, in complex environments, current unsupervised skill discovery methods are often limited to discovering only simple, easy-to-learn skills. For example, as illustrated in Figure 1, previous approaches (LSD and DIAYN) only learn to gain control of the agent's own 'body' (i.e., the gripper and joint angles), completely ignoring the object in the Fetch environment. This is because learning difficult skills, such as interacting with the object, has no incentive for them compared to learning easy skills. In other words, their objectives can be fully optimized with simple skills.
To mitigate this issue, prior approaches incorporate human supervision, such as limiting the agent's focus to specific dimensions of the state space of interest (Eysenbach et al., 2019;Sharma et al., 2020;Adeniji et al., 2022). However, this not only requires manual feature engineering but also significantly limits the diversity of skills. On the other hand, we humans consistently challenge ourselves to learn more complex skills after mastering simple skills in an autonomous manner.
Inspired by this, we propose a novel unsupervised skill discovery method, Controllability-aware Skill Discovery (CSD), which explicitly seeks complex, hard-to-learn behaviors that are potentially more useful for solving downstream tasks. Our key idea is to train a controllability-aware distance function based on the current skill repertoire and combine it with distance-maximizing skill discovery. Specifically, we train the controllability-aware distance function to assign larger values to harder-to-achieve state transitions and smaller values to easier-to-achieve transitions with the current skills. Since CSD aims to maximize this controllability-aware distance, it autonomously learns increasingly complex skills over the course of training. We highlight that, to the best of our knowledge, CSD is the first unsupervised skill discovery method that demonstrates diverse object manipulation skills in the Fetch environment without any external supervision or manual feature engineering (e.g., limiting the focus only to the object).
Preliminaries
Unsupervised skill discovery aims at finding a potentially useful set of skills without external rewards. Formally, we consider a reward-free Markov decision process (MDP) defined as M = (S, A, µ, p), where S and A are the state and action spaces, respectively, µ ∈ P(S) is the initial state distribution, and p : S × A → P(S) is the transition dynamics function. Each skill is defined as a skill latent vector z ∈ Z and a skill-conditioned policy π(a|s, z) that is shared across the skills. The skill space Z can consist of either discrete skills ({1, 2, ..., D}) or continuous skills (ℝ^D).
To collect a skill trajectory (behavior), we sample a skill z from a predefined skill prior distribution p(z) at the beginning of an episode. We then roll out the skill policy π(a|s, z) with the sampled z for the entire episode. For the skill prior, we use a standard normal distribution for continuous skills and a uniform distribution for discrete skills.
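Concretely, the skill-sampling convention above can be sketched as follows; env and policy are hypothetical placeholder interfaces, and this is an illustrative sketch rather than code from the paper:

```python
import numpy as np

def rollout_skill(env, policy, skill_dim, discrete, horizon=200):
    # Sample a skill z from the prior p(z): uniform over {0, ..., D-1} for
    # discrete skills, standard normal for continuous skills.
    if discrete:
        z = np.random.randint(skill_dim)
    else:
        z = np.random.randn(skill_dim)
    s = env.reset()
    trajectory = [s]
    for _ in range(horizon):
        a = policy(s, z)                 # skill-conditioned policy pi(a | s, z)
        s, _, done, _ = env.step(a)      # reward-free MDP: the reward is ignored
        trajectory.append(s)
        if done:
            break
    return z, trajectory
```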
Throughout the paper, I(·; ·) denotes the mutual information and H(·) denotes either the Shannon entropy or differential entropy depending on the context. We use uppercase letters for random variables and lowercase letters for their values (e.g., S denotes the random variable for states s).
Related Work
In this section, we mainly discuss closely related prior unsupervised skill discovery work based on mutual information maximization or Euclidean distance maximization. A more extensive literature survey on unsupervised skill discovery and unsupervised RL can be found in Appendix A.
Mutual Information-Based Skill Discovery
Mutual information-based unsupervised skill discovery maximizes the mutual information (MI) between states S and skills Z, I(S; Z), which associates different states with different skill latent vectors so that the behaviors from different zs are diverse and distinguishable. Since computing exact MI is intractable, previous MI-based methods approximate MI in diverse ways, which can be categorized into reverse-MI and forward-MI (Campos Camúñez et al., 2020).
First, reverse-MI approaches (Gregor et al., 2016;Eysenbach et al., 2019;Achiam et al., 2018;Hansen et al., 2020) optimize MI in the form of I(S; Z) = H(Z) − H(Z|S), where H(Z) is a constant as we assume that the skill prior distribution p(z) is fixed. Thus, maximizing I(S; Z) corresponds to minimizing H(Z|S), which can be approximated with a variational distribution q θ (z|s). For instance, DIAYN (Eysenbach et al., 2019) maximizes the variational lower bound of MI as follows:
$$I(S;Z) = -H(Z|S) + H(Z) \quad (1)$$
$$= \mathbb{E}_{z,s}[\log p(z|s)] - \mathbb{E}_{z}[\log p(z)] \quad (2)$$
$$\ge \mathbb{E}_{z,s}[\log q_\theta(z|s)] + \text{(const)}, \quad (3)$$
where q_θ(z|s) is a variational approximation of p(z|s) (Barber & Agakov, 2003). Intuitively, q_θ(z|s) works as a 'skill discriminator' that tries to infer the original skill z from the state s, encouraging the skill policy to generate distinguishable skill trajectories for different zs (i.e., diverse skills).
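In practice, the bound in Eq. (3) is optimized by using the discriminator log-likelihood as a per-transition intrinsic reward, r = log q_θ(z|s) − log p(z). A minimal PyTorch-style sketch for discrete skills; the discriminator network q_theta and its training loop are assumed to exist elsewhere, and this is an illustration rather than any official implementation:

```python
import torch
import torch.nn.functional as F

def diayn_reward(q_theta, s_next, z_index, num_skills):
    # r = log q_theta(z | s') - log p(z), with a uniform prior p(z) = 1/D.
    logits = q_theta(s_next)                           # shape: (batch, num_skills)
    log_q = F.log_softmax(logits, dim=-1)
    log_q_z = log_q.gather(1, z_index.unsqueeze(1)).squeeze(1)  # z_index: long tensor
    log_p_z = -torch.log(torch.tensor(float(num_skills)))
    return log_q_z - log_p_z
```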
Other reverse-MI methods optimize the MI objective similarly but computing MI on entire trajectories (Achiam et al., 2018) or only on final states (Gregor et al., 2016) rather than all intermediate states, or using von Mises-Fisher distributions (Hansen et al., 2020) for the skill prior distribution instead of Gaussian or uniform distributions.
On the other hand, forward-MI approaches (Sharma et al., 2020; Campos Camúñez et al., 2020; Liu & Abbeel, 2021a; Laskin et al., 2022) employ the other decomposition of MI: I(S;Z) = H(S) − H(S|Z). This decomposition explicitly maximizes the state entropy H(S), which helps diversify skill trajectories in practice (Laskin et al., 2022). Forward-MI methods minimize the H(S|Z) term with a variational approximation (Sharma et al., 2020; Liu & Abbeel, 2021a; Campos Camúñez et al., 2020) or a contrastive estimator (Laskin et al., 2022). H(S) can be estimated using a particle-based entropy estimator (Liu & Abbeel, 2021a; Laskin et al., 2022), a state marginal matching objective (Lee et al., 2019; Campos Camúñez et al., 2020), or sampling-based approximation (Sharma et al., 2020).
Figure 2. Illustration of unsupervised skill discovery methods: (a) Two skill sets having the same MI; (b) Skill space mappings of LSD and the MI objective; (c) LSD (Euclidean distance); (d) CSD (ours) (Controllability-aware distance). (a) MI is invariant to traveled distances. (b) The MI objective simply seeks any mapping between Z and S, while LSD finds the largest (longest) possible mapping. (c) LSD maximizes the Euclidean traveled distance, which can lead to simple or trivial behaviors. (d) Our CSD maximizes the traveled distance with respect to our learned controllability-aware distance function that assigns larger values to harder-to-achieve state transitions. This leads to more complex skills that can be useful for downstream tasks.

One major limitation of MI-based approaches is that optimizing the MI objective does not necessarily lead to covering a larger region in the state space. This is because MI is invariant to traveled distances or any invertible transformation (Figure 2a), i.e., I(S;Z) = I(f(S);Z) for any invertible f (Kraskov et al., 2004). Since there is no incentive for the MI objective to further explore the state space, they often end up discovering 'static' skills with limited state coverage (Gu et al., 2021; Laskin et al., 2022).
Euclidean Distance-Maximizing Skill Discovery
To resolve this limitation of MI-based skill discovery, Park et al. (2022) recently proposed Lipschitz-constrained Skill Discovery (LSD), which aims to not only establish a mapping between Z and S but also maximize the Euclidean traveled distance in the state space for each skill. Specifically, LSD maximizes the state change along the direction specified by the skill z with the following objective:
J_LSD := E_{z,s,s′}[(ϕ(s′) − ϕ(s))⊤ z]                    (4)
s.t. ∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ ∥x − y∥,                  (5)
where s′ denotes the next state and ϕ : S → ℝ^D denotes a mapping function. LSD maximizes Equation (4) with respect to both the policy and ϕ. Intuitively, this objective aims to align the directions of z and (ϕ(s′) − ϕ(s)) while maximizing the length ∥ϕ(s′) − ϕ(s)∥, which leads to an increase in the state difference ∥s′ − s∥ due to the Lipschitz constraint. As illustrated in Figure 2b, LSD finds the largest possible mapping in the state space by maximizing Euclidean traveled distances in diverse directions, which leads to more 'dynamic' skills. On the other hand, the MI objective finds any mapping between the skill space and the state space, being agnostic to the area of the mapped region, which often results in 'static' skills with limited state coverage.
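For concreteness, a minimal sketch of the LSD reward and constraint, where spectral normalization on each linear layer approximates the 1-Lipschitz property of ϕ (see 'Training of DSD' below); module and function names are our own illustration, not the released LSD code:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_phi(state_dim, skill_dim, hidden=256):
    # Spectral normalization on each linear layer, combined with
    # 1-Lipschitz ReLUs, approximately enforces Equation (5).
    return nn.Sequential(
        spectral_norm(nn.Linear(state_dim, hidden)), nn.ReLU(),
        spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        spectral_norm(nn.Linear(hidden, skill_dim)),
    )

def lsd_reward(phi, s, s_next, z):
    # Equation (4): align phi(s') - phi(s) with the skill vector z.
    return ((phi(s_next) - phi(s)) * z).sum(dim=-1)
```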
While promising, LSD is still limited in that it maximizes Euclidean traveled distances in the state space, which often does not match the behaviors of interest because the Euclidean distance treats all state dimensions equally. For example, in the Fetch environment in Figure 1, simply diversifying the position and joint angles of the robot arm is sufficient to achieve large Euclidean traveled distances, because both the coordinates of the object and the gripper lie in the same Euclidean space (Figure 2c). As such, LSD and previous MI-based approaches mostly end up learning skills that only diversify the agent's own internal states, ignoring the external states (e.g., object pose).
Instead of maximizing the Euclidean distance, we propose to maximize traveled distances with respect to a learned controllability-aware distance function that 'stretches' the axes along hard-to-control states (e.g., objects) and 'contracts' the axes along easy-to-control states (e.g., joint angles), so that maximizing traveled distances results in the discovery of more complex, useful behaviors ( Figure 2d).
Unsupervised Goal-Conditioned RL
Another line of unsupervised RL focuses on discovering a wide range of goals and learning corresponding goal-reaching policies, which leads to diverse learned behaviors (Warde-Farley et al., 2019; Pong et al., 2020; Pitis et al., 2020; Mendonca et al., 2021). In contrast, unsupervised skill discovery, including our approach, (1) focuses on more general behaviors (e.g., running, flipping) that are not limited to goal-reaching skills, which tend to be 'static' (Mendonca et al., 2021; Jiang et al., 2022), and (2) aims to learn a compact set of distinguishable skills embedded in a low-dimensional, possibly discrete skill space, rather than finding all possible states, making it more amenable to hierarchical RL by providing a low-dimensional high-level action space (i.e., the skill space). While these two lines of approaches are not directly comparable, we provide empirical comparisons and further discussion in Appendix C.
Controllability-Aware Skill Discovery
To discover complex, useful skills without extrinsic rewards or domain knowledge, we introduce the notion of controllability¹ to skill discovery: once an agent discovers easy-to-achieve skills, it continuously moves its focus to hard-to-control states and learns more diverse and complex skills. We implement this idea in our Controllability-aware Skill Discovery (CSD) by combining a distance-maximizing skill discovery approach (Section 4.1) with a jointly trained controllability-aware distance function (Section 4.2), which enables the agent to find increasingly complex skills over the course of training (Section 4.3).

¹ The term controllability in this paper describes whether an agent can manipulate hard-to-control states (e.g., external objects); this differs from its usage in control theory (Ogata et al., 2010).
General Distance-Maximizing Skill Discovery
As explained in Section 3.2, Euclidean distance-maximizing skill discovery does not necessarily maximize distances along hard-to-control states (i.e., hard-to-achieve skills). To discover more challenging skills, we propose to learn a skill policy with respect to a jointly learned controllability-aware distance function.
To this end, we first present a general Distance-maximizing Skill Discovery approach (DSD) that can be combined with any arbitrary distance function d(·, ·) : S × S → ℝ⁺₀. Specifically, we generalize Euclidean distance-maximizing skill discovery by replacing ∥x − y∥ in Equation (5) with d(x, y) as follows:
J_DSD := E_{z,s,s′}[(ϕ(s′) − ϕ(s))⊤ z]                    (6)
s.t. ∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ d(x, y),                  (7)
where ϕ(·) : S → ℝ^D is a function that maps states into a D-dimensional space (of the same dimensionality as the skill space). DSD can discover skills that maximize the traveled distance under the given distance function d in diverse directions by (1) aligning the directions of z and (ϕ(s′) − ϕ(s)) and (2) maximizing the length ∥ϕ(s′) − ϕ(s)∥, which also increases d(s, s′) due to the constraint in Equation (7). Here, LSD can be viewed as a special case of DSD with d(x, y) = ∥x − y∥.
When dealing with a learned distance function d, it is generally not straightforward to ensure that d is a valid distance (pseudo-)metric, which must satisfy symmetry and the triangle inequality. However, DSD has the nice property that d in Equation (7) does not have to be a valid metric. This is because DSD implicitly converts the original constraint (Equation (7)) into one with a valid pseudometric d̃. As a result, we can use any arbitrary non-negative function d for DSD, with its semantics implicitly defined by the induced pseudometric d̃. We summarize our theoretical results as follows; the proofs are in Appendix B.1.
Theorem 4.1. Given any non-negative function d : S × S → ℝ⁺₀, there exists a valid pseudometric d̃ : S × S → ℝ⁺₀ that satisfies the following properties:

1. Imposing Equation (7) with d is equivalent to imposing Equation (7) with d̃, i.e.,

   ∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ d(x, y)        (8)
   ⟺ ∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ d̃(x, y).    (9)

2. d̃ is a valid pseudometric.

3. d̃ is a lower bound of d, i.e., ∀x, y ∈ S, 0 ≤ d̃(x, y) ≤ d(x, y).    (10)
Training of DSD. While LSD implements the Lipschitz constraint in Equation (5) using spectral normalization (Miyato et al., 2018), similarly imposing DSD's constraint in Equation (7) is not straightforward because it is no longer a Euclidean Lipschitz constraint. Hence, we optimize our objective with dual gradient descent (Boyd et al., 2004): i.e., with a Lagrange multiplier λ ≥ 0, we use the following dual objectives to train DSD:
r_DSD := (ϕ(s′) − ϕ(s))⊤ z,                                                  (11)
J_{DSD,ϕ} := E[(ϕ(s′) − ϕ(s))⊤ z + λ · min(ϵ, d(x, y) − ∥ϕ(x) − ϕ(y)∥)],     (12)
J_{DSD,λ} := −λ · E[min(ϵ, d(x, y) − ∥ϕ(x) − ϕ(y)∥)],                        (13)
where r_DSD is the intrinsic reward for the policy, and J_{DSD,ϕ} and J_{DSD,λ} are the objectives for ϕ and λ, respectively. x and y are sampled from some state pair distribution p_cst(x, y) that imposes the constraint in Equation (7). ϵ > 0 is a slack variable that prevents the gradient of λ from always being non-negative. With these objectives, we can train DSD by optimizing the policy with Equation (11) as an intrinsic reward while updating the other components with Equations (12) and (13).
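A minimal sketch of one such dual update, implementing Equations (11)-(13); we reuse the transition batch as the constraint pairs (x, y), parameterize λ through its logarithm to keep it non-negative, and all names are ours rather than from the released code:

```python
import torch

def dsd_update(phi, d, log_lam, phi_opt, lam_opt, s, s_next, z, eps=1e-3):
    """One dual gradient descent step on Equations (11)-(13).

    Uses the transition batch (s, s_next) both for the reward term and as
    the constraint pairs (x, y); d is any non-negative distance function.
    """
    # min(eps, d(x, y) - ||phi(x) - phi(y)||), shared by Equations (12)/(13)
    slack = (d(s, s_next) - (phi(s) - phi(s_next)).norm(dim=-1)).clamp(max=eps)

    # Equation (12): ascend on the reward plus the lambda-weighted slack
    lam = log_lam.exp().detach()
    phi_loss = -(((phi(s_next) - phi(s)) * z).sum(-1) + lam * slack).mean()
    phi_opt.zero_grad(); phi_loss.backward(); phi_opt.step()

    # Equation (13): raise lambda when the constraint is violated (slack < 0)
    lam_loss = log_lam.exp() * slack.detach().mean()
    lam_opt.zero_grad(); lam_loss.backward(); lam_opt.step()

    # Equation (11): intrinsic reward handed to the RL algorithm
    with torch.no_grad():
        return ((phi(s_next) - phi(s)) * z).sum(-1)
```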
Controllability-Aware Distance Function
To guide distance-maximizing skill discovery toward more challenging skills, the distance function d must assign large values to state transitions that are hard to achieve with the current skills and small values to easy-to-achieve transitions. d also needs to adapt to the current skill policy so that the agent continuously acquires new skills and finds increasingly difficult state transitions over the course of training.
Among many potential distance functions, we choose the negative log-likelihood of a transition under the current skill policy, −log p(s′|s), as the controllability-aware distance function in this paper. Accordingly, we define the degree to which a transition is "hard-to-achieve" as −log p(s′|s) with respect to the current skill policy's transition distribution. This suits our desiderata since (1) it assigns high values to rare transitions (i.e., low p(s′|s)) while assigning small values to frequently visited transitions (i.e., high p(s′|s)); (2) p(s′|s) can be approximated by training a density model q_θ(s′|s) from policy rollouts; and (3) the density model q_θ(s′|s) continuously adjusts to the current skill policy because it is trained jointly with the skill policy. Here, while it is also possible to employ multi-step transitions p(s_{t+k}|s_t) for the distance function, we stick to the single-step version for simplicity. We note that even though we employ single-step log-likelihoods, DSD maximizes the sum of rewards,
Σ_{t=0}^{T−1} (ϕ(s_{t+1}) − ϕ(s_t))⊤ z = (ϕ(s_T) − ϕ(s_0))⊤ z

for the trajectory (s_0, a_0, s_1, . . . , s_T), which maximizes the traveled distance of the whole trajectory while maintaining the directional alignment with z.
Controllability-Aware Skill Discovery
Now, we introduce Controllability-aware Skill Discovery (CSD), a distance-maximizing skill discovery method with our controllability-aware distance function. With the distance function from Section 4.2, we can rewrite the constraint of DSD in Equation (7) as follows:
∀s, s′ ∈ S, ∥ϕ(s) − ϕ(s′)∥ ≤ d_CSD(s, s′),                        (14)
d_CSD(s, s′) ≜ (s′ − µ_θ(s))⊤ Σ_θ⁻¹(s) (s′ − µ_θ(s))              (15)
            ∝ −log q_θ(s′|s) + (const),                           (16)
where the density model is parameterized as q_θ(s′|s) = N(µ_θ(s), Σ_θ(s)) and is jointly trained using (s, s′) tuples collected by the skill policy. We also use this same (s, s′) distribution from the skill policy as the dual constraint distribution p_cst(x, y) introduced in Section 4.1. Here, we note that d_CSD(·, ·) is not necessarily a valid distance metric; however, we can still use it for the constraint in Equation (7) according to Theorem 4.1, because the constraint automatically transforms d_CSD into its induced valid pseudometric d̃_CSD. Further discussion of the implications and limitations can be found in Appendix B.2.
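A minimal sketch of this distance under the Gaussian parameterization above; we assume a diagonal covariance for simplicity, and mu_net / log_std_net are our stand-ins for µ_θ and Σ_θ:

```python
import torch

def d_csd(mu_net, log_std_net, s, s_next):
    """Controllability-aware distance, Equation (15).

    Squared Mahalanobis distance of s' under q_theta(s'|s) = N(mu, Sigma),
    equal to -log q_theta(s'|s) up to a constant (Equation (16)).
    """
    mu = mu_net(s)
    var = (2 * log_std_net(s)).exp()           # diagonal of Sigma_theta(s)
    return ((s_next - mu) ** 2 / var).sum(dim=-1)

def density_model_loss(mu_net, log_std_net, s, s_next):
    # Gaussian negative log-likelihood for fitting q_theta(s'|s) to rollouts
    dist = torch.distributions.Normal(mu_net(s), log_std_net(s).exp())
    return -dist.log_prob(s_next).sum(dim=-1).mean()
```

Transitions that the current policy produces often get small distances, while rare transitions get large ones, which is exactly the adaptive behavior described above.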
CSD has several main advantages. First, the agent actively seeks rare state transitions and thus acquires increasingly complex skills over the course of training, which makes the discovered skills more useful for downstream tasks. In contrast, LSD and previous MI-based approaches only maximize Euclidean distances or are even agnostic to traveled distances, which often leads to simple or static behaviors. Second, unlike LSD, the optimal behaviors of CSD are agnostic to the semantics and scales of the individual state dimensions; thus, CSD does not require domain knowledge about the state space. Instead, the objective of CSD depends only on the difficulty or sparsity of state transitions. Finally, unlike curiosity- or disagreement-based exploration methods that only seek unseen transitions (Pathak et al., 2017; Mendonca et al., 2021), CSD finds a balance between covering unseen transitions and learning maximally different skills across zs via directional alignment, which leads to diverse yet consistent skills.
Algorithm 1 Controllability-aware Skill Discovery (CSD)
1: Initialize skill policy π(a|s, z), function ϕ(s), conditional density model q_θ(s′|s), Lagrange multiplier λ
2: for i ← 1 to (# epochs) do
3:   Sample skills z ∼ p(z) and collect rollouts with π(a|s, z)
4:   Update the density model q_θ(s′|s) on the collected transitions
5:   Update π with SAC using the intrinsic reward in Equation (11) with d = d_CSD
6:   Update ϕ and λ with Equations (12) and (13)
7: end for
Training of CSD. We train the skill policy π(a|s, z) with Soft Actor-Critic (SAC) (Haarnoja et al., 2018b), using Equation (11) as the intrinsic reward. We train the other components with stochastic gradient descent. We summarize the training procedure of CSD in Algorithm 1 and provide the full implementation details in Appendix E.
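Putting the pieces together, one possible high-level structure of the training loop is sketched below; the helper callables (skill sampling, rollout collection, density-model fitting, and the SAC update) are injected as parameters and stand in for standard components, and the sketch reuses d_csd and dsd_update from the earlier snippets:

```python
def train_csd(env, policy, phi, density_model, log_lam, opts,
              sample_skill, collect_rollouts, fit_density_model, sac_update,
              num_epochs):
    """One possible structure for Algorithm 1; all helpers are injected."""
    for _ in range(num_epochs):
        # 1. Collect skill-conditioned rollouts: batched (s, s', z) transitions
        z = sample_skill()
        s, s_next, z_batch, episodes = collect_rollouts(env, policy, z)

        # 2. Fit q_theta(s'|s) to the fresh transitions (Section 4.2)
        fit_density_model(density_model, s, s_next)

        # 3. DSD dual update with d = d_CSD, which also yields intrinsic rewards
        d = lambda x, y: d_csd(density_model.mu_net, density_model.log_std_net, x, y)
        rewards = dsd_update(phi, d, log_lam, opts['phi'], opts['lam'],
                             s, s_next, z_batch)

        # 4. SAC update of the skill policy on the intrinsic rewards
        sac_update(policy, episodes, rewards)
```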
Experiments
The goal of our experiments is to verify whether our controllability-aware skill discovery method can learn complex, useful skills without supervision in a variety of environments. We test CSD on six environments across three different domains: three Fetch manipulation environments (FetchPush, FetchSlide, and FetchPickAndPlace) (Plappert et al., 2018), Kitchen (Gupta et al., 2019), and two MuJoCo locomotion environments (Ant and HalfCheetah) (Todorov et al., 2012; Brockman et al., 2016). We mainly compare CSD with three state-of-the-art unsupervised skill discovery methods: LSD (Park et al., 2022), DIAYN (Eysenbach et al., 2019), and DADS (Sharma et al., 2020). They respectively fall into the categories of Euclidean distance-maximizing skill discovery, reverse-MI, and forward-MI (Section 3). We also compare with disagreement-based exploration used in unsupervised goal-conditioned RL, such as LEXA (Mendonca et al., 2021), in Appendix C. We evaluate state coverage and performance on downstream tasks to assess the diversity and usefulness of the skills learned by each method. For our quantitative experiments, we use 8 random seeds and present 95% confidence intervals using error bars or shaded areas. We refer to our project page for videos.

Figure 4. Comparison of the object state coverage and downstream task performances of skill discovery methods in three Fetch manipulation environments. Only CSD learns to manipulate the object without external supervision, while the other methods mainly focus on controlling the internal states (Figure 16) because there is little incentive for them to discover more 'challenging' skills.

Figure 5. Comparison of the downstream task performances of skill discovery methods with the oracle prior, which restricts the input to the skill discriminators to the object xyz coordinates.
Fetch Manipulation
We first show (1) whether CSD can acquire object manipulation skills without any supervision, (2) how useful the learned skills are for the downstream tasks, and (3) which component allows CSD to learn complex skills in the Fetch manipulation environments (Plappert et al., 2018). Each Fetch environment consists of a robot arm and an object but has a unique configuration; e.g., FetchSlide has a slippery table and FetchPickAndPlace has a two-fingered gripper.
We train CSD, LSD, DIAYN, and DADS on the three Fetch environments for 80K episodes with 2-D continuous skills (FetchPush, FetchSlide) or 3-D continuous skills (FetchPickAndPlace). Note that we do not leverage human prior knowledge on the state space (e.g., object pose); thus, all methods are trained on the full state in this experiment.²

² We note that the Fetch experiments in the LSD paper use the 'oracle' prior, which forces the agent to focus only on the state change of the object.

Figure 6. Ablation study of distance-maximizing skill discovery in three Fetch environments. This suggests that CSD's performance cannot be achieved by just applying simple tricks to the previous Euclidean distance-maximizing skill discovery method.

Figure 3 illustrates the object trajectories of continuous skills learned by the skill discovery methods in the absence of any supervision. CSD successfully learns to move the object in diverse directions without external supervision. On the other hand, all of the previous methods fail to learn such skills and instead focus on diversifying the joint angles of the robot arm itself. This is because the previous methods have no incentive to focus on challenging skills such as object manipulation, while CSD explicitly seeks hard-to-achieve state transitions.
Following the setup in Park et al. (2022), we evaluate two quantitative metrics: the object state coverage and the goal-reaching downstream task performance. Figure 4a compares the four skill discovery methods in terms of the object state coverage, measured by the number of 0.1 × 0.1 square bins occupied by the object at least once, in the three Fetch environments. Figure 4b shows the comparison of the goal-reaching downstream task performances, where we train a hierarchical controller π_h(z|s, g) that sequentially combines skills z of the frozen skill policy π(a|s, z) to move the object to a goal position g. We additionally train a vanilla SAC baseline to verify the effectiveness of leveraging autonomously discovered skills. We refer to Appendix E.2 for further details. On both quantitative metrics, CSD outperforms the prior methods by large margins, successfully discovering diverse manipulation skills that are useful for solving downstream tasks.
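For reference, this bin-based coverage metric can be computed as in the following small sketch (our own helper, not from the paper's evaluation code):

```python
import numpy as np

def state_coverage(xy_positions, bin_size=0.1):
    """Number of bin_size x bin_size bins visited at least once."""
    bins = np.floor(np.asarray(xy_positions) / bin_size).astype(int)
    return len({tuple(b) for b in bins})
```

With bin_size = 1.0, the same helper matches the 1 × 1 bins used for the locomotion experiments below.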
Skill discovery with the oracle prior on the state space.
While our experiments show that our approach can discover useful manipulation skills without any human prior on the state space, previous unsupervised skill discovery methods (Eysenbach et al., 2019; Sharma et al., 2020; Park et al., 2022) mostly do not work without the oracle state prior, which restricts the skill discriminator module's input to only the xyz coordinates of the object. To investigate how CSD and the prior methods perform in the presence of this supervision, we train them with the oracle state prior. Figure 5 demonstrates that even without the oracle state prior, our CSD is mostly comparable to the previous best method with the oracle prior. This result demonstrates the potential of our approach to scale to more complex environments, where human priors are no longer available. Moreover, with the oracle state prior, CSD further improves its performance. We refer to Figure 17 for the full qualitative results of CSD and LSD with the oracle prior in FetchPickAndPlace.
Ablation study. To understand the importance of our controllability-aware distance function in CSD, we examine whether similar results can be achieved without some components of CSD or by applying simple tricks to LSD, the previous Euclidean distance-maximizing skill discovery method. Specifically, we consider the following three variants: (1) LSD + preset: LSD with a state space normalized by the precomputed standard deviation of each state dimension from randomly generated trajectories; (2) LSD + norm: LSD with a state space normalized by the moving average of the standard deviation of state differences (s′ − s); and (3) LSD + dual: LSD trained with dual gradient descent instead of spectral normalization (i.e., CSD without our learned distance function). Figure 6 compares the performances of these variants with CSD, LSD, and SAC in three downstream tasks. The results show that only CSD learns to manipulate objects, which suggests that our controllability-aware distance function is indeed necessary to discover such complex skills without supervision.
Kitchen Manipulation
To verify the scalability of unsupervised skill discovery to a complex environment with diverse objects, we evaluate our method on the Kitchen manipulation environment (Gupta et al., 2019), which includes 13 downstream tasks in total, such as opening a microwave, turning a light switch, moving a kettle, and opening slide/hinge cabinet doors (Figure 8a). We train CSD, LSD, DIAYN, and DADS with both 2-D continuous skills and 16 discrete skills for 40K episodes without any supervision. We refer to Appendix E for further experimental details regarding the Kitchen environment. We first measure the task success rates of the skills learned by the four methods. After the unsupervised skill training, we roll out the skill policy to collect 50 trajectories with 50 randomly sampled zs and check whether each of the 13 tasks has at least one successful trajectory. The results with 16 discrete skills in Figure 7 suggest that CSD learns on average 10 out of 13 skills, while the prior methods fail to discover such skills (2 for LSD, 4 for DIAYN, 0 for DADS) because they mainly focus on diversifying the robot's own state. The continuous-skill results in Figure 14 are similar.
We then evaluate the downstream task performance by training a high-level controller π_h(z|s, g) that uses the learned 2-D continuous skills π(a|s, z) as behavioral primitives to achieve a task specified by a 13-D one-hot vector g. The high-level controller chooses a skill z every 10 steps until the episode ends. The results in Figure 8b show that CSD significantly outperforms the previous methods.
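Conceptually, the hierarchical evaluation rollout looks like the following sketch, where the high-level controller picks a skill from the goal-augmented state every skill_horizon steps; we assume a classic Gym-style step API, and all names are illustrative:

```python
def hierarchical_rollout(env, high_policy, skill_policy, goal, skill_horizon=10):
    """Roll out pi_h(z|s, g) over the frozen skills pi(a|s, z)."""
    s, done, total_reward = env.reset(), False, 0.0
    while not done:
        z = high_policy(s, goal)          # pick a new skill every skill_horizon steps
        for _ in range(skill_horizon):
            a = skill_policy(s, z)        # frozen low-level skill policy
            s, r, done, _ = env.step(a)
            total_reward += r
            if done:
                break
    return total_reward
```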
Qualitative analysis. Figure 9 illustrates how our controllability-aware distance evolves over time and how this leads to the discovery of diverse, complex skills, e.g., SlideCabinet, KettleLift, and Microwave. Over training, we measure the task-related controllability-aware distance v⊤ Σ_θ⁻¹(s) v for each task using skill trajectories, where v is the one-hot task vector corresponding to each of the three tasks. At around 4K episodes (Figure 9a), our controllability-aware distance encourages the agent to control the sliding cabinet with a large distance value (i.e., a high reward). Once the agent learns to manipulate the sliding cabinet door, our controllability-aware distance for that skill decreases, letting the agent move its focus to other harder-to-achieve skills, e.g., lifting the kettle (Figure 9b) or opening the microwave (Figure 9c). As a result, the number of successful tasks gradually increases over the course of training.

Figure 11. Comparison of the state coverage and downstream task performance of skill discovery methods in Ant and HalfCheetah.
MuJoCo Locomotion
To assess whether the idea of controllability-aware skill discovery works in domains other than manipulation, we evaluate CSD mainly on two MuJoCo locomotion environments (Todorov et al., 2012; Brockman et al., 2016): Ant and HalfCheetah. We additionally employ the 17-DoF Humanoid, the most complex environment in the benchmark, for a qualitative comparison between CSD and LSD. In these environments, we train skill discovery methods for 200K episodes (100K for Humanoid) with 16 discrete skills. Figure 10 shows examples of skills discovered by each method, which suggests that CSD leads to the largest state coverage thanks to our controllability-aware distance function. For quantitative evaluation, we first measure the state space coverage by counting the number of 1 × 1 bins occupied by the agent's xy coordinates (xz coordinates for the 2-D HalfCheetah) at least once. Figure 11a demonstrates that CSD covers the largest area among the four methods. This is because CSD's controllability objective makes the agent mainly focus on diversifying its global position, which corresponds to the 'challenging' state transitions in these locomotion environments. We emphasize that CSD not only learns to navigate in diverse directions but also learns a variety of behaviors, such as rotating and flipping, in both environments (videos). We also note that MI-based methods (DIAYN and DADS) completely fail to diversify the agent's location and only discover posing skills, because the MI objective is agnostic to the distance metric and provides no incentive to maximize traveled distances in the state space.
We also evaluate the downstream learning performance on four tasks: AntGoal, AntMultiGoals, HalfCheetahGoal, and HalfCheetahHurdle, following previous works (Eysenbach et al., 2019; Sharma et al., 2020; Kim et al., 2021; Park et al., 2022). In AntGoal and HalfCheetahGoal, the agent should reach a randomly sampled goal position, and in AntMultiGoals, the agent should follow multiple randomly sampled goals in sequence. In HalfCheetahHurdle (Qureshi et al., 2020), the agent should jump over as many hurdles as possible. Given downstream task rewards, we train a high-level policy that sequentially combines the learned skills. In Figure 11b, CSD consistently demonstrates the best performance among the four methods, which suggests that the skills discovered by CSD are effective not just for locomotion tasks but for a wide variety of tasks, such as hurdle jumping.
Conclusion
In this paper, we present Controllability-aware Skill Discovery (CSD), a novel unsupervised skill discovery method that explicitly looks for hard-to-achieve skills. Specifically, we first formulate a distance-maximizing skill discovery approach (DSD), which can be combined with any arbitrary distance function. We then propose a jointly trained controllability-aware distance function, which consistently encourages the agent to discover more complex, hard-to-achieve skills. We empirically show that the idea of controllability-awareness enables the agent to acquire diverse complex skills in the absence of supervision in a variety of robotic manipulation and locomotion environments.
Limitations and future directions. The general idea of controllability-aware skill discovery should, in principle, extend to pixel domains, e.g., in combination with representation learning techniques (Hafner et al., 2020; Srinivas et al., 2020; Seo et al., 2022) that reveal both object and agent representations on which CSD can focus. However, we did not verify the scalability of our controllability-aware distance function to pixel-based environments and leave this as future work. Another limitation is that CSD in its current form might not discover 'slowly moving' skills because the underlying DSD prefers skills with large state variations. We believe acquiring skills with diverse moving speeds is another interesting future direction.
References

Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., and Levine, S. Soft actor-critic algorithms and applications. ArXiv, abs/1812.05905, 2018b.
Hafner, D., Lillicrap, T. P., Ba, J., and Norouzi, M. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations (ICLR), 2020.

Rajeswar, S., Mazzaglia, P., Verbelen, T., Piché, A., Dhoedt, B., Courville, A. C., and Lacoste, A. Unsupervised model-based pre-training for data-efficient control from pixels. ArXiv, abs/2209.12016, 2022.
A. Extended Related Work on Unsupervised RL
The goal of unsupervised RL is to learn useful knowledge, such as dynamics models, state representations, and behavioral primitives, without predefined tasks so that we can later utilize them to efficiently solve downstream tasks. One line of research focuses on gathering knowledge of the environment with pure exploration (Pathak et al., 2017;Burda et al., 2019;Pathak et al., 2019;Sekar et al., 2020;Liu & Abbeel, 2021b;Yarats et al., 2021;Rajeswar et al., 2022). Unsupervised skill discovery methods (Gregor et al., 2016;Co-Reyes et al., 2018;Eysenbach et al., 2019;Sharma et al., 2020;Kim et al., 2021;Kamienny et al., 2022;Strouse et al., 2022;Shafiullah & Pinto, 2022;Jiang et al., 2022;Zhao et al., 2022) aim to learn a set of temporally extended useful behaviors, and our CSD falls into this category. Another line of work focuses on discovering goals and corresponding goal-conditioned policies via pure exploration (Warde-Farley et al., 2019;Pong et al., 2020;Pitis et al., 2020;Mendonca et al., 2021) or asymmetric/curriculum self-play (Sukhbaatar et al., 2018;OpenAI et al., 2021;Du et al., 2022). Lastly, Touati & Ollivier (2021); Touati et al. (2022) aim to learn a set of policies that can be instantly adapted to task reward functions given an unsupervised exploration method or an offline dataset.
B. Theoretical Results
B.1. Proof of Theorem 4.1
We assume that we are given an arbitrary non-negative function d : S × S → ℝ⁺₀. We first introduce some additional notation. For x, y ∈ S, define d_s(x, y) ≜ min(d(x, y), d(y, x)). For x, y ∈ S, let P(x, y) be the set of all finite state paths from x to y. For a state path p = (s_0, s_1, . . . , s_t), define D_s(p) ≜ Σ_{i=0}^{t−1} d_s(s_i, s_{i+1}). Now, for x, y ∈ S, we define the induced pseudometric d̃ : S × S → ℝ⁺₀ as follows:
d̃(x, y) ≜ inf_{p∈P(x,y)} D_s(p) if x ≠ y, and d̃(x, y) ≜ 0 if x = y.        (17)
Then, the following theorems hold.
Lemma B.1. d̃ is a lower bound of d, i.e., ∀x, y ∈ S, 0 ≤ d̃(x, y) ≤ d(x, y).
Proof. If x = y, then d̃(x, y) = 0 by definition, and thus 0 ≤ d̃(x, y) ≤ d(x, y) always holds. Otherwise, 0 ≤ d̃(x, y) ≤ D_s((x, y)) = d_s(x, y) ≤ d(x, y) holds, and this completes the proof.
Theorem B.2. For ϕ : S → ℝ^D, imposing Equation (7) with d is equivalent to imposing Equation (7) with d̃, i.e.,

∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ d(x, y) ⟺ ∀x, y ∈ S, ∥ϕ(x) − ϕ(y)∥ ≤ d̃(x, y).
Proof. From Lemma B.1, we know that ∥ϕ(x) − ϕ(y)∥ ≤ d̃(x, y) implies ∥ϕ(x) − ϕ(y)∥ ≤ d(x, y). Now, we assume that ∥ϕ(x) − ϕ(y)∥ ≤ d(x, y) holds for any x, y ∈ S. First, if x = y, then ∥ϕ(x) − ϕ(y)∥ = 0, and thus ∥ϕ(x) − ϕ(y)∥ ≤ d̃(x, y) always holds. For x ≠ y, let us consider any state path p = (s_0 = x, s_1, s_2, . . . , s_{t−1}, s_t = y) ∈ P(x, y). For any i ∈ {0, 1, . . . , t − 1}, we have
∥ϕ(s_i) − ϕ(s_{i+1})∥ ≤ d(s_i, s_{i+1}),                  (20)
∥ϕ(s_{i+1}) − ϕ(s_i)∥ ≤ d(s_{i+1}, s_i),                  (21)
and thus we get ∥ϕ(s_i) − ϕ(s_{i+1})∥ = ∥ϕ(s_{i+1}) − ϕ(s_i)∥ ≤ min(d(s_i, s_{i+1}), d(s_{i+1}, s_i)) = d_s(s_i, s_{i+1}). Now, we have the following inequalities:
∥ϕ(s_0) − ϕ(s_1)∥ ≤ d_s(s_0, s_1),                        (22)
∥ϕ(s_1) − ϕ(s_2)∥ ≤ d_s(s_1, s_2),                        (23)
. . . ,                                                    (24)
∥ϕ(s_{t−1}) − ϕ(s_t)∥ ≤ d_s(s_{t−1}, s_t).                (25)
From these, we obtain ∥ϕ(x) − ϕ(y)∥ = ∥ϕ(s_0) − ϕ(s_t)∥ ≤ Σ_{i=0}^{t−1} ∥ϕ(s_i) − ϕ(s_{i+1})∥ ≤ Σ_{i=0}^{t−1} d_s(s_i, s_{i+1}) = D_s(p). Then, by taking the infimum of the right-hand side over all possible p ∈ P(x, y), we get ∥ϕ(x) − ϕ(y)∥ ≤ inf_{p∈P(x,y)} D_s(p) = d̃(x, y), and this completes the proof.

Theorem B.3. d̃ is a valid pseudometric, i.e.,

(a) ∀x ∈ S, d̃(x, x) = 0.
(b) (Symmetry) ∀x, y ∈ S, d̃(x, y) = d̃(y, x).
(c) (Triangle inequality) ∀x, y, z ∈ S, d̃(x, y) ≤ d̃(x, z) + d̃(z, y).
Proof. (a) By definition, d̃(x, x) = 0 always holds for all x ∈ S.
(b) If x = y, then d̃(x, y) = d̃(y, x) = 0. Otherwise, with p = (s_0 = x, s_1, s_2, . . . , s_{t−1}, s_t = y) ∈ P(x, y), we can prove the symmetry of d̃ as follows:

d̃(x, y) = inf_{p∈P(x,y)} D_s(p)                            (26)
        = inf_{p∈P(x,y)} Σ_{i=0}^{t−1} d_s(s_i, s_{i+1})    (27)
        = inf_{p∈P(x,y)} Σ_{i=0}^{t−1} d_s(s_{i+1}, s_i)    (28)
        = inf_{p∈P(y,x)} D_s(p)                             (29)
        = d̃(y, x).                                          (30)
(c) If x = y, y = z, or z = x, then it is easy to see that d̃(x, y) ≤ d̃(x, z) + d̃(z, y) always holds. Hence, we assume that they are mutually different from each other. Then, the following inequality holds:

d̃(x, y) = inf_{p∈P(x,y)} D_s(p)                                     (31)
        ≤ inf_{p₁∈P(x,z), p₂∈P(z,y)} [D_s(p₁) + D_s(p₂)]             (32)
        = inf_{p₁∈P(x,z)} D_s(p₁) + inf_{p₂∈P(z,y)} D_s(p₂)          (33)
        = d̃(x, z) + d̃(z, y),

which completes the proof.
B.2. Implications of Theorem 4.1
Theorem 4.1 suggests that the constraint in Equation (7) implicitly transforms an arbitrary distance function d into a tighter valid pseudometric d̃. Intuitively, d̃(x, y) corresponds to the minimum possible (symmetrized) path distance from x to y. Hence, if we train DSD with Equation (7), it will find long-distance transitions that cannot be equivalently achieved by chaining multiple short-distance transitions. In the context of CSD (Section 4.2), this implies that the agent will find rare state transitions that cannot be bypassed by taking 'easy' intermediate steps, which is a desirable property.
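On a finite state set, the induced pseudometric of Equation (17) reduces to an all-pairs shortest-path computation over the symmetrized one-step costs, as in this small numpy sketch (our own illustration, assuming the path infimum is attained over the finite set):

```python
import numpy as np

def induced_pseudometric(d):
    """d: (n, n) non-negative cost matrix. Returns the induced pseudometric
    d_tilde(x, y) = min over paths of sum_i min(d(s_i, s_{i+1}), d(s_{i+1}, s_i))."""
    d_sym = np.minimum(d, d.T)           # d_s(x, y) = min(d(x, y), d(y, x))
    dt = d_sym.copy()
    np.fill_diagonal(dt, 0.0)            # d_tilde(x, x) = 0
    for k in range(d.shape[0]):          # Floyd-Warshall relaxation over paths
        dt = np.minimum(dt, dt[:, k:k+1] + dt[k:k+1, :])
    return dt
```

Entries of dt that are strictly smaller than d_sym mark exactly the transitions that can be 'shortcut' through intermediate states, i.e., the transitions that the constraint does not reward.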
However, there are some limitations regarding the use of our distance function d_CSD (Equation (16)). First, while the DSD constraint in Equation (7) implicitly symmetrizes the distance function by taking the minimum between d(x, y) and d(y, x), this may not be ideal in highly asymmetric environments involving many irreversible transitions. In practice, this may be resolved by only imposing one-sided constraints of interest. Second, in our implementation, we only consider a single-step transition (s, s′) and a single-step density model q_θ(s′|s), as we found this simple design choice to be sufficient for our experiments. However, in order to fully leverage the aforementioned property of the induced pseudometric, the constraint may be imposed on arbitrary state pairs with a multi-step density model, which we leave for future work.
Figure 13. The agent's xy (Ant) or x (HalfCheetah) trajectories of CSD and disagreement-based exploration. While CSD seeks very consistent, directed behaviors, disagreement-based exploration only focuses on diversifying states with chaotic, random behaviors. We provide videos illustrating this difference on our project page.
C. Comparison with Unsupervised Disagreement-Based Exploration
In this section, we discuss the difference between CSD and unsupervised goal-conditioned RL and present an empirical comparison between them. Unsupervised goal-conditioned RL approaches, such as DISCERN (Warde-Farley et al., 2019), Skew-Fit (Pong et al., 2020), MEGA (Pitis et al., 2020), and LEXA (Mendonca et al., 2021), learn diverse behaviors typically by (1) running an exploration method that collects diverse 'goal' states g and (2) learning a goal-conditioned policy π(a|s, g) to reach the states discovered. Hence, treating g as an |S|-dimensional skill latent vector, these approaches may be viewed as a special type of unsupervised skill discovery.
However, the main focuses of unsupervised skill discovery are different from those of unsupervised goal-conditioned RL. First, unsupervised skill discovery aims to discover more general skills that are not restricted to goal-reaching behaviors, which tend to be static as the agent is encouraged to stay still at the goal state (Mendonca et al., 2021; Jiang et al., 2022). For instance, our approach maximizes traveled distances, which leads to more 'dynamic' behaviors like consistently running in a specific direction (Figure 10). Second, unsupervised skill discovery aims to build a compact set of skills, which may even be discrete, rather than finding all possible states in the given environment. For example, if we train CSD with three discrete skills, these behaviors will be as 'distant' as possible from one another, being maximally distinguishable. As such, we obtain useful behaviors with a much lower-dimensional skill space, making the approach more amenable to hierarchical RL.
Despite the difference in goals, to better illustrate the distinction, we make an empirical comparison between CSD and ensemble disagreement-based exploration (Pathak et al., 2019), which some unsupervised goal-conditioned RL methods like LEXA (Mendonca et al., 2021) use as their exploration method. Disagreement-based exploration learns an ensemble of E forward dynamics models {p_i(s′|s, a)}_{i∈{1,2,...,E}} and uses the ensemble variance summed over the |S| state dimensions, Σ_k V[p_i(·_k|s, a)], as an intrinsic reward, in order to seek unexplored transitions with high epistemic uncertainty. While unsupervised goal-conditioned RL approaches additionally learn a goal-conditioned policy, we do not separately learn one, since the state coverage metrics of the exploration policy serve as an approximate upper bound on the performance of the corresponding optimal goal-conditioned policy. Figure 12 presents the comparison of unsupervised state coverage metrics between CSD and disagreement-based exploration in all of our six environments. The results suggest that CSD mostly outperforms disagreement-based exploration in our state coverage metrics, mainly because CSD actively diversifies hard-to-control states such as the object position or the agent location, while the pure exploration method only focuses on finding unseen transitions. This difference is especially prominent in Ant and HalfCheetah (Figure 13), in which CSD seeks very consistent, directed behaviors, such as moving in one direction, while disagreement-based exploration only focuses on diversifying states with chaotic, random behaviors. We provide videos illustrating this difference at https://seohong.me/projects/csd/.
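For completeness, a minimal sketch of this ensemble-disagreement intrinsic reward, assuming an ensemble of deterministic next-state predictors (variable names are ours):

```python
import torch

def disagreement_reward(ensemble, s, a):
    """Sum over state dimensions of the variance of ensemble predictions.

    ensemble: list of models mapping (s, a) -> predicted next state.
    High variance marks transitions with high epistemic uncertainty.
    """
    preds = torch.stack([m(s, a) for m in ensemble], dim=0)  # (E, batch, |S|)
    return preds.var(dim=0, unbiased=False).sum(dim=-1)      # (batch,)
```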
D. Additional Results
Additional quantitative results. Figure 14 shows the task success rates of the 2-D continuous skills learned by CSD, LSD, DIAYN, and DADS. As in the discrete case, CSD outperforms the other methods by a significant margin. Figure 15 shows extended learning curves in the Fetch and Kitchen downstream tasks, where we train SAC for four times as long as the skill discovery methods. The results suggest that, while SAC alone can eventually solve FetchSlideGoal with many more samples, it fails to learn FetchPushGoal, FetchPickAndPlaceGoal, and Kitchen, mainly because they are challenging sparse-reward tasks. In contrast, agents can quickly learn all of these tasks with temporally extended skills from CSD.
Additional qualitative results. Figures 16 and 19 illustrate the skill trajectories of all runs we use for our experiments in Fetch manipulation and two MuJoCo locomotion environments (eight random seeds for each method in each environment).
In the Fetch environments, CSD is the only method that learns object manipulation skills without supervision (Figure 16). In Ant and HalfCheetah, CSD not only learns locomotion skills but also discovers a variety of diverse skills, such as rotating and flipping (Figure 19, videos). We provide the complete qualitative results in Humanoid in Figure 18. Figure 17 shows the full results of CSD and LSD equipped with the oracle prior in FetchPickAndPlace (eight seeds each). While CSD always learns to pick up the object, LSD discovers such skills in only three out of eight runs (Figure 17). This is because our controllability-aware distance function consistently encourages the agent to learn more challenging picking-up behaviors. As a result, CSD significantly outperforms LSD in downstream tasks (Figure 5).
Figure 17. Complete qualitative results of CSD and LSD trained with the oracle prior in FetchPickAndPlace (eight runs for each method). We plot the skill trajectories of the object and the gripper with different colors. Note that while LSD mostly just throws the object away, CSD learns to pick up the object in all eight runs.

Figure 18. Complete qualitative results in Humanoid (four runs for each method in each environment). We plot the skill xy trajectories of the agent with different colors. We note that we train CSD and LSD for 100K episodes (which is a tenth of the number of episodes used in the LSD paper (Park et al., 2022)).
Figure 3. The object trajectories in the xy plane of 1000 randomly sampled continuous skills learned by CSD, LSD, DIAYN, and DADS in three Fetch manipulation environments without any supervision. Trajectories with different colors represent different skills. Only CSD learns to manipulate the object across all three tasks without supervision, while other methods focus only on moving the robot arm. We refer to Appendix D for the complete qualitative results from all random seeds.
Figure 7. Task success rates of 16 discrete skills discovered by CSD, LSD, DIAYN, and DADS in the Kitchen environment. CSD learns to manipulate diverse objects in the kitchen without any supervision. We refer to Appendix D for the results with 2-D continuous skills.

Figure 8. Comparison of the downstream task performances of skill discovery methods in the Kitchen environment.
Figure 9. Evolution of task-related distances and corresponding task success rates. Our learned task-related distances decrease once the agent gains control of the corresponding objects, which makes the agent focus on other new objects consistently over the course of training. Distance plots are smoothed over a window of size 10 for better visualization.
Figure 10. The agent's xy (Ant and Humanoid) or x (HalfCheetah) trajectories of skills discovered by CSD, LSD, DIAYN, and DADS in MuJoCo locomotion environments. Trajectories with different colors represent different skills. We refer to Appendix D for the complete qualitative results from all random seeds.
Hansen, S., Dabney, W., Barreto, A., Wiele, T., Warde-Farley, D., and Mnih, V. Fast task inference with variational intrinsic successor features. In International Conference on Learning Representations (ICLR), 2020.

Jiang, Z., Gao, J., and Chen, J. Unsupervised skill discovery via recurrent skill training. In Neural Information Processing Systems (NeurIPS), 2022.

Kamienny, P.-A., Tarbouriech, J., Lazaric, A., and Denoyer, L. Direct then diffuse: Incremental unsupervised skill discovery for state covering and goal reaching. In International Conference on Learning Representations (ICLR), 2022.

Kim, J., Park, S., and Kim, G. Unsupervised skill discovery with bottleneck option learning. In International Conference on Machine Learning (ICML), 2021.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

Kraskov, A., Stögbauer, H., and Grassberger, P. Estimating mutual information. Physical Review E, 69(6):066138, 2004.

Laskin, M., Yarats, D., Liu, H., Lee, K., Zhan, A., Lu, K., Cang, C., Pinto, L., and Abbeel, P. URLB: Unsupervised reinforcement learning benchmark. In Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2021.

Laskin, M., Liu, H., Peng, X. B., Yarats, D., Rajeswaran, A., and Abbeel, P. Unsupervised reinforcement learning with contrastive intrinsic control. In Neural Information Processing Systems (NeurIPS), 2022.

Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., and Salakhutdinov, R. Efficient exploration via state marginal matching. ArXiv, abs/1906.05274, 2019.

Liu, H. and Abbeel, P. APS: Active pretraining with successor features. In International Conference on Machine Learning (ICML), 2021a.

Liu, H. and Abbeel, P. Behavior from the void: Unsupervised active pre-training. In Neural Information Processing Systems (NeurIPS), 2021b.

Mendonca, R., Rybkin, O., Daniilidis, K., Hafner, D., and Pathak, D. Discovering and achieving goals via world models. In Neural Information Processing Systems (NeurIPS), 2021.

Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2018.

Ogata, K. et al. Modern control engineering. Prentice Hall, Upper Saddle River, NJ, 2010.

OpenAI, Plappert, M., Sampedro, R., Xu, T., Akkaya, I., Kosaraju, V., Welinder, P., D'Sa, R., Petron, A., de Oliveira Pinto, H. P., Paino, A., Noh, H., Weng, L., Yuan, Q., Chu, C., and Zaremba, W. Asymmetric self-play for automatic goal discovery in robotic manipulation. ArXiv, abs/2101.04882, 2021.

Park, S., Choi, J., Kim, J., Lee, H., and Kim, G. Lipschitz-constrained unsupervised skill discovery. In International Conference on Learning Representations (ICLR), 2022.

Pathak, D., Agrawal, P., Efros, A. A., and Darrell, T. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), 2017.

Pathak, D., Gandhi, D., and Gupta, A. K. Self-supervised exploration via disagreement. In International Conference on Machine Learning (ICML), 2019.

Pitis, S., Chan, H., Zhao, S., Stadie, B. C., and Ba, J. Maximum entropy gain exploration for long horizon multi-goal reinforcement learning. In International Conference on Machine Learning (ICML), 2020.
Plappert, M., Andrychowicz, M., Ray, A., McGrew, B., Baker, B., Powell, G., Schneider, J., Tobin, J., Chociej, M., Welinder, P., Kumar, V., and Zaremba, W. Multi-goal reinforcement learning: Challenging robotics environments and request for research. ArXiv, abs/1802.09464, 2018.

Pong, V. H., Dalal, M., Lin, S., Nair, A., Bahl, S., and Levine, S. Skew-Fit: State-covering self-supervised reinforcement learning. In International Conference on Machine Learning (ICML), 2020.

Qureshi, A. H., Johnson, J. J., Qin, Y., Henderson, T., Boots, B., and Yip, M. C. Composing task-agnostic policies with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2020.
Figure 12. Comparison of unsupervised state coverage metrics between CSD and ensemble disagreement-based exploration (Pathak et al., 2019) in all six environments. CSD mostly outperforms disagreement-based exploration in our state coverage metrics, mainly because it actively diversifies hard-to-control states such as the object position or the agent location.
Figure 14. Task success rates of 2-D continuous skills discovered by four methods in the Kitchen environment.

Figure 15. Extended learning curves of the SAC baseline in Fetch and Kitchen downstream tasks.
Figure 16. Complete qualitative results in three Fetch environments (eight runs for each method in each environment). We plot the skill trajectories of the object and the gripper with different colors. CSD is the only unsupervised skill discovery method that discovers object manipulation skills without supervision.
Figure 19. Complete qualitative results in Ant and HalfCheetah (eight runs for each method in each environment). We plot the skill trajectories of the agent with different colors. We note that in both environments, CSD not only learns locomotion skills but also discovers a variety of diverse skills, such as rotating and flipping.
(Figure 4 plots: (a) object state coverage and (b) downstream task return vs. # episodes for FetchPush/FetchPushGoal, FetchSlide/FetchSlideGoal, and FetchPickAndPlace/FetchPickAndPlaceGoal, comparing CSD, LSD, DIAYN, DADS, and SAC.)
Table 2. Hyperparameters for locomotion environments.

Table 3. Hyperparameters for SAC downstream policies in manipulation environments.

Hyperparameter                Value
# training epochs             4000 (Fetch), 8000 (Kitchen)
# episodes per epoch          16 (Fetch), 2 (Kitchen)
# gradient steps per epoch    4 (Fetch), 10 (Kitchen)
Replay buffer size            10^6
Skill sample frequency R      10
Skill range                   [−1.5, 1.5]^D
Table 4. Hyperparameters for PPO downstream policies in locomotion environments.

Hyperparameter                 Value
Learning rate                  3 × 10^−4
# training epochs              1000
# episodes per epoch           64
# gradient steps per episode   10
Minibatch size                 256
Entropy coefficient            0.01
Skill sample frequency R       25
The original LSD implementation updates the target network every epoch, not every gradient step, but we find the latter to be about 10× more sample-efficient in terms of the number of environment steps.
Acknowledgement
We would like to thank Amber Xie, Younggyo Seo, and Jaekyeom Kim for their insightful feedback and discussion. This work was funded in part by Darpa RACER, Komatsu, a Berkeley Graduate Fellowship, and the BAIR Industrial Consortium. Seohong Park was partly supported by the Korea Foundation for Advanced Studies (KFAS).

High-level controller. After unsupervised skill discovery, we train a high-level controller π_h(z|s, g) that selects skills in a sequential manner for solving downstream tasks. We use SAC (Haarnoja et al., 2018a) for continuous skills and PPO (Schulman et al., 2017) for discrete skills. The high-level policy selects a new skill every R steps. We mostly follow the hyperparameters of the low-level skill policies and present the specific hyperparameters used for high-level controllers in Tables 3 and 4.
Achiam, J., Edwards, H., Amodei, D., and Abbeel, P. Variational option discovery algorithms. ArXiv, abs/1807.10299, 2018.

Adeniji, A., Xie, A., and Abbeel, P. Skill-based reinforcement learning with intrinsic reward matching. ArXiv, abs/2210.07426, 2022.

Barber, D. and Agakov, F. The IM algorithm: a variational approach to information maximization. In Neural Information Processing Systems (NeurIPS), 2003.

Boyd, S., Boyd, S. P., and Vandenberghe, L. Convex optimization. Cambridge University Press, 2004.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI Gym. ArXiv, abs/1606.01540, 2016.

Burda, Y., Edwards, H., Storkey, A. J., and Klimov, O. Exploration by random network distillation. In International Conference on Learning Representations (ICLR), 2019.

Campos Camúñez, V., Trott, A., Xiong, C., Socher, R., Giró Nieto, X., and Torres Viñals, J. Explore, discover and learn: unsupervised discovery of state-covering skills. In International Conference on Machine Learning (ICML), 2020.

Co-Reyes, J. D., Liu, Y., Gupta, A., Eysenbach, B., Abbeel, P., and Levine, S. Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In International Conference on Machine Learning (ICML), 2018.

Du, Y., Abbeel, P., and Grover, A. It takes four to tango: Multiagent selfplay for automatic curriculum generation. In International Conference on Learning Representations (ICLR), 2022.

Eysenbach, B., Gupta, A., Ibarz, J., and Levine, S. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations (ICLR), 2019.

Gregor, K., Rezende, D. J., and Wierstra, D. Variational intrinsic control. ArXiv, abs/1611.07507, 2016.

Gu, S. S., Diaz, M., Freeman, D. C., Furuta, H., Ghasemipour, S. K. S., Raichuk, A., David, B., Frey, E., Coumans, E., and Bachem, O. Braxlines: Fast and interactive toolkit for RL-driven behavior engineering beyond reward maximization. ArXiv, abs/2110.04686, 2021.

Gupta, A., Kumar, V., Lynch, C., Levine, S., and Hausman, K. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. In Conference on Robot Learning (CoRL), 2019.

Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning (ICML), 2018a.

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. ArXiv, abs/1707.06347, 2017.

Sekar, R., Rybkin, O., Daniilidis, K., Abbeel, P., Hafner, D., and Pathak, D. Planning to explore via self-supervised world models. In International Conference on Machine Learning (ICML), 2020.

Seo, Y., Hafner, D., Liu, H., Liu, F., James, S., Lee, K., and Abbeel, P. Masked world models for visual control. In Conference on Robot Learning (CoRL), 2022.

Shafiullah, N. M. M. and Pinto, L. One after another: Learning incremental skills for a changing world. In International Conference on Learning Representations (ICLR), 2022.

Sharma, A., Gu, S., Levine, S., Kumar, V., and Hausman, K. Dynamics-aware unsupervised discovery of skills. In International Conference on Learning Representations (ICLR), 2020.

Srinivas, A., Laskin, M., and Abbeel, P. CURL: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning (ICML), 2020.

Strouse, D., Baumli, K., Warde-Farley, D., Mnih, V., and Hansen, S. S. Learning more skills through optimistic exploration. In International Conference on Learning Representations (ICLR), 2022.

Sukhbaatar, S., Kostrikov, I., Szlam, A. D., and Fergus, R. Intrinsic motivation and automatic curricula via asymmetric self-play. In International Conference on Learning Representations (ICLR), 2018.

Todorov, E., Erez, T., and Tassa, Y. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.

Touati, A. and Ollivier, Y. Learning one representation to optimize all rewards. In Neural Information Processing Systems (NeurIPS), 2021.

Touati, A., Rapin, J., and Ollivier, Y. Does zero-shot reinforcement learning exist? ArXiv, abs/2209.14935, 2022.

Warde-Farley, D., de Wiele, T. V., Kulkarni, T., Ionescu, C., Hansen, S., and Mnih, V. Unsupervised control through non-parametric discriminative rewards. In International Conference on Learning Representations (ICLR), 2019.

Yarats, D., Fergus, R., Lazaric, A., and Pinto, L. Reinforcement learning with prototypical representations. In International Conference on Machine Learning (ICML), 2021.

Zhao, A., Lin, M., Li, Y., Liu, Y., and Huang, G. A mixture of surprises for unsupervised reinforcement learning. In Neural Information Processing Systems (NeurIPS), 2022.

Zhao, R., Gao, Y., Abbeel, P., Tresp, V., and Xu, W. Mutual information state intrinsic control. In International Conference on Learning Representations (ICLR), 2021.
E. Implementation Details

For manipulation environments, we implement CSD on top of the publicly available codebase of MUSIC (Zhao et al., 2021).
For MuJoCo environments, we implement CSD based on the publicly available codebase of LSD (Park et al., 2022). We mostly follow the hyperparameters used in the original implementations. Our implementation can be found in the following repositories: https://github.com/seohongpark/CSD-manipulation (manipulation environments) and https://github.com/seohongpark/CSD-locomotion (locomotion environments). We run our experiments on an internal cluster with NVIDIA Tesla V100 and NVIDIA GeForce RTX 2080 Ti GPUs. Each run mostly takes a day or less.
E.1. Environments

We adopt the same environment settings used in LSD (Park et al., 2022) for Fetch manipulation environments (FetchPush, FetchSlide, FetchPickAndPlace) (Plappert et al., 2018) and MuJoCo locomotion environments (Ant, HalfCheetah) (Todorov et al., 2012; Brockman et al., 2016). In Fetch environments, unlike LSD, we do not use any supervision, such as limiting the discriminator's input only to the object. For the Kitchen environment, we use a 7-DoF end-effector controller (Mendonca et al., 2021) with state-based observations. We use an episode length of 200 for locomotion environments and an episode length of 50 for manipulation environments. In locomotion environments, to ensure fair comparisons, we use preset normalizers for all skill discovery methods as done in Park et al. (2022), but we find that CSD can still discover diverse behaviors including locomotion skills without a normalizer.
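For quick reference, the episode-length settings above can be summarized as a small configuration sketch (a minimal illustration in Python; the dictionary name is ours, and all other settings follow the LSD/MUSIC codebases):

```python
# Episode lengths stated in the text: 200 for locomotion, 50 for manipulation.
EPISODE_LENGTHS = {
    "Ant": 200,              # locomotion
    "HalfCheetah": 200,      # locomotion
    "FetchPush": 50,         # manipulation
    "FetchSlide": 50,        # manipulation
    "FetchPickAndPlace": 50, # manipulation
    "Kitchen": 50,           # manipulation (Kitchen also uses 50-step episodes)
}
```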
E.2. Downstream Tasks

Fetch environments. We use the same downstream tasks as in Park et al. (2022) for Fetch environments. In FetchPushGoal, FetchSlideGoal, and FetchPickAndPlaceGoal, a goal position is randomly sampled at the beginning of each episode. If the agent successfully places the object at the target position, a reward of 1 is given to the agent and the episode ends. We follow the original goal sampling range and reach criterion from Plappert et al. (2018).
Kitchen environment. We consider the following 13 downstream tasks for the Kitchen environment: BottomLeftBurner, BottomRightBurner, HingeCabinet, KettleBottomRight, KettleFall, KettleLift, KettleTopLeft, KettleTopRight, LightSwitch, Microwave, SlideCabinet, TopLeftBurner, TopRightBurner. For the success criteria of the tasks, we mostly follow Gupta et al. (2019); Mendonca et al. (2021) and refer to our implementation for detailed definitions. As in the Fetch tasks, the agent gets a reward of 1 when it satisfies the success criterion of each task.
MuJoCo locomotion environments. In AntGoal, a goal's xy position is randomly sampled from Unif([−20, 20]^2), and if the agent reaches the goal, it gets a reward of 10 and the episode ends. In AntMultiGoals, the agent should follow four goals within 50 steps each, where goal positions are randomly sampled from Unif([−7.5, 7.5]^2) centered at the current coordinates. The agent gets a reward of 2.5 every time it reaches a goal. In HalfCheetahGoal, a goal's x coordinate is randomly sampled from Unif([−60, 60]), and if the agent reaches the goal, it gets a reward of 10 and the episode ends. For these three environments, we consider the agent to have reached the goal if it enters within a radius of 3 from the goal. In HalfCheetahHurdle, the agent gets a reward of 1 if it jumps over a hurdle, where we use the same hurdle positions as Qureshi et al. (2020).
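To make the goal mechanics above concrete, here is a minimal Python sketch of the AntGoal/AntMultiGoals sampling and reward logic (function names and structure are ours, not the released code):

```python
import numpy as np

GOAL_RADIUS = 3.0  # "reached" = within a radius of 3 of the goal

def sample_antgoal(rng: np.random.Generator) -> np.ndarray:
    # AntGoal: goal xy position ~ Unif([-20, 20]^2)
    return rng.uniform(-20.0, 20.0, size=2)

def antgoal_reward(agent_xy: np.ndarray, goal_xy: np.ndarray):
    # Reward of 10 and episode termination upon reaching the goal.
    reached = np.linalg.norm(agent_xy - goal_xy) <= GOAL_RADIUS
    return (10.0, True) if reached else (0.0, False)

def sample_multigoal(rng: np.random.Generator, current_xy: np.ndarray) -> np.ndarray:
    # AntMultiGoals: each of four goals ~ Unif([-7.5, 7.5]^2) centered at the
    # current position; reaching a goal yields a reward of 2.5.
    return current_xy + rng.uniform(-7.5, 7.5, size=2)
```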
Skill policy. At the beginning of each episode, we sample a skill z from either a standard Gaussian distribution (for continuous skills) or a uniform distribution (for discrete skills), and fix the skill throughout the episode. For discrete skills, we use standard one-hot vectors for DIAYN and DADS, and zero-centered one-hot vectors for CSD and LSD, following Park et al. (2022). For DADS, we follow the original implementation choices, such as the use of batch normalization and fixing the output variance of the skill dynamics model. For CSD in manipulation environments, we start training the skill policy from epoch 4000, after the initial conditional density model has stabilized. When modeling Σ_θ(s) of the conditional density model, we use a diagonal covariance matrix, as we found it to be practically sufficient for our experiments. Also, we normalize the diagonal elements with their geometric mean at each state for further stability.
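A minimal sketch of the per-episode skill sampling described above (illustrative code only; the actual implementations live in the repositories linked earlier):

```python
import numpy as np

def sample_skill(rng: np.random.Generator, skill_dim: int,
                 discrete: bool, zero_centered: bool) -> np.ndarray:
    """Sample one skill z per episode and keep it fixed thereafter."""
    if not discrete:
        # Continuous skills: z ~ N(0, I).
        return rng.standard_normal(skill_dim)
    # Discrete skills: one-hot vector over skill_dim choices.
    z = np.zeros(skill_dim)
    z[rng.integers(skill_dim)] = 1.0
    if zero_centered:
        z -= 1.0 / skill_dim  # zero-centered one-hot (CSD/LSD variant)
    return z
```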
We present the full list of the hyperparameters used in our experiments in Tables 1 and 2, where we indicate the values considered for our hyperparameter search with curly brackets. For the intrinsic reward coefficient, we use 50 (DADS), 500 (CSD and LSD), 1500 (DIAYN), 200 (Disagreement Fetch), or 50 (Disagreement Kitchen). For the learning rate, we use
| [
"https://github.com/seohongpark/CSD-manipulation",
"https://github.com/seohongpark/CSD-locomotion"
] |
[
"Extraordinary Bulk Insulating Behavior in the Strongly Correlated Materials FeSi and FeSb 2",
"Extraordinary Bulk Insulating Behavior in the Strongly Correlated Materials FeSi and FeSb 2"
] | [
"Yun Suk Eo \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Keenan Avers \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Jarryd A Horn \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Hyeok Yoon \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Shanta Saha \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Alonso Suarez \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA\n",
"Michael S Fuhrer \nSchool of Physics and Astronomy\nMonash University\n3800VictoriaAustralia\n\nARC Centre of Excellence in Future Low-Energy Electronics Technologies\nMonash University\n3800VictoriaAustralia\n",
"Johnpierre Paglione \nDepartment of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\nCollege Park, MarylandUSA\n\nCanadian Institute for Advanced Research\nM5G 1Z8TorontoOntarioCanada ‡\n"
] | [
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\n20742College ParkMarylandUSA",
"School of Physics and Astronomy\nMonash University\n3800VictoriaAustralia",
"ARC Centre of Excellence in Future Low-Energy Electronics Technologies\nMonash University\n3800VictoriaAustralia",
"Department of Physics\nMaryland Quantum Materials Center\nUniversity of Maryland\nCollege Park, MarylandUSA",
"Canadian Institute for Advanced Research\nM5G 1Z8TorontoOntarioCanada ‡"
] | [] | 4f electron-based topological Kondo insulators have long been researched for their potential to conduct electric current via protected surface states, while simultaneously exhibiting unusually robust insulating behavior in their interiors. To this end, we have investigated the electrical transport of the 3d-based correlated insulators FeSi and FeSb2, which have exhibited enough similarities to their f electron cousins to warrant investigation. By using a double-sided Corbino disk transport geometry, we show unambiguous evidence of surface conductance in both of these Fe-based materials. In addition, by using a 4-terminal Corbino inverted resistance technique, we extract the bulk resistivity as a function of temperature. Similar to topological Kondo insulator SmB6, the bulk resistivity of FeSi and FeSb2 are confirmed to exponentially increase by up to 9 orders of magnitude from room temperature to the lowest accessible temperature. This demonstrates that these materials are excellent bulk insulators, providing an ideal platform for studying correlated 2D physics. | 10.1063/5.0148249 | [
"https://export.arxiv.org/pdf/2302.09996v1.pdf"
] | 257,038,547 | 2302.09996 | 348413aeeacf955dd24cd6f50e6bce2c19a983b4 |
Extraordinary Bulk Insulating Behavior in the Strongly Correlated Materials FeSi and FeSb 2
Yun Suk Eo
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Keenan Avers
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Jarryd A Horn
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Hyeok Yoon
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Shanta Saha
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Alonso Suarez
Department of Physics
Maryland Quantum Materials Center
University of Maryland
20742College ParkMarylandUSA
Michael S Fuhrer
School of Physics and Astronomy
Monash University
3800VictoriaAustralia
ARC Centre of Excellence in Future Low-Energy Electronics Technologies
Monash University
3800VictoriaAustralia
Johnpierre Paglione
Department of Physics
Maryland Quantum Materials Center
University of Maryland
College Park, MarylandUSA
Canadian Institute for Advanced Research
M5G 1Z8TorontoOntarioCanada ‡
Extraordinary Bulk Insulating Behavior in the Strongly Correlated Materials FeSi and FeSb 2
(Dated: February 21, 2023)
4f electron-based topological Kondo insulators have long been researched for their potential to conduct electric current via protected surface states, while simultaneously exhibiting unusually robust insulating behavior in their interiors. To this end, we have investigated the electrical transport of the 3d-based correlated insulators FeSi and FeSb2, which have exhibited enough similarities to their f electron cousins to warrant investigation. By using a double-sided Corbino disk transport geometry, we show unambiguous evidence of surface conductance in both of these Fe-based materials. In addition, by using a 4-terminal Corbino inverted resistance technique, we extract the bulk resistivity as a function of temperature. Similar to topological Kondo insulator SmB6, the bulk resistivity of FeSi and FeSb2 are confirmed to exponentially increase by up to 9 orders of magnitude from room temperature to the lowest accessible temperature. This demonstrates that these materials are excellent bulk insulators, providing an ideal platform for studying correlated 2D physics.
Introduction. The canonical Kondo insulators SmB6 [1] and YbB12 [2] have recently regained widespread interest following the identification of non-trivial band topology, studies of topological surface states, and the possible observation of charge-neutral fermions [3,4]. It is now well established that the low-temperature plateau in electrical resistivity measurements of these materials originates from a surface channel contribution, consistent with the topological band inversion that is predicted to occur when Kondo hybridization opens a bulk band gap at the Fermi level [5,6]. Other examples of materials with apparent surface conduction have come to light, with low-temperature plateaus in resistivity arising despite apparent insulating behavior on cooling from room temperature. Most such systems exhibit a resistivity increase of only a factor of a few before saturating [7-11]. For these materials, a resistance plateau originating from surface conduction is unlikely, as the low resistivity values of the plateaus would imply an unusually high sheet conductivity (using reasonable geometric factors). In contrast, a handful of correlated insulators including FeSi [12], FeSb2 [13], and Ce3Bi4Pt3 [8] exhibit much larger (3-4 orders of magnitude) increases in resistivity before the plateau [14], similar to the cases of SmB6 and YbB12.
In particular, the low-temperature resistivity saturation observed in the iron-based correlated insulators FeSi and FeSb2 has been suggested to originate from topological surface conducting states, as evidenced by transport [15] and ARPES [16] experiments. In contrast to weakly correlated topological surface states, exotic phenomena that might be related to strong correlation characteristics, such as surface magnetism (Zak phase in FeSi [17]) and a very low surface Fermi velocity (v_F of 10^3 − 10^4 m/s in FeSb2 [16]), have been reported in these two materials. Both materials exhibit striking similarities to the topological Kondo insulator SmB6, in that their ground states are non-magnetic [18-20] despite having magnetic elements, and that they both have narrow band gaps [21,22], which in the case of SmB6 arise due to Kondo physics [23,24]. However, in FeSi and FeSb2 this would require that the 3d electrons participate in the gap opening instead of 4f electrons, and since the 3d electrons are less localized in nature, the origin of the band gap is more difficult to understand and not as well agreed upon as in SmB6. Both FeSi [25-27] and FeSb2 [28] have been studied within the Kondo insulator framework, which involves a gap opening due to hybridization between a localized moment and a dispersive conduction band. However, other band calculations show that the band gap lies between the 3d multiplets ([29] and [30-32]). Overall, it is not confirmed that these materials share a common origin of bulk insulating behavior (e.g., the Kondo effect), nor that topology plays a role in originating the apparent surface state conduction, raising the question of the nature of bulk and surface conduction in these materials.
While numerous transport studies have been performed on SmB6, the true bulk-insulating behavior was confirmed using a novel inverted resistance measurement technique [33]. This technique, which accesses the bulk conductance by circumventing the dominating surface conduction channel via measurement of the voltage exterior to a Corbino disk [34], revealed another remarkable feature of the bulk insulating behavior in SmB6: a thermally activated, ten orders-of-magnitude increase of the bulk resistivity on cooling from room temperature.
This measured exponential increase is in striking contrast to the behavior observed in conventional narrow-gap insulators or semiconductors, where the exponential rise of resistivity is typically terminated by extrinsic carriers from point defects or other disorder. For this reason, forming a truly insulating bulk in a semiconductor generally requires exceptionally pure materials, and indeed the ability to precisely control impurities is the foundation of the modern semiconductor industry. This is because conventional semiconductors obey the Mott criterion (a_B N^{1/3} ≈ 0.26 [35]): they transition into a metal when the dopant concentration is higher than 10^16 − 10^17 cm^−3 (i.e., ∼0.0001 − 0.001%).
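To make the Mott criterion concrete, a back-of-the-envelope check is shown below; the effective Bohr radius here is an assumed illustrative value, not a number from the text:

```python
# Mott criterion a_B * N_c^(1/3) ~ 0.26, i.e. N_c = (0.26 / a_B)^3.
a_B_cm = 2e-7                          # assumed effective Bohr radius (~2 nm)
N_crit = (0.26 / a_B_cm) ** 3          # critical dopant density [cm^-3]
print(f"N_crit ~ {N_crit:.1e} cm^-3")  # ~2e18 cm^-3 for this choice of a_B
```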
In contrast, the insulating state in SmB6 is robust to many orders of magnitude higher impurity density before the material transitions to a metal. The increase in bulk resistivity can still be seen in SmB6 samples with chemical substitution levels of up to several percent [36-38]. This is surprising, considering that typically only metals or s-wave superconductors tolerate such high substitution levels before transitioning to a distinct ground state. Since the insulating bulk of SmB6 is so robust to (zero-dimensional) point defects, there is growing evidence that higher-order (one- or two-dimensional) defects such as dislocations are the leading type of disorder important for bulk conduction, and those defects are unconventional due to their topologically non-trivial nature [39,40]. Given this unusual disorder and impurity response in bulk SmB6, it is of interest to investigate other correlated insulators for similar properties. Moreover, the characterization of such robust bulk-insulating systems provides an important foundation for continuing surface-state transport studies of FeSi and FeSb2, and may even be the key technological advantage over more weakly correlated insulators.
In this study, we investigate the nature of bulk conductivity in the correlated insulators FeSi and FeSb2, utilizing the inverted resistance technique to extract and compare their bulk resistivities. Confirmation of thermally activated bulk behavior in the low-temperature plateau region suggests that these systems are truly bulk-insulating correlated materials, and that the resistance saturation is due to surface conduction. The absence of bulk impurity conduction in both materials provides further examples of extraordinary bulk insulation in correlated insulators.
Results. We have prepared large single crystals (∼5 mm polyhedra for FeSi and ∼2-3 mm polyhedra for FeSb2), which easily allow for standard four-probe measurements, as depicted in the lower left inset of Fig. 1. The resistance (R) vs. temperature (T) of FeSi and FeSb2 in comparison with SmB6 is shown in Fig. 1. The standard resistance of all three materials increases 5-6 orders of magnitude upon lowering the temperature, consistent with the previous literature [41-43]. Most notably, all three standard resistances saturate at low temperatures. In SmB6, this saturation below 4 K is due to a surface conduction layer, likely a gapless dispersion emerging from the non-trivial band topology. Recently, the existence of surface states has also been reported for FeSi and FeSb2, using thickness-dependent transport [15] and angle-resolved photoemission spectroscopy [16], respectively. These studies invite us to study the surface and hidden bulk conductivity of FeSi and FeSb2. Indeed, we verify below that the low-temperature saturation features in FeSi and FeSb2 are of surface origin, together with an estimation of the sheet resistance.
But first, we comment on a hump feature at higher temperatures (SmB6 at 15 K, FeSi at 50 K, FeSb2 at 30 K). In SmB6 this feature is weak, but it is much more pronounced in FeSi and FeSb2, and can even be thought of as another saturation feature preceding the low-temperature surface one. To understand these hump features of FeSi and FeSb2 more clearly, we also study the Hall coefficient as a function of temperature. As shown in Fig. 2, we plot both the resistivity and the Hall coefficient as a function of inverse temperature. The feature appears in the Hall effect as well, and is therefore likely due to a carrier density change caused by a shift of the chemical potential (from a higher to a lower activation energy). Such a change in activation energy has also been seen in the Hall effect of SmB6, consistent with the activation energy changing from the middle of the gap to closer to the conduction band edge [44]. This shift of the chemical potential upon lowering the temperature likely originates from a crossover from the intrinsic to the extrinsic regime (freeze-out or ionization regime) [45] or from band bending due to the surface states [46]. Other temperature-dependent effects, such as the bulk gap opening reported in ARPES [16,47] and STM [48] studies, may also play a role in the change in activation energy. We summarize the activation energy values in Table I. The difference in slope between the resistivity and the Hall coefficient indicates that the mobility may also be a strong function of temperature, which requires in-depth follow-up studies. Lastly, it is important to note that the Hall coefficients of our FeSi and FeSb2 samples lack a sign change, although sign changes of R_H have been observed in FeSi previously [49]. The Hall sign change is a feature commonly seen in high-quality f-electron systems and is attributed to skew scattering [50-52]. The absence of this sign change may reflect a high density of extrinsic scattering centers [52].
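The activation energies in Table I follow from a standard Arrhenius analysis of ρ(T) = ρ0 exp(Δ/k_B T); a minimal fitting sketch (ours, not the authors' code) is:

```python
import numpy as np

def activation_energy_meV(T_K: np.ndarray, rho: np.ndarray) -> float:
    """Slope of ln(rho) vs. 1/T gives Delta/kB for rho = rho0*exp(Delta/(kB*T))."""
    kB_meV_per_K = 8.617e-2                           # Boltzmann constant in meV/K
    slope, _ = np.polyfit(1.0 / T_K, np.log(rho), 1)  # linear fit in 1/T
    return slope * kB_meV_per_K                       # activation energy in meV
```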
We now show that the saturation of resistance at lower temperatures originates from the surface and not from the bulk. It is difficult to determine from a standard four-probe measurement whether this saturation is of surface origin. Instead, we use a method introduced in Ref. [33], employing two Corbino disks coaxially aligned on two opposite surfaces. This allows us to measure what we call the lateral, hybrid, radial, and vertical resistances, as described in the caption of Fig. 3. If the resistance saturation originates from surface conduction and the bulk conduction is negligible, the lateral resistance is identical to a Corbino disk measurement on a two-dimensional electron gas or a thin film. Also, the radial and vertical resistance measurements are identical to two Corbino disk resistance measurements connected in parallel and in series, respectively. Most importantly, the hybrid measurement is an inverted resistance measurement, which essentially measures the voltage of the bulk current leaking out of a 2D Faraday cage (i.e., the exterior of a Corbino disk). The hybrid resistance (R_hybrid) is given by
R_hybrid = C_1 σ_b t / σ_s^2,    (1)
where C_1 is a dimensionless geometric factor, σ_b is the bulk conductivity, σ_s is the surface (sheet) conductance, and t is the thickness of the sample. This inverted (or hybrid) measurement is the key measurement of this study, since it contains the bulk conductivity information, whereas the other measurements show saturation due to the weak temperature dependence of the surface resistance. Experimental R vs. T curves from Corbino measurements are shown in Fig. 3 for FeSi (a) and FeSb2 (b), in comparison with the previously reported SmB6 (c) (from Ref. [33]) and a numerical demonstration simulating a conducting surface and an insulating bulk (d). This experiment confirms that the resistance saturation in both FeSi and FeSb2 originates from the surface conduction channel. First, the low-temperature downturn of the hybrid resistances is consistent with Eq. (1) (i.e., σ_s ≫ σ_b t). In FeSi, both top and bottom surfaces were polished as identically as possible before patterning the Corbino disks. In FeSb2, in contrast, we polished only one surface and left the other surface in an as-grown condition before patterning the Corbino disks. The difference in lateral resistance values between the two opposite surfaces, shown in the inset of Fig. 3(b), is quite significant. The as-grown surface resistance R_asgrown is an order of magnitude higher than that of the Corbino disk patterned on a roughly polished surface, R_polished. This is similar to the previous SmB6 report, showing evidence of subsurface crack conduction on a poorly prepared surface [53,54]. Nevertheless, the radial and vertical measurements still show behavior consistent with the two channels connected in parallel and in series (R_vertical = R_polished + R_asgrown ≈ R_asgrown and R_radial^−1 = R_asgrown^−1 + R_polished^−1, i.e., R_radial ≈ R_polished), again consistent with the surface state picture. For both FeSi and FeSb2, we find that recovering the same sheet resistance as the as-grown surface after finer polishing is much more challenging than in SmB6, perhaps because the samples are much softer. Once the surface has been polished, the sheet resistance drops to a value that is almost an order of magnitude smaller. This can be either a surface quality improvement or the creation of subsurface crack conduction channels, which requires further studies to clarify. Although in principle we can extract the bulk resistivity with this setup, we prefer to avoid this change of sheet resistivity (or effectively changing it), since the inverted resistance becomes smaller according to Eq. (1) (i.e., a smaller ρ_s yields a smaller R_inv measurement). To this end, for the inverted resistance measurements used in the bulk resistivity extraction, we employ a four-terminal Corbino disk resistance measurement (as shown in the inset of Fig. 4(a)) patterned on an as-grown (unpolished) surface.
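Before turning to the extraction, the surface/bulk crossover of Fig. 3(d) can be reproduced qualitatively with a lumped two-channel toy model: a temperature-independent surface sheet in parallel with a thermally activated bulk channel. The sketch below uses the caption's parameters (bulk activation energy 3.5 meV, sheet resistance 100 Ω); the bulk prefactor is an assumed value, and the actual figure uses a finite-element calculation rather than this zero-dimensional model:

```python
import numpy as np

kB_meV = 8.617e-2   # Boltzmann constant, meV/K
DELTA = 3.5         # meV, bulk activation energy (Fig. 3(d) caption)
R_SURF = 100.0      # Ohm, surface sheet resistance (Fig. 3(d) caption)
R_BULK0 = 1.0       # Ohm, assumed bulk prefactor (illustrative)

def resistance(T_K: float) -> float:
    # Parallel combination of a constant surface channel and an
    # activated bulk channel with G_bulk = exp(-Delta/(kB*T)) / R_BULK0.
    G_bulk = np.exp(-DELTA / (kB_meV * T_K)) / R_BULK0
    return 1.0 / (1.0 / R_SURF + G_bulk)

for T in (300, 30, 10, 3):
    print(f"{T:4d} K -> {resistance(T):8.1f} Ohm")  # saturates near 100 Ohm
```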
The bulk resistivity can be extracted by combining the information from the inverted resistance measurement and a standard four-terminal resistance measurement, as shown in Fig. 4(a) and (b) (details are provided in the SI) [33,34]. The resulting bulk resistivity of FeSi and FeSb2, compared to that of SmB6 [34], is shown in Fig. 4(c). We note that the resistivity of FeSb2 is an effective resistivity, since the current does not flow uniformly in an orthorhombic material. However, prior studies indicate that the activation energies do not differ significantly between directions [55]. We find that both FeSb2 and FeSi show simple thermally activated behavior with a nearly 8-9 orders-of-magnitude increase.
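Schematically, the extraction combines the two measurements as follows: the standard, surface-dominated resistance fixes the sheet conductance σ_s, and Eq. (1) then yields the bulk conductivity σ_b. A minimal sketch with placeholder geometric factors is given below; the true factors C_0 and C_1 depend on the contact geometry and are treated fully in Refs. [33, 34]:

```python
def bulk_resistivity(R_std: float, R_inv: float, t_cm: float,
                     C0: float = 1.0, C1: float = 1.0) -> float:
    """Sketch of the inverted-resistance extraction (placeholder geometry).

    Surface-dominated regime: R_std ~ C0 / sigma_s.
    Eq. (1):                  R_inv = C1 * sigma_b * t / sigma_s**2.
    """
    sigma_s = C0 / R_std                        # sheet conductance [1/Ohm]
    sigma_b = R_inv * sigma_s**2 / (C1 * t_cm)  # bulk conductivity [1/(Ohm cm)]
    return 1.0 / sigma_b                        # bulk resistivity [Ohm cm]
```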
Lastly, we discuss the conducting surface channel. The resistance of FeSi from the standard Corbino measurement reads R = 5.08 kΩ in the surface-dominated regime. This corresponds to a sheet resistance of 78.7 kΩ, far exceeding h/e^2, which is the Mott-Ioffe-Regel (MIR) limit (k_F l = 1) for a two-dimensional electron gas [56]. This high value appears to rule out a metallic surface state emerging from a 3d strong topological insulator [57], which should be protected against back-scattering and localization. For FeSb2, the temperature dependence is much weaker and the sheet resistivity value does not exceed the MIR value. However, we note that σ_xx and σ_yy are not expected to be equal in general even in the 2d layer, since the crystal is orthorhombic. Therefore, the sheet resistivity is an effective resistivity, with ρ_xx and ρ_yy not necessarily contributing equally. The details of the surface states in both FeSi and FeSb2 will require future in-depth studies.
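As a consistency check on the numbers quoted above, the Corbino conversion from the Fig. 4 caption reproduces the stated sheet resistance and can be compared to the resistance quantum:

```python
import numpy as np

R = 5.08e3                              # Ohm, FeSi Corbino resistance (text)
rho_2D = 2 * np.pi / np.log(3 / 2) * R  # Fig. 4 caption: rho_2D = 2*pi/ln(3/2) * R
h_over_e2 = 25.813e3                    # Ohm, resistance quantum h/e^2
print(f"rho_2D = {rho_2D/1e3:.1f} kOhm vs h/e^2 = {h_over_e2/1e3:.1f} kOhm")
# -> rho_2D ~ 78.7 kOhm, i.e. about 3x the MIR benchmark h/e^2
```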
Discussion. Among correlated insulators, a robust insulating behavior of the bulk, increasing exponentially by at least 8-9 orders of magnitude, had previously only been seen in flux-grown SmB6 [34]. In this study, we have found two more materials behaving in this way. Our finding is significant for future surface transport studies in which any contribution from the bulk channel is unacceptable. However, the detailed gap formation may be different in nature: the insulating gap of FeSi and FeSb2 likely originates from the 3d orbitals instead of hybridization between a localized 4f moment and a dispersive band.
In the historical literature on SmB6 and FeSi, a saturation of resistance upon lowering the temperature was interpreted as bulk metallic states arising from impurity conduction. For this interpretation to be valid, the authors considered the Mott criterion and checked whether the resistivity magnitude was consistent with a reasonable impurity concentration. In FeSi, the critical impurity density for the Mott criterion was reported to be 10^18 cm^−3, and it was consistent with the resistivity value after an increase of ∼5 orders of magnitude.
We now find that this saturation of resistance is of surface origin and that the bulk resistivity continues to increase. The lowest temperature data point is limited by the performance of the electronics we used. Using the well-established transport theory of charged impurities in conventional semiconductors, the absence of a change in thermal activation energy originating from hopping conduction (assisted by phonons) up to very high resistivity values suggests that FeSi and FeSb2 have impurity densities lower than 5 × 10^−4 % and 2 × 10^−3 %, respectively. This low impurity density is likely lower than the impurity level of the starting materials of our crystals. Either the unintentional impurities do not act as charged impurities (donors or acceptors), or our conventional understanding of impurities does not apply in these correlated insulators.
It is worth mentioning different viewpoints on the bulk of SmB6 that might be relevant to our Fe-based insulator studies. One speculation is that the Kondo gap may resemble an s-wave BCS superconducting gap, whose existence is robust in the presence of a large number of impurities [58]. Related to this view, in order to explain the experiments supporting the evidence of charge-neutral fermions, O. Erten et al. view SmB6 as a failed superconductor, where the order parameter does not have the topological stability to condense into Cooper pairs, but is instead a super dielectric [59]. The temperature behavior of the resistivity and Hall coefficient has been explained by Rakoski et al. without invoking in-gap impurity states, with band bending by the surface states instead being responsible for the detailed transport behavior [46]. Alternatively, Souza et al. [60] and Jiao et al. [61] suggested that the impurities are sealed off by metallic states of topological nature. Lastly, B. Skinner explains the behavior by in-gap impurity states [62]. In Skinner's model, if the dispersion can be approximated as a Mexican-hat dispersion instead of a parabolic band, the insulator-to-metal transition is postponed until about 10^4 times the doping density of the Mott criterion. Whether FeSi and FeSb2 can also be understood within these theoretical models needs to be investigated in future work.
In conclusion, we have discovered two additional robustly insulating correlated insulators, FeSi and FeSb2, that host surface states. We believe these additional materials, combining surface conduction channels with excellent insulation in the bulk, will enable heterostructures for 2D flat-band engineering.
FIG. 1. Typical resistance vs. temperature of FeSi (red) and FeSb2 (blue) in comparison with SmB6 (black). Lower left inset: resistances were measured using a conventional four-probe geometry. Upper right inset: schematic of an impurity band close to the conduction band. E1 is the thermal activation energy from the chemical potential (µ) to the nearest band (the conduction band in the figure), and E3 is the extrinsic thermal activation energy originating from the hopping conduction between impurity sites.
FIG. 2. (Color) Comparison of resistivity (blue, left axis) and 9 T Hall coefficient (green, right axis) at high temperatures, focusing near the hump feature. The hump feature is shaded in gray.
FIG. 3. (Color) Surface conduction channel verification at low temperatures via coaxially aligned Corbino disks. Inset of (a): schematic diagram of the Corbino disks on two opposite surfaces. Lateral (blue): R1,2 or R3,4; Vertical (green): R1,3 while shorting 2 and 4; Radial (magenta): R1,2 while shorting 1 and 3, and 2 and 4; Hybrid (red): v1,2/i3,4. (a) R vs. T measurement of FeSi. (b) R vs. T measurement of FeSb2. Inset: the blue data with the upper-triangle symbol are from the lateral configuration of a Corbino disk on the polished surface, and the data with the lower-triangle symbol are a lateral configuration measurement from an unpolished surface. (c) R vs. T of SmB6 from Ref. [33]. (d) Numerical (finite-element analysis) demonstration of a crossover from insulating bulk to surface conduction upon lowering the temperature, using a bulk activation energy of 3.5 meV and a sheet resistance of 100 Ω. Details of the sample transport geometry can be found in Appendix B.
FIG. 4. (Color) Resistance measurement of a 4-terminal Corbino disk and bulk resistivity after the extraction process. (a) R vs. T of FeSi. Inset: schematic diagram of the 4-terminal Corbino disk geometry. The conversion from resistance R to sheet resistance ρ_2D is ρ_2D = 2π/ln(3/2) × R. Standard resistance measurement configuration: v1,4/i2,3; inverted resistance measurement configuration: v1,2/i4,3. (b) R vs. T of FeSb2. (c) Bulk resistivity extraction result of FeSi (in blue) and FeSb2 (in red) in comparison with the previous SmB6 (in black) report (from Ref. [34]). Details of the transport geometry and the bulk resistivity extraction process can be found in Appendix B and D, respectively.
We thank Ji-Hoon Park for the wire bonder assistance. We thank Ke-Jun Xu, Brian Skinner, Andriy Nevidomskyy, Shouvik Sur, and Onur Erten for the discussions.
L. Li, K. Sun, C. Kurdak, and J. Allen, Nat. Rev. Phys. 2, 463 (2020).
Z. Xiang, Y. Kasahara, T. Asaba, B. Lawson, C. Tinsman, L. Chen, K. Sugimoto, S. Kawaguchi, Y. Sato, G. Li, et al., Science 362, 65 (2018).
B. Tan, Y.-T. Hsu, B. Zeng, M. C. Hatnean, N. Harrison, Z. Zhu, M. Hartstein, M. Kiourlappou, A. Srivastava, M. Johannes, et al., Science 349, 287 (2015).
M. Hartstein, W. Toews, Y.-T. Hsu, B. Zeng, X. Chen, M. C. Hatnean, Q. Zhang, S. Nakamura, A. Padgett, G. Rodway-Gant, et al., Nat. Phys. 14, 166 (2018).
M. Dzero, K. Sun, V. Galitski, and P. Coleman, Phys. Rev. Lett. 104, 106408 (2010).
T. Takimoto, J. Phys. Soc. Jpn. 80, 123710 (2011).
J. Stankiewicz, P. F. S. Rosa, P. Schlottmann, and Z. Fisk, Phys. Rev. B 94, 125141 (2016).
T. Takabatake, F. Iga, T. Yoshino, Y. Echizen, K. Katoh, K. Kobayashi, M. Higa, N. Shimizu, Y. Bando, G. Nakamoto, et al., J. Magn. Magn. Mater. 177, 277 (1998).
K. Katoh and T. Takabatake, J. Alloys Compd. 268, 22 (1998).
P. Haen, F. Lapierre, J. M. Mignot, R. Tournier, and F. Holtzberg, Phys. Rev. Lett. 43, 304 (1979).
S. K. Malik and D. T. Adroja, Phys. Rev. B 43, 6277 (1991).
Z. Schlesinger, Z. Fisk, H.-T. Zhang, M. B. Maple, J. DiTusa, and G. Aeppli, Phys. Rev. Lett. 71, 1748 (1993).
A. Bentien, S. Johnsen, G. Madsen, B. Iversen, and F. Steglich, EPL (Europhysics Letters) 80, 17008 (2007).
M. Pickem, E. Maggio, and J. M. Tomczak, Commun. Phys. 4, 1 (2021).
Y. Fang, S. Ran, W. Xie, S. Wang, Y. S. Meng, and M. B. Maple, Proc. Natl. Acad. Sci. U.S.A. 115, 8558 (2018).
K.-J. Xu, S.-D. Chen, Y. He, J. He, S. Tang, C. Jia, E. Y. Ma, S.-K. Mo, D. Lu, M. Hashimoto, et al., Proc. Natl. Acad. Sci. U.S.A. 117, 15409 (2020).
Y. Ohtsuka, N. Kanazawa, M. Hirayama, A. Matsui, T. Nomoto, R. Arita, T. Nakajima, T. Hanashima, V. Ukleev, H. Aoki, M. Mogi, K. Fujiwara, A. Tsukazaki, M. Ichikawa, M. Kawasaki, and Y. Tokura, Sci. Adv. 7, eabj0498 (2021).
V. Jaccarino, G. K. Wertheim, J. H. Wernick, L. R. Walker, and S. Arajs, Phys. Rev. 160, 476 (1967).
S.-J. Oh, J. W. Allen, and J. M. Lawrence, Phys. Rev. B 35, 2267 (1987).
I. A. Zaliznyak, A. T. Savici, V. O. Garlea, R. Hu, and C. Petrovic, Phys. Rev. B 83, 184414 (2011).
Z. Schlesinger, Z. Fisk, H.-T. Zhang, M. B. Maple, J. DiTusa, and G. Aeppli, Phys. Rev. Lett. 71, 1748 (1993).
A. Perucchi, L. Degiorgi, R. Hu, C. Petrovic, and V. F. Mitrović, Eur. Phys. J. B 54, 175 (2006).
N. Mott, Philos. Mag. 30, 403 (1974).
R. M. Martin and J. Allen, J. Appl. Phys. 50, 7561 (1979).
Z. Fisk, J. Sarrao, S. Cooper, P. Nyhus, G. Boebinger, A. Passner, and P. Canfield, Physica B Condens. 223, 409 (1996).
C. Fu, M. P. C. M. Krijn, and S. Doniach, Phys. Rev. B 49, 2219 (1994).
D. Mandrus, J. L. Sarrao, A. Migliori, J. D. Thompson, and Z. Fisk, Phys. Rev. B 51, 4763 (1995).
C. Petrovic, Y. Lee, T. Vogt, N. D. Lazarov, S. L. Bud'ko, and P. C. Canfield, Phys. Rev. B 72, 045103 (2005).

TABLE I. Activation energy fitting results of bulk resistivity and Hall coefficient. Columns: Sample/Measurement; E_high (meV) (T > T*); T* (hump temperature); E_low (meV). T* is the temperature at which the slope changes and shows as a hump in resistivity. The activation energies, Δ = E_high and Δ = E_low, are estimated by fitting the functional form ρ = ρ0 exp(Δ/kBT) to the data.
L. F. Mattheiss and D. R. Hamann, Phys. Rev. B 47, 13114 (1993).
I. I. Mazin, K. Koepernik, M. D. Johannes, R. González-Hernández, and L. Šmejkal, Proc. Natl. Acad. Sci. U.S.A. 118, e2108924118 (2021).
A. Chikina, J.-Z. Ma, W. H. Brito, S. Choi, P. Sémon, A. Kutepov, Q. Du, J. Jandke, H. Liu, N. C. Plumb, M. Shi, C. Petrovic, M. Radovic, and G. Kotliar, Phys. Rev. Research 2, 023190 (2020).
J. M. Tomczak, K. Haule, T. Miyake, A. Georges, and G. Kotliar, Phys. Rev. B 82, 085104 (2010).
Y. S. Eo, K. Sun, C. Kurdak, D.-J. Kim, and Z. Fisk, Phys. Rev. Applied 9, 044006 (2018).
Y. S. Eo, A. Rakoski, J. Lucien, D. Mihaliov, Ç. Kurdak, P. F. Rosa, and Z. Fisk, Proc. Natl. Acad. Sci. U.S.A. 116, 12638 (2019).
N. Mott and J. Davies, Philos. Mag. B 42, 845 (1980).
W. Fuhrman, J. Chamorro, P. Alekseev, J.-M. Mignot, T. Keller, J. Rodriguez-Rivera, Y. Qiu, P. Nikolić, T. McQueen, and C. L. Broholm, Nat. Commun. 9, 1 (2018).
W. Phelan, S. Koohpayeh, P. Cottingham, J. Tutmaher, J. Leiner, M. Lumsden, C. Lavelle, X. Wang, C. Hoffmann, M. Siegler, et al., Sci. Rep. 6, 1 (2016).
W. A. Phelan, S. M. Koohpayeh, P. Cottingham, J. W. Freeland, J. C. Leiner, C. L. Broholm, and T. M. McQueen, Phys. Rev. X 4, 031012 (2014).
Y. S. Eo, A. Rakoski, S. Sinha, D. Mihaliov, W. T. Fuhrman, S. R. Saha, P. F. S. Rosa, Z. Fisk, M. C. Hatnean, G. Balakrishnan, J. R. Chamorro, W. A. Phelan, S. M. Koohpayeh, T. M. McQueen, B. Kang, M.-s. Song, B. Cho, M. S. Fuhrer, J. Paglione, and C. Kurdak, Phys. Rev. Materials 5, 055001 (2021).
K.-J. Xu, M. Barber, E. Y. Ma, J. Xia, M. C. Hatnean, G. Balakrishnan, J. Zaanen, and Z.-X. Shen, arXiv:2106.00112 (2021).
L. Degiorgi, M. Hunt, H. R. Ott, and Z. Fisk, Phys. B: Condens. Matter 206, 810 (1995).
V. Glushkov, N. Sluchanko, S. Demishev, M. Kondrin, A. Pronin, K. Petukhov, Y. Bruynseraede, V. Moshchalkov, and A. Menovsky, Physica B Condens. 284, 1179 (2000).
K. Lisunov, E. Arushanov, C. Kloc, J. Broto, J. Leotin, H. Rokoto, M. Respaud, and E. Bucher, Physica B Condens. 229, 37 (1996).
N. Sluchanko, A. Volkov, V. Glushkov, B. Gorshunov, S. Demishev, M. Kondrin, A. Pronin, N. Samarin, Y. Bruynseraede, V. Moshchalkov, et al., J. Exp. Theor. Phys. 88, 533 (1999).
B. Gorshunov, N. Sluchanko, A. Volkov, M. Dressel, G. Knebel, A. Loidl, and S. Kunii, Phys. Rev. B 59, 1808 (1999).
A. Rakoski, Y. S. Eo, K. Sun, and C. Kurdak, Phys. Rev. B 95, 195133 (2017).
M. Arita, K. Shimada, Y. Takeda, M. Nakatake, H. Namatame, M. Taniguchi, H. Negishi, T. Oguchi, T. Saitoh, A. Fujimori, and T. Kanomata, Phys. Rev. B 77, 205117 (2008).
B. Yang, M. Uphoff, Y.-Q. Zhang, J. Reichert, A. P. Seitsonen, A. Bauer, C. Pfleiderer, and J. V. Barth, Proc. Natl. Acad. Sci. U.S.A. 118, e2021203118 (2021).
P. Sun, B. Wei, D. Menzel, and F. Steglich, Phys. Rev. B 90, 245146 (2014).
A. Fert and P. Levy, Phys. Rev. B 36, 1907 (1987).
P. Coleman, P. Anderson, and T. Ramakrishnan, Phys. Rev. Lett. 55, 414 (1985).
A. Rakoski, Y. S. Eo, C. Kurdak, B. Kang, M. Song, and B. Cho, J. Supercond. Nov. Magn. 33, 265 (2020).
Y. S. Eo, S. Wolgast, A. Rakoski, D. Mihaliov, B. Kang, M. Song, B. Cho, M. C. Hatnean, G. Balakrishnan, Z. Fisk, et al., Phys. Rev. B 101, 155109 (2020).
M. V. A. Crivillero, M. König, J. Souza, P. Pagliuso, J. Sichelschmidt, P. F. Rosa, Z. Fisk, and S. Wirth, Phys. Rev. Res. 3, 023162 (2021).
A. Bentien, S. Johnsen, G. Madsen, B. Iversen, and F. Steglich, EPL (Europhysics Letters) 80, 17008 (2007).
S. Das Sarma and E. H. Hwang, Phys. Rev. B 89, 235423 (2014).
K. Nomura, M. Koshino, and S. Ryu, Phys. Rev. Lett. 99, 146806 (2007).
P. W. Anderson, J. Phys. Chem. Solids 11, 26 (1959).
O. Erten, P.-Y. Chang, P. Coleman, and A. M. Tsvelik, Phys. Rev. Lett. 119, 057603 (2017).
J. C. Souza, P. F. S. Rosa, J. Sichelschmidt, M. Carlone, P. A. Venegas, M. O. Malcolms, P. M. Menegasso, R. R. Urbano, Z. Fisk, and P. G. Pagliuso, Phys. Rev. Research 2, 043181 (2020).
L. Jiao, S. Rößler, D. Kasinathan, P. F. Rosa, C. Guo, H. Yuan, C.-X. Liu, Z. Fisk, F. Steglich, and S. Wirth, Sci. Adv. 4, eaau4886 (2018).
B. Skinner, Phys. Rev. Materials 3, 104601 (2019).
| [] |
[
"Evolution of matter and galaxy clustering in cosmological hydrodynamical simulations",
"Evolution of matter and galaxy clustering in cosmological hydrodynamical simulations"
] | [
"Jaan Einasto \nTartu Observatory\n61602TõravereEstonia\n\nEstonian Academy of Sciences\n10130TallinnEstonia\n\nICRANet\nPiazza della Repubblica 1065122PescaraItaly\n",
"Gert Hütsi \nNational Institute of Chemical Physics and Biophysics\n10143TallinnEstonia\n",
"Lauri-Juhan Liivamägi \nTartu Observatory\n61602TõravereEstonia\n",
"Changbom Park \nKorea Institute for Advanced Study\n85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea\n",
"Juhan Kim \nKorea Institute for Advanced Study\n85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea\n",
"Istvan Szapudi \nInstitute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHI\n",
"Maret Einasto \nTartu Observatory\n61602TõravereEstonia\n"
] | [
"Tartu Observatory\n61602TõravereEstonia",
"Estonian Academy of Sciences\n10130TallinnEstonia",
"ICRANet\nPiazza della Repubblica 1065122PescaraItaly",
"National Institute of Chemical Physics and Biophysics\n10143TallinnEstonia",
"Tartu Observatory\n61602TõravereEstonia",
"Korea Institute for Advanced Study\n85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea",
"Korea Institute for Advanced Study\n85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea",
"Institute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHI",
"Tartu Observatory\n61602TõravereEstonia"
] | [] | We quantify the evolution of matter and galaxy clustering in cosmological hydrodynamical simulations via correlation and bias functions of matter and galaxies. We use simulations TNG100 and TNG300 with epochs from z = 5 to z = 0. We calculate spatial correlation functions (CF) of galaxies, ξ(r), for simulated galaxies and dark matter (DM) particles to characterise the evolving cosmic web. We find that bias parameters decrease during the evolution, confirming earlier results. Bias parameters of the lowest luminosity galaxies, b 0 , estimated from CFs are lower relative to CFs of particle density-limited clustered samples of DM. At low and medium luminosities, bias parameters of galaxies are equal, suggesting that dwarf galaxies reside in the same filamentary web as brighter galaxies. We find that bias parameters b 0 , estimated from CFs of clustered DM, agree with the expected values from the fraction of particles in the clustered population, b = 1/F c . The cosmic web contains filamentary structures of various densities, and fractions of matter in the clustered and the unclustered populations are both less than unity. Thus the CF amplitude of the clustered matter is always higher than for all matter, i.e. bias parameter must be b > 1. Differences between CFs of galaxies and clustered DM suggest that these functions describe different properties of the cosmic web. | null | [
"https://export.arxiv.org/pdf/2304.09035v2.pdf"
] | 258,187,177 | 2304.09035 | 01c3b09f1dbb6737755d37172c9c36c30a4b9b65 |
Evolution of matter and galaxy clustering in cosmological hydrodynamical simulations
Jaan Einasto
Tartu Observatory
61602TõravereEstonia
Estonian Academy of Sciences
10130TallinnEstonia
ICRANet
Piazza della Repubblica 1065122PescaraItaly
Gert Hütsi
National Institute of Chemical Physics and Biophysics
10143TallinnEstonia
Lauri-Juhan Liivamägi
Tartu Observatory
61602TõravereEstonia
Changbom Park
Korea Institute for Advanced Study
85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea
Juhan Kim
Korea Institute for Advanced Study
85 Hoegi-ro, Dongdaemun-gu02455SeoulRepublic of Korea
Istvan Szapudi
Institute for Astronomy
University of Hawaii
2680 Woodlawn Dr96822HonoluluHI
Maret Einasto
Tartu Observatory
61602TõravereEstonia
Evolution of matter and galaxy clustering in cosmological hydrodynamical simulations
Accepted 2023 June 1. Received 2023 June 1; in original form 2023 April 18. Compiled using MNRAS LaTeX style file v3.0. Key words: Cosmology: large-scale structure of the universe; Cosmology: dark matter; Cosmology: theory; Methods: numerical
We quantify the evolution of matter and galaxy clustering in cosmological hydrodynamical simulations via correlation and bias functions of matter and galaxies. We use simulations TNG100 and TNG300 with epochs from z = 5 to z = 0. We calculate spatial correlation functions (CF) of galaxies, ξ(r), for simulated galaxies and dark matter (DM) particles to characterise the evolving cosmic web. We find that bias parameters decrease during the evolution, confirming earlier results. Bias parameters of the lowest luminosity galaxies, b 0 , estimated from CFs are lower relative to CFs of particle density-limited clustered samples of DM. At low and medium luminosities, bias parameters of galaxies are equal, suggesting that dwarf galaxies reside in the same filamentary web as brighter galaxies. We find that bias parameters b 0 , estimated from CFs of clustered DM, agree with the expected values from the fraction of particles in the clustered population, b = 1/F c . The cosmic web contains filamentary structures of various densities, and fractions of matter in the clustered and the unclustered populations are both less than unity. Thus the CF amplitude of the clustered matter is always higher than for all matter, i.e. bias parameter must be b > 1. Differences between CFs of galaxies and clustered DM suggest that these functions describe different properties of the cosmic web.
INTRODUCTION
The clustering of galaxies and matter and their evolution are central problems of cosmology. Differences in the distribution of galaxies and matter were noticed already in early studies by Gregory & Thompson (1978) and Tully & Fisher (1978), who showed that galaxies have a filamentary distribution with large regions devoid of galaxies. Numerical simulations by Doroshkevich et al. (1980, 1982) predicted the filamentary character of the particle distribution but showed the presence of a rarefied population of particles in voids. This difference was explained by Zeldovich et al. (1982) as an indication of a threshold mechanism in galaxy formation: galaxies form in high-density filaments and knots but not in low-density regions - cosmic voids. A physical mechanism for the formation of galaxies in dark matter (DM) halos was suggested by White & Rees (1978). A more detailed study of the formation of galaxies in the cosmic web was presented by Dekel & Silk (1986); Dekel (1986); Dekel & Rees (1987) and Bond et al. (1996). Thus galaxies are biased tracers of matter density fields.
Traditionally the difference between the distributions of galaxies and matter is quantified by the bias parameter, which is defined by the ratio of correlation functions or power spectra of galaxies and matter, b = ξ g /ξ m (Kaiser 1984). Most recent bias studies are devoted to various aspects of the formation and evolution of galaxies; for a review, see Desjacques et al. (2018). This paper focuses on the evolution of matter and galaxy clustering.
Early studies showed that matter could be divided into two populations: the clustered population with galaxies and systems of galaxies, and the unclustered populations in low-density filaments and voids. Unclustered in this context means much lower correlations than that of galaxies, but not necessarily zero correlations. Einasto & Saar (1987) and Einasto et al. (1994) identified the clustered matter with samples of DM particles with local densities above a certain threshold, ρ ≥ ρ 0 , and the unclustered matter with local densities ρ < ρ 0 . Einasto et al. (1994, 1999, 2023) investigated the relation between clustered and total matter using simple analytic models. They found that the bias parameter b is related to the fraction of matter in the clustered population, F c , as follows: b = 1/F c . Repp & Szapudi (2020) used a statistical two-state Ising bias model, which agrees with this result in the zero temperature limit. However, it is unclear how accurately the high-density regions of DM represent real galaxies, particularly for low luminosities. The fraction of matter in the clustered and approximately unclustered populations can be found in numerical simulations of the evolution of the cosmic web and determined independently of bias values from the correlation function. This factor yields an additional constraint on the bias parameter, besides the traditional correlation or power spectrum analyses.
The evolution of the biasing properties with cosmic epoch can be studied using numerical simulations of the cosmic web. One of the first studies of the evolution of the two-point correlation function (CF) of cold dark matter (CDM) universes was by Davis et al. (1985). The authors found that the amplitude of the CF increases with time. They assumed that galaxies form in high-density peaks of the underlying matter density field and that the correlation length r 0 of simulated galaxies exceeds the correlation length of matter by a factor of 2.4 at expansion parameter a = 1.4. A similar biasing recipe was used by Einasto & Saar (1987) and Gramann (1987, 1988) in CDM and ΛCDM universes. Using ΛCDM simulations and the b = 1/F c constraint, Einasto et al. (1999) obtained for the bias parameter of galaxies b c = 1/F c = 1.32 ± 0.13. Cen & Ostriker (1992), Blanton et al. (1999) and Cen & Ostriker (2000) compared distributions of galaxies and matter using hydrodynamical simulations of the formation and evolution of galaxies. The authors found that there are no galaxies in regions of low spatial density of matter; the region of total density that includes galaxies starts at a mean density ρ ≈ 3. Cen & Ostriker (2000) found for the bias parameter at the present epoch a value b = 1.35.
Using Millennium simulations, Springel et al. (2005, 2006) found for L ⋆ -galaxies bias values b = 2.7 at z = 3 and b = 0.9 at z = 0, i.e. at the present epoch L ⋆ -galaxies are antibiased. More recent hydrodynamical simulations of galaxy formation allowed us to include essential physical processes to simulate and follow galaxies' formation and evolution in more detail. Examples of such simulations are Illustris (Vogelsberger et al. 2014a,b), EAGLE (Schaye et al. 2015), HorizonAGN (Dubois et al. 2016), and The Next Generation (TNG) series of hydrodynamical simulations by Springel et al. (2018) and Pillepich et al. (2018a,b). The first analysis of matter and galaxy clustering using TNG100 and TNG300 simulations was made by Springel et al. (2018), who calculated bias parameter values for a range of evolution epochs from z = 5 to z = 0, separately for galaxies of different stellar masses and star formation rates, and for halos of various masses. For the present epoch z = 0, Springel et al. (2018) found a value b ≈ 1.02. As noted by Springel et al. (2018), this low value of the bias parameter is consistent with analytic models by Mo & White (1996), Sheth & Tormen (1999) and Tinker et al. (2010), and with the observed CF and bias parameter of SDSS galaxies, as found by Li & White (2009).
This short overview of previous determinations of the bias parameter shows that bias values for L ⋆ -type galaxies are concentrated around two values, b ⋆ ≈ 1 and b ⋆ ≈ 1.3. The smaller value is obtained from galaxy clustering studies, either observational or using modern hydrodynamical simulations of the evolution of galaxies. The higher value is found in DM-only numerical simulations and early hydrodynamical simulations. The difference between these values is much larger than possible random errors. However, biasing is not a phenomenon which can be characterised by CFs or power spectra alone. It is closely related to the division of matter into clustered and approximately unclustered populations of dark and baryonic matter. Thus, the bias parameter depends on the luminosity of galaxies and the fraction of matter in the clustered and approximately unclustered populations. The first factor is well-known from the first studies of the bias phenomenon by Kaiser (1984) and Bardeen et al. (1986). The second factor is related to the evolution of the cosmic web.
In this paper, our main goal is to characterise the differences in the bias evolution of galaxies and DM. To do this, we determine the bias parameters of galaxies and of clustered DM particles used as test objects in the correlation analysis. We use the Illustris TNG simulations of the evolution of the cosmic web, where data on simulated galaxies and DM particles are available. The TNG series uses three box sizes, L 0 = 35, 75, 205 h −1 Mpc, called TNG50, TNG100 and TNG300. The latter two are large enough to study the evolution of galaxies in the cosmic web. We shall use TNG100 and TNG300 simulations within cosmic epochs corresponding to redshifts from z = 5 to the present epoch z = 0. These simulations have sufficient mass resolution to track the evolution of galaxies substantially below L ⋆ . We analyse the clustering properties of simulated galaxies of various luminosities and of DM particle samples with varying particle density limits. In addition to the correlation analysis, we study the evacuation of DM particles from voids, which yields independent information on the evolution of the bias parameter.
As an independent check, we also analyse the evolution of the bias parameter using Horizon Run 5 (HR5) simulations by Lee et al. (2021) and Park et al. (2022).
The paper is structured as follows. In Section 2 we give an overview of TNG and HR5 galaxy and DM samples and methods to calculate correlation functions and bias properties. In Section 3 we analyse the evolution of the physical properties of simulated galaxies. In Section 4 we describe the correlation and bias functions of simulated galaxies and DM density-limited samples. We analyse the evolution of clustering properties of simulated galaxies of various luminosity and DM density-limited samples. In Section 5 we discuss our results and compare them with earlier studies. Section 6 summarises the conclusions of our study.
DATA AND METHODS
In this Section, we describe TNG100, TNG300 and HR5 simulations used in our analysis, and methods to calculate correlation and bias functions of DM and simulated galaxy samples.
TNG simulations
Basic data on TNG100 and TNG300 simulations are given in Springel et al. (2018) and Pillepich et al. (2018a). High-resolution versions of these simulations, TNG100-1 and TNG300-1, have particle numbers 1820^3 and 2500^3, respectively, both for DM and gas particles, and the same number of cells. Low-resolution versions, TNG100-3 and TNG300-3, have 455^3 and 625^3 cells and equal numbers of DM and gas particles; the respective masses of DM and gas particles are given in Table 1. TNG simulations are made for cosmology parameters Ω_m = Ω_DM + Ω_b = 0.3089, Ω_b = 0.0486, Ω_Λ = 0.6911, and Hubble constant H_0 = 100 h km s^-1 Mpc^-1 with h = 0.6774. Initial conditions were generated for the epoch z = 127 using a linear theory power spectrum with a normalisation σ_8 = 0.8159 and spectral index n_s = 0.9667. For conformity with the literature, we express lengths in units h^-1 Mpc.

Table 1. Parameters of the TNG simulations.

Simulation   L_0 [h^-1 Mpc]   m_DM [h^-1 M_⊙]   m_gas [h^-1 M_⊙]
TNG100-1     75               7.5 × 10^6        1.4 × 10^6
TNG100-3     75               4.8 × 10^8        -
TNG300-1     205              4.0 × 10^7        7.4 × 10^6
TNG300-3     205              4.5 × 10^9        7.0 × 10^8

Columns give: (1) name of simulation; (2) box size in h^-1 Mpc; (3) DM particle mass; (4) gas particle mass.

Table 2. Numbers of DM particles and simulated galaxies.

Simulation   z     N_DM       N_gal
TNG100-1     2     455^3        680 244
TNG100-1     3     455^3        836 537
TNG100-1     5     455^3        762 174
TNG100-1     10    455^3        112 084
TNG300-1     0     625^3      1 947 928
TNG300-1     0.5   625^3      2 324 925
TNG300-1     1     625^3      2 678 779
TNG300-1     2     625^3      3 183 544
TNG300-1     3     625^3      3 218 212
TNG300-1     5     625^3      2 043 334
TNG300-1     10    625^3         65 012

Columns give: (1) name of simulation; (2) simulation epoch z; (3) number of DM particles; (4) number of galaxies.

We downloaded from the TNG site subhalo data for simulations TNG100-1, TNG300-1 and TNG300-3, and DM data for TNG100-3 and TNG300-3. Files with DM data for TNG100-1 and TNG300-1 are too large for downloading. Experience with the Uchuu simulation shows that randomly selected DM particle samples yield correlation analysis results fully compatible with those from full DM particle samples (Ishiyama et al. 2021).
For subhalos, we extracted x, y, z-coordinates, subhalo total and stellar masses, and luminosities of simulated galaxies in g and r colours from the TNG website. Our primary analysis is done in the r-band, but we also analysed g − r colours and masses of subhalos. Galaxy absolute magnitudes in the r-band were used as labels to select simulated galaxy samples for further analysis. The derivation of galaxy magnitudes is described by Pillepich et al. (2018a). Numbers of DM particles and simulated galaxies for all redshifts used in our analysis are given in Table 2. The number of subhalos for simulation TNG300-3 is much larger than the number of galaxies, i.e. there exists a large number of subhalos with no galaxy data. This simulation's resolution is insufficient to find galaxy data for low-mass subhalos. For this reason, galaxies of the simulation TNG300-3 were not used in the final analysis. For simulations TNG100-1 and TNG300-1, almost all subhalos contain galaxies. Thus we give for these simulations in Table 2 only the numbers of simulated galaxies.
For simulations TNG100-3 and TNG300-3, we extracted DM particle x, y, z-coordinates and local densities. The local total comoving mass density is estimated using the standard cubic-spline SPH kernel over all particles/cells within a comoving radius of the sphere centred on the particle enclosing the 64 ± 1 nearest DM particles (for details see Nelson et al. 2015; Springel et al. 2018). Densities are given in Solar masses per cubic comoving kpc, corresponding to low numerical values for the density. We used these densities as labels to select DM particles of the clustered population, ρ ≥ ρ 0 , for various density thresholds ρ 0 . This method to select particles was applied earlier by Jensen & Szalay (1986), Einasto et al. (1991), Szapudi & Szalay (1993), and Little & Weinberg (1994). This model to select particles is similar to the Ising model, discussed by Repp & Szapudi (2019b,a). This method also allows finding the fraction of DM particles in the clustered and unclustered populations.
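In code, this density-threshold selection reduces to a boolean mask over the particle catalogue. The sketch below is a minimal illustration, not the pipeline used here; the array names (positions, log_rho) and the threshold value are placeholders, and the local densities are assumed to be the precomputed SPH estimates described above.

```python
import numpy as np

def split_populations(positions, log_rho, log_rho0=-7.8):
    """Split a DM particle catalogue into clustered (rho >= rho0) and
    unclustered (rho < rho0) populations; return the clustered
    fraction F_c, for which the expected bias is b = 1/F_c."""
    mask = log_rho >= log_rho0            # clustered population
    f_c = mask.mean()                     # fraction of particles in it
    return positions[mask], positions[~mask], f_c

# Toy usage with random placeholders standing in for the catalogue.
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 75.0, size=(10_000, 3))   # h^-1 Mpc
log_rho = rng.normal(-8.0, 1.0, size=10_000)           # mock SPH densities
clustered, unclustered, f_c = split_populations(positions, log_rho)
print(f"F_c = {f_c:.3f}, expected bias b = 1/F_c = {1.0/f_c:.3f}")
```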
HR5 simulations
Hydrodynamical Horizon Run 5 (HR5) simulations are described by Lee et al. (2021) and Park et al. (2022). Simulations were run with cosmological parameters Ω m = 0.3, Ω b = 0.047, Ω Λ = 0.7, and Hubble constant h 0 = 0.684, in a cubic box of comoving physical length L = 1049 cMpc. Within this cube, a high-resolution cuboid with a volume 1049 × 119 × 127 cMpc^3 was selected, which allowed resolving kiloparsec physical scales. Simulations started at redshift z = 200 and finished at redshift z = 0.625. The whole box has, at redshift z = 0.625, 7.7 × 10^9 DM particles and 2.2 × 10^9 star particles. In the high-resolution region at epoch z = 0.625 there are 290 086 galaxies of mass M ⋆ ≥ 10^9 M ⊙ . The volume of the high-resolution cuboid is (204 h^-1 Mpc)^3, almost equal to the volume of the TNG300-3 simulation cube. The simulation TNG300-1 has 290 061 galaxies with absolute magnitudes M r ≤ −18.41.
For the present study, one of the coauthors (JK) calculated galaxy and DM correlation functions for simulation epochs z = 0.625, 1, 2, 3, 5, 7, and for galaxies of stellar masses M ⋆ ≥ 10 9 , ≥ 3 × 10 9 , ≥ 10 10 M ⊙ . CFs were calculated with the Landy & Szalay (1993) estimator in 30 logarithmic bins up to separation r = 140 cMpc.
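For reference, the Landy & Szalay (1993) estimator, ξ = (DD − 2DR + RR)/RR with normalised pair counts, can be sketched in a few lines of Python. The brute-force k-d-tree version below is only meant to fix the definitions and is far from the optimised codes actually used for catalogues of this size; the catalogue arrays are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, randoms, edges):
    """Two-point CF with the Landy-Szalay estimator:
    xi = (DD - 2 DR + RR) / RR, with normalised pair counts
    in the separation bins defined by `edges`."""
    def pair_counts(a, b=None):
        tree_a = cKDTree(a)
        tree_b = tree_a if b is None else cKDTree(b)
        cumulative = tree_a.count_neighbors(tree_b, edges)
        return np.diff(cumulative).astype(float)

    n_d, n_r = len(data), len(randoms)
    dd = pair_counts(data) / (n_d * (n_d - 1))
    rr = pair_counts(randoms) / (n_r * (n_r - 1))
    dr = pair_counts(data, randoms) / (n_d * n_r)
    return (dd - 2.0 * dr + rr) / rr

# Toy usage: uniform point sets stand in for the galaxy and random
# catalogues, so xi should scatter around zero.
rng = np.random.default_rng(2)
galaxies = rng.uniform(0.0, 140.0, size=(2_000, 3))
randoms = rng.uniform(0.0, 140.0, size=(8_000, 3))
edges = np.logspace(0.0, np.log10(140.0), 31)   # 30 logarithmic bins
xi = landy_szalay(galaxies, randoms, edges)
```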
Calculation of the correlation and bias functions
Simulated DM files contain a large number of particles. Thus we applied the Szapudi et al. (2005) grid-based method with N grid = 2048 3 for finding correlation and bias functions on sub-megaparsec scales. CFs for simulation TNG100-1 were found up to separations r max = 37.5 h −1 Mpc with 90 logarithmic bins, and for simulation TNG300-1 up to separations r max = 100 h −1 Mpc with 98 logarithmic bins. For consistency with the DM samples, we used this method also for simulated galaxies, applying the same bins as for DM samples. This allows easy computation of bias functions.
We calculated the CFs ξ(r) for all samples of TNG simulations for two sets of data. In the first set, we used simulated galaxies as test objects. CFs were calculated with a series of galaxy luminosity limits, M r . In the second set, we used as test objects DM particles with a series of particle density limits, ρ 0 . We always used galaxy/particle samples in cumulative form, i.e. all galaxies/particles at and above some limit are considered. The ratio of CFs of the galaxy (particle) samples with luminosity limits M r (particle density limits ρ 0 ) to the correlation functions of all DM, both at identical separations r, defines the bias function b(r, M r ):
b^2(r, M_r) = ξ(r, M_r) / ξ_DM(r),        (1)
and a similar formula for b(r, ρ 0 ), where the limiting luminosity M r is replaced by the particle density limit ρ 0 . Bias depends on the luminosity M r of galaxies (particle density limit ρ 0 ) used in the calculation of CFs. Bias functions have a plateau at 6 ≤ r ≤ 20 h −1 Mpc, see Fig. 5 below. This feature is similar to the plateau around k ≈ 0.03 h Mpc −1 of relative power spectra (Einasto et al. 2019b). Following Einasto et al. (2020, 2021b, 2023), we use this plateau to measure the relative amplitude of the CF, i.e. of the bias function, as the bias parameter,
b(M_r) = b(r_0, M_r),        (2)
and a similar formula where M r is replaced by the particle density limit ρ 0 ; here r 0 is the value of the separation r at which the amplitude of the bias function is measured. We calculated for all samples bias parameters at the comoving separation r 0 = r 10 = 10 h −1 Mpc, as functions of the galaxy absolute magnitude in r colour, M r , or of the particle density limit ρ 0 for DM simulations. At smaller distances, bias functions are influenced by the distribution of particles and galaxies in halos, and at larger distances the bias functions have wiggles, which makes the comparison of samples with various galaxy luminosity limits difficult.
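Given tabulated CFs for a tracer sample and for the full DM field, Eqs. (1) and (2) amount to a square-rooted ratio read off at the plateau separation. A minimal sketch, assuming the CFs are already binned on a common separation grid r:

```python
import numpy as np

def bias_function(xi_tracer, xi_dm):
    """Eq. (1): b(r) = sqrt(xi_tracer(r) / xi_DM(r)), bin by bin."""
    return np.sqrt(xi_tracer / xi_dm)

def bias_parameter(r, xi_tracer, xi_dm, r0=10.0):
    """Eq. (2): the bias function evaluated at the plateau separation
    r0 = 10 h^-1 Mpc (nearest separation bin)."""
    b = bias_function(xi_tracer, xi_dm)
    return b[np.argmin(np.abs(r - r0))]
```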
EVOLUTION OF PHYSICAL PROPERTIES OF SIMULATED GALAXIES
In this Section, we describe the evolution of the physical properties of simulated galaxies and DM samples. We begin with a description of the evolution of power spectra and CFs of DM samples; thereafter we discuss the evolution of the spatial distribution and luminosity functions of simulated galaxies.
Evolution of DM power spectra and CFs
We calculated the power spectra and CFs of DM particle samples for the simulation TNG300-3. To find power spectra we applied the standard procedure as discussed by Jing (2005). For CFs, we used the Szapudi et al. (2005) method as discussed above. The evolution of power spectra and correlation functions of DM is presented in Fig. 1. As expected, the amplitudes of both functions increase considerably with time, describing the growth of the amplitudes of density fluctuations. Fig. 1 shows that CFs of DM deviate considerably from a simple power law. At large separations, CFs can be approximated by a power law with exponent ≈ −1.8. In this separation range, CFs describe fractal properties of the distribution of halos, as discussed by Einasto et al. (2020). At smaller separations, CFs describe the distribution of DM particles in halos, as discussed among others by Springel et al. (2018) and Einasto et al. (2020, 2023). The characteristic diameters of halos determine the transition between these regimes at a separation r ≈ 3 h −1 Mpc.
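Schematically, both statistics follow from FFTs of a gridded density contrast δ: the power spectrum from the squared Fourier amplitudes, and the CF from the Wiener-Khinchin relation. The sketch below shows the principle only; it omits the shot-noise and mass-assignment corrections applied in the standard procedure, and the function and variable names are placeholders.

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=20):
    """Spherically averaged P(k) of a density-contrast grid delta
    (no shot-noise or window corrections; illustration only)."""
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta) / delta.size
    pk3d = box_size**3 * np.abs(delta_k) ** 2
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    k_nyquist = np.pi * n / box_size
    edges = np.linspace(2.0 * np.pi / box_size, k_nyquist, n_bins + 1)
    idx = np.digitize(kmag, edges)
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk

def correlation_grid(delta):
    """xi(r) on the grid via Wiener-Khinchin: the inverse FFT of the
    squared Fourier amplitudes gives <delta(x) delta(x+r)>."""
    f = np.fft.fftn(delta)
    return np.fft.ifftn(np.abs(f) ** 2).real / delta.size
```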
Evolution of the spatial distribution of galaxies
We calculated density fields for DM and galaxies. The DM density field was computed with the nearest grid point (NGP) method on a grid of size N_grid^3 equal to the number of particles N_part; for the TNG100-3 simulation, N_grid = 455 and N_part = 455^3. Results are presented in Fig. 2 for the TNG100-3 simulation in x, y-coordinates in a sheet of a thickness of 11 simulation cells, 1.8 h −1 Mpc, across a massive cluster. The top panels are for full DM samples, the bottom panels for DM particles with densities log ρ ≥ log ρ 0 = −7.8, i.e. all particles of the clustered population. Colour codes are for grid cells of different spatial densities.
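The NGP assignment itself is a one-line histogram over cell indices; a minimal sketch, with the particle positions, box size and grid size as placeholders:

```python
import numpy as np

def ngp_density(positions, box_size, n_grid):
    """Nearest-grid-point density field: count particles per cell and
    convert counts to density contrast delta = rho/<rho> - 1."""
    idx = np.floor(positions / box_size * n_grid).astype(int) % n_grid
    counts = np.zeros((n_grid,) * 3)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return counts / counts.mean() - 1.0
```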
The Figure demonstrates the presence of the cosmic web from the early epoch z = 5 to the present epoch z = 0. All principal elements of the cosmic web are present already at the epoch z = 5, see also Springel et al. (2006, 2018), Park et al. (2022) and Asgari et al. (2023). This result is consistent with Table 2, showing that the number of subhalos/galaxies did not change considerably during this period. The number of resolved galaxies, including dwarf satellite galaxies, reaches a maximum at z = 3 and decreases slightly thereafter. The contraction of superclusters, i.e. the flow of low-mass systems towards the central clusters of superclusters, is also visible in these plots. A similar effect was noted by Einasto et al. (2019a, 2021a).
The top panels of Fig. 2 show the presence of faint DM filaments, absent in the bottom panels in the density fields of the clustered matter. Here only strong galaxy filaments exist, and most of the volume has zero density. A similar difference is seen in plots of DM and galaxy/halo distributions by Springel et al. (2006, 2018), Park et al. (2022) and Asgari et al. (2023). Already the visual inspection of the bottom panels shows the increase of the fraction of the volume of zero-density cells with time (decreasing z).
Evolution of luminosity functions of galaxies
Differential luminosity functions of galaxies of the TNG100-1 and TNG300-1 simulations are shown in the top panels of Fig. 3. This Figure clearly demonstrates differences in the numbers of galaxies in these simulations. As expected, the number of galaxies per unit magnitude interval and cubic megaparsec is the largest in the TNG100-1 simulation. The increase in the number of galaxies in the TNG100-1 simulation comes essentially from dwarf galaxies with luminosities M r ≥ −18.0, where the number of dwarf galaxies per cubic megaparsec is up to ten times larger than in the simulation TNG300-1. The faint-end tail of the luminosity distribution of the TNG100-1 simulation is two magnitudes fainter than that of the simulation TNG300-1.
Differences between simulations TNG100-1 and TNG300-1 are present also in the high-luminosity tail of the distribution at various epochs. At the early epoch z = 10, the most luminous galaxies have luminosity M r ≈ −18.0 for TNG100-1 and M r ≈ −20.0 for TNG300-1. For both simulations, there is a rapid increase of the luminosity of the brightest galaxies between redshifts z = 10 and z = 5. This increase is due to the merging of galaxies in the centres of massive halos. The difference between simulations TNG100-1 and TNG300-1 can probably be explained by the larger volume of simulation TNG300-1, which contains larger modes of density perturbations and allows more effective merging. Notice also that the luminosity of the most luminous galaxies with M r < −22.0 increases from redshift z = 10 to z = 3, and thereafter decreases slightly. The luminosity function curve for redshift z = 0 at the high end is lower than for redshifts z = 2 and z = 3. The most essential difference between simulations at epochs z = 10 and z ≤ 5 is in the number of simulated galaxies: at redshift z = 10 it is much lower than at smaller redshifts, as seen from Table 2 and Fig. 3. In further analyses, we shall use only simulated galaxies at redshifts z ≤ 5.
The bottom panels of Fig. 3 show the distribution of DM particle densities at various redshifts z ≤ 5. The particle density distributions at the high-density end differ by about three orders of magnitude between redshifts z = 5 and z = 0, much more than the corresponding change in the luminosities of the most luminous galaxies over the same redshift interval.
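For reference, the differential luminosity function plotted in Fig. 3 is a magnitude histogram normalised to unit magnitude interval and unit comoving volume. A minimal sketch, with the magnitude array, bin width and box size as placeholders:

```python
import numpy as np

def luminosity_function(m_r, box_size, dm=0.5):
    """Differential luminosity function: number of galaxies per unit
    absolute magnitude per (h^-1 Mpc)^3 of simulation volume."""
    edges = np.arange(m_r.min(), m_r.max() + dm, dm)
    counts, _ = np.histogram(m_r, bins=edges)
    phi = counts / (dm * box_size**3)
    return 0.5 * (edges[:-1] + edges[1:]), phi
```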
EVOLUTION OF CLUSTERING PROPERTIES OF LUMINOSITY AND PARTICLE DENSITY LIMITED SAMPLES OF TNG SIMULATIONS
The Section begins with a description of the evolution of correlation and bias functions with time. We also show the evolution of correlation and bias functions of galaxies of HR5 simulations. Thereafter we describe the evolution of bias parameters with cosmic epoch and their dependence on the luminosities of galaxies. Finally, we describe the determination of bias parameters of the faintest galaxies and the fraction of matter in the clustered population.
4.1 Evolution of CFs of TNG simulation galaxies and particle density limited DM samples, and CFs of galaxies of HR5 simulations
We calculated CFs of galaxies and of particle density limited DM samples for all selected simulations. For all samples, we applied the Szapudi et al. (2005) method. As a reference sample, we used DM particles from simulations TNG100-3 and TNG300-3. In Fig. 4 we show CFs of galaxies for TNG100-1 and TNG300-1 simulations. For all samples, a large range of limiting luminosities M r was applied, from M r = −11.0 to M r = −23.0. The top and second rows are for the simulations TNG100-1 and TNG300-1; the left, middle and right panels are for evolutionary epochs z = 0, z = 2 and z = 5, respectively. Bold black lines show DM CFs, and coloured lines present galaxy CFs for various M r limits, shown as labels. Fig. 4 shows that CFs of galaxies for various M r limits form sequences of increasing amplitude with increasing luminosity limits. The number of galaxies in most samples is large; thus random errors of CFs are very small. Only for the brightest galaxies are errors larger, and their CFs show visible scatter. The Figure also shows that at the present epoch z = 0 the CFs of the faintest galaxies almost coincide with the CFs of DM. For earlier epochs, there exists a gap in the amplitudes of CFs of galaxies relative to DM, which increases with the simulation epoch z. This is the biasing effect, discussed in detail in the next subsection.
In the third and fourth rows of Fig. 4, we present CFs of particle density limited DM samples of TNG100-3 and TNG300-3 simulations for epochs z = 0, 2, 5. Here we used particle density limits ρ 0 in log ρ 0 units, starting from log ρ 0 = −8.3. The limit log ρ 0 = −8.3 corresponds to DM particles not associated with galaxies at the epoch z = 0; the corresponding CFs are marked with dashed orange lines. DM samples corresponding to the faintest simulated galaxies at z = 0 have the limit log ρ 0 = −7.8. DM samples corresponding to the most luminous galaxies have at the present epoch log ρ 0 ≈ −3.3.
The bottom panels of Fig. 4 show CFs of galaxies of HR5 simulations for epochs z = 0.625, 2, 5, calculated with the Landy & Szalay (1993) estimator for separations from 1 to 100 in comoving physical megaparsecs. Here we used simulated galaxies with stellar mass lower limits M ⋆ = 1, 3, 10 in units of 10^9 M ⊙ .
Evolution of bias functions of TNG and HR5 simulation galaxies
CFs of galaxies divided by CFs of DM define bias functions, see Eq. (1). They are shown in Fig. 5. The top panels are for simulations TNG100-1; the left, central and right panels are for epochs z = 0, z = 2 and z = 5. Various colours are for bias functions of galaxies of different luminosity limits. In the second and third rows of Fig. 5, we present bias functions of particle density limited DM samples of TNG100-3 and TNG300-3 simulations for epochs z = 0, 2, 5. As for the CFs, we used particle density limits ρ 0 in log ρ 0 units, starting from log ρ 0 = −8.3. The two bottom rows of Fig. 5 are for bias functions of TNG300-1 simulations for epochs z = 0, 0.5, 1, 2, 3, 5, superposed with bias functions of HR5 simulations for similar epochs (the epoch z = 0.625 is shown in the panel for z = 0.5). Separations r of HR5 simulations were reduced to units h −1 Mpc, using the adopted Hubble constant h 0 = 0.684. Star masses of galaxies are in units 10^9 M ⊙ .
Bias functions, presented in Fig. 5, have three important properties. The first property is that bias function curves for galaxies of low luminosity, M r ≥ −18.0, are almost identical. We discuss this effect in more detail below.
The second important feature is the shape of bias functions at separations r ≤ 5 h −1 Mpc. In this separation range bias functions have larger values than at medium and larger separations. This is due to the effect of halos, which have characteristic diameters up to r ≈ 5 h −1 Mpc.

Figure 4. CFs of galaxies, ξ(r), for epochs z = 0, 2, 5, shown in the left, central and right panels. The top and second rows are for the simulations TNG100-1 and TNG300-1, respectively. Separations r are in comoving units. Magnitude limits in the r photometric system are shown as symbol labels. The third and fourth rows are for TNG100-3 and TNG300-3 DM simulations. Coloured lines show functions for various DM particle density limits log ρ 0 . The bottom row is for CFs of HR5 simulations for epochs z = 0.625, 2, 5. The stellar mass of galaxies is in units 10^9 M ⊙ . Black bold lines show CFs of DM for respective epochs.

Figure 5. Bias functions of galaxies, b(r), for epochs z = 0, 2, 5, shown in the left, central and right panels in the first three rows. The top row is for simulations TNG100-1. Separations r are in comoving units. Magnitude M r limits are shown as symbol labels. The second and third rows are for TNG100-3 and TNG300-3 DM simulations. Coloured lines show functions for various particle density limits log ρ 0 . The limit log ρ 0 = −8.3 is marked with dashed orange lines; it corresponds to DM particles not associated with galaxies. The two bottom rows are for bias functions of TNG300-1 simulations for epochs z = 0, 0.5, 1, 2, 3, 5, superposed with bias functions of HR5 simulations for similar epochs (the epoch z = 0.625 is shown in the panel for z = 0.5). Star masses of galaxies are in units 10^9 M ⊙ . Level b = 1 is the bias function of DM.
The third feature is the amplitude at very small separations, r ≤ 1 h −1 Mpc, for galaxy samples of the lowest luminosities. Here bias functions are lower than at higher separations. This effect is observed for epochs z ≤ 3. For the present epoch z = 0 this means anti-biasing, since b < 1. Bias functions of particle density limited DM samples of simulations TNG100-3 and TNG300-3 do not have this feature; in this separation interval, and for particle density limits log ρ 0 ≤ −6.8, their bias functions are almost parallel lines with amplitudes increasing with increasing limit log ρ 0 .
Evolution of bias parameters of luminosity and particle density limited samples of TNG simulations
Following Einasto et al. (2023), we define bias parameters as values of the bias function at separation r 0 = 10 h −1 Mpc. The top panels of Fig. 6 present bias parameters for simulations TNG100-1 and TNG300-1 as functions of redshift z. Different colours show bias parameters for galaxies of various M r luminosity. As we see, bias parameters of galaxies form smooth curves, b(M r ), with amplitudes increasing with luminosity M r . With decreasing luminosity, bias parameters b(M r ) approach an asymptotic low limit, b(M r ) → b 0 . The low-luminosity limits b 0 of simulations TNG100-1 and TNG300-1 almost coincide. The bottom panels of Fig. 6 show the evolution of bias parameters of particle density limited DM samples of simulations TNG100-3 and TNG300-3 for various limits log ρ 0 . Samples with DM particle density limit log ρ 0 = −7.8 correspond to the faintest clustered population, similar to the faintest galaxies of simulations TNG100-1 and TNG300-1. The dashed orange line corresponds to DM samples with a limit log ρ 0 = −8.3, i.e. to DM particles below the density limit needed to form stars and galaxies. The Figure shows that in simulation TNG300-3 the curves for the highest particle density limits are higher than in simulation TNG100-3. This difference is due to the various shapes of the density distributions, as seen in the bottom panels of Fig. 3.
Bias parameters as functions of luminosities of galaxies and particle density limits
The top panels of Fig. 7 present bias parameters b(M r ) as functions of the luminosity M r for epochs z = 0 to z = 5. As noticed in the previous subsection, an essential property of bias parameters is that at low and medium luminosities b(M r ) approaches asymptotically a low-luminosity limit, b 0 , and rises at luminosities from M r ≤ −15 to M r ≤ −18; the corresponding stellar masses are 1.7 × 10^8 M ⊙ and 2.7 × 10^9 M ⊙ . Details depend on the simulation epoch z.
The bottom panels of Fig. 7 show the dependence of the bias parameter on the particle density limit log ρ 0 of clustered DM samples of simulations TNG100-3 and TNG300-3. This dependence is different from the dependence of the bias parameter on luminosity in simulations TNG100-1 and TNG300-1. In the clustered DM samples there is no flat region of the bias function b(log ρ 0 ) at low particle density limits ρ 0 . Rather, the amplitudes of the b(log ρ 0 ) curves rise continuously with increasing log ρ 0 . Above a very low particle density limit, log ρ 0 ≈ −10, essentially all particles are included, see Fig. 3; thus the bias parameter should be b = 1 by definition. The b(log ρ 0 ) curves indeed converge to 1 at log ρ 0 ≈ −9.
At a particle density limit log ρ 0 = −7.8, the bias parameter values of DM simulations TNG100-3 and TNG300-3 are almost equal to the bias values of the faintest galaxies of the TNG100-1 and TNG300-1 simulations, cf. the top panels of Fig. 7. At higher particle density limits, the b(log ρ 0 ) curves of the DM selected samples from TNG100-3 and TNG300-3 are rather similar to the b(M r ) curves of the TNG100-1 and TNG300-1 simulations. The basic difference lies in the bias values for earlier epochs: here b(log ρ 0 ) curves lie higher than b(M r ) curves. This difference is due to the fact that at earlier epochs identical particle density limits log ρ 0 correspond to more luminous galaxies, see Fig. 3.
Evolution of bias parameters of HR5 simulations
We show in Fig. 8 the evolution of the bias parameter b(z, M ⋆ ) of HR5 simulations. In the left panel, the bias parameter is presented as a function of the epoch z, in the right panel, as a function of the mass of simulated galaxies M ⋆ in units 10 9 M ⊙ . The evolution of the bias parameter of HR5 galaxies is close to that of the TNG300-1 simulations presented in previous Figures. The main difference is the absence of data for epoch z = 0 and for very low-mass galaxies.
Amplitudes of CFs at fixed separations
The stable behaviour of the bias parameter of the faintest galaxies raises the question of how the amplitudes of CFs at a fixed separation, ξ 0 = ξ(r 0 ), evolve with time. As discussed above, we use CF amplitudes at the separation r 0 = 10 h −1 Mpc to define bias parameters. We calculated CF amplitudes at separation r 0 = 10 h −1 Mpc for TNG100-1 and TNG300-1 simulations as functions of the limiting magnitude M r , and for TNG100-3 and TNG300-3 simulations as functions of the limiting particle density log ρ 0 . These dependencies are shown in the top panels of Fig. 9 for TNG100-1 and TNG300-1 galaxy simulations, and in the bottom panels of Fig. 9 for TNG100-3 and TNG300-3 DM simulations.

Fig. 9 shows several important properties of the amplitudes of CFs. First, the growth of amplitudes of CFs of galaxies with cosmic epoch z is very modest, much smaller than the growth of amplitudes of CFs of DM presented in Fig. 1. This property is well-known, see Springel et al. (2006, 2018). Second, we see a big difference between the shapes of the ξ 0 (M r ) and ξ 0 (log ρ 0 ) functions of galaxy and DM simulations. In DM simulations TNG100-3 and TNG300-3, there exists no asymptotic flat region of the ξ 0 (log ρ 0 ) curves at low particle density limits, such as is present in the low-luminosity regions of the ξ 0 (M r ) functions of TNG100-1 and TNG300-1 simulations. Moreover, upper limits of DM densities at high redshifts are much lower than at low redshifts, see Fig. 3; thus ξ 0 (log ρ 0 ) curves for high redshifts rise rapidly with increasing log ρ 0 , since they correspond to particles with higher density limits. The third essential difference between amplitudes of CFs of galaxy and DM simulations is the level of ξ 0 in low-luminosity regions: in DM simulations TNG100-3 and TNG300-3, it is higher than in the galaxy simulations TNG100-1 and TNG300-1.

Figure 9. Top panels plot the evolution of the CF amplitude at fixed separation, ξ(10), as a function of magnitude limits M r for simulations TNG100-1 and TNG300-1. Bottom panels show the evolution of the CF amplitude ξ(10) of the DM simulations TNG100-3 and TNG300-3 for various particle density limits log ρ 0 .

Figure 10. Left: Evolution of the bias parameter b 0 of the faintest galaxies from simulations TNG100-1 and TNG300-1. Dashed curves show the evolution of the inverse of the fraction of matter in the clustered population, b 0 = 1/F c , for simulations TNG100-3 and TNG300-3. Right: Evolution of the bias parameter b 0 of the faintest galaxies from simulations TNG100-3 and TNG300-3 at a particle density limit log ρ 0 = −7.8. The dashed curves show the evolution of the inverse of the fraction of matter in the clustered population b 0 = 1/F c for simulations TNG100-3 and TNG300-3.
We also calculated the evolution of correlation lengths r 0 of TNG100-1 and TNG300-1 simulations, r 0 (z), for M r limited samples. Results are similar to functions r 0 (M ⋆ ), found by Springel et al. (2018), and are not presented here.
Bias parameter of faintest galaxies and the fraction of matter in the clustered population
Our analysis shows that the bias parameter of the faintest galaxies, b 0 , is a well-defined quantity, almost independent of the luminosity of galaxies but dependent on the evolutionary epoch z. We determined the asymptotic bias parameter of the faintest galaxies, b 0 , for TNG100-1 and TNG300-1 simulations. Bias parameter curves for luminosities −11.0 ≤ M r ≤ −14.0 of simulation TNG100-1 vary only slightly; we accepted the asymptotic value b 0 = 1.045. Fig. 10 shows the dependence of b 0 on cosmic epoch z. Our data show that simulations TNG100-1 and TNG300-1 yield similar results for the asymptotic bias parameter of the faintest galaxies b 0 at the present epoch z = 0. The right panel of Fig. 10 shows the evolution of the bias parameter b 0 of DM simulations TNG100-3 and TNG300-3. The particle density ρ, extracted from the TNG website, includes all matter: DM plus baryonic gas and stellar matter. The ρ value for stellar matter is not given on the TNG website. We used a two-step procedure to determine the lower ρ 0 limit for stellar matter. First, we found the distribution of the fraction of matter in the clustered population, F c (ρ 0 ), of the DM-only simulations TNG100-3 and TNG300-3, using the cumulative distribution of particle densities; see for reference the differential distributions of particle densities, N(log ρ), in Fig. 3. In the next step, we compared the cumulative distributions of particle densities F c (ρ 0 ) with the bias parameter distributions b(log ρ 0 ) for various particle density limits ρ 0 , shown in Fig. 7. This comparison showed that for the particle density limit log ρ 0 = −7.8 the functions b(z) and b c (z) = 1/F c (z) are very close, and also close to the b 0 value for galaxy simulations. We use this particle density limit log ρ 0 = −7.8 as the limit for the faintest stellar systems in TNG100-3 and TNG300-3 simulations. Dashed curves in Fig. 10 show the inverse of the fraction of the clustered population, b c (z) = 1/F c (z), for TNG100-3 and TNG300-3 simulations. The Figure shows that at the present epoch z = 0 both DM simulations yield for b 0 very close values, almost identical to the expected value from the fraction of the clustered population, b 0 = 1/F c .
We use the same b c (z) = 1/F c (z) curves, found for the TNG100-3 and TNG300-3 DM particle simulations, also for the TNG100-1 and TNG300-1 galaxy simulations, shown in the left panel of Fig. 10 by dashed curves. We see that the b c (z) = 1/F c (z) curves of simulations TNG100-3 and TNG300-3 lie higher than the actual b 0 (z) curves of the TNG100-1 and TNG300-1 simulations. Fig. 10 shows that the b 0 (z) curves of simulations TNG300 for both data types lie higher than those of simulation TNG100; the difference increases with epoch z. The difference is probably due to varying evolutionary histories of dwarf halos, since in the higher-resolution simulation TNG100 the number of dwarf galaxies is much higher than in TNG300, see Fig. 3.
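The two-step procedure amounts to tabulating the cumulative clustered fraction F c (ρ 0 ) and scanning for the threshold at which 1/F c matches the measured bias parameter. A compact sketch, with the density array and the scan grid as placeholders:

```python
import numpy as np

def matching_threshold(log_rho, b0_measured, grid=None):
    """Scan density limits log_rho0 and return the one where the
    expected bias b = 1/F_c(log_rho0) matches a measured b0."""
    if grid is None:
        grid = np.linspace(-10.0, -3.0, 141)
    f_c = np.array([(log_rho >= t).mean() for t in grid])   # F_c(rho0)
    b_expected = 1.0 / np.maximum(f_c, 1e-12)               # avoid 1/0
    i = np.argmin(np.abs(b_expected - b0_measured))
    return grid[i], f_c[i], b_expected[i]

# For example, a measured b0 = 1.25 corresponds to F_c = 1/1.25 = 0.8,
# i.e. 80 per cent of the DM particles in the clustered population.
```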
DISCUSSION
In this Section we discuss how bias functions b(r|z, M r ) and bias parameters b(z, M r ) represent properties of the cosmic web.
The shape of evolving correlation and bias functions
The shape of correlation and bias functions at large separations
Correlation and bias functions of simulated galaxies are presented in Figs. 4 to 8. These Figures show that the bias parameters b(z, M r ) decrease during the evolution. This result is not new; it confirms earlier studies by Tegmark & Peebles (1998), Springel et al. (2005, 2018), Park et al. (2022) and Einasto et al. (2023). The evolution of the correlation and bias functions of galaxies in simulations TNG300-1 and HR5 is very similar, as demonstrated in the two bottom panels of Fig. 5, where we show bias functions for the whole set of simulation epochs z = 0 to z = 5. The bias functions of HR5 simulations lie very close to the bias functions of TNG300-1 simulations. Bias parameters of HR5 simulations are almost identical to bias parameters of TNG300-1 galaxies: HR5 mass limits M ⋆ = 1, 3, 10 (in units 10^9 M ⊙ ) correspond to magnitude limits of the TNG300-1 simulation M r = −17.9, −19.6, −22.1, for all simulation epochs with a small scatter of the order ±0.1 mag. Despite the different simulation programs and recipes of galaxy formation and evolution, this similarity is remarkable. Another similarity is in the shape of correlation and bias functions for TNG100 and TNG300 simulations. At epoch z = 0, these functions for both simulations are very close; at higher redshifts, the values of the bias functions of the simulation TNG300 for luminous galaxies are higher; see Figs. 5 - 7. This difference is due to slightly different evolutionary stages of luminous galaxies in TNG100 and TNG300 simulations, also seen in the shape of the luminosity distributions in Fig. 3.
Bias functions of TNG300-3 DM simulations are presented in the third row of Fig. 5, and those of ΛCDM simulations in Fig. 5 of Einasto et al. (2023). The two sets of bias functions behave similarly: in both Figures, at low particle density limits, the b(z, ρ 0 ) lines are almost independent of z, and they increase with z for higher particle density limits ρ 0 .
The shape of correlation and bias functions on small separations
At separations r ≤ 5 h −1 Mpc, the correlation and bias functions depend on the structure of the halos. Galaxies and DM particles are located in identical halos, and the density-limited DM particle pair counts are higher, which raises the amplitude of CFs. This remarkable feature is well known from earlier studies, see among others Chiang et al. (2013), Springel et al. (2018) and Einasto et al. (2023); it corresponds to the one-halo term in Halo Models (Asgari et al. 2023). Halos are gravitationally stable systems detached from the expansion; in comoving units, halos were larger in the past. On small separations 0.1 ≤ r ≤ 2 h −1 Mpc, an important difference in bias functions between galaxy and DM simulations emerges in Fig. 5; compare the top and bottom rows. The bias functions of simulations TNG100-1 and TNG300-1 are made for galaxy-galaxy pairs, those of simulations TNG100-3 and TNG300-3 for particle density-limited DM particle pairs. For galaxy-galaxy pairs, the bias functions in this separation interval have depressions, but bias functions of DM particle pairs are flat, forming almost parallel lines for particle density limits log ρ 0 ≤ −6.8. This depression in galaxy-galaxy pairs is also seen in the analysis by Springel et al. (2018). These separations correspond to central regions of halos (clusters of galaxies). A possible explanation of this phenomenon is that near the centres of clusters, a large fraction of faint galaxies is "eaten" by more massive galaxies.
Amplitudes of correlation and bias functions
Bias parameters of low luminosity galaxies
The basic difference between TNG100-1 and TNG300-1 galaxy simulations on the one side and TNG100-3 and TNG300-3 DM particle simulations on the other side is in the shape of the bias functions in the low and intermediate luminosity M r and particle density log ρ 0 ranges. As shown in Fig. 7, the bias parameter b(M r ) of galaxy simulations TNG100-1 and TNG300-1 is constant over a broad interval of luminosity, M r ≥ −20 for the present epoch z = 0, and M r ≥ −15 for the early epoch z = 5. In contrast, the bias parameter b(log ρ 0 ) of the DM TNG100-3 and TNG300-3 simulations rises continuously with increasing particle density limit log ρ 0 . The approach of the function b(M r ) to a low asymptotic level b 0 was found by Norberg et al. (2001) from the 2dF Galaxy Redshift survey and by Zehavi et al. (2011) and Einasto et al. (2020) for SDSS galaxies. A similar phenomenon was found by Einasto et al. (2020) for Millennium simulation galaxies in the magnitude interval −17.4 ≥ M r ≥ −20, and for EAGLE simulations in the magnitude interval −15.5 ≥ M r ≥ −18.0.
These differences between galaxy and DM simulations mean that the transition of the DM filamentary web from higher to lower log ρ 0 levels is a continuous process, but the transition of the filamentary web of galaxies to lower luminosities has a sharp limit. In other words, there is no population of dwarf galaxies in faint DM filaments: faint dwarf galaxies are located in the same filamentary web as brighter ones, and the properties of galaxies are largely shaped by their birthplace in the cosmic web (initial conditions for galaxy formation) (Repp & Szapudi 2019b; Einasto et al. 2022). Differences between the filamentary webs of galaxies and DM are clear in Fig. 2. The upper panel displays the faint DM web that is absent from the bottom panel of the galaxy-defined web; see also Springel et al. (2005, 2018).
The constant level of the bias function b(M r ) at low luminosities raises the question of the galaxy distribution in voids. When large voids were detected by Gregory & Thompson (1978) and Kirshner et al. (1981), Dekel & Silk (1986) and Dekel (1986) assumed that giant galaxies form in high-density regions, but dwarf galaxies can also form in voids. To check the presence of void galaxies, Einasto (1988, 1990) compared distributions of faint and bright galaxies in and around the Virgo supercluster and found that both types of galaxies occupy identical regions. The studies by Lindner et al. (1995, 1996) showed that dwarf galaxies are located near void boundaries and are not randomly distributed in voids. The void phenomenon was studied by Peebles (2001), Conroy (2009), and Neyrinck et al. (2014). Using numerical simulations, Tinker & Conroy (2009) found that the boundary between filaments and voids in the galaxy distribution is nearly as sharp for dwarfs as for ∼ L ⋆ galaxies. Note that this observation does not exclude the presence of some isolated dwarf galaxies in the outer surroundings of brighter galaxies, similar to dwarf galaxies observed recently by Rizzi et al. (2017), Karachentsev et al. (2023) and Makarova et al. (2023).
For the present epoch z = 0, the galaxy simulations TNG100-1 and TNG300-1 yield samples with almost identical asymptotic values of the bias parameter of low-luminosity galaxies, b 0 = 1.045 and b 0 = 1.044, respectively. This suggests that the mean asymptotic bias parameter of the lowest luminosity galaxies at the present epoch has a mean value b 0 = 1.045 ± 0.01. For the epoch z = 0, the DM simulations TNG100-3 and TNG300-3 also yield particle density-limited samples with similar bias values, b 0 = 1.195 and b 0 = 1.299, respectively, with a mean value b 0 = 1.25 ± 0.05. The error was estimated from the difference of b 0 values for simulations TNG100-3 and TNG300-3. This bias value is only 1.2 times higher than for galaxy samples. For earlier epochs, the b 0 (z) curves for simulations TNG100 and TNG300 diverge, both for galaxy and particle-density-limited samples. As discussed by Einasto et al. (2023), at earlier epochs fixed luminosity and particle density limits correspond to more advanced stages of evolution, see Fig. 3, which raises the amplitudes of the bias parameters.
Amplitudes of correlation and bias functions as cosmological parameters
One possibility to define the bias parameter is to use Halo Models (HM) of large-scale structure; for a recent review, see Asgari et al. (2023). In the HM the bias is defined as a function of mass, b(M), and it satisfies the normalisation conditions:
∫_0^∞ M n(M) dM = ρ and ∫_0^∞ M b(M) n(M) dM = ρ, where n(M) is the halo mass function, and ρ is the mean comoving cosmological matter density. In the HM prescription, low-mass halos are anti-biased (0 < b(M) < 1) with a constant asymptotic value at low mass. In HM, the bias function is defined with respect to the characteristic mass M ⋆ , where the transition to biased objects occurs. HM is based on the tacit or implicit assumptions that all matter is contained in halos, and that the matter in low-density regions outside halos can be ignored. We drop the second assumption when we study the effect of particles in low-density regions on the bias phenomenon.
It is well-known that the amplitudes of correlation and bias functions depend on several cosmological factors: (i) cosmological parameters: matter-energy densities Ω b , Ω m , Ω Λ ; (ii) the present rms matter fluctuation amplitude averaged over a sphere of radius 8 h −1 Mpc, σ 8 ; (iii) luminosities of galaxies (Kaiser 1984); (iv) systematic motions of galaxies in clusters - the finger-of-God effect; (v) the flow of galaxies toward attractors (Kaiser 1987); and (vi) the thickness of observational samples, if 3D CFs are determined by the inversion of 2D CFs (Einasto et al. 2021b). The present study is based on numerical simulations of the cosmic web, where the last three effects are not present. We use simulations with fixed density and σ 8 parameters; thus the essential cosmological parameter is the luminosity of galaxies (and the particle density limit in DM simulations). However, our study shows that the amplitudes of correlation and bias functions depend on one more factor - the fraction of matter in voids and in the clustered population, which is the topic of the next subsection.
Bias parameter and the fraction of matter in the clustered population
As noted in the Introduction, Einasto et al. (1994, 1999, 2019b, 2023) investigated the relation between clustered and total matter using DM-only numerical simulations of the evolution of the cosmic web. Clustered matter was identified with samples of DM particles with local densities above a certain threshold, ρ ≥ ρ 0 . The main result of these studies was the establishment of a relation between the bias parameter b and the fraction of matter in the clustered population, F c : b = 1/F c . Einasto et al. (2023) found that the relation b = 1/F c is well fulfilled for low particle density limits ρ 0 . Our analysis shows that correlation and bias functions depend on the nature of the objects used in their determination. The expected relationship between the bias parameter b and the fraction of matter in the clustered population F c is fulfilled in DM simulations TNG100-3 and TNG300-3. In these cases, the bias is defined using DM particles: CFs based on numbers of particles in high-density regions above the threshold level ρ ≥ ρ 0 are divided by CFs found from all particles of the full DM particle sample. In simulations TNG100-1 and TNG300-1 the bias parameter is defined using CFs of galaxies of luminosity M r , divided by CFs of DM particle samples. In this case, in Eq. (1), numerators and denominators are objects of different nature: galaxies do not have a simple relationship to a threshold in the corresponding DM field. Our analysis has shown that the relationship between the bias parameter, b(M r ), and the respective fraction of the clustered matter, F c , is not fulfilled in this case. Measured functions b 0 (z) for low-luminosity galaxies of simulations TNG100-1 and TNG300-1 lie considerably lower than the expected functions b 0 (z) = 1/F c (z), by a factor of ≈ 1.3, over the whole range of simulation epochs z.
In the present study, we used the separation r 0 = 10 h −1 Mpc to measure the bias parameter b. Using this separation, we found for the bias parameter of the lowest luminosity galaxies b(z = 0) = 1.045, both for TNG100-1 and TNG300-1. If a lower separation were used, r 0 ≈ 1 h −1 Mpc, then the bias parameter of the lowest luminosity galaxies at the present epoch would be smaller, b(r) ≤ 1, see the left panels of the top and second rows of Fig. 5. Springel et al. (2005) found that at the present epoch galaxies are slightly antibiased with b = 0.9. This means there is no room for particles in the unclustered population, F v = 1 − F c , if the relation b c = 1/F c is valid.
During evolution, matter flows from low-density regions to high-density ones. The outflow of matter from voids toward superclusters was investigated using velocity field data by Tully et al. (2008, 2014), Carlesi et al. (2016), Sorce et al. (2016), Rizzi et al. (2017), and Anand et al. (2019). However, gravity cannot evacuate voids completely; thus there is always some matter in voids. Note that this matter has a low correlation compared to the highly clustered regions (Repp & Szapudi 2022). Therefore we can neglect its correlations to zeroth order and refer to it as unclustered. Thus, the bias parameter of galaxies must be greater than unity over the whole range of evolution epochs.
The principal result of this study is the establishment of the difference between properties of correlation and bias functions of galaxies and DM particle samples. In earlier studies, bias properties were investigated using either galaxy or DM data: the fraction of matter in the clustered population was not used. For the first time, we use both types of test particles, galaxies and samples of DM particles, in identical simulations. Thus we avoid potential misinterpretations resulting from using different test particles and data samples.
All previous studies have shown that the bias parameter depends on the luminosity of galaxies (particle density limit). However, only the bias function of DM particle samples satisfies the b = 1/F c criterion. This means that correlation and bias functions of only DM particle samples properly measure the relationship between galaxies and DM. The general conclusion of this study is that correlation and bias functions of galaxies and DM particle samples measure different properties of the cosmic web. This result is no surprise. As shown recently by Ouellette et al. (2023), a topological analysis of TNG simulations reveals the presence of differences between simulated galaxies and DM halos.
CONCLUSIONS
We investigated properties of correlation and bias functions and bias parameters using two types of data: simulated galaxies and DM particles. For both data types, we applied several input sources: for galaxies the TNG100-1, TNG300-1 and HR5 simulations, and for DM particle samples the TNG100-3 and TNG300-3 simulations and the ΛCDM simulations by Einasto et al. (2023). Our analysis showed that the essential properties of correlation and bias functions of simulated galaxies using the various data sources are consistent. The same consistency of correlation and bias functions is also observed using all particle density-limited simulations. In contrast, essential differences exist between correlation and bias functions of luminosity-limited samples of galaxies on the one side and particle density-limited samples on the other. The fundamental results of our study can be listed as follows.
(i) The bias parameter of low luminosity galaxies approaches an asymptotic level b(M r ) → b 0 ; at the present epoch b 0 = 1.045 ± 0.01 for galaxy samples in TNG100-1, TNG300-1 and HR5 simulations. A flat region of the bias function b(M r ) at low luminosities suggests that faint dwarf galaxies are located in the same filamentary web as brighter galaxies.
(ii) The bias parameters b(ρ_0) of particle-density-limited samples of DM particles of the TNG100-3 and TNG300-3 simulations form a continuous sequence with decreasing ρ_0, which suggests that the transition of the filamentary DM web from a higher to a lower particle density limit ρ_0 is continuous. For the present epoch, the DM simulations TNG100-3 and TNG300-3 yield for the bias parameter a value b_0 = 1.25 ± 0.05. This should correspond to the lowest luminosity galaxies.
(iii) The cosmic web consists of filamentary structures of various densities. The fractions of matter in the clustered and the unclustered populations are both less than unity. For this reason, the bias parameter of the clustered matter is b > 1 for all cosmic epochs.
(iv) The bias parameter b_0 from density-limited DM samples in TNG100-3 and TNG300-3 agrees with expectations from the fraction of particles in the clustered population: b_0 = 1/F_c.
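As a simple worked example of this constraint, using only the numbers quoted above: the DM value b_0 = 1.25 ± 0.05 implies a clustered fraction F_c = 1/b_0 = 0.80, i.e. about 20% of the DM resides in the unclustered (void) population; by contrast, the galaxy value b_0 = 1.045 would require F_c ≈ 0.96, leaving almost no matter outside the clustered web.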
(v) The bias parameter b_0 of galaxy samples in TNG100-1 and TNG300-1 is in disagreement with the b_0 = 1/F_c constraint. This means that correlation and bias functions calculated for objects of different natures (galaxies vs. density-limited samples of DM particles) describe the properties of the cosmic web differently.
We do not fully understand the differences in the correlation and bias functions of galaxies and density-limited DM populations. In particular, the measurements conflict with predictions from the fraction of particles in voids. We note that opinions on the role of the unclustered matter in voids in the general properties of the web differ. Einasto et al. (1994, 2023) used a simple analytic model to estimate the fraction of matter in voids; their analysis suggests that the void fraction decreases with time. In contrast, the analytic model by Sheth & van de Weygaert (2004) indicates that the fraction of mass in voids is constant over time. The role that unclustered matter in voids plays in the clustering of the cosmic web deserves further study.
Figure 1. Power spectra and CFs of DM in TNG300-3 simulations for redshifts z = 0, 0.5, 1, 2, 3, 5, shown in the left and right panels, respectively.

Figure 2. Density fields of DM of the TNG100-3 simulation. The top panels show density fields of the full DM, and the bottom panels show density fields of the clustered population of DM using DM particles with densities log ρ ≥ log ρ_0 = −7.8, i.e. all particles of the clustered population. The left, central and right panels show density fields at epochs z = 0, z = 2 and z = 5. The densities are given on a logarithmic scale to better show the distribution of faint filaments.

Figure 3. Top panels: differential galaxy luminosity distribution in the photometric system r. Panels from left to right are for the TNG100-1 and TNG300-1 simulations. Colour codes show distributions for various z. The bottom panels show the differential distribution of DM particles of TNG100-3 and TNG300-3 as a function of the total density at the location of the particles, log ρ_0.

Figure 6. Evolution of bias parameter values with the epoch of simulations z. The top left and right panels show data for simulations TNG100-1 and TNG300-1. Magnitude limits in the M_r photometric system are shown as symbol labels. The bottom panels show the evolution of the bias parameter of DM samples of simulations TNG100-3 and TNG300-3 with various limits of substructure density log ρ_0. The limit log ρ_0 = −8.3 is marked with dashed orange lines; it corresponds to DM particles not associated with galaxies. Level b = 1 is the bias of DM.

Figure 7. The top left and right panels show the evolution of the bias parameter b of simulations TNG100-1 and TNG300-1 as a function of the magnitude limit M_r for various simulation epochs z. The bottom panels show the evolution of the bias parameter b of simulations TNG100-3 and TNG300-3 as a function of the logarithm of the particle density limit ρ_0. Level b = 1 is for DM.

Figure 8. Evolution of the bias parameter b(z, M⋆) of HR5 simulations: in the left panel, as a function of the epoch z; in the right panel, as a function of the galaxy stellar mass M⋆ in units of 10^9 M⊙.
100 h⁻¹ Mpc, the shapes of the bias functions of TNG300-3 DM simulations are very close to the shapes of the bias functions of the ΛCDM simulation in a box of size L = 256 h⁻¹ Mpc. Both simulations have approximately equal volumes. Also, the evolution of the bias parameters with cosmic epoch z is similar: compare the bottom row of Fig. 6 and the left panels of Fig. 6 by
Table 1. Parameters of simulations
Table 2. Number of DM particles and galaxies in simulations

Simulation | z | N_DM | N_gal
(1) | (2) | (3) | (4)
TNG100-1 | 0 | 455³ | 337 261
TNG100-1 | 0.5 | 455³ | 410 155
TNG100-1 | 1 | 455³ | 492 655
TNG100-1 | 2 | 455³ |
ACKNOWLEDGEMENTS

We thank the TNG collaboration for publicly releasing their simulation data and example analysis scripts, Neta Bahcall, Jim Peebles, Dmitri Pogosyan and Brent Tully for stimulating discussions, and the anonymous referee for useful suggestions. This work was supported by institutional research funding IUT40-2 of the Estonian Ministry of Education and Research, by the Estonian Research Council grant PRG803, and by Mobilitas Plus grant MOBTT5. We acknowledge the support by the Centre of Excellence "Dark side of the Universe" (TK133) financed by the European Union through the European Regional Development Fund. The study has also been supported by the Kavli Institute for Theoretical Physics, University of California, Santa Barbara, through the program "The Cosmic Web: Connecting Galaxies to Cosmology at High and Low Redshifts", and by ICRAnet through a professorship for Jaan Einasto.

DATA AVAILABILITY

All data on TNG100 and TNG300 simulations used in this work are publicly available. Any other data will be shared upon reasonable request to the corresponding author.
REFERENCES

Anand G. S., Tully R. B., Rizzi L., Shaya E. J., Karachentsev I. D., 2019, ApJ, 880, 52
Asgari M., Mead A. J., Heymans C., 2023, arXiv e-prints, p. arXiv:2303.08752
Bardeen J. M., Bond J. R., Kaiser N., Szalay A. S., 1986, ApJ, 304, 15
Blanton M., Cen R., Ostriker J. P., Strauss M. A., 1999, ApJ, 522, 590
Bond J. R., Kofman L., Pogosyan D., 1996, Nature, 380, 603
Carlesi E., et al., 2016, MNRAS, 458, 900
Cen R., Ostriker J. P., 1992, ApJL, 399, L113
Cen R., Ostriker J. P., 2000, ApJ, 538, 83
Chiang Y.-K., Overzier R., Gebhardt K., 2013, ApJ, 779, 127
Davis M., Efstathiou G., Frenk C. S., White S. D. M., 1985, ApJ, 292, 371
Dekel A., 1986, Comments on Astrophysics, 11, 235
Dekel A., Rees M. J., 1987, Nature, 326, 455
Dekel A., Silk J., 1986, ApJ, 303, 39
Desjacques V., Jeong D., Schmidt F., 2018, Phys. Rep., 733, 1
Doroshkevich A. G., Kotok E. V., Poliudov A. N., Shandarin S. F., Sigov I. S., Novikov I. D., 1980, MNRAS, 192, 321
Doroshkevich A. G., Shandarin S. F., Zeldovich I. B., 1982, Comments on Astrophysics, 9, 265
Dubois Y., Peirani S., Pichon C., Devriendt J., Gavazzi R., Welker C., Volonteri M., 2016, MNRAS, 463, 3948
Einasto M., 1988, MNRAS, 234, 37
Einasto M., 1990, MNRAS, 242, 56
Einasto J., Saar E., 1987, in Hewitt A., Burbidge G., Fang L. Z., eds, IAU Symposium Vol. 124, Observational Cosmology. pp 349-358
Einasto J., Einasto M., Gramann M., Saar E., 1991, MNRAS, 248, 593
Einasto J., Saar E., Einasto M., Freudling W., Gramann M., 1994, ApJ, 429, 465
Einasto J., Einasto M., Tago E., Müller V., Knebe A., Cen R., Starobinsky A. A., Atrio-Barandela F., 1999, ApJ, 519, 456
Einasto J., Suhhonenko I., Liivamägi L. J., Einasto M., 2019a, A&A, 623, A97
Einasto J., Liivamägi L. J., Suhhonenko I., Einasto M., 2019b, A&A, 630, A62
Einasto J., Hütsi G., Kuutma T., Einasto M., 2020, A&A, 640, A47
Einasto J., Hütsi G., Suhhonenko I., Liivamägi L. J., Einasto M., 2021a, A&A, 647, A17
Einasto J., Hütsi G., Einasto M., 2021b, A&A, 652, A152
Einasto M., Kipper R., Tenjes P., Einasto J., Tempel E., Liivamägi L. J., 2022, A&A, 668, A69
Einasto J., Liivamägi L. J., Einasto M., 2023, MNRAS, 518, 2164
Gramann M., 1987, Tartu Astr. Obs. Publ., 52, 216
Gramann M., 1988, MNRAS, 234, 569
Gregory S. A., Thompson L. A., 1978, ApJ, 222, 784
Ishiyama T., et al., 2021, MNRAS, 506, 4210
Jõeveer M., Einasto J., 1978, in Longair M. S., Einasto J., eds, IAU Symposium Vol. 79, Large Scale Structures in the Universe. pp 241-250
Jõeveer M., Einasto J., Tago E., 1978, MNRAS, 185, 357
Jensen L. G., Szalay A. S., 1986, ApJL, 305, L5
Jing Y. P., 2005, ApJ, 620, 559
Kaiser N., 1984, ApJL, 284, L9
Kaiser N., 1987, MNRAS, 227, 1
Karachentsev I. D., Makarova L. N., Koribalski B. S., Anand G. S., Tully R. B., Kniazev A. Y., 2023, MNRAS, 518, 5893
Kirshner R. P., Oemler Jr. A., Schechter P. L., Shectman S. A., 1981, ApJL, 248, L57
Landy S. D., Szalay A. S., 1993, ApJ, 412, 64
Lee J., et al., 2021, ApJ, 908, 11
Li C., White S. D. M., 2009, MNRAS, 398, 2177
Lindner U., Einasto J., Einasto M., Freudling W., Fricke K., Tago E., 1995, A&A, 301, 329
Lindner U., et al., 1996, A&A, 314, 1
Little B., Weinberg D. H., 1994, MNRAS, 267, 605
Makarova L. N., Tully R. B., Anand G. S., Lambert T. S., Sharina M. E., Koribalski B. S., Kraan-Korteweg R. C., 2023, ApJ, 943, 139
Mo H. J., White S. D. M., 1996, MNRAS, 282, 347
Nelson D., et al., 2015, Astronomy and Computing, 13, 12
Neyrinck M. C., Aragón-Calvo M. A., Jeong D., Wang X., 2014, MNRAS, 441, 646
Norberg P., et al., 2001, MNRAS, 328, 64
Ouellette A., Holder G., Kerman E., 2023, arXiv e-prints, p. arXiv:2302.01363
Park C., et al., 2022, ApJ, 937, 15
Peebles P. J. E., 2001, ApJ, 557, 495
Pillepich A., et al., 2018a, MNRAS, 473, 4077
Pillepich A., et al., 2018b, MNRAS, 475, 648
Repp A., Szapudi I., 2019a, arXiv e-prints, p. arXiv:1904.05048
Repp A., Szapudi I., 2019b, arXiv e-prints, p. arXiv:1912.05557
Repp A., Szapudi I., 2020, MNRAS, 493, 3449
Repp A., Szapudi I., 2022, MNRAS, 509, 586
Rizzi L., Tully R. B., Shaya E. J., Kourkchi E., Karachentsev I. D., 2017, ApJ, 835, 78
Schaye J., et al., 2015, MNRAS, 446, 521
Sheth R. K., Tormen G., 1999, MNRAS, 308, 119
Sheth R. K., van de Weygaert R., 2004, MNRAS, 350, 517
Sorce J. G., et al., 2016, MNRAS, 455, 2078
Springel V., et al., 2005, Nature, 435, 629
Springel V., Frenk C. S., White S. D. M., 2006, Nature, 440, 1137
Springel V., et al., 2018, MNRAS, 475, 676
Szapudi I., Szalay A. S., 1993, ApJ, 414, 493
Szapudi I., Pan J., Prunet S., Budavári T., 2005, ApJL, 631, L1
Tegmark M., Peebles P. J. E., 1998, ApJL, 500, L79
Tinker J. L., Conroy C., 2009, ApJ, 691, 633
Tinker J. L., Robertson B. E., Kravtsov A. V., Klypin A., Warren M. S., Yepes G., Gottlöber S., 2010, ApJ, 724, 878
Tully R. B., Fisher J. R., 1978, in Longair M. S., Einasto J., eds, IAU Symposium Vol. 79, Large Scale Structures in the Universe. p. 214
Tully R. B., Shaya E. J., Karachentsev I. D., Courtois H. M., Kocevski D. D., Rizzi L., Peel A., 2008, ApJ, 676, 184
Tully R. B., Courtois H., Hoffman Y., Pomarède D., 2014, Nature, 513, 71
Vogelsberger M., et al., 2014a, MNRAS, 444, 1518
Vogelsberger M., et al., 2014b, Nature, 509, 177
White S. D. M., Rees M. J., 1978, MNRAS, 183, 341
Zehavi I., et al., 2011, ApJ, 736, 59
Zeldovich Y. B., Einasto J., Shandarin S. F., 1982, Nature, 300, 407
| [] |
[
"Implicit Temporal Modeling with Learnable Alignment for Video Recognition",
"Implicit Temporal Modeling with Learnable Alignment for Video Recognition"
] | [
"Shuyuan Tu \nShanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n\n\nShanghai Collaborative Innovation Center of Intelligent Visual Computing\n\n",
"Qi Dai \nMicrosoft Research Asia\n\n",
"Zuxuan Wu \nShanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n\n\nShanghai Collaborative Innovation Center of Intelligent Visual Computing\n\n",
"Zhi-Qi Cheng \nCarnegie Mellon University\n\n",
"Han Hu \nMicrosoft Research Asia\n\n",
"Yu-Gang Jiang \nShanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n\n\nShanghai Collaborative Innovation Center of Intelligent Visual Computing\n\n"
] | [
"Shanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n",
"Shanghai Collaborative Innovation Center of Intelligent Visual Computing\n",
"Microsoft Research Asia\n",
"Shanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n",
"Shanghai Collaborative Innovation Center of Intelligent Visual Computing\n",
"Carnegie Mellon University\n",
"Microsoft Research Asia\n",
"Shanghai Key Lab of Intell. Info. Processing\nSchool of CS\nFudan University\n",
"Shanghai Collaborative Innovation Center of Intelligent Visual Computing\n"
] | [] | Contrastive language-image pretraining (CLIP) has demonstrated remarkable success in various image tasks. However, how to extend CLIP with effective temporal modeling is still an open and crucial problem. Existing factorized or joint spatial-temporal modeling trades off between the efficiency and performance. While modeling temporal information within straight through tube is widely adopted in literature, we find that simple frame alignment already provides enough essence without temporal attention. To this end, in this paper, we proposed a novel Implicit Learnable Alignment (ILA) method, which minimizes the temporal modeling effort while achieving incredibly high performance. Specifically, for a frame pair, an interactive point is predicted in each frame, serving as a mutual information rich region. By enhancing the features around the interactive point, two frames are implicitly aligned. The aligned features are then pooled into a single token, which is leveraged in the subsequent spatial self-attention. Our method allows eliminating the costly or insufficient temporal self-attention in video. Extensive experiments on benchmarks demonstrate the superiority and generality of our module. Particularly, the proposed ILA achieves a top-1 accuracy of 88.7% on Kinetics-400 with much fewer FLOPs compared with Swin-L and ViViT-H. Code is released at https://github.com/Francis-Rings/ILA. | 10.48550/arxiv.2304.10465 | [
"https://export.arxiv.org/pdf/2304.10465v1.pdf"
] | 258,236,183 | 2304.10465 | 6416c56425c6df53b47c5bb2231d5865674c9fb9 |
Implicit Temporal Modeling with Learnable Alignment for Video Recognition
Shuyuan Tu
Shanghai Key Lab of Intell. Info. Processing
School of CS
Fudan University
Shanghai Collaborative Innovation Center of Intelligent Visual Computing
Qi Dai
Microsoft Research Asia
Zuxuan Wu
Shanghai Key Lab of Intell. Info. Processing
School of CS
Fudan University
Shanghai Collaborative Innovation Center of Intelligent Visual Computing
Zhi-Qi Cheng
Carnegie Mellon University
Han Hu
Microsoft Research Asia
Yu-Gang Jiang
Shanghai Key Lab of Intell. Info. Processing
School of CS
Fudan University
Shanghai Collaborative Innovation Center of Intelligent Visual Computing
Implicit Temporal Modeling with Learnable Alignment for Video Recognition
Contrastive language-image pretraining (CLIP) has demonstrated remarkable success in various image tasks. However, how to extend CLIP with effective temporal modeling is still an open and crucial problem. Existing factorized or joint spatial-temporal modeling trades off between efficiency and performance. While modeling temporal information within a straight-through tube is widely adopted in literature, we find that simple frame alignment already provides enough essence without temporal attention. To this end, in this paper, we propose a novel Implicit Learnable Alignment (ILA) method, which minimizes the temporal modeling effort while achieving incredibly high performance. Specifically, for a frame pair, an interactive point is predicted in each frame, serving as a mutual information rich region. By enhancing the features around the interactive point, two frames are implicitly aligned. The aligned features are then pooled into a single token, which is leveraged in the subsequent spatial self-attention. Our method allows eliminating the costly or insufficient temporal self-attention in video. Extensive experiments on benchmarks demonstrate the superiority and generality of our module. Particularly, the proposed ILA achieves a top-1 accuracy of 88.7% on Kinetics-400 with much fewer FLOPs compared with Swin-L and ViViT-H. Code is released at https://github.com/Francis-Rings/ILA.
Introduction
Video recognition is one of the most fundamental components of video understanding. Numerous downstream tasks heavily rely on the basic recognition model, e.g., action localization [10,43,41,40], detection [19,25], and video object tracking [52]. Due to the great potential of video technologies, it has been an active research direction over the past few years. Various approaches have been proposed, including convolution-based methods [44,50,48,8,55,15,14] and transformer-based methods [5,31,13,3,26,45,56]. Recently, Contrastive Language-Image Pretraining (CLIP) [36] has demonstrated strong performance in the video domain. Studies [51,22,33,30,34,59] attempt to transfer the powerful CLIP model to video tasks, which has pushed recognition performance to a new level, showing its general representation ability.
Generally, existing methods devise various temporal modeling schemes to explore the potential of CLIP, including factorized [59] or frame-level [33,22] temporal attention, and temporal cross attention [30]. All these tailored methods aim at designing lightweight temporal modules that reuse the CLIP model. Though considerable improvements are achieved, such temporal modeling approaches still depend on complex self-attention, which we argue is not necessary in a CLIP-based framework.
In this paper, we rethink the role of temporal modeling in the general CLIP-based video recognition framework. Unlike existing approaches that rely on temporal attention, we hypothesize that important motion and action clues can be derived by aligning pairwise frames. As a result, the costly [31,5] or insufficient [33,22,30] temporal attention can be avoided without harming the performance. Since explicit patch alignment is time-consuming with low efficiency, we prioritize only an implicit and coarse alignment, aiming to capture the vital temporal signals.
In light of this, we present a novel Implicit Learnable Alignment (ILA) method for efficient video recognition. More specifically, ILA employs learnable masks to align the features of two adjacent frames. The alignment is achieved with the help of an interactive point that is predicted using a convolution module conditioned on a frame pair. Based on the point, a corresponding region is generated indicating close interactions of the adjacent frames. The mask is defined as a map of weights indicating which regions contain vital information. We then assign higher weights around the interactive point in the mask, while assigning lower weights to other positions, suppressing irrelevant signals among them. By leveraging the generated mask to weight the frame representations, coarsely aligned features are obtained, as shown in Figure 2. Note that all the above operations are performed in parallel among frame pairs to boost the speed. To efficiently and fully exploit the alignment, the aligned features are pooled into a single mutual information token. The token is subsequently concatenated with the other frame tokens to perform the spatial self-attention, which implicitly models the temporal relations between frames. Our method is plugged into each spatial block of the vision transformer and forms the Implicit Spatio-Temporal attention (IST) block, which allows temporal modeling without the use of traditional temporal self-attention.
Our contributions can be summarized as follows: (1) We propose Implicit Learnable Alignment (ILA) for video recognition. Our implicit temporal modeling can be seamlessly plugged into existing vision transformer models. It utilizes coarse alignment as the key temporal signal, which enables superior temporal modeling at a low computational cost. (2) We show that such a simple frame alignment already encodes the essence of temporal relations, which allows eliminating the insufficient temporal self-attention. (3) Extensive qualitative and quantitative experiments demonstrate the effectiveness and efficiency of ILA. We achieve 88.7% on Kinetics-400 with low computation overhead. Our method builds a promising bridge for CLIP from image processing to video recognition.
Related Work
Visual-language representation learning has demonstrated remarkable success in various tasks [36,20,60]. By leveraging contrastive learning between language and image, a joint representation space is learned. Particularly, CLIP [36] has shown its strong power in open-domain problems, and dozens of approaches have been developed, including few-shot learning [16,62], point cloud understanding [63,38], and video understanding [57,51,22].
Recently, several studies have extended the existing CLIP model to the video domain. X-CLIP [33] devises frame-level temporal attention to avoid high computation. EVL [30] employs temporal convolution and cross-attention on top of the CLIP features. ST-Adapter [34] inserts a spatiotemporal adapter into each block, which consists of several 3D convolution layers. AIM [59] reuses the CLIP self-attention as the temporal one via an additional adapter module. Nevertheless, the above methods explore lightweight adaptations of CLIP using insufficient temporal attention, e.g., frame-level or local temporal attention. In our work, we attempt to perform temporal modeling with signals emerging from a simple alignment process, which involves comprehensive temporal clues while remaining simple.
Video recognition is the key task in video understanding. In the convolution era, two-stream networks [44,50,66] and spatiotemporal CNNs [48,18,49,55] were proposed. The former treats spatial representations and optical flow images as two independent modules, and the latter employs (separable) 3D convolutions to extract spatiotemporal features. Recently, inspired by vision transformers [12,47,64,45,17], video transformers [5,31,13,3,37,1] have shown promising results compared to CNN methods, due to their much larger receptive fields. TimeSformer [5] adopts factored space-time attention as a trade-off between speed and accuracy. ViViT [3] investigates four types of temporal attention and selects global spatiotemporal attention as the default. Video Swin [31] uses local spatiotemporal attention to model the temporal information. However, these methods are either computationally intensive or insufficient in modeling the temporal interactions, resulting in high model cost or unsatisfactory performance. In contrast, our method explores how to model the complex temporal information with minimal effort, demonstrating the redundancy in existing temporal attention models.
Temporal correspondences reflect the motions in video and can be used in several video understanding tasks [35,21,24,37,54,27]. For example, in video super-resolution, alignment-based methods [42,7,46,53,9,29] have been proposed to keep frames coherent. PSRT-recurrent [42] points out that patch-level alignment can reduce memory cost when computing optical flow. In video recognition, the recent ATA [65] adopts Hungarian matching to align the patches between frames, then performs temporal attention within the aligned patches, followed by de-alignment. However, that model is significantly encumbered by the slow serial alignment, followed by computationally expensive temporal attention. In contrast, our approach employs learnable masks to align frames in parallel, with the aim of capturing important motion and action clues that benefit video understanding. The alignment in our method is therefore implicit and coarse.
Method
In this section, we elaborate our proposed architecture in detail. First, we introduce the overview of ILA in Section 3.1. Second, we depict the concrete implicit mask-based alignment in Section 3.2. Finally, we describe the loss functions of our dedicated framework.
Architecture Overview
The proposed ILA model consists of several Implicit Spatio-Temporal attention (IST) blocks. The model is built upon a pretrained image vision transformer (ViT) [12]. While previous methods [3,5] mostly rely on ImageNet-initialized models, recent approaches [33,30,34,59] have revealed the powerful representation ability of large-scale visual-language pretrained models [36,60]. Our method follows the predecessors and is initialized from the CLIP model [36]. Given an input video clip x = [x_1, ..., x_t, ..., x_T], x_t ∈ R^{H×W×3}, we decompose each frame into (H/P) × (W/P) non-overlapping patches {x_{t,i}}_{i=1}^{hw}, where T, H, W are the number of frames, height and width, h = H/P, w = W/P, and P is the patch size. The patches are linearly mapped to embedding vectors z^{(0)}_t = [z^{(0)}_{t,1}, ..., z^{(0)}_{t,i}, ..., z^{(0)}_{t,hw}], z^{(0)}_{t,i} ∈ R^d:

z^{(0)}_{t,i} = E x_{t,i} + e^{pos}_{t,i},    (1)

where E ∈ R^{d×3P²} is the projection matrix and e^{pos}_{t,i} is the spatial positional embedding. We also add a classification token z^{(0)}_{t,cls} for each frame.

The structure of the IST block is illustrated in Figure 3. At each IST block ℓ, we align the semantic features of each consecutive frame pair (z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1}) by finding an interactive position (as will be introduced in Section 3.2) per frame, which serves as a mutual information (MI) rich region. By simply weighting the feature map with higher weights surrounding the interactive position, the aligned features a^{(ℓ)}_t, a^{(ℓ)}_{t−1} are obtained:

a^{(ℓ)}_t, a^{(ℓ)}_{t−1} = Align(z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1}).    (2)

The aligned features are subsequently average-pooled into a single mutual information token ẑ^{(ℓ)}_{t,mut}, which is further concatenated with the corresponding frame tokens to perform the spatial Multi-head Self Attention (MSA):

ẑ^{(ℓ)}_{t,mut} = Avg(a^{(ℓ)}_t),    (3a)
[z̃^{(ℓ)}_t, z̃^{(ℓ)}_{t,mut}] = MSA(LN([z^{(ℓ−1)}_t, ẑ^{(ℓ)}_{t,mut}])) + [z^{(ℓ−1)}_t, ẑ^{(ℓ)}_{t,mut}],    (3b)

where LN(·) indicates layer normalization [4]. z̃^{(ℓ)}_{t,mut} is then dropped before feeding to the MLP, and the output of block ℓ is formulated as:

z^{(ℓ)}_t = MLP(LN(z̃^{(ℓ)}_t)) + z̃^{(ℓ)}_t.    (4)

Unlike common supervised frameworks that use one-hot labels as the target, to fully leverage the pretrained visual-language model, we follow [33] to optimize a similarity loss supervised by the textual information of categories. Formally, the text representation c is computed by inputting the category name to the text encoder f_t(·). Then a video-specific prompt is obtained by querying c among the video representations {z^{(L)}_t}_{t=1}^{T} (L is the number of IST blocks), which is further used to enhance c. Finally, the model maximizes the cosine similarity between the video and text representations if they are matched, and otherwise minimizes it.

Figure 4. Details of the proposed alignment method. For each adjacent frame pair, a convolution module is leveraged to predict one interactive point per frame, which refers to the region with close interactions between the frames. A mask is generated by assigning higher weights around the interactive point, while assigning lower weights to other positions. The mask is then adopted to weight the frame features, obtaining the aligned features. Finally, the aligned features are pooled into a single mutual information token. Best viewed in color.
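To make Eqs. (3a)-(4) concrete, the following PyTorch-style sketch shows one IST block. It is a minimal illustration, not the released implementation: the pre-norm layout, head count, and MLP width follow common ViT practice, and the alignment module producing the mutual information token is assumed to exist (a sketch of it follows Section 3.2).

```python
import torch
import torch.nn as nn

class ISTBlock(nn.Module):
    """Sketch of one IST block (Eqs. (3a)-(4)): the mutual information token
    is appended for spatial MSA and dropped before the MLP."""

    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, z, z_mut):
        # z: (B*T, 1+hw, d) frame tokens (incl. cls); z_mut: (B*T, 1, d).
        x = torch.cat([z, z_mut], dim=1)                   # append MI token
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # Eq. (3b)
        z = x[:, :-1]                                      # drop MI token before MLP
        return z + self.mlp(self.norm2(z))                 # Eq. (4)
```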
Implicit Mask-based Alignment
The IST block employs an implicit mask-based alignment component to align the semantic features between two frames. A previous study [65] explored patch-level alignment through Hungarian matching [6], which however suffered from limited performance and low efficiency. On one hand, the explicit patch alignment focuses on patch coherence across frames, which can eliminate possibly beneficial temporal interactions. On the other hand, such alignment must be operated frame by frame with cubic time complexity, incurring significant computational overhead. In contrast, our implicit alignment attempts to enhance favorable mutual information and in turn suppress irrelevant information with learned masks. As such, the key temporal clues are preserved while allowing flexible and efficient computation. Figure 4 illustrates the details of our alignment method, which is concretely depicted in the following.

In the ℓ-th block, we duplicate each input clip {z^{(ℓ−1)}_t}_{t=1}^{T} to form adjacent input pairs {(z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1})}_{t=2}^{T}. Each pair of representations is then concatenated along the channel dimension and fed into a dedicated lightweight convolution module for predicting two interactive points:

p^{(ℓ)}_t, p^{(ℓ)}_{t−1} = Conv(Concat(z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1})),    (5)

where the convolution module Conv(·) consists of a sequence of convolution, normalization and pooling layers. The interactive points p^{(ℓ)}_t, p^{(ℓ)}_{t−1} ∈ R² represent the most semantically similar positions in the two frames, indicating the region with favorable mutual information. We assume that the closer a position is to the interactive point, the more temporal information it involves. On the contrary, a position that is far away from the interactive point can contain redundant and irrelevant information, which should be suppressed. To this end, two align masks m^{(ℓ)}_t, m^{(ℓ)}_{t−1} ∈ R^{h×w} are generated by endowing positions closer to the interactive points with higher weights. Formally, for a spatial position u in m^{(ℓ)}_t, its weight w_u is computed by:

s = dist(u, p^{(ℓ)}_t),    w_u = η if s ≤ δ;  max(0, η − β(s − δ)) if s > δ,    (6)

where dist(·) is the distance function, and η, δ, β are the parameters. Note that all the coordinates of positions are scaled to the range [−1, 1] to facilitate the mask calculation. The aligned feature representations a^{(ℓ)}_t, a^{(ℓ)}_{t−1} are produced by weighting the frame features with the align masks:

a^{(ℓ)}_t = m^{(ℓ)}_t ⊙ z^{(ℓ−1)}_t,    (7a)
a^{(ℓ)}_{t−1} = m^{(ℓ)}_{t−1} ⊙ z^{(ℓ−1)}_{t−1}.    (7b)

We hypothesize that the aligned feature implicitly preserves the mutual information and already encodes essential temporal information, which can be leveraged to model the temporal relations across frames. Nevertheless, directly replacing z^{(ℓ−1)}_t with the aligned feature a^{(ℓ)}_t would prejudice the performance, since a^{(ℓ)}_t focuses more on the interaction region while ignoring the spatial correlations. Instead, we consider a^{(ℓ)}_t as a specific temporal signal. Thus, we average-pool the feature into a single mutual information token ẑ^{(ℓ)}_{t,mut} (Eq. (3a)), which is further utilized in the spatial multi-head self-attention (Eq. (3b)). Note that since we duplicate the input clip to form frame pairs, there are two aligned features for each frame z^{(ℓ−1)}_t, 2 ≤ t ≤ T − 1. For example, a^{(ℓ)}_t can be computed from both pairs (z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1}) and (z^{(ℓ−1)}_{t+1}, z^{(ℓ−1)}_t). In our implementation, only a^{(ℓ)}_t computed from (z^{(ℓ−1)}_t, z^{(ℓ−1)}_{t−1}) is exploited for pooling to the mutual information token.

Our simple alignment implicitly introduces cross-frame, cross-location interactions to the model, thus capturing semantically rich actions. We reveal that this primitive pairwise interaction already contains sufficient information for modeling the complex temporal relations, which allows eliminating the costly temporal self-attention in video. Therefore, there is no additional temporal modeling design in the IST block.
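A minimal sketch of the alignment of Eqs. (5)-(7) and the pooling of Eq. (3a) is given below. Several details are assumptions on our part: the exact layer stack of Conv(·), the linear head `to_points` that regresses the two points, the use of Euclidean distance for dist(·), and the illustrative values of η, δ, β.

```python
import torch
import torch.nn as nn

class ImplicitAlignment(nn.Module):
    """Sketch of ILA's mask-based alignment for one block (Eqs. (5)-(7), (3a)).
    eta, delta, beta are illustrative defaults, not the paper's values."""

    def __init__(self, dim, eta=1.0, delta=0.1, beta=2.0):
        super().__init__()
        # Lightweight convolution module (assumed architecture) predicting two
        # interactive points from the channel-wise concatenated pair, Eq. (5).
        self.conv = nn.Sequential(
            nn.Conv2d(2 * dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_points = nn.Linear(dim, 4)  # (y, x) for each of the two frames
        self.eta, self.delta, self.beta = eta, delta, beta

    def make_mask(self, points, h, w):
        # Coordinates scaled to [-1, 1]; weight stays at eta within radius
        # delta of the point and decays linearly with distance s, Eq. (6).
        ys = torch.linspace(-1, 1, h, device=points.device)
        xs = torch.linspace(-1, 1, w, device=points.device)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (h, w, 2)
        s = torch.norm(grid[None] - points[:, None, None, :], dim=-1)      # (B, h, w)
        return torch.where(s <= self.delta,
                           torch.full_like(s, self.eta),
                           (self.eta - self.beta * (s - self.delta)).clamp(min=0.0))

    def forward(self, z_t, z_prev):
        # z_t, z_prev: (B, d, h, w) features of frames t and t-1, reshaped
        # from the (hw, d) token layout into a spatial grid.
        B, d, h, w = z_t.shape
        feat = self.conv(torch.cat([z_t, z_prev], dim=1)).flatten(1)
        p = torch.tanh(self.to_points(feat)).view(B, 2, 2)  # two points in [-1, 1]^2
        m_t = self.make_mask(p[:, 0], h, w)                 # (B, h, w)
        m_prev = self.make_mask(p[:, 1], h, w)
        a_t = m_t.unsqueeze(1) * z_t                        # Eq. (7a)
        a_prev = m_prev.unsqueeze(1) * z_prev               # Eq. (7b)
        z_mut = a_t.flatten(2).mean(-1)                     # MI token, Eq. (3a)
        return a_t, a_prev, z_mut
```

Since each frame pair is processed independently, the T − 1 pairs of a clip can be batched along the first dimension, which is what allows the alignment to run in parallel rather than serially as in patch-matching approaches.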
Training
The loss function of our framework consists of two parts. The first part is the supervised prompt-enhanced similarity loss, where the cosine similarity between the video representation v and the text representation c is computed by:

v = Avg(MSA([z^{(L)}_{1,cls}, ..., z^{(L)}_{T,cls}])),    cos(v, c) = ⟨v, c⟩ / (‖v‖ · ‖c‖).    (8)

Here Avg(·) is average pooling. The model maximizes cos(v, c) if v and c are matched, and otherwise minimizes it. The second part is the alignment loss for aligning pairwise frames in each IST block. Particularly, we align the average-pooled feature, i.e. the mutual information token ẑ^{(ℓ)}_{t,mut} of Eq. (3a), using the cosine similarity:

cos^{(ℓ)}_t = ⟨ẑ^{(ℓ)}_{t,mut}, ẑ^{(ℓ)}_{t−1,mut}⟩ / (‖ẑ^{(ℓ)}_{t,mut}‖ · ‖ẑ^{(ℓ)}_{t−1,mut}‖),    (9)

where cos^{(ℓ)}_t is the similarity score for the t-th frame pair in block ℓ. The loss function l_a is formulated by summing up the similarity scores:

l_a = − Σ_{ℓ=1}^{L} Σ_{t=2}^{T} cos^{(ℓ)}_t.    (10)

Finally, we optimize Eq. (8) and Eq. (10) simultaneously with a loss weight parameter γ.
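For concreteness, a minimal sketch of the two objectives follows. Two details are assumptions on our part, since they are not specified above: the similarity loss is instantiated as a CLIP-style symmetric cross-entropy over a batch of matched video-text pairs with an assumed temperature tau, and the batch reduction of l_a is a mean.

```python
import torch
import torch.nn.functional as F

def similarity_loss(v, c, tau=0.01):
    # Eq. (8) turned into a batch objective: maximize cos(v, c) for matched
    # pairs and minimize it for mismatched ones (CLIP-style; tau is assumed).
    v, c = F.normalize(v, dim=-1), F.normalize(c, dim=-1)
    logits = v @ c.t() / tau
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def alignment_loss(mut_tokens):
    # Eq. (10): mut_tokens is a list over the L blocks, each entry of shape
    # (B, T, d) holding the mutual information tokens of Eq. (3a).
    loss = 0.0
    for z in mut_tokens:
        cos_t = F.cosine_similarity(z[:, 1:], z[:, :-1], dim=-1)  # Eq. (9)
        loss = loss - cos_t.sum(dim=1).mean()  # sum over t and blocks, mean over batch
    return loss

# Overall objective with loss weight gamma:
# total = similarity_loss(v, c) + gamma * alignment_loss(mut_tokens)
```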
Experiments
We evaluate our method on two datasets: Kinetics-400 [23] and Something-Something-V2 [32]. Four variants are considered, namely the ILA models based on ViT-B/32, ViT-B/16, ViT-L/14, and ViT-L/14@336. We sparsely sample 8 or 16 frames to form a video clip, both in training and inference. Additional implementation and hyperparameter details, as well as more experiments, are provided in the supplementary materials.
Main Results
Kinetics-400. In Table 1, we report the performance of our proposed method on Kinetics-400, compared with recent state-of-the-art approaches, including methods with random initialization, ImageNet-1k/21k pretraining, and web-scale pretraining.
Compared to methods pretrained on ImageNet [11], ILA-ViT-L with 8 frames outperforms the best competitor MViTv2-L [28] by 1.9% in accuracy with 4× fewer FLOPs. We also observe that ILA surpasses other baselines, e.g., Swin [31] and TimeSformer [5], by large margins. This indicates the strong representations of the CLIP model, showing the great potential of large-scale visual-language pretraining.
In comparison with methods pretrained on web-scale images, e.g. JFT-300M/3B, ILA exhibits significant advantages. Our ILA-ViT-L exceeds ViViT-H by 3.2% with 12× less computation, and exceeds CoVeR by 0.8%. Note that CoVeR uses much more training data (3B images) compared to CLIP (400M image-text pairs).
In addition, when compared with the recent CLIP-based methods, ILA achieves the best performance. ILA-ViT-B with 16 frames surpasses the typical CLIP-based model ActionCLIP-B by 1.9% with 2× fewer FLOPs. Moreover, our largest model outperforms the best competitors X-CLIP-L and EVL-L by 1% with comparable or much less computation. Though MTV-H performs slightly higher (89.1%) than ILA (88.7%), it employs the WTS dataset, which contains 70M video-text pairs with about 17B images, far larger than the data used in CLIP. These observations show that our alignment-based temporal modeling can capture more comprehensive motion clues than the insufficient temporal attention of X-CLIP and EVL, without increasing the computational burden.
Something-Something-V2. Table 2 reports the comparisons on SSv2. This dataset focuses on human-object action recognition, in which the open-domain semantics are limited; we assume the rich textual representation of the CLIP language branch helps less here. Therefore, we use the cross-entropy loss with one-hot labels, instead of the visual-text similarity loss in Eq. (8). We also increase the number of convolution layers for better alignment. Moreover, we freeze the weights of CLIP for stability.
SSv2 is a motion-heavy dataset and largely depends on temporal modeling. Methods pretrained on CLIP usually produce weaker results than those pretrained on Kinetics-400. For example, X-CLIP-B only achieves 57.8% in accuracy, while MViTv1-B produces much higher results (64.7%) with similar computation. Similarly, the result of EVL-ViT-B is also unsatisfactory (61.7%). This phenomenon can be attributed to three factors. (1) The temporal modeling in X-CLIP and EVL is insufficient: in pursuit of high efficiency, they adopt frame-level or local temporal attention on top of the CLIP features, which inevitably harms the results. (2) Tuning the weights of CLIP is very challenging, where small perturbations can easily prejudice the primal CLIP. We assume the reason is that SSv2 is a dataset with relatively limited semantics. Even when assigning a very small learning rate to the CLIP weights and a large one to the other weights, the model is still prone to exploding gradients. This reduces the flexibility of parameter tuning and leads to insufficient training of the model. (3) Pretraining on Kinetics can bring significant advantages compared to pretraining on CLIP data.
As shown in the table, ILA-ViT-B (8 frames) achieves 65.0%, comparable with MViTv1-B and much higher than X-CLIP and EVL. Moreover, ILA-ViT-L/14@336px obtains promising performance, reaching 70.2% top-1 and 91.8% top-5. It outperforms EVL-ViT-L/14@336px by 2.2% on top-1 with 2× fewer frames and over 2× fewer FLOPs. This indicates that the proposed implicit alignment can comprehensively model the temporal information at a low computational cost.
Ablation Study
Generalization to different backbones. To demonstrate that ILA is a versatile module that can be plugged into various backbones, we experiment with a CLIP-based model (EVL-ViT-B/16, 8 frames [30]) as well as an ImageNet-based architecture (TimeSformer-ViT-B/16, 8 frames [5]). For EVL, we insert our alignment into the CLIP backbone while keeping the rest unchanged. For TimeSformer, we replace the temporal attention with the proposed alignment module. The results are summarized in Table 3. The utilization of ILA results in a 0.6% and 1.8% performance gain for the CLIP-based and ImageNet-based backbones, respectively, demonstrating that ILA is compatible with modern networks.
Effectiveness of implicit alignment. We compare ILA with ATA [65], an alternative based on patch alignment, and other temporal modeling approaches, i.e. X-CLIP [33], Divided Spatio-Temporal Attention [5], Temporal Shift [51], and Average Pooling. The baseline employs the loss in Eq. (8) for CLIP without temporal modeling. Average Pooling indicates forming the mutual information token in Eq. (3a) without alignment. Table 4 shows the comparison results. We have the following observations: (1) ILA outperforms the baseline by 1.5% in top-1 accuracy with minor additional computational cost, indicating that ILA can effectively promote CLIP for video tasks. (2) Compared to ATA, which uses patch-level movement for alignment with cubic complexity, ILA offers better results with nearly 2× fewer FLOPs by learning an implicit mask with quadratic complexity. (3) ILA also outperforms other approaches such as X-CLIP's temporal attention and temporal shifting, highlighting the effectiveness of ILA. (4) ILA achieves better results than average pooling, indicating that the improvement results from our implicit alignment instead of the pooling operation.

Comparison of mutual information. In our work, we assume that ILA can enhance the mutual information between frames, thereby boosting the recognition performance. Here, we compare the mutual information of ILA in the last visual layer with other approaches. In particular, we calculate the averaged Wasserstein Distance (i.e. Earth Mover's Distance) [2] between adjacent frames, which is negatively correlated with mutual information. Table 5 presents the results. We observe that models with additional alignment have lower Wasserstein Distance and higher performance, suggesting that the alignment can indeed correlate adjacent frames.
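As a rough illustration of how such a metric could be computed (the exact protocol behind Table 5 is not specified here), one option is to treat each feature channel's token values as a 1-D empirical distribution and average the per-channel EMD over adjacent frame pairs:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def mean_adjacent_emd(frames):
    # frames: (T, N, d) array of N spatial tokens per frame from the last
    # visual layer. Returns the average per-channel 1-D EMD between adjacent
    # frames; this is one plausible proxy, not necessarily the exact protocol.
    T, N, d = frames.shape
    vals = [np.mean([wasserstein_distance(frames[t, :, k], frames[t - 1, :, k])
                     for k in range(d)])
            for t in range(1, T)]
    return float(np.mean(vals))
```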
Impact of different aligning strategies. ILA aligns two consecutive frames, and here we experiment with the following alternatives: (1) Align-First: each frame is aligned with the first frame; (2) Align-Middle: each frame is aligned with the middle frame. We observe in Table 6 that anchor-frame-based alignments are inferior to adjacent alignment. The reason may be that it is not reliable to align two frames that are too far apart.

Impact of inserting locations of alignment. We divide the visual branch of ViT-B/32 (12 blocks) into 4 groups, each containing 3 blocks. We plug our ILA into each group individually to explore the impact of different inserting locations. Table 7 shows the results. Insertion of ILA into the four groups outperforms the baseline CLIP by 0.6%, 0.7%, 0.6% and 0.5% in accuracy, respectively. We see that inserting ILA into shallow blocks performs slightly better than inserting it into deep ones, showing that aligning low-level features can encode more temporal clues.

Operators in alignment module. To validate the effectiveness and efficiency of the 2D convolution module in ILA, we experiment with an alternative choice of window attention in Eq. (5). Table 8 depicts the comparison results. It demonstrates that window attention requires high computational resources and is difficult to optimize, producing limited results.

Impact of mutual information token. Here we discuss different approaches to exploiting the aligned features. ILA employs a mutual information (MI) token obtained by pooling the aligned features and concatenating the result with the frame tokens. Another choice is element-wise addition between the frame and the aligned features. In addition, one can also directly concatenate the tokens of the aligned features to the frame tokens, resulting in 2× tokens in spatial attention.
The results are shown in Table 9. It can be observed that both element-wise addition and direct concatenation perform worse than ILA. Furthermore, their inference latencies are much higher than that of ILA. A plausible reason is that the aligned features are produced by simple mask weighting of the frame features and thus contain much redundant information when used via addition or concatenation, whereas the pooling operation effectively removes such irrelevant information and boosts the model performance.

Table 9. Ablation study of the mutual information (MI) token. ILA employs an MI token obtained by pooling & concatenation of aligned features. Other choices include element-wise addition, or direct concatenation.
Implementation | Acc. | FLOPs | Latency (ms)
Conclusion
We introduced Implicit Learnable Alignment (ILA), a novel temporal modeling method for video recognition. ILA performs frame alignment so as to encode motion information in lieu of the widely used temporal attention operation. Particularly, ILA employs only an implicit and coarse feature alignment via a weighting mask. By finding the active interaction position in the frame, the mask is generated with higher weights around that position and lower weights elsewhere. Extensive experiments demonstrate the effectiveness and efficiency of ILA, showcasing its promising adaptation ability to CLIP and compatibility with modern visual backbones.
Appendix

A. Implementation Details of ILA

Training Details. The experiments are conducted on 8 NVIDIA 32G V100 GPUs. The training configuration is listed in Table 10. It is worth noting that our sampling strategies for Kinetics-400 and Something-Something-V2 differ during the training phase. We implement the sparse sampling strategy on Kinetics-400. For SSv2, we uniformly sample the entire video at predefined temporal intervals without group division. In terms of the training on Kinetics-400, the base learning rate indicates the learning rate of the original CLIP parameters; the learning rate for the other, additional parameters is 10× larger than the base learning rate. In terms of the training on SSv2, we exclude the prompt branch and freeze the weights of the CLIP visual branch for training stability, so the base learning rate is used for the remaining parameters.
B. Complexity of ILA
We analyze various temporal modeling methods (Spatial Attention [5], Joint Attention [5], Divided ST Attention [5], ATA [65], X-CLIP [33] and our proposed ILA) in terms of complexity, as shown in Table 11. The complexity of our alignment process is O(Thwk²d) due to the 2D convolution-based operations; the complexity of the whole ILA consists of the implicit alignment O(Thwk²d) plus the spatial attention O(Th²w²d). Regarding Joint Attention and Divided Spatiotemporal Attention, Joint Attention requires more computational memory since it takes all patches into consideration, whereas Divided ST Attention only computes the temporal attention along the time axis. ATA is based on the Hungarian algorithm, whose complexity is O(N³); in practice, the complexity of Hungarian matching is O(Th³w³d) in the video domain, and ATA further requires additional temporal attention with complexity O(T²hwd). X-CLIP adopts a frame-level temporal attention with complexity O(T²d), which however obtains a suboptimal result. We can observe that our proposed ILA achieves better performance at low complexity.
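To make the asymptotic terms concrete, the following back-of-the-envelope comparison plugs illustrative ViT-B/32, 8-frame values (T = 8, h = w = 7, d = 768, k = 3; these numbers are ours, for illustration only) into the leading terms above:

```python
# Leading complexity terms (multiply-accumulates, up to constant factors).
T, h, w, d, k = 8, 7, 7, 768, 3

align = T * h * w * k**2 * d         # ILA's conv-based implicit alignment
spatial = T * (h * w) ** 2 * d       # spatial self-attention per block
joint = T**2 * (h * w) ** 2 * d      # joint spatiotemporal attention
hungarian = T * (h * w) ** 3 * d     # ATA's Hungarian matching term

for name, v in [("align", align), ("spatial", spatial),
                ("joint", joint), ("hungarian", hungarian)]:
    print(f"{name:9s} ~ {v:.2e}")
# The alignment term is tiny next to spatial attention, consistent with
# ILA adding only ~3G FLOPs over the spatial-attention baseline in Table 4.
```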
C. Qualitative Analysis
In order to investigate the quality of three temporal modeling approaches (Divided ST Attention [5], ATA [65], and ILA), we visualize their intermediate and last feature maps, as shown in Figure 5 and Figure 6, respectively. According to the illustrations, all three approaches capture the static semantic features, such as the static flowers on the desk. Moreover, our proposed ILA pays more attention to the action area of arranging flowers (e.g. the 5-th frame in the last row of Figure 6) instead of the static flowers on the desk. This indicates that ILA can leverage the learnable mask to achieve implicit temporal modeling, focusing on the vital motion region. Divided ST attention prefers to focus on static objects instead of significant actions, while ATA attempts to concentrate on discontinuous regions with inaccurate positions. A plausible reason is that ATA utilizes patch-movement-based alignment, which may destroy the continuity of the semantic distribution.

Table 11. Complexities of different methods, with results on Kinetics-400. T, h, w, d, and k refer to the temporal size, spatial height of the input, spatial width of the input, channel depth of the input, and kernel size of the convolution, respectively.
Temporal Modeling | Complexity | Acc. (%) | FLOPs
Spatial Attention [5] | O(Th²w²d) | 79.8 | 37G
Joint Attention [5] | O(T²h²w²d) | 80.4 | 71G
Divided ST Attention [5] | O(T²hwd + Th²w²d) | 80.6 | 58G
ATA [65] | O(Th³w³d + T²hwd + Th²w²d) | 81.0 | 60G
X-CLIP [33] | O(T²d + Th²w²d) | 80.4 | 39G
ILA (ours) | O(Thwk²d + Th²w²d) | 81.3 | 40G
Figure 1. Top-1 accuracy comparison with state-of-the-art methods on Kinetics-400 [23] under different FLOPs. ILA achieves competitive results. Best viewed in color.

Figure 2. The proposed ILA employs an implicit and coarse mask to align the features, focusing on the active interaction region. We hypothesize that important motion and action clues can be derived from the aligned features.

Figure 3. The structures of three different models. (a) The divided spatiotemporal attention in TimeSformer [5]. (b) The frame-level temporal attention in X-CLIP [33]. (c) The alignment-based temporal modeling in our ILA.

Figure 5. Visualization of intermediate feature maps of different temporal modeling approaches on Kinetics-400. (a) refers to raw frames; (b), (c) and (d) refer to Divided ST Attention, ATA and ILA, respectively.

Figure 6. Visualization of the last feature maps of different temporal modeling approaches on Kinetics-400. (a) refers to raw frames; (b), (c) and (d) refer to Divided ST Attention, ATA and ILA, respectively.
Table 1. Comparison with the state-of-the-art on Kinetics-400. The FLOPs per view of each method is reported. We categorize methods by pretraining data.

Model | Pretrain | Frames | Top-1 | Top-5 | Views | FLOPs (G)

Random initialization:
MViTv1-B [13] | - | 64 | 81.2 | 95.1 | 3×3 | 455

ImageNet pretraining:
Uniformer-B [26] | IN-1K | 32 | 83.0 | 95.4 | 4×3 | 259
TimeSformer-L [5] | IN-21K | 96 | 80.7 | 94.7 | 1×3 | 2380
ATA [65] | IN-21K | 32 | 81.9 | 95.5 | 4×3 | 793
Mformer-HR [35] | IN-21K | 16 | 81.1 | 95.2 | 10×3 | 959
Swin-L (@384px) [31] | IN-21K | 32 | 84.9 | 96.7 | 10×5 | 2107
MViTv2-L (@312px) [28] | IN-21K | 40 | 86.1 | 97.0 | 5×3 | 2828

Web-scale image pretraining:
ViViT-H/16×2 [3] | JFT-300M | 32 | 84.8 | 95.8 | 4×3 | 8316
TokenLearner-L/10 [39] | JFT-300M | - | 85.4 | 96.3 | 4×3 | 4076
CoVeR [61] | JFT-3B | - | 87.2 | - | 1×3 | -

Web-scale language-image pretraining:
ActionCLIP-B/16 [51] | CLIP-400M | 32 | 83.8 | 96.2 | 10×3 | 563
A6 [22] | CLIP-400M | 16 | 76.9 | 93.5 | - | -
EVL-ViT-B/16 [30] | CLIP-400M | 16 | 83.6 | - | 1×3 | 296
EVL-ViT-L/14 [30] | CLIP-400M | 16 | 87.0 | - | 1×3 | 1350
EVL-ViT-L/14@336px [30] | CLIP-400M | 32 | 87.7 | - | 1×3 | 6068
X-CLIP-B/16 [33] | CLIP-400M | 16 | 84.7 | 96.8 | 4×3 | 287
X-CLIP-L/14 (@336px) [33] | CLIP-400M | 16 | 87.7 | 97.4 | 4×3 | 3086
AIM-ViT-L/14 [59] | CLIP-400M | 16 | 87.3 | 97.6 | 1×3 | 1868
ST-Adapter-ViT-L/14 [34] | CLIP-400M | 16 | 86.9 | 97.6 | 1×3 | 1375
MTV-H [58] | WTS | 32 | 89.1 | 98.2 | 4×3 | 3705
ILA-ViT-B/32 | CLIP-400M | 8 | 81.3 | 95.0 | 4×3 | 40
ILA-ViT-B/32 | CLIP-400M | 16 | 82.4 | 95.8 | 4×3 | 75
ILA-ViT-B/16 | CLIP-400M | 8 | 84.0 | 96.6 | 4×3 | 149
ILA-ViT-B/16 | CLIP-400M | 16 | 85.7 | 97.2 | 4×3 | 295
ILA-ViT-L/14 | CLIP-400M | 8 | 88.0 | 98.1 | 4×3 | 673
ILA-ViT-L/14@336px | CLIP-400M | 16 | 88.7 | 97.8 | 4×3 | 3130
Table 2. Performance comparison with the state-of-the-art on Something-Something-V2. The FLOPs per view of each method is reported.

Model | Pretrain | Frames | Top-1 Acc. | Top-5 Acc. | Views | FLOPs (G)
ViViT-L [3] | IN-21K+K400 | 16 | 65.4 | 89.8 | 1×3 | 903
TimeSformer-L [5] | IN-21K | 96 | 62.4 | 81.0 | 1×3 | 2380
TimeSformer-HR [5] | IN-21K | 16 | 62.2 | 78.0 | 1×3 | 1703
ATA [65] | IN-21K | 32 | 67.1 | 90.8 | 4×3 | 793
MViTv1-B [13] | K400 | 16 | 64.7 | 89.2 | 1×3 | 70.5
MViTv1-B [13] | K400 | 32 | 67.1 | 90.8 | 1×3 | 170
Mformer-B [35] | IN-21K+K400 | 16 | 66.5 | 90.1 | 1×3 | 370
Mformer-L [35] | IN-21K+K400 | 32 | 68.1 | 91.2 | 1×3 | 1185
Mformer-HR [35] | IN-21K+K400 | 64 | 67.1 | 90.6 | 1×3 | 959
X-CLIP-B/16 [33] | CLIP-400M | 8 | 57.8 | 84.5 | 4×3 | 145
AIM-ViT-B/16 [59] | CLIP-400M | 8 | 66.4 | 90.5 | 1×3 | 208
AIM-ViT-L/14 [59] | CLIP-400M | 32 | 69.4 | 92.3 | 1×3 | 3836
EVL-ViT-B/16 [30] | CLIP-400M | 16 | 61.7 | - | 1×3 | 345
EVL-ViT-L/14 [30] | CLIP-400M | 32 | 66.7 | - | 1×3 | 3216
EVL-ViT-L/14@336px [30] | CLIP-400M | 32 | 68.0 | - | 1×3 | 8090
ILA-ViT-B/16 | CLIP-400M | 8 | 65.0 | 89.2 | 4×3 | 214
ILA-ViT-B/16 | CLIP-400M | 16 | 66.8 | 90.3 | 4×3 | 438
ILA-ViT-L/14 | CLIP-400M | 8 | 67.8 | 90.5 | 4×3 | 907
ILA-ViT-L/14@336px | CLIP-400M | 16 | 70.2 | 91.8 | 4×3 | 3723
Table 3. Generalization ability of ILA on various visual backbones for Kinetics-400.

Model | Pre-training | Acc. (%) | FLOPs
EVL [30] | CLIP-400M | 82.9 | 150G
EVL + ILA | CLIP-400M | 83.5 | 162G
TimeSformer [5] | IN-21K | 78.0 | 196G
TimeSformer + ILA | IN-21K | 79.8 | 164G
Table 4. Effectiveness of implicit alignment on Kinetics-400. Average Pooling indicates forming the mutual information token in Eq. (3a) without alignment.

Model | Acc. (%) | FLOPs
Baseline | 79.8 | 37G
X-CLIP [33] | 80.4 | 39G
CLIP + Divided ST Attention [5] | 80.6 | 58G
CLIP + Temporal Shift [51] | 80.1 | 37G
CLIP + ATA [65] | 81.0 | 60G
CLIP + Average Pooling | 80.4 | 39G
CLIP + ILA | 81.3 | 40G
Table 5. Comparison of mutual information on Kinetics-400. MI (EMD) refers to the average Wasserstein Distance between neighbouring frames.

Model | Acc. (%) | MI (EMD)
Baseline | 79.8 | 0.56
X-CLIP [33] | 80.4 | 0.51
CLIP + Divided ST Attention [5] | 80.6 | 0.47
CLIP + ATA [65] | 81.0 | 0.30
CLIP + ILA | 81.3 | 0.13
Table 6. Ablation study of different aligning strategies on K-400.

Aligning Strategy | Top-1 (%) | Top-5 (%)
Align-First | 80.7 | 94.5
Align-Middle | 80.8 | 94.6
Adjacent frame | 81.3 | 95.0
Table 7. Comparisons of different inserting locations.

Configuration | Acc. (%) | FLOPs
None | 79.8 | 37G
Block 1-3 | 80.4 | 38G
Block 4-6 | 80.5 | 38G
Block 7-9 | 80.4 | 38G
Block 10-12 | 80.3 | 38G
ILA (Block 1-12) | 81.3 | 40G
Table 8. Evaluation of different operators in Eq. (5). We experiment with an alternative of window attention with size 3×3, instead of the convolution.

Basic Operators | Acc. (%) | FLOPs
2D Convolution | 81.3 | 40G
Window Attention | 80.8 | 114G
Convolution Module in SSv2. In SSv2, we increase the number of convolution layers in the alignment. Particularly, two additional 3×3 convolution layers plus batch normalization and ReLU are added. Compared to the original convolution module, this brings a 0.6% improvement in top-1 accuracy.

Table 10. Default implementation details of our method.

Training Configuration | Kinetics-400 | Something-Something V2
Optimisation:
Optimizer | AdamW | AdamW
Optimizer betas | (0.9, 0.98) | (0.9, 0.98)
Batch size | 256 | 256
Learning rate schedule | Cosine | Cosine
Learning warmup epochs | 5 | 5
Base learning rate | 8e-6 | 5e-4
Minimal learning rate | 8e-8 | 5e-6
Training steps | 50000 | 30000
Data augmentation:
RandomFlip | 0.5 | 0.5
MultiScaleCrop | (1, 0.875, 0.75, 0.66) | (1, 0.875, 0.75, 0.66)
ColorJitter | 0.8 | 0.8
GrayScale | 0.2 | 0.2
Label smoothing | 0.1 | 0.1
Mixup | 0.8 | 0.8
Cutmix | 1.0 | 1.0
Other regularisation:
Weight decay | 0.003 | 0.01
Appendix A. Implementation Details of ILA
[1] Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. In NeurIPS, 2021.
[2] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
[3] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In ICCV, 2021.
[4] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[5] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021.
[6] Dimitri P Bertsekas. A new algorithm for the assignment problem. Mathematical Programming, 21(1):152-171, 1981.
[7] Jiezhang Cao, Yawei Li, Kai Zhang, and Luc Van Gool. Video super-resolution transformer. arXiv preprint arXiv:2106.06847, 2021.
[8] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, 2017.
[9] Kelvin C. K. Chan, Shangchen Zhou, Xiangyu Xu, and Chen Change Loy. Basicvsr++: Improving video super-resolution with enhanced propagation and alignment. In CVPR, 2022.
[10] Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In CVPR, 2018.
[11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
[13] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021.
[14] Christoph Feichtenhofer. X3d: Expanding architectures for efficient video recognition. In CVPR, 2020.
[15] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019.
[16] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544, 2021.
[17] Qi Han, Zejia Fan, Qi Dai, Lei Sun, Ming-Ming Cheng, Jiaying Liu, and Jingdong Wang. On the connection between local attention and dynamic depth-wise convolution. In ICLR, 2022.
[18] Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. Learning spatio-temporal features with 3d residual networks for action recognition. In ICCV, 2017.
[19] Rui Hou, Chen Chen, and Mubarak Shah. Tube convolutional neural network (t-cnn) for action detection in videos. In ICCV, 2017.
[20] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, 2021.
[21] Shihao Jiang, Dylan Campbell, Yao Lu, Hongdong Li, and Richard Hartley. Learning to estimate hidden motions with global motion aggregation. In ICCV, 2021.
[22] Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, and Weidi Xie. Prompting visual-language models for efficient video understanding. In ECCV, 2022.
[23] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. In CVPR, 2017.
[24] Heeseung Kwon, Manjin Kim, Suha Kwak, and Minsu Cho. Motionsqueeze: Neural motion feature learning for video understanding. In ECCV, 2020.
[25] Dong Li, Zhaofan Qiu, Qi Dai, Ting Yao, and Tao Mei. Recurrent tubelet proposal and recognition networks for action detection. In ECCV, 2018.
[26] Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unified transformer for efficient spatiotemporal representation learning. In ICLR, 2022.
[27] Shuyuan Li, Huabin Liu, Rui Qian, Yuxi Li, John See, Mengjuan Fei, Xiaoyuan Yu, and Weiyao Lin. Ta2n: Two-stage action alignment network for few-shot action recognition. In AAAI, 2022.
[28] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. Improved multiscale vision transformers for classification and detection. In CVPR, 2022.
[29] Jiayi Lin, Yan Huang, and Liang Wang. Fdan: Flow-guided deformable alignment network for video super-resolution. arXiv preprint arXiv:2105.05640, 2021.
[30] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard de Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In ECCV, 2022.
[31] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In CVPR, 2022.
[32] Joanna Materzynska, Tete Xiao, Roei Herzig, Huijuan Xu, Xiaolong Wang, and Trevor Darrell. Something-else: Compositional action recognition with spatial-temporal interaction networks. In CVPR, 2020.
[33] Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. Expanding language-image pretrained models for general video recognition. In ECCV, 2022.
[34] Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, and Hongsheng Li. St-adapter: Parameter-efficient image-to-video transfer learning for action recognition. In NeurIPS, 2022.
[35] Mandela Patrick, Dylan Campbell, Yuki Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, and João F Henriques. Keeping your eye on the ball: Trajectory attention in video transformers. In NeurIPS, 2021.
[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
[37] Deva Ramanan, David A Forsyth, and Andrew Zisserman. Strike a pose: Tracking people by finding stylized poses. In CVPR, 2005.
[38] Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, and Jiwen Lu. Denseclip: Language-guided dense prediction with context-aware prompting. In CVPR, 2022.
[39] Michael Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, and Anelia Angelova. Tokenlearner: Adaptive space-time tokenization for videos. In NeurIPS, 2021.
[40] Baifeng Shi, Qi Dai, Judy Hoffman, Kate Saenko, Trevor Darrell, and Huijuan Xu. Temporal action detection with multi-level supervision. In ICCV, 2021.
[41] Baifeng Shi, Qi Dai, Yadong Mu, and Jingdong Wang. Weakly-supervised action localization by generative attention modeling. In CVPR, 2020.
[42] Shuwei Shi, Jinjin Gu, Liangbin Xie, Xintao Wang, Yujiu Yang, and Chao Dong. Rethinking alignment in video super-resolution transformers. In NeurIPS, 2022.
[43] Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In CVPR, 2016.
[44] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NeurIPS, 2014.
[45] Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu Qiao, and Yu-Gang Jiang. Resformer: Scaling vits with multi-resolution training. In CVPR, 2023.
[46] Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. Tdan: Temporally-deformable alignment network for video super-resolution. In CVPR, 2020.
[47] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, 2021.
[48] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
[49] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In CVPR, 2018.
[50] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
[51] Mengmeng Wang, Jiazheng Xing, and Yong Liu. Actionclip: A new paradigm for video action recognition. In ECCV, 2022.
[52] Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, and Philip HS Torr. Fast online object tracking and segmentation: A unifying approach. In CVPR, 2019.
[53] Xintao Wang, Kelvin C. K. Chan, Ke Yu, Chao Dong, and Chen Change Loy. Edvr: Video restoration with enhanced deformable convolutional networks. In CVPR, 2019.
[54] Mingyu Wu, Boyuan Jiang, Donghao Luo, Junchi Yan, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, and Xiaokang Yang. Learning comprehensive motion representation for action recognition. In AAAI, 2021.
[55] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018.
[56] Zhen Xing, Qi Dai, Han Hu, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. Svformer: Semi-supervised video transformer for action recognition. In CVPR, 2023.
[57] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In EMNLP, 2021.
[58] Shen Yan, Xuehan Xiong, Anurag Arnab, Zhichao Lu, Mi Zhang, Chen Sun, and Cordelia Schmid. Multiview transformers for video recognition. In CVPR, 2022.
[59] Taojiannan Yang, Yi Zhu, Yusheng Xie, Aston Zhang, Chen Chen, and Mu Li. Aim: Adapting image models for efficient video action recognition. In ICLR, 2023.
[60] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432, 2021.
[61] Bowen Zhang, Jiahui Yu, Christopher Fifty, Wei Han, Andrew M Dai, Ruoming Pang, and Fei Sha. Co-training transformer with videos and images improves action recognition. arXiv preprint arXiv:2112.07175, 2021.
[62] Renrui Zhang, Rongyao Fang, Wei Zhang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free clip-adapter for better vision-language modeling. In ECCV, 2022.
[63] Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li. Pointclip: Point cloud understanding by clip. In CVPR, 2022.
[64] Xiaosong Zhang, Yunjie Tian, Lingxi Xie, Wei Huang, Qi Dai, Qixiang Ye, and Qi Tian. Hivit: A simpler and more efficient design of hierarchical vision transformer. In ICLR, 2023.
[65] Yizhou Zhao, Zhenyang Li, Xun Guo, and Yan Lu. Alignment-guided temporal attention for video action recognition. In NeurIPS, 2022.
[66] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal relational reasoning in videos. In ECCV, 2018.
| [
"https://github.com/Francis-Rings/ILA."
] |
[
"Baroclinic interaction of forced shock waves with random thermal gradients",
"Baroclinic interaction of forced shock waves with random thermal gradients"
] | [
"Joaquim P Jossy \nDepartment of Applied Mechanics\nIndian Institute of Technology\n110016Delhi, New DelhiIndia\n",
"Prateek Gupta \nDepartment of Applied Mechanics\nIndian Institute of Technology\n110016Delhi, New DelhiIndia\n"
] | [
"Department of Applied Mechanics\nIndian Institute of Technology\n110016Delhi, New DelhiIndia",
"Department of Applied Mechanics\nIndian Institute of Technology\n110016Delhi, New DelhiIndia"
] | [] | Density gradients aligned at an angle to pressure gradients result in baroclinic torque in fluid flows, generating vorticity. In this work, we study the vorticity generated by the baroclinic torque exerted by the interaction of pressure jumps across random two-dimensional shock waves with density gradients. A field of random two-dimensional shock waves has acoustic spectral energy scaling as $E_k \sim \varepsilon^{2/3} \ell^{-1/3} k^{-2}$, where $k$ is the wavenumber, $\varepsilon$ is the energy dissipation, and $\ell$ is the integral length scale of the field. Since the acoustic energy is broadband, pressure and velocity gradients exist in a wide range of length scales. We study the interaction of these broadband gradients with isobaric thermal gradients localized at a length scale in the spectral space. We show that the method of generating shock waves or injection of wave energy in the system governs the baroclinic interactions. For stochastically forced shock waves, baroclinic terms are negligible. Broadband vorticity with energy at least two orders of magnitude smaller is generated due to continuous variation in curvature of shock waves caused by stochastic forcing. On the other hand, shock waves maintained by energy rescaling result in the generation of coherent vorticity. We also discuss the relative magnitude of the baroclinic torque generated due to total density gradients compared to the one generated due to non-isentropic density gradients within the shock waves interacting with the pressure gradients. a) https://web.iitd.ac.in/~prgupta/ | 10.1063/5.0148159 | [
"https://export.arxiv.org/pdf/2304.11302v1.pdf"
] | 258,298,574 | 2304.11302 | d3a100efa13b9b86d965bfabbb6611222f067ca9 |
Baroclinic interaction of forced shock waves with random thermal gradients
22 Apr 2023
Joaquim P Jossy
Department of Applied Mechanics
Indian Institute of Technology
New Delhi 110016, India
Prateek Gupta
Department of Applied Mechanics
Indian Institute of Technology
New Delhi 110016, India
Baroclinic interaction of forced shock waves with random thermal gradients
22 Apr 2023. Preprint submitted to Physics of Fluids on February 28, 2023 (Dated: 25 April 2023). arXiv:2304.11302v1 [physics.flu-dyn]
Density gradients aligned at an angle to pressure gradients result in baroclinic torque in fluid flows, generating vorticity. In this work, we study the vorticity generated by the baroclinic torque exerted by the interaction of pressure jumps across random two-dimensional shock waves with density gradients. A field of random two-dimensional shock waves has acoustic spectral energy scaling as $E_k \sim \varepsilon^{2/3} \ell^{-1/3} k^{-2}$, where $k$ is the wavenumber, $\varepsilon$ is the energy dissipation, and $\ell$ is the integral length scale of the field. Since the acoustic energy is broadband, pressure and velocity gradients exist in a wide range of length scales. We study the interaction of these broadband gradients with isobaric thermal gradients localized at a length scale in the spectral space. We show that the method of generating shock waves or injection of wave energy in the system governs the baroclinic interactions. For stochastically forced shock waves, baroclinic terms are negligible. Broadband vorticity with energy at least two orders of magnitude smaller is generated due to continuous variation in curvature of shock waves caused by stochastic forcing. On the other hand, shock waves maintained by energy rescaling result in the generation of coherent vorticity. We also discuss the relative magnitude of the baroclinic torque generated due to total density gradients compared to the one generated due to non-isentropic density gradients within the shock waves interacting with the pressure gradients. a) https://web.iitd.ac.in/~prgupta/
I. INTRODUCTION
Finite amplitude or nonlinear acoustic waves exist in various engineering and scientific applications such as aerospace 1-5, nondestructive testing 6,7, astrophysics 8, nuclear energy 9, and medicine 10. Nonlinear effects such as acoustic streaming 11,12 and wave steepening 13-15 govern the propagation of such finite amplitude nonlinear acoustic waves in a compressible fluid. Due to steepening, nonlinear acoustic waves form propagating shock waves 13,14. Propagation, dissipation, and dispersion of these shock waves can be altered by the thermodynamic properties of the medium in which they propagate. Furthermore, the interaction of shock waves with inhomogeneities in the medium may generate additional hydrodynamic quantities of interest, such as vorticity. In this work, we study the propagation of forced shock waves in an inhomogeneously heated medium in two dimensions.
Ellermeier 16 showed that non-uniform cross-section and density stratification affect the nonlinear distortion of planar waves. Tyagi and Sujith 17 found that entropy gradients adversely affect nonlinear wave steepening and shock formation. Prasad 18 studied shock waves in slowly varying isentropic one-dimensional flows. Using multiple scales analysis, Prasad 18 examined the effect of non-uniform flow in a duct on the evolution and transport of acoustic power in the duct, and found that the mean flow has an impact on the residence time of shock waves in the duct due to viscous dissipation inside the shock waves. In this work, we focus on the propagation of shock waves in a quiescent, thermally inhomogeneous medium in two dimensions. Due to the misalignment of the pressure gradients across the shock waves and the density gradients, baroclinic torque generates vorticity. Such misalignment may exist due to the thermoviscous dissipation inside the shock waves or due to externally imposed thermal gradients. The vorticity caused by such baroclinic interaction can be used to enhance fuel-air mixing in air-breathing engines 19-23 or for active flow-control 24. We perform shock-resolved direct numerical simulations (DNS) of two-dimensional randomly generated shock waves and study their interaction with background thermal gradients. We focus on the length scales and magnitude of the vorticity generated due to the baroclinic interaction between the pressure gradients across the shock waves and the background density stratification generated by the thermal gradients.
In baroclinic flows, fluid density is a function of pressure, temperature, and the composition or concentration of the dissolved constituents 25. This dependency can create a misalignment between the directions of the density and pressure gradients, unlike barotropic flows, in which the density and pressure gradients are parallel. The resulting torque due to the misalignment of density and pressure gradients is called baroclinic torque and may increase or decrease the vorticity and circulation in the fluid, depending on the existing vorticity field. In inertially confined fusion reactors, baroclinic torque results in the Richtmyer-Meshkov instability 9,26, which causes mixing of the fuel and capsule material, limiting the efficiency of the reactor. In supersonic combustion chambers, shock-flame interaction results in vorticity generated by the resulting baroclinic torque, which is argued to be the primary cause of the deflagration to detonation transition 27. Additionally, baroclinic production is considered the primary mechanism of enhanced mixing downstream of an oblique shock wave as streamwise vorticity passes through it 28,29. Usually, only stratification or a prescribed density gradient is considered while analyzing the baroclinic torque. However, in a multidimensional field of shock waves propagating in a homogeneous medium, the density gradient is not necessarily parallel to the pressure gradient due to local entropy generation. Consequently, the interaction of pressure gradients with density gradients in a two-dimensional or three-dimensional field of shock waves is possible without a prescribed or background stratification as well. In this work, we analyze the interaction of a two-dimensional field of random shock waves with random background thermal gradients using two-dimensional shock-resolved direct numerical simulation (DNS) of the fully compressible Navier-Stokes equations. Such background thermal gradients result in stratification of the fluid in which the shock waves propagate. We generate these shock waves using two contrasting forcing methods. In one method, we force the momentum equations using a stochastic process, also used to generate equilibrium turbulence 30,31. In the other method, we rescale the density-weighted wave spectral energy in the system in a band of small wavenumbers (large length scales). In the latter forcing method, the energy lost to smaller length scales due to the acoustic spectral energy cascade 5,14 is compensated at each time step of the simulation. Furthermore, the relative phasing of the shock waves is maintained, since only the amplitudes are rescaled. Using numerical simulations, we show that the vorticity generated by the two forcing methods appears at different length scales. For random forcing, the vorticity and enstrophy are broadband, while they are coherent for energy rescaling. For energy rescaling, we discuss the enstrophy budgets to elucidate the mechanism of coherent vorticity generation in the system. To the best of our knowledge, this is the first study in which the interaction of random shock waves with an inhomogeneously stratified medium has been studied.
In Section II we discuss the governing equations of our simulations and the derivation of the wave spectral energy equation using second-order nonlinear perturbation equations. Since vorticity is expected to be generated due to baroclinic interactions, we also discuss the decomposition of the flow fields into rotational and dilatational components to isolate the effects on the spectral energy dynamics of the waves. In Section III we discuss the numerical setup and problem formulation.
In particular, we outline the two different forcing methods used to maintain equilibrium with shock waves without any preferred direction of propagation. We also show that for simulations with no background thermal gradients, the density-weighted wave spectral energy $\hat E^f_{wk}$ scales as $\varepsilon^{2/3} \ell^{-1/3} k^{-2}$ (see Section III for definitions). In Section IV we discuss the length scales of the two-dimensional field of random shock waves generated due to forcing, the magnitude and length scales of the vorticity generated, and the enstrophy budgets, before concluding in Section V.
II. FLOW FIELD DECOMPOSITION AND SPECTRAL ENERGY TRANSFER
In this section, we discuss the theoretical background and the derivation of nonlinear governing equations for perturbations in density, velocity, and pressure field from fully compressible Navier-Stokes equations for an ideal gas. Using these nonlinear perturbation equations, we derive the expressions for the flux of spectral wave energy and spectral dissipation. To this end, we discuss the possible decomposition of fields into the wave and the vortical components.
Dimensionless fully compressible Navier-Stokes equations for an ideal gas are given by,
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho u) = 0, \qquad (1a)$$
$$\frac{\partial u}{\partial t} + u \cdot \nabla u + \frac{\nabla p}{\rho} = \frac{1}{\rho Re_{ac}} \nabla \cdot (2\mu S) + \frac{1}{\rho Re_{ac}} \nabla \cdot \left[\left(\kappa - \frac{2\mu}{3}\right) D\right] + F, \qquad (1b)$$
$$\frac{\partial p}{\partial t} + u \cdot \nabla p + \gamma p \nabla \cdot u = \frac{1}{Re_{ac} Pr} \nabla \cdot (\alpha \nabla T) + \frac{\gamma - 1}{Re_{ac}} \left[2\mu S : S + \left(\kappa - \frac{2\mu}{3}\right) D : D\right], \qquad (1c)$$
combined with the dimensionless ideal gas equation of state,
$$\gamma p = \rho T. \qquad (2)$$
In Eqs. (1b) and (1c), S and D are the strain rate and dilatation tensors, respectively, given by,
$$S = \frac{1}{2}\left(\nabla u + \nabla u^{T}\right), \qquad D = (\nabla \cdot u)\, I. \qquad (3)$$
In Eq. (1b), $F$ is the external force vector used to maintain random shocks in a two-dimensional domain, and $\kappa$, $\mu$, $\alpha$, $Re_{ac}$, and $Pr$ are the dimensionless bulk viscosity, dimensionless dynamic viscosity, dimensionless thermal conductivity, acoustic Reynolds number, and Prandtl number, respectively. As discussed in detail in the next section, the external force vector $F$ may represent either the stochastic forcing or the energy rescaling. Furthermore, to obtain the dimensionless Eqs. (1a), (1b), and (1c), the following scales have been used,
$$[\rho] = \rho_m, \quad [p] = \gamma p_m, \quad [T] = T_m, \quad [u] = (c_m, c_m), \quad [x, y] = (L, L). \qquad (4)$$
The subscript $(\cdot)_m$ in the above relations denotes characteristic values corresponding to the quiescent medium. The velocity scale $c_m$ denotes the speed of sound at these characteristic values and $L$ corresponds to a length scale. Using the scales in Eq. (4), $Re_{ac}$ is defined as,
$$Re_{ac} = \frac{\rho_m c_m L}{\mu_m}, \qquad (5)$$
where $\mu_m$ is the dynamic viscosity at $T_m$. Throughout this work, we analyse results from numerical simulations of the fully nonlinear Eqs. (1a)-(1c).
u = u ′ , p = 1 γ + p ′ , ρ = f (x, y) + ρ ′ ,(6)
and
T = 1 f (x, y) + T ′ .(7)
Field 1/ f (x, y) corresponds to the base state thermal gradients (which may be interpreted as base state stratification as well). For a homogeneous medium, f (x, y) = 1. As we discuss in Section III, we initialize these thermal gradients within bands of wavenumbers which generates an inhomogeneous medium in which the random shock waves are forced and maintained at statistical steady state. .
To understand energy cascade and interaction with background thermal gradients, we first derive nonlinear governing equations in p ′ and u ′ . Since the thermoviscous terms can only have second order contribution at the leading order (see Gupta and Scalo 14 for a detailed discussion), we obtain the following dimensionless nonlinear equations correct up to second order,
∂ u ′ ∂t + u ′ · ∇u ′ + ∇p ′ f − p ′ ∇p ′ f 2 = 1 f Re ac ∇ · 2µS ′ + ∇ · κ − 2µ 3 D ′ ,(8a)∂ p ′ ∂t + u ′ · ∇p ′ + ∇ · u ′ + γ p ′ ∇ · u ′ = 1 Re ac Pr ∇ · (α∇T ′ ).(8b)
In Eq. (8b), the temperature perturbation $T'$ is related to the pressure and density perturbations ($p'$ and $\rho'$, respectively) via the dimensionless ideal gas equation of state (see Eq. (2)),
$$1 + \gamma p' = \left(f + \rho'\right)\left(\frac{1}{f} + T'\right) = 1 + \frac{\rho'}{f} + f T' + \rho' T'. \qquad (9)$$
The density perturbation $\rho'$ is governed by,
$$\frac{\partial \rho'}{\partial t} + \nabla \cdot \left(f u'\right) + \nabla \cdot \left(\rho' u'\right) = 0. \qquad (10)$$
The linearized counterparts of Eqs. (8a) and (8b),
$$\frac{\partial u'}{\partial t} + \frac{\nabla p'}{f} = 0, \qquad (11)$$
$$\frac{\partial p'}{\partial t} + \nabla \cdot u' = 0, \qquad (12)$$
do not admit a modal decomposition for non-uniform f . For a homogeneous background medium ( f (x, y) = 1 everywhere), the modal decomposition of the linearized Eqs. (11) and (12) is given by,
$$\left(u', v', p'\right)^{T} = \sum_{n} \hat\phi_{\pm}\, e^{i\left(k_{nx} x + k_{ny} y \pm \omega_n t\right)} + \sum_{n} \hat\phi_{0}\, e^{i\left(k_{nx} x + k_{ny} y\right)}, \qquad (13)$$
where u ′ and v ′ are components of velocity field perturbation u ′ . Modes φ 0 and φ ± are given by,
$$\hat\phi_{0} = \frac{1}{|k|}\left(-k_{ny}, k_{nx}, 0\right)^{T}, \qquad (14)$$
$$\hat\phi_{\pm} = \frac{1}{|k|\sqrt{2}}\left(k_{nx}, k_{ny}, \pm|k|\right)^{T}, \qquad (15)$$
where $\hat\phi_0$ represents the vortical mode, which has no contribution from the pressure perturbations, and $\hat\phi_\pm$ represent the acoustic modes. Since $\hat\phi_0, \hat\phi_\pm$ are orthogonal to each other, any two-dimensional compressible flow field can be projected on $\hat\phi_0, \hat\phi_\pm$. For the second order nonlinear Eqs. (8a)-(8b), writing the instantaneous solution as an expansion of $\hat\phi_0, \hat\phi_\pm$, we obtain,
$$\left(u', v', p'\right)^{T} = \sum_{n} \left[\hat\Omega(k_n, t)\, \hat\phi_0 + \hat R_{+}(k_n, t)\, \hat\phi_{+} + \hat R_{-}(k_n, t)\, \hat\phi_{-}\right] e^{i\left(k_{nx} x + k_{ny} y\right)}. \qquad (16)$$
Here Ω(k n ,t) corresponds to the vortical contribution of the perturbation field and R + (k n ,t), and R − (k n ,t) correspond to the acoustic contribution. Equation (16) yields a spectral energy density definition (since the modes φ 0 , φ ± are orthogonal) as,
$$\hat E_k(t) = \frac{1}{2}\left|\hat\Omega(k_n, t)\right|^2 + \frac{1}{2}\left|\hat R_{+}(k_n, t)\right|^2 + \frac{1}{2}\left|\hat R_{-}(k_n, t)\right|^2. \qquad (17)$$
Since no such modal decomposition is possible for waves in an inhomogeneous medium, one may follow Miura and Kida 32 and use the spatial Fourier modes of the density-weighted velocity perturbations $w' = \sqrt{f}\, u'$ to define the spectral energy as,
$$\hat E_k = \frac{1}{2}\left(\hat w_k^{*} \cdot \hat w_k + \hat p_k^{*} \hat p_k\right) = \underbrace{\frac{1}{2}\,\hat w_{vk}^{*} \cdot \hat w_{vk}}_{\hat E^{f}_{vk}} + \underbrace{\frac{1}{2}\left(\hat w_{wk}^{*} \cdot \hat w_{wk} + \hat p_k^{*} \hat p_k\right)}_{\hat E^{f}_{wk}}, \qquad (18)$$
where $(\cdot)^*$ denotes the complex conjugate, $\hat w_k$ is the Fourier transform of $w'$, and $\hat w_{vk}, \hat w_{wk}$ are obtained using the Helmholtz decomposition in the Fourier space,
$$\hat w_k = \hat w_{vk} + \hat w_{wk}, \qquad (19a)$$
$$ik \cdot \hat w_k = ik \cdot \hat w_{wk}, \quad \text{and} \qquad (19b)$$
$$ik \times \hat w_k = ik \times \hat w_{vk}. \qquad (19c)$$
Energy can thus be decomposed as $\hat E^f_{wk}$ and $\hat E^f_{vk}$, where the superscript $(\cdot)^f$ denotes energy evaluated using Fourier modes of $w'$. However, as we show in further sections, $\hat E^f_{vk}$ is not a good choice for quantifying the actual vortical energy in the system corresponding to the local rotation rate of the fluid. Hence, we reserve $E_{vk}$ for the actual vortical energy defined as,
$$E_{vk} = \frac{1}{2}\,\hat u_{vk}^{*} \cdot \hat u_{vk}, \qquad (20)$$
where u vk satisfies,
$$ik \times \hat u_{vk} = ik \times \hat u_k. \qquad (21)$$
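The projections in Eqs. (19) and (21), the energy definitions of Eqs. (18) and (20), and the shell binning b(·) used below in Eq. (24) all reduce to a few FFT operations. The following numpy sketch is our illustration, assuming a 2π-periodic N×N grid with integer wavenumbers; it is not the solver used for the simulations.

```python
# Sketch: Helmholtz projection and binned wave/vortical energy spectra.
import numpy as np

def energy_spectra(u, v, p, f):
    """u, v, p, f: (N, N) real fields; returns binned E^f_wk and E^f_vk."""
    N = u.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid division by zero at k = 0

    sqf = np.sqrt(f)                                 # density-weighted velocity w' = sqrt(f) u'
    wx, wy = np.fft.fft2(sqf * u) / N**2, np.fft.fft2(sqf * v) / N**2
    pk = np.fft.fft2(p) / N**2

    # Wave (dilatational) part: projection onto k (Eq. 19b); vortical part is the rest.
    dil = (kx * wx + ky * wy) / k2
    wwx, wwy = kx * dil, ky * dil
    wvx, wvy = wx - wwx, wy - wwy

    Ef_wk = 0.5 * (np.abs(wwx)**2 + np.abs(wwy)**2 + np.abs(pk)**2)   # Eq. (18)
    Ef_vk = 0.5 * (np.abs(wvx)**2 + np.abs(wvy)**2)

    # Shell binning b(.) of Eq. (24): sum modes with |k| in (kb - 1/2, kb + 1/2].
    kmag = np.sqrt(kx**2 + ky**2)
    edges = np.arange(0.5, N // 2)
    idx = np.digitize(kmag.ravel(), edges)
    def binned(E):
        return np.bincount(idx, weights=E.ravel(), minlength=edges.size + 1)[1:edges.size]
    return binned(Ef_wk), binned(Ef_vk)
```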
In the sections below, we use $\hat E^f_{wk}$ to represent the spectral energy in waves. In physical space, the perturbation energy equation can be written as,
$$\frac{\partial}{\partial t}\left(\frac{f |u'|^2}{2} + \frac{p'^2}{2}\right) + \nabla \cdot \left(u' p'\right) + f u' \cdot \left(u' \cdot \nabla u'\right) + \gamma p'^2 \nabla \cdot u' = f u' \cdot D_u + p' D_p, \qquad (22)$$
where $D_u$ and $D_p$ correspond to the thermoviscous dissipation terms from Eq. (8a) and Eq. (8b), respectively. Equation (22) is obtained by taking the dot product of Eq. (8a) with $f u'$ and multiplying Eq. (8b) with $p'$.
As shown by Gupta and Scalo 14, the $\gamma p'^2 \nabla \cdot u'$ term can be further decomposed into an infinite series expansion to yield an exact energy corollary for planar one-dimensional nonlinear acoustic waves. However, such a decomposition is hard in higher dimensions, even for waves propagating in a homogeneous medium. Consequently, we choose to work with the remaining terms for obtaining the spectral energy cascade terms. In spectral space, the energy equation can be represented as,
$$\frac{d \hat E_k}{dt} + \hat T_k = \hat D_k, \qquad (23)$$
where $\hat T_k$ is the spectral energy transfer function and $\hat D_k$ is the spectral dissipation function. From numerical simulations, we use discretized spectral quantities ($\hat E_k$, $\hat T_k$, $\hat D_k$, and derived quantities) to investigate the spectral energy dynamics using the corresponding binned quantities as,
$$\Phi_k = \sum_{k - \Delta k/2 < |k| < k + \Delta k/2} \hat\Phi_k = b\left(\hat\Phi_k\right), \qquad (24)$$
where $\Phi$ may denote spectral energy, spectral energy transfer, spectral dissipation, or derived quantities. The binned spectral energy transfer and spectral dissipation are obtained by multiplying Eq. (8a) with $\sqrt{f}$ and then taking the dot-product with $\hat w_{wk}^*$ in the spectral space (and similarly multiplying Eq. (8b) with $\hat p_k^*$ in the spectral space) as,
$$T_k = b\left(\mathrm{Re}\left(\hat w_k^{*} \cdot \widehat{\left(\frac{\nabla p'}{\sqrt{f}}\right)}_k + \hat p_k^{*}\, \widehat{\left(\nabla \cdot u'\right)}_k\right)\right) + b\left(\mathrm{Re}\left(\hat w_k^{*} \cdot \widehat{\left(w \cdot \nabla u'\right)}_k - \hat w_k^{*} \cdot \widehat{\left(\frac{p' \nabla p'}{\sqrt{f}}\right)}_k\right)\right) + b\left(\mathrm{Re}\left(\gamma\, \hat p_k^{*}\, \widehat{\left(p' \nabla \cdot u'\right)}_k + \hat p_k^{*}\, \widehat{\left(u' \cdot \nabla p'\right)}_k\right)\right), \qquad (25)$$
and
$$D_k = b\left(\mathrm{Re}\left(\hat w_k^{*} \cdot \widehat{\left(\frac{\mu}{\sqrt{f}\, Re_{ac}}\, \nabla \cdot \left(\nabla u' + \nabla u'^{T}\right)\right)}_k\right)\right) + b\left(\mathrm{Re}\left(\hat w_k^{*} \cdot \widehat{\left(\frac{1}{\sqrt{f}\, Re_{ac}}\, \nabla \cdot \left(\left(\kappa - \frac{2\mu}{3}\right)\left(\nabla \cdot u'\right)\right)\right)}_k\right)\right) + b\left(\mathrm{Re}\left(\hat p_k^{*}\, \widehat{\left(\frac{1}{Re_{ac} Pr}\, \nabla \cdot \left(\alpha \nabla T'\right)\right)}_k\right)\right), \qquad (26)$$
respectively. For uniform background temperature and density ( f = 1), Eq. (25) yields the transfer function for a nonlinear acoustic system in a homogeneous medium,
$$T_k = b\left(\mathrm{Re}\left(\hat u_k^{*} \cdot \widehat{\left(u \cdot \nabla u'\right)}_k - \hat u_k^{*} \cdot \widehat{\left(p' \nabla p'\right)}_k\right)\right) + b\left(\mathrm{Re}\left(\gamma\, \hat p_k^{*}\, \widehat{\left(p' \nabla \cdot u'\right)}_k + \hat p_k^{*}\, \widehat{\left(u' \cdot \nabla p'\right)}_k\right)\right). \qquad (27)$$
Throughout this work, we quantify the results from numerical simulations of the fully nonlinear governing Eqs. (1a)-(1c) (discussed in Section III) using the second order nonlinear Eqs. (8a) and (8b) and the corresponding spectral energy cascade quantities derived above.
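For the homogeneous case ($f = 1$), Eq. (27) can be evaluated pseudo-spectrally from instantaneous fields. The sketch below is our illustration under the same grid conventions as the previous snippet; it is not the authors' diagnostic code.

```python
# Sketch: binned spectral transfer function of Eq. (27) for f = 1.
import numpy as np

def transfer_function(u, v, p, gamma=1.4):
    N = u.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    fhat = lambda q: np.fft.fft2(q) / N**2
    ddx = lambda q: np.real(np.fft.ifft2(1j * kx * np.fft.fft2(q)))
    ddy = lambda q: np.real(np.fft.ifft2(1j * ky * np.fft.fft2(q)))

    div_u = ddx(u) + ddy(v)
    uk, vk, pk = fhat(u), fhat(v), fhat(p)

    # Re( u_k* . (u . grad u')_k - u_k* . (p' grad p')_k ), per velocity component.
    Tu = np.real(np.conj(uk) * fhat(u * ddx(u) + v * ddy(u) - p * ddx(p))
               + np.conj(vk) * fhat(u * ddx(v) + v * ddy(v) - p * ddy(p)))
    # Re( gamma p_k* (p' div u')_k + p_k* (u' . grad p')_k ).
    Tp = np.real(np.conj(pk) * fhat(gamma * p * div_u + u * ddx(p) + v * ddy(p)))

    kmag = np.sqrt(kx**2 + ky**2)
    edges = np.arange(0.5, N // 2)
    idx = np.digitize(kmag.ravel(), edges)
    T = Tu + Tp
    return np.bincount(idx, weights=T.ravel(), minlength=edges.size + 1)[1:edges.size]
```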
III. NUMERICAL SIMULATIONS
In this section we discuss the computational setup for our numerical simulations of the two-dimensional fully compressible dimensionless Navier-Stokes Eqs. (1a)-(1c), which we solve as described below.
At time t = 0, we prescribe quiescent conditions u = 0 along with isobaric thermal gradients.
To generate a thermal gradient at a wavevector k randomly, we define the initial temperature field 1/ f (x, y) in Eq. (7) as,
$$\frac{1}{f(x, y)} = 1 + \varepsilon_T\, T_b(x, y), \qquad (28)$$
where T b (x, y) is defined as a random periodic field for a wavenumber vector k in the Fourier space as,
$$\hat T_{bk} = \exp\left(2\pi i\, \theta_k\right), \qquad (29)$$
where $\theta_k$ is drawn from a normal distribution over $(0, 1]$ for every wavenumber vector $k$. Since these thermal gradients are assumed to be isobaric, the initial density field is set to $f(x, y)$, ensuring no pressure perturbations due to the thermal initialization. To study the interaction of these thermal gradients with random shock waves, we consider two types of forcing, namely random forcing and energy rescaling, as discussed below.
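Eqs. (28)-(29) translate into a few lines of Fourier-space initialization. The sketch below is our illustration: the grid size, the uniform phase sampling, and the amplitude normalization are assumptions (the paper uses 3072 Fourier modes per direction, and its distribution of θ_k over (0, 1] is approximated here by uniform sampling).

```python
# Sketch: isobaric random thermal gradient concentrated at |k| near kT (Eqs. 28-29).
import numpy as np

def initial_fields(N=512, kT=60, eps_T=0.1, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(N, d=1.0 / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)

    theta = rng.random((N, N))                       # random phases in (0, 1]
    band = (kmag > kT - 0.5) & (kmag < kT + 0.5)     # thermal band kT - 1/2 < |k| < kT + 1/2
    Tb_hat = np.where(band, np.exp(2j * np.pi * theta), 0.0)
    Tb = np.real(np.fft.ifft2(Tb_hat))
    Tb /= np.max(np.abs(Tb))                         # unit-amplitude normalization (our choice)

    T0 = 1.0 + eps_T * Tb                            # Eq. (28): 1/f = 1 + eps_T * Tb
    f = 1.0 / T0                                     # isobaric: rho = f, p = 1/gamma, u = 0
    return f, T0
```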
A. Random forcing
To generate random shocks, we force acoustic waves stochastically using the Uhlenbeck-Ornstein (UO) process 30, typically used for generating homogeneous isotropic turbulence in a three-dimensional box. The forcing $F$ in Eq. (1b) is defined in the Fourier space using four independent solutions of a UO process $a_{i,k}(t)$ for $i = 1, 2, 3$, and $4$ (one complex number in each direction). The following conditions are imposed on each of the $a_i$,
$$\langle a_{i,k}(t) \rangle = 0, \qquad (30)$$
$$\langle a_{i,k}(t)\, a_{j,k}^{*}(t + s) \rangle = 2\sigma^2 \delta_{ij} \exp\left(-s/T_L\right), \qquad (31)$$
where $\langle \cdot \rangle$ denotes the ensemble average, $(\cdot)^*$ denotes the complex conjugate, $\sigma^2$ is the variance, and $T_L$ is the forcing time-scale. We restrict $F$ to a band of wavenumbers $0 < |k| < k_F$. Spectral wave energy cascades from the forcing band to the higher wavenumbers, generating shock waves (see Fig. 1). In this work, we choose $k_F = 5$, ensuring energy injection at large length scales 35,36. The total rate of energy addition $\varepsilon$ in the system depends on the variance $\sigma^2$, the timescale $T_L$, and the number of wavenumber vectors within the forcing band $N_F$ as,
$$\varepsilon = 4 N_F T_L \sigma^2. \qquad (32)$$
Combining independent processes a 1 , a 2 , a 3 , and a 4 , the forcing vector in the Fourier space can be obtained as,
$$\hat a_k = \left(a_{1,k} + i a_{2,k},\; a_{3,k} + i a_{4,k}\right)^{T}. \qquad (33)$$
To force only the wave field, we remove any vortical component from a k yielding the forcing F introduced in Eq. (1b) in the Fourier space as,
$$\hat F_k(t) = \frac{\left(\hat a_k(t) \cdot k\right)}{|k|^2}\, k. \qquad (34)$$
Consequently, Eq. (32) yields an upper bound on the rate of energy addition rather than holding exactly. Even though we remove the vortical component from the forcing, randomly forced acoustic wave turbulence exhibits broadband vortical energy (see Fig. 2), because the forcing $F$ (chosen from a stochastic process) may point in a different direction at different times. Consequently, the acoustic density gradient $\nabla\rho(t_1)$ generated by $F(t_1)$ can interact with $F(t_2)$ at a time $t_2 > t_1$, when $F(t_2)$ is not parallel to $\nabla\rho(t_1)$, thus changing the curvature of the shocks and generating vorticity. Figure 1 shows the two-dimensional field of random shock waves generated due to the stochastic forcing and the vorticity generated by the shock waves interacting with the background thermal gradients ($k_T = 60$) and the forcing vector $F$.
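A discrete implementation of the forcing in Eqs. (30)-(34) might look as follows; the discrete OU update and its noise amplitude are chosen to be consistent with Eq. (31) only up to normalization conventions, so this is a sketch rather than the authors' code.

```python
# Sketch: stochastic (UO) forcing restricted to 0 < |k| < kF with wave projection (Eqs. 30-34).
import numpy as np

def ou_step(a, dt, sigma2, TL, rng):
    """One step of a complex OU process with zero mean and correlation time TL."""
    decay = np.exp(-dt / TL)
    noise = rng.standard_normal(a.shape) + 1j * rng.standard_normal(a.shape)
    return a * decay + np.sqrt(sigma2 * (1.0 - decay**2)) * noise

def wave_forcing(a1, a2, kx, ky, kF):
    """Project the complex vector (a1, a2) onto k so only the wave field is forced (Eq. 34)."""
    k2 = kx**2 + ky**2
    band = (k2 > 0) & (k2 < kF**2)
    dot = np.where(band, (a1 * kx + a2 * ky) / np.where(k2 == 0, 1.0, k2), 0.0)
    return dot * kx, dot * ky    # curl-free by construction: no direct vortical forcing
```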
B. Density-weighted energy rescaling
Since the forcing parameter $\hat a_k$ in Eq. (33) is drawn from a stochastic process, the phase of the acoustic waves generated by stochastic forcing is random. Consequently, vorticity generated as a result of the baroclinic torque due to shock waves propagating in one direction is quickly nullified by shock waves propagating in the opposite direction at some later time.
Hence, vorticity is not accumulated in the domain at any particular length scale due to baroclinic torque. However, due to interaction of the forcing vector F with the shock waves, a broadband vorticity with higher energy at forced length scales is generated.
Based on this reasoning, we use an energy rescaling forcing, which is inspired by the velocity rescaling thermostat in molecular simulations 37 and the deterministic forcing used in hydrodynamic turbulence 38. For one set of simulations with different length scales of background thermal gradients, we restart the simulations by disabling the random forcing $F$ after statistical stationarity is approximately achieved. In simulations with energy rescaling, the density-weighted wave spectral energy $\hat E^f_{wk}$ is rescaled in a band of wavenumbers $K$ by forcing the contributions of the smaller bins of width $\Delta k$ in $K$ to be uniform. [Figure 2 caption: Stochastic forcing generates broadband vorticity with the highest actual vortical energy at the forced wavenumbers due to the continuously changing curvature of the shock waves (see Table I).] For instance, let a band of wavenumbers $K = \{k : k_1 < |k| < k_2\}$ be composed of partitions $K_i = \{k : k_i - 1/2 < |k| < k_i + 1/2\}$, where $k_i$ is between $k_1$ and $k_2$.
We rescale the variables $\hat w_{wK_i}$ and $\hat p_{K_i}$ to $\tilde w_{wK_i}$ and $\tilde p_{K_i}$, respectively,
$$\tilde w_{wK_i} = \frac{\varepsilon_A\, \alpha_{k_i}\, \hat w_{wK_i}}{\sqrt{\frac{1}{2} \sum_j \alpha_{k_j}^2 \left(|\hat w_{wK_j}|^2 + |\hat p_{K_j}|^2\right)}}, \qquad (35)$$
$$\tilde p_{K_i} = \frac{\varepsilon_A\, \alpha_{k_i}\, \hat p_{K_i}}{\sqrt{\frac{1}{2} \sum_j \alpha_{k_j}^2 \left(|\hat w_{wK_j}|^2 + |\hat p_{K_j}|^2\right)}}, \qquad (36)$$
where the weight coefficients α k i are chosen as,
$$\alpha_{k_i} = \frac{1}{\sqrt{\dfrac{N_i}{2} \sum_{K_i} \left(|\hat w_{wK_i}|^2 + |\hat p_{K_i}|^2\right)}}, \qquad (37)$$
where $N_i$ is the number of wavenumber vectors $k$ within the partition $K_i$. Thus, all the partitions $K_i$ contribute equally to the total wave energy in $K$. The total wave energy in $K$ is equal to $\varepsilon_A^2$ after every rescaling operation. Rescaling of the wave components of $w'$ in the Fourier space results in rescaling of the amplitudes in the real space, maintaining the propagation direction of the waves at each time step. Furthermore, since we are forcing the density-weighted wave components $w'_w$, energy rescaling also results in an external injection of vorticity. However, as we show in Section IV C, the baroclinic interaction between the shock waves and the background thermal gradients is also significant, thus resulting in coherent vorticity, unlike random forcing, which results in broadband vorticity.
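The rescaling of Eqs. (35)-(37) can be sketched as below, assuming the wave-mode Fourier amplitudes have already been gathered shell by shell; the array layout and the guard against empty shells are our choices, and the square roots follow from the quadratic energy definition.

```python
# Sketch: density-weighted wave-energy rescaling over the shells K_i (Eqs. 35-37).
import numpy as np

def rescale_shells(w_shells, p_shells, eps_A):
    """w_shells[i], p_shells[i]: complex amplitudes of all modes in shell K_i."""
    # Shell energies E_i = (1/2) sum(|w|^2 + |p|^2) and mode counts N_i.
    E = np.array([0.5 * (np.sum(np.abs(w)**2) + np.sum(np.abs(q)**2))
                  for w, q in zip(w_shells, p_shells)])
    N = np.array([w.size for w in w_shells])
    # Eq. (37): weights that equalize every shell's contribution to the band energy.
    alpha = 1.0 / np.sqrt(np.maximum(N * E, 1e-30))

    # Common denominator of Eqs. (35)-(36).
    norm = np.sqrt(np.sum(alpha**2 * E))
    w_new = [eps_A * a * w / norm for a, w in zip(alpha, w_shells)]
    p_new = [eps_A * a * q / norm for a, q in zip(alpha, p_shells)]
    return w_new, p_new          # total wave energy in the band K is now eps_A**2
```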
C. Simulation parameters
We define our simulation parameter space as the wavenumber of thermal perturbations $k_T$ and the rate of energy addition $\varepsilon$. We choose $k_T$ values between 10 and 60 (in increments of 10) and $\varepsilon/4N_F$ values of $1.5\times10^{-5}$, $2.0\times10^{-5}$, and $2.5\times10^{-5}$ (see Table I). We run these simulations till $t = 100$ and then restart with energy rescaling enabled and stochastic forcing disabled to isolate the coherent vorticity generated due to the interaction between the shock waves and the background thermal gradients. Since the parameter $\varepsilon$ does not influence the rescaled runs, we only restart simulations with $\varepsilon = 1.5\times10^{-5}$ with energy rescaling (1d_r-6d_r in Table I). For each $\varepsilon$ value, we also run a baseline case with no thermal gradients ($f = 1$). Figure 3 shows the time series of the density-weighted wave energy $E^f_w$ for all simulations in Tables I and II. For simulations with stochastic forcing, the spatially averaged pressure and temperature rise due to the constant dissipation of energy caused by the shock waves in the system. [Figure 3 caption: time series for all cases in Table I and the corresponding baseline cases in Table II. All simulations are run with stochastic forcing till t = 100. Only simulations 1d-6d are restarted with the density-weighted energy rescaling (marked with r in Table I). Due to forcing, the spatially averaged pressure and temperature rise in the domain. The shown time series are obtained using pressure perturbations, which are calculated by subtracting the spatial average of pressure from the total pressure.] In Fig. 3, the spatially averaged pressure is removed from $E^f_w$, thus showing the true perturbation energy in the system. For all cases with random gradients, the thermal gradients are defined for a range of wavenumber vectors $k_T - \frac{1}{2} < |k| < k_T + \frac{1}{2}$. The acoustic Reynolds number $Re_{ac}$ and the rate of energy injection $\varepsilon$ (equal to the rate of energy dissipation at stationarity) are related as 14,39,
$$Re_{ac} \sim \frac{1}{\eta}\left(\varepsilon \ell\right)^{-1/3}, \qquad (38)$$
where $\eta$ is the smallest length scale (the Kolmogorov length scale in hydrodynamic turbulence, or the shock thickness) and $\ell$ is the integral length scale (the average distance between two shocks). Since 3072 Fourier modes are used in each direction with 2/3 dealiasing, the maximum wavenumber captured in the simulations is $k_{max} = 1024$. Hence, for shock-resolved simulations, $k_{max}\eta > 1$ must hold (see Section IV A). We choose $Re_{ac} = 2500$ in our simulations such that $\eta > 1/1024$ for all the values of $\varepsilon$ and the field of shock waves is well resolved (see Table III). For all the values of $\varepsilon$, the shock waves in the field are weak shock waves (see Fig. 4a).
As the rate of energy injection ε increases, stronger shocks are generated in the domain. We calculate the mean Mach number M using the entropy perturbation at each point in the simulation domain and calculating the weighted mean of the shock Mach number. At each point, we evaluate the dimensionless entropy perturbation s ′ using,
$$s' = \frac{1}{\gamma - 1} \ln\left(1 + \gamma p'\right) - \frac{\gamma}{\gamma - 1} \ln\left(1 + \frac{\rho'}{f}\right), \qquad (39)$$
using which, we approximate the entropy jump at each point as,
$$\Delta s' = \frac{\partial s'}{\partial x}\, dx + \frac{\partial s'}{\partial y}\, dy. \qquad (40)$$
Using ∆s ′ , we calculate the Mach number field assuming weak shocks as,
$$M = \left(1 + \frac{3\Delta s' (\gamma + 1)^2}{2\gamma}\right)^{1/3}. \qquad (41)$$
In Fig. 4 we show the variation of the mean Mach number $M$ with the forcing parameter $\varepsilon$ for the baseline cases 0a-0f (see Table II). The weighted mean is calculated such that the contribution of points where $s'$ is less than 25% of the maximum entropy perturbation is set to 0. Below, we discuss in detail the features of the field of shock waves generated using the stochastic forcing and the rescaled forcing entering through $F$ in Eq. (1b) (see Section IV B). We study the length scales of the field of shock waves, following which we focus on the vorticity generated due to the two forcing schemes and finally discuss the enstrophy budgets.
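Eqs. (39)-(41) give a pointwise weak-shock Mach number estimate that can be coded directly. The sketch below is our illustration; the finite-difference evaluation of Eq. (40) through np.gradient and the discarding of negative entropy jumps are assumptions.

```python
# Sketch: mean shock Mach number from entropy perturbations (Eqs. 39-41).
import numpy as np

def mean_mach(p1, rho1, f, dx, gamma=1.4):
    """p1, rho1: pressure/density perturbation fields; f: base density; dx: grid spacing."""
    s = (np.log(1.0 + gamma * p1) - gamma * np.log(1.0 + rho1 / f)) / (gamma - 1.0)  # Eq. (39)
    ds = (np.gradient(s, dx, axis=0) + np.gradient(s, dx, axis=1)) * dx              # Eq. (40)
    ds = np.clip(ds, 0.0, None)                      # keep compressive jumps (our assumption)
    M = (1.0 + 3.0 * ds * (gamma + 1.0)**2 / (2.0 * gamma))**(1.0 / 3.0)             # Eq. (41)

    w = (s >= 0.25 * s.max()).astype(float)          # drop points below 25% of max entropy
    return np.sum(w * M) / max(np.sum(w), 1.0)
```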
IV. RESULTS AND DISCUSSION
In this section, we discuss the interaction of stochastically forced shocks and shocks maintained by energy rescaling with the background thermal gradients. We show that broadband vorticity is generated due to stochastically forced shocks irrespective of the presence of the background thermal gradients. Shocks maintained by energy rescaling interact with the background thermal gradients and generate coherent vorticity, with eddies of the same length scale as that of the thermal gradient. We also show that the background thermal gradients do not significantly affect the length scales of the shocks and spectral energy cascade in shocks.
A. Length scales
Due to shock waves, the wave energy spectra decay as $k^{-2}$ in the wavenumber range where the spectral energy flux is high. Gupta and Scalo 14 derived such scaling for the acoustic wave energy spectra of decaying one-dimensional acoustic wave turbulence in a homogeneous medium as $E_{wk} \sim \varepsilon^{2/3} \ell^{-1/3} k^{-2}$ ($\hat E^f_{wk} = \hat E_{wk}$ for $f = 1$). Augier et al. 39 derived a similar scaling for stochastically forced isotropic shocks in shallow water equations. In Fig. 5 we show the normalized density-weighted wave energy spectra, spectral flux, and cumulative spectral dissipation for the baseline cases 0a-0f (see Table II). The wave energy spectra scale as $k^{-2}$ with a similar dependence on dissipation and integral length scale as for decaying one-dimensional acoustic wave turbulence 14 and two-dimensional shallow water wave turbulence 39, indicating that the scaling relation,
$$\frac{\hat E^f_{wk}}{\ell^{5/3}\, \varepsilon^{2/3}} \sim \frac{1}{k^2 \ell^2}, \qquad (42)$$
holds where the spectral energy flux is high ($10 \lesssim k\ell \lesssim 300$). It is not possible to term the range of scales where the spectral energy flux is high as an inviscid or inertial range, because dissipation exists at all scales in shock waves propagating with a physical diffusion term. In Appendix A we show the effect of hyperdiffusion (widely used in the two-dimensional wave turbulence literature) and how it modifies the distribution of the spectral energy cascade flux $\Pi_k$ in shock waves, particularly concentrating the dissipation at small scales. Since the wave energy spectra decay as $k^{-2}$ in a field of shock waves, dissipation is active at all length scales. Consequently, the cascade of wave energy spectra in compressible flows may be regarded as a variable flux energy transfer 40,41.
The scaling in Eq. (42) also indicates that the shocks generated by stochastic forcing in all the simulations are isotropic 39 .
The integral length scale $\ell$ quantifies the large length scales at which energy is injected in the system. Since we choose $k_F = 5$, we use $\ell = 1/(5/2)$ in Fig. 5 for scaling. We identify the Taylor microscale as 14
$$\lambda \sim \sqrt{\frac{\sum_k \hat E_k}{\sum_k k^2 \hat E_k}}. \qquad (43)$$
Assuming the spectral energy $\hat E^f_{wk}$ constant from $0 < k < 2/\ell$, we obtain the relation,
$$\lambda \sim \sqrt{\eta \ell}, \qquad (44)$$
[Figure 6 caption: spectra for all cases in Table I and the baseline case (0d in Table II). The density-weighted wave energy spectra are almost identical for all the cases and follow the scaling in Eq. (42). The Taylor microscale $\lambda$ for all the cases is almost identical, highlighting that background thermal gradients and the length scales of these gradients do not affect the shock waves significantly.]
which indicates that the Taylor microscale is in the middle of the integral length scale and the Kolmogorov length scale, similar to one-dimensional acoustic wave turbulence and unlike classical hydrodynamic turbulence, in which $\lambda \sim \eta^{2/3} \ell^{1/3}$ is biased towards the smallest length scale, indicating that the dissipation is higher at smaller length scales 42. Hence, the scaling in Eq. (44) also shows that the dissipation occurs uniformly across all length scales smaller than the integral length scale, particularly in the range of wavenumbers for which $\Pi_{wk}$ is high. Figure 6 shows the wave energy spectra $\hat E^f_{wk}$, the flux of spectral wave energy $\Pi_{wk}$, and the cumulative dissipation of spectral wave energy $D_{wk}$ for simulation cases 0d-6d. Background thermal gradients do not affect the wave energy spectra and the length scales associated with the cascade of energy. Table III shows the time-averaged values of the Taylor microscale $\lambda_t$ calculated using Eq. (43) and the Kolmogorov length scale $\eta_t$ calculated using Eq. (38) for cases 0d-6d (see Table I). Both length scales change insignificantly as the length scales of the thermal gradients change. Moreover, for all cases $k_{max}\eta$ remains greater than 1, confirming the shock-resolved nature of the simulations. As we discuss further, baroclinic interactions with background thermal gradients result in a change of the vortical length scales generated in the domain.
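The length-scale diagnostics of Eqs. (38), (43), and (44) reduce to a few spectral sums; a minimal sketch (our illustration) follows.

```python
# Sketch: Taylor microscale (Eq. 43) and shock-resolution check k_max * eta > 1 (Eq. 38).
import numpy as np

def length_scales(E_k, eps, ell, Re_ac, k_max):
    """E_k: binned energy spectrum at integer shells k = 1..len(E_k)."""
    k = np.arange(1, E_k.size + 1)
    lam = np.sqrt(np.sum(E_k) / np.sum(k**2 * E_k))   # Taylor microscale, Eq. (43)
    eta = 1.0 / (Re_ac * (eps * ell)**(1.0 / 3.0))    # Kolmogorov scale inverted from Eq. (38)
    return lam, eta, (k_max * eta > 1.0)              # lam should scale as sqrt(eta * ell), Eq. (44)
```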
B. Vorticity
Due to the spectral energy cascade, pressure gradients exist at a wide range of length scales (from the largest length scale $\ell$ to the smallest length scale $\eta$). These pressure gradients can interact with the density gradients in the domain to generate local baroclinic rotation of the fluid, quantified by vorticity. Motivated by the decomposition in Eqs. (19), we identify the vorticity $\Omega$ and the density-weighted vorticity $\Omega^f$ as
$$\Omega = \nabla\times u, \qquad \Omega^f = \nabla\times(\sqrt{\rho}\,u). \tag{45}$$
Taking the curl of Eq. (1b) (with constant $\mu$ and $\kappa = 0$), we obtain the governing equation for $\Omega$ (in 2D) as
$$\frac{\partial\Omega}{\partial t} = \frac{\nabla\rho\times\nabla p}{\rho^2} + \frac{\mu\nabla^2\Omega}{\rho\,Re_{ac}} + \frac{\mu}{Re_{ac}}\frac{\nabla\rho\times[\nabla\times\Omega]}{\rho^2} - \frac{4\mu}{3Re_{ac}}\frac{\nabla\rho\times\nabla(\nabla\cdot u)}{\rho^2} - \nabla\cdot(u\Omega) + F^r_\Omega. \tag{46}$$
Equation (46) holds without any perturbation decomposition of the field variables. The first term on the right-hand side of Eq. (46) is the baroclinic term. As we show in the next section, the interaction of density gradients with pressure gradients is the primary source of vorticity (and enstrophy) generation in our simulations. These density gradients include both the background thermal inhomogeneity and the density gradients generated in the shock waves. The second term is the viscous diffusion of vorticity. The next two terms represent the interaction of density gradients with viscous stresses in the domain. In a field of shock waves, the bulk viscous stress terms (proportional to $\nabla(\nabla\cdot u)$) are significantly higher than the shear stresses (proportional to $\nabla\times\Omega$) because the velocity dilatation is orders of magnitude higher than the rotation rate (cf. Figs. 1 and 10). Hence, we expect the third term in Eq. (46) to be significantly smaller in magnitude than the fourth term.
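For reference, the baroclinic torque (the first term on the right of Eq. (46)) can be evaluated with spectral derivatives on the periodic domain. The sketch below assumes 2D numpy fields and meshgrid wavenumber arrays `kx`, `ky`; it is an illustration, not the production solver.

```python
import numpy as np

def spectral_grad(f, kx, ky):
    """Gradient of a periodic 2D field computed via FFTs."""
    f_hat = np.fft.fft2(f)
    fx = np.real(np.fft.ifft2(1j * kx * f_hat))
    fy = np.real(np.fft.ifft2(1j * ky * f_hat))
    return fx, fy

def baroclinic_torque(rho, p, kx, ky):
    """z-component of (grad rho x grad p) / rho^2 in 2D."""
    rx, ry = spectral_grad(rho, kx, ky)
    px, py = spectral_grad(p, kx, ky)
    return (rx * py - ry * px) / rho**2
```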
Figure 7. Vortical energy $E_{vk}$ and density-weighted vortical energy $E^f_{vk}$ for the homogeneous (baseline) simulation 0d (see Table II) and the simulation with thermal gradients 6d (see Table I) with stochastically forced acoustic waves. While $E^f_{vk}$ peaks near $k_T = 60$ for 6d, $E_{vk}$ exhibits no such peak, thus highlighting that the rotational energy $E_{vk}$ is broadband when shocks are forced stochastically.

Furthermore, an interaction of density gradients with bulk viscous stresses (captured by the fourth term) does not necessarily denote dissipation of vorticity and could also participate in the generation of vorticity, as noted by Kida and Orszag [43]. The next term represents advection of vorticity. The last term $F^r_\Omega$ denotes any forcing of vorticity due to the density-weighted wave energy rescaling. Since it is difficult to write the energy rescaling as a right-hand-side term in the momentum equation, we do not simplify $F^r_\Omega$ further. Since the curl of the stochastic forcing $F$ is set to zero, there is no stochastic forcing of the vorticity $\Omega$. In the next section, we derive the enstrophy budget terms using Eq. (46). Similarly, a governing equation for $\Omega^f = \nabla\times(\sqrt{\rho}\,u)$ can be derived as
$$\frac{\partial\Omega^f}{\partial t} = \frac{\nabla\sqrt{\rho}\times\nabla p}{\rho} + \frac{\mu\nabla^2\Omega}{\sqrt{\rho}\,Re_{ac}} + \frac{\mu}{Re_{ac}}\frac{\nabla\sqrt{\rho}\times[\nabla\times\Omega]}{\rho} - \frac{4\mu}{3Re_{ac}}\frac{\nabla\sqrt{\rho}\times\nabla(\nabla\cdot u)}{\rho} - \nabla\times(\sqrt{\rho}\,u\cdot\nabla u) + \nabla\sqrt{\rho}\times F. \tag{47}$$
Since energy rescaling only rescales energy in the density-weighted wave modes, the density-weighted vorticity $\Omega^f$ is not affected by the energy rescaling. However, the final term in Eq. (47) is due to the random forcing interacting with the density gradients. Figure 7 shows the spectra of $E_{vk}$, which quantifies the vorticity $\Omega$ in the domain, and $E^f_{vk}$, which quantifies the density-weighted vorticity $\Omega^f$, for a stochastically forced homogeneous simulation (0d in Table II) and a stochastically forced inhomogeneous simulation (6d in Table I). The random forcing results in the generation of a broadband vorticity $\Omega$. The peak in $E^f_{vk}$ near the wavenumber $k_T$ occurs due to $\sqrt{f}$ in the definition of $E^f_{vk}$, hence falsely indicating coherent vorticity in the simulation. Consequently, in our further analysis of vorticity in this section and of the enstrophy budgets in the next section, we consider $E_{vk}$, $\Omega$, and the corresponding velocity field $u_v$ (cf. Eq. (21)).

As shown in Fig. 7, vorticity is not localized at any particular length scale in both the homogeneous and inhomogeneous simulations. The broadband vorticity is generated by the changing shock curvature under the action of stochastic forcing. This also indicates that if the curvature of the shock waves is maintained, then the interaction with the thermal gradients must generate coherent vorticity at the length scale at which the thermal gradients are concentrated. To this end, we consider the energy rescaling runs, which we restart from the stochastically forced runs at $t = 100$. During restart, we remove the vortical velocity $u_v$ (cf. Eq. (21)) and set $F = 0$. During the simulations, the density-weighted spectral energy $E^f_{wk}$ is rescaled as per Eqs. (35)-(37) for $\varepsilon_A = 0.15$. Figure 8 shows the time series of the total density-weighted wave energy $E^f_w$ and the vortical energy $E_v$ before and after the energy rescaling restart. The wave energy readjusts to the energy rescaling and saturates around $\varepsilon_A^2$ (see Fig. 8(a)). Since we remove any vortical component $u_v$ from the velocity field, the vortical energy drops but immediately recovers to a small but finite value (approximately $1\times10^{-5}$ in Fig. 8(b)), indicating that the vortical velocity field is at least one order of magnitude smaller than the wave velocity field. Figure 9 shows the spectra of $E_{vk}$ and $E^f_{vk}$ for simulations with energy rescaling in a homogeneous medium (0d_r in Table II) and in a heterogeneous medium with thermal gradients at $k_T = 60$ (6d_r in Table I). The vortical energy $E^f_{vk}$ is narrow-band and is localized around the length scale corresponding to $k_T$ for the inhomogeneous simulation, indicating coherent vorticity generation. In simulations with energy rescaling, the spectral wave energy $E^f_{wk}$ is rescaled (restored) so as to keep the total wave energy at large length scales constant in time.

Figure 9. Vortical energy $E_{vk}$ and density-weighted vortical energy $E^f_{vk}$ for the homogeneous (baseline) simulation 0d (see Table II) and the simulation with thermal gradients 6d (see Table I) with energy rescaling.

Recalling concepts from hydrodynamic turbulence, the energy rescaling ensures the permanence of large eddies in time. Consequently, the direction in which the shock waves are propagating (the phasing of the shock waves) is never altered by the energy rescaling. As a result, there is no broadband vorticity generated in the domain due to changing directions and curvature of the shock waves. Instead, coherent vorticity is generated at the same length scale as that of the thermal gradients. Figure 10(a) shows snapshots of the two-dimensional field of shock waves generated by the energy rescaling in an inhomogeneous medium with thermal gradients at $k_T = 60$ (6d_r in Table I). Figure 10(b) shows the coherent vorticity generated due to these shock waves.
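A minimal sketch of extracting the vortical (solenoidal) part of the velocity and its shell-binned energy spectrum $E_{vk}$ is given below; replacing $(u, v)$ by $(\sqrt{\rho}\,u, \sqrt{\rho}\,v)$ gives the density-weighted analogue $E^f_{vk}$. Normalization factors are omitted, and the field layout is an assumption.

```python
import numpy as np

def vortical_energy_spectrum(u, v, kx, ky):
    """Shell-binned energy of the divergence-free projection of (u, v)."""
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                         # avoid dividing by zero at the mean mode
    proj = (kx * u_hat + ky * v_hat) / k2  # (k . u_hat) / |k|^2
    us_hat = u_hat - kx * proj             # solenoidal part: u_hat - k (k.u_hat)/|k|^2
    vs_hat = v_hat - ky * proj
    e2d = 0.5 * (np.abs(us_hat)**2 + np.abs(vs_hat)**2)
    shells = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    return np.bincount(shells.ravel(), weights=e2d.ravel())
```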
In the next section, we discuss the budgets of the enstrophy, defined as $Q = |\Omega|^2/2$, identifying the primary mechanism for the generation of coherent vorticity due to shock waves driven by energy rescaling.
C. Enstrophy Budget
The generation of coherent vorticity is highlighted in the enstrophy spectra (see Fig. 11), defined as
$$Q_k = \frac{1}{2}\left|\hat{\Omega}_k\right|^2. \tag{48}$$
Using Eq. (46), we obtain the governing equation for enstrophy spectra as,
$$\frac{dQ_k}{dt} = B_k - D_k - A_k + F^r_k, \tag{49}$$
where
$$B_k = \mathrm{Re}\left[\hat{\Omega}^*_k\cdot\widehat{\left(\frac{\nabla\rho\times\nabla p}{\rho^2}\right)}_k\right], \tag{50}$$
$$D_k = -\frac{\mu}{Re_{ac}}\,\mathrm{Re}\left[\hat{\Omega}^*_k\cdot\widehat{\left(\frac{\nabla^2\Omega}{\rho} + \frac{\nabla\rho\times\nabla\times\Omega}{\rho^2} - \frac{4}{3}\frac{\nabla\rho\times\nabla(\nabla\cdot u)}{\rho^2}\right)}_k\right], \tag{51}$$
$$A_k = \mathrm{Re}\left[\hat{\Omega}^*_k\cdot\widehat{\left(\nabla\cdot(u\Omega)\right)}_k\right], \tag{52}$$
and $F^r_k$ denotes the enstrophy generation due to the energy rescaling (the term corresponding to $F^r_\Omega$ in Eq. (46)).
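As an illustration, the baroclinic production spectrum $B_k$ of Eq. (50) can be assembled from FFTs of the physical-space fields, reusing `baroclinic_torque` from the earlier sketch; normalization is again glossed over, so this is a schematic rather than the diagnostics used to produce the figures.

```python
import numpy as np

def baroclinic_production_spectrum(omega, rho, p, kx, ky):
    """B_k = Re[ conj(Omega_hat) * FFT(grad(rho) x grad(p) / rho^2) ], shell-binned."""
    w_hat = np.fft.fft2(omega)
    b_hat = np.fft.fft2(baroclinic_torque(rho, p, kx, ky))
    integrand = np.real(np.conj(w_hat) * b_hat)
    shells = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    return np.bincount(shells.ravel(), weights=integrand.ravel())
```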
Figure 11. Enstrophy spectra $Q_k$ for simulations with inhomogeneous thermal gradients 1d_r-6d_r (see Table I) with energy rescaling.

The first term on the right of Eq. (49), $B_k$, denotes the baroclinic generation of enstrophy and captures the effect of the baroclinic torque on the enstrophy. The next three terms (combined in $D_k$) appear due to the viscous forces in the momentum equation. Not all three terms corresponding to the viscous forces necessarily destroy enstrophy. The first of these three terms, which is proportional to the Laplacian of the vorticity, is always non-positive and makes the primary contribution to the viscous dissipation of enstrophy. The second and third of these terms can be positive as well as negative. The next term ($A_k$) arises from the convection term in the momentum equation and contributes very little to the change of enstrophy in the system. Lastly, $F^r_k$ is due to the density-weighted energy rescaling. As discussed in Section IV B, a simplified form of the energy rescaling as a right-hand-side term in the governing equations is difficult to obtain, so we do not simplify $F^r_k$. Figure 11 shows the spectra of the enstrophy for the simulations with energy rescaling. As noted for the vortical energy spectra $E^f_{vk}$ in Fig. 9, the vorticity peaks at the same wavenumber at which the background thermal gradients exist ($k_T$). Consequently, the enstrophy spectra also peak at $k_T$.
Such localized (in spectral space) enstrophy is generated due to both the baroclinic production term $B_k$ and the energy rescaling term $F^r_k$. Figure 12(a) shows the time series of the right-hand-side terms of Eq. (49) and the total enstrophy for simulation 6d_r. The baroclinic term oscillates about a positive mean value, while the advection terms have a negligible contribution to the change of enstrophy in time. Figure 12(b) shows the time-averaged values of the enstrophy, the full baroclinic production term and the nonlinear baroclinic production term, the negative of the dissipation, and the advection terms for simulations 1d_r-6d_r ($\langle\cdot\rangle_t$ denoting the time average). As the wavenumber $k_T$ increases, the enstrophy in the system increases. Accordingly, the baroclinic generation and the dissipation also increase. The enstrophy $Q_k$ is statistically stationary since the vortical energy $E_v$ is statistically stationary, as shown in Fig. 8(b). Consequently, $F^r_k$ accounts for the difference between the two, since the left-hand side of Eq. (49) vanishes. Density gradients in curved shock waves need not be parallel to the pressure gradients due to the thermoviscous generation of entropy within the shock waves. Furthermore, due to the imposed thermal inhomogeneity, shock waves get reflected and refracted continuously in the domain. A combination of these two effects can be seen by decomposing the baroclinic term $B_k$ as
$$B_k = \mathrm{Re}\left[\hat{\Omega}^*_k\cdot\widehat{\left(\frac{\nabla f\times\nabla p}{\rho^2}\right)}_k\right] + \underbrace{\mathrm{Re}\left[\hat{\Omega}^*_k\cdot\widehat{\left(\frac{\nabla(\rho-f)\times\nabla p}{\rho^2}\right)}_k\right]}_{B^{NL}_k}, \tag{53}$$
where $B^{NL}_k$ captures the nonlinear baroclinic interactions between the pressure gradients and the density gradients. Figure 12(a) shows the time series of the nonlinear baroclinic term for simulation 6d_r, and Fig. 12(b) shows the time average of the nonlinear baroclinic interaction term. The overall contribution of the nonlinear baroclinic term to the enstrophy is very small. However, as shown in Fig. 13(b), the nonlinear baroclinic interaction is local in wavenumber space and negative in value, indicating its contribution to the destruction of enstrophy. Additionally, the nonlinear baroclinic term peaks near $k_T$, indicating that any nonlinear baroclinic interaction is amplified by the background thermal gradients. Similarly, the dissipation contribution $-D_k$ is also highly negative and localized near $k_T$. On the other hand, the total baroclinic term $B_k$ has a nonlocal contribution, indicating the participation of a wide range of length scales in the generation of enstrophy. As shown in Figs. 12(a) and 12(b), the overall contribution of $B_k$ is positive and large. Furthermore, the difference between $B_k$ and $D_k$ in Fig. 12(b) represents $F^r_k$ in Eq. (49). In Appendix B, we discuss the relative magnitudes of the dissipation terms in Eq. (51).
V. CONCLUSIONS
We have studied forced two-dimensional shock waves in a periodic domain with an inhomogeneous background temperature field using numerical simulations. We showed that random forcing generates an isotropic field of shock waves and that the density-weighted wave energy spectra scale with wavenumber as $k^{-2}$, as in Eq. (42). The flux of the energy spectra does not remain constant in the cascade range of wavenumbers if physical diffusion is considered, indicating the spectral energy transfer to be of variable flux type [41]. We have also shown that dissipation exists at all length scales, unlike hydrodynamic turbulence, where dissipation is localized at smaller length scales. Such energy spectra cascade characteristics remain similar when the field of shock waves propagates in an inhomogeneous medium with randomized thermal gradients.
While the interaction of shock waves with background thermal gradients results in a baroclinic torque generating vorticity, the type of energy injection in the shock waves governs the evolution of this vorticity field in the domain. We showed that for stochastically forced shock waves no coherent vorticity is generated, and only a broadband vorticity (distributed over all length scales) exists.
For stochastic forcing, the pressure gradient field in the domain is also a stochastic process. Since the direction of the pressure gradient is stochastic, the time-averaged baroclinic torque vanishes, because the leading-order term is linearly dependent on the pressure gradient. Consequently, only broadband vorticity is observed due to the changing direction of the pressure gradients. The broadband vorticity is high at the forced length scales and decays for smaller length scales, further highlighting that it is generated by the stochastic forcing. To investigate the generation of coherent vorticity, we have introduced a new forcing based on wave energy rescaling. In this forcing, we rescale the energy in the density-weighted wave modes at each time step. The cascade is thus driven by waves that do not change phase and propagation direction continuously. Consequently, coherent vorticity is generated in the domain at the same length scale as that of the thermal gradients. We have also shown that the density-weighted variables introduced by Miura and Kida [32] are not suitable for studying the rotational and dilatational flow fields separately in compressible flows.
Though in the current study we have investigated the baroclinic interaction of forced weak shock waves, we expect the baroclinic vorticity production to increase for stronger shock waves due to higher pressure gradients. However, such simulations are beyond the scope of the current study.
For all the simulations we used constant viscosity and thermal conductivity; however, as shown in Appendix A, temperature-dependent viscosity and conductivity have negligible effects on the energy spectra of shock waves. Consequently, the baroclinic interaction is not expected to be affected by temperature-dependent viscosity and conductivity values in the weak-shock regime.
In practice, the generation of random weak shock waves may be due to intense unsteady combustion events or thermoacoustic instabilities. Spatially distributed unsteady combustion events are better represented by stochastic forcing due to randomly generated pressure waves [44], while instability-driven shock waves are better represented by forcing based on energy rescaling, since only particular modes exhibit such instabilities [5], depending on the geometry of the combustion chamber. Besides the practical relevance to understanding unsteady shock-induced mixing, we also highlighted that the method of injection of energy in so-called wave-turbulence [45] systems, in which nonlinear waves interact with each other, is a richer problem compared to hydrodynamic turbulent systems. In hydrodynamic turbulence there is no wave velocity of propagation of eddies, due to which phasing is never considered in hydrodynamic turbulence forcing [46]. However, the phasing, or phase difference between pressure and velocity in both space and time, is an important parameter governing acoustic wave turbulence and the interaction with the background medium. Accordingly, future research directions include a detailed investigation of random shock wave forcing and the possibility of sustaining hydrodynamic turbulence via forced weak shock waves in three dimensions.
ACKNOWLEDGMENTS
We acknowledge the financial support received from Science and Engineering Research Board (SERB), Government of India under Grant No. SRG/2022/000728. We also thank IIT Delhi HPC facility for computational resources.
Appendix A: Effects of hyperdiffusion and variable viscosity
Throughout our simulations, we consider the dimensionless viscosity constant ($\mu = 1$). However, for high thermal perturbations, changes in viscosity due to the inhomogeneous temperature may be significant. To confirm that such thermal dependence of viscosity has an insignificant impact on the spectral wave energy cascade in our simulations, we also perform a simulation with the dimensionless viscosity governed by Sutherland's law
$$\mu = T^{3/2}\,\frac{T_m + S_{ref}}{T\,T_m + S_{ref}}, \tag{A1}$$
where $T_m = 273.15$ K is the reference temperature scale (see Eq. (4)), $S_{ref} = 110.4$ K, and $T$ is the dimensionless temperature. To obtain a wide range of length scales at a given resolution, hyperdiffusion is used in wave-turbulence studies [39, 47]. Such hyperdiffusion modifies all the viscous stress tensor and thermal conduction terms in the momentum and pressure governing Eqs. (1b) and (1c) as
$$\frac{\partial u}{\partial t} + u\cdot\nabla u = -\frac{\nabla p}{\rho} + \nu\nabla^{2r}u, \tag{A2}$$
$$\frac{\partial p}{\partial t} + u\cdot\nabla p + \gamma p\,\nabla\cdot u = \nu\nabla^{2r}p, \tag{A3}$$
where $r$ and $\nu$ are the parameters of the hyperdiffusivity. To assert the validity of constant dynamic viscosity, we compare in Fig. 14 the energy spectra of simulation 0d in Table II with two simulations: one with viscosity varying with the local temperature as per Eq. (A1) (simulation 0s), and one with hyperdiffusion with $r = 2$ and $\nu = 5.44\times10^{-10}$ (simulation 0h).

Figure 14. (a) Dimensionless scaled density-weighted wave energy spectra $E^f_{wk}$, (b) spectral flux of wave energy $\Pi_{wk}$, and (c) cumulative dissipation $D_{wk}$ for a baseline case (0d in Table II), compared against a simulation with viscosity varying with Sutherland's law in Eq. (A1) (0s) and a simulation with hyperdiffusion (0h). For simulation 0s, all parameters are kept the same as in the 0d simulation except for the viscosity and thermal conductivity; the Prandtl number Pr is fixed at 0.72. For simulation 0h, hyperdiffusion is used with $r = 2$ and $\nu = 5.44\times10^{-10}$ (cf. Eqs. (A2) and (A3)).
As shown in Fig. 14, the temperature-dependent viscosity does not affect the energy spectra significantly. For the simulation with hyperdiffusion (0h), the energy spectra extend with a $-2$ slope over a larger range of wavenumbers. However, the flux of spectral energy $\Pi_k$ and the cumulative spectral dissipation $D_k$ are modified significantly: the flux is approximately constant in the spectral transfer range, and the dissipation is accumulated at higher wavenumbers. With physical viscous terms, in contrast, the spectral flux decays smoothly and dissipation exists at all wavenumbers. Furthermore, even in a hypothetical simulation with infinite resolution, since the energy spectra decay as $k^{-2}$, the dissipation will exist at all scales (since $D_k \sim k^2 E_k$). Hence, replacing the physical dissipation terms with a hyperdiffusion term tends to modify the fundamental nature of the spectral energy transfer. Since the variation of viscosity has a negligible effect on the energy spectra and hyperdiffusion modifies the fundamental nature of the spectral energy transfer in nonlinear acoustic wave turbulence, we choose the dimensionless viscosity $\mu = 1$ in all our simulations.
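For concreteness, a sketch of the two modifications compared in Fig. 14 is given below: Sutherland's law, Eq. (A1), and an integrating-factor update for the hyperdiffusion term $\nu\nabla^{2r}$, which acts as $-\nu k^{2r}$ in Fourier space under the dissipative sign convention. The time-stepping context and step size are assumptions.

```python
import numpy as np

T_M, S_REF = 273.15, 110.4  # reference temperature scale and Sutherland constant (K)

def sutherland_mu(T):
    """Dimensionless viscosity of Eq. (A1); T is temperature scaled by T_m."""
    return T**1.5 * (T_M + S_REF) / (T * T_M + S_REF)

def hyperdiffuse(f_hat, k2, nu=5.44e-10, r=2, dt=1e-4):
    """Exact decay over one step dt for d(f_hat)/dt = -nu * k^(2r) * f_hat."""
    return f_hat * np.exp(-nu * k2**r * dt)
```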
Appendix B: Enstrophy dissipation
Three terms contribute to the enstrophy dissipation, as derived in Eq. (51). In Fig. 15 we show the spectra of all three terms contributing to the dissipation of enstrophy in simulations 1d_r-6d_r (see Table I). Similar to the vorticity and enstrophy, all the dissipation terms are localized around $k_T$, signifying coherent vorticity in the domain. Figure 15(a) shows the spectra of the dissipation due to the misalignment between the density gradient and the bulk viscous forces (quantified by $\nabla(\nabla\cdot u)$). Figure 15(b) shows the spectra of the dissipation due to the misalignment between the density gradient and a combination of bulk and shear viscous forces (since $\nabla\times\Omega = \nabla(\nabla\cdot u) - \nabla^2 u$). Together, the terms shown in Figs. 15(a) and 15(b) may be considered as viscous counterparts of the baroclinic torque, in which the misalignment of density gradients with viscous forces contributes to the dissipation. We highlight that these two terms are not necessarily dissipative in nature [43]. Figure 15(c) shows the spectra of the Laplacian dissipation of the vorticity, which is necessarily negative. The Laplacian dissipation is highest in the rescaled simulations, denoting highly localized (in spectral space) vorticity. Since the vorticity and the production of enstrophy are highly sensitive to the method of forcing random shock waves (stochastic forcing compared with rescaled forcing in this work), we expect the enstrophy dissipation to also change with the forcing methodology.
Figure 1. (a) Instantaneous contours of $-\nabla\cdot w'$ showing the two-dimensional field of shock waves generated by stochastic forcing and (b) instantaneous contours of $\nabla\times u'$ showing the vorticity field generated due to these shock waves in an inhomogeneous medium (for case 6d in Table I).

Figure 2. Wave energy $E_{wk}$ and actual vortical energy $E_{vk}$ spectra of stochastically forced simulations for case 6d in Table I.

Figure 3. Time series of $E^f_w$ for all the simulation cases in Tables I and II.

Figure 4. (a) Time-averaged normalized histogram (PDF) of the Mach number, (b) variation of the weighted mean Mach number $\langle M\rangle$ of the shock waves in the domain with the forcing energy injection rate $\varepsilon$.

Figure 5. (a) Dimensionless scaled density-weighted wave energy spectra $E^f_{wk}$, (b) spectral flux of wave energy $\Pi_{wk}$, and (c) cumulative dissipation $D_{wk}$ for baseline cases (0b-0f in Table II). The wave energy spectra $E^f_{wk}$ scale with the dissipation $\varepsilon$ and integral length scale $\ell$ as in Eq. (42).

Figure 6. (a) Dimensionless scaled density-weighted wave energy spectra $E^f_{wk}$, (b) spectral flux of wave energy $\Pi_{wk}$, and (c) cumulative dissipation $D_{wk}$ for inhomogeneous cases (1d-6d in Table I) and the baseline case (0d in Table II). The density-weighted wave energy spectra are almost identical for all the cases and follow the scaling in Eq. (42). The Taylor microscale $\lambda$ for all the cases is almost identical, highlighting that background thermal gradients and the length scales of these gradients do not affect the shock waves significantly.

Figure 8. Time series of (a) density-weighted wave energy $E^f_w$ and (b) vortical energy $E_v$ for simulations 1d_r-6d_r before and after energy rescaling is enabled. The overall vortical energy in the system decreases, and the density-weighted wave energy quickly saturates near $\varepsilon_A^2$.

Figure 10. (a) Instantaneous contours of $-\nabla\cdot w'$ showing the two-dimensional field of shock waves generated by rescaled forcing and (b) instantaneous contours of $\nabla\times u'$ showing the coherent vorticity field generated due to the interaction of the shock waves with the inhomogeneous background medium (for case 6d_r in Table I) at different times.

Figure 12. (a) Time series of the total enstrophy $\sum_k Q_k$, baroclinic term $\sum_k B_k$, nonlinear baroclinic term $\sum_k B^{NL}_k$, dissipation $\sum_k D_k$, and advection term $\sum_k A_k$ for simulation 6d_r. The baroclinic production oscillates about a positive mean value (averaged in time). (b) Time-averaged total enstrophy $\langle Q\rangle_t$, baroclinic term $\langle B\rangle_t$, nonlinear baroclinic term $\langle B^{NL}\rangle_t$, negative of the dissipation $\langle D\rangle_t$, and advection $\langle A\rangle_t$ terms. The magnitude of $\langle D\rangle_t$ is higher than $\langle B\rangle_t$; the difference between the two is accounted for by the energy injection due to energy rescaling, $F^r_k$. The legends in (a) and (b) are the same.

Figure 13. (a) Spectra of the baroclinic term $B_k$, (b) negative of the nonlinear baroclinic term $B^{NL}_k$, and (c) negative of the dissipation term $D_k$ in the enstrophy equation (49).

Figure 15. Spectra of the dissipation due to the density variation interacting with (a) the velocity divergence, (b) the curl of the vorticity, and (c) the Laplacian of the vorticity, as derived in Eq. (51).

Table I. Simulation parameter space considered in this study and the corresponding simulation names. We run all the simulations using random forcing till $t = 100$ and then restart the simulation cases marked with $r$ with energy rescaling to isolate the coherent baroclinic vorticity. The rate of energy injection parameter $\varepsilon$ loses its significance in rescaled forcing simulations since there is no random forcing. Other fixed parameters and their values are: $\varepsilon_A = 0.15$, $\varepsilon_T = 0.1$, $Re_{ac} = 2500$, and $Pr = 0.72$. We perform simulations of the dimensionless equations; hence, the values of the scales in Eqs. (4) are irrelevant and only the values of the dimensionless numbers are specified.

Table II. Baseline simulation cases for $f = 1$.

Table III. Time-averaged Taylor microscale $\lambda_t$ and Kolmogorov scale $\eta_t$.

Case   0d      1d      2d      3d      4d      5d      6d
λ_t    0.087   0.088   0.083   0.089   0.084   0.086   0.088
η_t    0.0030  0.0027  0.0028  0.0028  0.0028  0.0030  0.0028
REFERENCES

1. M. Goldstein and D. Wundrow, "Spatial evolution of nonlinear acoustic mode instabilities on hypersonic boundary layers," Journal of Fluid Mechanics 219, 585-607 (1990).
2. H. W. Liepmann and A. Roshko, Elements of Gasdynamics (Courier Corporation, 2001).
3. W. Baars and C. Tinney, "Transient wall pressures in an overexpanded and large area ratio nozzle," Experiments in Fluids 54, 1-17 (2013).
4. G. Bonciolini, E. Boujo, and N. Noiray, "Output-only parameter identification of a colored-noise-driven Van der Pol oscillator: thermoacoustic instabilities as an example," Physical Review E 95, 062217 (2017).
5. P. Gupta, G. Lodato, and C. Scalo, "Spectral energy cascade in thermoacoustic shock waves," Journal of Fluid Mechanics 831, 358-393 (2017).
6. M. Breazeale and D. Thompson, "Finite-amplitude ultrasonic waves in aluminum," Applied Physics Letters 3, 77-78 (1963).
7. C. J. Lissenden, "Nonlinear ultrasonic guided waves—principles for nondestructive evaluation," Journal of Applied Physics 129, 021101 (2021).
8. R. Z. Sagdeev, "The 1976 Oppenheimer lectures: Critical problems in plasma astrophysics. I. Turbulence and nonlinear waves," Reviews of Modern Physics 51, 1 (1979).
9. M. Brouillette, "The Richtmyer-Meshkov instability," Annual Review of Fluid Mechanics 34, 445-468 (2002).
10. P. Lukes, F. Fernández, J. Gutiérrez-Aceves, E. Fernández, U. Alvarez, P. Sunka, and A. Loske, "Tandem shock waves in medicine and biology: a review of potential applications and successes," Shock Waves 26, 1-23 (2016).
11. J. Lighthill, "Acoustic streaming," Journal of Sound and Vibration 61, 391-418 (1978).
12. D. Shimizu and N. Sugimoto, "Numerical study of thermoacoustic Taconis oscillations," Journal of Applied Physics 107, 034910 (2010).
13. G. B. Whitham, Linear and Nonlinear Waves (John Wiley & Sons, 2011).
14. P. Gupta and C. Scalo, "Spectral energy cascade and decay in nonlinear acoustic waves," Physical Review E 98, 033117 (2018).
15. S. Thirani, P. Gupta, and C. Scalo, "Knudsen number effects on the nonlinear acoustic spectral energy cascade," Physical Review E 101, 023101 (2020).
16. W. Ellermeier, "Nonlinear acoustics in non-uniform infinite and finite layers," Journal of Fluid Mechanics 257, 183-200 (1993).
17. M. Tyagi and R. I. Sujith, "Nonlinear distortion of travelling waves in variable-area ducts with entropy gradients," Journal of Fluid Mechanics 492, 1-22 (2003).
18. D. Prasad, "Weakly nonlinear shock propagation in slowly varying one-dimensional flows," Physics of Fluids 18, 036101 (2006).
19. J. Budzinsky, E. Zukoski, and F. Marble, "Rayleigh scattering measurements of shock enhanced mixing," in 28th Joint Propulsion Conference and Exhibit (1992), p. 3546.
20. Y. Andreopoulos, J. H. Agui, and G. Briassulis, "Shock wave-turbulence interactions," Annual Review of Fluid Mechanics 32, 309-345 (2000).
21. L. Romagnosi, A. Ingenito, D. Cecere, G. Eugenio, and C. Bruno, "The role of the baroclinic term in supersonic fuel/air mixing enhancement," in 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition (2011), p. 401.
22. Y. Tian, F. A. Jaberi, Z. Li, and D. Livescu, "Numerical study of variable density turbulence interaction with a normal shock wave," Journal of Fluid Mechanics 829, 551-588 (2017).
23. M. L. Wong, J. R. Baltzer, D. Livescu, and S. K. Lele, "Analysis of second moments and their budgets for Richtmyer-Meshkov instability and variable-density turbulence induced by reshock," Physical Review Fluids 7, 044602 (2022).
24. B. Singh, L. K. Rajendran, P. Gupta, C. Scalo, P. P. Vlachos, and S. P. Bane, "Experimental and numerical study of flow induced by nanosecond repetitively pulsed discharges," in AIAA Scitech 2019 Forum (2019), p. 0740.
25. P. K. Kundu, I. M. Cohen, and D. R. Dowling, Fluid Mechanics (Academic Press, 2015).
26. K. O. Mikaelian, "Richtmyer-Meshkov instabilities in stratified fluids," Physical Review A 31, 410 (1985).
27. H. Yang and M. I. Radulescu, "Dynamics of cellular flame deformation after a head-on interaction with a shock wave: reactive Richtmyer-Meshkov instability," Journal of Fluid Mechanics 923, A36 (2021).
28. B. Yu, M. He, B. Zhang, and H. Liu, "Two-stage growth mode for lift-off mechanism in oblique shock-wave/jet interaction," Physics of Fluids 32, 116105 (2020).
29. F. Wei, R. Yang, W. Liu, Y. Zhao, Q. Wang, and M. Sun, "Flow structures of strong interaction between an oblique shock wave and a supersonic streamwise vortex," Physics of Fluids 34, 106102 (2022).
30. V. Eswaran and S. B. Pope, "An examination of forcing in direct numerical simulations of turbulence," Computers & Fluids 16, 257-278 (1988).
31. S. Jagannathan and D. A. Donzis, "Reynolds and Mach number scaling in solenoidally-forced compressible turbulence using high-resolution direct numerical simulations," Journal of Fluid Mechanics 789, 669-707 (2016).
32. H. Miura and S. Kida, "Acoustic energy exchange in compressible turbulence," Physics of Fluids 7, 1732-1742 (1995).
33. J. P. Boyd, Chebyshev and Fourier Spectral Methods (Courier Corporation, 2001).
34. M. Mortensen and H. P. Langtangen, "High performance Python for direct numerical simulations of turbulent flows," Computer Physics Communications 203, 53-65 (2016).
35. P. Yeung, K. Sreenivasan, and S. Pope, "Effects of finite spatial and temporal resolution in direct numerical simulations of incompressible isotropic turbulence," Physical Review Fluids 3, 064603 (2018).
36. J. B. Bell, A. Nonaka, A. L. Garcia, and G. Eyink, "Thermal fluctuations in the dissipation range of homogeneous isotropic turbulence," Journal of Fluid Mechanics 939, A12 (2022).
37. D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids (ANU Press, 2007).
38. S. Ghosal, T. S. Lund, P. Moin, and K. Akselvoll, "A dynamic localization model for large-eddy simulation of turbulent flows," Journal of Fluid Mechanics 286, 229-255 (1995).
39. P. Augier, A. V. Mohanan, and E. Lindborg, "Shallow water wave turbulence," Journal of Fluid Mechanics 874, 1169-1196 (2019).
40. M. K. Verma, Energy Transfers in Fluid Flows: Multiscale and Spectral Perspectives (Cambridge University Press, 2019).
41. M. K. Verma, "Variable energy flux in turbulence," Journal of Physics A: Mathematical and Theoretical 55, 013002 (2021).
42. S. B. Pope, Turbulent Flows (Cambridge University Press, 2000).
43. S. Kida and S. A. Orszag, "Enstrophy budget in decaying compressible turbulence," Journal of Scientific Computing 5, 1-34 (1990).
44. A. P. Dowling and Y. Mahmoudi, "Combustion noise," Proceedings of the Combustion Institute 35, 65-100 (2015).
45. S. Nazarenko, Wave Turbulence, Vol. 825 (Springer Science & Business Media, 2011).
46. C. Rosales and C. Meneveau, "Linear forcing in numerical simulations of isotropic turbulence: Physical space implementations and convergence properties," Physics of Fluids 17, 095106 (2005).
47. M. Farge and R. Sadourny, "Wave-vortex dynamics in rotating shallow water," Journal of Fluid Mechanics 206, 433-462 (1989).
"Yasuhiro Nakayama [email protected] \nMizuho Research & Technologies, Ltd\n\n",
"Tomochika Sawaki [email protected] \nMizuho Bank, Ltd\n\n"
] | [
"Mizuho Research & Technologies, Ltd\n",
"Mizuho Bank, Ltd\n"
] | [] | In this study, we analyze documents published by central banks using text mining techniques and propose a method to evaluate the policy tone of central banks. Since the monetary policies of major central banks have a broad impact on financial market trends, the pricing of risky assets, and the real economy, market participants are attempting to more accurately capture changes in the outlook for central banks' future monetary policies. Since the published documents are also an important tool for the central bank to communicate with the market, they are meticulously elaborated on grammatical syntax and wording, and investors are urged to read more accurately about the central bank's policy stance. Sentiment analysis on central bank documents has long been carried out, but it has been difficult to interpret the meaning of the documents accurately and to explicitly capture even the intentional change in nuance. This study attempts to evaluate the implication of the zero-shot text classification method for an unknown economic environment using the same model. We compare the tone of the statements, minutes, press conference transcripts of FOMC meetings, and the Fed officials' (chair, vice chair, and Governors) speeches. In addition, the minutes of the FOMC meetings were subjected to a phase analysis of changes in each policy stance since 1971. | null | [
"https://export.arxiv.org/pdf/2306.04277v1.pdf"
] | 259,095,516 | 2306.04277 | 78c5c3b9bdc6d53bc76fbe629024209fbdbce845 |
Analysis of the Fed's communication by using textual entailment model of Zero- Shot classification
Yasuhiro Nakayama [email protected]
Mizuho Research & Technologies, Ltd
Tomochika Sawaki [email protected]
Mizuho Bank, Ltd
Analysis of the Fed's communication by using textual entailment model of Zero- Shot classification
In this study, we analyze documents published by central banks using text mining techniques and propose a method to evaluate the policy tone of central banks. Since the monetary policies of major central banks have a broad impact on financial market trends, the pricing of risky assets, and the real economy, market participants are attempting to more accurately capture changes in the outlook for central banks' future monetary policies. Since the published documents are also an important tool for the central bank to communicate with the market, they are meticulously elaborated on grammatical syntax and wording, and investors are urged to read more accurately about the central bank's policy stance. Sentiment analysis on central bank documents has long been carried out, but it has been difficult to interpret the meaning of the documents accurately and to explicitly capture even the intentional change in nuance. This study attempts to evaluate the implication of the zero-shot text classification method for an unknown economic environment using the same model. We compare the tone of the statements, minutes, press conference transcripts of FOMC meetings, and the Fed officials' (chair, vice chair, and Governors) speeches. In addition, the minutes of the FOMC meetings were subjected to a phase analysis of changes in each policy stance since 1971.
Introduction
Since the monetary policies of major central banks have a broad and significant impact on financial market trends, the pricing of risky assets, and spillovers to the real economy, market participants are trying to better understand changes in the future monetary policy outlook of central banks. In particular, the monetary policy of the central bank of the United States (the Federal Reserve System, hereinafter the Fed) is positioned as the most important because it influences the movement of the dollar, the key currency. One of the means by which central banks engage in dialogue with the market and conduct smooth policy management is the publication of various documents, including statements and minutes issued after policy meetings, and transcripts of speeches and congressional testimony by senior officials. The Federal Open Market Committee (FOMC), the meeting at which U.S. monetary policy is formulated, meets eight times a year with the members of the Federal Reserve Board (FRB) and the presidents of the regional Fed banks as participants. Immediately after the meeting, a statement is published on the website, a press conference is held by the chair, and transcripts of the conference are published around the next day. After approximately three weeks, the minutes of the FOMC meeting are made public (they were published three days later until December 2004). The statement is a relatively short document of about two pages that summarizes the current economic perceptions, the monetary policy decided, and the names of the voters. The transcript of the press conference consists of the remarks read by the chair at the beginning of the conference as well as the questions and answers with reporters, and is approximately 20-30 pages in volume. In some cases, it records information that is not included in the statement but is of interest to market participants (specific details and future prospects). The minutes are a document that confirms the content of the economic analysis reported by the Fed economists, the process of discussion that led to the policy decision, and the variation of opinion among the members; the volume is around 10-20 pages. Outside of the FOMC meetings, transcripts of speeches and interviews by FOMC participants (Fed officials) and statements in congressional testimony are released for each occasion. Although the themes may not necessarily be related to monetary policy, Fed officials' own views on the economy and the outlook for monetary policy may be expressed, and if there is a major change from past statements, they may have an impact on the market. The statements of the members of the FRB are published on the Fed's website, and the statements of the presidents of the regional Fed banks are published on the website of each regional Fed bank. The content is often a few pages or so. Another document that draws the attention of market participants is the Beige Book (the Federal Reserve Bank business report). This is a report compiled by the regional Feds on the local economies of all 12 districts. It is released on the Wednesday two weeks before each FOMC meeting and serves as a reference at the FOMC meeting. The content is around 30 pages.
With regard to the format of the various documents, the statements, minutes, and Beige Books have a relatively standardized structure, while the transcripts of press conferences and speeches by Fed officials are not standardized and in some cases are characterized by a colloquial tone. In terms of monetary policy implications, it is generally assumed that the transcripts of Fed officials' speeches have a head start in capturing their perceptions ahead of FOMC meetings (however, there is a blackout period immediately before each FOMC meeting during which no Fed officials' speeches are made). In addition, more information can presumably be obtained from the longer documents, such as the minutes and press conference transcripts of the FOMC meetings, than from the shorter ones.
In this study, we conducted text mining using the statements, minutes, and transcripts of press conferences of FOMC meetings, as well as Fed officials' speeches, and compared their usefulness as information sources.
Related Work
As an application of text mining technology to the financial and economic fields, many studies of central bank documents have been conducted, along with analyses of corporate accounts and news. Many studies use text mining techniques on central bank documents to score central bank sentiment, and many of them aim to predict future monetary policy, market movements, and economic indicators. The purpose of this research is to use natural language processing techniques to read the intentions of the writer and speaker of a document as correctly as possible and to be the most correct recipient of the communication intended by the Fed. Therefore, in addition to data from 1993, when the minutes began to be published in their current format, we also analyzed the Fed's past minutes, which had been published in a similar format, going back to 1971, and classified the characteristics of the Fed's policy tone over a long time span by interest rate hike and rate cut phases.
Methodology
Data and Preprocessing
We obtained the statements, minutes, and transcripts of press conferences of FOMC meetings, as well as Fed officials' speeches, published on the Fed's website¹. For the FOMC minutes, paragraphs unrelated to the policy stance, such as the names of participants and explanations of the specific content of the policies adopted, were excluded from the analysis. However, all paragraphs were covered for the old format, where the paragraph structure could not be obtained. The transcripts of the press conferences of FOMC meetings are published on the day of the meeting containing only the remarks read out by the chair, and the following day in a form that includes the question and answer portion; here, the versions including the question and answer portion were used in the analysis. In the question and answer section, the reporters' questions were deleted and only the chair's responses were extracted.
The acquired documents were divided into sentence units after pre-processing steps such as the removal of footers and annotations.
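A minimal sketch of such pre-processing is shown below; the exact footer and annotation patterns used in this study are not specified in the text, so the regular expressions here are placeholders.

```python
import re

def preprocess(text):
    """Strip footers/annotations (placeholder patterns) and split into sentences."""
    text = re.sub(r"\[\d+\]|\(footnote \d+\)", " ", text)  # hypothetical annotation markers
    text = re.sub(r"\s+", " ", text).strip()
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```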
Topic classification by the textual entailment model
Topic classification is performed using an entailment model on a zero-shot basis [6]. In this study, we used a publicly available pre-trained model. When selecting the model, members with domain knowledge compared the output of several models and selected the one that best matched their intuition². The entailment model returns a score between 0 and 1 indicating whether the sentence to be judged entails the meaning of the hypothetical sentence. The closer the score is to 1, the more likely it is that the hypothetical statement is entailed; a value closer to zero means the sentence is inconsistent with the hypothetical statement. Since it is a zero-shot model, hypothetical statements can be set freely. In this study, we classified topics by using a hypothetical statement of the form "This sentence is related to the topic of {}." and setting the topic to be judged in the braces.

² https://huggingface.co/facebook/bart-large-mnli
We set three topics, namely "Inflation", "Job Gain", and "Economic Growth", and judged each sentence. Since more than one topic may be included in a sentence, we determine for each topic whether a sentence belongs to it. Since the model returns an entailment score, we set a threshold and judge that a sentence belongs to a topic if the score is equal to or higher than the threshold. The threshold was set at 0.9 by checking the entailment scores of several sample sentences, again by members with domain knowledge.
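A sketch of this topic classification with the footnoted model is shown below; the pipeline call follows the Hugging Face zero-shot classification interface, and the example sentence is illustrative.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

TOPICS = ["Inflation", "Job Gain", "Economic Growth"]
TEMPLATE = "This sentence is related to the topic of {}."

def topics_of(sentence, threshold=0.9):
    """Return every topic whose entailment score reaches the threshold."""
    out = classifier(sentence, TOPICS, hypothesis_template=TEMPLATE, multi_label=True)
    return [label for label, score in zip(out["labels"], out["scores"]) if score >= threshold]

print(topics_of("Inflation has moved up in recent months."))  # e.g. ['Inflation']
```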
Setting Hypothetical Statements by Category
To judge the nuance of statements within each category, category-specific hypothesis statements are created. The hypothesis statements were set by the following process.
First, we extracted wording related to the three categories of "Inflation," "Job Gain," and "Economic Growth" on a keyword basis from past FOMC statements released by the Fed. Frequently used expressions were then selected from the extracted expressions for each category and set as hypothetical statements. The balance of directions was also taken into consideration when selecting the expressions. For example, for "Inflation" we extracted, in a balanced manner, "declined", "diminished", and "edged down" as expressions for the downward direction, and "increased", "moved up", and "elevated" as expressions for the upward direction. A sketch of such a dictionary is given below.
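Only the expressions quoted above are included in this sketch; the full wording of the hypothesis sentences used in the study is an assumption.

```python
# Directional hypothetical statements per category (illustrative subset).
HYPOTHESES = {
    "Inflation": {
        "up":   ["Inflation increased.", "Inflation moved up.", "Inflation is elevated."],
        "down": ["Inflation declined.", "Inflation diminished.", "Inflation edged down."],
    },
    # "Job Gain" and "Economic Growth" are constructed analogously.
}
```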
Entailment judgment for each category
Entailment judgment is performed for each category based on the category-specific hypothetical statements. The target sentences for each category were selected based on the results of the topic classification in Section 3.2: we analyzed sentences identified as belonging to the "Inflation" topic for the "Inflation" category, the "Job Gain" topic for the "Job Gain" category, and the "Economic Growth" topic for the "Economic Growth" category. We used the same model as in the topic classification of Section 3.2 for the entailment judgment of each category. Entailment judgment is performed against the category-specific hypothetical sentences set in Section 3.3, and the number of sentences whose entailment score is at or above the threshold (0.9) is counted and tabulated for each document. In each category, a stance score is calculated by counting the number of sentences entailing each direction of expression, taking the difference, and dividing by the total number of sentences entailing either direction, i.e., score = (N_up − N_down) / (N_up + N_down).
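A sketch of the stance score computation is given below, reusing the `classifier` and `HYPOTHESES` objects from the sketches above; passing the full hypothesis sentences with an identity template is one possible way to realize the entailment judgment, not necessarily the exact procedure of the study.

```python
def stance_score(sentences, category, threshold=0.9):
    """score = (N_up - N_down) / (N_up + N_down) over topic-filtered sentences."""
    counts = {"up": 0, "down": 0}
    for direction, hyps in HYPOTHESES[category].items():
        for s in sentences:
            out = classifier(s, hyps, hypothesis_template="{}", multi_label=True)
            counts[direction] += sum(sc >= threshold for sc in out["scores"])
    total = counts["up"] + counts["down"]
    return (counts["up"] - counts["down"]) / total if total else 0.0
```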
Data
Documents of FOMC statements, minutes, transcripts of press conferences, and Fed officials' speeches were obtained from the FRB website and evaluated. The minutes were obtained from February 1971 onward, and the rest from December 2018 onward. In addition, because it was not possible to verify the changes in policy after February 2023, the point in time when this study was conducted, the analysis covers the period up to March 2022.
Analysis from December 2018 to March 2022
Using data from FOMC statements, minutes, transcripts of press conferences, and Fed officials' speeches from the December 2018 meeting to the March 2022 meeting, we conducted text mining for the phases after 2021 (a period in which, with inflation rising, the Fed shifted from an accommodative policy of zero interest rates and quantitative easing to a more restrictive policy of tapering, policy rate hikes, and balance sheet contraction) and compared the data sources. For comparison, we limited the data to the period from December 2018 onward, the point at which the chair began holding a press conference after every FOMC meeting.
With regard to the FOMC statement, there was a perception that inflation was on a downward trend until the March 2021 meeting, and the score seemed to shift from the April 2021 meeting to a perception that upward pressure on inflation was intensifying. Later, the same trend continued.
Next, with regard to the score using the FOMC minutes as the data source, it was from the September 2020 meeting that the score shifted from the perception of low inflation to near neutral, and from the April 2021 meeting, as in the FOMC statement, that the perception shifted to the perception that upward pressure on the inflation rate was intensifying. Later, the same trend continued. However, it should be noted that the FOMC minutes will be published three weeks after the meeting.
Third, looking at the time series of the scores when the transcripts of the FOMC press conference are used as the data source, as in the case of the FOMC minutes, the perception shifted to the fact that the scores were gradually returning to near neutral from the September 2020 meeting and that upward pressure on the inflation rate was increasing at the March 2021 meeting. This was the earliest turnaround in comparison to the statements and the minutes of the FOMC meetings. Although there was no significant change in the inflation perception in the FOMC statement at the March 2021 meeting, the results suggest that the chairperson's post-meeting press conference may have shown his awareness of the growing inflationary pressure. The same trend continued thereafter. Given that the FOMC minutes of the meeting are released three weeks after the meeting, while the transcripts of the press conference are released the following day, we believe that this was the earliest conversion among the three data sources.
Finally, it is about the case of Fed officials' speeches as a source. For comparison with other documents at the same time, the average of the five Fed officials' speeches made immediately before the FOMC is included. As in the FOMC minutes, the score gradually turned upward from September 2020, confirming a sharp increase in April 2021.
Figure 2: Trends in the stance on inflation
We then confirm the results for a similar analysis of job gain shown in Figure 3. With regard to the transition to an accommodative monetary policy following the shutdown in response to the global spread of the COVID19 since March 2020, a sharp decline in the score was confirmed in April 2020 in all sources of information, partly due to the emergency rate cut, and the perception shifted to that the job market is accommodative. Since then, press conferences and the minutes of the FOMC meetings have gradually moved back toward neutral, but only the FOMC statement sharply shifted to a perception of tight job market conditions at the April 2021 meeting. Finally, economic growth scores are shown in Figure 4. In the FOMC statements and the FOMC minutes of the February 2019 meeting, it was confirmed that economic growth had turned to a decelerating direction, leading to a rate cut in the policy rate after July 2019. Around March 2020, there was no significant difference in the timing of the score decline due to differences in data sources, partly due to the sharp economic slowdown after the shutdown. As for the subsequent progress, the scores based on the minutes, transcripts of press conferences of FOMC meetings and Fed officials' speeches returned to the same level of score around the summer of 2020 as before March 2020, while the scores for FOMC statements returned to a neutral level in the spring of 2021. Table 1 uses the minutes of the FOMC meetings from February 1993 to March 2022 as a data source to generate scores for communication in the same method as in 4.2, and compares each aspect of the policy stance.
First, with regard to the perception of inflation, the Fed's communication, averaged over the entire period, reflected a perception of downward pressure on inflation, consistent with the inflation rate (annualized core PCE) having settled at a low level, remaining between 1% and 3% from 1993 until 2021. On the other hand, the scores were higher than the overall period average during periods in which the policy rate was raised, and lower during periods in which the rate was lowered and during zero-interest-rate periods. We believe this result is consistent with the idea that the policy rate is changed as a means to achieve the inflation target and stabilize prices.
Next, the recognition of job gains is also shown in Table 1. The scores were higher than the overall period average during rate-hike periods, and lower during rate-cut and zero-interest-rate periods. This result is consistent with the idea that the policy rate is changed as a means to achieve maximum employment, and we believe that the Fed's communication is being read correctly by the model.
Finally, a similar trend was observed for the scores related to economic growth. The average over the entire period was positive, i.e., a perception that the economy was strong, and the score was higher than the overall average during rate-hike periods. On the other hand, the average score was negative during rate-cut periods, suggesting that the points at which the Fed perceived the economy to be in recession and the points at which the policy rate was actually cut were broadly consistent. Before December 1992, FOMC minutes did not exist in their current form, but similar documents named "Minutes of Action" were produced. Table 2 uses the "Minutes of Action" of the FOMC meetings from February 1971 to December 1992 as the data source and compares each aspect of the policy stance in the same way as in 4.3. Although the score levels differed, a trend similar to that in 4.3 was confirmed. Comparing the overall period averages of Tables 1 and 2, Table 2 shows higher scores for inflation recognition, lower scores for job-gain recognition, and identical scores for economic growth. This is consistent with the magnitudes of the average inflation rate (core PCE) and unemployment rate in each period. In addition, in Table 2, the tendency for the averages of all three categories (inflation, job gain, and economic growth) to be higher during rate-hike phases than during rate-cut phases is the same as in Table 1, suggesting that the Fed's response to inflation and job-gain data has been consistent over the past 50 years and can be quantified with the model in this study.
Summary
In this study, we attempted to read changes in the Fed's communication using a textual entailment model for zero-shot text classification. Because the model relies on zero-shot classification, it can handle other subjects without additional training.
Using data from December 2018 to March 2022, we compared the documents published by the Fed. For the FOMC statements, the scores generated by our model shifted in a somewhat binary fashion, while more gradual changes in stance were observed for the minutes and the press-conference transcripts. In addition, the average of the five most recent Fed officials' speeches remained near a neutral level, so data pre-processing remains an issue for future work.
Next, we conducted a phase analysis using the FOMC minutes over the long period from February 1971 to March 2022 and found that the Fed's communication on inflation, job gain, and economic growth was consistent with actual policy changes and could be interpreted using our model.
where U_c is the set of upward expressions in category c, D_c is the set of downward expressions in category c, and N(e) is the number of sentences in the document judged to contain the expression e.
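A minimal sketch of how such a sentence-level judgment and document score could be computed with an off-the-shelf zero-shot entailment classifier. The model name, the example expressions standing in for U_c and D_c, the confidence threshold, and the count-based normalization are illustrative assumptions, not the exact setup of this study.

# Minimal sketch: scoring a document's stance in one category with
# zero-shot classification (labels, threshold, and model are assumptions).
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

up_labels = ["inflation is rising"]     # stand-in for the upward set U_c
down_labels = ["inflation is falling"]  # stand-in for the downward set D_c

def score_document(sentences, threshold=0.8):
    n_up = n_down = 0
    for s in sentences:
        result = classifier(s, candidate_labels=up_labels + down_labels)
        label, prob = result["labels"][0], result["scores"][0]
        if prob < threshold:
            continue  # sentence judged to contain neither expression
        if label in up_labels:
            n_up += 1    # adds to the N(e) counts over U_c
        else:
            n_down += 1  # adds to the N(e) counts over D_c
    return (n_up - n_down) / len(sentences) if sentences else 0.0

print(score_document(["Upward pressure on prices has intensified.",
                      "Inflation remains subdued."]))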
Figure 1: Analysis Flow
Figure 3: Trends in Stance on Job Gain
Figure 4: Trends in Stance on Economic Growth
4.3 Analysis using FOMC minutes from February 1993 to March 2022
Ito et al. combined an expert FOMC dictionary with LDA topic analysis to extract sentiment by topic from the minutes of the FOMC meetings and showed that it has explanatory power for macroeconomic indicators [1]. Wang used sentence-level embedding vectors obtained with FinBERT to calculate sentiment by topic and showed that they have explanatory power for macroeconomic indicators [2]. Granziera et al. used speech texts from FOMC members and district Fed presidents to calculate sentiment about inflation [3]. Some studies have analyzed sentiment by member using transcripts that contain all statements made by participants and are released five years after the FOMC meeting [4].
1 https://www.federalreserve.gov/default.htm
Table 1: Periodic averages of scores using FOMC minutes

                  (1)     (2)     (3)     (4)
Inflation        -0.23   +0.00   -0.40   -0.34
Job Gain         -0.08   +0.26   -0.45   -0.24
Economic growth  +0.22   +0.44   -0.16   +0.20

*(1) the entire period, (2) periods of FF rate hikes, (3) periods of FF rate cuts, and (4) periods of zero interest rates (2009/1 to 2015/10, 2020/4 to 2022/1).
*(2) When the rate was raised at consecutive meetings, a meeting in between that kept the rate unchanged was included in the rate-hike period.
*In a Welch t-test with the alternative hypothesis "(2) the average of the rate-hike phases > (3) the average of the rate-cut phases", inflation, job gain, and economic growth were all determined to be significantly different at the 1% significance level.
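The one-sided Welch tests reported in the table notes can be reproduced with standard tooling; a minimal sketch with placeholder score arrays rather than the study's actual per-meeting scores:

# Minimal sketch of the one-sided Welch t-test from the table notes
# (the score arrays below are placeholders, not the study's data).
from scipy import stats

hike_scores = [0.1, 0.3, 0.2, 0.4, 0.0]  # hypothetical rate-hike phase scores
cut_scores = [-0.5, -0.3, -0.4, -0.6]    # hypothetical rate-cut phase scores

# equal_var=False selects Welch's t-test; alternative="greater" tests
# H1: mean(hike_scores) > mean(cut_scores).
t_stat, p_value = stats.ttest_ind(hike_scores, cut_scores,
                                  equal_var=False, alternative="greater")
print(t_stat, p_value)  # significant at the 1% level if p_value < 0.01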
4.4 Analysis using FOMC minutes from February 1971 to December 1992
Table 2: Periodic averages of scores using FOMC "Minutes of Action"

                  (1)     (2)     (3)
Inflation        -0.05   +0.17   -0.34
Job Gain         -0.31   -0.13   -0.63
Economic Growth  +0.22   +0.30   +0.08

*(1) the entire period, (2) periods of FF rate hikes, and (3) periods of FF rate cuts.
*In a Welch t-test with the alternative hypothesis "(2) the average of the rate-hike phases > (3) the average of the rate-cut phases", inflation, job gain, and economic growth were all determined to be significantly different at the 1% significance level.
Acknowledgements
This research was inspired by the FY2022 first-half report of the University of Tokyo's Data Science School. We would like to thank all the students who took on the task with sincerity and came up with great ideas, as well as all the TAs and professors.
Points of Attention
The contents and views of this paper belong to the author personally and are not the official views of the company to which he belongs.
[1] Ryo Ito, Shintaro Suda, Kiyoshi Izumi: Sentiment analysis in FOMC minutes using LDA topic model, JSAI Special Interest Group on Financial Informatics, SIG-FIN-017 (2016)
[2] Sarah-Yifei Wang: Aspect-based Sentiment Analysis in Document - FOMC Meeting Minutes on Economic Projection, arXiv:2108.04080 (2021)
[3] E. Granziera, V. H. Larsen, G. Meggiorini: Fed Sentiment and Expectations: Evidence from Speeches by FOMC Members (2022)
[4] S. Cannon: Sentiment of the FOMC: Unscripted, Economic Review, 2015, issue Q IV, 5-31 (2015)
[5] L. Xiao: Using Sentiment Analysis to Understand Monetary Policy Uncertainty (2022)
[6] W. Yin, J. Hay, D. Roth: Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach, in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), Hong Kong, China, November 3-7, 2019, pp. 3914-3923 (2019)
| [] |
[
"Edge conductivity in PtSe 2 nanostructures",
"Edge conductivity in PtSe 2 nanostructures"
] | [
"Roman Kempt ",
"Agnieszka Kuc ",
"Thomas Brumme ",
"Thomas Heine [email protected] ",
"DrRoman Kempt ",
"Prof. DrThomas Brumme ",
"Thomas Heine ",
"DrAgnieszka Kuc ",
"Prof. DrThomas Heine ",
"DrThomas Heine ",
"\nChair of Theoretical Chemistry\nHelmholtz-Zentrum Dresden-Rossendorf\nInstitute of Resource Ecology Permoserstrasse 15\nDepartment of Chemistry\nTechnische Universität Dresden\nBergstrasse 6601069, 04318Dresden, LeipzigGermany, Germany\n",
"\nYonsei University\nSeodaemun-gu120-749SeoulRepublic of Korea\n"
] | [
"Chair of Theoretical Chemistry\nHelmholtz-Zentrum Dresden-Rossendorf\nInstitute of Resource Ecology Permoserstrasse 15\nDepartment of Chemistry\nTechnische Universität Dresden\nBergstrasse 6601069, 04318Dresden, LeipzigGermany, Germany",
"Yonsei University\nSeodaemun-gu120-749SeoulRepublic of Korea"
] | [] | PtSe2 is a promising 2D material for nanoelectromechanical sensing and photodetection in the infrared regime. One of its most compelling features is the facile synthesis at temperatures below 500 °C, which is compatible with current back-end-of-line semiconductor processing. However, this process generates polycrystalline thin films with nanoflake-like domains of 5 to 100 nm size. To investigate the lateral quantum confinement effect in this size regime, we train a deep neural network to obtain an interatomic potential at DFT accuracy and use it to model ribbons, surfaces, nanoflakes, and nanoplatelets of PtSe2 with lateral widths between 5 and 15 nm. We determine which edge terminations are the most stable and find evidence that the electrical conductivity is localized on the edges for lateral sizes below 10 nm. This suggests that the transport channels in thin films of PtSe2 might be dominated by networks of edges, instead of transport through the layers themselves.
| null | [
"https://export.arxiv.org/pdf/2306.04365v1.pdf"
] | 259,095,573 | 2306.04365 | 48d691f691a5e1d301ef9fe5ae70027c2c60e487 |
Edge conductivity in PtSe2 nanostructures
Roman Kempt
Agnieszka Kuc
Thomas Brumme
Thomas Heine [email protected]
Dr. Roman Kempt
Prof. Dr. Thomas Brumme
Thomas Heine
Dr. Agnieszka Kuc
Prof. Dr. Thomas Heine
Dr. Thomas Heine
Chair of Theoretical Chemistry
Department of Chemistry
Technische Universität Dresden
Bergstrasse 66, 01069 Dresden, Germany
Helmholtz-Zentrum Dresden-Rossendorf
Institute of Resource Ecology
Permoserstrasse 15, 04318 Leipzig, Germany
Yonsei University
Seodaemun-gu, 120-749 Seoul, Republic of Korea
Edge conductivity in PtSe2 nanostructures
PtSe2, Deep Potentials, Nanostructures, Density-Functional Theory, Electrical Conductivity
PtSe2 is a promising 2D material for nanoelectromechanical sensing and photodetection in the infrared regime. One of its most compelling features is the facile synthesis at temperatures below 500 °C, which is compatible with current back-end-of-line semiconductor processing. However, this process generates polycrystalline thin films with nanoflake-like domains of 5 to 100 nm size. To investigate the lateral quantum confinement effect in this size regime, we train a deep neural network to obtain an interatomic potential at DFT accuracy and use it to model ribbons, surfaces, nanoflakes, and nanoplatelets of PtSe2 with lateral widths between 5 and 15 nm. We determine which edge terminations are the most stable and find evidence that the electrical conductivity is localized on the edges for lateral sizes below 10 nm. This suggests that the transport channels in thin films of PtSe2 might be dominated by networks of edges, instead of transport through the layers themselves.
Introduction
PtSe2 is a candidate material for the next generation of electromechanical and optical sensors in nanoscale devices. [1,2] It features a large negative Gauge Factor (GF) of up to -84 in electromechanical pressure sensors, [3-5] low contact resistivities and large carrier mobilities between 625 and 1500 cm² V⁻¹ s⁻¹, [4,6-9] and long-term stability of up to years in the presence of air and moisture. [4,10-13] The low thermal budget of 450-500 °C during synthesis, for example via thermally assisted conversion (TAC), [5,7,14] allows PtSe2 films to be grown directly on centimeter-scale silicon wafers and even on plastics with different topographies. [3,7] This is ideal for directly integrating PtSe2 into devices, such as piezoresistive sensors, [1,15,16] photonic circuits, [17-19] and memristors. [20] There are still challenges in consistently growing high-quality PtSe2 films, such as controlling homogeneity, [6,7] layer orientation, [8] continuity, [6,21] and layer thickness. [5,7,22,23] Often, the resulting films are polycrystalline with fused, nanoflake-like domains between 10 and 50 nm in size and thicknesses between 6 and 8.5 nm. [5-7,12] Recently, chemical vapor deposition (CVD) with a metalorganic precursor was shown to yield domains of up to 300 nm in size. [23] Controlling these synthesis factors is crucial to obtain PtSe2 devices with reproducible performance because they strongly affect the electronic properties of the PtSe2 film. For example, varying the layer thickness from two to three layers gives rise to a semiconductor-to-semimetal transition, [24-26] point and edge defects cause magnetic behavior, [27-29] and stacking faults lead to semiconducting instead of semimetallic behavior. [30,31] All of those factors may contribute to varying observations of PtSe2 characteristics, such as mobilities below 1 cm² V⁻¹ s⁻¹, [32] p- or n-type behavior depending on the selenization process, [33] and a 35% reduction of the cross-plane thermal conductivity due to polycrystallinity. [34] In this study, we investigate how the electrical conductivity depends on the lateral width of PtSe2 nanostructures, namely ribbons, surfaces, nanoflakes, and nanoplatelets, with different edge terminations using density-functional theory (DFT). The models required to study such systems contain up to a few thousand heavy atoms. To this end, we train a deep neural network to obtain an interatomic potential for PtSe2 nanostructures. The training data includes three different stacking phases in their bulk form and for up to nine layers (from our previous work) [30] and differently terminated ribbon and surface models (see Figure 1a-d). We employ the DeePMD-kit framework, [35] which implements the smooth version of the Deep Potential (DP) model by E et al. [36,37] The DP model has been successfully used to represent the Potential Energy Surface (PES) of complex systems in materials science, such as the formation of defects from radiation exposure under fusion reactor conditions [38] or the behavior of water confined between graphene nanocapillaries. [39] Furthermore, it has been used to explain the large piezo- and pyroelectric coefficients of zirconia at high temperatures, [40] the large lattice thermal conductivity of Bi2Te3 with strong anharmonicity, [41] and even moiré phonons in graphene. [42] The DP model has been applied to systems containing more than 100 million atoms with effectively the same accuracy as the underlying training method. [43] Using the DP model, we find that the stability of different edge terminations depends strongly on temperature and is influenced by the number of layers of PtSe2. We calculate the band structures and Boltzmann conductivities of ribbons, surfaces, nanoflakes, and nanoplatelets of PtSe2 for lateral widths between 5 and 15 nm using DFT, with the largest structure containing more than 1200 heavy atoms in periodic boundary conditions along the stacking direction. All structures have been made available at the NOMAD repository as a data set. [44]
Results and Discussion
Training Data Set and Settings

We employ a data set that includes PtSe2 structures in the octahedrally coordinated 1T^O, 2H^O, and 3R^O stacking phases as discussed in our previous work, [30] as well as bulk phases, ribbons, and surfaces. [44] All trajectories have been calculated at the same level of theory (see Methods for details). The DP model was fed with energies, forces, and stresses (where available) from relaxation trajectories, phonon calculations, elastic deformations, and Molecular Dynamics (MD) simulations. The composition of the data set is summarized in Table S1. The largest fraction of frames stems from MD (NVT ensemble) simulations (77%), which were performed on different unit-cell sizes at temperatures up to 700 K. The 1T^O phase makes up the largest fraction of the training-data structure types (69%) because it is the phase that defines the edges shown in Figure 1. For reasons discussed below, we do not include other polytypes, such as the 3T^O, 3A^O, and 6R^O phases. [30] At the current stage, the model includes only cell stresses from bulk unit cells, as we fix the unit-cell parameters of edges and surfaces at those of their corresponding few-layer or bulk counterparts. An exemplary training parameter input is shown in Figure S1. The data set is rather small, with approximately 12700 frames, all of which we use for training. We validate the performance of the DP model not just on the as-obtained energies and forces, but on derived quantities, such as bond lengths and phonon spectra.

Table S1 - Training data type and composition. G-Phonons refer to phonons calculated only at the Gamma point. E stands for total energy, F for forces, and S for stresses. A frame refers to a snapshot of a structure with energy, forces, and stresses, if available.
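As an aside, a trained and frozen DP model can be used directly as a calculator in the Atomic Simulation Environment; a minimal sketch, where the model file graph.pb and the structure file are placeholder names:

# Minimal sketch: relaxing a PtSe2 structure with a trained DP model via ASE.
# "graph.pb" and "ptse2_ribbon.xyz" are placeholder file names.
from ase.io import read
from ase.optimize import BFGS
from deepmd.calculator import DP

atoms = read("ptse2_ribbon.xyz")    # any ASE-readable structure file
atoms.calc = DP(model="graph.pb")   # frozen DeePMD-kit model

opt = BFGS(atoms, trajectory="relax.traj")
opt.run(fmax=0.01)                  # relax until max force < 0.01 eV/Å
print(atoms.get_potential_energy())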
Accuracy of the DP Model

To verify the validity of the DP model, we analyze how the energy and force root mean square error (RMSE) distributes over the training data set and how accurately the model predicts bond lengths and phonon spectra (see Figure 2 a-c). We find that the DP model represents the potential energy surface of the system well, yielding a weighted RMSE of 3.5 meV atom⁻¹ for the energies and of 18.9 meV atom⁻¹ Å⁻¹ for the forces (see Figure 2 a). The prediction of atomic forces is better than the prediction of energies because there is more information to train from: the forces are atom-resolved (three numbers per atom), whereas the total energy is a single value averaged over the whole system. We find that the energy prediction error is larger (between 5 and 10 meV atom⁻¹) for systems with single unit cells (i.e., primitive units) than for systems in supercell representation, likely because the training data includes more supercell calculations than calculations in the primitive unit cell. An additional effect can be observed in single-layered PtSe2, where the predicted total energy decreases with increasing supercell size (see Figure S2) because the structure is allowed to undergo distortions, such as wrinkling, which are not possible in a single unit cell. Thus, the DP model underestimates the total energy of single-layer PtSe2 in its primitive unit cell by 19 meV atom⁻¹. We conclude that the DP model should only be used for sufficiently large unit cells (above 50 atoms), which is the case in this study. The error of the force prediction is largest for structures that deviate from the ideal 1:2 Pt:Se ratio, i.e., Pt-rich or Se-rich edge models. This error might be further reduced by adding additional training data. The resulting bond lengths are reproduced within 2% of the DFT values (see Figure 2 b).
We observe that larger errors occur for the prediction of Se-Se dimer bond lengths, which form only in the Se-rich zigzag edge and the single-layer armchair edge. This might be remedied by adding more training data for those specific edges and for the pure bulk phases. Greater difficulties arise in the prediction of the interlayer distance, which the DP model tends to underestimate by 0.5 to 1.4% (see Figure 2 b) compared to the DFT value. This error cannot be easily remedied by adding more training data but stems from how the DP model is trained. The DP model predicts energies of single atoms embedded in a local bonding environment, represented by environment matrices $(\mathcal{R}_i)_j$: [35,38]
$(\mathcal{R}_i)_j = s(|r_{ij}|) \times \left( \frac{x_{ij}}{|r_{ij}|}, \frac{y_{ij}}{|r_{ij}|}, \frac{z_{ij}}{|r_{ij}|} \right)$ with $s(|r_{ij}|) = 0$ for $|r_{ij}| > r_c$,

where $r_{ij} = r_j - r_i$ and $x_{ij}$, $y_{ij}$, and $z_{ij}$ are the Cartesian components of the relative distance vector to other atoms within a cutoff radius $r_c$. The factor $s(|r_{ij}|)$ is a switching function that smoothly varies from 1 to 0 at this cutoff distance. [38]
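As an illustration, a minimal numpy sketch of building one such environment matrix; the cosine-shaped switching function is an assumed stand-in, not necessarily the exact smoothing used by DeePMD-kit:

# Minimal sketch of a smooth environment matrix (R_i)_j for one atom i.
import numpy as np

def switching(r, r_smth=2.0, r_cut=8.0):
    # Goes smoothly from 1 (r <= r_smth) to 0 (r >= r_cut); an illustrative
    # cosine shape, not necessarily DeePMD-kit's exact function.
    if r <= r_smth:
        return 1.0
    if r >= r_cut:
        return 0.0
    x = (r - r_smth) / (r_cut - r_smth)
    return 0.5 * (1.0 + np.cos(np.pi * x))

def environment_matrix(positions, i, r_cut=8.0):
    # Rows s(|r_ij|) * (x_ij, y_ij, z_ij)/|r_ij| for neighbors j of atom i.
    rows = []
    for j, pos in enumerate(positions):
        if j == i:
            continue
        r_ij = pos - positions[i]
        d = float(np.linalg.norm(r_ij))
        s = switching(d, r_cut=r_cut)
        if s > 0.0:
            rows.append(s * r_ij / d)
    return np.array(rows)

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 9.0, 0.0]])
print(environment_matrix(pos, i=0))  # the atom 9 Å away lies outside the cutoff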
These environment matrices define the feature matrix of the DNN. Consequently, the DP model can only learn from features that are representable within this cutoff radius. For a layered material such as PtSe2, a spherical cutoff is not ideal, because one would have to choose a very large radius to include next-layer and next-next-layer interactions. However, training quickly becomes unfeasible for too large cutoff radii, and the prediction may become worse because the feature matrix becomes diluted. To solve this issue, further developments in DeePMD-kit are required, e.g., anisotropic cutoff radii. Here, we chose a cutoff radius of 8 Å for training, which means that each individual layer at most sees the next and second-next layer (for comparison, the bulk interlayer distance in the 1T^O phase at the DFT level is 4.97 Å). This yielded a good compromise between predicting accurate interlayer distances and edge configurations. Consequently, we do not include further stacking orders with larger interlayer distances and long-range stacking effects in our training data set (such as 3T^O, 3A^O, and 6R^O). [30] These would require further training data and improved cutoff definitions.

Figure 2 - a) Energy and force prediction of the DP model as obtained by the dp test utility over the training data set. Each circle corresponds to a subset of input frames with the same chemical formula. The size of the circle corresponds to the size of the subset, and the color indicates the deviation from the ideal stoichiometry of 1:2 in PtSe2. b) Relative bond-length error compared between structures after relaxation by the DP model and from DFT. Each dot represents the comparison between the smallest, largest, or mean bond length of the same type of bond. c) Bulk phonon spectra of three stacking phases obtained from DFT and the DP model. d) Relative free surface formation energy of the differently terminated nanoflake models. The width of the bands corresponds to energy differences obtained for different lateral diameters of the nanoflakes.
Edge Stability

The DP model reproduces the phonon spectra of the three stacking orders 1T^O, 2H^O, and 3R^O in the bulk phase very well, as shown in Figure 2 c. All three stacking orders are found to be stable local minima, agreeing with our previous work. [30] Interestingly, the DP model shows a degeneracy of phonon modes at the A-point in the 1T^O and 2H^O stacking configurations, whereas the DFT calculations do not. This hints at a lack of long-range interaction in the DP model, which might be explained by the cutoff radius. Using the DP model, we can estimate thermodynamic quantities for very large systems, such as nanoflakes. We argue that nanoflake models are the most realistic approach to determine the stability of edge terminations because they allow for the comparison of systems with similar finite sizes without considering the additional effects of interacting periodic images. Furthermore, they are the best model to consider nanoflake growth, which depends, e.g., on charge localization on the corners of such nanoflakes. [45] Still, one has to choose a thermodynamic model to compare the Gibbs free energies of systems with different compositions and numbers of atoms. In the TAC process, a pre-deposited Pt film is converted to PtSe2 by pure, vaporized Se powder transported by a carrier gas (e.g., H2). [7] Hence, choosing bulk Pt and bulk Se in their most stable forms is the simplest reference for the (unbalanced) reaction:

Pt + Se ↔ PtSe2
If we normalize the Gibbs free energy of this reaction over the basal area of a nanoflake, we obtain a free surface formation energy: [46]

$\gamma(p, T) = \frac{1}{2A} \left( G_{\mathrm{PtSe_2}}(p, T) - N_{\mathrm{Pt}}\, G_{\mathrm{Pt}}(p, T) - N_{\mathrm{Se}}\, G_{\mathrm{Se}}(p, T) \right)$

Here, $p$ stands for pressure (we chose 1 atm), $T$ for temperature, $A$ for the basal area of the hexagonal flake, and $N_{\mathrm{Pt}}$ and $N_{\mathrm{Se}}$ for the numbers of Pt and Se atoms in the PtSe2 nanoflake, respectively. The individual terms $G(p, T)$ are the free energies including the internal energy $U + pV(p)$, the Zero-Point Energy (ZPE), and the entropy term $-TS(p, T)$. The pressure-volume term is included for the molecular nanoflakes but neglected for the bulk crystals, where the change in volume at this pressure is negligible.
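A minimal sketch of this normalization, under the assumption that the factor 1/(2A) accounts for the two basal faces of the flake; all numerical inputs are placeholders, not computed results:

# Minimal sketch of the free surface formation energy gamma(p, T).
def surface_formation_energy(G_flake, G_Pt_bulk, G_Se_bulk, n_Pt, n_Se, area):
    # gamma = (G_flake - n_Pt*G_Pt - n_Se*G_Se) / (2*area)
    # G_flake: free energy of the PtSe2 nanoflake (eV)
    # G_Pt_bulk, G_Se_bulk: free energies per atom of bulk Pt and Se (eV)
    # area: basal area of the hexagonal flake (Å^2)
    return (G_flake - n_Pt * G_Pt_bulk - n_Se * G_Se_bulk) / (2.0 * area)

# Placeholder numbers purely to show the call signature:
print(surface_formation_energy(G_flake=-1234.5, G_Pt_bulk=-6.0,
                               G_Se_bulk=-3.5, n_Pt=100, n_Se=200,
                               area=2500.0))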
The results are shown in Figure 2 d, where different values are obtained for nanoflakes with widths between 5 and 8.7 nm, yielding a range of free formation energies (indicated as bands). Within this model, the Se-rich zigzag termination is the most stable for temperatures below 500 °C, agreeing with the observations of Li et al. [29] For higher temperatures, the Se-poor zigzag termination becomes more favorable. In experiments, Se tends to evaporate from PtSe2 films at higher temperatures, which is not considered in this simple thermodynamic model. As Li et al. [29] pointed out, this likely leads to generally Se-deficient PtSe2 films, which means the Se-poor zigzag edge is more abundant than estimated by this model. Regarding the Pt-rich zigzag edge (see Figure 1 b), we exclude it from further analysis because we already observe in the DFT MD simulation that the edge breaks down under annealing (see Figure S3). In experiments, this edge probably requires surface passivation to be stabilized. The Pt-rich edge is the only edge termination where we have observed magnetization, which has been discussed for the Se-poor and Pt-rich edges by Li et al. [29] Similarly, Avsar et al. [27,28] have shown that Pt vacancies in single-layer PtSe2 give rise to magnetization. We cannot exclude that the other edges also contribute to the magnetic behavior observed in PtSe2 films [27,47] because the PBE functional might simply underestimate it. At the current stage, we investigate the electronic properties taking spin-orbit coupling into account, but no other spin polarization. Generally, we observe that the armchair edge is less stable than the Se-poor and Se-rich zigzag edges (see Figure 2 d). The armchair edge undergoes strong surface reconstruction for a single layer of PtSe2, which we refer to as the distorted armchair edge. The distorted armchair edge doubles its periodicity to form alternating Se-Se dimers (see Figure 1 b, Figure S4). This does not occur for surfaces, where the symmetric armchair edge is preferred. This might be rationalized by the larger dielectric screening and the larger bulk lattice constant compared to a single layer. This feature is correctly reproduced by the DP model; however, it overestimates the transition barrier between the symmetric and distorted armchair edge for single-layer PtSe2. For example, for a ribbon of 5.52 nm width, the DFT transition barrier is 12.2 kJ mol⁻¹ unit⁻¹, whereas the DP model predicts a barrier of 18.7 kJ mol⁻¹ unit⁻¹. The larger barrier predicted by the DP model likely stems from the larger energy prediction error for structures containing Se-Se dimers, which make up only a small fraction of the training data set. Lastly, we observe wrinkling (or rippling) in single-layered nanoflakes with the Se-poor and Se-rich zigzag edges, but not in nanoflakes with the armchair edge (see Figure S5). Such a wrinkling effect has been previously observed for other TMDC monolayers and may lead to a significant reduction of the band gap.
[48] We observe that the DP model appears to have a tendency to induce larger wrinkling in such nanostructures than would be obtained from DFT (see Figure S5). In nanoribbon models, the effect is small and impacts the electronic properties only to a small extent (see Figure S6). In nanoplatelet models, the wrinkling is smaller due to the periodicity along the stacking axis. The wrinkling may contribute to the localization of conductive channels, though, as we show in the next section, we observe such localization for the perfectly flat armchair nanoflake as well. [48]

Lateral Quantum Confinement in Ribbons and Surfaces

Before we discuss the electronic properties of nanoflakes and nanoplatelets, we take a look at the electronic band structures of the corresponding ribbons and surfaces with the same edge terminations, focusing on how the electronic bands change with increasing lateral width. In Figure 3, we project the electronic band structure onto the atoms belonging to the edge (within a 1 nm region) and onto the central PtSe2 units, which we call channel or bulk (see Figure 1). The full set of band structures is shown in Figures S7-8. In the case of single-layered ribbons, we observe a clear distinction between edge bands and channel bands. For the armchair and Se-poor zigzag edges, the edge bands reside within the band gap of the PtSe2 channel, as was also observed by Li et al. [29] The bands of the PtSe2 channel are affected by quantum confinement, whereas the edge bands are not. In the limit of large lateral width, the concentration of edge states vanishes and they can be considered dopant states, leaving the band edges of the PtSe2 channel (indicated by gray areas in Figure 3). For nanoribbons, we observe that the band gap converges to a value of about 1.2 eV beyond 10 nm width. The band gap of perfect, single-layered PtSe2 at the PBE level is 1.38 eV; the difference between the two values might be due to wrinkling. [48] The distorted armchair edge is formally metallic, whereas the Se-poor zigzag edge is semiconducting with a small band gap, and the Se-rich zigzag edge is insulating with the edge states residing close to the channel band edges. This does not, however, reflect how the conductivity of the edges behaves. In Figure 3, we show the Boltzmann conductivity in the constant relaxation time approximation for different lateral widths. [49] The Boltzmann conductivity is a tensor $\sigma / \tau_0$, where $\tau_0$ is the so-called relaxation time. The relaxation time of a system is in principle unknown and depends on how quickly excited carriers scatter back into the ground state due to, e.g., lattice vibrations. For reference, a typical relaxation time is in the range of 10⁻¹³ to 10⁻¹⁵ s (e.g., SnSe2) [50] depending on temperature and doping. [49,51] If we assume that the relaxation time for conductivity parallel to the edge does not depend much on the width of the ribbon and the termination of the edge, we can make qualitative comparisons between them. To this end, we project the conductivity tensor onto the lattice vectors (here $a$ and $b$) running parallel to the edges and surfaces:

$\sigma_\parallel = \sigma_{bb}$ (edges)
$\sigma_\parallel = \frac{1}{2}(\sigma_{aa} + \sigma_{bb})$ (surfaces)
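As a small illustration, this projection amounts to picking components of the 3x3 conductivity tensor; the tensor values and the mapping of the lattice vectors a and b onto the tensor axes below are placeholder assumptions:

# Minimal sketch: projecting a Boltzmann conductivity tensor (sigma/tau_0)
# onto the direction(s) parallel to an edge or a surface.
import numpy as np

sigma = np.diag([2.1e21, 3.4e21, 0.8e21])  # placeholder sigma/tau_0 tensor

def sigma_parallel(sigma, kind="edge"):
    # Assume axis 0 corresponds to lattice vector a and axis 1 to b,
    # with b running parallel to the edge.
    if kind == "edge":
        return sigma[1, 1]
    if kind == "surface":
        return 0.5 * (sigma[0, 0] + sigma[1, 1])
    raise ValueError(kind)

print(sigma_parallel(sigma, "edge"), sigma_parallel(sigma, "surface"))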
With this, we can show that the distorted armchair edge features very small conductivity, even though it is formally metallic. This can be understood by taking into account that the bands crossing the Fermi level are flat and stem from lone pairs of the Se-Se dimers forming in the distorted armchair edge. For comparison, the symmetric armchair edge, which is unstable for single-layer PtSe2, is semiconducting with a higher conductivity near the Fermi level (see Figure S3). Consequently, the transition from the symmetric to the distorted armchair edge in single-layer PtSe2 can be seen as a transition from a conducting state to a nearly insulating state. The Se-rich zigzag edge shows insulating behavior in the conductivity, whereas the Se-poor zigzag edge has the highest conductivity near the Fermi level. This conductivity decreases with increasing width of the nanoribbon, possibly because the edge states become more delocalized due to the larger screening of the PtSe2 channel. The analysis of edge states in the case of two-dimensional bulk surfaces is more involved, as shown on the right in Figure 3, because PtSe2 undergoes a transition from semiconductor to metal when going from a single layer to bulk. [24,52] Most bands crossing the Fermi level, especially near the Γ-point, stem from bulk PtSe2 states. Hence, the conductivity depends less on the edge termination and generally increases with increasing lateral width. In Figure S8, we show how the band structure changes with increasing lateral width of the stack (see Figure 1 d). Until ca. 10 nm, new bands approach and cross the Fermi level, whereas beyond 10 nm, we consider the band structure converged. Only in the cases of the armchair and Se-poor zigzag edges are there surface bands crossing the Fermi level. This indicates that there is large surface conductivity for these two models, whereas the bulk Se-rich zigzag surface has lower conductivity. In the case of the Se-poor zigzag edge, specifically, there are edge bands crossing at the Z-point, which corresponds to the stacking direction. This indicates that interlayer coupling of the Se-poor zigzag edges facilitates out-of-plane surface transport.
Edge Conductivity in PtSe2 Nanoflakes and Nanoplatelets

After discussing edge bands in ribbons and surfaces, we can explain how these features are represented in single-layered nanoflakes and in nanoplatelets. In Figure 4 a, we show the integrated Local Density of States (iLDOS) for different edge terminations, integrated over the partially occupied states near the Fermi level. These states are partially occupied due to thermal smearing and serve as conductive channels localized on the nanoflake edges, agreeing with the observations made for the Se-poor zigzag and armchair edges in nanoribbons. The Se-rich zigzag edge has only very few partially occupied states, stemming from excess Se clusters. Disregarding these, the Se-rich zigzag nanoflakes are insulating with band gaps of 1.02, 0.98, and 0.96 eV for diameters of 5.62, 7.13, and 8.64 nm, respectively. These band gap values are smaller than in pristine monolayer PtSe2, which might be attributed to the wrinkling effect. [48] Similar to the bulk surfaces, when nanoflakes are stacked together to form vertical nanoplatelets, they undergo a transition from semiconductor to metal (see Figure 4 b). The contribution of edge atoms to the band structure is much larger in nanoplatelets with a smaller diameter than in comparable ribbon or surface models, because the edge atoms make up a larger fraction of the system. For large nanoplatelet models, the number of edge atoms is proportional to the radius $r$, whereas the number of atoms in the PtSe2 channel is proportional to the square of the radius, $r^2$. Hence, for the smallest nanoplatelets with 5.2 to 5.62 nm width, we find that the states around the Fermi level have large contributions from the edge atoms for all three edge terminations. Only for the largest nanoplatelets with 8.01 to 8.64 nm width do the bands from the PtSe2 channel become continuous, as in the corresponding direction Γ → Z in the surface model (see Figure 3). The conductivity, which in this case corresponds to the interlayer conductivity parallel to the surfaces of the nanoplatelet, is smaller than for the corresponding surface models by an order of magnitude (see Figure 4 b). As discussed for the surface models, the Se-poor zigzag edge features the largest conductivity at the Fermi level, because the coupling of the Se-poor zigzag edges appears to facilitate interlayer transport. The Se-rich zigzag edge features the largest conductivity below the Fermi level because its edge states lie deep below the Fermi level, as seen in the surface band structures in Figure 3.
To analyze whether electrical conductivity occurs on the edges or in the PtSe2 channel of nanoplatelets, we consider the integrated Local Density of States, as shown in Figure 4 a. We integrate the Local Density of States (LDOS) separately for holes and electrons over the same energy range as shown in the band structures in Figure 4 b and project it onto the basal plane of a single nanoflake in the nanoplatelet. The result for the largest nanoplatelets is shown in Figure 5 and for the smaller nanoplatelets in Figures S9-10.
For the hole states, we show that the largest iLDOS is localized on the edges for all three edge terminations, which applies to small, medium-sized, and large nanoplatelets alike. The iLDOS in the center of the nanoplatelet is smaller, but non-zero, as the nanoplatelets are metallic. This is especially evident in the line scans shown in Figure 5, which can be measured experimentally, for example, with scanning-tunneling spectroscopy. [29] For the electron states, on the other hand, the edge states are localized on the corners of the nanoplatelet for the armchair and Se-poor zigzag edges, which agrees with the observation made by Miró et al. [45] for 1T-TiS2 nanoflakes. For the Se-rich zigzag edge, the electrons are rather localized in the center of the nanoplatelet, except for an increase where excess Se clusters have formed. The Boltzmann conductivity is proportional to the available DOS and the band velocities, [49] which are non-zero for the disperse bands shown in Figure 4. Hence, we conclude for all three edge terminations in the size regime below 10 nm that holes are conducted to a significant extent over the surfaces of the nanoplatelets, whereas electrons are conducted along the nanoplatelet corners for the armchair and Se-poor zigzag edges, but not for the Se-rich zigzag edge.
Conclusions
We investigate PtSe2 nanostructures, including ribbons, surfaces, nanoflakes, and nanoplatelets, by means of Density-Functional Theory. To model these systems at sizes comparable to experiments, we train a Deep Neural Network interatomic potential with DeePMD-kit. This allowed us to study nanostructures with different edge terminations, namely the armchair edge and the three different zigzag edges with Se-poor, Se-rich, and Pt-rich termination. The interatomic potential accurately reproduces energies, forces, and bond lengths of the systems in question, with a tendency to underestimate interlayer distances. With the interatomic potential, we determine the stability of the edges in nanoflakes, where we find that the Se-rich zigzag edge is the most stable at lower temperatures but competes with the Se-poor zigzag edge at temperatures that are reached during synthesis. We investigate the lateral quantum confinement effect in ribbons, surfaces, and nanoplatelets using DFT calculations on the structures obtained from the interatomic potential. The armchair and Se-poor zigzag edges lead to conducting edge states that dominate the electronic structure independent of the lateral width, whereas the Se-rich zigzag edge leads to insulating edge states. This is reflected in the Boltzmann conductivities, where we observe the largest conductivities for the Se-poor zigzag edge in ribbons and nanoplatelets. In particular, we find that the armchair and Se-poor zigzag edges enhance the interlayer conductivity in the form of conductive channels localized on the surfaces and corners of PtSe2 nanoplatelets. We argue that, especially in the size regime below 10 nm, the electrical conductivity of PtSe2 nanoplatelets is edge-dominated, which is crucial for understanding device contacts, catalytic properties, and molecular sensing.
Data Availability
The training data set has been made available in the NOMAD repository. [44] DeePMD-kit input configuration files and further details can be found in the SI and are available from the author upon reasonable request.
Methods
For all DFT calculations, including training data and electronic structure calculations, we employ the Fritz Haber Institute Ab Initio Material Simulations (FHI-aims) suite with intermediate and tight tier-one basis sets (2020 defaults). [53] We use k-point densities ranging from 12 to 15 points per Å⁻¹ with the PBE functional and added non-local many-body dispersion correction (MBD-nl). [54,55] All electronic structure calculations include spin-orbit coupling (SOC). For phonon calculations and Molecular Dynamics (MD), we employ phonopy and FHI-vibes. [56,57] MDs are run with the Langevin thermostat at a timestep of 5 fs for 500 steps targeting a temperature of 700 K, with a friction coefficient of 0.02, as implemented in the Atomic Simulation Environment (ASE). [58] Electrical conductivities are calculated via BoltzTraP2 using an interpolation parameter of 12 and integrated at 300 K without taking doping effects into account (the chemical potential equals the Fermi level). [49] The DP model was trained using DeePMD-kit with the training parameters shown in the Supporting Information. [35-37]

Supporting Information
Supporting Information is available online or from the author.
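As an illustration of the MD settings in the Methods, a minimal ASE sketch; the toy structure and the Lennard-Jones potential stand in for the actual PtSe2 supercells and calculators used in this work:

# Minimal sketch of the Langevin MD settings (5 fs timestep, 500 steps,
# 700 K target temperature, friction 0.02); structure and potential are toys.
from ase import Atoms, units
from ase.calculators.lj import LennardJones
from ase.md.langevin import Langevin

atoms = Atoms("PtSe2",
              positions=[[0, 0, 0], [2.5, 0, 0], [-2.5, 0, 0]],
              cell=[10, 10, 10], pbc=True)
atoms.calc = LennardJones()  # placeholder; the study uses FHI-aims / the DP model

dyn = Langevin(atoms, timestep=5 * units.fs,
               temperature_K=700, friction=0.02)
dyn.run(500)
print(atoms.get_temperature())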
Figure 1 - a) Different unit cell choices to model PtSe2 and the cutting vectors to obtain the armchair and zigzag edges with different terminations. b) Edge reconstruction of the five edge terminations after structural relaxation and their order of stability. c) Ribbon model with a single periodic axis to investigate one-dimensional edges. d) Stack model with two periodic axes to investigate two-dimensional surfaces with different edge terminations. e) Hexagonal nanoflake model without periodicity to investigate edge states. f) Hexagonal nanoplatelet model with one periodic axis to investigate two-dimensional surface states on all six sides.
Figure 3 - Electronic band structures of ribbons and surfaces of PtSe2 projected onto the edge atoms within a region of 1 nm for different lateral widths and different edge terminations. Gray bars indicate the band edges of the PtSe2 channel in ribbons. The Boltzmann conductivity in the constant relaxation time approximation is shown parallel to the edge (or surface) for different lateral widths at 300 K (in units of 10²¹ Ω⁻¹ m⁻¹ s⁻¹).
Figure 4 - a) Integrated Local Density of States of the partially occupied orbitals at an isosurface value of 0.05 for the differently terminated nanoflakes at medium diameter. b) Electronic band structure projected onto the surfaces of the hexagonally shaped nanowires for different diameters and their conductivity at 300 K in the constant relaxation time approximation (in units of 10²¹ Ω⁻¹ m⁻¹ s⁻¹).
Figure 5 - Integrated Local Density of States (iLDOS) along line scans and projected onto the basal plane of the nanoplatelets with 8.01 to 8.64 nm lateral width for different edge terminations. The local density of states has been integrated over the holes and electrons in the same energy window as shown in the band structures in Figure 4.
Acknowledgements
This work was financially supported by the German Ministry of Education and Research (BMBF) under the project ForMikro-NobleNEMS (16ES1121). The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC). The authors are grateful to the Center for Information Services and High Performance Computing at TU Dresden for providing its facilities for the training of the DP model. The authors also thank the DFG for support within the CRC 1415 project.

{
  "_comment": "FHI-aims",
  "model": {
    "type_map": ["Pt", "Se"],
    "descriptor": {
      "type": "se_e2_a",
      "sel": [500, 500],
      "rcut_smth": 2.0,
      "rcut": 8.0,
      "neuron": [25, 50, 100],
      "resnet_dt": false,
      "axis_neuron": 12,
      "seed": 1
    },
    "fitting_net": {
      "type": "ener",
      "neuron": [240, 240, 240],
      "resnet_dt": true,
      "seed": 1,
      "atom_ener": [-518285.968665213, -66543.4947249878]
    }
  },
  "learning_rate": {
    "type": "exp",
    "decay_steps": 5000,
    "start_lr": 0.0001,
    "stop_lr": 3.51e-08
  },
  "loss": {
    "type": "ener",
    "start_pref_e": 1,
    "limit_pref_e": 1,
    "start_pref_f": 100,
    "limit_pref_f": 1,
    "start_pref_v": 1,
    "limit_pref_v": 1
  }
}

Figure S1 - Training parameters for DeePMD-kit as defined in the input.json.
M. C. Lemme, S. Wagner, K. Lee, X. Fan, G. J. Verbiest, S. Wittmann, S. Lukas, R. J. Dolleman, F. Niklaus, H. S. J. van der Zant, G. S. Duesberg, P. G. Steeneken, Research 2020, 2020, 1.
M. C. Lemme, D. Akinwande, C. Huyghebaert, C. Stampfer, Nat. Commun. 2022, 13, 1392.
C. S. Boland, C. Ó. Coileáin, S. Wagner, J. B. McManus, C. P. Cullen, M. C. Lemme, G. S. Duesberg, N. McEvoy, 2D Mater. 2019, 6, 045029.
S. Wagner, C. Yim, N. McEvoy, S. Kataria, V. Yokaribas, A. Kuc, S. Pindl, C.-P. Fritzen, T. Heine, G. S. Duesberg, M. C. Lemme, Nano Lett. 2018, 18, 3738.
S. Lukas, O. Hartwig, M. Prechtl, G. Capraro, J. Bolten, A. Meledin, J. Mayer, D. Neumaier, S. Kataria, G. S. Duesberg, M. C. Lemme, Adv. Funct. Mater. 2021, 31, 2102929.
F. Bonell, A. Marty, C. Vergnaud, V. Consonni, H. Okuno, A. Ouerghi, H. Boukari, M. Jamet, 2D Mater. 2022, 9, 015015.
M. Prechtl, S. Parhizkar, O. Hartwig, K. Lee, J. Biba, T. Stimpel-Lindner, F. Gity, A. Schels, J. Bolten, S. Suckow, A. L. Giesecke, M. C. Lemme, G. S. Duesberg, Adv. Funct. Mater. 2021, 31, 2103936.
S. S. Han, J. H. Kim, C. Noh, J. H. Kim, E. Ji, J. Kwon, S. M. Yu, T.-J. Ko, E. Okogbue, K. H. Oh, H.-S. Chung, Y. Jung, G.-H. Lee, Y. Jung, ACS Appl. Mater. Interfaces 2019, 11, 13598.
H. Zhang, H. Li, F. Wang, X. Song, Z. Xu, D. Wei, J. Zhang, Z. Dai, Y. Ren, Y. Ye, X. Ren, J. Yao, ACS Appl. Electron. Mater. 2022, 4, 5177.
E. Okogbue, S. S. Han, T.-J. Ko, H.-S. Chung, J. Ma, M. S. Shawkat, J. H. Kim, J. H. Kim, E. Ji, K. H. Oh, L. Zhai, G.-H. Lee, Y. Jung, Nano Lett. 2019, 19, 7598.
T.-Y. Su, H. Medina, Y.-Z. Chen, S.-W. Wang, S.-S. Lee, Y.-C. Shih, C.-W. Chen, H.-C. Kuo, F.-C. Chuang, Y.-L. Chueh, Small 2018, 14, 1800032.
Z. Wang, Q. Li, F. Besenbacher, M. Dong, Adv. Mater. 2016, 28, 10224.
K. Lee, B. M. Szydłowska, O. Hartwig, K. Synnatschke, B. Tywoniuk, T. Hartman, T. Tomašević-Ilić, C. P. Gabbett, J. N. Coleman, Z. Sofer, M. Spasenović, C. Backes, G. S. Duesberg, J. Mater. Chem. C 2023, 11, 593.
C. Yim, K. Lee, N. McEvoy, M. O'Brien, S. Riazimehr, N. C. Berner, C. P. Cullen, J. Kotakoski, J. C. Meyer, M. C. Lemme, G. S. Duesberg, ACS Nano 2016, 10, 9550.
S. Wagner, 2D Materials for Piezoresistive Strain Gauges and Membrane Based Nanoelectromechanical Systems, RWTH Aachen University, 2018.
C. Yim, V. Passi, M. C. Lemme, G. S. Duesberg, C. Ó. Coileáin, E. Pallecchi, D. Fadil, N. McEvoy, Npj 2D Mater. Appl. 2018, 2, 5.
S. Parhizkar, M. Prechtl, A. L. Giesecke, S. Suckow, S. Wahl, S. Lukas, O. Hartwig, N. Negm, A. Quellmalz, K. Gylfason, D. Schall, M. Wuttig, G. S. Duesberg, M. C. Lemme, ACS Photonics 2022, 9, 859.
C. Yim, N. McEvoy, S. Riazimehr, D. S. Schneider, F. Gity, S. Monaghan, P. K. Hurley, M. C. Lemme, G. S. Duesberg, Nano Lett. 2018, 18, 1794.
R. Zhuo, L. Zeng, H. Yuan, D. Wu, Y. Wang, Z. Shi, T. Xu, Y. Tian, X. Li, Y. H. Tsang, Nano Res. 2019, 12, 183.
D. Braun, S. Lukas, L. Volkel, O. Hartwig, M. Prechtl, M. Belete, S. Kataria, T. Wahlbrink, A. Daus, G. S. Duesberg, M. C. Lemme, in 2022 Device Res. Conf. (DRC), IEEE, Columbus, OH, USA, 2022, pp. 1-2.
M. Yan, E. Wang, X. Zhou, G. Zhang, H. Zhang, K. Zhang, W. Yao, N. Lu, S. Yang, S. Wu, T. Yoshikawa, K. Miyamoto, T. Okuda, Y. Wu, P. Yu, W. Duan, S. Zhou, 2D Mater. 2017, 4, 045015.
M. O'Brien, N. McEvoy, C. Motta, J.-Y. Zheng, N. C. Berner, J. Kotakoski, K. Elibol, T. J. Pennycook, J. C. Meyer, C. Yim, M. Abid, T. Hallam, J. F. Donegan, S. Sanvito, G. S. Duesberg, 2D Mater. 2016, 3, 021004.
M. Prechtl, M. Busch, O. Hartwig, K. Lee, T. Stimpel-Lindner, C. Ó. Coileáin, K. Zhussupbekov, A. Zhussupbekova, S. Berman, I. V. Shvets, G. S. Duesberg, 2023.
A. Ciarrocchi, A. Avsar, D. Ovchinnikov, A. Kis, Nat. Commun. 2018, 9, 919.
P. Miró, M. Ghorbani-Asl, T. Heine, Angew. Chem. Int. Ed. 2014, 53, 3015.
L. Ansari, S. Monaghan, N. McEvoy, C. Ó. Coileáin, C. P. Cullen, J. Lin, R. Siris, T. Stimpel-Lindner, K. F. Burke, G. Mirabelli, R. Duffy, E. Caruso, R. E. Nagle, G. S. Duesberg, P. K. Hurley, F. Gity, Npj 2D Mater. Appl. 2019, 3, 33.
A. Avsar, C.-Y. Cheon, M. Pizzochero, M. Tripathi, A. Ciarrocchi, O. V. Yazyev, A. Kis, Nat. Commun. 2020, 11, 4806.
A. Avsar, A. Ciarrocchi, M. Pizzochero, D. Unuchek, O. V. Yazyev, A. Kis, Nat. Nanotechnol. 2019, 14, 674.
J. Li, T. Joseph, M. Ghorbani-Asl, S. Kolekar, A. V. Krasheninnikov, M. Batzill, Adv. Funct. Mater. 2022, 2110428.
R. Kempt, S. Lukas, O. Hartwig, M. Prechtl, A. Kuc, T. Brumme, S. Li, D. Neumaier, M. C. Lemme, G. S. Duesberg, T. Heine, Adv. Sci. 2022, 9, 2201272.
R. A. B. Villaos, C. P. Crisostomo, Z.-Q. Huang, S.-M. Huang, A. A. B. Padama, M. A. Albao, H. Lin, F.-C. Chuang, Npj 2D Mater. Appl. 2019, 3, 2.
K. Xiong, M. Hilse, L. Li, A. Goritz, M. Lisker, M. Wietstruck, M. Kaynak, R. Engel-Herbert, A. Madjar, J. C. M. Hwang, IEEE Trans. Electron Devices 2020, 67, 796.
H. Xu, H. Zhang, Y. Liu, S. Zhang, Y. Sun, Z. Guo, Y. Sheng, X. Wang, C. Luo, X. Wu, J. Wang, W. Hu, Z. Xu, Q. Sun, P. Zhou, J. Shi, Z. Sun, D. W. Zhang, W. Bao, Adv. Funct. Mater. 2019, 29, 1805614.
A. El Sachat, P. Xiao, D. Donadio, F. Bonell, M. Sledzinska, A. Marty, C. Vergnaud, H. Boukari, M. Jamet, G. Arregui, Z. Chen, F. Alzina, C. M. Sotomayor Torres, E. Chavez-Angel, Npj 2D Mater. Appl. 2022, 6, 32.
Y. Zhang, H. Wang, W. Chen, J. Zeng, L. Zhang, H. Wang, W. E, Comput. Phys. Commun. 2020, 253, 107206.
J. Han, L. Zhang, R. Car, W. E, Commun. Comput. Phys. 2018, 23, DOI 10.4208/cicp.OA-2017-0213.
T. Wen, L. Zhang, H. Wang, W. E, D. J. Srolovitz, Mater. Futur. 2022, 1, 022601.
X. Wang, Y. Wang, L. Zhang, F. Dai, H. Wang, Nucl. Fusion 2022, 62, 126013.
W. Zhao, H. Qiu, W. Guo, J. Phys. Chem. C 2022, 126, 10546.
R. Ganser, S. Bongarz, A. von Mach, L. Azevedo Antunes, A. Kersch, Phys. Rev. Appl. 2022, 18, 054066.
P. Zhang, Z. Zhang, Y. Liu, Z. Wang, Z. Lu, R. Xiong, Phys. Rev. Appl. 2022, 18, 054022.
X. Liu, R. Peng, Z. Sun, J. Liu, Nano Lett. 2022, 22, 7791.
W. Jia, H. Wang, M. Chen, D. Lu, L. Lin, R. Car, E. Weinan, L. Zhang, in SC20 Int. Conf. High Perform. Comput. Netw. Storage Anal., IEEE, Atlanta, GA, USA, 2020, pp. 1-14.
R. Kempt, 2023, DOI 10.17172/NOMAD/2023.05.04-1.
P. Miró, J. H. Han, J. Cheon, T. Heine, Angew. Chem. Int. Ed. 2014, n/a.
C.-K. Sin, J. Zhang, K. Tse, J. Zhu, J. Semicond. 2020, 41, 061101.
Z. Li, J. Zhang, Y. Zeng, L. Meng, M. Zhou, W. Wu, J. Phys. Condens. Matter 2017, 29, 23LT01.
P. Miró, M. Ghorbani-Asl, T. Heine, Adv. Mater. 2013, 25, 5473.
G. K. H. Madsen, J. Carrete, M. J. Verstraete, Comput. Phys. Commun. 2018, 231, 140.
Y. Ding, B. Xiao, G. Tang, J. Hong, J. Phys. Chem. C 2017, 121, 225.
P. Yan, G. Gao, G. Ding, D. Qin, RSC Adv. 2019, 9, 12394.
P. Miró, M. Audiffred, T. Heine, Chem. Soc. Rev. 2014, 43, 6537.
V. Blum, R. Gehrke, F. Hanke, P. Havu, V. Havu, X. Ren, K. Reuter, M. Scheffler, Comput. Phys. Commun. 2009, 180, 2175.
J. Hermann, A. Tkatchenko, arXiv:1910.03073 [cond-mat, physics], 2019.
J. P. Perdew, K. Burke, M. Ernzerhof, Phys. Rev. Lett. 1996, 77, 3865.
F. Knoop, T. Purcell, M. Scheffler, C. Carbogno, J. Open Source Softw. 2020, 5, 2671.
. A Togo, I Tanaka, Scr. Mater. 1081A. Togo, I. Tanaka, Scr. Mater. 2015, 108, 1.
. A Larsen, J Mortensen, J Blomqvist, I E Castelli, R Christensen, M Dułak, J Friis, M N Groves, B Hammer, C Hargus, E D Hermes, P C Jennings, P Jensen, J Kermode, J R Kitchin, E Leonhard Kolsbjerg, J Kubal, K Kaasbjerg, S Lysgaard, J Bergmann Maronsson, T Maxson, T Olsen, L Pastewka, A Peterson, C Rostgaard, J Schiøtz, O Schütt, M Strange, K S Thygesen, T Vegge, L Vilhelmsen, M Walter, Z Zeng, K W Jacobsen, J. Phys. Condens. Matter. 29273002A. Hjorth Larsen, J. Jørgen Mortensen, J. Blomqvist, I. E. Castelli, R. Christensen, M. Dułak, J. Friis, M. N. Groves, B. Hammer, C. Hargus, E. D. Hermes, P. C. Jennings, P. Bjerre Jensen, J. Kermode, J. R. Kitchin, E. Leonhard Kolsbjerg, J. Kubal, K. Kaasbjerg, S. Lysgaard, J. Bergmann Maronsson, T. Maxson, T. Olsen, L. Pastewka, A. Peterson, C. Rostgaard, J. Schiøtz, O. Schütt, M. Strange, K. S. Thygesen, T. Vegge, L. Vilhelmsen, M. Walter, Z. Zeng, K. W. Jacobsen, J. Phys. Condens. Matter 2017, 29, 273002.
| [] |
[
"RITA: Group Attention is All You Need for Timeseries Analytics",
"RITA: Group Attention is All You Need for Timeseries Analytics"
] | [
"Jiaming Liang [email protected] ",
"Lei Cao [email protected] ",
"Samuel Madden [email protected] ",
"Zachary Ives [email protected] ",
"Guoliang Li [email protected] ",
"\nUniversity of Pennsylvania Philadelphia\nPAUSA\n",
"\nMassachusetts Institute of Technology Cambridge\nMAUSA\n",
"\nMassachusetts Institute of Technology Cambridge\nMAUSA\n",
"\nUniversity of Pennsylvania Philadelphia\nPAUSA\n",
"\nTsinghua University\nBeijingChina\n"
] | [
"University of Pennsylvania Philadelphia\nPAUSA",
"Massachusetts Institute of Technology Cambridge\nMAUSA",
"Massachusetts Institute of Technology Cambridge\nMAUSA",
"University of Pennsylvania Philadelphia\nPAUSA",
"Tsinghua University\nBeijingChina"
] | [] | Timeseries analytics is of great importance in many real-world applications. Recently, the Transformer model, popular in natural language processing, has been leveraged to learn high quality feature embeddings from timeseries, core to the performance of various timeseries analytics tasks. However, the quadratic time and space complexities limit Transformers' scalability, especially for long timeseries. To address these issues, we develop a timeseries analytics tool, RITA, which uses a novel attention mechanism, named group attention, to address this scalability issue. Group attention dynamically clusters the objects based on their similarity into a small number of groups and approximately computes the attention at the coarse group granularity. It thus significantly reduces the time and space complexity, yet provides a theoretical guarantee on the quality of the computed attention. The dynamic scheduler of RITA continuously adapts the number of groups and the batch size in the training process, ensuring group attention always uses the fewest groups needed to meet the approximation quality requirement. Extensive experiments on various timeseries datasets and analytics tasks demonstrate that RITA outperforms the state-of-the-art in accuracy and is significantly faster -with speedups of up to 63X. | null | [
"https://export.arxiv.org/pdf/2306.01926v1.pdf"
] | 259,076,443 | 2306.01926 | c4318ccbc429a98d01f719e65014ad0fa3221c66 |
RITA: Group Attention is All You Need for Timeseries Analytics
Jiaming Liang [email protected]
Lei Cao [email protected]
Samuel Madden [email protected]
Zachary Ives [email protected]
Guoliang Li [email protected]
University of Pennsylvania Philadelphia
PAUSA
Massachusetts Institute of Technology Cambridge
MAUSA
Massachusetts Institute of Technology Cambridge
MAUSA
University of Pennsylvania Philadelphia
PAUSA
Tsinghua University
BeijingChina
RITA: Group Attention is All You Need for Timeseries Analytics
Timeseries analytics is of great importance in many real-world applications. Recently, the Transformer model, popular in natural language processing, has been leveraged to learn high quality feature embeddings from timeseries, core to the performance of various timeseries analytics tasks. However, the quadratic time and space complexities limit Transformers' scalability, especially for long timeseries. To address these issues, we develop a timeseries analytics tool, RITA, which uses a novel attention mechanism, named group attention, to address this scalability issue. Group attention dynamically clusters the objects based on their similarity into a small number of groups and approximately computes the attention at the coarse group granularity. It thus significantly reduces the time and space complexity, yet provides a theoretical guarantee on the quality of the computed attention. The dynamic scheduler of RITA continuously adapts the number of groups and the batch size in the training process, ensuring group attention always uses the fewest groups needed to meet the approximation quality requirement. Extensive experiments on various timeseries datasets and analytics tasks demonstrate that RITA outperforms the state-of-the-art in accuracy and is significantly faster -with speedups of up to 63X.
INTRODUCTION
Motivation. Many data-driven applications involve processing massive timeseries data, including IoT [11], medical AI [14], the stock market [27], and so on. As such, there is a great need for timeseries analytics, such as forecasting [8], classification [20], clustering [31], similarity search [39], and anomaly detection [50], with applications ranging from automatically diagnosing diseases [5] and recognizing human activities [29] to stopping financial fraud [59].
Effective feature extraction [40] lies at the core of almost all these timeseries analytics tasks. Recently researchers [61] have started leveraging the self-supervised pre-training methodology of Transformers [4,16,52], which have proven remarkably successful in natural language processing (NLP), to automatically learn high quality feature embeddings from timeseries. In NLP, self-supervised pre-training exploits the sequential patterns (correlations) among the words in sentences to produce contextualized feature embeddings. Timeseries bear similarity to natural language, because in timeseries data the sequential order among the values (stock price, volume, etc.) over time matters. That is, each value is highly correlated with other values observed before or after it. Therefore, pre-training a Transformer model which takes the correlations among different observations into account is a natural idea to learn feature embeddings from timeseries. Indeed, the experiments in [61] confirm that Transformer-based methods outperform traditional timeseries analytics techniques.
However, existing work [61] that directly applies Transformers to learn features from timeseries data have been shown not to be scalable to long timeseries [30]. The idea of self-attention [52] is central to pre-training methods in NLP: It computes pairwise correlations among different semantic units in a sequence (in NLP, a sentence); as such, it has quadratic time and space complexity in the length of the input sequence. Such an approach places limits on the model's scalability, especially when handling large sequences, which are common in real-world timeseries applications such as IoT, medical AI, and finance [6,34,62]. Predictions about timeseries may need to look at months or years of historical data to make accurate predictions, spanning hundreds of thousands of samples. As an example, in collaboration with a research hospital we have been developing a seizure classifier that automatically detects seizures based on EEG signals (timeseries) collected during the clinical observation of patients. As seizures last only a few seconds, we chunk long EEG data into many 2 second segments and detect seizures at a segment level. However, the classification of a particular segment depends on up to 12 hours of prior signal to determine if one 2 second segment indicates seizure or not, because seizure diagnosis needs to consider long-term trends in the EEG data [6]. The number of segments in 12 hours is more than 21k. This is far larger than the number of semantic units the typical NLP tasks expect. For example, BERT [16] limits the number of units to 512 and even massive models like GPT-3 [4] limit the number of units to 2048.
Although in NLP some lower-complexity methods have been proposed to approximately compute self-attention [10,26,54], their performance degrades dramatically when used on timeseries, due to the gap between natural language and timeseries, as we will show in our experiments.

Proposed Approach. To tackle the aforementioned problem, we develop RITA, a Transformer-based timeseries analytics tool, which uses a novel attention mechanism, called group attention, to scale to long timeseries.
Leveraging the periodicity of timeseries, RITA chunks the input timeseries into segments and dynamically clusters the segments into a small number (denoted as N) of groups. Segments in the same group possess similar feature embeddings during the current training iteration, thus enabling them to approximately share the computation of attention. As the timeseries increases in length, more sharing opportunities become available. RITA then computes the self-attention at a group level and produces a compressed group attention matrix. In this way, group attention eliminates both computation and memory bottlenecks in Transformer-style models and thus is more scalable to long timeseries.
However, making this idea effective and efficient in Transformer architectures is challenging for several reasons:
• Efficiently Producing High Quality Feature Embeddings. Although RITA computes the attention matrix at a group level, to preserve the quality of the feature embeddings, it still has to produce different embeddings for different segments. This is because even if some segments share the attention score temporarily, it does not mean they should have the same feature embedding. However, using the group attention matrix, the existing self-attention mechanism will only produce a single feature vector for each group. A naive solution would be to restore the original attention matrix from the group attention matrix. However, in this case we again get an attention matrix with quadratic space complexity. Because GPUs have limited memory, GPU memory will remain a bottleneck in group attention.
• The Number of Groups N. In RITA, the number of groups N is a crucial factor that balances the speedup and the quality of the attention approximation. A small N will lead to a large speedup, but the approximation errors can also be significant. On the other hand, although a large N tends to produce high-quality approximations, it inevitably slows down the training process. Therefore, an appropriate N is essential to the performance of group attention. However, N depends on the distributional properties of the dataset. Furthermore, like the classical Transformer models, RITA stacks multiple attention layers to produce better embeddings. Ideally, different layers should also use different values of N. In addition, during the model training phase, group attention should use different values of N at different iterations to adapt to the varying feature embeddings. This makes manually setting appropriate N values almost impossible.
• Batch Size. Moreover, as we want to dynamically adjust N during training, a fixed batch size is sub-optimal: as N decreases, the memory usage of a single sample decreases. This allows a larger batch size, which is beneficial because: (1) it makes full use of GPU memory; (2) high parallelism across the samples in a big batch brings better performance. Our experimental study shows that doubling the batch size reduces the training time by 30%, while still preserving the quality of the model. Thus, RITA should dynamically adjust the batch size as N changes.
To address the above problems, we first propose an embedding aggregation strategy and a customized group softmax function to replace the classical softmax function [52]. Together they ensure RITA is able to directly use the compressed attention matrix to produce different feature embeddings for different segments. We theoretically show the embeddings RITA produces in this way are identical to those produced by first re-storing the original large attention matrix. Thus RITA is able to produce high quality embeddings without introducing extra overhead. Further, we design a GPU friendly algorithm to group the segments in parallel, effectively minimizing the grouping cost.
[Fig. 1: The architecture of RITA, consisting of the Time-aware Convolution Layer and the RITA Encoder.]

Second, we design an adaptive scheduler which dynamically decides an appropriate N for each group attention layer during the training process. It starts with a large N and iteratively merges groups that are similar to each other. Guided by an error bound on the approximated self-attention that users can tolerate, it automatically determines if two groups are mergeable, performing merging efficiently in a GPU-friendly way.
Moreover, we propose a learning-based method to model the correlation between the number of groups N and the batch size B. This model is used to predict B for a given N when training RITA. Specifically, we first sample some N values in a reasonable range. For each sampled N, we find a batch size that consumes up to a certain percentage of GPU memory in a cost-efficient way. Using a small set of mathematical functions as a prior, RITA learns a model with only a few <N, B> pairs as ground truth labels.
Our experiments on public timeseries benchmarks and the MGH EEG data [6] confirm that RITA outperforms state-of-the-art methods in accuracy on various timeseries analytics tasks, while our group attention mechanism achieves a 63X speedup with much less memory required, compared to existing self-attention mechanisms [10,52,54].
Contributions.
The key contributions of this work include:
• Our group attention mechanism leverages the periodicity of timeseries, reducing the time and space complexity of the selfattention mechanism with accuracy guarantees, allowing RITA to scale to long timeseries data.
• Guided by an approximation error bound, our adaptive scheduler dynamically adapts the number of groups and the batch size to the distribution properties of the evolving feature embeddings, making group attention efficient and easily tunable.
• We conduct experiments on various datasets and different analytics tasks, demonstrating that RITA is 4 to 63 times faster than the state-of-the-art while achieving better accuracy when handling long timeseries (length ≥ 2000).
BACKGROUND
We provide some background on the canonical self-attention module in the Transformer [52]. A self-attention module takes hidden embedding vectors X ∈ R^{n×h} as input, then projects them to queries (Q), keys (K) and values (V) and performs scaled dot-product attention, which, given the input hidden state X, is computed by:
$$Q = XW_Q, \quad K = XW_K, \quad V = XW_V, \qquad O = AV, \quad A = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right) \tag{1}$$
where W_Q ∈ R^{h×d_q}, W_K ∈ R^{h×d_k}, W_V ∈ R^{h×d_v} are projection matrices for generating Q, K, V. Q ∈ R^{n×d_q} is also regarded as the packing of query vectors {q_1, ..., q_n} with dimension d_q into a matrix. K ∈ R^{n×d_k} and V ∈ R^{n×d_v} are regarded as the packing of key vectors {k_1, ..., k_n} and value vectors {v_1, ..., v_n} in the same way.
Given a matrix M ∈ R^{n×n}, the softmax function normalizes M to ensure the sum of each row equals 1, as shown below:

$$\mathrm{softmax}(M_{i,j}) = \frac{\exp(M_{i,j})}{\sum_{k=0}^{n-1} \exp(M_{i,k})} \tag{2}$$
Note the attention matrix A is an n × n matrix, where n represents the number of elements in the input sequence (e.g., words in NLP).
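For concreteness, the following is a minimal NumPy sketch of Eqs. (1)-(2); it is our illustration of canonical self-attention, not code from RITA, and the matrix shapes follow the definitions above.

```python
import numpy as np

def softmax(M):
    # Eq. (2): row-wise normalization (max-shifted for numerical stability)
    e = np.exp(M - M.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Eq. (1): project hidden states, then scaled dot-product attention
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # n x n attention matrix
    return A @ V                                 # O = AV, one vector per unit

rng = np.random.default_rng(0)
n, h, d = 6, 8, 4
O = self_attention(rng.normal(size=(n, h)),
                   *(rng.normal(size=(h, d)) for _ in range(3)))
assert O.shape == (n, d)
```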
RITA OVERVIEW
Given a collection of unlabeled timeseries, RITA first pre-trains a Transformer-style model to produce high quality feature embeddings for timeseries data. This pre-trained model is then used to support various downstream tasks, similar to BERT [16]. Next, we overview the model architecture of RITA. We show how RITA supports various downstream tasks in Appendix A.7.
As shown in Fig. 1, RITA consists of two components: (1) the Time-aware Convolution Layer and (2) the RITA Encoder. The Time-aware Convolution Layer fills the gap between timeseries and natural language. Despite their high-level similarity, there is a big gap between timeseries and natural language. First, in natural language each word, as a discrete semantic unit, has an independent meaning, while each element in a timeseries is a continuous, numerical value and does not necessarily constitute an independent event. Furthermore, the input sequences are single-channeled in NLP, but often multi-channeled in timeseries (i.e., sensor data often consists of several related channels).
RITA leverages the classical convolution [28] strategy to solve this problem. Convolution is widely used to capture the local structures of an image. We use convolution to chunk one input timeseries into a sequence of windows and learn the local structure of each window, similar to the discrete semantic units in natural language. It also discovers the correlations across different channels, thus naturally solving the multi-channel problem.
More specifically, treating a multi-variate timeseries of length n and with m variables as an n × m matrix, RITA uses d convolution kernels to chunk it into n windows and produce one d-dimensional embedding per window using the convolution operation [28]. Each convolution kernel corresponds to a w × m matrix, where w defines the number of timestamps that each convolution kernel covers, identical to the window size in a sliding window.

RITA Encoder functions as the Transformer Encoder described in the original Transformer work [52]. It takes the embeddings of semantic units x_1, x_2, ..., x_n (x_i ∈ R^d) as input (e.g., embeddings of windows for a timeseries), then models the correlations between the semantic units and outputs y_1, ..., y_n (y_i ∈ R^d) as the context-aware embedding of each unit.
What makes the RITA Encoder different from the Transformer Encoder is that at the core of the Transformer Encoder lies the self-attention mechanism, which incurs O(n^2) time complexity and memory usage. This quadratic cost becomes prohibitive for long timeseries and limits the scalability of Transformer-based models. To make the attention computation efficient yet high-quality, we replace the canonical self-attention with our proposed group attention.

Self-supervised Pretraining. Inspired by the "cloze text" pretraining task in NLP, we designed a mask-and-predict task as the pretraining task for our model. The timeseries is randomly masked and the model should recover the masked values based on the corresponding contextual information.
To be specific, we generate masks on timestamps with a pre-defined mask rate. The timeseries is scaled to be non-negative and the values across all the channels on the masked timestamps are set to -1, an impossible value on normal timestamps. Then the masked timeseries is fed into RITA and the output representation is translated to the recovered timeseries by a Transpose Convolution layer.
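To make the two components concrete, below is a hypothetical PyTorch sketch of the time-aware convolution front-end and the cloze-style masking; the kernel width, the padding choice, and all names are our illustrative assumptions rather than RITA's exact code.

```python
import torch
import torch.nn as nn

class TimeAwareConv(nn.Module):
    """d convolution kernels of width w slide over an (n, m) timeseries and
    emit one d-dimensional embedding per window, fusing all m channels."""
    def __init__(self, m_channels, d_model, w=5):
        super().__init__()
        self.conv = nn.Conv1d(m_channels, d_model, kernel_size=w, padding=w // 2)

    def forward(self, x):                      # x: (batch, n, m)
        x = x.transpose(1, 2)                  # Conv1d expects (batch, m, n)
        return self.conv(x).transpose(1, 2)    # (batch, n, d_model)

def mask_timeseries(x, mask_rate=0.2):
    """Cloze pretraining: masked timestamps are set to the impossible value
    -1 across all channels; the model must recover them."""
    mask = torch.rand(x.shape[:2]) < mask_rate  # one flag per timestamp
    x = x.clone()
    x[mask] = -1.0
    return x, mask
```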
GROUP ATTENTION MECHANISM
Group attention, a novel and efficient approximate attention mechanism, addresses the performance bottleneck of self-attention in the vanilla Transformer. In this section, we first introduce the framework of group attention and then theoretically establish the bound of its approximation error.
The Idea of Group Attention
As periodicity is a natural property of timeseries [56], similar windows frequently occur. Similar windows result in similar queries/keys for attention computation, bringing opportunities for saving computation.
As discussed in Sec. 2, a_{i,j}, the attention score of window w_j onto window w_i, is determined by the inner product between the query vector of window w_i and the key vector of window w_j, that is, q_i · k_j. Given another window w_m, if window w_m has a similar key vector to window w_j, that is, k_m ≈ k_j, then q_i · k_m ≈ q_i · k_j. In other words, a_{i,m} ≈ a_{i,j} when k_m ≈ k_j. This observation inspires our group attention mechanism. That is, we group the windows by their similarity in keys. Assuming all windows in the same group have the same attention score onto another window w_i, we then only compute the attention once by using one single key to represent this group, for example the centroid of the group of keys. This thus saves significant computation cost.
Better yet, after grouping the n windows into N groups, group attention compresses the attention matrix from an n × n matrix to an n × N matrix. Because N (the number of groups) tends to be much smaller than n (the number of windows) due to the periodicity of timeseries, group attention consumes much less memory than the original self-attention mechanism, successfully eliminating the memory bottleneck. Note that it also doesn't hurt quality all that much, as confirmed in our experiments (Sec. 6.2).
Computing the Output Feature Embedding
We now discuss how to efficiently compute the output feature embeddings using the small compressed group attention matrix.
Problem: Producing Embeddings w/ Group Attention Matrix
As described in the Background, once we have acquired the attention matrix A, canonical self-attention computes the output embedding as O = AV. Because A is an n × n matrix and V is an n × d matrix, the matrix product operation still produces an n × d matrix O. That is, it produces a d-dimensional feature vector for each window. However, our group attention will produce a compressed group attention matrix Ā with only N attention scores per window, where N corresponds to the number of groups. In this case, naively applying the matrix product will only produce a single feature vector for each group. However, our goal is to produce different embeddings for different windows, because even if some windows share the attention score temporarily, it does not mean they should have the same feature embedding.

A Naive Solution. A naive solution would be to restore the full attention matrix A from the group attention matrix Ā. For example, given one group composed of w_i and w_j, we map its group attention vector in Ā to the entries that correspond to w_i and w_j in A. However, in this case we again get an n × n attention matrix; GPU memory thus remains a bottleneck in group attention.
Solution: Embedding Aggregation and Group SoftMax
Using an embedding aggregation operation and a group softmax function, RITA produces embeddings without restoring the full attention matrix. Fig. 2 shows the workflow of group attention.

Embedding Aggregation. The idea is inspired by an observation on the matrix product operation O = AV conducted on the fully restored attention matrix A.
Given an element O_{i,j} of O corresponding to the j-th dimension of w_i's feature vector, O_{i,j} = a_i · v_j, where vector a_i ∈ R^n denotes the i-th row of the attention matrix A and vector v_j ∈ R^n denotes the j-th dimension of all the feature vectors, i.e., the j-th column of V. Given a_i = <a_i^1, a_i^2, ..., a_i^n> and v_j = <v_j^1, v_j^2, ..., v_j^n>, O_{i,j} = Σ_{k=1}^{n} a_i^k v_j^k. As an example, assume w_1 and w_2 belong to the same group G_1. Then a_i^1 = a_i^2 = ā_i^1, where ā_i^1 ∈ Ā corresponds to the attention of group G_1 onto w_i. Therefore, a_i^1 v_j^1 + a_i^2 v_j^2 = ā_i^1 (v_j^1 + v_j^2). As an immediate generalization of the above analysis, if we aggregate up the windows that belong to the same group and convert the n-dimensional vector v_j into an N-dimensional group vector beforehand, we can directly use the group attention vector ā_i and the group feature vector to compute O_{i,j}.

Using embedding aggregation, RITA is able to produce a feature embedding that is identical to the embedding produced by using the full attention matrix A and the embedding matrix V.
Using embedding aggregation, RITA is able to produce the feature embedding that is identical to the embedding produced by using the full attention matrix and the embedding matrix . Group Softmax Function. In canonical self-attention the attention matrix is computed as = SoftMax (
QK T √ d k ). To compute ,
we have to first compute (denoted as ) which is an × matrix. Then normalizing the matrix with softmax produces the attention matrix .
Group attention follows the same procedure. But after grouping the keys into N groups represented by R, QR^T produces an n × N matrix S̄. Due to the nonlinearity of the softmax function, applying softmax directly on S̄ will result in a group attention matrix from which we are not able to recover a full attention matrix that is identical to first restoring S̄ to S and then applying softmax on S. The matrix produced by the latter is desirable, as we want to approximate the original attention matrix as accurately as possible. However, restoring the small n × N matrix S̄ is not memory efficient, as it will end up with a full n × n matrix S.
To solve the above problems, we introduce a new group softmax function to replace the original softmax function (Eq. 2):

$$\mathrm{softmax}_g(\bar{S}_{i,j}) = \frac{\exp(\bar{S}_{i,j})}{\sum_{k=0}^{N-1} n_k \exp(\bar{S}_{i,k})} \tag{3}$$
In Eq. 3, n_j represents the number of windows that group G_j contains. Compared to the original softmax, our group softmax considers each group as n_j elements and counts it n_j times when summing up the exponentials of each group's S̄_{i,j}. In this way, the group softmax function operating on the small n × N matrix will produce exactly the same result as the softmax function operating on the full matrix. Theoretical Guarantee. In Appendix A.4, we prove that the group softmax function and the embedding aggregation operation produce the same output feature embedding as the naive method that has to first restore the big full attention matrix. Thus RITA is able to produce high quality embeddings without introducing extra overhead.
We show an efficient implementation of the embedding aggregation operation and group softmax function in Appendix A.2, Alg. 1. Time Complexity. The time complexity of Alg. 1 is O(nN) and the space complexity is O(nN), while the time and space complexity of the original self-attention mechanism are O(n^2) and O(n^2).
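For illustration, the following NumPy sketch (our own construction, not Alg. 1 verbatim) builds keys that exactly equal their group representatives and checks that the group softmax plus embedding aggregation reproduces full self-attention, as Lemma 3 in Appendix A.4 asserts.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, d = 6, 2, 4
g = np.array([0, 0, 1, 1, 1, 0])   # group id of each window's key
Q = rng.normal(size=(n, d))
R = rng.normal(size=(N, d))        # one representative key per group
K = R[g]                           # every key equals its representative
V = rng.normal(size=(n, d))

# Full self-attention (Eqs. 1-2)
S = Q @ K.T / np.sqrt(d)
A = np.exp(S) / np.exp(S).sum(axis=-1, keepdims=True)
O_full = A @ V

# Group attention: group softmax (Eq. 3) + embedding aggregation
S_bar = Q @ R.T / np.sqrt(d)                       # n x N group scores
n_j = np.bincount(g, minlength=N)                  # group sizes n_j
A_bar = np.exp(S_bar) / (np.exp(S_bar) @ n_j)[:, None]
V_bar = np.zeros((N, d))
np.add.at(V_bar, g, V)                             # aggregate values per group
O_group = A_bar @ V_bar

assert np.allclose(O_full, O_group)                # identical embeddings
```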
Error Bound
Group attention produces a group attention matrix which approximates the attention matrix produced by the classical self-attention with a bounded error, as shown in Lemma 1.

Lemma 1. Let R be the radius of the ball where all key vectors live; let k̄_j be the representative of the group that contains key k_j. Let Ā denote the full attention matrix restored from the group attention matrix. Suppose the distance between k_j and k̄_j satisfies ||k_j − k̄_j|| ≤ d. Then ∀ε > 1, if d ≤ ln(ε)/(2R),

$$\frac{1}{\varepsilon} \le \frac{\bar{A}_{i,j}}{A_{i,j}} \le \varepsilon.$$
Lemma 1 shows that the error bound of group attention is determined by the distance d. As discussed in Sec. 5.1, it inspires us to design a strategy to dynamically determine the number of groups N, the most critical parameter of group attention. Please refer to Appendix A.5 for the proof.
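The bound can also be sanity-checked numerically. In the sketch below we assume, for the check, that R bounds the norms of both queries and keys, and we perturb every key by exactly d = ln(ε)/(2R); the resulting attention ratios then stay inside [1/ε, ε].

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim, eps = 8, 4, 1.5
Q = rng.normal(size=(n, dim))
K = rng.normal(size=(n, dim))
R = max(np.linalg.norm(Q, axis=1).max(), np.linalg.norm(K, axis=1).max())
d = np.log(eps) / (2 * R)   # maximal tolerated key-to-representative distance

noise = rng.normal(size=K.shape)
K_rep = K + d * noise / np.linalg.norm(noise, axis=1, keepdims=True)

def attention(Q, K):
    S = Q @ K.T
    E = np.exp(S - S.max(axis=-1, keepdims=True))
    return E / E.sum(axis=-1, keepdims=True)

ratio = attention(Q, K_rep) / attention(Q, K)
assert 1 / eps <= ratio.min() and ratio.max() <= eps
```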
GPU Friendly Grouping Method
In this section, we discuss the implementation of a grouping method. To make group attention efficient and effective, the grouping method has to satisfy the following requirements:
(1) Tight distance bound: to ensure the approximation quality, the distance between each key and its group representative should be minimized according to Lemma 1.
(2) Lightweight: to ensure the performance gain, the grouping method must be lightweight, at worst not exceeding the complexity of group attention itself (O(nN)).
(3) GPU friendly: to take advantage of GPUs, we prefer a grouping method that mainly consists of matrix operations, which can be efficiently executed on a GPU.
To satisfy the above requirements, after a thorough investigation of various clustering algorithms, we design a GPU-friendly K-means [35] as the grouping method.
First, K-means minimizes the overall distance between any object and its cluster center, hence naturally satisfying Requirement 1.
Second, given N centers, in each iteration the time and space complexity of K-means is O(nN). Usually, the iterations continue until convergence. However, we observe that rather than seeking a perfect K-means clustering, running a few iterations is sufficient to get a good grouping for group attention, because typically the later iterations only slightly update the clustering and group attention is robust to such imperfection.
Third, we design a GPU-friendly implementation of K-means. The performance bottleneck of K-means comes from the distance computation between each vector and its center, that is,

$$|v_i - c_j| = \sqrt{(v_i - c_j)^2}, \quad i \in [1, n],\; j \in [1, N].$$

Here the performance bottleneck is the pairwise difference v_i − c_j. We instead use a different formulation:

$$|v_i - c_j| = \sqrt{|v_i|^2 + |c_j|^2 - 2\, v_i \cdot c_j}, \quad i \in [1, n],\; j \in [1, N].$$

This is because in this formulation, the performance bottleneck is v_i · c_j, which can be implemented as a matrix product operation. Although the complexity of the two formulations is the same, on GPUs a matrix product is much more efficient than pairwise differences.
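A PyTorch sketch of this formulation is shown below (illustrative; RITA's actual kernel may differ). The dominant cost is the single matmul `v @ c.T`, which GPUs execute very efficiently.

```python
import torch

def pairwise_dist(v, c):
    """Distances between n keys v (n, d) and N centers c (N, d) via
    |v - c|^2 = |v|^2 + |c|^2 - 2 v.c; the bottleneck is one matmul."""
    v2 = (v * v).sum(dim=1, keepdim=True)   # (n, 1)
    c2 = (c * c).sum(dim=1)                 # (N,)
    d2 = v2 + c2 - 2.0 * (v @ c.T)          # (n, N)
    return d2.clamp_min(0).sqrt()

def kmeans_iteration(v, c):
    """One K-means iteration; a few such iterations suffice for grouping."""
    assign = pairwise_dist(v, c).argmin(dim=1)   # nearest center per key
    for j in range(c.shape[0]):                  # recompute centers
        members = v[assign == j]
        if members.shape[0] > 0:
            c[j] = members.mean(dim=0)
    return assign, c
```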
ADAPTIVE SCHEDULER
Next, we present the adaptive scheduler of RITA, which addresses the challenges of determining an appropriate number of groups N and, accordingly, the batch size B, as described in the Introduction. Using a dynamic scheduling method we propose, the scheduler automatically determines and adjusts N and B based on the distributional properties of the feature embeddings produced over the iterative training process, while guaranteeing to produce high quality attention approximations that meet users' requirements.
In Sec. 5.1 we show how RITA automatically determines N. Then we introduce in Sec. 5.2 the learning-based method which, given an N, immediately predicts a good batch size.
Dynamically Determining the Number of Groups N
Without loss of generality, we use one group attention module as an example to show how RITA automatically obtains an appropriate N. The adaptive scheduler of RITA starts with a large N and decreases it dynamically. This is because in the training process of RITA, the feature embeddings produced epoch by epoch tend to get stabler and stabler and gradually converge, so there is no need to increase N. RITA reduces the number of groups by merging similar groups. Intuitively, given two groups, we could measure their similarity based on the distance between their centers. If the distance between their centers is smaller than a distance threshold, then the two groups could be merged. However, setting an appropriate distance threshold seems hard, as difficult as setting an appropriate N.
To solve this problem, RITA leverages the error bound of group attention introduced in Sec. 4.3. It only requires users to set an error bound ε, and then uses Lemma 1 to translate ε to a distance threshold d. RITA then uses Lemma 2 to determine if merging some given clusters still meets the error bound threshold ε.
Lemma 2. Denote c_k to be the cluster center of cluster_k. Assume the existing grouping satisfies ∀k, max_{x∈cluster_k} |c_k − x| ≤ d. If a set of clusters {cluster_{m_1}, ..., cluster_{m_t}} satisfies

$$\max_{x \in \mathrm{cluster}_{m_j}} |c_{m_j} - x| + |c_{m_i} - c_{m_j}| \le d, \quad \forall\, i, j \in [1, t], \tag{4}$$

merging them into one cluster still meets the error bound ε.
Please refer to Appendix A.6 for the proof.

Finding the Mergeable Clusters. We formulate the problem of finding mergeable clusters using graph theory:
(1) each cluster is a node in the graph;
(2) if cluster_i and cluster_j satisfy

$$\max_{x \in \mathrm{cluster}_i} |c_i - x| + |c_i - c_j| \le d \quad \text{and} \quad \max_{x \in \mathrm{cluster}_j} |c_j - x| + |c_i - c_j| \le d,$$

there is an undirected edge between node_i and node_j.

In this scenario, finding the maximum number of mergeable clusters is equivalent to finding the minimal clique cover in the corresponding graph, which is an NP-hard problem [24]. Such a heavy computation overhead is not acceptable for RITA. We thus offer a simplified solution:
(1) Halve the clusters into two sets C_1, C_2;
(2) If c_i ∈ C_1 and c_j ∈ C_2 satisfy

$$\max_{x \in \mathrm{cluster}_i} |c_i - x| + |c_i - c_j| \le \frac{d}{2}, \qquad \max_{x \in \mathrm{cluster}_j} |c_j - x| + |c_i - c_j| \le \frac{d}{2}, \tag{5}$$

cluster_j is marked.
(3) Decrease the number of clusters by counting the marked clusters in C_2.

In this solution, clusters in C_1 can be regarded as transfer nodes. If (5) holds for (c_i ∈ C_1, c_{j_1} ∈ C_2) and (c_i ∈ C_1, c_{j_2} ∈ C_2), respectively, we have
$$\max_{x \in \mathrm{cluster}_{j_1}} |c_{j_1} - c_{j_2}| + |x - c_{j_1}| \le \max_{x \in \mathrm{cluster}_{j_1}} |c_{j_1} - c_i| + |c_i - c_{j_2}| + |x - c_{j_1}| \le \max_{x \in \mathrm{cluster}_{j_1}} \left(|c_{j_1} - c_i| + |x - c_{j_1}|\right) + \max_{y \in \mathrm{cluster}_{j_2}} \left(|c_i - c_{j_2}| + |y - c_{j_2}|\right) \le \frac{d}{2} + \frac{d}{2} = d \tag{6}$$
Thus (4) holds when merging several clusters in C_2 with one cluster in C_1. As a result, we can greedily merge clusters in C_2, as illustrated in step (3).
Assume the number of clusters decreases by N_dec after merging; we then apply a momentum update [42] on the number of clusters N, as is commonly used in machine learning to smooth the changing of N and avoid sample selection bias. To be specific:

$$N \leftarrow m\,(N - N_{dec}) + (1 - m)\,N,$$

where m is a hyper-parameter for momentum.
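Putting the pieces together, here is an illustrative sketch of one merging pass followed by the momentum update; `radius[k]` stands for max_{x∈cluster_k} |c_k − x|, `d_max` is the threshold d derived from ε via Lemma 1, and all names are hypothetical.

```python
import numpy as np

def merge_pass(centers, radius, d_max, momentum=0.9):
    """One simplified merging pass (Sec. 5.1) over clusters split into
    C1 (transfer nodes) and C2 (merge candidates)."""
    N = len(centers)
    C1, C2 = range(N // 2), range(N // 2, N)
    marked = 0
    for j in C2:                       # try to attach each C2 cluster
        for i in C1:                   # to some transfer node in C1
            dij = np.linalg.norm(centers[i] - centers[j])
            if radius[i] + dij <= d_max / 2 and radius[j] + dij <= d_max / 2:
                marked += 1            # condition (5) holds: mark cluster_j
                break
    # momentum update of the target group number
    return momentum * (N - marked) + (1 - momentum) * N
```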
Dynamically Determining the Batch Size
Because of the dynamic grouping operation, the computational graph in deep learning training [1] varies from sample to sample. As a result, it is impossible to precisely compute a batch's GPU memory usage without indeed feeding it into the model. To overcome this problem, RITA learns a batch size prediction function offline; then at the RITA training time, given a number of groups , RITA uses this function to predict a proper batch size.
When the model architecture and hardware are fixed, the batch size depends on the length L of the timeseries and the average group number N among all attention modules. So RITA samples several (L, N) pairs and estimates a proper batch size for each pair.
More specifically, given a user-defined maximal timeseries length maxL, we randomly sample integral points (L, N) from the plane {1 ≤ L ≤ maxL, 1 ≤ N ≤ maxN}. Then we use a binary search based algorithm to find the maximal batch size that consumes less than 90% of the available GPU memory, aiming to avoid wasting GPU memory and the risk of out-of-memory (OOM) errors.
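A possible shape of this binary search is sketched below; `try_run` is a hypothetical callback that executes one training step with a given batch size and reports the fraction of GPU memory used, raising RuntimeError on OOM.

```python
import torch

def max_batch_size(try_run, lo=1, hi=4096, budget=0.9):
    """Largest batch size whose peak GPU memory stays under `budget`."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        try:
            ok = try_run(mid) < budget
        except RuntimeError:           # CUDA out of memory
            torch.cuda.empty_cache()
            ok = False
        if ok:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best
```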
Treating these pairs as ground truth labels, we use function fitting [18] to learn the batch size prediction function B = f(L, N), where f is a function of the two variables L and N.
Learning the Prediction Function. We apply curve_fit from SciPy [53] as the function fitting tool to fit the two-variable function B = f(L, N) on the plane {1 ≤ L ≤ maxL, 1 ≤ N ≤ maxN}. We observe that applying one function to the whole plane incurs a huge estimation error. So we develop a dynamic-programming (DP) method to divide the plane into several sub-planes and apply a distinct function to each sub-plane. It is optimal in minimizing the total estimation error on all sub-planes. With the learned prediction function f, we can estimate a proper batch size for any (L, N) during training, even if it is not seen in the sampled (L, N) pairs. The Algorithms and Optimality Proof. Please refer to Appendix A.3 for the pseudo code of the binary search-based algorithm, the description of the DP method for plane division, and the proof of its optimality.
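To illustrate the fitting step, the sketch below fits one candidate functional form with SciPy's curve_fit; both the form and the sampled numbers are made up for illustration and are not RITA's learned prior.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(X, a, b, c):
    # One candidate two-variable form: memory grows roughly with L * N,
    # so the feasible batch size shrinks with both.
    L, N = X
    return a / (L * N) + b / L + c

# Made-up (L, N) -> B samples standing in for the measured ground truth.
L = np.array([1000, 2000, 4000, 8000, 1000, 4000], dtype=float)
N = np.array([32, 32, 64, 64, 128, 128], dtype=float)
B = np.array([64, 32, 12, 6, 48, 10], dtype=float)

params, _ = curve_fit(f, (L, N), B)
print(f((np.array([6000.0]), np.array([64.0])), *params))   # predicted B
```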
EVALUATION
Our experimental study focuses on the following questions:
1. Effectiveness and efficiency of RITA: How does RITA compare with other Transformer-based methods and traditional timeseries representation learning methods in accuracy and efficiency?
2. Ablation Study: How do the key techniques of RITA work?
Experimental Setup
Datasets. We evaluate RITA on classification and imputation tasks using 5 multi-variate and 3 uni-variate timeseries datasets.
• WISDM [55] is a popular multivariate timeseries dataset generated from the accelerometer in the mobile phone. The subjects performed 18 daily activities (e.g. walking, jogging). The dataset was collected from 51 subjects and the sampling rate is 20 Hz.
• HHAR dataset [46] contains sensing data of accelerometer collected from 9 users performing 5 activities with 12 different smartphones (varying in sampling rate). This increases the complexity of the task and thus can test the model's robustness.
• RWHAR RealWorld HAR dataset [48] covers 15 subjects performing 8 locomotion-style activities. Each subject wears the sensors for approximately ten minutes. The sampling rate is 50 Hz.
• ECG dataset [34] consists of 10,000 ECG recordings for arrhythmia classification. Each recording has a varying length ranging from 6 to 60 seconds, sampled at 500 Hz. The ECG recordings correspond to 9 types of heart problems such as atrial fibrillation (AF) and premature atrial contraction (PAC), etc.
• MGH [6] is an EEG dataset collected by Mass. General Hospital. Each timeseries corresponds to the EEG data observed from one patient during their stay in the ICU for a couple of days. The EEG monitoring produced data with 20 channels. The sampling rate is 200 Hz, so it produces very long timeseries.
• WISDM*/HHAR*/RWHAR* are three uni-variate datasets derived by picking one channel from WISDM/HHAR/RWHAR.

Training/Validation Data Generation. We apply a sliding window on the raw timeseries to get training/validation samples. The size of the sliding window is set to 200 on the small datasets (WISDM, HHAR, RWHAR), 2,000 on the medium size dataset (ECG), and 10,000 on the large dataset (MGH). Table 1 shows the statistics of the generated datasets. They are randomly split into training/validation sets in a proportion of 0.9/0.1. In the "pretraining + few-label finetuning" scenario, we use 100 labeled samples per class for finetuning. We guarantee that the training set does not overlap with the validation set.

Baselines. To evaluate our group attention (referred to as Group Attn.), we develop three baselines by replacing the group attention component in RITA with the classic vanilla self-attention [52] (referred to as Vanilla) and two SOTA methods that reduce the complexity of self-attention by approximation in NLP, namely Performer [10] (referred to as Performer) and Linformer [54] (referred to as Linformer). Similar to our proposed Group Attn., Vanilla, Performer, and Linformer all use RITA's time-aware convolution operation (Sec. 3) to turn timeseries segments into input feature vectors. We also compare against TST [61], a Transformer-based timeseries representation learning architecture (Sec. 7.1). Finally, we compare Group Attn. against GRAIL [40], which is the SOTA of the non-deep learning methods for timeseries representation learning. GRAIL supports classification tasks by feeding the learned representations into a Support-Vector Machine [12] or K-Nearest Neighbor [17] classifier. Note GRAIL only targets uni-variate timeseries and cannot support imputation tasks.

Methodology. We mainly focus on two downstream tasks:
(1) Classification. First, we train Group Attn. and the baselines with full labels from scratch to test the effectiveness of RITA framework and the approximation quality of our group attention.
Second, to measure the effectiveness of self-supervised pretraining, we evaluate the accuracy of training on few labeled timeseries with/without pretraining on large scales of unlabeled timeseries. To be specific, we split the training set into a pretraining set and a finetuning set, with very little data in the latter (100 labeled samples per class in our experiment). We train the model on the cloze pretraining task with a mask rate of 0.2. Then we train two classification models using the finetuning set, either based on the pretrained version or from scratch. We repeat the experiment 5 times with random data splits and report the median accuracy.
(2) Imputation. We run the imputation task on the datasets used in classification as well as the large unlabeled MGH dataset, and measure the mean square error and absolute imputation error. To get timeseries with missing values, we randomly mask values with an expected mask rate of 0.2. The masked values are replaced with a special value.
Finally, to evaluate Group Attn.'s benefit in efficiency, the total time of forward computation, backward propagation, and grouping is measured for all methods in all the experiments.
To save space, we only report the average training time per epoch here and refer readers to Appendix A.8 for the inference time.
We first compare against the Transformer-based methods on multi-variate datasets (sec. 6.2, 6.3), then compare against the nondeep learning method GRAIL on uni-variate datasets (sec. 6.4). Configuration. Please refer to Appendix A.1 for the experiment configuration and hyper-parameter settings.
Effectiveness: Transformer-Based Methods
We first evaluate the quality of the models trained with full labels from scratch. We then show how the pretraining of RITA increases the accuracy of the downstream tasks.
full-label training (Multi-variate classification)
Results shown in Figure 3(a) lead to the following observations:
(1) RITA's advantage over TST. On all four datasets for the classification tasks, Group Attn. and the other three baselines that use RITA architecture (Vanilla, Performer, and Linformer) outperform TST. In particular, Group Attn. outperforms TST by 49 percentage points on the ECG dataset (88.48% vs 39.93%) with long timeseries. Two deficiencies in TST may cause its poor performance on the long timeseries. Firstly, TST concatenates the output embedding vector of each time stamp, then uses a linear classifier to do classification on the concatenated vector. When the timeseries is long, the linear classifier has so many parameters that it tends to overfit easily. Secondly, TST replaces Layer Normalization in vanilla Transformer with Batch Normalization. When the timeseries is long, it can only accommodate a small number of timeseries in each batch, leading to bias in Batch Normalization.
(2) Group-attention's advantage over other attention mechanisms. Group Attn. is better than Performer and Linformer on 3 out of 4 datasets for classification. Although Linformer works slightly better than Group Attn. on the ECG dataset (90.37% vs 88.84%), its performance is the worst in all other cases compared to any other RITA-based methods. Vanilla computes the attention scores precisely. Thus it is expected to work well. However, Group Attn. outperforms Vanilla on WISDM (87.50% vs 86.95%) and is very close to it on other 3 datasets. This suggests that group attention's approximation quality is good.
pretraining + few label finetune (Multi-variate classification)
The results shown in Table 3 lead to the following observations:
(1) Pretraining is effective. Pretraining always leads to better accuracy than training with a few labels from scratch. In particular, on WISDM data all the methods using RITA architecture increase the accuracy by at least 10%. This is impressive considering we do not have a very large unlabeled pre-training set to use.
(2) RITA's advantage over TST. our Group Attn. and other three baselines using RITA architecture (Vanilla, Performer, and Linformer) significantly outperform TST on all four classification datasets by 25 percentage points.
(3) Group Attention's advantage over other attention mechanisms. Group Attn. is better than Performer and Linformer on 3 out of 4 datasets. When compared to Vanilla, Group Attn. is better on HHAR and ECG, and comparable on the other two, further confirming its high quality on approximation. Further, we notice that Linformer struggles in this setting: on average its accuracy is worse than Vanilla by 8.22% and Group Attn. by 8.01%. This is because the low-rank projection operation introduces extra model parameters, making Linformer more easily overfit, while overfitting is especially harmful when there are only a few labeled training samples.
full-dataset training (Multi-variate imputation)
Similar to the classification tasks, the results of the imputation tasks (Table 2) show that Group Attn. consistently outperforms the baselines in training time while achieving comparable or better MSE. Again, on the large dataset MGH (length = 10,000), TST and Vanilla fail due to out-of-memory (OOM) errors. Methods using the RITA framework (Group Attn., Performer, Linformer) all achieve very low MSE (i.e., are highly accurate). Among them Linformer is the worst.
Efficiency: Transformer-based Methods
We measure the efficiency by the average training time per epoch including the cost of the forward computation + backward propagation and the grouping overhead. We first show the results on all the 5 datasets in Sec. 6.3.1. We then vary the length of the timeseries on the MGH dataset to show group attention's scalability on long timeseries in Sec. 6.3.2.
Training Time: All Multi-variate Datasets
The results in Fig. 3(b) and Table 2 lead to the following observations:
(1) Vanilla Self-Attention is not scalable. On average, it takes 2-3 minutes to train one epoch when the length of the timeseries is only 200 (WISDM, HHAR, RWHAR), takes over 15 minutes when the length increases to 2,000 (ECG), and fails on the long MGH data when the length reaches 10,000 due to running out of GPU memory.
(2) Group Attn.'s advantage over all other attention mechanisms. As we have shown in Sec. 6.2, Group Attn. achieves accuracy better than or comparable to Performer and Linformer in classification and imputation tasks, while Group Attn. is always faster than Performer, Linformer, and all other baselines on all 5 multi-variate datasets, thus a win-win.
(3) The longer the timeseries, the larger the speedup. On the medium sized ECG dataset with a length of 2,000, Group Attn. achieves a speedup of 3.86X/1.36X/2.27X compared to Vanilla/Performer/Linformer. When the length increases to 10,000, the speedup on the MGH dataset increases to 6.59X/7.48X compared to Performer/Linformer (Vanilla and TST failed in this case) on the imputation task (Table 2). However, even on the short WISDM, HHAR, and RWHAR datasets, Group Attn. still consistently outperforms the other methods, confirming that it does not introduce much overhead. This is because when the timeseries gets longer, Group Attn. gets more opportunities to find windows with similar properties.
Training time: Varying the Length
In this experiment, we truncate the original MGH timeseries into sequences with lengths of 2000/4000/6000/8000/10000, and compare Group Attn. against Vanilla and the other attention mechanisms. Vanilla cannot handle sequences longer than 8000.
The results in Fig. 4 again show that the longer the timeseries, the larger the speedup. With comparable MSE, Group Attn. outperforms Vanilla by 63X. Moreover, as the length increases from 2000 to 10000, the training time of Group Attn. only increases from 31.2 seconds to 54.4 seconds per epoch. The reason is that as the timeseries becomes longer, there are more grouping opportunities because of the similarity of the timeseries segments.
Comparison to Non-deep Learning Methods
We compare against GRAIL, the SOTA of non-deep learning timeseries representation learning. We use the three uni-variate datasets, because GRAIL only targets uni-variate timeseries. Results in Fig. 5 show that on all 3 datasets RITA significantly outperforms GRAIL in accuracy by 45, 16, and 21 percentage points because of the expressive power of Transformer. Moreover, thanks to the GPU-friendly design of RITA, it is at least 2× faster than GRAIL in training time.
Ablation Study
Adaptive Scheduler
To evaluate the effectiveness of RITA's adaptive scheduler (Sec. 5), we compare it against a baseline using a fixed group number N. We vary N and the error bound threshold ε used by RITA.
From the results in Table 4 we get the following observations:
(1) Adaptive Scheduler is better than fixed N. Training with the Adaptive Scheduler already achieves better or comparable performance compared to the best performing N. More specifically, on the MGH dataset, the dynamic scheduler always achieves better accuracy and is much faster compared to fixed N. On the ECG dataset, although fixed N is slightly better than the adaptive scheduler in accuracy when setting N to 512, it runs much slower than the adaptive scheduler. Of course, finding the best N that balances accuracy and running time requires careful tuning.
(2) Adaptive Scheduler is tuning free. It is robust in both accuracy and running time when ε varies, while the results of fixed N vary significantly when the value of N changes. Therefore, the Adaptive Scheduler frees the users from tuning, whereas finding an appropriate N for a given dataset is hard.

Table 5: RITA Pretraining: increasing sizes of pretrain set.
The Sizes of the Pretraining Data
Next, we evaluate how the number of unlabeled data influences the effectiveness of pretraining. To get empirical results, we pretrain RITA on WISDM dataset with 20%/40%/60%/80% of the pretraining data and finetune each pretrained model with 100 labels per class. The results in Table 5 show that: (1) The more pretraining data, the larger the improvement. The accuracy increases with the sizes of the pretraining data; (2) Marginal utility diminishing. The first 20% pretraining data gives a 10.38% improvement in accuracy (72.94% vs 62.56%), while the remaining 80% pretraining data only gives an additional improvement of 2.12% (75.06% vs 72.94%).
RELATED WORK
7.1 Timeseries Analytics
There is a great deal of prior work on timeseries analytics methods. This work can be divided into three categories: (1) non-deep learning methods; (2) CNN/RNN-based deep learning methods; and (3) Transformer-based deep learning methods. Traditional Methods. These methods, such as TS-CHIEF [45], HIVE-COTE [33], and ROCKET [15], have achieved notable performance on public datasets. Despite that, traditional methods suffer from one or more issues: they (1) rely on expert knowledge for feature extraction; (2) incur heavy computation cost and are inappropriate for GPU devices; (3) support only uni-variate timeseries; (4) perform classification solely. Some work [61] shows that the Transformer-based methods outperform these traditional methods, especially on multi-variate timeseries.
In particular, as the SOTA of timeseries representation learning, GRAIL [40] extracts landmarks from data and computes the representations with the combination of the landmarks. However, GRAIL only supports uni-variate timeseries. Our experiments (Sec. 6.4) show that RITA significantly outperforms GRAIL in both effectiveness and efficiency on uni-variate timeseries.
CNN/RNN-based Deep Learning Methods. CNN-based methods, such as InceptionTime [21] and ResNet [19], are good at classification tasks, but cannot handle generative tasks such as forecasting because of the inductive bias of convolutional networks. RNN-based methods, such as Brit [7] and DeepAR [44], are capable of classification, regression, and generation. However, the recurrent structure brings several problems: (1) it limits the model's ability to capture long-range correlations; (2) it is notoriously difficult to train [41] because of the gradient vanishing and exploding problems. As a result, such methods can hardly scale to very long timeseries. Transformer-based Deep Learning Methods. Given that Transformer is the best choice for the backbone in almost all sequence modeling tasks, some effort has been made to apply Transformers to timeseries analytics. Targeting forecasting of uni-variate timeseries, LogTrans [30] introduced a log sparsity assumption to attention computation. Informer [62] pushes LogTrans a step further and scales forecasting to multi-variate timeseries. Autoformer [57] performs forecasting by decomposing timeseries into two parts, i.e., the trend part and the seasonal part.
For imputation tasks, CDSA [37] outperforms statistical methods and the SOTA RNN-based method Brit [7] on 3 public and 2 competition datasets. For timeseries classification, AutoTransformer [43] performs architecture search to adapt to tasks in different domains. For timeseries anomaly detection, Anomaly Transformer [58] outperforms many widely-used methods such as OmniAnomaly [47], assuming the attention score maps show a Gaussian distribution.
All of these works are designed for specific tasks, rather than functioning as a representation learning framework to serve different downstream tasks. To fill this gap, some researchers proposed a Transformer-based architecture, called TST [61]. Like RITA, TST supports regression, classification, and unsupervised learning through the "cloze test" pretraining task on timeseries. However, TST directly uses the classical Vanilla self-attention, thus not scalable to long timeseries as shown in our experiments (Sec. 6.3.2).
Efficient Transformers
The need of improving the scalability of Transformers has led to more efficient variations of Transformers, especially for accommodating long text data in NLP [49].
Introducing fixed/random patterns to the self-attention mechanism is an intuitive idea. Sparse Transformer [9] and Longformer [3] only compute attention at fixed intervals. ETC [2] and BigBird [60] use global-local attention: the attention computation is limited within a fixed radius, while some auxiliary tokens are added to attend/get attended globally. The deficiency of fixed attention patterns is obvious: they heavily depend on users to provide an optimal setting.
To decrease the reliance on human labor, some works seek to introduce learnable/adaptive attention patterns instead of fixed patterns. Reformer [26] proposed only computing the dominant attention terms based on their observation of sparsity in attention matrix from language/image data. Such sparsity is intuitive in language data, in which a word's attention mainly focuses on the nearby sentences. However, attention in timeseries data shows strong seasonal patterns rather than sparse patterns, mainly as result of the periodicity of timeseries data. Therefore, such works do not work well for timeseries.
Apart from introducing attention patterns, some works seek to solve this problem with applied mathematics techniques. Linformer [54] performs a projection to decrease the size of query, key and value matrices before attention computation, because the attention matrix tends to be low-ranked. Performer [10] uses linear functions to approximate the kernel function softmax, making attention computation commutative. When the sequence length is far greater than the dimension of embedding vectors, Performer benefits from changing the order of matrix multiplication. Linformer and Performer do not depend on the unique properties of language data, thus potentially fitting timeseries better than other techniques, which is why we compared against them in our experiments. However as shown in Sec. 6, our group attention significantly outperforms them in both accuracy and efficiency (training time), because group attention fully leverages the periodicity of timeseries.
CONCLUSION
In this work, we presented RITA, an automatic, self-supervised, and scalable timeseries analytics tool. RITA effectively adapts Transformer, popular in NLP, into timeseries analytics. As the key component of RITA, group attention eliminates the performance bottleneck of the classical self-attention mechanisms, thus successfully scaling RITA to highly complex, long timeseries data. Our experiments confirm that RITA significantly speeds up the state-of-the-art by 63X with a better accuracy.
A APPENDIX: SUPPLEMENTARY MATERIAL
A.1 Experiment Configuration and Hyper-parameter Settings
Configuration. All models were trained on an NVIDIA Tesla V100 16GB GPU. All the methods are optimized with AdamW [36], of which the starting learning rate and weight decay parameter are both 1e-4. In the full-label training scenario, we train the models for 100 epochs. In the "pretraining + few-label finetuning" scenario, as the pretrained models require fewer epochs to converge [61], we train the model for 50 epochs. For a fair comparison, the baselines use a maximal batch size within the GPU's capacity during training.
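As a concrete illustration of the reported optimizer setting, a hedged PyTorch sketch (not RITA's actual code; the model is only a placeholder) could look as follows:

```python
import torch
from torch.optim import AdamW

# AdamW with both the starting learning rate and the weight decay set to 1e-4,
# matching the configuration described above.
model = torch.nn.Linear(64, 64)   # placeholder for the actual model
optimizer = AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
```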
As for the model hyper-parameter setting, RITA and the baselines use a Transformer structure balancing Vanilla's accuracy and efficiency: an 8-layer stack of 2-head attention with hidden vectors of dimension 64. The convolution kernel size is set to 5 by default. We set the error bound threshold (Sec. 5.1) of Group Attention to 2, as it balances accuracy and efficiency in general on all datasets. Because Linformer requires the user to set the size of the projection matrix, in different settings we choose an accuracy-efficiency balancing one among {64, 128, 256, 512}. In Alg. 1, we denote c_i to be the size of the i-th group, n to be the number of groups, r_i to be the representative key of the i-th group, R to be the matrix consisting of all r_i, and gid_j to be the group that k_j belongs to. Q, V are the packing matrices of query vectors and value vectors as described in Sec. 2.
A.2 Efficient Computation of Group Attention
We describe Alg. 3 and intuitively show its optimality. We assume that Scipy [53] learns an optimal function in Line 4, so that the function COST gives the optimal estimation error when fitting the points in a set S. When fitting very few points, we assign an infinite cost to prevent a biased fitting function (Line 2). g(x) denotes the minimal estimation error for the points in the sub-plane {y_2 <= y <= y_1, x' <= x}. In Lines 11-13, we enumerate all possible ways of cutting {y_2 <= y <= y_1, x' <= x_1} horizontally into the two sub-planes {y_2 <= y <= y_1, x' <= k} and {y_2 <= y <= y_1, k <= x' <= x_1} by iterating k from 1 to x_1. Choosing the cutting strategy that minimizes the estimation error gives us a g(x_1) with minimal estimation error for the sub-plane {y_2 <= y <= y_1, x' <= x_1}, which is recorded in Line 14. h(y) denotes the minimal estimation error for the sub-plane {y' <= y}. We enumerate all possible ways of cutting {y' <= y} vertically into the two sub-planes {y' <= k} and {k <= y' <= y} by iterating k from 1 to y (Lines 17-19). Finally, h evaluated at the plane's full extent gives the minimal estimation error for the whole plane. Based on the above discussion, this algorithm is guaranteed not to miss any better solution, and is hence optimal.
A.4 The Correctness of Group Attention
Lemma 3. Assuming the windows belonging to the same group have the same key vector, i.e., $k_j = r_{gid_j}$ for $j \in G_{gid_j}$, then the feature embedding produced by the original self-attention mechanism is identical to the output of our group attention mechanism implemented in Algorithm 1.
Proof. Denote $r_{gid_j}$ to be the representative vector of group $G_{gid_j}$, i.e., $r_{gid_j} = k_j$ for $j \in G_{gid_j}$. Algorithm 1 gives that
$$\tilde{v}_j = \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, v_k, \quad \tilde{s}_{i,j} = q_i \cdot r_j, \quad s_i = \sum_{j=0}^{n-1} c_j \exp(\tilde{s}_{i,j}), \quad \tilde{o}_i = \sum_{j=0}^{n-1} \frac{\exp(\tilde{s}_{i,j})}{s_i}\, \tilde{v}_j \qquad (7)$$
By the canonical self-attention mechanism introduced in Sec. 2, we get:
$$s_{i,j} = q_i \cdot k_j, \qquad a_{i,j} = \frac{\exp(s_{i,j})}{\sum_{l=0}^{N-1} \exp(s_{i,l})}, \qquad o_i = \sum_{j=0}^{N-1} a_{i,j}\, v_j \qquad (8)$$
With (7) and (8), we have
$$\sum_{l=0}^{N-1} \exp(s_{i,l}) = \sum_{l=0}^{N-1} \exp(q_i \cdot k_l) = \sum_{j=0}^{n-1} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, \exp(q_i \cdot k_k) = \sum_{j=0}^{n-1} \exp(q_i \cdot r_j) \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j) = \sum_{j=0}^{n-1} c_j \exp(q_i \cdot r_j) = \sum_{j=0}^{n-1} c_j \exp(\tilde{s}_{i,j}) = s_i \qquad (9)$$
Further,
$$o_i = \sum_{j=0}^{N-1} a_{i,j}\, v_j = \sum_{j=0}^{n-1} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, a_{i,k}\, v_k = \sum_{j=0}^{n-1} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, \frac{\exp(s_{i,k})}{\sum_{l=0}^{N-1} \exp(s_{i,l})}\, v_k = \sum_{j=0}^{n-1} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, \frac{\exp(q_i \cdot k_k)}{\sum_{l=0}^{N-1} \exp(s_{i,l})}\, v_k = \sum_{j=0}^{n-1} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, \frac{\exp(q_i \cdot r_j)}{\sum_{l=0}^{N-1} \exp(s_{i,l})}\, v_k = \sum_{j=0}^{n-1} \frac{\exp(q_i \cdot r_j)}{\sum_{l=0}^{N-1} \exp(s_{i,l})} \sum_{k=0}^{N-1} \mathbb{1}(gid_k = j)\, v_k = \sum_{j=0}^{n-1} \frac{\exp(q_i \cdot r_j)}{\sum_{l=0}^{N-1} \exp(s_{i,l})}\, \tilde{v}_j \qquad (10)$$
Combining (7), (9), and (10), we have $\tilde{o}_i = \sum_{j=0}^{n-1} \frac{\exp(\tilde{s}_{i,j})}{s_i}\, \tilde{v}_j = o_i$. This concludes that the output of our group attention is identical to vanilla self-attention's.
□
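The identity proven above is easy to check numerically. The following NumPy sketch (our illustration; variable names follow the notation above, and the group assignment is random) builds keys that satisfy the premise of Lemma 3 and compares group attention against vanilla self-attention:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
N, n, d = 8, 3, 4                        # windows, groups, embedding dimension
gid = rng.integers(0, n, size=N)         # group id of each window
R = rng.normal(size=(n, d))              # representative key per group
Q = rng.normal(size=(N, d))
V = rng.normal(size=(N, d))
K = R[gid]                               # premise of Lemma 3: k_j = r_{gid_j}

# Vanilla self-attention.
o_vanilla = softmax(Q @ K.T) @ V

# Group attention: aggregate values per group, weight denominators by c_j.
c = np.bincount(gid, minlength=n)                     # group sizes c_j
V_tilde = np.zeros((n, d))
np.add.at(V_tilde, gid, V)                            # v~_j = sum of in-group values
S_tilde = Q @ R.T                                     # s~_{i,j} = q_i . r_j
s = (c * np.exp(S_tilde)).sum(axis=1, keepdims=True)  # s_i from Eq. (7)
o_group = (np.exp(S_tilde) / s) @ V_tilde

print(np.allclose(o_vanilla, o_group))   # True
```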
A.7 Downstream Tasks
RITA supports a variety of downstream tasks. In this section, we show that with minimal modification RITA can effectively support classification, imputation and forecasting tasks. Other unsupervised tasks such as similarity search or clustering are naturally supported by extracting feature embeddings from RITA.
A.7.1 Classification
To classify timeseries, we input timeseries to the model as described in Sec. 3 and attach a special token [CLS] as the first input embedding.
[CLS]'s embedding acts as the embedding for the entire timeseries, and the output representation of [CLS] is fed into a classifier: y = Softmax(W_cls Z_[CLS] + B_cls), where Z_[CLS] ∈ R^d is the output representation of [CLS], C is the number of classes, and W_cls ∈ R^{C×d}, B_cls ∈ R^C are learnable parameters for the classification task. The resulting vector y ∈ R^C represents the probability that the input timeseries belongs to each class.
We apply Cross Entropy Loss as the loss function of the classification task [13]: $L = \frac{1}{C}\sum_{i=1}^{C} -\hat{y}(i)\log(y(i))$, where $\hat{y}$ is a binary indicator for the ground truth label:
$$\hat{y}(i) = \begin{cases} 1 & i \text{ is the ground truth label} \\ 0 & \text{otherwise} \end{cases} \qquad (16)$$
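A minimal PyTorch sketch of this classification head (our illustration; `d`, `C`, and the [CLS] representation are placeholders, not RITA's actual code):

```python
import torch
import torch.nn as nn

d, C = 64, 5                                   # hidden size and number of classes
head = nn.Linear(d, C)                         # W_cls and B_cls
z_cls = torch.randn(1, d)                      # stand-in [CLS] output representation
logits = head(z_cls)
y = logits.softmax(dim=-1)                     # per-class probabilities
loss = nn.CrossEntropyLoss()(logits, torch.tensor([2]))  # gold class index 2
```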
A.7.2 Imputation
Timeseries are mainly generated by sensors, a common problem of which is missing values. This becomes a challenge when many downstream analytics require the missing values to be recovered. The recovering task is imputation. Denote the real timeseries as X ∈ R^{t×m}, the observed timeseries with missing values as X' ∈ R^{t×m}, and the set of missing values' positions as M. We scale the values of all timeseries to non-negative and use a special value (-1) to indicate missing values:
$$X'(i,j) = \begin{cases} -1 & (i,j) \in M \\ X(i,j) & (i,j) \notin M \end{cases} \qquad (17)$$
X' is fed into RITA as input, and the output representations are concatenated and fed into a Transpose Convolution layer which decodes the output embedding vectors from the hidden space to timeseries values, corresponding to the convolution operation in the input stage, i.e., Y = TransposeCNN(Z_1 ⊕ Z_2 ⊕ ... ⊕ Z_n), where Y ∈ R^{t×m} is the recovered timeseries, and Z_i ∈ R^d is the output representation of each position.
Here Mean Square Error is chosen as the loss function [51]: $L = \frac{1}{|M|}\sum_{(i,j)\in M} (X(i,j) - Y(i,j))^2$.
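The masking convention of Eq. (17) and the masked MSE can be sketched as follows (our illustration; the decoded output `Y` is only a stand-in for the Transpose Convolution output):

```python
import torch

X = torch.rand(16, 3)                 # real timeseries (t x m), scaled non-negative
mask = torch.rand_like(X) < 0.2       # hypothetical missing positions M
X_obs = X.masked_fill(mask, -1.0)     # Eq. (17): -1 marks missing values

Y = torch.rand(16, 3)                 # stand-in for the decoded model output
loss = ((X[mask] - Y[mask]) ** 2).mean()   # MSE over missing positions only
```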
A.7.3 Forecasting
Forecasting can be regarded as a special case of imputation, in which all missing values are at the end of the timeseries. So, as in the imputation task, we scale the timeseries to non-negative values and use a special value (-1) to indicate the values to be predicted:
$$X'(i,j) = \begin{cases} X(i,j) & i \leq T \\ -1 & \text{otherwise} \end{cases} \qquad (18)$$
where T is the last observed timestamp. Then the output representations are fed into a Transpose Convolution layer using Mean Squared Error as the loss function, as described above.
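A small sketch of the forecasting mask as a special case of imputation (our illustration; `T` is a placeholder for the last observed timestamp):

```python
import torch

X = torch.rand(16, 3)      # full series (t x m), scaled non-negative
T = 12                     # hypothetical last observed timestamp
X_obs = X.clone()
X_obs[T:] = -1.0           # Eq. (18): rows after T are to be predicted
```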
A.7.4 Other Unsupervised Tasks
RITA naturally supports other unsupervised tasks, such as similarity search and clustering [25,31,32], by producing the embedding of a whole timeseries (the output representation of the special token [CLS]). Clustering can be performed on the embeddings with a flexible choice of distance metrics. Similarly, a high-dimensional similarity search system [22,23,38] can be built on the embeddings.
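As an illustration of such unsupervised use (a sketch with hypothetical stand-in embeddings, not tied to RITA's code), the [CLS] embeddings can be clustered or indexed directly, e.g. with scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

embs = np.random.randn(1000, 64).astype(np.float32)   # stand-in [CLS] embeddings

cluster_ids = KMeans(n_clusters=10, n_init=10).fit_predict(embs)  # clustering
nn = NearestNeighbors(n_neighbors=5).fit(embs)                    # similarity search
dists, idx = nn.kneighbors(embs[:1])                              # 5 nearest series
```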
A.8 Inference Time
In this section, we present the average inference time on validation sets. The results in Tables 6 and 7 correspond to the average inference time on the validation sets of the classification and imputation tasks, respectively. Consistent with the results in Section 6.3, our method Group Attn. outperforms the baselines on both classification and imputation tasks, particularly on the datasets comprising long timeseries (ECG and MGH).
Figure 1: RITA Architecture. The raw timeseries passes through a time-aware convolution to produce window embeddings W[CLS], W_1, ..., W_n, which are combined with position embeddings P_1, ..., P_n to form input embeddings E_0, ..., E_n; the Transformer stack outputs representations O_0, ..., O_n.
Figure 2: Group Attention.
Figure 3: Full-label classification results (multi-variate data).
Figure 4: Varying the lengths of timeseries.
Figure 5: Comparison to non-deep learning method (univariate data).
Algorithm 1: Efficient Computation of Group Attention (inputs: packed queries Q, packed values V, group assignments GID, representative keys R, number of groups n; output: the packed feature embeddings O).
thus satisfying an error bound by Lemma 1. If there exist clusters, namely G_1, G_2, ..., G_n, satisfying that:
Table 1: The statistics of the datasets.
Alternative Methods. We compare RITA against the SOTA Transformer-based timeseries representation learning method TST [61].
Dataset | Length | TST [61] MSE | TST [61] Time/s | Vanilla MSE | Vanilla Time/s | Performer MSE | Performer Time/s | Linformer MSE | Linformer Time/s | Group Attn. MSE | Group Attn. Time/s
WISDM | 200 | 13.30 | 150.3 | 3.240 | 178.1 | 3.449 | 162.6 | 3.852 | 141.9 | 3.277 | 136.7
HHAR | 200 | 1.085 | 78.2 | 0.2968 | 97.4 | 0.2980 | 82.6 | 0.3198 | 81.1 | 0.2974 | 73.3
RWHAR | 200 | 0.0882 | 83.9 | 0.0478 | 108.1 | 0.0489 | 89.1 | 0.0572 | 98.4 | 0.0478 | 81.3
ECG | 2000 | 0.0905 | 696.3 | 0.0037 | 857.9 | 0.0033 | 270.2 | 0.0035 | 291.38 | 0.0038 | 164.36
MGH | 10000 | N/A | N/A | N/A | N/A | 0.00014 | 356.2 | 0.00088 | 404.9 | 0.00042 | 54.4
Table 2: Imputation results (multi-variate data). The best results are marked in bold.

Dataset | Pretrain Size | TST [61] Scratch | TST [61] Pre. | Vanilla Scratch | Vanilla Pre. | Performer Scratch | Performer Pre. | Linformer Scratch | Linformer Pre. | Group Attn. Scratch | Group Attn. Pre.
WISDM | 62,231 | 49.13% | 50.03% | 66.16% | 75.89% | 66.09% | 73.97% | 50.12% | 67.44% | 62.56% | 75.06%
HHAR | 68,294 | 72.56% | 75.30% | 75.60% | 81.35% | 76.52% | 80.70% | 65.94% | 76.52% | 76.17% | 82.62%
RWHAR | 63,599 | 69.46% | 80.41% | 85.68% | 91.14% | 87.54% | 91.33% | 81.03% | 86.33% | 86.13% | 89.63%
ECG | 561,358 | 20.98% | 27.99% | 42.05% | 46.16% | 43.34% | 45.58% | 27.19% | 31.34% | 42.58% | 46.39%
Table 3: Pretrain + few-label finetuning results. The best results are marked in bold.

Training Time/sec | MSE
Table 4: Adaptive Scheduling VS Fixed N.

Pretrain Data size | Few-label Accuracy
N/A | 62.56%
12,446 | 72.94%
24,892 | 72.78%
37,338 | 74.10%
49,784 | 74.22%
62,231 | 75.06%
Alg. 1 outputs the packing matrix O for the new feature embeddings {õ_1, ..., õ_N}, where õ_i corresponds to the feature embedding of window i. Lines 2-3 implement the embedding aggregation operation, while Lines 8-11 implement the group softmax function.

A.3 The Algorithms and Optimality Proof for Dynamically Determining Batch Size

Algorithm 2: Binary Search for Batch Size. The search keeps a lower bound and an upper bound on the batch size, repeatedly probes the midpoint between them, measures the resulting GPU memory usage, and moves the lower bound up (recording the probed size as the current best) while the usage stays below 90% of the GPU's capacity, or moves the upper bound down otherwise; it returns the largest batch size found.
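A possible realization of this search is sketched below (our illustration, not RITA's actual code; `try_step` is an assumed callable that runs one training step at a given batch size, and the 0.9 memory margin follows the listing):

```python
import torch

def find_max_batch_size(try_step, upper_bound, device=0, margin=0.9):
    # Binary search for the largest batch size whose peak memory stays
    # below `margin` times the GPU's total capacity.
    capacity = torch.cuda.get_device_properties(device).total_memory
    lo, hi, best = 1, upper_bound, 1
    while lo <= hi:
        b = (lo + hi) // 2
        torch.cuda.reset_peak_memory_stats(device)
        try_step(b)                               # one probe training step
        used = torch.cuda.max_memory_allocated(device)
        if used < margin * capacity:              # comfortably fits
            best, lo = b, b + 1                   # try larger sizes
        else:
            hi = b - 1                            # too close to the limit
    return best
```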
Algorithm 3: Dynamic Programming for Plane Division. COST(S) fits a function to the points in a candidate sub-plane S and returns the estimation error (infinite when S contains too few points, Line 2); the dynamic program then enumerates horizontal cuts within each strip (Lines 6-14) and vertical cuts of the whole plane (Lines 16-19), keeping the division with the minimal total estimation error.
Dataset | Length | TST [61] | Vanilla | Performer | Linformer | Group Attn.
WISDM | 200 | 2.18 | 2.26 | 2.35 | 2.22 | 2.17
HHAR | 200 | 1.19 | 1.23 | 1.28 | 1.21 | 1.18
RWHAR | 200 | 1.32 | 1.37 | 1.42 | 1.34 | 1.31
ECG | 2000 | 18.44 | 15.26 | 5.80 | 6.08 | 5.16
Table 6: Inference time: Classification on multi-variate data (seconds).

Dataset | Length | TST [61] | Vanilla | Performer | Linformer | Group Attn.
WISDM | 200 | 2.03 | 2.11 | 2.19 | 2.07 | 2.02
HHAR | 200 | 1.11 | 1.14 | 1.19 | 1.12 | 1.10
RWHAR | 200 | 1.23 | 1.27 | 1.32 | 1.25 | 1.22
ECG | 2000 | 17.22 | 14.32 | 4.73 | 4.99 | 4.11
MGH | 10000 | N/A | N/A | 6.58 | 6.88 | 1.35
Table 7: Inference time: Imputation on multi-variate data (seconds).
A.5 The Proof of Error Bound (Lemma 1)
Proof. We have
So
Then we have:
Combining (12) and (13), the error is bounded by
Thus
This proves Lemma 1.

A.6 The Proof of Merge Operation (Lemma 2)
Proof. Denote the cluster size of G_i to be c_i. After merging, the new center will be:
it holds that:
Tensorflow: Large-scale machine learning on heterogeneous distributed systems. Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, arXiv:1603.04467arXiv preprintMartín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 (2016).
Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang, arXiv:2004.08483ETC: Encoding long and structured inputs in transformers. arXiv preprintJoshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. arXiv preprint arXiv:2004.08483 (2020).
Longformer: The longdocument transformer. Iz Beltagy, E Matthew, Arman Peters, Cohan, arXiv:2004.05150arXiv preprintIz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long- document transformer. arXiv preprint arXiv:2004.05150 (2020).
Language models are few-shot learners. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Advances in neural information processing systems. 33Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877-1901.
Time series forecasting for healthcare diagnosis and prognostics with the focus on cardiovascular diseases. C Bui, Pham, Vo, Tran, T Nguyen, Le, International conference on the development of biomedical engineering in Vietnam. SpringerC Bui, N Pham, A Vo, A Tran, A Nguyen, and T Le. 2017. Time series forecasting for healthcare diagnosis and prognostics with the focus on cardiovascular dis- eases. In International conference on the development of biomedical engineering in Vietnam. Springer, 809-818.
Lei Cao, Wenbo Tao, Sungtae An, Jing Jin, Yizhou Yan, Xiaoyu Liu, Wendong Ge, Adam Sah, Leilani Battle, Jimeng Sun, Remco Chang, M Brandon Westover, Samuel Madden, Michael Stonebraker, Smile: A System to Support Machine Learning on EEG Data at Scale. Proc. VLDB Endow. 12Lei Cao, Wenbo Tao, Sungtae An, Jing Jin, Yizhou Yan, Xiaoyu Liu, Wendong Ge, Adam Sah, Leilani Battle, Jimeng Sun, Remco Chang, M. Brandon Westover, Samuel Madden, and Michael Stonebraker. 2019. Smile: A System to Support Machine Learning on EEG Data at Scale. Proc. VLDB Endow. 12, 12 (2019), 2230- 2241.
Brits: Bidirectional recurrent imputation for time series. Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, Yitan Li, Advances in neural information processing systems. 31Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. 2018. Brits: Bidirectional recurrent imputation for time series. Advances in neural information processing systems 31 (2018).
Time-series forecasting. Chris Chatfield, Chapman and Hall/CRCChris Chatfield. 2000. Time-series forecasting. Chapman and Hall/CRC.
Generating long sequences with sparse transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever, arXiv:1904.10509arXiv preprintRewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509 (2019).
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, arXiv:2009.14794Rethinking attention with performers. arXiv preprintKrzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020. Rethinking attention with performers. arXiv preprint arXiv:2009.14794 (2020).
Anomaly detection for IoT time-series data: A survey. A Andrew, Göksel Cook, Zhong Mısırlı, Fan, IEEE Internet of Things Journal. 7Andrew A Cook, Göksel Mısırlı, and Zhong Fan. 2019. Anomaly detection for IoT time-series data: A survey. IEEE Internet of Things Journal 7, 7 (2019), 6481-6494.
Support-vector networks. Corinna Cortes, Vladimir Vapnik, Machine learning. 20Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning 20, 3 (1995), 273-297.
The regression analysis of binary sequences. R David, Cox, Journal of the Royal Statistical Society: Series B (Methodological). 20David R Cox. 1958. The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological) 20, 2 (1958), 215-232.
The individual over time: time series applications in health care research. Benjamin F Crabtree, C Subhash, Priscilla M Ray, Schmidt, T Patrick, David D O'connor, Schmidt, Journal of clinical epidemiology. 43Benjamin F Crabtree, Subhash C Ray, Priscilla M Schmidt, Patrick T O'Connor, and David D Schmidt. 1990. The individual over time: time series applications in health care research. Journal of clinical epidemiology 43, 3 (1990), 241-260.
ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels. Angus Dempster, François Petitjean, Geoffrey I Webb, Data Min. Knowl. Discov. 34Angus Dempster, François Petitjean, and Geoffrey I. Webb. 2020. ROCKET: excep- tionally fast and accurate time series classification using random convolutional kernels. Data Min. Knowl. Discov. 34, 5 (2020), 1454-1495.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USALong and Short Papers1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers). 4171- 4186.
Discriminatory analysis. Nonparametric discrimination: Consistency properties. Evelyn Fix, Joseph Lawson Hodges, International Statistical Review/Revue Internationale de Statistique. 57Evelyn Fix and Joseph Lawson Hodges. 1989. Discriminatory analysis. Nonpara- metric discrimination: Consistency properties. International Statistical Review/Re- vue Internationale de Statistique 57, 3 (1989), 238-247.
Numerical methods of curve fitting. George Philip, Philip George Guest, Guest, Cambridge University PressPhilip George Guest and Philip George Guest. 2012. Numerical methods of curve fitting. Cambridge University Press.
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770-778.
Deep learning for time series classification: a review. Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller, Data mining and knowledge discovery. 33Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Deep learning for time series classification: a review. Data mining and knowledge discovery 33, 4 (2019), 917-963.
Inceptiontime: Finding alexnet for time series classification. Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, F Daniel, Jonathan Schmidt, Geoffrey I Weber, Lhassane Webb, Pierre-Alain Idoumghar, François Muller, Petitjean, Data Mining and Knowledge Discovery. 34Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan Weber, Geoffrey I Webb, Lhassane Idoumghar, Pierre- Alain Muller, and François Petitjean. 2020. Inceptiontime: Finding alexnet for time series classification. Data Mining and Knowledge Discovery 34, 6 (2020), 1936-1962.
Product quantization for nearest neighbor search. Herve Jegou, Matthijs Douze, Cordelia Schmid, 33Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine intelligence 33, 1 (2010), 117-128.
Billion-scale similarity search with gpus. Jeff Johnson, Matthijs Douze, Hervé Jégou, IEEE Transactions on Big Data. 7Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data 7, 3 (2019), 535-547.
Reducibility among combinatorial problems. M Richard, Karp, Complexity of computer computations. SpringerRichard M Karp. 1972. Reducibility among combinatorial problems. In Complexity of computer computations. Springer, 85-103.
Dimensionality reduction for fast similarity search in large time series databases. Eamonn Keogh, Kaushik Chakrabarti, Michael Pazzani, Sharad Mehrotra, Knowledge and information Systems. 3Eamonn Keogh, Kaushik Chakrabarti, Michael Pazzani, and Sharad Mehrotra. 2001. Dimensionality reduction for fast similarity search in large time series databases. Knowledge and information Systems 3, 3 (2001), 263-286.
Reformer: The efficient transformer. Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya, arXiv:2001.04451arXiv preprintNikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451 (2020).
Determinants of common stock prices: A time series analysis. John Kraft, Arthur Kraft, The journal of finance. 32John Kraft and Arthur Kraft. 1977. Determinants of common stock prices: A time series analysis. The journal of finance 32, 2 (1977), 417-425.
ImageNet Classification with Deep Convolutional Neural Networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, Advances in Neural Information Processing Systems. F. Pereira, C.J. Burges, L. Bottou, and K.Q. WeinbergerCurran Associates, Inc25Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet Clas- sification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, F. Pereira, C.J. Burges, L. Bottou, and K.Q. Wein- berger (Eds.), Vol. 25. Curran Associates, Inc. https://proceedings.neurips.cc/ paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
A survey on human activity recognition using wearable sensors. D Oscar, Miguel A Lara, Labrador, IEEE communications surveys & tutorials. 15Oscar D Lara and Miguel A Labrador. 2012. A survey on human activity recog- nition using wearable sensors. IEEE communications surveys & tutorials 15, 3 (2012), 1192-1209.
Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, Xifeng Yan, Advances in Neural Information Processing Systems. 32Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. 2019. Enhancing the locality and breaking the memory bottle- neck of transformer on time series forecasting. Advances in Neural Information Processing Systems 32 (2019).
Clustering of time series data-a survey. T Warren Liao, Pattern recognition. 38T Warren Liao. 2005. Clustering of time series data-a survey. Pattern recognition 38, 11 (2005), 1857-1874.
Fast similarity search in the presence of noise, scaling, and translation in time-series databases. Proceeding of the 21th International Conference on Very Large Data Bases. Rake& Agrawal King-lp Lin and Harpreet S Sawhney Kyuseok Shimeeding of the 21th International Conference on Very Large Data BasesRake& Agrawal King-lp Lin and Harpreet S Sawhney Kyuseok Shim. 1995. Fast similarity search in the presence of noise, scaling, and translation in time-series databases. In Proceeding of the 21th International Conference on Very Large Data Bases. 490-501.
Time Series Classification with HIVE-COTE: The Hierarchical Vote Collective of Transformation-Based Ensembles. Jason Lines, Sarah Taylor, Anthony Bagnall, ACM Trans. Knowl. Discov. Data. 12ArticleJason Lines, Sarah Taylor, and Anthony Bagnall. 2018. Time Series Classification with HIVE-COTE: The Hierarchical Vote Collective of Transformation-Based Ensembles. ACM Trans. Knowl. Discov. Data 12, 5, Article 52 (jul 2018), 35 pages.
An open access database for evaluating the algorithms of electrocardiogram rhythm and morphology abnormality detection. Feifei Liu, Chengyu Liu, Lina Zhao, Xiangyu Zhang, Xiaoling Wu, Xiaoyan Xu, Yulin Liu, Caiyun Ma, Shoushui Wei, Zhiqiang He, Journal of Medical Imaging and Health Informatics. 8Feifei Liu, Chengyu Liu, Lina Zhao, Xiangyu Zhang, Xiaoling Wu, Xiaoyan Xu, Yulin Liu, Caiyun Ma, Shoushui Wei, Zhiqiang He, et al. 2018. An open access database for evaluating the algorithms of electrocardiogram rhythm and morphology abnormality detection. Journal of Medical Imaging and Health Informatics 8, 7 (2018), 1368-1373.
Least squares quantization in PCM. Stuart Lloyd, IEEE transactions on information theory. 28Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE transactions on information theory 28, 2 (1982), 129-137.
. Ilya Loshchilov, Frank Hutter, arXiv:1711.05101Decoupled weight decay regularization. arXiv preprintIlya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017).
CDSA: cross-dimensional self-attention for multivariate, geo-tagged time series imputation. Jiawei Ma, Zheng Shou, Alireza Zareian, Hassan Mansour, Anthony Vetro, Shih-Fu Chang, arXiv:1905.09904arXiv preprintJiawei Ma, Zheng Shou, Alireza Zareian, Hassan Mansour, Anthony Vetro, and Shih-Fu Chang. 2019. CDSA: cross-dimensional self-attention for multivariate, geo-tagged time series imputation. arXiv preprint arXiv:1905.09904 (2019).
Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. A Yu, Malkov, Dmitry A Yashunin, 42Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence 42, 4 (2018), 824-836.
Time series: Similarity search and its applications. Tripti Negi, Veena Bansal, Proceedings of the International Conference on Systemics, Cybernetics and Informatics: ICSCI-04. the International Conference on Systemics, Cybernetics and Informatics: ICSCI-04Hyderabad, IndiaTripti Negi and Veena Bansal. 2005. Time series: Similarity search and its appli- cations. In Proceedings of the International Conference on Systemics, Cybernetics and Informatics: ICSCI-04, Hyderabad, India. 528-533.
Grail: efficient time-series representation learning. John Paparrizos, J Michael, Franklin, Proceedings of the VLDB Endowment. 12John Paparrizos and Michael J Franklin. 2019. Grail: efficient time-series repre- sentation learning. Proceedings of the VLDB Endowment 12, 11 (2019), 1762-1777.
On the difficulty of training recurrent neural networks. Razvan Pascanu, Tomas Mikolov, Yoshua Bengio, PMLRInternational conference on machine learning. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International conference on machine learning. PMLR, 1310-1318.
On the momentum term in gradient descent learning algorithms. Ning Qian, Neural networks. 12Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks 12, 1 (1999), 145-151.
AutoTransformer: Automatic Transformer Architecture Design for Time Series Classification. Yankun Ren, Longfei Li, Xinxing Yang, Pacific-Asia Conference on Knowledge Discovery and Data Mining. SpringerYankun Ren, Longfei Li, Xinxing Yang, and Jun Zhou. 2022. AutoTransformer: Automatic Transformer Architecture Design for Time Series Classification. In Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, 143- 155.
DeepAR: Probabilistic forecasting with autoregressive recurrent networks. David Salinas, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, International Journal of Forecasting. 36David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. 2020. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Inter- national Journal of Forecasting 36, 3 (2020), 1181-1191.
TS-CHIEF: a scalable and accurate forest algorithm for time series classification. Ahmed Shifaz, Charlotte Pelletier, François Petitjean, Geoffrey I Webb, Data Mining and Knowledge Discovery. 34Ahmed Shifaz, Charlotte Pelletier, François Petitjean, and Geoffrey I. Webb. 2020. TS-CHIEF: a scalable and accurate forest algorithm for time series classification. Data Mining and Knowledge Discovery 34 (2020), 742-775.
Smart devices are different: Assessing and mitigatingmobile sensing heterogeneities for activity recognition. Allan Stisen, Henrik Blunck, Sourav Bhattacharya, Thor Siiger Prentow, Mikkel Baun Kjaergaard, Anind Dey, Tobias Sonne, Mads Møller Jensen, Proceedings of the 13th ACM conference on embedded networked sensor systems. the 13th ACM conference on embedded networked sensor systemsAllan Stisen, Henrik Blunck, Sourav Bhattacharya, Thor Siiger Prentow, Mikkel Baun Kjaergaard, Anind Dey, Tobias Sonne, and Mads Møller Jensen. 2015. Smart devices are different: Assessing and mitigatingmobile sensing het- erogeneities for activity recognition. In Proceedings of the 13th ACM conference on embedded networked sensor systems. 127-140.
Robust anomaly detection for multivariate time series through stochastic recurrent neural network. Ya Su, Youjian Zhao, Chenhao Niu, Rong Liu, Wei Sun, Dan Pei, Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. the 25th ACM SIGKDD international conference on knowledge discovery & data miningYa Su, Youjian Zhao, Chenhao Niu, Rong Liu, Wei Sun, and Dan Pei. 2019. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2828-2837.
On-body localization of wearable devices: An investigation of position-aware activity recognition. Timo Sztyler, Heiner Stuckenschmidt, 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom). Timo Sztyler and Heiner Stuckenschmidt. 2016. On-body localization of wearable devices: An investigation of position-aware activity recognition. In 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
IEEE, 1-9.
Efficient transformers: A survey. Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler, ACM Computing Surveys (CSUR. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey. ACM Computing Surveys (CSUR) (2020).
Anomaly detection on time series. Mingyan Teng, 2010 IEEE International Conference on Progress in Informatics and Computing. IEEE1Mingyan Teng. 2010. Anomaly detection on time series. In 2010 IEEE International Conference on Progress in Informatics and Computing, Vol. 1. IEEE, 603-608.
An MSE statistic for comparing forecast accuracy across series. A Patrick, Thompson, International Journal of Forecasting. 6Patrick A Thompson. 1990. An MSE statistic for comparing forecast accuracy across series. International Journal of Forecasting 6, 2 (1990), 219-227.
Attention is All you Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems. Long Beach, CA, USAAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Con- ference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA. 5998-6008.
Algorithms for Scientific Computing in Python. Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, J Stéfan, Matthew Van Der Walt, Joshua Brett, K Jarrod Wilson, Nikolay Millman, Mayorov, R J Andrew, Eric Nelson, Robert Jones, Eric Kern, C J Larson, İlhan Carey, Yu Polat, Eric W Feng, Jake Moore, Denis Vanderplas, Josef Laxalde, Robert Perktold, Ian Cimrman, E A Henriksen, Charles R Quintero, Anne M Harris, Antônio H Archibald, Fabian Ribeiro, Pedregosa, 10.1038/s41592-019-0686-2Nature Methods. 17Paul van Mulbregtand SciPy 1.0 Contributors. 2020. SciPy 1.0: FundamentalPauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jar- rod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Al- gorithms for Scientific Computing in Python. Nature Methods 17 (2020), 261-272. https://doi.org/10.1038/s41592-019-0686-2
Sinong Wang, Z Belinda, Madian Li, Han Khabsa, Hao Fang, Ma, arXiv:2006.04768Linformer: Self-attention with linear complexity. arXiv preprintSinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Lin- former: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768 (2020).
Smartphone and smartwatch-based biometrics using activities of daily living. M Gary, Kenichi Weiss, Thaier Yoneda, Hayajneh, IEEE Access. 7Gary M Weiss, Kenichi Yoneda, and Thaier Hayajneh. 2019. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access 7 (2019), 133190-133202.
RobustPeriod: Robust Time-Frequency Mining for Multiple Periodicity Detection. Qingsong Wen, Kai He, Liang Sun, Yingying Zhang, Min Ke, Huan Xu, 10.1145/3448016.3452779Proceedings of the 2021 International Conference on Management of Data (Virtual Event, China) (SIGMOD '21). the 2021 International Conference on Management of Data (Virtual Event, China) (SIGMOD '21)New York, NY, USAAssociation for Computing MachineryQingsong Wen, Kai He, Liang Sun, Yingying Zhang, Min Ke, and Huan Xu. 2021. RobustPeriod: Robust Time-Frequency Mining for Multiple Periodicity Detection. In Proceedings of the 2021 International Conference on Management of Data (Virtual Event, China) (SIGMOD '21). Association for Computing Machinery, New York, NY, USA, 2328-2337. https://doi.org/10.1145/3448016.3452779
Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long, Advances in Neural Information Processing Systems. 34Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. 2021. Autoformer: De- composition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems 34 (2021), 22419-22430.
Jiehui Xu, Haixu Wu, Jianmin Wang, Mingsheng Long, arXiv:2110.02642Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy. arXiv preprintJiehui Xu, Haixu Wu, Jianmin Wang, and Mingsheng Long. 2021. Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy. arXiv preprint arXiv:2110.02642 (2021).
A review of data mining-based financial fraud detection research. Dianmin Yue, Xiaodan Wu, Yunfeng Wang, Yue Li, Chao-Hsien Chu, International Conference on Wireless Communications, Networking and Mobile Computing. Dianmin Yue, Xiaodan Wu, Yunfeng Wang, Yue Li, and Chao-Hsien Chu. 2007. A review of data mining-based financial fraud detection research. In 2007 Interna- tional Conference on Wireless Communications, Networking and Mobile Computing.
IEEE, 5519-5522.
Big bird: Transformers for longer sequences. Manzil Zaheer, Guru Guruganesh, Joshua Kumar Avinava Dubey, Chris Ainslie, Santiago Alberti, Philip Ontanon, Anirudh Pham, Qifan Ravula, Li Wang, Yang, Advances in Neural Information Processing Systems. 33Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems 33 (2020), 17283-17297.
A Transformer-based Framework for Multivariate Time Series Representation Learning. George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, Carsten Eickhoff, KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. SingaporeGeorge Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, and Carsten Eickhoff. 2021. A Transformer-based Framework for Multivariate Time Series Representation Learning. In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021. 2114-2124.
Informer: Beyond efficient transformer for long sequence time-series forecasting. Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang, Proceedings of AAAI. AAAIHaoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 2021. Informer: Beyond efficient transformer for long se- quence time-series forecasting. In Proceedings of AAAI.
| [] |
[
"CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions",
"CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions"
] | [
"Janis Goldzycher [email protected] \nDepartment of Computational Linguistics\nUniversity of Zurich\n\n"
] | [
"Department of Computational Linguistics\nUniversity of Zurich\n"
] | [] | The widespread popularity of social media has led to an increase in hateful, abusive, and sexist language, motivating methods for the automatic detection of such phenomena. The goal of the SemEval shared task Towards Explainable Detection of Online Sexism (EDOS 2023) is to detect sexism in English social media posts (subtask A), and to categorize such posts into four coarse-grained sexism categories (subtask B), and eleven fine-grained subcategories (subtask C). In this paper, we present our submitted systems for all three subtasks, based on a multi-task model that has been fine-tuned on a range of related tasks and datasets before being fine-tuned on the specific EDOS subtasks. We implement multi-task learning by formulating each task as binary pairwise text classification, where the dataset and label descriptions are given along with the input text. The results show clear improvements over a finetuned DeBERTa-V3 serving as a baseline leading to F 1 -scores of 85.9% in subtask A (rank 13/84), 64.8% in subtask B (rank 19/69), and 44.9% in subtask C (26/63). 1 | null | [
"https://export.arxiv.org/pdf/2306.03907v1.pdf"
] | 259,088,933 | 2306.03907 | b1ccb36d08e529653dd167fe5541025fe8346371 |
CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions
Janis Goldzycher [email protected]
Department of Computational Linguistics
University of Zurich
CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions
The widespread popularity of social media has led to an increase in hateful, abusive, and sexist language, motivating methods for the automatic detection of such phenomena. The goal of the SemEval shared task Towards Explainable Detection of Online Sexism (EDOS 2023) is to detect sexism in English social media posts (subtask A), and to categorize such posts into four coarse-grained sexism categories (subtask B), and eleven fine-grained subcategories (subtask C). In this paper, we present our submitted systems for all three subtasks, based on a multi-task model that has been fine-tuned on a range of related tasks and datasets before being fine-tuned on the specific EDOS subtasks. We implement multi-task learning by formulating each task as binary pairwise text classification, where the dataset and label descriptions are given along with the input text. The results show clear improvements over a finetuned DeBERTa-V3 serving as a baseline leading to F 1 -scores of 85.9% in subtask A (rank 13/84), 64.8% in subtask B (rank 19/69), and 44.9% in subtask C (26/63). 1
Introduction
With social media's expanding influence, there has been a rising emphasis on addressing the widespread issue of harmful language, especially sexist language (Meyer and Cukier, 2006;Simons, 2015;Das et al., 2020). Automatic content moderation and monitoring methods have become indispensable due to the sheer amount of posts and comments on social media platforms. However, the deployment of automatic methods has led to a new problem: current approaches to sexism detection rely on transformer-based language models whose inner workings, in spite of model interpretability research, generally remain opaque (Sun et al., 2021). This stands in contrast with the need for explainable and transparent decision processes in content moderation.
The EDOS 2023 shared task (Kirk et al., 2023) focuses on the detection (subtask A), and coarse-(subtask B) and fine-grained (subtask C) categorization of sexism. The purpose of sexism categorization is to aid the explainability of sexism detection models, where categorization can serve as additional information for why a post was classified as sexist.
In this paper, we present our approach for all three subtasks. The annotated data for detecting sexism is scarce compared to other natural language processing tasks and is often not publicly available. In response to this, we adopt a multi-task learning approach, where we first train a general model for the detection of hateful and abusive language, and incrementally adapt it to the target task. We implement multi-task learning via manipulation of the input, concretely by adding label descriptions and dataset identifiers. This means that the model is presented with a pairwise text classification task where it gets a label description and a dataset identifier as a first sequence and the text to classify as the second sequence. The model then learns to predict if the label description presented in the first sequence, in the context of a dataset identifier, applies to the input text of the second sequence. Figure 1 demonstrates the approach. We collect data for a range of related tasks, including hate speech detection, offensive language detection, emotion detection, stance detection on feminism and abortion, and target group detection, leading to an auxiliary training set of over 560,000 annotated examples.

Figure 1: Task Formulation: The task is formulated as binary pairwise text classification where the model receives as input a dataset identifier, a label description, and an input text and predicts if the label, as learned for the given dataset, applies to the input text. Note that the same input text can appear with different annotations.
Our method involves a three-stage training process. As a first step, we train a general abusive language detection model using all available training data. In the second step, we further fine-tune this model on all three EDOS subtasks, and finally, in the third step, we fine-tune the model only on the target subtask.
Our models obtain strong results for subtask A, a macro-F 1 score of 0.859 achieving place 13 out of 84, but rank lower in subtasks B and C, indicating the proposed approach works comparatively well with few classes during inference time, but decreases in performance, relative to other approaches, with a higher number of classes. Our ablation study demonstrates that multi-task learning with label descriptions leads to clear performance improvements over a baseline consisting of DeBERTa-V3 (He et al., 2021) fine-tuned on each subtask. However, it remains unclear if there is a positive contribution from the additionally proposed dataset identifier.
Related Work
Sexism Detection
Sexism detection, sometimes also called sexism identification, is the task of predicting if a given text (typically a social media post) is sexist or not. Most research on the detection of harmful language has focused on more general phenomena such as offensive language (Pradhan et al., 2020), abusive language (Nakov et al., 2021), or hate speech (Fortuna and Nunes, 2018). Sexism intersects with these concepts but is not entirely covered by them since it also refers to subtle prejudices and stereotypes expressed against women. Accordingly, datasets for hate speech often include women as one of multiple target groups (Mollas et al., 2022; Vidgen et al., 2021; Waseem, 2016), and thus contain sexist texts, but are not exhaustive of sexism, since they do not cover its subtle forms. Recently, there has been increased attention on the detection of sexism and misogyny, leading to one shared task on sexism detection (Rodríguez-Sánchez et al., 2021) and three shared tasks on misogyny detection (Fersini et al., 2018a,b, 2020).
Harmful language detection tasks, such as sexism detection, are typically formulated as binary text classification tasks (Fortuna and Nunes, 2018). Categorizing sexism and misogyny is usually cast as a single-label multi-class classification task (Fersini et al., 2018a,b), with the exception of Parikh et al. (2019), who formulate the task as multi-label multi-task classification. Earlier approaches to sexism detection varied in their methods, ranging from manual feature engineering using n-grams, lexical features, and word embeddings with statistical machine learning (Fersini et al., 2018a,b) to custom neural architectures (Fersini et al., 2020). Current approaches typically rely on fine-tuning pre-trained language models (Fersini et al., 2020; Rodríguez-Sánchez et al., 2021).

Figure 2: Label taxonomy with class distributions. Note that the category not sexist is absent from subtasks B and C. The percentages provided for these subtasks pertain solely to the sexist class, rather than the entire dataset.
Label descriptions and Multi-Task Learning
Prompts (Liu et al., 2021) and task descriptions (Raffel et al., 2020) have been used to condition generative models, while hypotheses and label descriptions (Zhang et al., 2022) have been used to condition classification models in multi-task settings to produce a desired output. Multiple works have shown that multi-task learning (Caruana, 1998; Ruder, 2017) with auxiliary tasks such as polarity classification, aggression detection, emotion detection, and offensive language detection can benefit sexism detection (Abburi et al., 2020; Plaza-del Arco et al., 2021; Rodrıguez-Sánchez et al., 2021; Safi Samghabadi et al., 2020). However, to the best of our knowledge, our approach is the first to implement multi-task learning for sexism detection and categorization via label descriptions and without multiple model heads.
Data
EDOS Dataset
The EDOS 2023 dataset (Kirk et al., 2023) contains 20,000 posts from Gab and Reddit, labelled on three levels. On the top level, it is annotated with the binary labels sexism and not-sexism, where sexism is defined as "any abuse or negative sentiment that is directed towards women based on their gender or based on their gender combined with one or more other identity attributes (e.g., Black women, Muslim women, Trans women)."2 The posts labelled as sexist are further classified into one of four categories, and eleven subcategories called vectors. The label taxonomy and the respective class distributions are displayed in Figure 2. 14,000 labelled examples are released as training data, and 2,000 and 4,000 examples are held back for validation and testing, respectively. Additionally, the shared task organizers provide one million unlabelled examples from Reddit and one million unlabelled examples from Gab. For our approach, we do not make use of the unlabelled data.
Auxiliary Datasets
However, we do make use of the following additional, labelled datasets as auxiliary training sets for multi-task learning:
DGHSD The "Dynamically Generated Hate Speech Dataset" (Vidgen et al., 2021) contains artificial adversarial examples aimed at tricking a binary hate speech detection model into predicting the wrong class.
MHS The "Measuring Hate Speech" dataset (Kennedy et al., 2020) contains comments sourced from Youtube, Twitter, and Reddit and is annotated for ten attributes related to hate speech. We only use the subset of labels listed in Table 1 thal et al., 2017), and stance detection (Mohammad et al., 2016) for stances on the topics feminism and abortion.
Preprocessing
During preprocessing, we replaced all URLs in the input texts with the placeholder string "[URL]", all usernames (strings starting with an "@") with "[USER]", and all emojis with the respective textual description, also surrounded by brackets.
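A minimal sketch of this preprocessing (our illustration, not the authors' code; it assumes the third-party `emoji` package, whose `demojize` output uses underscore-separated descriptions):

```python
import re
import emoji  # third-party package: pip install emoji

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+", "[URL]", text)   # URLs -> [URL]
    text = re.sub(r"@\w+", "[USER]", text)          # usernames -> [USER]
    # Replace emojis with bracketed textual descriptions.
    return emoji.demojize(text, delimiters=("[", "]"))

print(preprocess("so true @user1 😀 https://example.com/x"))
# -> "so true [USER] [grinning_face] [URL]"
```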
System Description
We formulate each EDOS subtask as a binary pairwise classification task where the model predicts if a given label applies to the input text. This allows us to simultaneously train on multiple datasets with different labeling schemes and a different number of distinct labels without adjusting the model architecture or having to use multiple model heads. Formally, our model receives as input (1) the concatenation of a dataset identifier d_i ∈ D and a label description l_j ∈ L, and (2) the input text t. It predicts the probability distribution y = softmax(model(concat(d_i, l_j), t)), where y ∈ R². y_1 then denotes the probability that l_j, given the context of d_i, applies to t.
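This pairwise formulation maps directly onto standard sequence-pair classification. A hedged sketch with the Hugging Face transformers library is shown below (illustrative only: the freshly initialized two-way head would still need fine-tuning, and the label/input strings are made up):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "microsoft/deberta-v3-large"           # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

first = "EDOS; sexism (against women)"        # dataset identifier + label description
second = "example post to classify"           # input text
enc = tok(first, second, return_tensors="pt", truncation=True)
probs = model(**enc).logits.softmax(dim=-1)
p_label_applies = probs[0, 1].item()          # y_1: probability the label applies
```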
Model Details
We use DeBERTa (He et al., 2020), specifically DeBERTa-V3-large (He et al., 2021) fine-tuned on a range of natural language inference (NLI) datasets (Laurer et al., 2022),3 since this model is already fine-tuned to classify and relate text pairs. We only change the output dimensionality from 3 to 2 for binary classification. In ablation tests, we also use DeBERTa-V3-large without further fine-tuning.4

3 The model is publicly available at https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
4 The model is publicly available at https://huggingface.co/microsoft/deberta-v3-large.
Label Descriptions
Where possible, we use the label names listed in Figure 2 and Table 1 as the label description. However, we make the following exceptions and adjustments: We strip the numbering from the label names for EDOS subtasks B, and C, and add the string " (against women)" at the end since this target group information is not yet in the label name. For multi-class classification in auxiliary datasets, we follow the format "<label type>: <label value>". For example, for sentiment classification, which has the three possible label values negative, neutral, and positive, we can generate a true example for positive sentiment with the label description "sentiment: positive".
Dataset Identifier
The same labels may have slightly different definitions in different datasets, or may be applied differently due to different annotators. If no further information is given to the model, this could be a source of noise. Among our auxiliary datasets, for example, multiple datasets contain the label hate speech. To account for this, we introduce dataset identifiers, which are short dataset abbreviations of a few characters in length that are concatenated with the label description.
Training Procedure
We train the model in three phases: In the first phase, the model is trained with all available examples of all collected datasets. In the second phase, the best checkpoint from the previous phase is further fine-tuned on EDOS data from all three task levels (subtasks A, B, and C). Finally, in the third phase, the model is fine-tuned only on examples from the relevant subtask. We consider all three annotations from the three subtasks per example for validation during the first two phases, resulting in 6,000 annotations for each validation. In the last training phase, the model is only fine-tuned on one subtask. Thus, we only validated on the labels for that specific subtask. Further training details are provided in Table 2.
Random Negative Sampling
When converting a multi-class classification task (such as subtask B and C) to a binary pairwise text classification task, each positive example for a class c_k ∈ C can be turned into |C| − 1 negative examples by choosing a label c_j ∈ C \ {c_k}. However, generating all possible negative examples for a positive example would result in an imbalanced training set. Therefore, in settings with more than two classes, we instead sample a random wrong class label during training to create one negative example for each positive example. This means the model will be trained on different negative examples in each epoch while the positive examples stay the same.
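A sketch of this sampling step (our illustration; the label strings are abbreviated stand-ins for the descriptions of Section 4.2):

```python
import random

LABELS_B = [
    "threats, plans to harm, and incitement (against women)",
    "derogation (against women)",
    "animosity (against women)",
    "prejudiced discussions (against women)",
]

def make_training_pairs(text: str, gold: str, labels=LABELS_B):
    # One positive pair plus one freshly sampled negative pair per epoch.
    negative = random.choice([l for l in labels if l != gold])
    return [((gold, text), 1), ((negative, text), 0)]
```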
Inference
During inference, we predict a probability p_i for each candidate class c_i ∈ C and select the class with the highest probability. This means that we perform |C| forward passes per prediction, except for binary classification (subtask A), where we can use just one forward pass to predict a probability for the label sexism.
Since our model produces just one probability for subtask A, we can select a probability threshold. For our official submission and for our ablation experiments, we test the thresholds {0.5, 0.6, 0.7, 0.8, 0.9} on the validation set and use the highest-performing threshold for the test set.
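Both inference steps can be sketched as follows (our illustration; `score` stands for a function returning the model's probability that a label applies to a text, and `p_val`/`y_val` are hypothetical validation-set probabilities and gold labels):

```python
import numpy as np
from sklearn.metrics import f1_score

# Multi-class inference: one forward pass per candidate label, then argmax.
def predict(text, labels, score):
    probs = [score(label, text) for label in labels]
    return labels[int(np.argmax(probs))]

# Subtask A: pick the decision threshold maximizing macro-F1 on validation data.
def best_threshold(p_val, y_val, candidates=(0.5, 0.6, 0.7, 0.8, 0.9)):
    return max(candidates,
               key=lambda t: f1_score(y_val, (p_val >= t).astype(int),
                                      average="macro"))
```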
Experiments and Results

Table 4 contains the official evaluation scores, showing strong results for subtask A and moderately good results for subtasks B and C.
Ablation Study
To illustrate the relative importance of the proposed methods, we systematically add components to a baseline model until we arrive at the submitted models and run each model version with three random seeds for our ablation tests. We evaluate the following settings: (1) "single task EDOS": We start with DeBERTa-V3-large models, fine-tuned on each subtask individually, serving as our baseline. (2) "+ label description": We add label descriptions while still only training each model on one subtask. (3) "multi-task EDOS via label descriptions": The models are trained on all three subtasks simultaneously using label descriptions.
(4) "+ NLI fine-tuning": We repeat the setting but start training from the DeBERTa-V3 checkpoint fine-tuned on NLI datasets (see Section 4.1). (5) "+ single task fine-tuning": We add a second finetuning phase in which the multi-task model is only fine-tuned on the target subtask. (6) "+ fine-tuning on AUX ": We add fine-tuning on auxiliary tasks and EDOS simultaneously as a first training phase. (7) "+ dataset identifier": We add the dataset identifier to the input. (8) "class balancing": Finally, we perform upsampling to increase the relative frequency of scarce classes. 6 The upsampled version of the dataset is only used during the last finetuning phase. Table 3 contains the test set results averaged over the three runs with different seeds. The full results for each run, including evaluations after intermediary training phases, are displayed in Appendix A.
In what follows we analyze the effects of different system components and settings.
Baseline
The baseline single task EDOS already shows strong performance on subtask A, but leads to surprisingly low scores on subtasks B and C. We assume that this is due to underprediction and the low performance of the very scarce classes (four classes of subtask C are below 3%), which can drastically reduce the macro-F1 score.
Multi-Task Learning on all EDOS-Subtasks
Comparing the baseline with multi-task EDOS via label descriptions shows a clear improvement of 1.1 percentage points (pp) from multi-task learning with label descriptions on subtask A, and drastic improvements, more than doubling performance, for subtasks B and C. Looking at single task models with label descriptions (+ label description) reveals that on subtask A the entire increase in performance is due to label descriptions, while in subtasks B and C the dramatic performance increases are due to multi-task learning.

6 In subtask B, we increase classes with a frequency below 19% to ~19%. For subtask C, we upsample classes below 9% to ~9%.
Starting with an NLI Model Starting training from a DeBERTa-V3 checkpoint that is already fine-tuned on NLI increases scores on all three subtasks. The more classes the task has, the larger is the increase.
Additional Single-Task Fine-Tuning Adding a second subtask-specific fine-tuning phase after training on all EDOS subtasks leads to increases of 6.7pp and 4.0pp for subtasks B and C. However, it reduces performance on subtask A by 0.4pp.
Multi-Task Learning on Auxiliary Tasks Inserting a first training phase that includes all auxiliary tasks and EDOS subtasks into the training process leads to further improvements of up to 1.0pp on all subtasks.
The Dataset Identifier Adding a dataset identifier to the input leads to mixed results. On subtask A the model's performance does not change; on subtask B it slightly decreases; and on subtask C we observe a clear increase of 1.4pp. Overall, we cannot draw a clear conclusion about the effects of the dataset identifier.
Class Balancing Finally, we observe that upsampling low-frequency classes in subtasks B and C has positive effects of 1.3pp and 3.5pp respectively.
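A minimal sketch of this balancing step, assuming the training data sits in a pandas data frame with a label column (the layout and function name are ours):

import pandas as pd

def upsample_scarce(df, label_col="label", min_frac=0.19):
    # Duplicate examples of classes below min_frac until each of them
    # reaches roughly min_frac of the (grown) dataset.
    parts = []
    for _, group in df.groupby(label_col):
        frac = len(group) / len(df)
        reps = max(1, round(min_frac / frac)) if frac < min_frac else 1
        parts.append(pd.concat([group] * reps, ignore_index=True))
    return pd.concat(parts, ignore_index=True).sample(frac=1.0, random_state=0)

# Subtask B: upsample_scarce(df_b, min_frac=0.19); subtask C: min_frac=0.09.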
Error Analysis
Figures 3 and 4 display the confusion matrices averaged over three random seeds for the submitted model configurations for subtasks B and C.
In subtask B, we see that the category threats, plans to harm, and incitement is accurately predicted. The derogation class has a high recall, likely because it is the most common category. However, this also results in a significant number of false positives from the animosity and prejudiced discussions classes. As a consequence, these last two classes are strongly underpredicted.
In subtask C, it is evident that mispredictions generally stem from class confusions within the subtask B categories. Erroneous predictions outside of these categories are uncommon, except for descriptive attacks, which are frequently mistaken for various forms of animosity.
Conclusion
In this paper, we presented our approaches and results for all three subtasks of the shared task Towards Explainable Sexism Detection. We developed and evaluated a multi-task learning model that is trained in three phases: (1) training a general multi-task abusive language detection model, (2) fine-tuning the model on all three EDOS subtasks, thus specializing it in sexism detection, and (3) fine-tuning the model only on the target subtask. We implemented the multi-task capabilities only via input manipulation, i.e., label descriptions and dataset identifiers, without modifying the model architecture or using multiple model heads.
In the official shared task evaluation, our approach led to strong results on subtask A and moderately good results on subtasks B and C, indicating that our method's performance degrades more than other approaches as the number of classes increases. Our ablation tests demonstrate that multi-task learning via label descriptions led to significant performance improvements on subtask A and large performance improvements on subtasks B and C. It remains unclear if the dataset identifier has any positive effect. Overall, the results show that our model for binary sexism detection is reliable, but that there is still much room for improvement in sexism categorization.
A Full Results
Table 6: Full results of the ablation study on the development and test set. LD refers to label descriptions, and DI refers to dataset identifiers. DBV3 refers to DeBERTa-V3-large and DBV3-NLI refers to DeBERTa-V3-large fine-tuned on NLI datasets. ρ refers to the threshold applied for subtask A. The settings of the models submitted to the official evaluation are marked in grey. Ph1, Ph2, and Ph3 stand for training phases 1, 2, and 3, respectively.

Training sets and training phases | Run | LD | DI | Base model | Dev A | ρ | Dev B | Dev C | Test A | Test B | Test C
EDOS A | 1 | ✗ | ✗ | DBV3 | 0.840 | 0.7 | - | - | 0.837 | - | -
EDOS A | 2 | ✗ | ✗ | DBV3 | 0.845 | 0.5 | - | - | 0.848 | - | -
EDOS A | 3 | ✗ | ✗ | DBV3 | 0.837 | 0.5 | - | - | 0.836 | - | -
EDOS B | 1 | ✗ | ✗ | DBV3 | - | - | 0.159 | - | - | 0.159 | -
EDOS B | 2 | ✗ | ✗ | DBV3 | - | - | 0.159 | - | - | 0.159 | -
EDOS B | 3 | ✗ | ✗ | DBV3 | - | - | 0.306 | - | - | 0.287 | -
EDOS C | 1 | ✗ | ✗ | DBV3 | - | - | - | 0.114 | - | - | 0.115
EDOS C | 2 | ✗ | ✗ | DBV3 | - | - | - | 0.136 | - | - | 0.126
EDOS C | 3 | ✗ | ✗ | DBV3 | - | - | - | 0.110 | - | - | 0.124
EDOS A | 1 | ✓ | ✗ | DBV3 | 0.853 | 0.5 | - | - | 0.852 | - | -
EDOS A | 2 | ✓ | ✗ | DBV3 | 0.845 | 0.5 | - | - | 0.852 | - | -
EDOS A | 3 | ✓ | ✗ | DBV3 | 0.851 | 0.5 | - | - | 0.849 | - | -
EDOS B | 1 | ✓ | ✗ | DBV3 | - | - | 0.162 | - | - | 0.159 | -
EDOS B | 2 | ✓ | ✗ | DBV3 | - | - | 0.162 | - | - | 0.159 | -
EDOS B | 3 | ✓ | ✗ | DBV3 | - | - | 0.159 | - | - | 0.161 | -
EDOS C | 1 | ✓ | ✗ | DBV3 | - | - | - | 0.117 | - | - | 0.118
EDOS C | 2 | ✓ | ✗ | DBV3 | - | - | - | 0.094 | - | - | 0.086
EDOS C | 3 | ✓ | ✗ | DBV3 | - | - | - | 0.078 | - | - | 0.089
EDOS ABC | 1 | ✓ | ✗ | DBV3 | 0.865 | 0.5 | 0.556 | 0.225 | 0.851 | 0.530 | 0.253
(Table 6 continues after the figures and tables below.)
Figure 2: EDOS class distribution.
Figure 3: Visualized normalized confusion matrix for subtask B.
Figure 4: Visualized normalized confusion matrix for subtask C.
Table 1: Label distributions of the auxiliary datasets.

dataset | label type | label value | size
DGHSD | hate speech | yes: 46.1%, no: 53.9% | 32,924
SBF | lewd | yes: 10.1%, no: 89.9% | 35,424
SBF | offensive | yes: 47.1%, no: 52.9% | 35,424
MHS | hate speech | yes: 40.5%, no: 59.5% | 130,000
MHS | targets gender | yes: 29.8%, no: 70.2% | 130,000
MHS | targets women | yes: 21.9%, no: 78.1% | 130,000
TWE | offensive | yes: 33.1%, no: 66.9% | 11,916
TWE | sentiment | negative: 15.5%, neutral: 45.3%, positive: 39.1% | 45,615
TWE | emotion | anger: 43.0%, joy: 21.7%, optimism: 9.0%, sadness: 26.3% | 3,257
TWE | hate | yes: 42.0%, no: 58.0% | 9,000
TWE | irony | yes: 50.5%, no: 49.5% | 2,862
TWE | stance feminist | none: 18.9%, against: 49.4%, favor: 31.7% | 597
TWE | stance abortion | none: 27.1%, against: 54.3%, favor: 18.6% | 587

SBF The "Social Bias Frames" dataset (Sap et al., 2020) is a combination of multiple Twitter datasets (Founta et al., 2018; Davidson et al., 2017; Waseem and Hovy, 2016) with newly collected data from Reddit, Gab, and Stormfront. We only use the subset of labels listed in Table 1.
TWE "TweetEval" (Barbieri et al., 2020) combines multiple datasets for different tasks into a single benchmark for detecting various aspects of tweets. We use the datasets for emotion classification (Mohammad et al., 2018), irony detection (Van Hee et al., 2018), hate speech detection (Basile et al., 2019), offensive language detection (Zampieri et al., 2019), sentiment detection (Rosenthal et al., 2017), and stance detection (Mohammad et al., 2016).
Table 2: Training hyperparameters. The left column refers to the different training phases. general applies to all training phases and all EDOS subtasks. AUX+EDOS refers to training on all auxiliary datasets, EDOS to training on all EDOS subtasks, and EDOS A/B/C to training only on the target subtask.
Table 3: Results of the ablation study on the test set. The metric is macro-F1.

Table 4: Results of the official evaluation on the test set.

Subtask | F1 | Rank
Subtask A | 0.859 | 13/84
Subtask B | 0.648 | 19/69
Subtask C | 0.449 | 26/63
Table 6 (continued):

Training sets and training phases | Run | LD | DI | Base model | Dev A | ρ | Dev B | Dev C | Test A | Test B | Test C
EDOS ABC | 2 | ✓ | ✗ | DBV3 | 0.850 | 0.5 | 0.466 | 0.193 | 0.845 | 0.449 | 0.184
EDOS ABC | 3 | ✓ | ✗ | DBV3 | 0.860 | 0.7 | 0.570 | 0.329 | 0.857 | 0.533 | 0.309
EDOS ABC | 1 | ✓ | ✗ | DBV3-NLI | 0.855 | 0.5 | 0.616 | 0.439 | 0.854 | 0.554 | 0.350
EDOS ABC | 2 | ✓ | ✗ | DBV3-NLI | 0.855 | 0.5 | 0.617 | 0.431 | 0.852 | 0.555 | 0.355
EDOS ABC | 3 | ✓ | ✗ | DBV3-NLI | 0.854 | 0.6 | 0.614 | 0.440 | 0.855 | 0.558 | 0.351
Ph1: EDOS ABC, Ph2: EDOS A | 1 | ✓ | ✗ | DBV3-NLI | 0.853 | 0.5 | - | - | 0.848 | - | -
Ph1: EDOS ABC, Ph2: EDOS A | 2 | ✓ | ✗ | DBV3-NLI | 0.855 | 0.6 | - | - | 0.848 | - | -
Ph1: EDOS ABC, Ph2: EDOS A | 3 | ✓ | ✗ | DBV3-NLI | 0.854 | 0.9 | - | - | 0.855 | - | -
Ph1: EDOS ABC, Ph2: EDOS B | 1 | ✓ | ✗ | DBV3-NLI | - | - | 0.665 | - | - | 0.615 | -
Ph1: EDOS ABC, Ph2: EDOS B | 2 | ✓ | ✗ | DBV3-NLI | - | - | 0.683 | - | - | 0.637 | -
Ph1: EDOS ABC, Ph2: EDOS B | 3 | ✓ | ✗ | DBV3-NLI | - | - | 0.683 | - | - | 0.616 | -
Ph1: EDOS ABC, Ph2: EDOS C | 1 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.496 | - | - | 0.412
Ph1: EDOS ABC, Ph2: EDOS C | 2 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.496 | - | - | 0.412
Ph1: EDOS ABC, Ph2: EDOS C | 3 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.496 | - | - | 0.412
Ph1: AUX + EDOS ABC | 1 | ✓ | ✗ | DBV3-NLI | 0.825 | 0.5 | 0.283 | 0.247 | 0.831 | 0.263 | 0.228
Ph1: AUX + EDOS ABC | 2 | ✓ | ✗ | DBV3-NLI | 0.825 | 0.5 | 0.291 | 0.230 | 0.828 | 0.269 | 0.237
Ph1: AUX + EDOS ABC | 3 | ✓ | ✗ | DBV3-NLI | 0.824 | 0.5 | 0.302 | 0.245 | 0.827 | 0.284 | 0.239
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 1 | ✓ | ✗ | DBV3-NLI | 0.850 | 0.6 | 0.601 | 0.422 | 0.860 | 0.541 | 0.382
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 2 | ✓ | ✗ | DBV3-NLI | 0.851 | 0.6 | 0.608 | 0.421 | 0.859 | 0.549 | 0.379
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 3 | ✓ | ✗ | DBV3-NLI | 0.853 | 0.5 | 0.602 | 0.424 | 0.857 | 0.538 | 0.380
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 1 | ✓ | ✗ | DBV3-NLI | 0.855 | 0.6 | - | - | 0.860 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 2 | ✓ | ✗ | DBV3-NLI | 0.854 | 0.7 | - | - | 0.858 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 3 | ✓ | ✗ | DBV3-NLI | 0.855 | 0.7 | - | - | 0.857 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 1 | ✓ | ✗ | DBV3-NLI | - | - | 0.663 | - | - | 0.594 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 2 | ✓ | ✗ | DBV3-NLI | - | - | 0.693 | - | - | 0.657 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 3 | ✓ | ✗ | DBV3-NLI | - | - | 0.689 | - | - | 0.649 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 1 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.507 | - | - | 0.423
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 2 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.495 | - | - | 0.425
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 3 | ✓ | ✗ | DBV3-NLI | - | - | - | 0.478 | - | - | 0.401
Ph1: AUX + EDOS ABC | 1 | ✓ | ✓ | DBV3-NLI | 0.796 | 0.5 | 0.260 | 0.196 | 0.805 | 0.236 | 0.214
Ph1: AUX + EDOS ABC | 2 | ✓ | ✓ | DBV3-NLI | 0.798 | 0.5 | 0.259 | 0.206 | 0.801 | 0.237 | 0.207
Ph1: AUX + EDOS ABC | 3 | ✓ | ✓ | DBV3-NLI | 0.793 | 0.5 | 0.257 | 0.226 | 0.803 | 0.253 | 0.230
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 1 | ✓ | ✓ | DBV3-NLI | 0.862 | 0.6 | 0.637 | 0.466 | 0.859 | 0.605 | 0.395
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 2 | ✓ | ✓ | DBV3-NLI | 0.852 | 0.5 | 0.565 | 0.411 | 0.857 | 0.533 | 0.370
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC | 3 | ✓ | ✓ | DBV3-NLI | 0.849 | 0.6 | 0.569 | 0.424 | 0.859 | 0.534 | 0.366
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 1 | ✓ | ✓ | DBV3-NLI | 0.858 | 0.7 | - | - | 0.859 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 2 | ✓ | ✓ | DBV3-NLI | 0.855 | 0.6 | - | - | 0.856 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS A | 3 | ✓ | ✓ | DBV3-NLI | 0.862 | 0.5 | - | - | 0.861 | - | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 1 | ✓ | ✓ | DBV3-NLI | - | - | 0.674 | - | - | 0.633 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 2 | ✓ | ✓ | DBV3-NLI | - | - | 0.665 | - | - | 0.642 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B | 3 | ✓ | ✓ | DBV3-NLI | - | - | 0.664 | - | - | 0.613 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 1 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.522 | - | - | 0.455
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 2 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.464 | - | - | 0.419
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C | 3 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.473 | - | - | 0.419
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B (up to ~19%) | 1 | ✓ | ✓ | DBV3-NLI | - | - | 0.679 | - | - | 0.653 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B (up to ~19%) | 2 | ✓ | ✓ | DBV3-NLI | - | - | 0.677 | - | - | 0.642 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS B (up to ~19%) | 3 | ✓ | ✓ | DBV3-NLI | - | - | 0.661 | - | - | 0.632 | -
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C (up to ~9%) | 1 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.473 | - | - | 0.462
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C (up to ~9%) | 2 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.497 | - | - | 0.470
Ph1: AUX + EDOS ABC, Ph2: EDOS ABC, Ph3: EDOS C (up to ~9%) | 3 | ✓ | ✓ | DBV3-NLI | - | - | - | 0.516 | - | - | 0.466
We make our code publicly available at https://github.com/jagol/CL-UZH-EDOS-2023.
https://codalab.lisn.upsaclay.fr/competitions/7124#learn_the_details-overview
This applies to EDOS subtasks B and C, and the sentiment-, emotion-, and stance-detection tasks in TweetEval.
Acknowledgments
We thank Chantal Amrhein and Simon Clematide for the valuable conversations and suggestions, and Jonathan Schaber and Gerold Schneider for the helpful comments. We also thank the anonymous reviewers for their constructive feedback.
References

Harika Abburi, Pulkit Parikh, Niyati Chhaya, and Vasudeva Varma. 2020. Semi-supervised multi-task learning for multi-label fine-grained sexism classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5810-5820, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa Anke, and Leonardo Neves. 2020. TweetEval: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644-1650, Online. Association for Computational Linguistics.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Rich Caruana. 1998. Multitask Learning. In Learning to Learn, pages 95-133. Springer US, Boston, MA.
Mithun Das, Binny Mathew, Punyajoy Saha, Pawan Goyal, and Animesh Mukherjee. 2020. Hate speech in online social media. SIGWEB Newsletter, Autumn 2020. New York, NY, USA.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1):512-515. Montréal, Québec, Canada.
Elisabetta Fersini, Debora Nozza, Paolo Rosso, et al. 2018a. Overview of the EVALITA 2018 Task on Automatic Misogyny Identification (AMI). In EVALITA Evaluation of NLP and Speech Tools for Italian: Proceedings of the Final Workshop, 12-13 December 2018, Naples. Accademia University Press.
Elisabetta Fersini, Debora Nozza, Paolo Rosso, et al. 2020. AMI@EVALITA2020: Automatic Misogyny Identification. In Proceedings of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2020). CEUR Workshop Proceedings. Online.
Elisabetta Fersini, Paolo Rosso, and Maria Anzovino. 2018b. Overview of the Task on Automatic Misogyny Identification at IberEval 2018. IberEval@SEPLN, 2150:214-228. Seville, Spain.
Paula Fortuna and Sérgio Nunes. 2018. A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Surveys, 51(4). New York, NY, USA.
Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior. In Twelfth International AAAI Conference on Web and Social Media.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654.
Chris J. Kennedy, Geoff Bacon, Alexander Sahn, and Claudia von Vacano. 2020. Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv:2009.10277 [cs].
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Hannah Rose Kirk, Wenjie Yin, Bertie Vidgen, and Paul Röttger. 2023. SemEval-2023 Task 10: Explainable Detection of Online Sexism. In Proceedings of the 17th International Workshop on Semantic Evaluation, Toronto, Canada. Association for Computational Linguistics.
Moritz Laurer, Wouter van Atteveldt, Andreu Casas, and Kasper Welbers. 2022. Less Annotating, More Classifying - Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI. Online.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. arXiv:2107.13586 [cs].
R. Meyer and M. Cukier. 2006. Assessing the attack threat due to IRC channels. In International Conference on Dependable Systems and Networks (DSN'06), pages 467-472.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in Tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1-17. New Orleans, LA, USA.
Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 Task 6: Detecting Stance in Tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41. San Diego, California, USA.
Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2022. ETHOS: A Multi-Label Hate Speech Detection Dataset. Complex & Intelligent Systems.
Preslav Nakov, Vibha Nayak, Kyle Dent, Ameya Bhatawdekar, Sheikh Muhammad Sarwar, Momchil Hardalov, Yoan Dinkov, Dimitrina Zlatkova, Guillaume Bouchard, and Isabelle Augenstein. 2021. Detecting Abusive Language on Online Platforms: A Critical Analysis. arXiv:2103.00153 [cs].
Pulkit Parikh, Harika Abburi, Pinkesh Badjatiya, Radhika Krishnan, Niyati Chhaya, Manish Gupta, and Vasudeva Varma. 2019. Multi-label categorization of accounts of sexism using a neural framework. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1642-1652, Hong Kong, China. Association for Computational Linguistics.
Flor Miriam Plaza-del-Arco, M. Dolores Molina-González, L. A. Ureña-López, and M. T. Martín-Valdivia. 2021. Sexism identification in social networks using a multi-task learning system. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021), co-located with the Conference of the Spanish Society for Natural Language Processing (SEPLN 2021), volume 2943, pages 491-499, Málaga, Spain.
Rahul Pradhan, Ankur Chaturvedi, Aprna Tripathi, and Dilip Kumar Sharma. 2020. A Review on Offensive Language Detection. In Advances in Data and Information Sciences, Lecture Notes in Networks and Systems, pages 433-439, Singapore. Springer.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140):1-67.
Francisco Rodríguez-Sánchez, Jorge Carrillo-de-Albornoz, and Laura Plaza. 2021. A multi-task and multilingual model for sexism identification in social networks. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021), co-located with the Conference of the Spanish Society for Natural Language Processing (SEPLN 2021).
Francisco Rodríguez-Sánchez, Jorge Carrillo de Albornoz, Laura Plaza, Julio Gonzalo, Paolo Rosso, Miriam Comet, and Trinidad Donoso. 2021. Overview of EXIST 2021: Sexism identification in social networks. Procesamiento del Lenguaje Natural, 69:229-240.
Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518, Vancouver, Canada. Association for Computational Linguistics.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.
Niloofar Safi Samghabadi, Parth Patwa, Srinivas PYKL, Prerana Mukherjee, Amitava Das, and Thamar Solorio. 2020. Aggression and misogyny detection using BERT: A multi-task approach. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 126-131, Marseille, France. European Language Resources Association (ELRA).
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477-5490, Online. Association for Computational Linguistics.
Rachel Noelle Simons. 2015. Addressing Gender-Based Harassment in Social Media: A Call to Action. iConference 2015 Proceedings. Online.
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard Hovy, and Jiwei Li. 2021. Interpreting Deep Learning Models in Natural Language Processing: A Review. arXiv:2110.10470.
Cynthia Van Hee, Els Lefever, and Véronique Hoste. 2018. SemEval-2018 task 3: Irony detection in English tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 39-50, New Orleans, Louisiana. Association for Computational Linguistics.
Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021. Learning from the worst: Dynamically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1667-1682, Online. Association for Computational Linguistics.
Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021. Entailment as Few-Shot Learner. arXiv:2104.14690 [cs].
Zeerak Waseem. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138-142, Austin, Texas. Association for Computational Linguistics.
Zeerak Waseem and Dirk Hovy. 2016. Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86.
Ruohong Zhang, Yau-Shian Wang, Yiming Yang, Donghan Yu, Tom Vu, and Li Lei. 2022. Long-tailed extreme multi-label text classification with generated pseudo label descriptions. arXiv, abs/2204.00958.
| [] |
[
"GMMap: Memory-Efficient Continuous Occupancy Map Using Gaussian Mixture Model",
"GMMap: Memory-Efficient Continuous Occupancy Map Using Gaussian Mixture Model"
] | [
"Peter Zhi Xuan Li",
"Sertac Karaman",
"Vivienne Sze"
] | [] | [] | Energy consumption of memory accesses dominates the compute energy in energy-constrained robots which require a compact 3D map of the environment to achieve autonomy. Recent mapping frameworks only focused on reducing the map size while incurring significant memory usage during map construction due to multi-pass processing of each depth image. In this work, we present a memory-efficient continuous occupancy map, named GMMap, that accurately models the 3D environment using a Gaussian Mixture Model (GMM). Memory-efficient GMMap construction is enabled by the single-pass compression of depth images into local GMMs which are directly fused together into a globally-consistent map. By extending Gaussian Mixture Regression to model unexplored regions, occupancy probability is directly computed from Gaussians. Using a low-power ARM Cortex A57 CPU, GMMap can be constructed in real-time at up to 60 images per second. Compared with prior works, GMMap maintains high accuracy while reducing the map size by at least 56%, memory overhead by at least 88%, DRAM access by at least 78%, and energy consumption by at least 69%. Thus, GMMap enables real-time 3D mapping on energy-constrained robots. | null | [
"https://export.arxiv.org/pdf/2306.03740v1.pdf"
] | 259,088,988 | 2306.03740 | 7337c56dd295291a8021e05f8166735f247e717a |
GMMap: Memory-Efficient Continuous Occupancy Map Using Gaussian Mixture Model
Peter Zhi Xuan Li, Sertac Karaman, and Vivienne Sze
GMMap: Memory-Efficient Continuous Occupancy Map Using Gaussian Mixture Model
Index Terms: Mapping, RGB-D Perception, Sensor Fusion, Memory Efficiency
Energy consumption of memory accesses dominates the compute energy in energy-constrained robots which require a compact 3D map of the environment to achieve autonomy. Recent mapping frameworks only focused on reducing the map size while incurring significant memory usage during map construction due to multi-pass processing of each depth image. In this work, we present a memory-efficient continuous occupancy map, named GMMap, that accurately models the 3D environment using a Gaussian Mixture Model (GMM). Memory-efficient GMMap construction is enabled by the single-pass compression of depth images into local GMMs which are directly fused together into a globally-consistent map. By extending Gaussian Mixture Regression to model unexplored regions, occupancy probability is directly computed from Gaussians. Using a low-power ARM Cortex A57 CPU, GMMap can be constructed in real-time at up to 60 images per second. Compared with prior works, GMMap maintains high accuracy while reducing the map size by at least 56%, memory overhead by at least 88%, DRAM access by at least 78%, and energy consumption by at least 69%. Thus, GMMap enables real-time 3D mapping on energy-constrained robots.
I. INTRODUCTION
Energy-constrained microrobots [1]-[3] could enable a wide variety of applications, from autonomous navigation, search and rescue, to space exploration. Due to the limited battery capacity onboard these robots, the amount of energy available for computation is extremely limited and can be dominated by memory accesses. For instance, the energy required for accessing on-chip memory (e.g., cache) is more than an order of magnitude higher than that of performing a 32-bit multiplication [4]. The energy consumption of a memory access increases with the size and distance of the memory from the processor. Within the same chip, accessing a higher-level L2 cache (a few MBs) requires up to an order of magnitude more energy than lower-level L0 and L1 caches (a few KBs). However, accessing data stored in a larger, off-chip memory such as DRAM (GBs of storage) requires more than two orders of magnitude more energy than smaller, on-chip (local) CPU caches [4]. The memory (capacity) usage of an algorithm consists not only of output variables but also of input and temporary variables allocated during computation. Thus, algorithms designed for many robotics applications, especially the ones involving energy-constrained robots, should be memory efficient during computation such that: i) the number of memory accesses does not dominate the algorithm; ii) the memory (capacity) overhead for storing input and temporary variables is small enough to remain in energy-efficient lower-level caches.
For mapping algorithms, both the memory overhead and the number of memory accesses can easily dominate. During map construction, the multi-pass processing of sensor measurements requires them to be stored (i.e., as input and temporary variables) entirely in memory to support repeated accesses, which increases overhead and reduces the memory remaining for map storage. Incrementally updating/reconstructing a previously-observed region in the map is typically performed by casting the rays associated with all sensor measurements into the map. Since these rays diverge away from the sensor origin, memory accesses along them often lack the (spatial and temporal) locality required for effective cache usage, and thus incur a significant number of DRAM accesses. Thus, achieving memory efficiency is both crucial and challenging for mapping algorithms.
In addition to achieving memory efficiency, the resulting map should satisfy the following requirements to enable memory-efficient, real-time processing of a variety of downstream applications that enable autonomy.
1) Compactness: A compact map can represent a larger portion of the environment in both on-chip (cache) and off-chip (DRAM) memory. When accessing a region of the environment that does not reside in the cache, a compact map also reduces the number of energy-intensive DRAM accesses required to update the cache.
2) Modeling unexplored regions: In autonomous exploration, the robot seeks to minimize the number of unexplored regions while traversing obstacle-free regions. Thus, the ability to determine and update the regions that are currently unexplored enables state-of-the-art autonomous exploration algorithms based on frontiers [5] or mutual information [6], [7].
3) Query compute efficiency: During path planning and autonomous exploration, the robot needs to query multiple locations in the map to determine the current state of the environment [8]. The results of these queries are often used to make decisions, such as the next location to travel to, in real time. Thus, the state of the environment should be efficiently computed from the map.
Current state-of-the-art mapping frameworks require the probabilistic modeling of occupancy (i.e., whether or not an obstacle exists) at every location in the 3D environment. These frameworks can be classified based on their underlying probabilistic models used to infer occupancy. For instance, the well-known framework OctoMap [9] contains a set of Bernoulli random variables for modeling the occupancy at a discrete set of homogeneous regions in the environment. Even though OctoMap can model unexplored space and achieve query efficiency, it is not compact enough for storage on energy-constrained robots. By using more compact models (e.g., sets of Gaussians or kernels), recent frameworks (e.g., NDT-OM [10], Hilbert Map [11], HGMM [12]) focus only on reducing the map size while incurring significant memory overhead and accesses due to multi-pass processing of raw sensor measurements in nearly every stage of the mapping pipeline. In addition, the resulting maps produced by these frameworks cannot satisfy all of the above-mentioned requirements for enabling efficient downstream applications.
In this paper, we propose a continuous occupancy map composed of a compact Gaussian Mixture Model (GMM), named GMMap, that is efficiently and accurately constructed from a sequence of depth images and poses of a robot. To achieve significantly higher memory efficiency than prior works, our GMMap accurately compresses each depth image into a compact GMM in a single pass, and directly operates on the Gaussians in the GMM (i.e., without other intermediate representations) for all remaining mapping operations. Our contributions are summarized as follows:
1) Single-pass compression: A single-pass procedure that accurately compresses a depth image into a local GMM representing both free and occupied regions. Prior works [11]-[16] require significant overhead for storing the entire image in memory due to multi-pass processing.
2) Gaussian-direct map construction: A novel procedure that directly fuses the local GMMs across multiple images into a globally-consistent GMM without casting sensor rays (i.e., one ray for each pixel in the depth image) into the map. Prior works [9], [10], [12], [17] require a significant number of memory accesses during ray casting in order to update the previously-observed regions that intersect with the sensor rays.
3) Gaussian-direct occupancy query: An extension of Gaussian Mixture Regression to directly compute occupancy from the GMM while accounting for unexplored regions. Prior works require constructing and storing intermediate representations to model unexplored regions [11], [14] or do not model them at all [12].
In our previous work [18], we proposed the Single-Pass Gaussian Fitting (SPGF) algorithm that enables single-pass compression of a depth image into a GMM representing only the occupied region (i.e., part of the first contribution) but not the obstacle-free region. In this work, we not only extend our previous work to also construct a GMM representing the free region (i.e., the first contribution) but also illustrate how to directly operate on Gaussians during map construction and occupancy query (i.e., the second and third contributions). An overview of GMMap and its representation of the first floor of MIT's Stata Center are illustrated in Fig. 1.
This paper is organized as follows. After analyzing existing works in Section II, we describe how the occupancy is compactly represented and efficiently estimated from our GMMap in Section III. Memory-efficient algorithms that incrementally and accurately construct the GMMap given a sequence of depth images are presented in Section IV. Finally, we validate GMMap against existing works in terms of mapping accuracy, memory footprint, throughput and energy consumption across multiple environments in Section V.
II. RELATED WORKS
Constructing an accurate and compact representation of the 3D environment is crucial for enabling many downstream robotics applications such as path planning and autonomous exploration. During the past few decades, many frameworks have proposed different models to represent the distribution of the occupancy probability (i.e., the likelihood that a region contains an obstacle) across the 3D environment. These models exhibit different trade-offs in memory and computational efficiency during the construction and querying of the map.
Discrete representations: Some of the most popular mapping frameworks discretize the environment into cubic regions (i.e., grids in 2D and voxels in 3D) such that each region contains a Bernoulli random variable representing the occupancy probability and is assumed to be spatially independent of each other. One of the earliest 2D mapping frameworks, the occupancy grid map [19], discretizes the environments into equally-sized grids. However, the map size is prohibitively large in 3D because the size scales cubically with the dimensions of the voxels and the environment. To reduce map size in 3D, OctoMap [9] stores the occupancy probabilities in voxels whose sizes can adapt to homogeneous regions in the environment. Like other similar discrete representations such as [19], [20], OctoMap suffers from artifacts associated with voxelization and requires a significant amount of memory accesses during construction. To incrementally construct the map given a set of sensor rays (more than 300,000 in each 640×480 depth image), each ray is cast into the map to update the subset of voxels such rays intersect. Since these rays diverge away from the sensor origin, memory accesses along these rays often lack spatial and temporal locality (especially if the map is too large to fit within on-chip caches). Since OctoMap is often not compact, updating the map requires a significant amount of memory accesses (more than 300,000 per image) to off-chip DRAM.
Non-parametric representations: To relax the spatial independence assumption in discrete map representations, Gaussian process (GP) regression was proposed to estimate a continuous distribution of occupancy [21] using a covariance function that captures the spatial correlation among all sensor measurements. Since GP requires the storage of all sensor measurements (since the beginning of the robotics experiment) to update the covariance function, the memory overhead scales with the total number of measurements N. During a map query, the covariance function generates a large matrix that requires O(N³) time to invert, which greatly reduces query efficiency. To enable faster map construction and query, recent non-parametric methods such as GPOctoMap [22] and BGKOctoMap-L [17] discretize the environment into blocks of octrees (i.e., a test-data octree). For subsets of measurements (i.e., training data) that lie within each block, GPOctoMap and BGKOctoMap-L update the octrees in each block and its neighbors (i.e., extended blocks) using GP and Bayesian Generalized Kernel (BGK) inference, respectively. Similar to OctoMap, both GPOctoMap and BGKOctoMap-L directly operate on sensor rays that are cast into the map during incremental construction and require significant memory accesses to DRAM. In addition, both frameworks require the storage of training data for each block during map construction, which incurs significantly larger memory overhead than OctoMap.
Semi-parametric representations: To create an extremely compact representation of the environment, several frameworks compress the sensor measurements using a set of parametric functions (e.g., Gaussians or other kernels) which are then used to infer occupancy. One of the well-known semi-parametric representations is the Normal Distribution Transform Occupancy Map (NDT-OM) [10] that partitions the environment into large voxels such that measurements within each voxel are represented by a Gaussian. Since measurements within a voxel could belong to multiple objects, representing them with a single Gaussian often leads to a loss of accuracy in the resulting map. Similar to OctoMap, NDT-OM requires significant memory access to DRAM due to the casting of sensor rays into the map during construction.
To further reduce map size, recent frameworks, such as Hilbert Map (HM) [11], Variable Resolution GMM (VRGMM) map [14], and Hierarchical GMM (HGMM) map [12], compress sensor rays into special kernels (in HM) or Gaussians (in VRGMM and HGMM). Such compression is performed using techniques such as Quick-Means (QM) [11], Hierarchical Expectation-Maximization (H-EM) [13], Region Growing (RG) [15], and Self-Organizing GMMs (SOGMM) [16]. However, these techniques require significant memory overhead to store all sensor measurements (more than 300,000 pixels in a 640×480 depth image) due to their multi-pass processing. Even though the resulting maps are compact after compression, they either cannot model unexplored regions (in HGMM), or require online training (for a logistic regression classifier in HM) and intermediate representations to model these regions (using Monte Carlo sampling to create an intermediate grid map in VRGMM). Even though our GMMap is also classified as a semi-parametric representation, we can accurately construct and query the map directly using Gaussians (while preserving unexplored regions) to reduce memory overhead and accesses.
III. OCCUPANCY REPRESENTATION & ESTIMATION
In this section, we describe how to compactly model a continuous distribution of occupancy using a Gaussian Mixture Model (GMM) in the proposed GMMap. In addition, we illustrate how to directly estimate the occupancy probability from Gaussians using Gaussian Mixture Regression (GMR) while accounting for the initial unknown state of the environment so that the unexplored regions are preserved.
Let X ∈ R³ denote the 3D coordinate in the world frame. Let O ∈ R denote the occupancy value such that regions with values greater than one are occupied by obstacles, and regions with values less than zero are obstacle-free. In addition, unexplored regions have an occupancy value near 0.5. Let P denote the joint random variable such that

$$P = \begin{bmatrix} X \\ O \end{bmatrix}. \tag{1}$$

The map of the 3D environment is represented by the following GMM, which is an unnormalized distribution for the joint variable P, i.e.,

$$\mathcal{M}_P(p) \sim \sum_{i=1}^{K} \pi_i\, \mathcal{N}(p \mid \mu_i, \Sigma_i), \tag{2}$$
where K is the number of Gaussians. The weight π_i, mean µ_i, and covariance Σ_i are the parameters of the ith Gaussian such that

$$\mu_i = \begin{bmatrix} \mu_{iX} \\ \mu_{iO} \end{bmatrix}, \qquad \Sigma_i = \begin{bmatrix} \Sigma_{iX} & \Sigma_{iXO} \\ \Sigma_{iOX} & \Sigma_{iO} \end{bmatrix}. \tag{3}$$
Note that the GMM in Eqn. (2) can be compactly stored because each Gaussian is parameterized by µ_i, Σ_i, and π_i. For the rest of the paper, we drop the index i for all variables when we refer to any Gaussian in the GMM. During the experiment, the robot makes a sequence of range measurements. Each range measurement consists of a ray that originates from the robot, passes through a free region, and ends at the surface of an obstacle (occupied region). Regions that are traversed by all such rays are observed by the robot. We determine the parameters of the GMM in Eqn. (2) using range measurements such that it compactly models all observed regions. Thus, regions that have not been observed (i.e., unexplored) cannot be modeled by the GMM alone.
To compactly model the unexplored region, we use the unexplored prior Q_{O|X} with its weight π_0 to represent the initial unknown state of the entire environment, i.e.,

$$Q_{O|X}(o \mid x) = \mathcal{N}(o \mid \mu_0, \sigma_0^2), \tag{4}$$

where

$$\mu_0 = 0.5, \qquad \sigma_0^2 = 0.25. \tag{5}$$
The weight π_0 should be set to a large value such that measurements from multiple timesteps are required to shift the occupancy value of an unexplored region (i.e., 0.5) towards zero (free region) or one (occupied region) during GMR. Unlike prior semi-parametric representations that estimate occupancy probability using either a classifier that requires additional online training [11], [23] or intermediate representations that require additional memory overhead [14], we efficiently preserve these regions by incorporating the unexplored prior into Gaussian Mixture Regression (GMR) [24]. We describe the GMR procedure used to estimate occupancy directly from Gaussians as follows.
Using the GMM in Eqn. (2) and the unexplored prior in Eqn. (4), the occupancy O conditioned on the query location X = x is computed as
$$P_{O|X}(o \mid x) = \sum_{i=0}^{K} \omega_i(x)\, \mathcal{N}\!\left(o \mid m_i(x), \sigma_i^2(x)\right), \tag{6}$$
where
$$\omega_i(x) = \begin{cases} \dfrac{\pi_0}{\sum_{j=1}^{K} \pi_j \mathcal{N}(x \mid \mu_{jX}, \Sigma_{jX}) + \pi_0}, & \text{if } i = 0, \\[3mm] \dfrac{\pi_i \mathcal{N}(x \mid \mu_{iX}, \Sigma_{iX})}{\sum_{j=1}^{K} \pi_j \mathcal{N}(x \mid \mu_{jX}, \Sigma_{jX}) + \pi_0}, & \text{otherwise,} \end{cases} \tag{7}$$

$$m_i(x) = \begin{cases} \mu_0, & \text{if } i = 0, \\ \mu_{iO} + \Sigma_{iOX} \Sigma_{iX}^{-1} (x - \mu_{iX}), & \text{otherwise,} \end{cases} \tag{8}$$

$$\sigma_i^2(x) = \begin{cases} \sigma_0^2, & \text{if } i = 0, \\ \Sigma_{iO} - \Sigma_{iOX} \Sigma_{iX}^{-1} \Sigma_{iXO}, & \text{otherwise.} \end{cases} \tag{9}$$
The expected occupancy value and its variance at location x are regressed using GMR as

$$m(x) = \mathbb{E}[O \mid X = x] = \sum_{i=0}^{K} \omega_i(x)\, m_i(x), \tag{10}$$

$$v(x) = \operatorname{Var}[O \mid X = x] = \sum_{i=0}^{K} \omega_i(x) \left( m_i(x)^2 + \sigma_i^2(x) \right) - m(x)^2. \tag{11}$$
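To make the regression concrete, the following is a minimal numerical sketch of Eqns. (6)-(11) in Python/NumPy for the simplified case adopted by GMMap below (Σ_XO = 0 and Σ_O = 0, with occupancy means µ_O ∈ {0, 1}); function and variable names are ours and not from the paper's implementation.

import numpy as np
from scipy.stats import multivariate_normal

def gmr_occupancy(x, means, covs, weights, occ_values, pi0, mu0=0.5, var0=0.25):
    # Spatial likelihoods pi_i * N(x | mu_iX, Sigma_iX) for each Gaussian.
    lik = np.array([w * multivariate_normal.pdf(x, mean=m, cov=C)
                    for w, m, C in zip(weights, means, covs)])
    denom = lik.sum() + pi0
    w = lik / denom              # Eqn. (7) for i >= 1
    w0 = pi0 / denom             # Eqn. (7) for i = 0 (unexplored prior)
    occ = np.asarray(occ_values, dtype=float)  # m_i(x) = mu_iO when Sigma_XO = 0
    m = w0 * mu0 + w @ occ                         # Eqn. (10)
    v = w0 * (mu0**2 + var0) + w @ occ**2 - m**2   # Eqn. (11); sigma_i^2(x) = 0
    return m, v

With no Gaussians near x, the likelihoods vanish and m(x) approaches µ_0 = 0.5, so unexplored regions are recovered without any auxiliary data structure.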
The occupancy value can transition suddenly across the boundaries separating occupied and free regions (e.g., at the surfaces of obstacles). To better capture such transitions, each Gaussian in the GMMap models either an occupied or a free region, but not both. Thus, the set of Gaussians representing occupied regions is defined as the occupied Gaussians, with an occupancy value of one (i.e., µ_O = 1). In addition, the set of Gaussians representing free regions is defined as the free Gaussians, with an occupancy value of zero (i.e., µ_O = 0).
Representing occupied and free regions using separate sets of Gaussians also guarantees that the expectation m(x) in Eqn. (10) is bounded within [0, 1]. Thus, the expectation m(x) becomes the occupancy probability of the environment at the query location x. In addition, the covariance terms Σ_XO, Σ_OX, and Σ_O in Eqn. (3) become zero for all Gaussians, which significantly simplifies the entire GMR procedure and reduces the memory required to store the Gaussians in our map.
Since each Gaussian distribution tapers off from its mean at an exponential rate, the entire GMR procedure from Eqns. (6) to (11) can be accurately and efficiently approximated using a small subset of Gaussians whose Mahalanobis distances between their means and the query location x are less than a threshold α_M, i.e.,

$$(x - \mu_X)^\top \Sigma_X^{-1} (x - \mu_X) \le \alpha_M, \tag{12}$$

where µ_X and Σ_X are parameters of a Gaussian defined in Eqn. (3). In our experiments, we chose α_M = 2 to ensure that more than 95% of each Gaussian distribution is considered.
To efficiently obtain the subset of Gaussians that satisfies Eqn. (12) in O(log(K)) time (where K is the total number of Gaussians), we store the GMMap using an R-tree [25] constructed with bounding boxes that are axis-aligned with the world frame. Since the surface that satisfies the equality in Eqn. (12) for each Gaussian can be visualized as an ellipsoid in 3D, the bounding box at the leaf node of the R-tree for each Gaussian is sized to enclose such ellipsoid. Across all figures, occupied and free Gaussians are represented by red and blue ellipsoids, respectively. The occupied and free Gaussians in GMMap with their corresponding bounding boxes (dotted rectangles) are illustrated at the bottom of Fig. 1a.
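As a small illustration (our own sketch, not the released implementation), the axis-aligned bounding box stored at each R-tree leaf follows directly from the Gaussian parameters in Eqn. (12):

import numpy as np

def ellipsoid_aabb(mu_x, sigma_x, alpha_m=2.0):
    # For the ellipsoid (x - mu_X)^T Sigma_X^{-1} (x - mu_X) <= alpha_M,
    # the extremal coordinate along axis k is mu_k +/- sqrt(alpha_M * Sigma_kk),
    # so this box tightly encloses the ellipsoid.
    half_extent = np.sqrt(alpha_m * np.diag(sigma_x))
    return mu_x - half_extent, mu_x + half_extent  # min and max corners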
IV. MEMORY-EFFICIENT MAP CONSTRUCTION
In this section, we present a memory-efficient framework to construct the GMMap M (i.e., Eqn. (2)). At each timestep t, we incrementally construct the GMMap by updating the previous GMMap M_{t-1} with current measurements from the depth image Z_t ∈ R^{U×V} obtained at pose T_t ∈ SE(3). As illustrated in Fig. 1a, our framework consists of the following two procedures executed sequentially for each depth image:
1) Per-image GMM construction: The depth image Z_t with width U and height V obtained at pose T_t is compressed into a compact local GMMap G_t. A memory-efficient algorithm is proposed in Section IV-B to perform such compression one pixel at a time in a single pass through the depth image. Memory overhead is greatly reduced by avoiding the storage of the entire depth image in memory, which is required by prior multi-pass approaches [11]-[16].
2) Globally-consistent GMM fusion: The local GMMap G_t is fused into the previous global GMMap M_{t-1} to obtain the updated GMMap M_t. A memory-efficient algorithm is proposed in Section IV-C to perform such fusion directly using Gaussians. The amount of memory accesses is greatly reduced by avoiding casting rays (more than 300,000 for each 640×480 image) into the map, as required in prior works [9], [10], [12], [17].
To efficiently update Gaussian parameters during the abovementioned procedures with little memory overhead, we present preliminaries that illustrate in-place construction of Gaussians using the method of moments (MoM) [26] in Section IV-A.
A. Efficiently Updating Gaussian Parameters
In this section, we illustrate how to efficiently update the parameters of occupied and free Gaussians in-place given new measurements. In the method of moments (MoM) [26], the first and second moments of a Gaussian are intermediate representations of its mean and covariance, respectively. Let P = [X, O] ⊤ denote the joint variable for the 3D coordinate X (with respect to the current sensor origin) and its occupancy O. The unnormalized first m (1) and second m (2) moments of each Gaussian are defined as
m^(1) = ξ E[P], m^(2) = ξ E[P P^⊤], (13)
where ξ is a normalization constant. Thus, the mean µ and covariance Σ of each Gaussian defined in Eqn. (3) can be recovered in-place from the unnormalized moments as
µ = (1/ξ) m^(1), Σ = (1/ξ) m^(2) − µµ^⊤. (14)
Unnormalized moments can be incrementally updated without relying on past measurements (which therefore do not need to be stored in memory). Thus, during map construction, the moments of each Gaussian are stored instead of its mean and covariance. Recall from Section III that each measurement (ray) consists of a point (i.e., the end of a ray, representing the surface of an obstacle) and a line (i.e., from the sensor origin to the end of the ray, representing the free region). Fusing a point p = [x, 1]^⊤ ∈ P into an occupied Gaussian is computed as
m^(1) ← m^(1) + p, (15a)
m^(2) ← m^(2) + p p^⊤, (15b)
ξ ← ξ + 1. (15c)
Fusing a line from the sensor origin to the endpoint p = [x, 0]^⊤ into a free Gaussian is computed as
m^(1) ← m^(1) + (∥p∥/2) p, (16a)
m^(2) ← m^(2) + (∥p∥/3) p p^⊤, (16b)
ξ ← ξ + ∥p∥. (16c)
Note that the second term on the right side of Eqns. (16a) and (16b) is the closed-form expression for the first and second moment of the line, respectively. Our closed-form expression is exact and more computationally efficient than prior works [11], [12], which approximate both moments using points sampled at a fixed interval along the line. When regressing occupancy using GMR, the unnormalized weight π of each occupied or free Gaussian should represent the amount of occupied or free evidence in the region where the Gaussian resides. For each free Gaussian, its weight π equals the total length of all line segments used during construction. When a line from the sensor origin to p = [x, 0]^⊤ is fused into a free Gaussian, its weight π is updated as
π ← π + ∥p∥. (17)
To ensure that the occupancy regressed using GMR is meaningful, the weights of occupied Gaussians should have the same unit as those of free Gaussians. Thus, when a new endpoint p is fused into an occupied Gaussian, its weight is also updated using Eqn. (17). Lastly, the Gaussian resulting from the fusion of two occupied or two free Gaussians indexed by i and j is computed as
m^(1) ← m^(1)_i + m^(1)_j, (18a)
m^(2) ← m^(2)_i + m^(2)_j, (18b)
ξ ← ξ_i + ξ_j, (18c)
π ← π_i + π_j. (18d)
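The updates above translate directly into a small accumulator type. The following C++/Eigen sketch (our own illustrative struct, not the authors' code) maintains the moments of the joint variable P = [X, O]^⊤ and implements the point fusion of Eqn. (15), the closed-form line fusion of Eqns. (16)-(17), and the Gaussian-Gaussian fusion of Eqn. (18); Eqn. (14) recovers the mean and covariance on demand.

```cpp
#include <Eigen/Dense>

// Accumulator for one Gaussian over the joint variable P = [X, O]^T.
struct MomentAccumulator {
  Eigen::Vector4d m1 = Eigen::Vector4d::Zero();  // first moment m^(1)
  Eigen::Matrix4d m2 = Eigen::Matrix4d::Zero();  // second moment m^(2)
  double xi = 0.0;                               // normalization constant
  double pi = 0.0;                               // unnormalized weight

  // Eqn. (15): fuse an endpoint into an occupied Gaussian (o = 1).
  void fusePoint(const Eigen::Vector3d& x) {
    const Eigen::Vector4d p(x.x(), x.y(), x.z(), 1.0);
    m1 += p;
    m2 += p * p.transpose();
    xi += 1.0;
    pi += x.norm();  // weight uses the ray length, per Eqn. (17)
  }

  // Eqns. (16)-(17): fuse the free ray from the origin to x (o = 0),
  // using the closed-form line moments instead of sampled points.
  void fuseLine(const Eigen::Vector3d& x) {
    const Eigen::Vector4d p(x.x(), x.y(), x.z(), 0.0);
    const double len = x.norm();
    m1 += (len / 2.0) * p;
    m2 += (len / 3.0) * p * p.transpose();
    xi += len;
    pi += len;
  }

  // Eqn. (18): fuse two accumulators of the same type.
  void merge(const MomentAccumulator& o) {
    m1 += o.m1; m2 += o.m2; xi += o.xi; pi += o.pi;
  }

  // Eqn. (14): recover mean and covariance in place.
  Eigen::Vector4d mean() const { return m1 / xi; }
  Eigen::Matrix4d cov() const {
    const Eigen::Vector4d mu = mean();
    return m2 / xi - mu * mu.transpose();
  }
};
```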
B. Per-Image GMM Construction
As illustrated in Fig. 2, we present a single-pass algorithm that constructs a local GMMap G_t given the depth image Z_t obtained at pose T_t. From Section III, occupied and free regions in the environment are separately modeled using occupied (visualized using red ellipsoids) and free (visualized using blue ellipsoids) Gaussians, respectively. Thus, our algorithm, described in Alg. 1, creates both types of Gaussians in the local map G_t by executing the following procedures sequentially:
1) SPGF* (Line 2 in Alg. 1): A memory-efficient algorithm that constructs the occupied GMM G_t,occ and a compact free GMM basis F_t,free in a single pass through the image Z_t using only the endpoints of the sensor rays.
As illustrated in Fig. 3, the operations within SPGF* extend our prior work SPGF [18].
2) Construct Free GMM (Line 3 in Alg. 1): Because SPGF* considers only the endpoints of the sensor rays, the Gaussians represented by the free GMM basis F_t,free cannot represent the free region within the camera frustum very well (see Fig. 4a). Thus, the basis F_t,free is processed to construct the free GMM G_t,free, which better represents the free region (see Fig. 4b).
3) Construct Local Map (Lines 4 to 6 in Alg. 1): The occupied G_t,occ and free G_t,free GMMs are transformed to the world frame using the pose T_t. Then, these GMMs are inserted into the R-tree to create the local map G_t.

SPGF* (Lines 8 to 20 in Alg. 1): The SPGF* algorithm constructs an occupied GMM G_t,occ and a compact free GMM basis F_t,free by processing one scanline (i.e., one row of pixels) at a time in a single pass through the entire depth image Z_t. Note that measurements that are neighbors in the 3D world are also neighbors in the 2D depth image. SPGF* exploits this property to efficiently infer surface geometries so that the accuracy and compactness of the GMM are maintained compared to prior multi-pass approaches [11]-[16]. SPGF* is an extension of our prior work SPGF [18], which is summarized as follows. As illustrated in Fig. 3, each scanline is denoted by L_v, where v is the row index. In Scanline Segmentation (SS, Line 12), pixels from each scanline are partitioned into a set of line segments S such that each segment represents a locally planar surface with a distinct orientation. In Segment Fusion (SF, Line 16), segments are fused across successive scanlines to form a set of completed Gaussians Q_comp (appended to the output in Line 17) and incomplete Gaussians Q_prev (for fusion with the next scanline).
Fig. 2. Per-image GMM construction: constructing a local GMMap G_t that accurately represents both occupied and free regions from the current depth image Z_t obtained at pose T_t. Rays associated with each pixel in the depth image are illustrated with dotted arrows. Occupied and free GMMs are illustrated with red and blue ellipsoids, respectively. Dotted rectangles in the map G_t represent the bounding boxes at the leaf nodes of the R-tree. The green rectangle represents the bounding box at the root node of the R-tree that encloses the entire map G_t.
Algorithm 1: Per-Image GMM Construction
Input: Depth image Z_t, pose T_t
Output: Local GMMap G_t
1  function constructLocalGMM(Z_t, T_t)
2      G_t,occ, F_t,free ← SPGF*(Z_t)
3      G_t,free ← constructFreeGMM(F_t,free)
4      G_t ← G_t,free ∪ G_t,occ
5      G_t ← transform(G_t, T_t)
6      G_t ← constructRtree(G_t)
7      return G_t
8  subfunction SPGF*(Z_t)
9      Q ← ∅, Q_prev ← ∅
10     for ( v = 0; v < V; v = v + 1 ) {
11         L_v ← extractScanline(Z_t, v)
12         S ← scanlineSegmentation(L_v)
13         if v = 0 then
14             Q_prev ← S
15         else
16             Q_prev, Q_comp ← segmentFusion(Q_prev, S)
17             Q ← Q ∪ Q_comp
18     Q ← Q ∪ Q_prev
19     G_t,occ, F_t,free ← Q
20     return G_t,occ, F_t,free
The implementations of SS and SF in SPGF* are almost identical to those in SPGF, except for the following differences. Recall that each pixel in the depth image corresponds to a sensor ray that originates from the robot. In SPGF, only the occupied GMM G_t,occ is constructed, using the endpoints of the sensor rays from all depth pixels in the image. In particular, Eqns. (15) and (18) are used to construct each occupied Gaussian (say a_j) in SS and SF, respectively. Since we would like to construct Gaussians associated with the free region as well, SPGF* constructs two free Gaussians ϕ_j (using the entire sensor ray) and β_j (using the sensor ray normalized to depth z = 1) concurrently with each occupied Gaussian a_j. These free Gaussians are constructed using Eqn. (16) in SS and Eqn. (18) in SF. Thus, for SPGF*, each element in the sets S, Q, Q_prev, and Q_comp of Alg. 1 includes the occupied Gaussian a_j together with its associated free Gaussians ϕ_j and β_j.

Fig. 4. Visualization of (a) the free Gaussian basis f_j = (ϕ_j, β_j) associated with each occupied Gaussian a_j, which cannot represent the free region faithfully (e.g., near the obstacles), and (b) the corresponding set of free Gaussians B_{i,j} recovered in each subregion B_i from the basis f_j, whose sizes increase with the distance from the sensor origin. The set of all recovered free Gaussians accurately represents the free region.
Fig. 4a illustrates the free Gaussians ϕ_j and β_j associated with each occupied Gaussian a_j. Note that ϕ_j and β_j alone do not represent the free space traversed by the sensor rays very well. However, they can be used to reconstruct a better representation of free space, as illustrated in Fig. 4b, during the subsequent procedure. Thus, we define each free Gaussian basis f_j as f_j = (ϕ_j, β_j). The set of all free Gaussian bases generated at the output of SPGF* forms the free GMM basis F_t,free.
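With an accumulator like the one sketched in Section IV-A, maintaining the basis f_j = (ϕ_j, β_j) alongside each occupied Gaussian costs only two additional line fusions per ray. The snippet below is a hedged illustration (variable names are ours) that reuses the MomentAccumulator sketch from above.

```cpp
// For each sensor ray with endpoint x (one ray per depth pixel):
// a fuses the endpoint (Eqn. (15)), phi fuses the full ray, and beta
// fuses the ray rescaled so that its depth is z = 1 (both via Eqn. (16)).
void fuseRayIntoBasis(const Eigen::Vector3d& x,
                      MomentAccumulator& a,
                      MomentAccumulator& phi,
                      MomentAccumulator& beta) {
  a.fusePoint(x);
  phi.fuseLine(x);
  beta.fuseLine(x / x.z());  // assumes x.z() > 0 for a valid depth pixel
}
```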
Since the criteria for constructing and updating the Gaussians in SPGF* are identical to those in SPGF, SPGF* inherits many desirable properties from SPGF. Because SS in Line 12 dominates SPGF* and can be executed independently for each scanline, SPGF* can be parallelized by concurrently executing SS for different scanlines across multiple CPU or GPU cores. Due to single-pass pixel-per-pixel processing in SS, only one pixel is stored in memory at any time. Thus, SPGF* is memory-efficient and avoids the storage of the entire depth image in memory, as required in most prior works.

Construct Free GMM (Alg. 2): In this section, we present Alg. 2, which directly generates the set of free Gaussians G_t,free from their basis F_t,free. These Gaussians should accurately and compactly model the free space traversed by the sensor rays (i.e., within the viewing frustum). In prior works [11], [12], the free Gaussians are inefficiently constructed from a large number of free-space points sampled at a fixed interval along all sensor rays. In contrast, the free Gaussians G_t,free in GMMap are directly constructed from their basis F_t,free with little computational and memory overhead.
The free space is contained within the viewing frustum, a pyramidal region whose symmetries differ significantly from the elliptical equipotential surfaces of a Gaussian distribution. Thus, as illustrated in Fig. 4a, the free Gaussians (i.e., ϕ and β) from the basis F_t,free cannot faithfully represent the free region (especially near the obstacles). To achieve a more accurate representation, we partition the viewing frustum into subregions {B_0, B_1, . . .} along the z-axis, which is perpendicular to the image plane of the camera. Each subregion B_i is enclosed between two partitioning planes z = d_{i,max} and z = d_{i,min}. As illustrated in Fig. 4b, free Gaussians are constructed to model each subregion separately.
The free Gaussians in each subregion (Fig. 4b) can be directly recovered from each basis f = (ϕ, β). Let the index i_f denote the index of the subregion containing the endpoints of the rays used to construct Gaussian ϕ (for instance, i_f = 2 for basis f_0 in Fig. 4). The subregion B_{i_f} is the difference between the region from the sensor origin to the obstacle (represented by Gaussian ϕ) and the region from the sensor origin to the partitioning plane d_{i_f,min} (represented by Gaussian β scaled to d_{i_f,min}). Each remaining subregion is the difference between the regions from the sensor origin to its two enclosing partitioning planes d_{i,min} and d_{i,max} (represented by Gaussian β scaled to d_{i,min} and d_{i,max}). For each basis f, the parameters of the free Gaussian g (i.e., first moment m^(1)_g, second moment m^(2)_g, normalizing constant ξ_g, and weight π_g) in subregion B_i are directly recovered from differences between the parameters of Gaussians ϕ and β as follows:
m^(1)_g = m^(1)_ϕ − d²_{i,min} m^(1)_β, if i = i_f;
m^(1)_g = m^(1)_β (d²_{i,max} − d²_{i,min}), if 0 ≤ i < i_f; (19a)

m^(2)_g = m^(2)_ϕ − d³_{i,min} m^(2)_β, if i = i_f;
m^(2)_g = m^(2)_β (d³_{i,max} − d³_{i,min}), if 0 ≤ i < i_f; (19b)

ξ_g = π_g = ξ_ϕ − d_{i,min} ξ_β, if i = i_f;
ξ_g = π_g = ξ_β (d_{i,max} − d_{i,min}), if 0 ≤ i < i_f. (19c)
See Fig. 4b for an illustration of the recovered free Gaussians B_{i,j} in subregion B_i generated from basis f_j. To retain high mapping fidelity, each subregion is sized according to its spatial resolution (i.e., the density of the sensor rays), such that regions with higher resolution are modeled by smaller Gaussians. Since the sensor rays emanate outwards from the origin, the spatial resolution of each subregion B_i decreases as its index i increases (see Fig. 4b). To ensure that the maximum size of each Gaussian is inversely proportional to the spatial resolution, the distance between the partitioning planes that enclose each subregion B_i should increase with the index i. Thus, given the maximum slope γ_frum of the frustum's boundary along the z-axis (see Fig. 4a) and the initial distance d_0 between partitioning planes, the locations of these planes for each subregion B_i are computed as
d_{i,max} = d_0 Σ_{k=0}^{i} (α_d γ_frum)^k = d_0 [(α_d γ_frum)^{i+1} − 1] / (α_d γ_frum − 1), (20a)
d_{i,min} = 0 if i = 0, and d_{i,min} = d_{i−1,max} otherwise, (20b)
where α_d is a scaling parameter. We ensured α_d γ_frum > 1 by choosing α_d = 0.5 in all our experiments. Although the free Gaussians recovered from the basis can accurately model the free region, they are not as compact as the occupied GMM G_t,occ. Thus, after recovery, free Gaussians are fused with each other within each subregion to further enhance the compactness of the map. Alg. 2 efficiently performs Gaussian recovery and fusion. After each basis f is sorted into its associated subregion based on the index i_f (Lines 4 to 8), free Gaussians are constructed within each subregion (from Line 9 onward), starting from the one furthest from the sensor origin (see the green arrow in Fig. 4b). Using the bases, the free Gaussians in each subregion B_i are first recovered (Lines 12 to 16) and then fused with each other using a region-growing approach (Lines 17 to 33).
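Eqns. (19) and (20) are simple enough to state directly in code. The sketch below (C++, reusing the MomentAccumulator from Section IV-A; illustrative only, with our own function names) computes the partitioning-plane distances and recovers the free Gaussian of one subregion from its basis.

```cpp
#include <cmath>

// Eqn. (20): distance of the far partitioning plane of subregion B_i.
// gammaFrum: maximum slope of the frustum boundary along z;
// d0: initial plane spacing; alphaD = 0.5 in our experiments.
double dMax(int i, double d0, double alphaD, double gammaFrum) {
  const double r = alphaD * gammaFrum;  // geometric ratio, must exceed 1
  return d0 * (std::pow(r, i + 1) - 1.0) / (r - 1.0);
}
double dMin(int i, double d0, double alphaD, double gammaFrum) {
  return (i == 0) ? 0.0 : dMax(i - 1, d0, alphaD, gammaFrum);
}

// Eqn. (19): recover the free Gaussian of subregion B_i from the basis
// f = (phi, beta). iF is the subregion index containing the endpoints.
MomentAccumulator recoverFreeGaussian(const MomentAccumulator& phi,
                                      const MomentAccumulator& beta,
                                      int i, int iF,
                                      double dmin, double dmax) {
  MomentAccumulator g;
  if (i == iF) {  // region between plane d_{i,min} and the obstacle
    g.m1 = phi.m1 - dmin * dmin * beta.m1;
    g.m2 = phi.m2 - dmin * dmin * dmin * beta.m2;
    g.xi = phi.xi - dmin * beta.xi;
  } else {        // region between two enclosing planes (0 <= i < iF)
    g.m1 = beta.m1 * (dmax * dmax - dmin * dmin);
    g.m2 = beta.m2 * (dmax * dmax * dmax - dmin * dmin * dmin);
    g.xi = beta.xi * (dmax - dmin);
  }
  g.pi = g.xi;  // Eqn. (19c): weight equals the total traversed length
  return g;
}
```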
During region growing, we need to ensure that the fused Gaussian can still accurately represent the free region within each subregion B_i. After fusing a free Gaussian q_g with its neighbor c_g (determined by whether their bounding boxes intersect in Line 19), we accept the fused Gaussian r_g (in Line 26) only if it accurately represents its original components (i.e., q_g and c_g). In prior works [11], the fused Gaussian r_g is accepted if the probabilistic distance d_h between the two components q_g and c_g is below a pre-defined low threshold α_h,free. Thus, only Gaussians that completely overlap the same region can be fused (see Fig. 5a). However, there exist many opportunities to fuse Gaussians that only partially overlap but accurately represent neighboring parts of the same region (see Fig. 5b). To also exploit these opportunities, our distance measure d_h is computed between the fused Gaussian r_g and its components {q_g, c_g} using the Unscented Hellinger Distance [27] in Line 23. To maintain mapping accuracy, we scale the distance threshold α_h,free by the geometric similarity s_r ∈ [0, 1] between Gaussians q_g and c_g in Line 25. The geometric similarity s_r between the two components is computed as the intersection-over-union ratio of the z extents of their bounding boxes in Line 24.

Even though free Gaussians are constructed to represent each subregion B_i separately, the fusion decision made between Gaussians in the current subregion B_i (Line 25) can be propagated to reduce the number of computations in subsequent subregions. The Gaussians recovered from the same basis are relatively similar in shape across most subregions (e.g., Gaussians B_{2,1}, B_{1,1}, and B_{0,1} in Fig. 4b). Thus, a successful fusion between two Gaussians in the current subregion B_i (e.g., between B_{2,1} and B_{2,2}) implies the same for the remaining subregions B_{i−1}, . . . , B_0 (e.g., between B_{1,1} and B_{1,2}, and between B_{0,1} and B_{0,2}). To automatically propagate the fusion decision from the current subregion B_i, the fused basis q_f is simply transferred into the following subregions (B_{i−1}, . . . , B_0) at Line 33 across multiple iterations of the outer loop (Line 9).
Algorithm 2: Free GMM Construction From Basis
Input: Free GMM basis F_t,free
Output: Free GMM G_t,free
1  function constructFreeGMM(F_t,free)
2      B ← ∅, G_t,free ← ∅
3      i_max ← 0
       // Sort each basis using its subregion index i_f
4      foreach f ∈ F_t,free do
5          i_f ← region(f)
6          F_t,free ← F_t,free \ f
7          B_{i_f} ← B_{i_f} ∪ f
8          i_max ← max(i_max, i_f)
9      for ( i = i_max; i ≥ 0; i = i − 1 ) {
       ⋮   (Lines 10-22: recover the free Gaussians of B_i and grow fusion candidates)
23         d_h ← unscentHellingerDistance(r_g, c_g, q_g)
24         s_r ← geometricSimilarity(c_g, q_g)
25         if d_h ≤ s_r · α_h,free then
       ⋮   (Lines 26-33: accept the fused Gaussian r_g and transfer fused bases to subsequent subregions)
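The acceptance test in Lines 23-25 can be sketched as follows. Note that the paper's Unscented Hellinger Distance [27] is evaluated between the fused Gaussian and the mixture of its two components; the closed-form pairwise Hellinger distance used below is a simplified stand-in for illustration only.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Closed-form Hellinger distance between two Gaussians; a simplified
// stand-in for the Unscented Hellinger Distance [27] used in Alg. 2.
double hellinger(const Eigen::Vector3d& mu1, const Eigen::Matrix3d& S1,
                 const Eigen::Vector3d& mu2, const Eigen::Matrix3d& S2) {
  const Eigen::Matrix3d S = 0.5 * (S1 + S2);
  const Eigen::Vector3d d = mu1 - mu2;
  const double bc = std::pow(S1.determinant() * S2.determinant(), 0.25) /
                    std::sqrt(S.determinant()) *
                    std::exp(-0.125 * d.dot(S.ldlt().solve(d)));
  return std::sqrt(std::max(0.0, 1.0 - bc));
}

// Similarity-scaled acceptance test (cf. Lines 23-25 of Alg. 2):
// zMin/zMax are the z extents of the two components' bounding boxes.
bool acceptFusion(double dH, double alphaHFree,
                  double zMin1, double zMax1, double zMin2, double zMax2) {
  const double inter =
      std::max(0.0, std::min(zMax1, zMax2) - std::max(zMin1, zMin2));
  const double uni = std::max(zMax1, zMax2) - std::min(zMin1, zMin2);
  const double sR = (uni > 0.0) ? inter / uni : 0.0;  // IoU along z
  return dH <= sR * alphaHFree;
}
```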
Fig. 5. Our fusion criterion using the Unscented Hellinger Distance [27] allows the creation of a single Gaussian (green) from two Gaussians (blue) when they (a) completely overlap to represent the same region or (b) partially overlap to represent neighboring parts of the same region in the environment.
Construct Local Map (Lines 4 to 6 in Alg. 1): To enable the fusion between the local GMMs (occupied G_t,occ and free G_t,free) and the global map M_{t−1} in Section IV-C, the local GMMs need to be transformed into the world frame as follows:
µ_X ← R_t µ_X + ε_t, Σ_X ← R_t Σ_X R_t^⊤, (21)
where R_t and ε_t are the rotation matrix and translation vector associated with pose T_t. The mean µ_X and covariance Σ_X of each Gaussian in the GMM are defined in Eqn. (3). After the transformation, an R-tree is created over all Gaussians in Line 6 to form the local map G_t. First, a bounding box is constructed for each Gaussian to enclose its ellipsoidal bound as defined in Eqn. (12). Then, each Gaussian and its bounding box are inserted into the R-tree, as shown in Fig. 2.
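A minimal sketch of Eqn. (21) together with the bounding-box construction: for the ellipsoid (x − µ_X)^⊤ Σ_X^{−1} (x − µ_X) ≤ α_M, the axis-aligned half-extent along axis k is √(α_M Σ_kk), which yields the leaf-level boxes inserted into the R-tree (C++/Eigen, illustrative only).

```cpp
#include <Eigen/Dense>

// Eqn. (21): transform a Gaussian's spatial parameters into the world
// frame using the rotation R and translation eps of the pose T_t.
void toWorldFrame(Eigen::Vector3d& mu, Eigen::Matrix3d& Sigma,
                  const Eigen::Matrix3d& R, const Eigen::Vector3d& eps) {
  mu = R * mu + eps;
  Sigma = R * Sigma * R.transpose();
}

// Axis-aligned bounding box of the Mahalanobis ellipsoid of Eqn. (12):
// half-extent along axis k is sqrt(alphaM * Sigma_kk).
void ellipsoidAabb(const Eigen::Vector3d& mu, const Eigen::Matrix3d& Sigma,
                   double alphaM, Eigen::Vector3d& lo, Eigen::Vector3d& hi) {
  const Eigen::Vector3d half =
      (alphaM * Sigma.diagonal()).array().sqrt().matrix();
  lo = mu - half;
  hi = mu + half;
}
```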
C. Globally-Consistent GMM Fusion
In this section, we present a novel memory-efficient procedure in Alg. 3 to directly update the global GMMap M_{t−1} in place using Gaussians from the local GMMap G_t. When the robot obtains a new depth image Z_t at timestep t, the rays associated with a subset of the pixels in the image traverse a previously observed region C_t that is already modeled by the global map M_{t−1}. To ensure that the global map remains compact, these rays should be fused with the map to update the state of the region C_t.
In prior works [9], [10], [12], [17], the ray associated with each pixel in the image is cast into the global map to determine the location of the region C_t. Since each 640×480 depth image contains more than 300,000 rays, casting all of them requires a significant amount of time and many accesses to the off-chip DRAM where the map is stored. Since the rays from the depth image are accurately compressed into a local GMMap G_t, the geometric properties of G_t are exploited to i) identify the location of the region C_t in the global map M_{t−1} with little memory access, and ii) directly update the region C_t using the Gaussians in G_t to maintain the compactness and accuracy of the resulting global map M_t. Fig. 6 illustrates the entire procedure for fusing the local map G_t into the global map M_{t−1}. Recall from Section IV-B that the Gaussians in the local map G_t are already transformed into the world frame and organized using an R-tree. Using the bounding box (at the root node of the R-tree) that encloses G_t, the Gaussians in the previously observed region C_t can be extracted from the previous global map M_{t−1} in a single traversal through its R-tree (Line 2) without ray casting. After extraction, the Gaussians in the region C_t are directly fused with the local map G_t (Lines 5 to 25). Since the Gaussians in C_t and G_t are compact enough to be stored within the on-chip cache, the entire fusion process is expected to require few DRAM accesses. After completion, the fused local map (i.e., G_t ∪ C_t) is simply appended to the previous global map M_{t−1} in Line 26 to produce the updated global map M_t.

Fig. 6. Globally-consistent GMM fusion: constructing the current global GMMap M_t by fusing the local GMMap G_t into the previous global GMMap M_{t−1}. The bounding box (green rectangle) of the local map G_t is used to determine the Gaussians C_t in the global map M_{t−1} that overlap with G_t. Occupied and free GMMs are illustrated with red and blue ellipsoids, respectively. Dotted rectangles represent the bounding boxes at the leaf nodes of the R-tree.

Algorithm 3: Globally-Consistent GMM Fusion
Input: Local GMMap G_t, previous global GMMap M_{t−1}
Output: Updated global GMMap M_t
1  function updateGlobalMap(M_{t−1}, G_t)
   ⋮   (Lines 2-26: extract the region C_t, fuse it with G_t, and append the result to M_{t−1})
27     M_t ← updateRtree(M_t)
28     return M_t
Our fusion process (Lines 5 to 25) enhances the compactness of the local map G_t while maintaining its accuracy. For each Gaussian c ∈ C_t, the R-tree of the local map G_t is used to efficiently search for the set of Gaussians Q that intersect with, and represent the same type of region (i.e., free or occupied) as, c. In Line 14, the Gaussian c is fused with each neighbor q ∈ Q into a fusion candidate r using Eqn. (18). Similar to Alg. 2, the fusion candidate r is accepted in Line 18 if the Unscented Hellinger Distance [27] between the candidate r and its components {c, q} is less than a distance threshold α_h. Recall that our fusion criterion can exploit a wide range of scenarios (i.e., Fig. 5a and 5b) to enhance the compactness of the map. To maintain accuracy, we scale the distance threshold α_h in Line 17 by the geometric similarity s_r between the components q and c. In Line 16, the similarity measure s_r between these components is computed as the intersection-over-union ratio of their 3D bounding boxes.
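The extraction step (Line 2 of Alg. 3) amounts to a single box-intersection query against the global R-tree. The paper does not specify an R-tree library; the sketch below uses boost::geometry's rtree purely as an illustration.

```cpp
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <utility>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = bg::index;

using Point3 = bg::model::point<double, 3, bg::cs::cartesian>;
using Box3 = bg::model::box<Point3>;
using Entry = std::pair<Box3, std::size_t>;  // AABB + Gaussian index
using RTree = bgi::rtree<Entry, bgi::rstar<16>>;

// Extract the previously observed region C_t: all Gaussians in the
// global map whose bounding boxes intersect the root bounding box of
// the local map G_t, then remove them from the global R-tree so they
// can be fused with G_t and re-inserted afterwards.
std::vector<Entry> extractOverlap(RTree& globalMap, const Box3& localRoot) {
  std::vector<Entry> ct;
  globalMap.query(bgi::intersects(localRoot), std::back_inserter(ct));
  for (const Entry& e : ct) globalMap.remove(e);
  return ct;
}
```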
V. EXPERIMENTAL RESULTS & ANALYSIS
In this section, we compare our GMMap against current state-of-the-art frameworks with different types of occupancy representations: OctoMap 1 [9] (discrete), NDT-OM 2 [10] (semi-parametric), and BGKOctoMap-L 3 [17] (non-parametric). This comparison was performed using four diverse indoor and outdoor environments (i.e., Room, Warehouse, Soulcity, and Gascola) generated from sequences of depth images and ground-truth poses. Table I summarizes the characteristics of all four environments. In particular, Room (from the real-world TUM RGB-D dataset [28]) is a small structured environment that models crowded cubicles inside an office. Warehouse (also from the real-world TUM RGB-D dataset [28]) is a larger structured indoor environment captured using the longer and noisier range of the Kinect camera. In contrast, Soulcity (from the synthetic TartanAir dataset [29]) is a large structured outdoor environment in a city containing several multi-story buildings with intricate sets of walkways. Finally, Gascola (from the synthetic TartanAir dataset) is a large unstructured outdoor environment in a forest consisting of trees and a small hill.

1 https://github.com/OctoMap/octomap
2 https://github.com/OrebroUniversity/perception_oru/tree/port-kinetic
3 https://github.com/RobustFieldAutonomyLab/la3dm
To emulate an energy-constrained setting, all experiments were performed on the low-power NVIDIA Jetson TX2 platform in MAXP CORE ARM power mode [30]. All frameworks, implemented in C++, were compiled using the same settings. To reduce the memory overhead and map size, the floating point variables (and their associated operations) across all frameworks are stored as (and performed in) 32-bit single precision. Our single-core, multi-core, and GPU-accelerated GMMap implementations (visualized in Open3D [31]) can be obtained at https://github.com/mit-lean/GMMap.
Prior works achieve high mapping accuracy but are neither computationally nor memory efficient due to multi-pass processing of each depth image. In contrast, our GMMap is highly accurate and memory efficient. Across a diverse set of indoor and outdoor environments, the mapping accuracy of GMMap is comparable with prior works (Section V-A). In addition, our GMMap is highly parallelizable and can be constructed in real-time at up to 81 images per second, which is 4× to 146× higher than prior works on the low-power Jetson TX2 platform (Section V-B). Due to single-pass depth image compression in SPGF* and directly operating on Gaussians during map construction, our GMMap is extremely memory efficient. Compared with prior works (in Section V-C), our CPU implementation reduces i) the map size by at least 56%, ii) the memory overhead for storing input and temporary variables by at least 88%, and iii) the number of DRAM accesses by at least 78% during map construction. Thus, in Section V-D, the computational and memory efficiency of our GMMap reduces energy consumption by at least 69% compared with prior works.
A. Accuracy of Occupancy Estimation
In this section, we compare the accuracy of the proposed GMMap against NDT-OM, BGKOctoMap-L, and OctoMap. The hyperparameters of all frameworks are presented in Table II and are manually tuned to reduce the size of the maps without significant deviation from their peak accuracy. For GMMap, the hyperparameters are the unknown prior weight π_0 in Eqn. (4), the initial distance d_0 between partitioning planes in Eqn. (20), and the distance thresholds (α_h,free and α_h,occ) for fusing Gaussians in Algs. 2 and 3. For NDT-OM, BGKOctoMap-L, and OctoMap, the environment is voxelized, so the minimum voxel size is the hyperparameter. Since BGKOctoMap-L partitions the environment into equally-sized cubic blocks such that each block contains an octree (i.e., the test-data octrees defined in [17]), the depth of the octree in each block is an additional hyperparameter. To generate compact training data representing the free region, BGKOctoMap-L samples points along each sensor ray at an interval dictated by the free resolution hyperparameter. A visual comparison of all the above-mentioned frameworks is presented in Fig. 7.
We use the receiver operating characteristic (ROC) curve to compare the accuracy across all frameworks. To generate the ROC curves, the occupancy probability is queried from each map at the locations of all measurements used to construct the map. In particular, measurements along each sensor ray are the ground-truth free regions, and the measurement at the end of each sensor ray is the ground-truth occupied region. By sweeping the threshold for classifying occupied or free regions from each occupancy probability, the true positive rate (i.e., the proportion of correct classifications when predicting occupied regions) of each map varies with the false positive rate (i.e., the proportion of incorrect classifications when predicting free regions). In addition, the area under the curve (AUC) represents the probability that the map assigns a higher occupancy to an occupied region than to a free region. Thus, a highly accurate map should generate a ROC curve that tends towards the upper-left corner of the plot, achieving an AUC close to one.

Fig. 8 illustrates the ROC curve for each framework across all environments. The AUC associated with our GMMap is slightly higher than those of the other frameworks in structured indoor (i.e., Room and Warehouse) and outdoor (i.e., Soulcity) environments. These environments contain many locally planar surfaces that cannot be well represented by the cubic voxels in OctoMap and BGKOctoMap-L. Even though NDT-OM also utilizes Gaussians, they are constructed under the assumption that all measurements within each voxel belong to the same surface, and thus also suffer from voxelization artifacts when this assumption is invalid (e.g., at the corners of objects in Fig. 7i-b). Unlike the other frameworks, our GMMap does not require voxelization. During the SPGF* algorithm, the number of Gaussians and their associated parameters are accurately inferred from the continuity of planar surfaces in each depth image (e.g., see Fig. 7i-a).
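Since the AUC equals the probability that an occupied sample receives a higher occupancy than a free sample, it can also be computed directly as a normalized Mann-Whitney statistic instead of explicitly sweeping thresholds. A self-contained sketch (C++, our own helper, not part of the evaluated frameworks):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// AUC as the Mann-Whitney statistic: the fraction of (occupied, free)
// pairs where the occupied sample is ranked higher, ties counting 1/2.
// Each sample pairs the queried occupancy m(x) with its ground truth.
double computeAuc(const std::vector<std::pair<double, bool>>& samples) {
  std::vector<std::pair<double, bool>> s = samples;  // (m(x), occupied?)
  std::sort(s.begin(), s.end());
  double rankSumOcc = 0.0, nOcc = 0.0;
  for (std::size_t i = 0; i < s.size();) {
    std::size_t j = i;
    while (j < s.size() && s[j].first == s[i].first) ++j;
    const double avgRank = 0.5 * (i + j - 1) + 1.0;  // 1-based mid-rank
    for (std::size_t k = i; k < j; ++k)
      if (s[k].second) { rankSumOcc += avgRank; nOcc += 1.0; }
    i = j;
  }
  const double nFree = static_cast<double>(s.size()) - nOcc;
  return (rankSumOcc - nOcc * (nOcc + 1.0) / 2.0) / (nOcc * nFree);
}
```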
To achieve a compact representation and avoid modeling spurious measurements, SPGF* prunes away occupied Gaussians containing fewer than a certain number of measurements (i.e., 200 in our experiments). In the unstructured Gascola environment, Gaussians representing many distant small surfaces (such as leaves) are mistakenly treated as spurious and thus pruned away (as illustrated in Fig. 7ii-a). However, in Fig. 8d, the accuracy of the GMMap is still comparable with existing frameworks because the occupancy associated with pruned Gaussians can be partially recovered by regressing on other neighboring Gaussians. Unlike OctoMap, which ignores the spatial correlation of occupancy across voxels, our GMMap's continuous representation can often exploit such correlation to infer occupancy in regions modeled by sparse measurements. Similar to the other frameworks, our GMMap preserves the locations of unexplored regions. The GMMap visualizations and the preservation of unexplored regions for Warehouse and Soulcity are illustrated in Fig. 9.
Fig. 7. Visualization of the (a) GMMap, (b) NDT-OM, (c) BGKOctoMap-L, and (d) OctoMap overlaid over the ground-truth (i) Room (structured indoor) and (ii) Gascola (unstructured outdoor) environments. For GMMap and NDT-OM, occupied Gaussians are visualized as wireframes of 3D ellipsoids. For BGKOctoMap-L and OctoMap, wireframes of voxels with an occupancy probability greater than 0.9 are visualized.
B. Construction & Query Throughput
In this section, we compare the computational efficiency of our GMMap against other frameworks using the NVIDIA Jetson TX2 platform. The computational efficiency is evaluated in terms of the throughput for constructing the map (i.e., depth images per second) and also querying the map (i.e., locations per second). Table III summarizes these metrics for all frameworks across the four environments.
The NVIDIA Jetson TX2 platform contains a low-power ARM Cortex-A57 CPU with four cores and a Pascal GPU with two Streaming Multiprocessors (SMs). Due to computationally efficient GMM generation and fusion, our GMMap can be constructed at a throughput of 11 to 18 images per second using only one CPU core, which is 4× to 36× higher than the other frameworks. Since Scanline Segmentation (Line 12 in Alg. 1) dominates the amount of computation during map construction and can be executed concurrently across multiple rows of the depth image, our construction throughput can be significantly increased via parallelization. Using all four CPU cores, our multi-core implementation reaches a throughput of 31 to 60 images per second. Multi-core implementations of existing frameworks are either not publicly available or highly experimental. Even if these frameworks could be effectively parallelized across four cores, their throughputs (expected to be at most 4× higher) would still be much lower than that of our multi-core implementation. By concurrently executing Scanline Segmentation across four images, our GPU implementation of GMMap offers the highest construction throughput of 44 to 81 images per second, which is up to 2× higher than our CPU multi-core implementation. (The throughput is not four times higher even though four images are processed at the same time, because the other, sequential procedures of GMMap construction, i.e., segment and GMM fusion, start to dominate.)

Finally, Table III also compares the query throughput of our GMMap against existing frameworks. To emulate an energy-constrained setting during path planning, each map is queried at locations throughout all observed regions (i.e., no unexplored regions) in the environment using only a single CPU core. Recall that each map consists of geometric primitives (e.g., Gaussians or voxels) stored using a spatial data structure (e.g., grid, R-tree, or octree). For existing frameworks, either traversing the spatial data structure (i.e., accessing a voxel from a grid in NDT-OM) or inferring occupancy from primitives (i.e., reading the occupancy probability in BGKOctoMap-L and OctoMap) requires little computation, which leads to high query throughputs ranging from 9.3 × 10^5 to 4.2 × 10^6 locations per second. In our GMMap, however, both the R-tree traversal and Gaussian Mixture Regression (GMR) require more computation. Thus, the query throughput is lower than that of the other frameworks but still sufficiently high (ranging from 4.6 × 10^5 to 7.9 × 10^5 locations per second). If needed, the query throughput can be increased by accessing the map with multiple cores and/or partitioning the query locations into more localized sets for batch processing.
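The row-level parallelism described above maps naturally onto a parallel loop. The sketch below (C++ with OpenMP; Segments and segmentScanline are hypothetical stand-ins for Line 12 of Alg. 1) illustrates the pattern, not the released implementation.

```cpp
#include <vector>

struct Segments { /* line segments s_{v,i} of one scanline (placeholder) */ };

// Hypothetical per-row segmentation routine (Line 12 of Alg. 1).
Segments segmentScanline(const std::vector<float>& row);

// Scanline Segmentation dominates construction and is independent per
// row, so it parallelizes directly across CPU cores with OpenMP.
void segmentAllScanlines(const std::vector<std::vector<float>>& depth,
                         std::vector<Segments>& out) {
  const int V = static_cast<int>(depth.size());
  out.resize(V);
  #pragma omp parallel for schedule(dynamic)
  for (int v = 0; v < V; ++v)
    out[v] = segmentScanline(depth[v]);
  // Segment Fusion (Line 16) remains sequential across rows.
}
```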
C. Memory Footprint
In this section, we compare the memory efficiency of our GMMap against the other frameworks when executing on the NVIDIA Jetson TX2 platform. In addition to the map size, we are interested in the memory overhead (for storing input and temporary variables) and the amount of DRAM access per pixel (which dictates DRAM energy consumption) during map construction. Table III presents our results. First, we compare the size of the map, which includes the geometric primitives (e.g., Gaussians and/or voxels) and the spatial data structure (i.e., R-tree, grid, and/or octree), among all frameworks. Due to the compactness and strong representational power of its Gaussians, NDT-OM achieves comparable accuracy while reducing the map size by 61% to 96% compared with BGKOctoMap-L and OctoMap. However, the extent of each Gaussian in NDT-OM is restricted by the constant voxel size across the entire environment; thus, all of its Gaussians appear similarly sized, as shown in Fig. 7. By using SPGF* to construct Gaussians that appropriately adapt to the geometries of the occupied and free regions in the environment (see Fig. 7), our GMMap achieves comparable accuracy while reducing the map size by 56% to 73% compared with NDT-OM. Across all frameworks, GMMap requires the least amount of memory (167KB to 850KB) in all four environments.
In addition to the map size, we are interested in the memory overhead (defined as the peak memory usage minus the map size) for storing input and temporary variables during map construction. For a memory-efficient framework, the memory overhead should be insignificant compared with the map size. Unfortunately, existing frameworks are not memory efficient. For NDT-OM, the memory overhead mostly comprises the point cloud associated with each depth image (up to 3.6MB), retained to support a variety of edge cases during the recency-weighted covariance update [10]. For BGKOctoMap-L, the memory overhead mostly comprises subsampled measurements in free and occupied regions (up to 21MB) used to perform multi-pass BGK inference. For OctoMap, the memory overhead mostly comprises pointers to the large number of voxels intersected by the sensor rays of each depth image (up to 1MB).
In contrast, our GMMap requires very little memory overhead. Since SPGF* processes the depth image one pixel at a time in a single pass, only one pixel (for the single-core implementation) or four pixels (for the multi-core implementation) are stored in memory at any time. Thus, the memory overhead associated with map construction mostly comprises the compact line segments S from Scanline Segmentation in SPGF* and the local GMMap G_t generated at the output of Alg. 1. From Table III, the memory overhead of our single-core implementation is only 31KB to 106KB, which is at least 90% lower than the other frameworks. Since four scanlines are segmented concurrently in our multi-core implementation, the memory overhead increases to between 41KB and 128KB, which is still at least 88% lower than the other frameworks. Our GPU implementation, however, requires a much larger memory overhead (around 24MB) due to the allocation of large GPU-accessible buffers for transferring four depth images and their Scanline Segmentation outputs to and from the GPU.
Finally, we compare among all frameworks the average amount of DRAM access required to integrate each measurement (i.e., each pixel of the depth image) into the map. The amount of DRAM access correlates with the energy consumption of the DRAM and is computed by multiplying the number of last-level cache misses (obtained from hardware counters) by the size of a cache line. Recall that existing frameworks update the map by incrementally casting each measurement ray (more than 300,000 rays in a 640×480 depth image) into the current map. Since these rays diverge away from the sensor origin, memory accesses along them often lack spatial and temporal locality (especially if the map is too large to fit within the on-chip caches). Thus, the single-core implementations of existing frameworks require a significant number of DRAM accesses, ranging from 160 bytes to more than 1KB per pixel.
In contrast, our GMMap avoids ray casting by directly fusing Gaussians from a compact local map G_t with Gaussians in a compact global map M_{t−1} (see Fig. 6). Since the majority of the local and global maps is expected to remain in cache, our single-core implementation reduces DRAM access by at least 85% compared with existing frameworks, accessing only 14 to 54 bytes per pixel. Since multiple cores share the last-level cache, the number of cache misses increases for our multi-core implementation, which requires slightly more DRAM access, ranging from 27 to 78 bytes per pixel (still at least 78% lower than existing frameworks). Our GPU implementation requires much more DRAM access owing to the larger number of cache misses caused by the concurrent segmentation of all scanlines in four images. However, most DRAM accesses from our GPU implementation are coalesced (i.e., multiple accesses can be serviced with a single transaction). Thus, the energy consumption of the DRAM only slightly increases compared with our multi-core CPU implementation.
D. Energy Consumption
Table III summarizes the average energy consumption per depth image during map construction. Across all frameworks, the energy consumption of the DRAM is significant compared with that of the CPU and GPU, which underscores the importance of reducing memory overhead and access. Due to computationally efficient single-pass GMM creation and fusion, the energy consumed by the CPU in our single-core implementation is reduced by at least 71% compared with the other frameworks. By avoiding ray casting (and its associated DRAM accesses), the energy consumed by the DRAM in our single-core implementation is reduced by at least 68% compared with existing frameworks. Our multi-core and GPU implementations are even more energy efficient because the static power consumption of the CPU, GPU, and DRAM is amortized across multiple cores and/or SMs. Thus, the total energy consumption of our CPU single-core, CPU multi-core, and GPU implementations is reduced by at least 69%, 83%, and 84% compared with existing frameworks, respectively.
VI. CONCLUSION
In this work, we proposed GMMap, which uses a compact Gaussian Mixture Model to accurately model the continuous distribution of occupancy in 3D environments. Occupancy probability is inferred with Gaussian Mixture Regression, which is extended to retain unexplored regions. In contrast with prior works, novel algorithms are proposed to achieve real-time map construction on energy-constrained platforms while significantly reducing memory overhead and access. When benchmarked on the low-power NVIDIA Jetson TX2 platform across a diverse set of environments, GMMap can be constructed in real time at up to 60 images per second using the CPU and up to 81 images per second using the GPU, which is 4× to 146× higher than prior works. While achieving accuracy comparable to prior works, our CPU implementation of GMMap reduces the map size by at least 56%, the memory overhead by at least 88%, the DRAM access by at least 78%, and the energy consumption by at least 69%. Thus, to the best of our knowledge, GMMap not only enables real-time large-scale 3D mapping for energy-constrained robots for the first time, but also illustrates the significance of memory-efficient algorithms for enabling low-power autonomy on these robots.
Fig. 1. Illustration of GMMap's (a) memory-efficient construction procedure and (b) representation for MIT's Stata Center. (a) Construction of the GMMap from a depth image Z_t and pose T_t obtained at time t: each depth image is compressed into a local GMMap G_t, which is then fused with the current global GMMap M_{t−1}. (b) Visualization of the first floor of the MIT Stata Center and its GMMap representation consisting of GMMs representing occupied (red) and free (blue) regions; each Gaussian is visualized as an ellipsoid in 3D. The GMMap models a continuous distribution of occupancy in the Stata Center while requiring only 296KB to store.
Fig. 3. Single-pass processing of the depth image in SPGF* for constructing the set of Gaussians Q, where each element q_j ∈ Q contains one occupied Gaussian a_j and a free Gaussian basis f_j. In Scanline Segmentation, each row (indexed by v) of the depth image is partitioned into a set of line segments S = {s_{v,i}}, which are fused across rows to form q_j ∈ Q in Segment Fusion.
Fig. 8. Comparison of receiver operating characteristic (ROC) curves for the proposed GMMap against OctoMap, NDT-OM, and BGKOctoMap-L in four environments: (a) Room, (b) Warehouse, (c) Soulcity, and (d) Gascola. The area under the ROC curve (AUC) equals the probability that an occupied region is assigned a higher occupancy probability than a free region in the map.
Fig. 9. Visualization of (a) the point cloud overlaid with (b) its GMMap for the (i) Warehouse (structured indoor) and (ii) Soulcity (structured outdoor) environments. For ease of visualization, only occupied Gaussians are shown for the GMMap. In (c), the distribution of occupancy is visualized in the free regions (blue), unexplored regions (yellow), and occupied regions (red). The locations of the unexplored regions are well preserved across both environments.
TABLE I
Properties of all four environments used for evaluation.

| Environment | Dimensions (m) | Images | Depth Image Resolution | Avg. Sensor Range (m) |
|---|---|---|---|---|
| Room (freiburg1_room) | 11.28 × 12.05 × 3.45 | 1311 | 640×480 | 0.97 |
| Warehouse (freiburg2_pioneer_slam) | 23.52 × 17.90 × 4.29 | 2169 | 640×480 | 1.13 |
| Soulcity | 73.90 × 62.41 × 42.69 | 1083 | 640×480 | 10.85 |
| Gascola | 59.04 × 52.93 × 33.71 | 382 | 640×480 | 4.06 |
TABLE II
Parameters used in GMMap, NDT-OM, BGKOctoMap-L, and OctoMap across all four environments.

| Environment | GMMap: Unknown Prior Weight (π_0) | GMMap: Partition Plane Distance (d_0) | GMMap: Free Gaussian Fusion Threshold (α_h,free) | GMMap: Occupied Gaussian Fusion Threshold (α_h,occ) | NDT-OM: Voxel Size | BGKOctoMap-L: Voxel Size | BGKOctoMap-L: Free Resolution | BGKOctoMap-L: Block Octree Depth | OctoMap: Voxel Size |
|---|---|---|---|---|---|---|---|---|---|
| Room | 500,000 | 0.5m | 0.26 | 0.70 | 0.4m | 0.1m | 0.3m | 3 | 0.1m |
| Warehouse | 500,000 | 0.5m | 0.26 | 0.70 | 0.5m | 0.1m | 0.3m | 3 | 0.2m |
| Soulcity | 500,000 | 0.5m | 0.63 | 1.41 | 1.2m | 0.3m | 3.0m | 3 | 0.3m |
| Gascola | 500,000 | 0.6m | 0.63 | 1.41 | 1.2m | 0.3m | 3.0m | 3 | 0.3m |
TABLE III
Comparison of the GMMap against prior works using the NVIDIA Jetson TX2. All frameworks achieve comparable accuracy. (C = CPU core)

| Environment | Framework | Compute Resource | Construction (images/s) | Query* (10^6 locations/s) | Map Size (KB) | Overhead (KB) | DRAM Access (bytes/pixel) | CPU & GPU (mJ/image) | DRAM (mJ/image) | Total (mJ/image) |
|---|---|---|---|---|---|---|---|---|---|---|
| Room | GMMap | GPU & 4 C | 81 | 0.79 | 167 | 24,563** | 477 | 41 | 17 | 58 |
| Room | GMMap | 4 C | 60 | 0.79 | 176 | 41 | 27 | 36 | 16 | 52 |
| Room | GMMap | 1 C | 18 | 0.79 | 176 | 31 | 14 | 59 | 51 | 110 |
| Room | NDT-OM | 1 C | 5.0 | 3.5 | 426 | 3,146 | 160 | 202 | 157 | 359 |
| Room | BGKOctoMap-L | 1 C | 2.8 | 0.93 | 4,935 | 7,101 | 242 | 352 | 272 | 624 |
| Room | OctoMap | 1 C | 3.6 | 4.0 | 2,190 | 629 | 164 | 298 | 209 | 507 |
| Warehouse | GMMap | GPU & 4 C | 73 | 0.52 | 268 | 24,596** | 492 | 43 | 16 | 59 |
| Warehouse | GMMap | 4 C | 58 | 0.51 | 269 | 56 | 30 | 37 | 14 | 51 |
| Warehouse | GMMap | 1 C | 18 | 0.51 | 269 | 41 | 20 | 59 | 41 | 100 |
| Warehouse | NDT-OM | 1 C | 3.7 | 3.7 | 614 | 3,436 | 199 | 273 | 209 | 482 |
| Warehouse | BGKOctoMap-L | 1 C | 0.5 | 1.4 | 13,811 | 21,265 | 940 | 1,888 | 1,463 | 3,351 |
| Warehouse | OctoMap | 1 C | 4.3 | 4.2 | 1,590 | 606 | 143 | 256 | 176 | 433 |
| Soulcity | GMMap | GPU & 4 C | 60 | 0.46 | 850 | 24,740** | 625 | 56 | 23 | 79 |
| Soulcity | GMMap | 4 C | 31 | 0.47 | 838 | 128 | 76 | 73 | 25 | 98 |
| Soulcity | GMMap | 1 C | 11 | 0.47 | 838 | 106 | 44 | 92 | 66 | 158 |
| Soulcity | NDT-OM | 1 C | 3.1 | 3.9 | 1,925 | 4,391 | 372 | 324 | 248 | 572 |
| Soulcity | BGKOctoMap-L | 1 C | 0.8 | 1.0 | 23,265 | 5,502 | 596 | 1,204 | 926 | 2,130 |
| Soulcity | OctoMap | 1 C | 2.1 | 4.1 | 10,452 | 1,068 | 644 | 485 | 373 | 858 |
| Gascola | GMMap | GPU & 4 C | 44 | 0.62 | 362 | 24,644** | 1,048 | 69 | 36 | 105 |
| Gascola | GMMap | 4 C | 32 | 0.62 | 361 | 79 | 78 | 73 | 29 | 102 |
| Gascola | GMMap | 1 C | 11 | 0.62 | 361 | 63 | 54 | 97 | 81 | 178 |
| Gascola | NDT-OM | 1 C | 2.6 | 3.9 | 1,339 | 4,392 | 358 | 383 | 291 | 674 |
| Gascola | BGKOctoMap-L | 1 C | 0.4 | 1.1 | 16,736 | 9,993 | 899 | 2,407 | 1,840 | 4,248 |
| Gascola | OctoMap | 1 C | 1.6 | 3.9 | 9,376 | 760 | 1,136 | 634 | 494 | 1,129 |

* Unlike the other metrics, query throughput is computed using a single CPU core.
** High memory overhead due to the necessary allocation of large GPU-accessible buffers (used to store input images and the output results of Scanline Segmentation) for the concurrent processing of four images. These buffers are not required for the CPU-only implementations.
REFERENCES

[1] K. P. Valavanis and G. J. Vachtsevanos, Handbook of Unmanned Aerial Vehicles. Springer, 2015, vol. 2077.
[2] M. Keennon, K. Klingebiel, and H. Won, "Development of the nano hummingbird: A tailless flapping wing micro air vehicle," in 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, 2012, p. 588.
[3] R. He, S. Sato, and M. Drela, "Design of single-motor nano aerial vehicle with a gearless torque-canceling mechanism," in 46th AIAA Aerospace Sciences Meeting and Exhibit, 2008, p. 1417.
[4] M. Horowitz, "1.1 Computing's energy problem (and what we can do about it)," in 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014, pp. 10-14.
[5] B. Yamauchi, "A frontier-based approach for autonomous exploration," in IEEE International Symposium on Computational Intelligence in Robotics and Automation, 1997, pp. 146-151.
[6] Z. Zhang, T. Henderson, S. Karaman, and V. Sze, "FSMI: Fast computation of Shannon mutual information for information-theoretic mapping," The International Journal of Robotics Research, vol. 39, no. 9, pp. 1155-1177, 2020.
[7] T. Henderson, V. Sze, and S. Karaman, "An efficient and continuous approach to information-theoretic exploration," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 8566-8572.
[8] S. Karaman and E. Frazzoli, "Sampling-based algorithms for optimal motion planning," The International Journal of Robotics Research, vol. 30, no. 7, pp. 846-894, 2011.
[9] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, "OctoMap: An efficient probabilistic 3D mapping framework based on octrees," Autonomous Robots, vol. 34, no. 3, pp. 189-206, 2013.
[10] J. P. Saarinen, H. Andreasson, T. Stoyanov, and A. J. Lilienthal, "3D normal distributions transform occupancy maps: An efficient representation for mapping in dynamic environments," The International Journal of Robotics Research, vol. 32, no. 14, pp. 1627-1644, 2013.
[11] V. Guizilini and F. Ramos, "Towards real-time 3D continuous occupancy mapping using Hilbert maps," The International Journal of Robotics Research, vol. 37, no. 6, pp. 566-584, 2018.
[12] S. Srivastava and N. Michael, "Efficient, multifidelity perceptual representations via hierarchical Gaussian mixture models," IEEE Transactions on Robotics, vol. 35, no. 1, pp. 248-260, 2018.
[13] B. Eckart, K. Kim, A. Troccoli, A. Kelly, and J. Kautz, "Accelerated generative models for 3D point cloud data," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5497-5505.
[14] C. O'Meadhra, W. Tabib, and N. Michael, "Variable resolution occupancy mapping using Gaussian mixture models," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 2015-2022, 2018.
[15] A. Dhawale and N. Michael, "Efficient parametric multi-fidelity surface mapping," in Robotics: Science and Systems (RSS), vol. 2, no. 3, 2020, p. 5.
[16] K. Goel, N. Michael, and W. Tabib, "Probabilistic point cloud modeling via self-organizing Gaussian mixture models," IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2526-2533, 2023.
[17] K. Doherty, T. Shan, J. Wang, and B. Englot, "Learning-aided 3-D occupancy mapping with Bayesian generalized kernel inference," IEEE Transactions on Robotics, pp. 1-14, 2019. [Online]. Available: https://doi.org/10.1109/tro.2019.2912487
[18] P. Z. X. Li, S. Karaman, and V. Sze, "Memory-efficient Gaussian fitting for depth images in real time," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8003-8009.
[19] A. Elfes, "Sonar-based real-world mapping and navigation," IEEE Journal on Robotics and Automation, vol. 3, no. 3, pp. 249-265, 1987.
[20] B.-J. Ho, P. Sodhi, P. Teixeira, M. Hsiao, T. Kusnur, and M. Kaess, "Virtual occupancy grid map for submap-based pose graph SLAM and planning in 3D environments," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 2175-2182.
[21] S. T. O'Callaghan and F. T. Ramos, "Gaussian process occupancy maps," The International Journal of Robotics Research, vol. 31, no. 1, pp. 42-62, 2012.
[22] J. Wang and B. Englot, "Fast, accurate Gaussian process occupancy maps via test-data octrees and nested Bayesian fusion," in 2016 IEEE International Conference on Robotics and Automation (ICRA), May 2016, pp. 1003-1010.
[23] F. Ramos and L. Ott, "Hilbert maps: Scalable continuous occupancy mapping with stochastic gradient descent," The International Journal of Robotics Research, vol. 35, no. 14, pp. 1717-1730, 2016.
[24] H. G. Sung, Gaussian Mixture Regression and Classification. Rice University, 2004.
[25] A. Guttman, "R-trees: A dynamic index structure for spatial searching," in Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data, 1984, pp. 47-57.
[26] D. W. Scott and W. F. Szewczyk, "From kernels to mixtures," Technometrics, vol. 43, no. 3, pp. 323-335, 2001.
[27] M. Kristan and A. Leonardis, "Multivariate online kernel density estimation," in Computer Vision Winter Workshop, 2010, pp. 77-86.
[28] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems," in Proc. of the International Conference on Intelligent Robot Systems (IROS), Oct. 2012.
[29] W. Wang, D. Zhu, X. Wang, Y. Hu, Y. Qiu, C. Wang, Y. Hu, A. Kapoor, and S. Scherer, "TartanAir: A dataset to push the limits of visual SLAM," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 4909-4916.
[30] "Jetson Download Center," NVIDIA Developer. Available: https://developer.nvidia.com/jetson-tx2-nx-system-module-data-sheet
[31] Q.-Y. Zhou, J. Park, and V. Koltun, "Open3D: A modern library for 3D data processing," arXiv:1801.09847, 2018.

VII. BIOGRAPHY
he worked in the High-Speed Converters Group at Analog Devices, Toronto, as an integrated circuit engineer. His research focuses on the co-design of algorithms and specialized hardware for localization, mapping. Peter Zhi, Xuan Li, Canada. Student Member, IEEE) received the B.A.Sc. in Engineering Science from the University of Torontoand path-planning on energyconstrained miniature robotsPeter Zhi Xuan Li (Student Member, IEEE) re- ceived the B.A.Sc. in Engineering Science from the University of Toronto, Canada, in 2018. Between 2016 and 2017, he worked in the High-Speed Con- verters Group at Analog Devices, Toronto, as an in- tegrated circuit engineer. His research focuses on the co-design of algorithms and specialized hardware for localization, mapping, and path-planning on energy- constrained miniature robots.
He is currently an Associate Professor of Aeronautics and Astronautics with MIT. His research interests include the broad areas of robotics and control theory. In particular, he is focusing on the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. Dr. Karaman was the recipient of the IEEE Robotics and Automation Society Early Career Award. Sertac Karaman, 2007, the S.M. degree in mechanical engineering and the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT). Istanbul, Turkey; Cambridge, MA, USA2017, the Army Research Office Young Investigator Award, in 2015, the National Science Foundation Faculty Career Development (CAREER) Award. in 2014, the AIAA Wright Brothers Graduate Award, in 2012, and the NVIDIA FellowshipSertac Karaman (Member, IEEE) received the B.S. degrees in mechanical engineering and computer engineering from the Istanbul Technical University, Istanbul, Turkey, in 2007, the S.M. degree in me- chanical engineering and the Ph.D. degree in elec- trical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cam- bridge, MA, USA, in 2009 and 2012, respectively. He is currently an Associate Professor of Aeronau- tics and Astronautics with MIT. His research inter- ests include the broad areas of robotics and control theory. In particular, he is focusing on the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. Dr. Karaman was the recipient of the IEEE Robotics and Automation Society Early Career Award, in 2017, the Office of Naval Research Young Investigator Award, in 2017, the Army Research Office Young Investigator Award, in 2015, the National Science Foundation Faculty Career Development (CAREER) Award, in 2014, the AIAA Wright Brothers Graduate Award, in 2012, and the NVIDIA Fellowship, in 2011.
Vivienne Sze (Senior Member, IEEE) received the B.A.Sc. (Hons) degree in electrical engineering from the University of Toronto, Toronto, ON, Canada, in 2004, and the S.M. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, MA, in 2006 and 2010, respectively. In 2011, she received the Jin-Au Kong Outstanding Doctoral Thesis Prize in Electrical Engineering at MIT. She is an Associate Professor at MIT in the Electrical Engineering and Computer Science Department. Her research interests include computing systems that enable energy-efficient machine learning, computer vision, and video compression/processing for various applications, including autonomous navigation, digital health, and the Internet of Things. Prior to joining MIT, she was a Member of the Technical Staff in the Systems and Applications R&D Center at Texas Instruments (TI), Dallas, TX, where she designed low-power algorithms and architectures for video coding. She also represented TI in the Joint Collaborative Team on Video Coding (JCT-VC).
Dr. Sze was a recipient of the Air Force Young Investigator Research Program Award, the DARPA Young Faculty Award, the Edgerton Faculty Award, several faculty awards from Google, Facebook, and Qualcomm, the 2021 University of Toronto Engineering Mid-Career Achievement Award, and the 2020 ACM-W Rising Star Award, and a co-recipient of the 2018 Symposium on VLSI Circuits Best Student Paper Award, the 2017 CICC Outstanding Invited Paper Award, and the 2016 IEEE Micro Top Picks Award. She was a member of the JCT-VC team that received the Primetime Engineering Emmy Award for the development of the HEVC video compression standard.
| [
"https://github.com/mit-lean/GMMap.",
"https://github.com/OctoMap/octomap",
"https://github.com/OrebroUniversity/perception",
"https://github.com/RobustFieldAutonomyLab/la3dm"
] |
[
"Efficient Anomaly Detection with Budget Annotation Using Semi-Supervised Residual Transformer",
"Efficient Anomaly Detection with Budget Annotation Using Semi-Supervised Residual Transformer"
] | [
"Hanxi Li \nJiangxi Normal University\nJiangxiChina\n\nZhejiang University\nZhejiangChina\n",
"Jingqi Wu \nJiangxi Normal University\nJiangxiChina\n",
"Hao Chen \nZhejiang University\nZhejiangChina\n",
"Mingwen Wang \nJiangxi Normal University\nJiangxiChina\n",
"Chunhua Shen \nZhejiang University\nZhejiangChina\n"
] | [
"Jiangxi Normal University\nJiangxiChina",
"Zhejiang University\nZhejiangChina",
"Jiangxi Normal University\nJiangxiChina",
"Zhejiang University\nZhejiangChina",
"Jiangxi Normal University\nJiangxiChina",
"Zhejiang University\nZhejiangChina"
] | [] | Anomaly Detection (AD) is challenging as usually only the normal samples are seen during training and the detector needs to discover anomalies on-the-fly. The recently proposed deep-learning-based approaches could somewhat alleviate the problem, but there is still a long way to go in obtaining an industrial-class anomaly detector for real-world applications. On the other hand, in some particular AD tasks, a few anomalous samples are labeled manually for achieving higher accuracy. However, this performance gain is at the cost of considerable annotation efforts, which can be intractable in many practical scenarios. In this work, the above two problems are addressed in a unified framework. Firstly, inspired by the success of the patch-matching-based AD algorithms, we train a sliding vision transformer over the residuals generated by a novel position-constrained patch-matching. Secondly, the conventional pixel-wise segmentation problem is cast into a block-wise classification problem. Thus the sliding transformer can attain even higher accuracy with much less annotation labor. Thirdly, to further reduce the labeling cost, we propose to label the anomalous regions using only bounding boxes. The unlabeled regions caused by the weak labels are effectively exploited using a highly-customized semi-supervised learning scheme equipped with two novel data augmentation methods. The proposed method, termed "Semi-supervised RESidual Transformer" or "SemiREST" in short, outperforms all the state-of-the-art approaches using all the evaluation metrics in both the unsupervised and supervised scenarios. On the popular MVTec-AD dataset, our SemiREST algorithm obtains the Average Precision (AP) of 81.2% (vs. the previous best result of 75.8% AP) in the unsupervised condition and 84.4% AP (vs. the previous best result of 78.6% AP) for supervised anomaly detection. Surprisingly, with the bounding-box-based semi-supervision, SemiREST still outperforms the SOTA methods with full supervision (83.8% AP vs. the previous SOTA of 78.6% AP) on MVTec-AD. Similar precision advantages are also observed on the other two well-known AD datasets, i.e., BTAD and KSDD2. Overall, the proposed SemiREST generates new records of AD performances while incurring a remarkably low annotation cost: it is not only cheaper in annotation but also performs better than most recent methods. The code of this work is available at: https://github.com/BeJane/Semi_REST | null | [
"https://export.arxiv.org/pdf/2306.03492v1.pdf"
] | 259,089,112 | 2306.03492 | ad4a998b7b2af6e607ea917d5e6974b0a097cc54 |
Efficient Anomaly Detection with Budget Annotation Using Semi-Supervised Residual Transformer
Hanxi Li
Jiangxi Normal University
JiangxiChina
Zhejiang University
ZhejiangChina
Jingqi Wu
Jiangxi Normal University
JiangxiChina
Hao Chen
Zhejiang University
ZhejiangChina
Mingwen Wang
Jiangxi Normal University
JiangxiChina
Chunhua Shen
Zhejiang University
ZhejiangChina
Efficient Anomaly Detection with Budget Annotation Using Semi-Supervised Residual Transformer
Keywords: Anomaly detection · Vision Transformer · Semi-supervised learning
Anomaly Detection (AD) is challenging as usually only the normal samples are seen during training and the detector needs to discover anomalies on-the-fly. The recently proposed deep-learning-based approaches could somewhat alleviate the problem, but there is still a long way to go in obtaining an industrial-class anomaly detector for real-world applications. On the other hand, in some particular AD tasks, a few anomalous samples are labeled manually for achieving higher accuracy. However, this performance gain is at the cost of considerable annotation efforts, which can be intractable in many practical scenarios. In this work, the above two problems are addressed in a unified framework. Firstly, inspired by the success of the patch-matching-based AD algorithms, we train a sliding vision transformer over the residuals generated by a novel position-constrained patch-matching. Secondly, the conventional pixel-wise segmentation problem is cast into a block-wise classification problem. Thus the sliding transformer can attain even higher accuracy with much less annotation labor. Thirdly, to further reduce the labeling cost, we propose to label the anomalous regions using only bounding boxes. The unlabeled regions caused by the weak labels are effectively exploited using a highly-customized semi-supervised learning scheme equipped with two novel data augmentation methods. The proposed method, termed "Semi-supervised RESidual Transformer" or "SemiREST" in short, outperforms all the state-of-the-art approaches using all the evaluation metrics in both the unsupervised and supervised scenarios. On the popular MVTec-AD dataset, our SemiREST algorithm obtains the Average Precision (AP) of 81.2% (vs. the previous best result of 75.8% AP) in the unsupervised condition and 84.4% AP (vs. the previous best result of 78.6% AP) for supervised anomaly detection. Surprisingly, with the bounding-box-based semi-supervision, SemiREST still outperforms the SOTA methods with full supervision (83.8% AP vs. the previous SOTA of 78.6% AP) on MVTec-AD. Similar precision advantages are also observed on the other two well-known AD datasets, i.e., BTAD and KSDD2. Overall, the proposed SemiREST generates new records of AD performances while incurring a remarkably low annotation cost: it is not only cheaper in annotation but also performs better than most recent methods. The code of this work is available at: https://github.com/BeJane/Semi_REST

(This work was mainly done when Hanxi Li was visiting Zhejiang University. ⋆ Corresponding author. † These authors contributed equally to this work.)
Introduction
Product quality control is crucial in many manufacturing processes. Manual inspection is expensive and unreliable considering the limited time budget for inspection on a running assembly line. As a result, automatic defect inspection is compulsory in modern manufacturing industries Cao et al. (2023b); Ni et al. (2021); Niu et al. (2021); Tao et al. (2022). Given training samples with sufficient supervision, it is straightforward to perform defect detection using off-the-shelf detection or segmentation algorithms Cheng et al. (2018); Wang et al. (2022a,b,c). Unfortunately, in most practical cases, one can only obtain much fewer "anomaly" samples than normal ones, and the anomaly pattern varies dramatically over different samples. As a result, the defect detection task is often cast into an Anomaly Detection (AD) problem Bergmann et al. (2019); Mishra et al. (2021), with only normal instances supplied for training.

Fig. 1 The comparison of our framework with the conventional paradigm. The annotation strength can be divided into three levels, as shown in the three rows on the right-hand side of the figure. Top (conventional): pixel-wise segmentation for anomaly detection; middle (proposed): solving the AD problem as block-wise binary classification, with negative, positive, and ignored samples shown in blue, orange, and gray, respectively; bottom (proposed): a more aggressive semi-supervised manner with only bounding boxes labeled for the anomalous regions. The numbers in parentheses denote the decreasing orders of magnitude (from 10^4 to 10^0) of the annotation counts under the three supervision conditions. Best viewed in color.
During the past few years, a great amount of effort has been invested in developing better AD algorithms. First of all, inspired by the pioneering works Scholkopf et al. (2000); Schölkopf et al. (2001), a large part of the AD literature focuses on modeling normal images or patches as a single class, while the abnormal ones are detected as outliers Chen et al. (2022); Defard et al. (2021); Liznerski et al. (2021); Massoli et al. (2021); Ruff et al. (2018); Yi and Yoon (2020); Zhang et al. (2022). Normalizing-flow Dinh et al. (2014, 2016); Kingma and Dhariwal (2018) based AD algorithms propose to further map the normal class to fit a Gaussian distribution in the feature space Lei et al. (2023); Rudolph et al. (2021); Tailanian et al. (2022); Yu et al. (2021). Secondly, by observing that most normal samples can be effectively reconstructed by other normal ones, a number of researchers have tried to conduct the reconstruction in various spaces for the test images or image patches, and estimate the anomaly scores based on the reconstruction residuals Dehaene and Eline (2020); Hou et al. (2021); Shi et al. (2021); Wu et al. (2021); Zong et al. (2018). In addition to the reconstruction, some more sophisticated anomaly detectors Li et al. (2021); Yang et al. (2023); Zavrtanik et al. (2021) also learn discriminative models with "pseudo anomalies" for higher accuracy. At the other end of the spectrum, however, the seminal work Roth et al. (2022) shows that a well-designed nearest-neighbor matching process can already achieve sufficiently good detection accuracies. Thirdly, to obtain distinct responses to anomalous inputs, distillation-based approaches Bergmann et al. (2022); Deng and Li (2022); Salehi et al. (2021); Zhang et al. (2023b) distill a "teacher network" (which is pretrained on a large but "neutral" dataset) into a "student network" based only on normal samples. The anomaly score can then be calculated according to the response differences between the two networks. Last but not least, inspired by the prior observations that the position information of image patches may benefit the AD process Bae et al. (2022); Gudovskiy et al. (2022), researchers further propose to geometrically align the test images with the reference ones and perform local comparisons between the aligned images and their prototypes Huang et al. (2022); Liu et al. (2023a).
Despite the mainstream "unsupervised" fashion, in Ding et al. (2022); Yao et al. (2023); Zhang et al. (2023a), anomaly detectors are trained in a "few-shot" manner and, unsurprisingly, higher performances are achieved compared with their "zero-shot" counterparts. In addition, the traditional surface defect detection tasks Božič et al. (2021); Huang et al. (2020); Tabernik et al. (2020); Zhang et al. (2021b) are mostly solved in a fully-supervised fashion. Considering that, in practice, the detection algorithm needs to "warm up" for a certain duration along with the running assembly line, abnormal samples are not difficult to obtain in this scenario, and thus this "supervised" setting is actually valid.
In this paper, we claim that, compared with the lack of training anomalies, a more realistic problem to address is the limited time budget for labeling the newly-obtained anomalous images. Accordingly, we propose a novel AD algorithm that simultaneously enjoys high detection robustness and a low annotation cost. The high-level concept of the proposed algorithm is illustrated in Figure 1. Compared with the conventional pixel-wise supervision, the proposed method can utilize weak labels, i.e., the block labels or the bounding-box labels (as shown in the bottom part of Figure 1), to achieve even higher AD performances.
Specifically, we consider the AD task as a block-wise classification problem (as shown in the right-middle part of Figure 1), and thus one only needs to label hundreds of anomalous blocks instead of thousands of anomalous pixels on a defective image. The patch-matching residual is generated for each block, constrained by the block's position code. The blocks are then classified by a sliding Swin Transformer (Liu et al., 2021) that receives the patch-matching residuals as its tokens. The final anomaly score of a block is obtained via a bagging process over all the Swin Transformers whose input involves this block. Thanks to the novel features and the bagging strategy, the proposed algorithm performs better than those with pixel-level supervision. More aggressively, we propose to further reduce the annotation cost by labeling the anomalous regions with only bounding boxes. As can be seen in the right-bottom part of Figure 1, the boxes cover all the anomalous regions of the image, so the outside blocks (shown in green) can be directly used as negative samples. On the other hand, the identities of the inside blocks (shown in yellow) are not given and seem useless in the conventional learning paradigm. In this work, nonetheless, the unlabeled blocks are utilized effectively via a modified MixMatch (Berthelot et al., 2019b) algorithm that is smartly customized for the transformer-based anomaly detector. The productive utilization of the unlabeled information reduces the performance drop from full supervision.
We term the proposed AD algorithm "Semi-supervised RESidual Transformer", or "SemiREST" in short. Our experiments in this paper verify the superiority of the proposed method: SemiREST outperforms all the state-of-the-art AD algorithms on three well-known datasets (MVTec-AD (Bergmann et al., 2019), BTAD (Mishra et al., 2021) and KolektorSDD2 (Božič et al., 2021)) in both the mainstream "unsupervised" setting and the "few-shot" setting. In addition, with only bounding-box annotations, SemiREST also beats all the SOTA methods supplied with fully-supervised pixel labels, under all the employed evaluation metrics. In summary, our main contribution is threefold, as listed below.
- Cheaper in terms of annotation: From a realistic perspective, we suggest that, compared with the "one-class" compatibility, a more desirable property for AD algorithms is the efficient use of annotation information. Accordingly, this work proposes two types of low-cost annotations: the block-wise labels and the bounding-box labels. To the best of our knowledge, this is the first time that block-wise labels are used for anomaly detection, and the usage of bounding-box labels in the context of semi-supervised AD learning is also novel.
- Better: We design the SemiREST algorithm based on a modified Swin Transformer Liu et al. (2021) and the patch-matching residuals generated with position-code constraints. The proposed algorithm consistently surpasses the SOTA methods by large margins, on the three most acknowledged datasets and under all the supervision conditions.
- Cheaper & Better: Given the cheap bounding-box labels, SemiREST can effectively utilize the unlabeled features thanks to the highly-customized MixMatch (Berthelot et al., 2019b) algorithm proposed in this work. Merely with these lightweight annotations, SemiREST still outperforms all the competing SOTA methods fed with fully annotated pixel-wise labels.
Related work
Anomaly Detection via Patch Feature Matching
A straightforward way of realizing AD is to compare the test patch with normal patches. Test patches similar to the normal ones are considered unlikely to be anomalies, while those with low similarities are assigned high anomaly scores. The PatchCore algorithm Roth et al. (2022) proposes a coreset-subsampling scheme to build a "memory bank" of patch features, which are obtained by smoothing the neutral deep features pre-learned on ImageNet (Deng et al., 2009; Russakovsky et al., 2015). The anomaly score is then calculated based on the Euclidean distance between the test patch feature and its nearest neighbor in the "memory bank". Despite its simplicity, PatchCore performs dramatically well on the MVTec-AD dataset (Bergmann et al., 2019).
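For intuition, the scoring step of such patch-matching detectors can be written in a few lines. The sketch below uses plain random subsampling as a stand-in for the coreset selection of Roth et al. (2022); the function and array names are illustrative and not taken from any released implementation.

```python
import numpy as np

def build_memory_bank(normal_feats, keep_ratio=0.1, seed=0):
    """Subsample patch features from normal images (stand-in for coreset selection)."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(normal_feats) * keep_ratio))
    idx = rng.choice(len(normal_feats), size=n_keep, replace=False)
    return normal_feats[idx]  # (T, d)

def patchcore_scores(test_feats, memory_bank):
    """Anomaly score of each test patch = L2 distance to its nearest normal patch."""
    d2 = ((test_feats[:, None, :] - memory_bank[None, :, :]) ** 2).sum(-1)  # (N, T)
    return np.sqrt(d2.min(axis=1))  # (N,)

# toy usage with random features
normal = np.random.randn(1000, 64).astype(np.float32)
bank = build_memory_bank(normal)
test = np.random.randn(50, 64).astype(np.float32)
print(patchcore_scores(test, bank).shape)  # (50,)
```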
The good performance of PatchCore encourages researchers to develop better variants based on it. The PAFM algorithm applies patch-wise adaptive coreset sampling to improve the speed. (Bae et al., 2022) introduces position and neighborhood information to refine the patch-feature comparison. GraphCore utilizes graph representations to customize PatchCore for the few-shot setting. (Saiku et al., 2022) modifies PatchCore by compressing the memory bank via k-means clustering. (Zhu et al., 2022) combines PatchCore and Defect GAN (Zhang et al., 2021a) for higher AD performances.
These variants, though achieving slightly better performances, all fail to notice the potential value of the intermediate information generated by the patch matching. In this work, we use the matching residuals as the input tokens of our transformer model. The individual and mutual information of the residuals is effectively exploited, and new SOTA performances are then obtained.
Swin Transformer for Anomaly Detection
As a variation of the Vision Transformer (ViT) (Dosovitskiy et al., 2021), the Swin Transformer proposes a hierarchical transformer with a shifted-windowing scheme, which not only introduces several visual priors into the transformer but also reduces computation costs. The Swin Transformer and its variants illustrate remarkable performances in various computer vision tasks, such as semantic segmentation (Cao et al., 2023a; Hatamizadeh et al., 2022; Huang et al., 2021), instance segmentation (Dong et al., 2021; Li et al., 2022) and object detection (Dai et al., 2021; Liang et al., 2022; Xu et al., 2021).
In the literature on anomaly detection, some recently proposed methods also employ the Swin Transformer as the backbone network. (Üzen et al., 2022) develops a hybrid structure decoder that combines convolution layers and the Swin Transformer. (Gao et al., 2022) improves the original shifted-windowing scheme of the Swin Transformer for surface defect detection. Despite the success of Swin Transformer models in other domains, the Swin-Transformer-based AD algorithms could hardly outperform the SOTA methods on the most acknowledged datasets, such as MVTec-AD (Bergmann et al., 2019), BTAD (Mishra et al., 2021) and KolektorSDD2 (Božič et al., 2021).
In this paper, we successfully tame the Swin Transformer for the small training sets of AD problems by introducing a series of novel modifications.
MixMatch and Weak Labels based on Bounding Boxes
Semi-supervised Learning (SSL) is always attractive since it can save massive labeling labor. Many efforts have been devoted to utilizing the information from unlabeled data (Berthelot et al., 2019a,b; Sohn et al., 2020; Wang et al., 2023; Zhu and Goldberg, 2009), mainly focusing on the generation of high-quality pseudo labels. Inspired by the seminal works (Yun et al., 2019; Zhang et al., 2018) on data augmentation, MixMatch (Berthelot et al., 2019b) proposes a single-loss SSL method that relies on a smart fusion process between labeled and unlabeled samples. In this way, MixMatch enjoys high accuracy and a relatively simple training scheme.
On the other hand, in the literature on semantic segmentation, bounding boxes are usually used as weak supervision to save labeling costs. (Hsu et al., 2019) exploits the tightness prior of bounding boxes to generate the positive and negative bags for multiple instance learning (MIL), which is integrated into a fully supervised instance segmentation network. (Kervadec et al., 2020) integrates the tightness prior and a global background-emptiness constraint derived from bounding-box annotations into weakly supervised semantic segmentation of medical images. (Lee et al., 2021) proposes a bounding box attribution map (BBAM) to produce pseudo ground truth for weakly supervised semantic and instance segmentation.
In this work, within the block-wise classification framework, MixMatch is smartly employed to exploit the information of the unlabeled blocks brought by the weak supervision of bounding boxes. This combination of MixMatch and the bounding-box labels is remarkably effective according to the experimental results, and is also novel in the literature, to the best of our knowledge.
The proposed method
Swin Transformer Bagging with Position-constrained Residuals
The overall inference process of our SemiREST algorithm is illustrated in Figure 2. As it can be seen, there are two stages in the whole process, i.e., the generation of position-constrained residual features (the green box) and the bagging process over sliding Swin Transformers (the yellow box).
Position-constrained residual features
As mentioned in Section 2.1, we try to exploit the potential usage of the intermediate product of the patch-matching process in PatchCore. In this work, the feature-matching difference vector, rather than the Euclidean distance between matched features, is used as the network input. Similar usage of the matching differences can also be found in some recently proposed methods (Ding et al., 2022; Zhang et al., 2023a), but they focus on the residual tensors between two holistic feature maps rather than the residual vectors between two feature vectors. Besides, new evidence shows that the positional information of the compared patches could improve the AD performance (Bae et al., 2022; Gudovskiy et al., 2022), mainly because of the rough alignment of the test images captured in the industrial environment. Accordingly, we introduce the "positional embedding" concept from the literature on Transformers (Dosovitskiy et al., 2021; Liu et al., 2021) into the PatchCore algorithm: the original patch features are aggregated with their positional features, encoded in the Transformer way. In this way, the whole patch-feature-matching process is constrained by the positional information, and the ablation study in Section 4.8 verifies the merit of this constraint.
Mathematically, for an input image $\mathbf{I} \in \mathbb{R}^{h_I \times w_I \times 3}$, one can extract deep features as

$$[\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_M] \leftarrow \mathrm{Flatten} \leftarrow \mathbf{F} = \Psi_{\mathrm{CNN}}(\mathbf{I}) \quad (1)$$
where $\Psi_{\mathrm{CNN}}(\cdot)$ represents a deep network, which is pretrained on a large but neutral dataset (ImageNet (Russakovsky et al., 2015), for example); $\mathbf{F} \in \mathbb{R}^{h_f \times w_f \times d_f}$ denotes the generated deep feature tensor with $M$ feature vectors ($M = h_f \cdot w_f$); and $\mathbf{f}_i \in \mathbb{R}^{d_f},\ i = 1, 2, \cdots, M$ stands for the $i$-th deep feature vector extracted from the tensor $\mathbf{F}$ at the row-column position $\mathbf{p}_i = [r^f_i, c^f_i]$, with $c^f_i \in [0, w_f)$ and $r^f_i \in [0, h_f)$.
For a deep feature $\mathbf{f}_i$, we modify it and generate its "Position-Constrained Feature" (PCF) as

$$\hat{\mathbf{f}}_i = \mathbf{f}_i + \lambda_{\mathrm{PE}}\,\boldsymbol{\eta}_i = \mathbf{f}_i + \lambda_{\mathrm{PE}}\,\Phi_{\mathrm{PE}}(\mathbf{p}_i) \quad (2)$$
where $\boldsymbol{\eta}_i = \Phi_{\mathrm{PE}}(\mathbf{p}_i) \in \mathbb{R}^{d_f}$ is termed the "position code" in this paper, with its weight parameter $\lambda_{\mathrm{PE}}$, and the function $\Phi_{\mathrm{PE}}(\cdot)$ denotes the positional embedding process that calculates the $k$-th element of $\boldsymbol{\eta}_i$ as

$$\eta^{(k)}_i = \begin{cases} \sin\big((c^f_i/\rho)\,/\,10000^{8k/d_f}\big), & k \in [0, \tfrac{d_f}{4}) \\ \cos\big((c^f_i/\rho)\,/\,10000^{8(k - d_f/4)/d_f}\big), & k \in [\tfrac{d_f}{4}, \tfrac{d_f}{2}) \\ \sin\big((r^f_i/\rho)\,/\,10000^{8(k - d_f/2)/d_f}\big), & k \in [\tfrac{d_f}{2}, \tfrac{3d_f}{4}) \\ \cos\big((r^f_i/\rho)\,/\,10000^{8(k - 3d_f/4)/d_f}\big), & k \in [\tfrac{3d_f}{4}, d_f) \end{cases} \quad (3)$$

where $\rho$ is the token patch size, which will be explained in Section 3.1.2. Note that Equation 3 mainly follows the positional embedding method used in (He et al., 2022). A better position code might be obtained via a careful design of $\Phi_{\mathrm{PE}}(\cdot)$, but that is beyond the scope of this work.
One can then build the memory bank following the method proposed in PatchCore (Roth et al., 2022) as

$$\hat{\mathcal{M}} = \Psi(\hat{\mathcal{F}}_{\mathrm{trn}}) = \{\hat{\mathbf{m}}_t \in \mathbb{R}^{d_f} \mid t = 1, 2, \cdots, T\} \quad (4)$$
where $\hat{\mathcal{M}}$ is the memory bank containing $T$ PCFs ($\hat{\mathbf{m}}_t, \forall t$); $\Psi(\cdot)$ represents the sophisticated sampling scheme of PatchCore; and $\hat{\mathcal{F}}_{\mathrm{trn}} = \{\hat{\mathbf{f}}_i \in \mathbb{R}^{d_f} \mid i = 1, 2, \cdots, N\}$ denotes the original feature set consisting of $N$ PCFs, which are extracted from all the normal images for training. Finally, given a test PCF $\hat{\mathbf{f}}_i$, the corresponding "Position-constrained Residual" (PCR) can be calculated in the PatchCore manner as
$$\hat{\mathbf{r}}_i = \mathcal{S}\Big(\hat{\mathbf{f}}_i - \operatorname*{arg\,min}_{\forall \hat{\mathbf{m}}_t \in \hat{\mathcal{M}}} \|\hat{\mathbf{f}}_i - \hat{\mathbf{m}}_t\|_{l_2}\Big) \in \mathbb{R}^{d_f}, \quad (5)$$

with the element-wise square function $\mathcal{S}(\cdot)$ defined as

$$\mathcal{S}\big([x_1, x_2, \cdots, x_{d_f}]^T\big) = [x^2_1, x^2_2, \cdots, x^2_{d_f}]^T. \quad (6)$$
Compared with the absolute values of the tensor difference employed in Zhang et al. (2023a), the residual vector $\hat{\mathbf{r}}_i$ yielded by Equations 5 and 6 is sparser, and this "feature-selection" property empirically benefits the subsequent training process, as shown in Section 4.8. According to Equations 1 and 5, we can generate $M$ PCRs from a test image $\mathbf{I}$, as shown in Figure 2. The $M$ PCRs are then reorganized to form the PCR tensor $\mathbf{R} \in \mathbb{R}^{h_f \times w_f \times d_f}$.
Bagging over sliding Swin Transformers
The Swin Transformer is adopted as the backbone of our deep learning model, with the PCRs, rather than normal images, as its input features. Following the conventional ViT way (Dosovitskiy et al., 2021; Liu et al., 2021, 2022), the collection of PCRs $\mathbf{R}$ is first partitioned into several "feature patches" over the row-column plane. Given the pre-defined patch size $\rho$, the flattened transformer tokens are stored in the token tensor

$$\mathbf{T} \in \mathbb{R}^{h_t \times w_t \times d_t}, \quad (7)$$

with the row number $h_t = h_f/\rho$, the column number $w_t = w_f/\rho$ and the channel dimension $d_t = d_f \rho^2$. Note that here every token vector $\mathbf{t} \in \mathbb{R}^{d_t}$ geometrically corresponds to a $(h_I \rho/h_t) \times (w_I \rho/w_t)$ image-pixel block on the image input of $\Psi_{\mathrm{CNN}}$.
As illustrated in Figure 2, a number of "sub-tensors" are extracted from $\mathbf{T}$ in a sliding-window fashion. Let us define the square windows sliding on the row-column plane as $\mathbf{w} = [r_t, c_t, \mu]^T$, where $[r_t, c_t]$ stands for the coordinate of the window's top-left corner; $\mu$ is the size of the square window; $c_t \in [0, w_t - \mu)$ and $r_t \in [0, h_t - \mu)$. A sub-tensor is sliced from $\mathbf{T}$ according to $\mathbf{w}$ as

$$\mathbf{T}_\mathbf{w} = \mathbf{T}[r_t : r_t + \mu - 1,\; c_t : c_t + \mu - 1,\; :] \in \mathbb{R}^{\mu \times \mu \times d_t}. \quad (8)$$
The sub-tensor is then fed into the Swin Transformer model to generate the prediction map as

$$\Psi_{\mathrm{Swin}}(\mathbf{T}_\mathbf{w}) = \Theta_\mathbf{w} \in \mathbb{R}^{\mu \times \mu}. \quad (9)$$
Note that the output of $\Psi_{\mathrm{Swin}}(\cdot)$ for each token is actually a 2-D vector; we directly copy the confidences corresponding to the anomaly class into $\Theta_\mathbf{w}$. Specifically, given a pre-defined sampling step $s$ and a fixed size $\mu$, one can generate $Q$ square sliding windows $\{\mathbf{w}_1, \mathbf{w}_2, \cdots, \mathbf{w}_Q\}$ and the corresponding prediction maps $\{\Theta_{\mathbf{w}_1}, \Theta_{\mathbf{w}_2}, \cdots, \Theta_{\mathbf{w}_Q}\}$. Each element of the final prediction map $\Theta \in \mathbb{R}^{h_t \times w_t}$ for the holistic tensor $\mathbf{T}$ can now be calculated as

$$\Theta(r, c) = \frac{1}{|N_{r,c}|} \sum_{i \in N_{r,c}} \Theta_{\mathbf{w}_i}(r - r^t_i,\, c - c^t_i) \quad (10)$$
where $r \in [0, h_t)$ and $c \in [0, w_t)$ are the row and column indexes of the map element; $N_{r,c}$ stands for the set of sliding windows overlapping with location $(r, c)$; $|\cdot|$ represents the set cardinality; and the subtractions $r - r^t_i$ and $c - c^t_i$ are coordinate-translation operations. From the perspective of machine learning, the anomaly score for each token $\mathbf{t}_{r,c}$ is obtained by aggregating the outputs of $|N_{r,c}|$ Swin Transformers fed with different PCRs. In this way, noise can be averaged out of the final prediction. In this paper, with a gentle generalization of the original definition, we term this sliding-aggregating strategy Swin Transformer "Bagging" (Bootstrap Aggregating) (Breiman, 1996). The ablation study in Section 4.8 proves the advantage of the bagging strategy.
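Equation 10 amounts to an overlap-aware average of window-level predictions. The following sketch illustrates the idea, with a placeholder callable standing in for $\Psi_{\mathrm{Swin}}$; it is a minimal re-implementation, not the authors' code.

```python
import torch

def bagged_prediction(tokens, model, mu=32, stride=8):
    """Average overlapping window predictions (Eq. 10).
    tokens: (h_t, w_t, d_t) token tensor T; model maps (mu, mu, d_t) -> (mu, mu) scores.
    """
    h, w, _ = tokens.shape
    score = torch.zeros(h, w)
    count = torch.zeros(h, w)
    for r in range(0, h - mu + 1, stride):
        for c in range(0, w - mu + 1, stride):
            score[r:r+mu, c:c+mu] += model(tokens[r:r+mu, c:c+mu, :])
            count[r:r+mu, c:c+mu] += 1
    return score / count.clamp(min=1)  # Theta, (h_t, w_t)

# toy usage: a dummy "transformer" that scores each token by its norm
dummy = lambda t: t.norm(dim=-1)
theta = bagged_prediction(torch.randn(64, 64, 16), dummy)
```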
Train the Swin Transformer using focal loss
Considering that the normal samples usually dominate the data distribution in most AD datasets, we employ the focal loss (Lin et al., 2017) to lift the importance of the anomaly class. The focal loss used in this work writes

$$L_F = -\frac{1}{|Z^-|}\sum_{i \in Z^-}\big[(1-\alpha)\,p^{\gamma}_+\log(1-p_+)\big] - \frac{1}{|Z^+|}\sum_{i \in Z^+}\big[\alpha\,(1-p_+)^{\gamma}\log(p_+)\big] \quad (11)$$

where $Z^+$ and $Z^-$ stand for the training sample sets (here, transformer tokens) corresponding to the defective (+) and normal (−) classes, respectively, and $p_+ \in [0, 1]$ denotes the anomaly confidence (of the current token) predicted by $\Psi_{\mathrm{Swin}}$.
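A PyTorch sketch of Equation 11, under our reading of the per-class averaging, is given below; the default parameter values follow Section 4.2 and are otherwise illustrative.

```python
import torch

def focal_loss(p_plus, labels, alpha=0.25, gamma=4.0, eps=1e-7):
    """Eq. 11: p_plus = predicted anomaly confidence per token, labels in {0, 1}."""
    p = p_plus.clamp(eps, 1 - eps)
    neg, pos = labels == 0, labels == 1
    l_neg = -((1 - alpha) * p[neg] ** gamma * torch.log(1 - p[neg])).mean() if neg.any() else 0.0
    l_pos = -(alpha * (1 - p[pos]) ** gamma * torch.log(p[pos])).mean() if pos.any() else 0.0
    return l_neg + l_pos

loss = focal_loss(torch.rand(100), (torch.rand(100) > 0.9).long())
```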
The AdamW (Loshchilov and Hutter, 2019) optimizer with a weight decay of 0.05 is used for transformer training, and the exponential moving average (EMA) (Haynes et al., 2012) trick is adopted to smooth the parameters of the final model.
Two novel data augmentation approaches
Different from ordinary vision transformers (Dosovitskiy et al., 2021; Liu et al., 2021, 2022), the input of our model essentially consists of feature residual vectors. Most conventional data augmentation methods (Shorten and Khoshgoftaar, 2019; Yang et al., 2022; Yun et al., 2019) designed for images cannot be directly employed in the current scenario. In this work, we propose two novel data augmentation approaches for the PCR input.
Residuals based on k-nearest neighbor
Firstly, recall that a PCR is the difference vector between a query feature and its nearest neighbor retrieved in the memory bank $\hat{\mathcal{M}}$; this input feature of the Swin Transformer is unstable, as the neighborhood order is sensitive to slight feature perturbations. In this work, we propose to randomly augment the PCRs based on the k-NN of the query feature, in every forward-backward iteration. Concretely, the PCR is obtained as

$$\hat{\mathbf{r}}_i = \begin{cases} \mathcal{S}(\hat{\mathbf{f}}_i - \hat{\mathbf{m}}_1) \otimes \boldsymbol{\delta}, & \tau \in [0, \alpha_1] \\ \mathcal{S}(\hat{\mathbf{f}}_i - \hat{\mathbf{m}}_2) \otimes \boldsymbol{\delta}, & \tau \in (\alpha_1, \alpha_2] \\ \mathcal{S}(\hat{\mathbf{f}}_i - \hat{\mathbf{m}}_3) \otimes \boldsymbol{\delta}, & \tau \in (\alpha_2, 1] \end{cases} \quad (12)$$

where $\hat{\mathbf{m}}_1$, $\hat{\mathbf{m}}_2$ and $\hat{\mathbf{m}}_3$ are $\hat{\mathbf{f}}_i$'s first, second and third nearest neighbors, respectively; $\tau$ is a random variable distributed uniformly on the interval $[0, 1]$; $\alpha_1$ and $\alpha_2$ are two constants determining the selection probabilities of the nearest neighbors; $\boldsymbol{\delta} \in \mathbb{R}^{d_f}$ is a small random noise vector; and $\otimes$ denotes element-wise multiplication. In practice, Equation 12 is used to randomly generate augmented copies of a given sub-tensor $\mathbf{T}_\mathbf{w}$, as illustrated in Algorithm 1 and Figure 3.
In this way, the model $\Psi_{\mathrm{Swin}}(\cdot)$ can witness a more comprehensive data distribution of PCRs during training, and consequently higher performance is achieved, as shown in the ablation study (Section 4.8).
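A sketch of Equation 12 follows; the neighbor-selection thresholds are the values reported in Section 4.2, and the multiplicative noise is an approximation of the clipped log-normal δ described there, so treat the details as illustrative.

```python
import torch

def knn_augmented_pcr(f_hat, memory_bank, a1=0.5, a2=0.8, sigma=0.2):
    """Eq. 12: randomly build the PCR from the 1st, 2nd or 3rd nearest neighbor."""
    dist = torch.cdist(f_hat[None, :], memory_bank)[0]      # (T,) distances
    nn3 = memory_bank[dist.topk(3, largest=False).indices]  # 3 nearest normal PCFs
    tau = torch.rand(())
    k = 0 if tau <= a1 else (1 if tau <= a2 else 2)
    # delta ~ e^z with z ~ N(0, 0.2), clipped roughly to [0.8, 1.25]
    delta = torch.exp(torch.randn_like(f_hat) * sigma).clamp(0.8, 1.25)
    return ((f_hat - nn3[k]) ** 2) * delta

r_aug = knn_augmented_pcr(torch.randn(1024), torch.randn(5000, 1024))
```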
Random PCR dropout
Inspired by the classic "dropout" learning trick (Srivastava et al., 2014) and the recently proposed MAE autoencoder (He et al., 2022), we design a simple feature augmentation approach termed "random PCR dropout" for achieving a higher generalization capacity. Specifically, during training, every token in the tensor $\mathbf{T}_\mathbf{w}$ defined in Equation 8 is randomly reset as

$$\mathbf{T}_\mathbf{w}[r, c, :] = \begin{cases} \mathbf{0}^T \in \mathbb{R}^{d_t}, & \tau \in [0, \alpha] \\ \text{unchanged}, & \tau \in (\alpha, 1] \end{cases} \quad (13)$$
where again $\tau$ denotes a random variable sampled from the uniform distribution on $[0, 1]$, and $\alpha$ is the constant controlling the frequency of the reset operation. We found that this data augmentation also benefits the final performance, as shown in the experimental part of this paper (Section 4.8).
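Equation 13 amounts to zeroing whole tokens with probability α; a minimal torch sketch (with α set to the value reported in Section 4.2):

```python
import torch

def random_pcr_dropout(T_w, alpha=0.25):
    """Eq. 13: reset each token (last-dim vector) to zero with probability alpha."""
    keep = (torch.rand(T_w.shape[0], T_w.shape[1], 1) > alpha).float()
    return T_w * keep  # dropped tokens become the zero vector

T_aug = random_pcr_dropout(torch.randn(32, 32, 1024))
```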
Off-the-shelf augmentation methods for generating fake anomalies
Besides the proposed augmentation methods for PCRs, we also follow the off-the-shelf fake/simulated anomaly generation approach proposed in MemSeg (Yang et al., 2023) to ensure an effective learning process for our discriminative model. Readers are referred to the original work (Yang et al., 2023) for more details.
Utilize the unlabeled information with MixMatch
A long-standing dilemma in anomaly detection is whether to involve real-world defective samples in the training stage. Different choices lead to two main types of AD tasks, i.e., the mainstream "unsupervised" setting (Bergmann et al., 2022; Defard et al., 2021; Hou et al., 2021; Lei et al., 2023; Liu et al., 2023a; Roth et al., 2022) and the minority "supervised" setting (Božič et al., 2021; Ding et al., 2022; Yao et al., 2023; Zhang et al., 2023a). As we analyzed in the introduction, in practice the key issue is the annotation time rather than the difficulty of obtaining defective samples.
To strike a balance between annotation time and model performance, we propose to label the realistic defects using bounding boxes, which requires much less annotation labor compared with the commonly-used pixel-wise labels of the "supervised" setting (Božič et al., 2021; Ding et al., 2022; Yao et al., 2023; Zhang et al., 2023a). However, as shown in Figure 1, the bounding boxes, which jointly cover all the anomalous pixels of the image, can only guarantee the correctness of the negative (normal) labels in the region outside the boxes. The pixel labels inside the box union are unknown. Fortunately, this semi-supervised situation is well studied in the machine learning literature (Berthelot et al., 2019a,b; Sohn et al., 2020; Wang et al., 2023; Zhu and Goldberg, 2009). In this work, we customize the MixMatch method (Berthelot et al., 2019b) for semi-supervised PCR learning. The loss generation of the modified MixMatch scheme is summarized in Algorithm 1. Note that here we assume that the mini-batch contains only one sub-tensor $\mathbf{T}_\mathbf{w} \in \mathbb{R}^{\mu \times \mu \times d_t}$, for reasons of simplicity.
From the algorithm, one can see that, besides the shared parts, the proposed method differs from (Berthelot et al., 2019b) mainly in the following aspects.

- We use the two novel augmentation methods, i.e., the K-NN augmentation defined in Equation 12 and the random PCR dropout defined in Equation 13, for the MixMatch sample generation. The new augmentation methods are carefully designed to suit PCRs and the Swin Transformer, and thus increase the recognition accuracy according to the experiments.
- In this work, the tokens are not treated independently. They are implicitly linked to their original image positions ($[r, c]$) and image index. When performing the Swin Transformer $\Psi_{\mathrm{Swin}}(\cdot)$, the tokens and the corresponding labels from the same image are reorganized into PCR tensors and label maps, as shown in step 2 and step 6 of Algorithm 1. In this way, one can sufficiently exploit the self-attention mechanism of transformers to achieve a higher generalization capacity.
- Instead of the conventional cross-entropy loss used in (Berthelot et al., 2019b), we employ the focal loss (Lin et al., 2017) to handle the imbalanced label distribution in AD tasks.
In practice, the Swin Transformer model is first pre-trained in the unsupervised fashion with true normal samples and synthetic anomalies. We then fine-tune the model under the semi-supervision determined by the bounding-box labels, using the MixMatch scheme.
The overall training process of SemiREST
The generation of block-wise labels
As shown in Figure 1, we solve the AD task as a block-wise binary classification problem to significantly reduce annotation costs. Recalling that an input image $\mathbf{I} \in \mathbb{R}^{h_I \times w_I \times 3}$ is mapped into the token tensor $\mathbf{T} \in \mathbb{R}^{h_t \times w_t \times d_t}$ via Equations 1 and 7, we define a token vector in $\mathbf{T}$ as a block which covers a $\beta = (h_I \rho/h_t) \times (w_I \rho/w_t)$ pixel region of the original image, with $\rho$ defined in Section 3.1.2.
In this paper, we only predict the block labels. Given the pixel-wise binary label map $Y^*_P \in \mathbb{Z}^{\mu \times \mu}_2$ with $Y^*_P(r, c) \in \{0\ (\text{normal}), 1\ (\text{anomaly})\}$, the ground-truth label map of the blocks $Y^*_B$ is estimated as

$$Y^*_B(r_b, c_b) = \begin{cases} 1, & \sum_{(r_p, c_p) \in B_{r_b, c_b}} Y^*_P(r_p, c_p) > \epsilon_+ \beta \\ 0, & \sum_{(r_p, c_p) \in B_{r_b, c_b}} Y^*_P(r_p, c_p) < \epsilon_- \beta \\ \varnothing, & \text{otherwise} \end{cases} \quad (14)$$
where $B_{r_b, c_b}$ denotes the pixel set corresponding to the image block, and $\epsilon_+$ and $\epsilon_-$ are the two thresholds determining the signs of the labels. If a block is labeled as $\varnothing$ (as shown in Figure 1), no loss gradient will be back-propagated through the corresponding token during training. In this paper, this block-labeling strategy is adopted for both the real defects in the supervised setting and the simulated defects in the unsupervised setting.
In the semi-supervised scenario, the pixel labels are estimated from the bounding boxes. In particular, the pixels outside the union of the boxes are labeled as 0 (normal) and the other pixels are labeled as $-1$ (unknown). So, in this scenario, $Y^*_P \in \mathbb{Z}^{\mu \times \mu}_2$ with $Y^*_P(r, c) \in \{0\ (\text{normal}), -1\ (\text{unknown})\}$. Similarly to Equation 14, the block-wise semi-supervision is defined as

$$Y^*_B(r_b, c_b) = \begin{cases} -1, & \sum_{(r_p, c_p) \in B_{r_b, c_b}} |Y^*_P(r_p, c_p)| > \upsilon \beta \\ 0, & \text{otherwise} \end{cases} \quad (15)$$
where $\upsilon$ is the threshold determining whether the label of a block is unknown. Note that the proposed bounding-box label is partially inspired by the pioneering works (Božič et al., 2021; Tabernik et al., 2020) that employ bounding boxes to annotate defective parts. However, those bounding boxes consider all the inside pixels as anomalies and thus usually lead to significant label ambiguities. In contrast, SemiREST uses the bounding boxes to annotate the normal region and leaves the other pixels as unknown. This labeling strategy is always correct, and the unknown region can be successfully exploited by the proposed semi-supervised learning scheme.
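Both labeling rules reduce to thresholded pooling of pixel labels over non-overlapping blocks. The numpy sketch below is our reading of Equations 14 and 15, with illustrative block sizes and thresholds.

```python
import numpy as np

def block_labels(pixel_mask, block=8, eps_pos=0.25, eps_neg=0.08):
    """Eq. 14: pixel_mask in {0,1} -> block labels in {1, 0, None (ignored)}."""
    h, w = pixel_mask.shape
    beta = block * block
    out = np.empty((h // block, w // block), dtype=object)
    for rb in range(h // block):
        for cb in range(w // block):
            s = pixel_mask[rb*block:(rb+1)*block, cb*block:(cb+1)*block].sum()
            out[rb, cb] = 1 if s > eps_pos * beta else (0 if s < eps_neg * beta else None)
    return out

def semi_block_labels(box_mask, block=8, upsilon=0.5):
    """Eq. 15: box_mask = 1 inside the box union -> labels in {-1 (unknown), 0}."""
    h, w = box_mask.shape
    beta = block * block
    out = np.zeros((h // block, w // block), dtype=int)
    for rb in range(h // block):
        for cb in range(w // block):
            s = box_mask[rb*block:(rb+1)*block, cb*block:(cb+1)*block].sum()
            out[rb, cb] = -1 if s > upsilon * beta else 0
    return out
```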
Overview of the training process
In summary, the whole training procedure of our SemiREST is depicted in Figure 3. In practice, the full-supervision part (shown in blue) is adopted for the supervised setting as well as the "unsupervised" setting, where simulated anomalies are generated. In the semi-supervised setting, both the semi-supervision part (red) and the full-supervision part (blue) are employed to guarantee high performances. Note that this does not go against our original motivation of reducing the annotation effort, because one can always use the synthetic defects with no labeling effort.

Fig. 3 The overview of the training process of SemiREST. Note that in the unsupervised and supervised settings, only the lower part (shown in blue) is used, while the red part, i.e., the semi-supervision, is leveraged in the semi-supervised setting. Best viewed in color.

Algorithm 1 MixMatch training of SemiREST
1: Input: Swin Transformer model $\Psi_{\mathrm{Swin}}(\cdot)$; one sub-tensor $\mathbf{T}_\mathbf{w} \in \mathbb{R}^{\mu \times \mu \times d_t}$ determined by the sliding window $\mathbf{w}$; the corresponding label map $\Theta^*_\mathbf{w} \in \mathbb{Z}^{\mu \times \mu}_2$ with $\Theta^*_\mathbf{w}(r, c) \in \{0, -1\ (\text{unknown})\}$; sharpening temperature $\Gamma$; unlabeled loss weight $\lambda_u$; and focal loss parameters $\{\alpha_x, \alpha_u, \gamma_x, \gamma_u\}$.
2: K-NN data augmentation as in Equation 12:
   $\{\mathbf{T}_j \mid j = 1, 2, \cdots, M\} \leftarrow$ K-NN-Augmentation $\leftarrow \mathbf{T}_\mathbf{w}$;
   $\{\Theta^*_j \mid j = 1, 2, \cdots, M\} \leftarrow$ Copy $\leftarrow \Theta^*_\mathbf{w}$
3: Guess pseudo labels through augmentation (Berthelot et al., 2019b):
   $\{\bar{\Theta}_j \mid j = 1, 2, \cdots, M\} \leftarrow$ Copy $\leftarrow$ Sharpen$\big(\frac{1}{M}\sum^{M}_{j=1}\Psi_{\mathrm{Swin}}(\mathbf{T}_j),\ \Gamma\big)$;
   $\{\mathbf{x}_i \in \mathbb{R}^{d_t} \mid i = 1, 2, \ldots, N\} \leftarrow$ Flatten $\leftarrow \{\mathbf{T}_j, \forall j\}$;
   $\{y^*_i \in \{0, -1\} \mid i = 1, 2, \ldots, N\} \leftarrow$ Flatten $\leftarrow \{\Theta^*_j, \forall j\}$;
   $\{\bar{y}_i \in [0, 1] \mid i = 1, 2, \ldots, N\} \leftarrow$ Flatten $\leftarrow \{\bar{\Theta}_j, \forall j\}$
4: Divide the tokens into the labeled set $\mathcal{X}$ and the unlabeled set $\mathcal{U}$ (Berthelot et al., 2019b):
   $\mathcal{X} = \{X_i = \{\mathbf{x}_i, y^*_i\}, \forall i \mid y^*_i = 0\}$, $\mathcal{U} = \{U_i = \{\mathbf{x}_i, \bar{y}_i\}, \forall i \mid y^*_i = -1\}$
5: Combine the labeled and unlabeled tokens and shuffle: $\mathcal{W} = \mathrm{Shuffle}(\mathrm{Union}(\mathcal{X}, \mathcal{U}))$
6: Apply MixUp to all tokens (Berthelot et al., 2019b):
   $\hat{\mathcal{X}} \leftarrow \{\mathrm{MixUp}(X_i, W_i), \forall i \mid i = 1, \cdots, |\mathcal{X}|\}$;
   $\hat{\mathcal{U}} \leftarrow \{\mathrm{MixUp}(U_i, W_{i + |\mathcal{X}|}), \forall i \mid i = 1, \cdots, |\mathcal{U}|\}$
7: Randomly drop out some tokens as in Equation 13:
   $\{\{\hat{\mathbf{T}}_j, \hat{\Theta}^*_j\}, \forall j\} \leftarrow \mathrm{Retensorize}(\mathrm{Union}(\hat{\mathcal{X}}, \hat{\mathcal{U}}))$; $\forall j,\ \hat{\mathbf{T}}_j = \mathrm{RandomDropout}_{\mathrm{PCR}}(\hat{\mathbf{T}}_j)$
8: Generate the prediction maps: $\forall j,\ \hat{\Theta}_j = \Psi_{\mathrm{Swin}}(\hat{\mathbf{T}}_j)$
9: Compute the labeled loss $L_x$ and the unlabeled loss $L_u$:
   $\{\hat{y}^*_i \in [0, 1] \mid i = 1, 2, \ldots, N\} \leftarrow$ Flatten $\leftarrow \{\hat{\Theta}^*_j, \forall j\}$;
   $\{\hat{y}_i \in [0, 1] \mid i = 1, 2, \ldots, N\} \leftarrow$ Flatten $\leftarrow \{\hat{\Theta}_j, \forall j\}$;
   $\forall i,\ p_i = (1 - \hat{y}^*_i)(1 - \hat{y}_i) + \hat{y}^*_i \hat{y}_i$;
   $Z^+_k = \{\forall i \mid y^*_i = 0\ \&\ \hat{y}^*_i > 0.5\}$, $Z^+_u = \{\forall i \mid y^*_i = -1\ \&\ \hat{y}^*_i > 0.5\}$;
   $Z^-_k = \{\forall i \mid y^*_i = 0\ \&\ \hat{y}^*_i \leqslant 0.5\}$, $Z^-_u = \{\forall i \mid y^*_i = -1\ \&\ \hat{y}^*_i \leqslant 0.5\}$;
   $L_x = -\big(\sum_{i \in Z^-_k}[(1 - \alpha_x)(1 - p_i)^{\gamma_x}\log(p_i)] + \sum_{i \in Z^+_k}[\alpha_x(1 - p_i)^{\gamma_x}\log(p_i)]\big) \,/\, |Z^+_k \cup Z^-_k|$;
   $L_u = -\big(\sum_{i \in Z^-_u}[(1 - \alpha_u)(1 - p_i)^{\gamma_u}\log(p_i)] + \sum_{i \in Z^+_u}[\alpha_u(1 - p_i)^{\gamma_u}\log(p_i)]\big) \,/\, |Z^+_u \cup Z^-_u|$
10: Output: $L_{\mathrm{mix}} = L_x + \lambda_u L_u$
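As a companion to Algorithm 1, the sketch below illustrates the label-guessing and MixUp ingredients in isolation, following Berthelot et al. (2019b); the tensor bookkeeping of steps 2 and 7 (re-tensorizing whole label maps and the PCR-specific augmentations) is omitted, and the helper names are ours.

```python
import torch

def sharpen(p, gamma=0.5):
    """Temperature sharpening of a two-class (anomaly) probability (step 3)."""
    pt = p ** (1.0 / gamma)
    return pt / (pt + (1.0 - p) ** (1.0 / gamma))

def guess_label(model, views):
    """Average the model's predictions over M augmented views, then sharpen."""
    p = torch.stack([model(v) for v in views]).mean(dim=0)
    return sharpen(p)

def mixup(x1, y1, x2, y2, a=0.75):
    """MixUp with lam >= 0.5, so the mixed token stays closest to the first input."""
    lam = torch.distributions.Beta(a, a).sample()
    lam = torch.max(lam, 1.0 - lam)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# toy usage on scalar token scores
dummy = lambda v: v.sigmoid().mean()
print(guess_label(dummy, [torch.randn(8), torch.randn(8), torch.randn(8)]))
```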
Experiments

In this section, extensive experiments are conducted to evaluate the proposed method, compared with a comprehensive collection of SOTA methods (Deng and Li, 2022; Roth et al., 2022; Zavrtanik et al., 2021; Ding et al., 2022; Gudovskiy et al., 2022; Lei et al., 2023; Liu et al., 2023a,b; Pang et al., 2021; Ristea et al., 2022; Tien et al., 2023; Yao et al., 2023; Zhang et al., 2023a,b), on three well-acknowledged benchmarks, namely the MVTec-AD (Bergmann et al., 2019) dataset, the BTAD (Mishra et al., 2021) dataset, and the KolektorSDD2 (Božič et al., 2021) dataset.
Three levels of supervision
First of all, let us clarify the experiment settings for the three kinds of supervision.
Unsupervised (Un) setting
The unsupervised setting is the mainstream in the AD literature, where only normal data can be accessed during training. However, discriminative models can be learned by generating synthetic anomalies. The involved algorithms are evaluated on the entire test set, including both normal and anomalous images.
Supervised (Sup) setting
In this scenario, a few anomalous training samples are available to improve the discriminative power of the algorithm. Following the popular supervised setting in the AD literature, we randomly add 10 anomalous images, with block-wise annotations and drawn from various types of defects, to the train set and remove them from the test set. Note that synthetic anomalies can usually still be used here for higher performance.
Semi-supervised (Semi) setting
The semi-supervised setting is the supervision newly proposed here and is thus employed in this work only. The same 10 anomalous images are used as training samples, while their block labels are determined by the bounding boxes, as defined in Equation 15. Note that, in this case, all the blocks are annotated as either unknown or normal.
Implementation details
Hyperparameters
The layer-2 and layer-3 feature maps of a Wide-ResNet-50 model (Zagoruyko and Komodakis, 2016) (pretrained on ImageNet-1K) are concatenated with a proper scale adaptation, and the yielded $d_f = 1024$ "hypercolumns" are smoothed via average pooling within $3 \times 3$ neighborhoods. We set $\lambda_{\mathrm{PE}} = 0.1$ to generate the PCFs. For the MVTec-AD (Bergmann et al., 2019) and BTAD (Mishra et al., 2021) datasets, we subsample 10% of the PCFs to get the memory banks $\hat{\mathcal{M}}$, while the subsampling ratio for KolektorSDD2 (Božič et al., 2021) is 0.1%, considering its much larger training set of normal images. Our Swin Transformer model $\Psi_{\mathrm{Swin}}(\cdot)$ consists of 4 blocks with patch size $\rho = 1$; the window size is 8 and the number of heads is set to 32. The focal loss parameters $\alpha_x$, $\alpha_u$, $\gamma_x$ and $\gamma_u$ are set to 0.25, 0.75, 4 and 4, respectively. The sliding window $\mathbf{w}$ slides over the PCR tensor with a step $s = 8$ and a window size $\mu = 32$. $\alpha_1$ and $\alpha_2$ are set to 0.5 and 0.8, respectively. To speed up training, we sample a proportion $p$ of the sliding windows in $b_1$ normal images, $b_2$ simulated defective images, and $b_3$ true anomalous images at each training iteration. To compare the final prediction map $\Theta$ with the ground-truth label map, it is first upscaled to the same size as the ground truth via bilinear interpolation and then smoothed using a Gaussian kernel with $\sigma = 4$, as done in prior work.
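The prediction-map post-processing described above (bilinear upscaling followed by Gaussian smoothing with σ = 4) can be reproduced with standard operators; a sketch, not the authors' code:

```python
import torch
import torch.nn.functional as F
from scipy.ndimage import gaussian_filter

def postprocess(theta, out_hw, sigma=4.0):
    """Bilinearly upscale the block-level map to label-map size, then smooth it."""
    up = F.interpolate(theta[None, None], size=out_hw,
                       mode="bilinear", align_corners=False)[0, 0]
    return gaussian_filter(up.numpy(), sigma=sigma)

smooth = postprocess(torch.rand(64, 64), (512, 512))
```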
The values of $lr$, $p$, $\epsilon_+$, $\epsilon_-$, $b_1$, $b_2$ and $b_3$ vary among the different supervision conditions:

- Unsupervised setting: $lr = 10^{-4}$, $p = 1/4$, $\epsilon_+ = 50\%$, $\epsilon_- = 8\%$, $b_1 = 4$, $b_2 = 2$ ($lr = 3 \times 10^{-4}$, $p = 1/6$, $\epsilon_+ = 25\%$, $b_2 = 4$ for BTAD (Mishra et al., 2021)).
- Supervised setting: $lr = 10^{-4}$, $p = 1/4$, $\epsilon_+ = 25\%$, $\epsilon_- = 8\%$, $b_1 = 2$, $b_2 = 2$ and $b_3 = 2$.
- Semi-supervised setting: $lr = 3 \times 10^{-5}$, $p = 1/10$, $\epsilon_+ = 25\%$, $\epsilon_- = 8\%$, $b_1 = 3$, $b_2 = 3$ and $b_3 = 2$ ($lr = 3.125 \times 10^{-5}$, $p = 1/25$, $\epsilon_+ = 80\%$, $\epsilon_- = 0\%$ for BTAD (Mishra et al., 2021)).
As to the semi-supervised learning, the hyperparameters $\upsilon$, $\alpha$, $\Gamma$ and $M$ are set to 50%, 25%, 0.5 and 3, respectively. The random noise for augmentation is set to $\boldsymbol{\delta} = e^z$, $z \sim \mathcal{N}(0, 0.2)$, $z \in [-0.223, 0.223]$. Following the MixMatch algorithm (Berthelot et al., 2019b), we linearly ramp up the unlabeled loss weight to $\lambda_u = 5$ (10 for BTAD) over the first 400 steps of training.
Training and inference time
The proposed method requires around 120 ms to predict the anomalies in a $512 \times 512$ image. It takes 42, 48, and 56 minutes to train the Swin Transformer model on MVTec-AD (Bergmann et al., 2019) under the unsupervised, semi-supervised, and supervised settings, respectively.
Evaluation methods
In this work, the involved AD algorithms are measured comprehensively by three popular threshold-independent metrics: Pixel-AUROC, PRO (Bergmann et al., 2020) (per-region overlap) and AP (Zavrtanik et al., 2021) (average precision). Specifically, Pixel-AUROC is the area under the receiver operating characteristic curve at the pixel level. It is the most popular AD metric, but it fails to reflect the real performance difference between algorithms when a serious class imbalance exists. The PRO score, on the contrary, focuses on the anomalous pixels and treats the AD performance on each individual anomalous region equally. Consequently, the PRO metric is more robust to class imbalance, which is actually a common situation in most AD benchmarks. The AP (average precision) metric (Zavrtanik et al., 2021), as a conventional metric for semantic segmentation, is frequently adopted by recently proposed AD algorithms (Zavrtanik et al., 2021; Zhang et al., 2023a). It reflects the anomaly detection performance from a pixel-level perspective.
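Pixel-AUROC and AP can be computed directly with scikit-learn by flattening all test images (PRO needs per-region bookkeeping and is omitted here); a minimal sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def pixel_metrics(score_maps, gt_masks):
    """score_maps: list of HxW float maps; gt_masks: list of HxW {0,1} masks."""
    y_score = np.concatenate([s.ravel() for s in score_maps])
    y_true = np.concatenate([m.ravel() for m in gt_masks])
    return {"pixel_auroc": roc_auc_score(y_true, y_score),
            "ap": average_precision_score(y_true, y_score)}

m = pixel_metrics([np.random.rand(64, 64)],
                  [(np.random.rand(64, 64) > 0.95).astype(int)])
```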
Results on MVTec-AD
MVTec-AD (Bergmann et al., 2019) is the most popular AD dataset, with 5,354 high-resolution color images belonging to 5 texture categories and 10 object categories. Each category contains a train set with only normal images and a test set with various kinds of defects as well as defect-free images. We conduct the experiments on this dataset under all three supervision conditions.
The unsupervised AD results of the compared algorithms on MVTec-AD (Bergmann et al., 2019) are shown in Table 1. As shown in the table, our method achieves the highest average AP, average PRO and average Pixel-AUROC, for both the texture and object categories, and outperforms the unsupervised SOTA by 5.4%, 1.6% and 1.0%, respectively. Specifically, SemiREST ranks first on 67% (10 out of 15) of the categories under the AP metric, and the "first-ranking" ratios for PRO and Pixel-AUROC are 47% and 80%.
Table 1 (comparing PatchCore (Roth et al., 2022), DRAEM (Zavrtanik et al., 2021), RD (Deng and Li, 2022), SSPCAB (Ristea et al., 2022), NFAD (Yao et al., 2023), DMAD (Liu et al., 2023a), SimpleNet, DeSTSeg (Zhang et al., 2023b), PyramidFlow (Lei et al., 2023), RD++ (Tien et al., 2023) and Ours (Un)): The comparison on the Average Precision (AP), Per-Region Overlap (PRO) and pixel AUROC metrics for zero-shot anomaly localization on the MVTec-AD dataset. The best accuracy in one comparison with the same data and metric condition is shown in red while the second one is shown in blue.

Table 2: The comparison on the Average Precision (AP), Per-Region Overlap (PRO) and pixel AUROC metrics for few-shot anomaly localization on the MVTec-AD dataset. The best accuracy in one comparison with the same data and metric condition is shown in red while the second one is shown in blue.

In addition, Table 2 illustrates that, with full supervision, SemiREST still ranks first for the average AD performance evaluated using all three metrics. In particular, our method outperforms the supervised SOTAs by 5.8% on AP, 0.4% on PRO and 0.3% on Pixel-AUROC. The "first-ranking" ratios of SemiREST in the supervised scenario are 47%, 67% and 67% on AP, PRO and Pixel-AUROC, respectively. Table 2 also reports the performances of our method with the bounding-box-determined semi-supervision. It can be seen that the semi-supervised SemiREST performs very similarly to the fully-supervised SemiREST, thanks to the effective usage of the unlabeled blocks. In addition, even with much less annotation information, the semi-supervised SemiREST still beats the fully-supervised SOTA methods by large margins (5.2% for AP, 0.4% for PRO and 0.3% for Pixel-AUROC).
It is interesting to see that, with only synthetic defective samples, the unsupervised SemiREST remains superior to the supervised SOTA on the AP and Pixel-AUROC metrics (see Tables 1 and 2). The proposed algorithm illustrates a remarkably high generalization capacity.
Readers can also find the qualitative results of the proposed method compared with other SOTA algorithms in Figure 6.
Results on BTAD
As a more challenging alternative to MVTec-AD, BTAD (Mishra et al., 2021) (beanTech Anomaly Detection) contains 2,830 high-resolution color images of three industrial products. Each product includes normal images in the train set, and the corresponding test set consists of both defective and defect-free images.
We further evaluate our algorithm on the BTAD dataset against those SOTA methods that also report their results on it. Table 3 shows that SemiREST achieves performances comparable to the unsupervised SOTA. Furthermore, as shown in Table 4, with full supervision, the proposed method surpasses the SOTA methods by a large margin (6.7%, 5.5% and 0.3%) on all three metrics. Similarly to the situation on MVTec-AD, the semi-supervised SemiREST also obtains higher average performances than the supervised SOTA algorithms.
Results on KolektorSDD2
The KolektorSDD2 (Božič et al., 2021) dataset is designed for surface defect detection and includes various types of defects, such as scratches, minor spots, and surface imperfections. It comprises a training set with 246 positive (defective) and 2,085 negative (defect-free) images, as well as a test set with 110 positive and 894 negative images. We compare the performances of SemiREST with the SOTA results that are available in the literature. In the unsupervised setting, the algorithms are tested on the original test set. For the supervised and semi-supervised settings, a new training set is generated by combining all the normal training images and 10 defective images randomly selected from the original training set. As shown in Table 5, our unsupervised performances beat those of the SOTA methods by a large margin (8.6%, 3.4% and 1.8% for AP, PRO and Pixel-AUROC, respectively). Under the supervised and semi-supervised settings, our method also achieves better results. It is worth noting that the unsupervised SemiREST performs better than itself with full supervision and semi-supervision. This over-fitting phenomenon might be caused by the (unnecessarily) low sampling rate of defective images in training.

Table 3: Results on the AP, PRO and pixel AUROC metrics for unsupervised anomaly localization performance on BTAD. The best accuracy in one comparison with the same data and metric condition is shown in red while the second one is shown in blue.

Table 4: Results on the AP, PRO and pixel AUROC metrics for supervised and semi-supervised anomaly localization performance on BTAD. The best accuracy in one comparison with the same data and metric condition is shown in red while the second one is shown in blue.
Analysis on weak labels
Recalling that the main motivation of this paper is to reduce the labeling cost of AD tasks, we report the annotation time consumption of the two proposed weak labels compared with pixel-level annotations.
To obtain the labeling time, the pixel labels, block labels and bounding boxes of the anomalous regions on a subset of MVTec-AD (10 defective images for each subcategory) are all manually annotated. Four master students majoring in computer vision completed the labeling task using a self-developed labeling tool, as shown in Figure 4. The annotators were asked to mimic the ground-truth annotations shown in a sub-window of the GUI, and the annotation time for each image was recorded. The average annotation times of the three kinds of labels are illustrated in Figure 5, along with the corresponding best AD performances (Pixel-AUROC, PRO, AP). According to the figure, one requires only around 5 seconds for labeling bounding boxes and around 17 seconds for generating block-wise labels on one image. In contrast, the pixel labels consume more than 32 seconds per image, while yielding consistently lower accuracy.
Ablation study
In this section, the most influential modules of SemiREST are evaluated in the manner of an ablation study. The involved modules include the usage of the PCF (Section 3.1.1), the bagging prediction of Swin Transformers (Section 3.1.2), the K-NN augmentation of PCRs (Section 3.2.1), the customized MixMatch algorithm (Section 3.3) and the random dropout scheme (Section 3.2.2). From Table 6 one can observe a consistent increase in the AD performance as more modules are added to the SemiREST model.

Fig. 4 Our labeling tool and the yielded annotation maps. The upper part is the GUI window with the pixel-wise label map (red) displayed in the top-left corner. The lower part illustrates the annotations with three levels of fineness: from left to right, pixel-wise labels, block-wise labels, and bounding-box labels, respectively.

Fig. 5 The per-image annotation costs (x-axis) of the three levels of anomaly labels, shown as the pentagram (bounding-box label), pentagon (block-wise label) and circle (pixel-wise label) shapes. The y-axis stands for the AD performances under the three metrics, shown as blue-dotted (Pixel-AUROC), orange-solid (PRO) and green-dashed (AP) lines.
In addition, two element-wise distance functions (see Section 3.1.1), i.e., the absolute value of the difference (ABS) and the difference square (Square), are also compared in Table 6. According to the table, the square function outperforms the ABS function in the unsupervised and semi-supervised settings, where no real defective pixels are seen or labeled. The performance gain might be related to the "feature-selection" property of the square operation, which only focuses on the significant components of the residual vector. On the contrary, when real defective samples are given for generating PCRs, more useful information can be maintained via the conservative ABS function.
Conclusion
In this paper, we propose to solve the AD problem via block-wise classification, which requires much less annotation effort than pixel-wise segmentation. To achieve this, a sliding vision transformer is employed to predict block labels based on the smartly designed position-constrained residuals. The proposed bagging strategy for the Swin Transformers leads to new SOTA accuracy on three well-known AD datasets. In addition, even cheaper bounding-box labels are proposed to further reduce the labeling time. Given only partially labeled normal regions, the customized MixMatch learning scheme successfully exploits the information of unlabeled regions and achieves AD performance close to that obtained with full supervision. The proposed SemiREST algorithm brings record-breaking AD performances to the literature while requiring much coarser annotations; in short, our SemiREST is cheaper in annotation and better in accuracy.
Thus, SemiREST paves a novel way to reduce the annotation cost for AD problems while maintaining accuracy. According to the experiments in this work, the weak/semi-supervised setting seems a more practical alternative to the classic few-shot setting, which directly limits the number of training images. In the future, we believe that better semi-supervised AD algorithms will be developed by exploiting more useful information from the unlabeled image regions.
Fig. 6 Qualitative results of our SemiREST on MVTec-AD, with the three levels of supervision: Un (unsupervised), Sup (supervised), and Semi (semi-supervised). Two unsupervised SOTA methods (PatchCore (Roth et al., 2022) and DRAEM (Zavrtanik et al., 2021)) and two SOTA methods with full supervision (DevNet (Pang et al., 2021) and BGAD (Yao et al., 2023)) are also involved in the comparison. [Figure columns: Input, GT, BGAD, DevNet, PatchCore, DRAEM, Ours (Semi), Ours (Sup), Ours (Un).]
Fig. 2 The overview of the inference process of SemiREST. Note that tensors with multiple channels are all shown as matrices for simplicity. Best viewed in color.
Algorithm steps (fragment):
4: Divide the tokens into a labeled set X and an unlabeled set U (Berthelot et al., 2019b)
5: Combine the labeled and unlabeled tokens and shuffle: W = Shuffle(Union(X, U))
6: Apply MixUp to all tokens (Berthelot et al., 2019b)
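The MixUp operation invoked in step 6 can be sketched as below. This is a minimal illustration of the MixMatch-style MixUp (Berthelot et al., 2019b); the Beta parameter and tensor shapes are chosen for illustration and are not taken from the paper.

```python
# Toy sketch of MixMatch-style MixUp: convexly combine two tokens and their
# labels, keeping the result closer to the first argument.
import torch

def mixup(x1, y1, x2, y2, alpha: float = 0.75):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)  # MixMatch keeps the mix closer to (x1, y1)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

x1, y1 = torch.randn(8, 256), torch.rand(8, 2)  # e.g., tokens and soft labels
x2, y2 = torch.randn(8, 256), torch.rand(8, 2)
x_mixed, y_mixed = mixup(x1, y1, x2, y2)
```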
[Fig. 2 diagram labels. Stage 1 (Position-Constrained Residual Generation): input image, deep feature, S(·), feature bank ℳ, PCF, PCR. Stage 2 (Bagging over Sliding Swin Transformers): position code, sliding windows, Swin Transformer, sliding predictions, prediction aggregation, final prediction.]
Table 5 Results of anomaly localization performance on KolektorSDD2. The best accuracy in one comparison with the same data and metric condition is shown in red while the second one is shown in blue. Note that the upper sub-table shows the results obtained in the unsupervised condition and the lower part reports those with full supervision or semi-supervision (only for SemiREST).

Method                            AP     PRO    Pixel-AUROC
PatchCore (Roth et al., 2022)     64.1   88.8   97.1
DRAEM (Zavrtanik et al., 2021)    39.1   67.9   85.6
SSPCAB (Ristea et al., 2022)      44.5   66.1   86.2
CFLOW (Gudovskiy et al., 2022)    46.0   93.8   97.4
RD (Deng and Li, 2022)            43.5   94.7   97.6
Ours (Un)                         72.8   98.1   99.4

PRN (Zhang et al., 2023a)         72.5   94.9   97.6
Ours (Sup)                        73.6   96.7   98.0
Ours (Semi)                       72.1   97.5   99.1
Table 6 Ablation study results on MVTec-AD. Note that the semi-supervised SemiREST inherits the PCF feature, the bagging of Swin Transformers and the K-NN augmentation modules from the fully-supervised version.
S(·)     PCF   Bagging   K-NN Aug.   MixMatch   Random Dropout   Unsupervised      Supervised        Semi-supervised
Square   -     -         -           -          -                70.5/95.4/98.4    75.1/96.6/98.9    -
ABS      -     -         -           -          -                69.3/94.8/98.3    76.8/96.9/99.0    -
Square   ✓     -         -           -          -                72.4/95.8/98.9    76.4/96.6/99.0    -
Square   ✓     ✓         -           -          -                79.3/96.9/98.8    83.4/98.2/99.4    -
Square   ✓     ✓         ✓           -          -                81.2/97.5/99.3    84.4/98.2/99.5    80.7/97.5/99.3
Square   ✓     ✓         ✓           ✓          -                81.2/97.5/99.3    84.4/98.2/99.5    82.1/97.8/99.4
Square   ✓     ✓         ✓           ✓          ✓                81.2/97.5/99.3    84.4/98.2/99.5    83.8/98.1/99.5
ABS      ✓     ✓         ✓           ✓          ✓                76.4/96.3/98.7    84.9/98.3/99.5    83.2/98.0/99.3

(Each performance triplet is AP/PRO/Pixel-AUROC.)
Note that here the term "unsupervised" refers to the absence of anomaly labels; normal images are known to be defect-free.
References

Bae J, Lee JH, Kim S (2022) Image anomaly detection and localization with position and neighborhood information. arXiv preprint arXiv:2211.12634

Bergmann P, Fauser M, Sattlegger D, Steger C (2019) MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 9592-9600

Bergmann P, Fauser M, Sattlegger D, Steger C (2020) Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 4183-4192

Bergmann P, Batzner K, Fauser M, Sattlegger D, Steger C (2022) Beyond dents and scratches: Logical constraints in unsupervised anomaly detection and localization. International Journal of Computer Vision 130(4):947-969

Berthelot D, Carlini N, Cubuk ED, Kurakin A, Sohn K, Zhang H, Raffel C (2019a) ReMixMatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785

Berthelot D, Carlini N, Goodfellow I, Papernot N, Oliver A, Raffel CA (2019b) MixMatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems 32

Božič J, Tabernik D, Skočaj D (2021) Mixed supervision for surface-defect detection: From weakly to fully supervised learning. Computers in Industry 129:103459

Breiman L (1996) Bagging predictors. Machine Learning 24:123-140

Cao H, Wang Y, Chen J, Jiang D, Zhang X, Tian Q, Wang M (2023a) Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Computer Vision - ECCV 2022 Workshops: Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part III, Springer, pp 205-218

Cao Y, Xu X, Liu Z, Shen W (2023b) Collaborative discrepancy optimization for reliable image anomaly localization. IEEE Transactions on Industrial Informatics pp 1-10, DOI 10.1109/TII.2023.3241579

Chen Y, Tian Y, Pang G, Carneiro G (2022) Deep one-class classification via interpolated Gaussian descriptor. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 36, pp 383-392

Cheng B, Wei Y, Shi H, Feris R, Xiong J, Huang T (2018) Revisiting RCNN: On awakening the classification power of Faster RCNN. In: Proceedings of the European Conference on Computer Vision (ECCV), pp 453-468

Dai X, Chen Y, Xiao B, Chen D, Liu M, Yuan L, Zhang L (2021) Dynamic head: Unifying object detection heads with attentions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7373-7382

Defard T, Setkov A, Loesch A, Audigier R (2021) PaDiM: A patch distribution modeling framework for anomaly detection and localization. In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part IV, Springer, pp 475-489

Dehaene D, Eline P (2020) Anomaly localization by modeling perceptual features. arXiv preprint arXiv:2008.05369

Deng H, Li X (2022) Anomaly detection via reverse distillation from one-class embedding. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 9737-9746

Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp 248-255

Ding C, Pang G, Shen C (2022) Catching both gray and black swans: Open-set supervised anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7388-7398

Dinh L, Krueger D, Bengio Y (2014) NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516

Dinh L, Sohl-Dickstein J, Bengio S (2016) Density estimation using Real NVP. arXiv preprint arXiv:1605.08803

Dong B, Zeng F, Wang T, Zhang X, Wei Y (2021) SOLQ: Segmenting objects by learning queries. Advances in Neural Information Processing Systems 34:21898-21909

Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, et al. (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: International Conference on Learning Representations

Gao L, Zhang J, Yang C, Zhou Y (2022) Cas-VSwin transformer: A variant swin transformer for surface-defect detection. Computers in Industry 140:103689

Gudovskiy D, Ishizaka S, Kozuka K (2022) CFLOW-AD: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp 98-107

Hatamizadeh A, Nath V, Tang Y, Yang D, Roth HR, Xu D (2022) Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 7th International Workshop, BrainLes 2021, Held in Conjunction with MICCAI 2021, Virtual Event, September 27, 2021, Revised Selected Papers, Part I, Springer, pp 272-284

Haynes D, Corns S, Venayagamoorthy GK (2012) An exponential moving average algorithm. In: 2012 IEEE Congress on Evolutionary Computation, IEEE, pp 1-8

He K, Chen X, Xie S, Li Y, Dollár P, Girshick R (2022) Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 16000-16009

Hou J, Zhang Y, Zhong Q, Xie D, Pu S, Zhou H (2021) Divide-and-assemble: Learning block-wise memory for unsupervised anomaly detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 8791-8800

Hsu CC, Hsu KJ, Tsai CC, Lin YY, Chuang YY (2019) Weakly supervised instance segmentation using the bounding box tightness prior. Advances in Neural Information Processing Systems 32

Huang C, Guan H, Jiang A, Zhang Y, Spratling M, Wang YF (2022) Registration based few-shot anomaly detection. In: Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIV, Springer, pp 303-319

Huang S, Lu Z, Cheng R, He C (2021) FaPN: Feature-aligned pyramid network for dense image prediction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 864-873

Huang Y, Qiu C, Yuan K (2020) Surface defect saliency of magnetic tile. The Visual Computer 36:85-96

Kervadec H, Dolz J, Wang S, Granger E, Ayed IB (2020) Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision. In: Medical Imaging with Deep Learning, PMLR, pp 365-381

Kim D, Park C, Cho S, Lee S (2022) FAPM: Fast adaptive patch memory for real-time industrial anomaly detection. arXiv preprint arXiv:2211.07381

Kingma DP, Dhariwal P (2018) Glow: Generative flow with invertible 1x1 convolutions. Advances in Neural Information Processing Systems 31

Lee J, Yi J, Shin C, Yoon S (2021) BBAM: Bounding box attribution map for weakly supervised semantic and instance segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 2643-2652

Lei J, Hu X, Wang Y, Liu D (2023) PyramidFlow: High-resolution defect contrastive localization using pyramid normalizing flow. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14143-14152

Li CL, Sohn K, Yoon J, Pfister T (2021) CutPaste: Self-supervised learning for anomaly detection and localization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 9664-9674

Li F, Zhang H, Liu S, Zhang L, Ni LM, Shum HY, et al. (2022) Mask DINO: Towards a unified transformer-based framework for object detection and segmentation. arXiv preprint arXiv:2206.02777

Liang T, Chu X, Liu Y, Wang Y, Tang Z, Chu W, Chen J, Ling H (2022) CBNet: A composite backbone network architecture for object detection. IEEE Transactions on Image Processing 31:6893-6906

Lin TY, Goyal P, Girshick R, He K, Dollár P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp 2980-2988

Liu W, Chang H, Ma B, Shan S, Chen X (2023a) Diversity-measurable anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 12147-12156

Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin Transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 10012-10022

Liu Z, Hu H, Lin Y, Yao Z, Xie Z, Wei Y, Ning J, Cao Y, Zhang Z, Dong L, Wei F, Guo B (2022) Swin Transformer V2: Scaling up capacity and resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 12009-12019

Liu Z, Zhou Y, Xu Y, Wang Z (2023b) SimpleNet: A simple network for image anomaly detection and localization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 20402-20411

Liznerski P, Ruff L, Vandermeulen RA, Franks BJ, Kloft M, Muller KR (2021) Explainable deep one-class classification. In: International Conference on Learning Representations, URL https://openreview.net/forum?id=A5VV3UyIQz

Loshchilov I, Hutter F (2019) Decoupled weight decay regularization. In: International Conference on Learning Representations

Massoli FV, Falchi F, Kantarci A, Akti Ş, Ekenel HK, Amato G (2021) MOCCA: Multilayer one-class classification for anomaly detection. IEEE Transactions on Neural Networks and Learning Systems 33(6):2313-2323

Mishra P, Verk R, Fornasier D, Piciarelli C, Foresti GL (2021) VT-ADL: A vision transformer network for image anomaly detection and localization. In: 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), IEEE, pp 01-06

Ni X, Ma Z, Liu J, Shi B, Liu H (2021) Attention network for rail surface defect detection via consistency of intersection-over-union (IoU)-guided center-point estimation. IEEE Transactions on Industrial Informatics 18(3):1694-1705

Niu S, Li B, Wang X, Peng Y (2021) Region- and strength-controllable GAN for defect generation and segmentation in industrial images. IEEE Transactions on Industrial Informatics 18(7):4531-4541

Pang G, Ding C, Shen C, van den Hengel A (2021) Explainable deep few-shot anomaly detection with deviation networks. arXiv preprint arXiv:2108.00462

Ristea NC, Madan N, Ionescu RT, Nasrollahi K, Khan FS, Moeslund TB, Shah M (2022) Self-supervised predictive convolutional attentive block for anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 13576-13586

Roth K, Pemula L, Zepeda J, Schölkopf B, Brox T, Gehler P (2022) Towards total recall in industrial anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14318-14328

Rudolph M, Wandt B, Rosenhahn B (2021) Same same but DifferNet: Semi-supervised defect detection with normalizing flows. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp 1907-1916

Ruff L, Vandermeulen R, Goernitz N, Deecke L, Siddiqui SA, Binder A, Müller E, Kloft M (2018) Deep one-class classification. In: International Conference on Machine Learning, PMLR, pp 4393-4402

Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, et al. (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115:211-252

Saiku R, Sato J, Yamada T, Ito K (2022) Enhancing anomaly detection performance and acceleration. IEEJ Journal of Industry Applications 11(4):616-622

Salehi M, Sadjadi N, Baselizadeh S, Rohban MH, Rabiee HR (2021) Multiresolution knowledge distillation for anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14902-14912

Scholkopf B, Williamson R, Smola A, Shawe-Taylor J, Platt J, et al. (2000) Support vector method for novelty detection. Advances in Neural Information Processing Systems 12(3):582-588

Schölkopf B, Platt JC, Shawe-Taylor J, Smola AJ, Williamson RC (2001) Estimating the support of a high-dimensional distribution. Neural Computation 13(7):1443-1471

Shi Y, Yang J, Qi Z (2021) Unsupervised anomaly segmentation via deep feature reconstruction. Neurocomputing 424:9-22

Shorten C, Khoshgoftaar TM (2019) A survey on image data augmentation for deep learning. Journal of Big Data 6(1):1-48

Sohn K, Berthelot D, Carlini N, Zhang Z, Zhang H, Raffel CA, Cubuk ED, Kurakin A, Li CL (2020) FixMatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems 33:596-608

Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929-1958

Tabernik D, Šela S, Skvarč J, Skočaj D (2020) Segmentation-based deep-learning approach for surface-defect detection. Journal of Intelligent Manufacturing 31(3):759-776

Tailanian M, Pardo Á, Musé P (2022) U-Flow: A U-shaped normalizing flow for anomaly detection with unsupervised threshold. arXiv preprint arXiv:2211.12353

Tao X, Zhang D, Ma W, Hou Z, Lu Z, Adak C (2022) Unsupervised anomaly detection for surface defects with dual-siamese network. IEEE Transactions on Industrial Informatics 18(11):7707-7717

Tien TD, Nguyen AT, Tran NH, Huy TD, Duong ST, Nguyen CDT, Truong SQH (2023) Revisiting reverse distillation for anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 24511-24520

Uzen H, Türkoglu M, Yanikoglu B, Hanbay D (2022) Swin-MFINet: Swin transformer based multi-feature integration network for detection of pixel-level surface defects. Expert Systems with Applications 209:118269

Wang CY, Bochkovskiy A, Liao HYM (2022a) YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696

Wang W, Bao H, Dong L, Bjorck J, Peng Z, Liu Q, Aggarwal K, Mohammed OK, Singhal S, Som S, et al. (2022b) Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442

Wang W, Dai J, Chen Z, Huang Z, Li Z, Zhu X, Hu X, Lu T, Lu L, Li H, et al. (2022c) InternImage: Exploring large-scale vision foundation models with deformable convolutions. arXiv preprint arXiv:2211.05778

Wang Y, Chen H, Heng Q, Hou W, Fan Y, Wu Z, Wang J, Savvides M, Shinozaki T, Raj B, Schiele B, Xie X (2023) FreeMatch: Self-adaptive thresholding for semi-supervised learning. In: The Eleventh International Conference on Learning Representations, URL https://openreview.net/forum?id=PDrUPTXJI_A

Wu JC, Chen DJ, Fuh CS, Liu TL (2021) Learning unsupervised metaformer for anomaly detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 4369-4378

Xie G, Wang J, Liu J, Jin Y, Zheng F (2023) Pushing the limits of few-shot anomaly detection in industry vision: GraphCore. In: The Eleventh International Conference on Learning Representations, URL https://openreview.net/forum?id=xzmqxHdZAwO

Xu M, Zhang Z, Hu H, Wang J, Wang L, Wei F, Bai X, Liu Z (2021) End-to-end semi-supervised object detection with soft teacher. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 3060-3069

Yang M, Wu P, Feng H (2023) MemSeg: A semi-supervised method for image surface defect detection using differences and commonalities. Engineering Applications of Artificial Intelligence 119:105835

Yang S, Xiao W, Zhang M, Guo S, Zhao J, Shen F (2022) Image data augmentation for deep learning: A survey. arXiv preprint arXiv:2204.08610

Yao X, Li R, Zhang J, Sun J, Zhang C (2023) Explicit boundary guided semi-push-pull contrastive learning for supervised anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 24490-24499

Yi J, Yoon S (2020) Patch SVDD: Patch-level SVDD for anomaly detection and segmentation. In: Proceedings of the Asian Conference on Computer Vision

Yu J, Zheng Y, Wang X, Li W, Wu Y, Zhao R, Wu L (2021) FastFlow: Unsupervised anomaly detection and localization via 2D normalizing flows. arXiv preprint arXiv:2111.07677

Yun S, Han D, Oh SJ, Chun S, Choe J, Yoo Y (2019) CutMix: Regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 6023-6032

Zagoruyko S, Komodakis N (2016) Wide residual networks. arXiv preprint arXiv:1605.07146

Zavrtanik V, Kristan M, Skočaj D (2021) DRAEM: A discriminatively trained reconstruction embedding for surface anomaly detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp 8330-8339

Zhang G, Cui K, Hung TY, Lu S (2021a) Defect-GAN: High-fidelity defect synthesis for automated defect inspection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp 2524-2534

Zhang H, Cisse M, Dauphin YN, Lopez-Paz D (2018) mixup: Beyond empirical risk minimization. In: International Conference on Learning Representations, URL https://openreview.net/forum?id=r1Ddp1-Rb

Zhang H, Wu Z, Wang Z, Chen Z, Jiang YG (2023a) Prototypical residual networks for anomaly detection and localization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 16281-16291

Zhang J, Su H, Zou W, Gong X, Zhang Z, Shen F (2021b) CADN: A weakly supervised learning-based category-aware object detection network for surface defect detection. Pattern Recognition 109:107571

Zhang K, Wang B, Kuo CCJ (2022) PEDENet: Image anomaly localization via patch embedding and density estimation. Pattern Recognition Letters 153:144-150

Zhang X, Li S, Li X, Huang P, Shan J, Chen T (2023b) DeSTSeg: Segmentation guided denoising student-teacher for anomaly detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 3914-3923

Zhu H, Kang Y, Zhao Y, Yan X, Zhang J (2022) Anomaly detection for surface of laptop computer based on PatchCore GAN algorithm. In: 2022 41st Chinese Control Conference (CCC), IEEE, pp 5854-5858

Zhu X, Goldberg AB (2009) Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 3(1):1-130

Zong B, Song Q, Min MR, Cheng W, Lumezanu C, Cho D, Chen H (2018) Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In: International Conference on Learning Representations
"https://github.com/BeJane/Semi_REST"
"https://export.arxiv.org/pdf/2306.03503v1.pdf"
This paper explores how AI-owners can develop safeguards for AI-generated content by drawing from established codes of conduct and ethical standards in other content-creation industries. It delves into the current state of ethical awareness around Large Language Models (LLMs). By dissecting the mechanism of content generation by LLMs, four key areas where safeguards could be effectively applied (upstream/downstream, and at the user prompt/answer) are identified. A comparative analysis of these four areas follows and includes an evaluation of the existing ethical safeguards in terms of cost, effectiveness, and alignment with established industry practices. The paper's key argument is that existing IT-related ethical codes, while adequate for traditional IT engineering, are inadequate for the challenges posed by LLM-based content generation. Drawing from established practices within journalism, we propose potential standards for businesses involved in distributing and selling LLM-generated content. Finally, potential conflicts of interest between dataset curation upstream and ethical benchmarking downstream are highlighted to underscore the need for a broader evaluation beyond mere output. This study prompts a nuanced conversation around ethical implications in this rapidly evolving field of content generation.
I. INTRODUCTION
A. Motivation
Artificial intelligence (AI) technology is advancing rapidly, while simultaneously becoming widely accessible to broader society. The speed of this progression reveals oversights regarding ethical awareness in the standards used in Large Language Model (LLM)-based products. These include (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. With governments considering regulatory measures for LLM-generated content, LLM-based service providers could draw lessons from other content-producing industries to self-regulate. In addition, in the existing debate around LLM-based AI [1], the implementation of safeguards has primarily focused on output filtering, which overlooks safeguards that could be applied to check the quality of the data used to train the LLMs. This paper will address pedagogy and the adoption of ethics in computer science. It comprises an exploration of possible points of application of standards at both input and output stages of LLMs. These stages are referred to elsewhere as upstream and downstream, respectively. Furthermore, the paper highlights best practices in journalism as one profession that has well-established ethics standards, which could be extrapolated to LLM-based services.

Fig. 1. A simplified information flow in an LLM-based chat service; safeguards can be applied at the input (1, 2) or at the output (3, 4).
Table I. Codes of ethics by year

Field      Document                  Year
Medicine   Hippocratic Oath^e [11]   5th century B.C.

a. Latest version in 2018.
b. Latest revision in 2006.
c. Latest revision in 2015.
d. Latest version in 2017.
e. Major revision in 1948. In Nazi Germany, medical students did not take the Hippocratic Oath [24].

B. IT Ethics awareness post-2005

WASC (Western Association of Schools and Colleges) and ABET (Accreditation Board for Engineering and Technology) both emphasize the importance of incorporating professional responsibility and ethics into the curriculum for engineering, software, and IT programs.
ABET has been accrediting programs since 1932, and has always been focused on continuous improvement and evolving standards to meet industry and societal needs. Established later, in 1962, WASC also aims to foster a culture of continuous improvement. One of ABET's criteria for accrediting computing programs is Criterion 3: Student Outcomes, which states that "students must have an understanding of professional, ethical, legal, security, and social issues and responsibilities". This criterion applies to programs in various areas such as electrical engineering, software engineering, and IT [2][3][4]. ABET and WASC guidelines can influence the content of university curricula pursuant to their accreditations, thus comprising a major reason why ethics or professional responsibility subjects are included in said programs.
C. Ethics in IT pre-2005
Interestingly, prior to 2005, we seldom find any reference to ethics in IT curricula. Today, as noted by several authors, most IT programs include some sort of ethics content in their curricula [5][6][7][8]. However, compared to other fields, IT has been a relative latecomer to this trend (see Table 1). This is unsurprising: the term 'IT' was only coined after the 1950s (after the invention of the solid-state transistor) and only became mainstream after 1981, with IBM's publication of the PC standard in that year.
D. IT leaders lacking formal ethics training
This broader trend is reflected in the first author's own academic journey: he initially studied as an undergraduate at what is now known as BarcelonaTech from 1994-1999, followed by doctoral studies at TokyoTech from 2005-2007. Throughout his education, several subjects addressed humanistic principles, such as co-existence, but very few provided actionable ethical frameworks, including tools such as an introduction to ethical analysis, cost-benefit evaluation, and the like. Fast-forward a decade, and he transitioned from student to a lecturer entrusted with teaching an "Ethics for IT" course (ITBP370) from 2014 to 2021. During this period, he employed various educational resources, including Reynolds' textbook on Ethics for IT from 2003 [9], and the interactive Moral Machine website [10], along with case studies and simulated games. This experience led him to two key realizations. First, before his doctoral studies of 2002-2005, he spent several years working in the industry, specifically writing Java code for a German bank, with little formal awareness of ethical considerations. The word "ethics" or "compliance" never surfaced in his teams' discussions; their primary focus was to ensure the code functioned correctly. Second, we postulate that this lack of formal ethics training likely extends to many IT leaders and workers who graduated before 2005.
E. Awareness as a prerequisite
It is important to note that ethics training does not guarantee ethical behavior. Rather, it serves as a prerequisite for ethical performance [9]. Ethical codes are not a recent development either. Table 1 provides a comprehensive historical timeline. For example, the earliest known code specific to a professional trade can be traced back to Greece in the 5th century B.C. The first recorded ethical code tailored to Computer Engineering was established much later, in 1992. For context, Facebook launched in 2004 and became publicly available outside university campuses in 2006. Meanwhile, the EU General Data Protection Regulation (GDPR) was not enacted until a decade later, in 2016 [25]. The first GDPR-sanctioned fines to Facebook were issued in 2022 for violations in 2018.
II. CURRENT STATE OF STANDARDS AND ETHICS IN LLM
A. The data equivalence to the model

Fig. 1 illustrates a simplified information flow in an LLM-based chat service. The model's weights are symbolized by an abacus. Given a specific set of documents for training, most LLMs, such as Facebook's LLaMA [26] and others [27], produce similar "weights" that respond in comparable ways to identical prompts.
These weights can be considered a knowledge representation of the underlying training data, mediated by user prompts (see the Data-Information-Knowledge-Wisdom model in our previous work [28]). From a user's perspective, the model appears to display creativity and "sparks" of abstract reasoning [29]. However, for an informed observer, LLMs are merely predictors [30], trained through reinforcement learning to cater to human preferences [31]. This distinction is evident in the HuggingFace LLM leaderboard, a popular platform for comparing LLMs. The leaderboard [27] reveals that the data used for training has a more significant impact on the model's performance than the size of the model (measured in billions of weights) [32a]. In essence, data quality has a more profound influence on performance than the algorithm. This principle is often referred to in data science as "garbage in, garbage out".
B. The fine-tuning problem
To increase their practical utility and to align their behavior more closely with human expectations, LLMs are fine-tuned after the initial training. This process is a computationally smaller [30], more focused training regime where the model is further refined, usually using a carefully curated dataset comprising human feedback on the answers given to prompts [32b]. Ideally, the model learns to follow instructions more accurately and to reduce the likelihood of providing answers that receive poor evaluations from users (see the thumbs-up/thumbs-down image in Table 2). This fine-tuning has proven crucial for commercializing LLM-based services. While the raw models have an impressive ability to understand and generate human-like text, their behavior can sometimes diverge from political correctness. For this purpose, companies such as OpenAI use a method called Reinforcement Learning from Human Feedback (RLHF) to "tune" the raw LLM output [32c].
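As a rough illustration of the preference-learning component of RLHF, the sketch below trains a toy reward model on pairs of (preferred, rejected) answer embeddings. Everything here is an assumption made for illustration: the embeddings are random stand-ins, the linear scorer is invented, and the snippet is not any vendor's actual pipeline. A full RLHF system would additionally optimize the LLM itself (e.g., with PPO) against the learned reward.

```python
# Toy sketch: training a reward model from human preference pairs.
# Assumptions: random 16-dim "answer embeddings" stand in for real
# encoded (prompt, answer) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # maps an answer embedding to a scalar reward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the reward of the human-preferred
    # answer above the reward of the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

chosen = torch.randn(32, 16)    # embeddings of answers annotators preferred
rejected = torch.randn(32, 16)  # embeddings of answers annotators rejected

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
```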
1) Benefits
At the heart of this fine-tuning is the principle of shaping the model's output to better fit specific, desired characteristics. For instance, model developers might wish to ensure that the AI does not propagate harmful misinformation or express biased views. To achieve this, they might use a dataset specifically designed to promote responsible behavior in AI systems during the fine-tuning process. These datasets could contain examples of appropriate responses to a wide range of prompts, potentially mitigating the risk of harmful outputs. Various studies have indicated that this approach can help curb the undesirable outputs of the raw LLM, especially when combined with a comprehensive evaluation mechanism.
2) Risks
However, the process is not without challenges. Prompt injection and other techniques can still be effective, even on a fine-tuned model (see the "Do Anything Now" hack). In addition, since fine-tuning is ideally tailored to specific contexts and use-cases, it can also be used to "game" any ethical benchmark (see TruthfulQA in the next section).

Fine-tuning relies on yet another dataset; if that dataset is not disclosed, it is naturally hard to provide any ethical oversight. Note that the fine-tuning process can itself introduce its own form of bias. Striking the right balance between allowing an LLM to generate diverse outputs and ensuring it adheres to ethical and legal norms is therefore a complex task (pers. comm., Emil Ahlbäck 2023).
C. Current standards to compare models
In the aforementioned HuggingFace leaderboard, the models are ranked by a weighted average of four benchmarks (a toy computation of this ranking is sketched after the list). According to HuggingFace, these are:
1. AI2 Reasoning Challenge (25-shot): This benchmark consists of a set of elementary-level science questions [33].
2. HellaSwag (10-shot): Comprises a test of commonsense inference, which is easy for humans (~95%) but poses a challenge for state-of-the-art models [34].
3. MMLU (5-shot): This measure tests a text model's multitask accuracy across 57 tasks, including elementary mathematics, US history, computer science, law, and more [35].
4. TruthfulQA (0-shot): This benchmark evaluates the truthfulness of a language model's generated answers to questions [36a].
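As a toy illustration of the ranking mechanics described above, the snippet below averages the four benchmark scores per model. The model names and scores are invented for illustration; they are not actual leaderboard entries.

```python
# Toy sketch: ranking models by the mean of the four leaderboard benchmarks.
# The scores below are made up for illustration.
BENCHMARKS = ["ARC (25-shot)", "HellaSwag (10-shot)", "MMLU (5-shot)", "TruthfulQA (0-shot)"]

models = {
    "model-A": {"ARC (25-shot)": 61.0, "HellaSwag (10-shot)": 84.2,
                "MMLU (5-shot)": 58.7, "TruthfulQA (0-shot)": 45.3},
    "model-B": {"ARC (25-shot)": 55.4, "HellaSwag (10-shot)": 80.1,
                "MMLU (5-shot)": 52.0, "TruthfulQA (0-shot)": 51.8},
}

def leaderboard_score(scores: dict) -> float:
    return sum(scores[b] for b in BENCHMARKS) / len(BENCHMARKS)

for name in sorted(models, key=lambda m: -leaderboard_score(models[m])):
    print(f"{name}: {leaderboard_score(models[name]):.1f}")
```

Note that TruthfulQA, the only ethics-oriented number, contributes just one quarter of the final score in this scheme.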
As these and other ethics-focused benchmarks [36b-36d] evolve into standards, their influence is anticipated to grow. However, note that on HuggingFace only one of the four benchmarks addresses ethics, and it targets only the narrow issue of a model's propensity to regenerate conspiracy theories that were present in the training dataset. Also of note is the lack of concern regarding the provenance of the training datasets [31].
However, what is accepted in IT could be considered a grave violation in other fields such as journalism. As LLM-based services increasingly overlap with journalism, should they not adopt its standards too? What are the potential risks of failing to integrate ethics into these leaderboards?
D. Where can we do better?

Table 2 outlines the possible approaches for self-regulation, which can be implemented across four touchpoints in an LLM-based service. Table 3 compares each of the four touchpoints listed earlier in terms of effectiveness, cost, and risk of misalignment. First, the most effective solution, as proposed by the CEO of Stability [37,38], advocates addressing potential 'AI misalignment' at its root cause: the training datasets. However, this method is also the most expensive and demands a commitment to disclosure and transparency. It is this step that the industry leader, OpenAI, has been reluctant to take. Instead, they have so far chosen not to disclose the specific contents of their datasets. Table 4 compares three leading LLM-based services in terms of the ethical safeguards they currently implement.
E. Hiding problems
In general, engineering practice discourages avoiding the root causes of problems [39][40][41][42], a premise that holds true in software engineering too. For example, several views from thought leaders on the subject include:
1. Robert C. Martin, also known as "Uncle Bob", discusses the concept of writing clean code in his 2008 book, Clean Code. One characteristic of clean code is its readability and comprehensibility; hence, problems should be promptly addressed rather than concealed or disregarded [43].
2. In his 1999 book, Refactoring: Improving the Design of Existing Code, Martin Fowler contends that refactoring is fundamentally the process of identifying and tackling the root causes of code design problems [44].
3. Kent Beck, in his work Test-Driven Development, proposes a software development methodology that stresses writing tests prior to implementation code. This process ensures that problems are swiftly identified and addressed, rather than hidden [45].
4. In 2004, Steve McConnell's comprehensive guide to software construction, Code Complete, included a discussion on the importance of debugging and testing code. He emphasized that code issues should be detected and rectified, not ignored or concealed [46].
These insights, from software engineering experts, underscore a universally accepted principle in engineering: ignoring, concealing, or failing to address root causes is seen as a detrimental practice, which can lead to poor-quality code, technical debt, long-term maintenance challenges, and, perhaps more concerningly, to a moral slippery slope.
F. Upstream vs downstream safeguards
Other arguments in favor of ethical controls upstream (at input), instead of downstream-only, are that:
1. Humans who build the data in the first place deserve fair compensation;
2. Controlling things at the source follows the design paradigm of prevention, rather than remedial action; and
3. It avoids teaching AIs to "game" the system [47][48][49][50].
The main argument in favor of downstream-only safeguards is one of cost. While evaluating output is straightforward with benchmarks, evaluating the input (training data) for misinformation, conspiracy theories, etc. is more expensive (a toy upstream screening pass is sketched below). Once a "poisoned" document is discovered in the training set, engineers do not know how to "remove" its effects from the model weights. The only way forward at this stage is to retrain the model from scratch, which takes many days and several million dollars in computational costs [30].
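As a minimal sketch of what an upstream safeguard could look like, the snippet below screens candidate documents before they ever reach training. The flagged phrases and the corpus are invented for illustration; a production system would rely on curated classifiers and human review rather than a keyword list.

```python
# Toy sketch: screening candidate training documents upstream (at input),
# instead of filtering model output downstream. Flag list is illustrative only.
FLAGGED_PHRASES = ["flat earth", "miracle cure"]  # hypothetical blocklist

def admit_document(text: str) -> bool:
    """Return True if the document may enter the training corpus."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in FLAGGED_PHRASES)

corpus = [
    "The earth orbits the sun.",
    "Buy this miracle cure today!",
]
admitted = [doc for doc in corpus if admit_document(doc)]
print(admitted)  # only the first document survives screening
```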
G. Symptoms of a slippery slope

1) Use of blanket disclaimers
See Fig. 2 for an example of the slippery slope, a well-known concept. The term refers to a type of argument where a specific decision or action leads to a series of events that result in an undesirable outcome: the said "slippery" descent into unethical or immoral behavior. This slippery slope phenomenon is more prevalent than reported in the media (see, for example, [51][52][53][54][55][56][57][58][59]). In Fig. 2, the statement does not seem in compliance with several items in the ACM Code of Ethics; see Annex A.
2) Shifting blame to user
This blanket statement uses a technique called blame shifting [60][61][62][63], where users are co-opted into shouldering responsibility for misinformation and/or unethical responses. This practice would be similar to a company asking users to help improve its products, but without financial compensation. A further point is that these LLM products have been rolled out to the public, skipping the common practices of beta testing and slow, progressive rollouts.
3) What if the AI was a journalist?
If the chatbot in Fig. 2 were subjected to the same standards as journalism, such blanket statements would not be allowed. As one user exclaimed, this disclaimer is "equivalent to the New York Times posting a statement on the front page that its content may be wrong, but the NYT isn't responsible. Sorry". With this comparison front of mind, we now turn to exploring how the news industry deals with setting standards for content.
III. WHAT AI-OWNERS CAN LEARN FROM JOURNALISM
The process of advancing AI technology and making AI-generated content attractive and available to the general public is comparable to that of the printing press and its pivotal influence on newspapers and journalism. Like AI-owners today, publishers at the time did not start off with a full-fledged set of standards and ethics to guide them. Some newspaper owners pushed for sensational content to attract paying readers, resulting in the creation of the derogatory term "yellow journalism" to describe inaccurate or misleading content [64]. To avoid content that would cause harm, journalists and newspaper owners agreed to set standards for content creation that benefited the consumer of the content and built their own reputations for reliability [65]. Here are some common tenets in these ethical codes that hold relevance for owners and users of AI-generated content.
A. Accuracy
Editors have argued that the single most important thing in journalism is accuracy. The editor-in-chief emeritus at Bloomberg News, Matthew Winkler, interviewed and hired hundreds of reporters across his career. He usually asked one question that would result in an automatic rejection if the candidate got it wrong: "What is the most important thing in journalism?" The right answer was "accuracy", which one author of this paper got right before her 18-year career at the organization. Accuracy and factual reporting are the backbone of building and maintaining trust with the content consumer. Accuracy is ensured through mechanisms including:
1) Editors
Editors serve as gatekeepers for content. They are the first-line filter after the journalist, monitoring for grammar and factual accuracy and employing a series of flags to check for accidental inaccuracy. Some editorial tasks are programmed right into desktop tools, while most traditionally remain with human editors.
2) Fact-checking
Publications can have separate departments that check the accuracy of statements through researching databases and calling sources to confirm information. Indeed, some publications have fact-checking reporters who fact-check other news and publish their findings.
3) Sourcing
Publications have specific rules for sharing their sources. Bloomberg, for example, requires citing a source before publishing. Exceptions are considered when revealing a source may cause them harm. This decision often requires a rigorous review process and approval from an executive editor [65].
4) Ombudsman
Publications employ ombudspersons, who handle complaints from the public. They also have a broad mandate to maintain ethical standards within an organization.
5) Fireable offenses
Journalists who do not adhere to these standards are dismissed. Jayson Blair, formerly of the New York Times, was dismissed for fabricating content, events and sources. In an attempt to restore its reputation, the NYT revealed details of what Blair did to deceive readers and explained how it would endeavor to ensure similar situations do not arise [66].
B. Transparency
Transparency is about showing your work and your sources so the consumers of the content can make their own decisions about the information. Transparency gives the consumer power to check accuracy themselves across various media. In research, transparency is usually revealed in footnotes and citation practices. In journalism, it is part of the text, and can be revealed as follows:
1) Citing sources
Citing sources means revealing the name and location of the source of the material and information used. If the source content is digital, a direct link to the source is required.
2) Revealing omission
When information that may be pertinent cannot be secured, journalistic standards require revealing details about why it could not be secured. A typical example is when the reporter writes that a person "did not respond to requests for an interview".
3) Transparency of ownership
If the publication is owned by a party that may have an interest in the topic, or may gain or lose something because of the news, this fact must be revealed.
4) Conflict of interest
All potential conflicts of interest are expected to be revealed or avoided. A reporter should not interview their own relative for a story or use a good friend as a source of information.
C. Do-no-harm principle
Similar to the Hippocratic Oath, journalists follow a principle of conducting interviews and research so as not to use information gathered in a manner that would harm the people involved. Academic research has similar guidelines for its information gathering processes.
1) Use of adjectives and adverbs
Journalists are trained to avoid adjectives and adverbs that are not clearly backed up by fact, to avoid unintended bias. Calling something "tall" is relative and may be misconstrued from its intended meaning. Saying something is 165 centimeters tall conveys exactly what it is to a person who is 200 cm as well as to one who is 150 cm; it is "tall" to one but actually "short" to the other. So all adjectives and adverbs are potential flags for misleading content.
2) Clarity between opinion and fact
The news profession often uses labels for clarity. An opinion piece or a column is clearly labelled Opinion and not News. Labels can be used to show accuracy and provide transparency.
3) Use of personal information
When reporting court cases, for example, journalists also use labels such as "alleged" to describe a defendant, to ensure readers do not inadvertently believe the person is guilty. Also, when writing about a person, the reporter endeavors to give that person time to comment or respond before publication.
D. Laws and accountability
Like other industries, journalism is governed by laws on content creation. These often revolve around libel, and publishers can take out libel insurance to cover their exposure to risk. For AI-generated content, it remains unclear where accountability lies. Will libel in AI-generated content be the responsibility of the maker of the model or of the user of the model?
IV. DISCUSSION
By comparing the awareness of ethical standards discussed in Section I with the established codes of conduct in journalism addressed in Section III, AI-owners can begin to consider what elements might apply to AI-generated content. Section II explores where in the AI-generation process the standards from journalism could be applied.
To be sure, this paper is not attempting to decide what is and is not ethical for LLM-generated content. Rather, it suggests that adopting fundamental best practices in transparency and accuracy will allow for ethical assessment and ethics-based decisions in content creation, for both provider and user.
A. Case where journalism standards would have been effective
An example worth considering as a discussion prompt is the lawyer who recently sued an airline on behalf of a client [67]. The lawyer submitted a brief that included a number of relevant court decisions. However, it was later revealed that no one could actually find the decisions cited in the brief. The lawyer had used OpenAI's ChatGPT to do his research. Ironically, the lawyer even asked ChatGPT to verify that the cases were real, and the program confirmed they were. But they were not, a fact confirmed when the judge went looking for them and found nothing: they had been fabricated by the AI. Had this been content created in journalism, many of the tools used to guarantee accuracy would have caught the erroneous information before publication. For AI-owners, the tools could be applied at the point of entry of data into the model, at the model itself, or in post-production filters. AI-owners could also create an ombudsperson system for fielding reports of erroneous or harmful content from end-users.
This example provides just one instance that can prompt more nuanced ethics-based discussions. AI-owners can look to other content-handling industries for further inspiration [68][69]. We encourage those in other fields, such as the real estate or legal professions, where content is created for contracts, to join us in exploring what can be extrapolated from their standards to benefit this new AI industry. In fact, any industry that could incur risk through its actions can offer ethical inspiration.
V. CONCLUSION
LLM-based services generate content. Currently, this content is facing a backlash when it is incorrect, misleading, or potentially dangerous to society. AI-owners can remove some of the concern by creating a code of conduct or standards for their products. None of these issues related to the control or misuse of information are new; they have been faced before in other industries. This paper has discussed the current state of ethics in computer science, where standards could be applied in the current state of AI-generated content, and how another industry, journalism and publishing, developed effective codes of conduct and standards for addressing content-specific issues of accuracy, transparency, and conflict of interest.
The paper has shown that there has been a lack of ethics training in higher education among IT leaders who graduated before 2005, and has discussed the influence of WASC and ABET on IT program curricula.
It has discussed a number of touch points in the LLM processes that can be exposed to safeguards and control measures to improve accuracy and transparency, both at the entry of the data -upstream -and after the generation of content -downstream.
It has been noted that there has been a tendency to put effort into downstream controls only. This paper raises the question: why? And why not apply effort upstream too? Is there a conflict of interest here for the AI-owners if upstream checks are more costly? As the leading academic institutions in AI also depend on funding from big tech, is this conflict of interest limited to LLM service providers only?
Finally, we listed some principles from journalism for insight into which standards have historically been effective in dealing with the same issues LLM-based generated content is facing today.
Further discussion and research are needed into the usefulness and application of established standards from other content-creating industries. The key motivation is for AI-owners to minimize harm and promote accuracy and transparency, thus putting in place some of the fundamental standards and behaviors needed to advance ethical AI.
VI. FINAL REMARKS
Journalism's codes of ethical conduct have developed, and been refined, over several decades. These processes of refinement have resulted in a system of content generation with a high level of public trust and reliability. Yet such trust did not come overnight; it took time. The utility of using journalism as a comparison in this ethics-based discussion manifests in two points, relating to speed and trust. We know that the speed at which AI-based content is developing is exceeding expectations, with experts in the computer science field calling for a pause in its development for six months to allow government regulators to 'catch up' [70]. This call is indicative of the widespread concern about its rapid, and somewhat ethically unfettered, development. Second, and corollary to the first, is trust. As regulators scramble to address the daily array of concerns regarding misinformation and the veracity of AI-generated content, trust in the information produced fluctuates. In the context of computer science, such trust issues are compounded by the fact that the discipline does not have a robust foundational base in ethics education to springboard from.
VII. ANNEX
A. Annex A
Excerpts abridged from the ethical guidelines of the Association for Computing Machinery (ACM).
• Contribute to society and human well-being. …should work to develop computer systems that can reduce negative consequences to society, …
• Avoid harm to others. Computer systems have an indirect impact on third parties…software developers should minimize the risk of harming others…
• Be honest and trustworthy. This principle encourages programmers to be honest and aware of their limitations in knowledge and education when writing computer systems. Also, if a programmer knows there is something wrong with a computer system, he or she should report it immediately …
• Give proper credit for intellectual property. It is mandatory for every software developer never to use and take credit for someone else's work, even when it has not been protected by copyright law… They must recognize and fully credit other people's work, and they should use their own ideas to develop software.
• Respect the privacy of others. Computer systems are wrongly used by some people to violate the privacy of others. Software developers should write programs that can protect users' private information and prevent undesired parties from gaining unauthorized access to it (Code of Ethics and Professional Conduct).
• Honor confidentiality. Unless required by law or any other ethical guideline, a programmer must keep secret any additional information related to his or her employer that arises from working on a project.
Fig. 1. An LLM-based service can be regulated at four points, divided in two categories: at the input (upstream) and at the output (downstream).
Fig. 2. A chatbot blanket disclaimer example. Source: Google.
Table II. Touchpoints where LLMs can be regulated

                    At input (upstream)    At output (downstream)
At user level       Prompt censoring       User reports*
At model level      Dataset curation       Filtering answers

* image source: chat.openai.com
Table III. Cost-benefit analysis

Safeguard touchpoint   Risk addressed                 Effectiveness                     Cost
Upstream
  Prompt censoring     Prompt injection, jailbreak    <100% (see the "Do                ✅ Low
                                                      Anything Now" hack)
  Dataset curation     Regenerating unethical         Common sense [40]                 ⚠ High
                       patterns in training data
Downstream
  Output filter        Censor problematic outputs     ⚠ Hides problem (b)               ✅ Low
  User reports (a)     Detect problems that passed    Feedback used primarily           ✅ Low
                       the previous filter            for RL (not ethics) [30]

a. image source: chat.openai.com
b. Hides the problem, which is recognized as poor practice in engineering [80][81][82][83][84][85][86][87][88][89][90].
Table IV. Safeguard measures by vendor

Ethics check                    OpenAI    Midjourney    Stability    Potential issue
Prompt censoring                ✅         ✅             ✅            Hides the problem
No use of blanket disclaimers   ❌         ✅             ✅            Slippery slope
Train data disclosed?           ❌         ❌             ✅            Potential use of content
                                                                     without crediting sources (a)

a. Stable Diffusion is "trained" on a 100TB dataset that contains 2bn images, including copyrighted photos.
ACKNOWLEDGEMENTS

Danielle Drozdzewski for editorial assistance.
REFERENCES

K. Roose, "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn," The New York Times, 30 May 2023. [Online].
ABET, "History." [Online].
WSCUC, "About WSCUC." [Online].
ABET, "Criteria for Accrediting Computing Programs, 2021-2022." [Online].
C. D. Martin and E. Yale-Weltz, "Teaching computer ethics: a broader perspective," Journal of Information Systems Education, vol. 10, no. 4, pp. 175-180, 1999.
C. Huff and C. D. Martin, "Computing consequences: a framework for teaching ethical computing," Commun. ACM, vol. 38, no. 12, pp. 75-84, Dec. 1995.
D. Gotterbarn, "Informatics and professional responsibility," Sci. Eng. Ethics, vol. 7, no. 2, pp. 221-230, 2001.
B. Brinkman and R. Sanders, "Teaching computer ethics at a United States university," in R. Luppicini (Ed.), Cases on Digital Technologies in Higher Education: Issues and Challenges, IGI Global, 2013, pp. 180-193.
G. W. Reynolds, Ethics in Information Technology, Course Technology, 2003.
E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J. F. Bonnefon, and I. Rahwan, "The moral machine experiment," Nature, vol. 563, no. 7729, pp. 59-64, 2018.
A. R. Jonsen, A Short History of Medical Ethics, Oxford University Press, 2000.
Association for Computing Machinery, "ACM Code of Ethics and Professional Conduct," 2018. [Online]. Available: https://www.acm.org/code-of-ethics
American Society of Civil Engineers, "Code of Ethics," 2006. [Online]. Available: https://www.asce.org/code-of-ethics/
Society for Conservation Biology, "SCB Code of Ethics," 2004. [Online]. Available: https://conbio.org/publications/scb-code-of-ethics/
American Chemical Society, "The Chemical Professional's Code of Conduct," 2012. [Online]. Available: https://www.acs.org/content/acs/en/about/governance/committees/ethics/chemical-professional-code-conduct.html
American Institute of Physics, "AIP Statement of Ethical Principles," 2002. [Online]. Available: https://www.aip.org/aip/statement-ethical-principles
International Association of Chiefs of Police, "Law Enforcement Code of Ethics," 1957. [Online]. Available: https://www.theiacp.org/resources/document/law-enforcement-code-of-ethics
International Association of Fire Chiefs, "IAFC Firefighter's Code of Ethics," 2000.
American Nurses Association, "Code of Ethics for Nurses with Interpretive Statements," 2015. [Online]. Available: https://www.nursingworld.org/coe-view-only
American Psychological Association, "Ethical Principles of Psychologists and Code of Conduct," 2017. [Online]. Available: https://www.apa.org/ethics/code/
Royal Institute of British Architects, "RIBA Code of Professional Conduct," 2005. [Online].
National Association of Social Workers, "NASW Code of Ethics," 2017. [Online].
E. Wager and S. Kleinert, "Cooperation between research institutions and journals on research integrity cases: guidance from the Committee on Publication Ethics (COPE)," Maturitas, vol. 72, no. 2, pp. 165-169, Jun. 2012.
N. Baumslag, Murderous Medicine: Nazi Doctors, Human Experimentation, and Typhus, Praeger Publishers, 2005. ISBN 9780275983123.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). [Online]. Available: https://eur-lex.europa.eu/eli/reg/2016/679/oj
H. Touvron et al., "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.
E. Beeching, S. Han, N. Lambert, N. Rajani, O. Sanseviero, L. Tunstall, and T. Wolf, "Open LLM Leaderboard," Hugging Face, 2023. [Online].
J. Berengueres and M. Sandell, Introduction to Data Visualization & Storytelling: A Guide for the Data Scientist, ch. 1, pp. 4-5, 2019.
D. Schuurmans, "Memory Augmented Large Language Models are Computationally Universal," arXiv preprint arXiv:2301.04589, 2023.
[30] A. Karpathy, "State of GPT | BRK216HFS," YouTube [video file], 2023. [Online].
G. Suri, L. R. Slater, A. Ziaee, and M. Nguyen, "Do Large Language Models Show Decision Heuristics Similar to Humans? A Case Study Using GPT-3.5," arXiv preprint arXiv:2305.04400, 2023.
E. Seger, A. Ovadya, B. Garfinkel, D. Siddarth, and A. Dafoe, "Democratising AI: Multiple Meanings, Goals, and Methods," arXiv preprint arXiv:2303.12642, 2023.
C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, and S. Zhang, "LIMA: Less is more for alignment," arXiv preprint arXiv:2305.11206, 2023. [Online]. Available: https://arxiv.org/abs/2305.11206
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, and J. Schulman, "Training language models to follow instructions with human feedback," in Advances in Neural Information Processing Systems, vol. 35, pp. 27730-27744, 2022.
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord, "Think you have solved question answering? Try ARC, the AI2 reasoning challenge," arXiv preprint arXiv:1803.05457, 2018.
R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, "HellaSwag: Can a machine really finish your sentence?," arXiv preprint arXiv:1905.07830, 2019.
D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, "Measuring massive multitask language understanding," arXiv preprint arXiv:2009.03300, 2020.
S. Lin, J. Hilton, and O. Evans, "TruthfulQA: Measuring how models mimic human falsehoods," arXiv preprint arXiv:2109.07958, 2021.
R. Bommasani and P. Liang, "Trustworthy Social Bias Measurement," arXiv preprint arXiv:2212.11672, 2022.
M. Lee, M. Srivastava, A. Hardy, J. Thickstun, E. Durmus, A. Paranjape, I. Gerard-Ursin, X. L. Li, F. Ladhak, F. Rong, and R. E. Wang, "Evaluating Human-Language Model Interaction," arXiv preprint arXiv:2212.09746, 2022.
M. Elam, "Poetry Will Not Optimize, or What Is Literature to AI?," American Literature, 2023.
H. Stebbings, "20VC: Why the AI Bubble Will Be Bigger Than The Dot Com Bubble, Why AI Will Have a Bigger Impact Than COVID, Why No Models Used Today Will Be Used in a Year, Why All Models are Biased and How AI Kills Traditional Media with Emad Mostaque, Founder & CEO @ Stability AI," The Twenty Minute VC, 17 May 2023. [Online]. Available: https://www.thetwentyminutevc.com/emad-mostaque/
T. Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.
[40] M. Rother and J. Shook, Learning to See: Value Stream Mapping to Add Value and Eliminate Muda, Lean Enterprise Institute, 2003.
S. Spear and H. K. Bowen, "Decoding the DNA of the Toyota Production System," Harvard Business Review, 1999.
J. Liker, The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, McGraw-Hill, 2004.
R. C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship, Prentice Hall, 2008.
M. Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999.
K. Beck, Test-Driven Development: By Example, Addison-Wesley Professional, 2003.
S. McConnell, Code Complete: A Practical Handbook of Software Construction, Microsoft Press, 2004.
X. Shen, Z. Chen, M. Backes, and Y. Zhang, "In ChatGPT we trust? Measuring and characterizing the reliability of ChatGPT," arXiv preprint arXiv:2304.08979, 2023.
J. Wang, Z. Liu, K. H. Park, M. Chen, and C. Xiao, "Adversarial Demonstration Attacks on Large Language Models," arXiv preprint arXiv:2305.14950, 2023.
R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, E. Brynjolfsson et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258, 2021.
P. Ma, Z. Li, A. Sun, and S. Wang, "'Oops, Did I Just Say That?' Testing and Repairing Unethical Suggestions of Large Language Models with Suggest-Critique-Reflect Process," arXiv preprint arXiv:2305.02626, 2023.
E. Selinger and D. Durant, "Amazon's Ring: surveillance as a slippery slope service," Science as Culture, vol. 31, no. 1, pp. 92-106, 2022.
M. Dorner, M. Capraro, O. Treidler, T. E. Kunz, D. Šmite, E. Zabardast, D. Mendez, and K. Wnuk, "Taxing Collaborative Software Engineering," arXiv preprint arXiv:2304.06539, 2023.
K. Birch and K. Bronson, "Big tech," Science as Culture, vol. 31, no. 1, pp. 1-4, 2022.
K. Mossberger, S. Cho, P. H. Cheong, and D. Kuznetsova, "The public good and public attitudes toward data sharing through IoT," Policy & Internet, 2022.
D. E. Schultz and J. J. Peltier, "Social media's slippery slope: challenges, opportunities and future research directions," Journal of Research in Interactive Marketing, 2013.
S. Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, Oxford University Press, 2018.
D. Rozado, "Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems," 2023.
J. Ruohonen and K. Hjerppe, "The GDPR enforcement fines at glance," Information Systems, vol. 106, p. 101876, 2022.
J. Wolff and N. Atallah, "Early GDPR penalties: Analysis of implementation and fines through May 2020," Journal of Information Policy, vol. 11, pp. 63-103, 2021.
F. Dunaway, Seeing Green: The Use and Abuse of American Environmental Images, University of Chicago Press, Dec. 2019.
K. Oakes, "Why the wrong people are blamed for climate change," BBC Future, May 2023. [Online]. Available: https://www.bbc.com/future/article/20220504-why-the-wrong-people-are-blamed-for-climate-change
R. Solnit, "Big oil coined 'carbon footprints' to blame us for their greed. Keep them on the hook," The Guardian, 23 Aug. 2021. [Online].
G. Monbiot, "Capitalism is killing the planet," The Guardian, 30 Oct. 2021.
"The Crucible of Journalism," PBS, 2023. [Online]. Available: http://www.pbs.org/crucible/frames/_journalism.html
"The Bloomberg Way: Guide for Reporters and Editors now available to the public," Bloomberg, 2023. [Online]. Available: https://www.bloomberg.com/company/press/the-bloomberg-way-guide-for-reporters-and-editors-now-available-to-the-public/
J. R. Barstow and L. Steinberg, "Correcting the record; Times Reporter Who Resigned Leaves Long Trail of Deception," The New York Times, 11 May 2003. [Online]. Available: https://www.nytimes.com/2003/05/11/us/correcting-the-record-times-reporter-who-resigned-leaves-long-trail-of-deception.html
"Avianca Airline Lawsuit ChatGPT," The New York Times, 27 May 2023. [Online].
"AP Stylebook," AP Stylebook Online, 2023. [Online].
B. R. Thomas, Fundamentals of Journalism: Reporting, Writing, and Editing, Spokane, WA: Marquette Books.
Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," 2023. [Online]. Available: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
| [] |
[
"Faster real root decision algorithm for symmetric polynomials",
"Faster real root decision algorithm for symmetric polynomials"
] | [
"George Labahn \nCheriton School of Computer Science\nDepartment of Mathematics and Statistics UiT\nUniversity of Waterloo\nOntarioCanada\n",
"Cordian Riener \nThe Arctic University of Norway\nTromsøNorway\n",
"Mohab Safey \nCheriton School of Computer Science\nSorbonne Université\nCNRS\nLIP6 F-75005ParisFrance\n",
"El Din \nCheriton School of Computer Science\nSorbonne Université\nCNRS\nLIP6 F-75005ParisFrance\n",
"Éric Schost \nDepartment of Mathematics and Statistics UiT\nUniversity of Waterloo\nOntarioCanada\n",
"Thi Xuan Vu \nThe Arctic University of Norway\nTromsøNorway\n"
] | [
"Cheriton School of Computer Science\nDepartment of Mathematics and Statistics UiT\nUniversity of Waterloo\nOntarioCanada",
"The Arctic University of Norway\nTromsøNorway",
"Cheriton School of Computer Science\nSorbonne Université\nCNRS\nLIP6 F-75005ParisFrance",
"Cheriton School of Computer Science\nSorbonne Université\nCNRS\nLIP6 F-75005ParisFrance",
"Department of Mathematics and Statistics UiT\nUniversity of Waterloo\nOntarioCanada",
"The Arctic University of Norway\nTromsøNorway"
] | [] | In this paper, we consider the problem of deciding the existence of real solutions to a system of polynomial equations having real coefficients, and which are invariant under the action of the symmetric group. We construct and analyze a Monte Carlo probabilistic algorithm which solves this problem, under some regularity assumptions on the input, by taking advantage of the symmetry invariance property. The complexity of our algorithm is polynomial in d^s, \binom{n+d}{d}, and \binom{n}{s+1}, where n is the number of variables and d is the maximal degree of the s input polynomials defining the real algebraic set under study. In particular, this complexity is polynomial in n when d and s are fixed, and is equal to n^{O(1)} 2^n when d = n. | 10.1145/3597066.3597097 | [
"https://export.arxiv.org/pdf/2306.03855v1.pdf"
] | 259,089,243 | 2306.03855 | 6ad3221b5c7bf0c93ca71feac7edb9eee65824c2 |
Faster real root decision algorithm for symmetric polynomials
6 Jun 2023
George Labahn
Cheriton School of Computer Science, University of Waterloo, Ontario, Canada

Cordian Riener
Department of Mathematics and Statistics, UiT The Arctic University of Norway, Tromsø, Norway

Mohab Safey El Din
Sorbonne Université, CNRS, LIP6, F-75005 Paris, France

Éric Schost
Cheriton School of Computer Science, University of Waterloo, Ontario, Canada

Thi Xuan Vu
Department of Mathematics and Statistics, UiT The Arctic University of Norway, Tromsø, Norway
Faster real root decision algorithm for symmetric polynomials
6 Jun 2023. DOI: 10.1145/3597066.3597097
In this paper, we consider the problem of deciding the existence of real solutions to a system of polynomial equations having real coefficients, and which are invariant under the action of the symmetric group. We construct and analyze a Monte Carlo probabilistic algorithm which solves this problem, under some regularity assumptions on the input, by taking advantage of the symmetry invariance property. The complexity of our algorithm is polynomial in d^s, \binom{n+d}{d}, and \binom{n}{s+1}, where n is the number of variables and d is the maximal degree of the s input polynomials defining the real algebraic set under study. In particular, this complexity is polynomial in n when d and s are fixed, and is equal to n^{O(1)} 2^n when d = n.

1 INTRODUCTION

Let f = (f_1, . . . , f_s) be polynomials in the multivariate polynomial ring Q[x_1, . . . , x_n] and let V(f) ⊂ C^n be the algebraic set defined by f. We denote by V_R(f) := V(f) ∩ R^n the set of solutions in R^n to the system f. In addition we assume that all f_i's are invariant under the action of the symmetric group S_n, that is, are symmetric polynomials (or equivalently, S_n-invariant polynomials). Under this invariance property, we design an algorithm which, on input f, decides whether V_R(f) is empty or not. As is typical for such problems, we assume that the Jacobian matrix of f with respect to x_1, . . . , x_n has rank s at any point of V(f). In this case the Jacobian criterion [22, Thm 16.19] implies that the complex algebraic set V(f) is smooth and (n − s)-equidimensional (or empty).

Previous work. The real root decision problem for polynomial systems of equations (and more generally systems of inequalities) lies at the foundations of computational real algebraic geometry. Algorithms for solving polynomial systems over the real numbers start with Fourier [29], who provided a first algorithm for solving linear systems of inequalities (rediscovered in 1919 by Dines [21]).
These algorithms are important because they make the first connection with elimination theory. Tarski's theorem [54] states that the projection of a semi-algebraic set on a coordinate subspace is a semi-algebraic set. This theorem, and its algorithmic counterpart which relies on Sturm's theorem for real root counting in the univariate case, enable recursive algorithmic patterns (eliminating variables one after another). The first algorithm with an elementary recursive complexity, Cylindrical Algebraic Decomposition, is due to Collins (see [19] and references in [16,17,24,35,37,38,51,52] for various further improvements).
It turns out that these algorithms run in time doubly exponential in n [13,20]. Note that some variants actually solve the quantifier elimination problem, a much more general and difficult computational problem than the real root decision problem.
Algorithms which solve the real root decision problem in time singly exponential in n and polynomial in the maximum degree of the input were pioneered by Grigoriev and Vorobjov [32] and Renegar [40], and further improved by Canny [15], Heintz, Roy and Solernó [34] and Basu, Pollack and Roy [8]. The method used in this framework is referred to as the critical point method. It reduces the real root decision problem to the computation of finitely many complex critical points of a polynomial map which reaches extrema at each connected component of the semi-algebraic set under study.
The algorithm proposed here for solving the real root decision problem for systems of symmetric polynomial equations also builds on the critical point method. It borrows ideas from probabilistic algorithms which have been designed to obtain sharper complexity estimates (e.g. cubic either in some Bézout bound attached to some critical point system or in some geometric intrinsic degree) and obtain practical performances that reflect the complexity gains [2][3][4][5][6][7]45]. These algorithms make use of geometric resolution or symbolic homotopy techniques to control the complexity of the algebraic elimination step (see e.g. [31,46] and references therein), and of regularity assumptions to easily derive critical point systems from the input polynomials.
Under the Jacobian criterion assumptions, critical points are defined as the intersection of the affine variety V(f) with a determinantal variety derived from a certain Jacobian matrix. The design of dedicated algebraic elimination algorithms for this particular setting has attracted some attention already [1,27,33,47,50].
When adding the symmetry property to polynomials defining the variety and the polynomial map for which one computes the critical points, significant improvements have been achieved recently in [25] by using the symbolic homotopy algorithms in [36].
These improvements, which allow one to obtain complexity gains related to the combinatorial complexity of the symmetric group, also borrow ideas from algebraic algorithms working with data which are invariant under the action of this group [28]. We emphasize that taking advantage of symmetries in data is a topical and difficult issue, which involves a variety of methodologies [14,18,26,39,53].
In [55], Timofte proves a breakthrough result which is now known as the degree principle. It states that a symmetric polynomial of degree d with real coefficients has real zeros if and only if it has a real zero with at most d distinct coordinates.
This shows that when d is fixed and n grows, the real root decision problem can be solved in polynomial time. This is far better than computing at least one sample point per connected component (see also [10][11][12]), and is one of the rare interesting cases where the best known algorithms for these two problems admit different complexities. This is also the starting point of several results which extend the real root decision problem and polynomial optimization under some S_n-invariance property, for classes of problems where d remains fixed and n grows (see [30,41,42,44], and [43] for equivariant systems).
Main contributions. Being able to leverage S_n-invariance for critical point computations is not sufficient to solve real root decision problems more efficiently using the critical point method. Additional techniques are needed.
Indeed, to solve the real root decision problem by finding the critical points of a polynomial map φ, one typically defines φ as the distance from points on the variety to a generic point. This map reaches extrema at each connected component of the semi-algebraic set under study. However, the map φ is not symmetric. If it were, our problem would be solved by the critical point algorithm of [25]. Unfortunately, there does not appear to be an obvious symmetric map that fits the bill.
Instead, our approach is to apply the critical point method on individual S_n-orbits, with a suitable map φ found for each orbit. Thus, while we cannot use the critical point algorithm of [25] directly, we can make use of the various subroutines it relies on to construct a fast decision procedure. Intuitively, working with S_n-orbits is the same as separately searching for real points having distinct coordinates, or real points having two or more coordinates which are the same, or groups of coordinates each of which has equal coordinates, and so on. In each case, an orbit can be described by points having a prescribed number of pairwise distinct coordinates, a key observation in constructing generic maps invariant for each orbit.
Theorem 1.1. Let f = (f_1, . . . , f_s) be symmetric polynomials in Q[x_1, . . . , x_n] having maximal degree d. Assume that the Jacobian matrix of f with respect to x_1, . . . , x_n has rank s at any point of V(f). Then there is a Monte Carlo algorithm Real_emptiness which solves the real root decision problem for f with

Õ( d^{6s+2} n^{11} \binom{n+d}{d}^6 \binom{n}{s+1} ) ⊂ ( d^s \binom{n+d}{d} \binom{n}{s+1} )^{O(1)}

operations in Q.
Here the notation Õ( · ) indicates that polylogarithmic factors are omitted.
The remainder of the paper proceeds as follows. The next section reviews known material on invariant polynomials over products of symmetric groups, the tools we use to work with S_n-orbits, and our data structures. Section 3 discusses our smoothness requirement and shows that it is preserved by alternate representations of invariant polynomials. Section 4 shows how we construct critical point functions along with their critical point sets. This is followed in Section 5 by a description of our algorithm, along with proofs of correctness and complexity. The paper ends with a section on topics for future research.
2 PRELIMINARIES

2.1 Invariant Polynomials

We briefly review some properties of polynomials invariant under the action of S_{k_1} × · · · × S_{k_r}, with S_{k_i} the symmetric group on k_i elements, for all i. In this paragraph, we work with variables z = (z_1, . . . , z_r), with each z_i = (z_{1,i}, . . . , z_{k_i,i}); for all i, the group S_{k_i} permutes the variables z_i. For j ≥ 0, we denote by

e_{j,i} = \sum_{1 ≤ a_1 < a_2 < · · · < a_j ≤ k_i} z_{a_1,i} z_{a_2,i} · · · z_{a_j,i}

the elementary symmetric polynomial in the variables z_i, with each e_{j,i} having degree j, and by p_{j,i} = z_{1,i}^j + · · · + z_{k_i,i}^j the j-th Newton sum in the variables z_i, for i = 1, . . . , r. The following two results are well-known. In particular (Lemma 2.2), any S_{k_1} × · · · × S_{k_r}-invariant polynomial g in Q[z] can be written as g = h(p_{1,1}, . . . , p_{k_r,r}) for some polynomial h; we write P for the corresponding map z ↦ (p_{1,1}(z_1), . . . , p_{k_r,r}(z_r)).
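As a small illustration of rewriting an invariant polynomial in terms of Newton sums (the property recalled above), the following sympy sketch checks one instance of a Newton identity for a single block of k = 3 variables; the names are ours.

    import sympy as sp

    z1, z2, z3 = sp.symbols('z1 z2 z3')   # one block, k = 3
    p1 = z1 + z2 + z3                     # Newton sums p_1 and p_2
    p2 = z1**2 + z2**2 + z3**2
    # The invariant g = e_2 = z1*z2 + z1*z3 + z2*z3 rewrites as h(p_1, p_2):
    g = z1*z2 + z1*z3 + z2*z3
    h = (p1**2 - p2) / 2                  # Newton's identity: 2 e_2 = p_1^2 - p_2
    assert sp.expand(g - h) == 0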
2.2 Describing S_n-orbits via Partitions

S_n-orbits are subsets of C^n that play a central role in our algorithm. In this section, we review notation and the description of S_n-orbits, along with the form of the output used in [25].

A simple way to parameterize S_n-orbits is through the use of partitions of n. A sequence λ = (n_1^{k_1} . . . n_r^{k_r}), where n_1 < · · · < n_r and the n_i's and k_i's are positive integers, is called a partition of n if n_1 k_1 + · · · + n_r k_r = n. The length of the partition λ is defined as ℓ := k_1 + · · · + k_r.

For a partition λ = (n_1^{k_1} . . . n_r^{k_r}) of n, we use the notation from [25, Section 2.3] and let W_λ denote the set of all points in C^n that can be written as

x = ( u_{1,1}, . . . , u_{1,1}, . . . , u_{k_1,1}, . . . , u_{k_1,1}, . . . , u_{1,r}, . . . , u_{1,r}, . . . , u_{k_r,r}, . . . , u_{k_r,r} ),    (1)

where each u_{j,i} is repeated n_i times.

For any point in C^n, we define its type as the unique partition λ of n such that there exists σ ∈ S_n such that σ(x) ∈ W_λ, with the u_{j,i}'s in (1) pairwise distinct. Points of a given type λ = (n_1^{k_1} . . . n_r^{k_r}) are stabilized by the action of S_λ := S_{k_1} × · · · × S_{k_r}, the cartesian product of the symmetric groups S_{k_i}.

For a partition λ as above, we can then define a mapping E_λ : W_λ → C^ℓ which sends a point x as in (1) to the values of the elementary symmetric polynomials at its distinct block coordinates, E_λ(x) = ( e_{j,i}(u_{1,i}, . . . , u_{k_i,i}) )_{1 ≤ j ≤ k_i, 1 ≤ i ≤ r}. Furthermore, the map E_λ is onto: for any c = (c_{1,1}, . . . , c_{k_r,r}) ∈ C^ℓ, we define polynomials ρ_1(T), . . . , ρ_r(T) by

ρ_i(T) = T^{k_i} − c_{1,i} T^{k_i−1} + · · · + (−1)^{k_i} c_{k_i,i}.

We can then find a point x ∈ C^n in the preimage E_λ^{−1}(c) by finding the roots u_{1,i}, . . . , u_{k_i,i} of ρ_i(T).
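The preimage computation described above is effectively univariate root-finding, block by block. The following Python sketch (our own helper names; numpy's numerical roots stand in for exact arithmetic) recovers a point of W_λ from c ∈ C^ℓ for λ = (2^2 3^1).

    import numpy as np

    def preimage(c_blocks, block_sizes):
        # c_blocks[i] = (c_{1,i}, ..., c_{k_i,i}); block_sizes[i] = n_i.
        point = []
        for c, n_i in zip(c_blocks, block_sizes):
            k_i = len(c)
            # rho_i(T) = T^{k_i} - c_{1,i} T^{k_i-1} + ... + (-1)^{k_i} c_{k_i,i}
            coeffs = [1.0] + [(-1) ** (j + 1) * c[j] for j in range(k_i)]
            for root in np.roots(coeffs):
                point.extend([root] * n_i)   # each root is repeated n_i times
        return point

    # lambda = (2^2 3^1): u_{1,1}, u_{2,1} are the roots of T^2 - 3T + 2 = (T-1)(T-2),
    # each repeated twice; u_{1,2} is the root of T - 3, repeated three times.
    print(preimage([(3.0, 2.0), (3.0,)], [2, 3]))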
2.3 Zero-Dimensional Parametrizations

The subroutines we use from [25] give their output in terms of zero-dimensional parametrizations, which are defined as follows.

Let Y ⊂ C^ℓ be a variety of dimension zero, defined over Q. A zero-dimensional parametrization R = ((q, q_1, . . . , q_ℓ), μ) of Y consists of
(i) a squarefree polynomial q in Q[T], where T is a new indeterminate, and deg(q) = |Y|;
(ii) polynomials q_1, . . . , q_ℓ in Q[T] such that deg(q_i) < deg(q) for all i and

Y = { ( q_1(τ)/q′(τ), . . . , q_ℓ(τ)/q′(τ) ) ∈ C^ℓ : q(τ) = 0 };

(iii) a linear form μ in ℓ variables such that μ(q_1, . . . , q_ℓ) = T q′ mod q.
When these conditions hold, we write Y = Z(R). Representing the points of Y by means of rational functions with q′ as denominator is not necessary, but allows for a sharp control of the bit-size of the output.
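A zero-dimensional parametrization is easy to evaluate numerically. The sketch below (our own encoding: numpy coefficient arrays, highest degree first) recovers the points of Z(R) from (q, q_1, . . . , q_ℓ).

    import numpy as np

    def points_from_parametrization(q, qs):
        # Evaluate q_i(tau)/q'(tau) at every root tau of q.
        dq = np.polyder(q)   # q'
        return [
            tuple(np.polyval(qi, tau) / np.polyval(dq, tau) for qi in qs)
            for tau in np.roots(q)
        ]

    # Example with q = T^2 - 1 (roots +1 and -1), q' = 2T, q_1 = 2 and q_2 = 2T:
    # the encoded set is {(1, 1), (-1, 1)}.
    print(points_from_parametrization(np.array([1.0, 0.0, -1.0]),
                                      [np.array([2.0]), np.array([2.0, 0.0])]))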
3 PRESERVING SMOOTHNESS

In our main algorithm, we assume that our input system f = (f_1, . . . , f_s) satisfies the following smoothness condition:

(A): the Jacobian matrix of f has rank s at any point of V(f).

In this section, we discuss consequences of this assumption for symmetric polynomials. For a partition λ of n, let T_λ denote the substitution operator that maps each variable x_a, for a in the index set I_{j,i} defined below, to z_{j,i}; the operator T_λ extends to vectors of polynomials and polynomial matrices entry-wise. The key observation here is that if f is symmetric, then its image through T_λ is S_{k_1} × · · · × S_{k_r}-invariant.
Fix a partition λ = (n_1^{k_1} . . . n_r^{k_r}) of n, and let ℓ be its length. Set

I_{j,i} := { c_{j,i} + 1, . . . , c_{j,i} + n_i },   1 ≤ i ≤ r,  1 ≤ j ≤ k_i,

with c_{j,i} := \sum_{i′=1}^{i−1} n_{i′} k_{i′} + (j − 1) n_i. Variables x_a, for a in I_{j,i}, are precisely those that map to z_{j,i} under T_λ. Define further the matrix M_λ ∈ Q^{ℓ×n}, with ℓ = k_1 + · · · + k_r, where rows are indexed by pairs (j, i) as above and columns by a ∈ {1, . . . , n}. For all such (j, i), the entry of row index (j, i) and column index a ∈ I_{j,i} is set to 1/n_i, all others are zero. In other words, M_λ = diag(M_1, . . . , M_r), where

M_i = (1/n_i) ·
[ 1 · · · 1   0 · · · 0   · · ·   0 · · · 0 ]
[ 0 · · · 0   1 · · · 1   · · ·   0 · · · 0 ]
[                   . . .                   ]
[ 0 · · · 0   0 · · · 0   · · ·   1 · · · 1 ]

is a matrix in Q^{k_i × n_i k_i}, each row containing n_i nonzero entries.

Example 3.1. Consider the partition λ = (2^2 3^1) of n = 7. Then n_1 = 2, k_1 = 2, n_2 = 3, k_2 = 1 and the length of λ is 3. In this case,

M_λ =
[ 1/2  1/2   0    0    0    0    0  ]
[  0    0   1/2  1/2   0    0    0  ]
[  0    0    0    0   1/3  1/3  1/3 ].

Lemma 3.2. Let f = (f_1, . . . , f_s) ⊂ Q[x_1, . . . , x_n] be a sequence of symmetric polynomials, and let λ be a partition of n. Then T_λ(Jac_{x_1,...,x_n}(f)) = Jac_{z_1,...,z_r}(T_λ(f)) · M_λ, where M_λ is the matrix defined above.
Proof. For any polynomial f in Q[x_1, . . . , x_n], applying the operator T_λ on ∂f/∂x_a evaluates ∂f/∂x_a at x_a = z_{j,i}, for 1 ≤ i ≤ r, 1 ≤ j ≤ k_i and a in I_{j,i}. By the multivariable chain rule,

∂T_λ(f)/∂z_{j,i} = \sum_{a ∈ I_{j,i}} T_λ( ∂f/∂x_a ).

If f is symmetric, for a, a′ in I_{j,i}, we then have

T_λ( ∂f/∂x_a ) = T_λ( ∂f/∂x_{a′} ),

so that, for a in I_{j,i},

T_λ( ∂f/∂x_a ) = (1/n_i) · ∂T_λ(f)/∂z_{j,i}.

This argument can be extended to a sequence of polynomials to obtain our claim. □
Example 3.3. We continue Example 3.1 with a single S_7-invariant polynomial f = \sum_{1 ≤ a ≤ b ≤ 7} x_a x_b. Then

T_λ(f) = 3 z_{1,1}^2 + 3 z_{2,1}^2 + 6 z_{1,2}^2 + 6 z_{1,1} z_{1,2} + 4 z_{1,1} z_{2,1} + 6 z_{1,2} z_{2,1},

and so

Jac(T_λ(f)) = ( 6 z_{1,1} + 6 z_{1,2} + 4 z_{2,1},  4 z_{1,1} + 6 z_{1,2} + 6 z_{2,1},  6 z_{1,1} + 12 z_{1,2} + 6 z_{2,1} ).

This implies that Jac(T_λ(f)) · M_λ is equal to (A, A, B, B, C, C, C), with

A = 3 z_{1,1} + 3 z_{1,2} + 2 z_{2,1},  B = 2 z_{1,1} + 3 z_{1,2} + 3 z_{2,1},  C = 2 z_{1,1} + 4 z_{1,2} + 2 z_{2,1}.

This is precisely T_λ(Jac(f)).

Lemma 3.4. Let f = (f_1, . . . , f_s) be a sequence of symmetric polynomials in Q[x_1, . . . , x_n] that satisfies condition (A), and let λ be a partition of n. Then T_λ(f) also satisfies condition (A), that is, Jac_{z_1,...,z_r}(T_λ(f)) has rank s at any point of V(T_λ(f)).

Proof. Let z be a point of V(T_λ(f)) and let x be the corresponding point of V(f) obtained by repeating each coordinate z_{j,i} of z exactly n_i times. If v lies in the left kernel of Jac_{z_1,...,z_r}(T_λ(f))(z), then, by Lemma 3.2, v · T_λ(Jac(f))(z) = v · Jac_{z_1,...,z_r}(T_λ(f))(z) · M_λ = 0, and T_λ(Jac(f))(z) = Jac(f)(x). Since Jac(f)(x) has rank s (by condition (A)), the left kernel of Jac(f)(x) is trivial. It follows that the left kernel of Jac_{z_1,...,z_r}(T_λ(f))(z) is also trivial. □
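Lemma 3.2 can be checked mechanically on Example 3.3. The following sympy sketch (variable and matrix names are ours) verifies the identity T_λ(Jac(f)) = Jac(T_λ(f)) · M_λ for λ = (2^2 3^1).

    import sympy as sp

    z11, z21, z12 = sp.symbols('z11 z21 z12')
    x = sp.symbols('x1:8')   # x_1, ..., x_7
    f = sum(x[a] * x[b] for a in range(7) for b in range(a, 7))
    # T_lambda for lambda = (2^2 3^1): x1,x2 -> z11; x3,x4 -> z21; x5,x6,x7 -> z12
    T = {x[0]: z11, x[1]: z11, x[2]: z21, x[3]: z21,
         x[4]: z12, x[5]: z12, x[6]: z12}
    Tf = sp.expand(f.subs(T))
    lhs = sp.Matrix([[sp.diff(f, xi).subs(T) for xi in x]])       # T_lambda(Jac(f))
    jac = sp.Matrix([[sp.diff(Tf, v) for v in (z11, z21, z12)]])  # Jac(T_lambda(f))
    M = sp.Matrix([[sp.Rational(1, 2)] * 2 + [0] * 5,
                   [0] * 2 + [sp.Rational(1, 2)] * 2 + [0] * 3,
                   [0] * 4 + [sp.Rational(1, 3)] * 3])
    assert sp.expand(lhs - jac * M) == sp.zeros(1, 7)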
When we represent S_{k_1} × · · · × S_{k_r}-invariant functions in terms of Newton sums, we can show that the new representation also preserves condition (A).

Lemma 3.6. Let g = (g_1, . . . , g_s) be S_{k_1} × · · · × S_{k_r}-invariant polynomials in Q[z] satisfying condition (A), and let h = (h_1, . . . , h_s) be such that g_i = h_i(p_{1,1}, . . . , p_{k_r,r}) for all i, as in Lemma 2.2. Then h satisfies condition (A).

Proof. The Jacobian matrix Jac(g) of g = (g_1, . . . , g_s) factors as

Jac(g) = Jac(h)(P) · V,  where V = diag(V_1, . . . , V_r),

with each V_i a row-scaled Vandermonde matrix given by

V_i = diag(1, 2, . . . , k_i) ·
[ 1                 1                 · · ·   1                 ]
[ z_{1,i}           z_{2,i}           · · ·   z_{k_i,i}          ]
[                         . . .                                  ]
[ z_{1,i}^{k_i−1}   z_{2,i}^{k_i−1}   · · ·   z_{k_i,i}^{k_i−1}  ].    (4)

Let c be a point in the vanishing set of (h_1, . . . , h_s) and let z be in P^{−1}(c). If Jac(h) is rank deficient at c, then Jac(h)(P)(z) is also rank deficient. This implies that the rank of Jac(g)(z), which is bounded above by those of Jac(h)(P)(z) and V(z), is deficient, contradicting condition (A) for g. □

Similarly, instead of using a row-scaled Vandermonde matrix as in (4), we can use as V_i the Jacobian matrix of the elementary symmetric functions in z_i. This gives a similar result, but for polynomials written in terms of e_{1,1}, . . . , e_{k_r,r}.
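The factorization Jac(g) = Jac(h)(P) · V used in the proof can also be checked on a toy instance. In the sympy sketch below (our own example with one block of k = 2 variables), g = p_1 · p_2, so that h(y_1, y_2) = y_1 y_2.

    import sympy as sp

    z1, z2, y1, y2 = sp.symbols('z1 z2 y1 y2')
    p1, p2 = z1 + z2, z1**2 + z2**2               # Newton sums of the block
    h = y1 * y2
    g = h.subs({y1: p1, y2: p2})                  # g = h(p_1, p_2)
    jac_g = sp.Matrix([[sp.diff(g, z) for z in (z1, z2)]])
    jac_h_at_P = sp.Matrix([[sp.diff(h, y).subs({y1: p1, y2: p2})
                             for y in (y1, y2)]])
    V = sp.Matrix([[1, 1], [2 * z1, 2 * z2]])     # row-scaled Vandermonde, as in (4)
    assert sp.expand(jac_g - jac_h_at_P * V) == sp.zeros(1, 2)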
4 CRITICAL LOCI

If W ⊂ C^ℓ is an equidimensional algebraic set, and φ a polynomial function defined on W, a non-singular point w ∈ W is called a critical point of φ on W if the gradient of φ at w is normal to the tangent space T_w W of W at w. If g = (g_1, . . . , g_s) are generators of the ideal associated to W, then T_w W is the right kernel of the Jacobian matrix Jac(g) of g evaluated at w. In the cases we will consider, this matrix will have rank s at all points of W (that is, g satisfies condition (A)). The set of critical points of the restriction of φ to W is then defined by the vanishing of g, and of the (s + 1)-minors of the Jacobian matrix Jac(g, φ) of g and φ.

Let λ = (n_1^{k_1} . . . n_r^{k_r}) be a partition of n; we write ℓ = k_1 + · · · + k_r. We introduce some useful S_{k_1} × · · · × S_{k_r}-invariant mappings and discuss the properties of their critical points on V(g) ⊂ C^ℓ.
4.1 Finiteness through genericity

For 1 ≤ i ≤ r, let a_i = (a_{1,i}, . . . , a_{k_i,i}) be new indeterminates, and recall that p_{j,i} is the j-th Newton sum for the variables z_i. Set

φ = \sum_{i=1}^{r} p_{d+ε,i} + \sum_{i=1}^{r} \sum_{j=1}^{k_i} a_{j,i} p_{j,i},    (5)

where ε = 1 if d is odd and ε = 0 if d is even. So φ has even degree and is invariant under the action of S_{k_1} × · · · × S_{k_r}. We denote by U ⊂ C^ℓ the open set of points z such that, for each i, the coordinates z_{1,i}, . . . , z_{k_i,i} are pairwise distinct. For a = (a_1, . . . , a_r) in C^{k_1} × · · · × C^{k_r}, with each a_i in C^{k_i}, we denote by φ_a the polynomial in C[z_1, . . . , z_r] obtained by evaluating the indeterminates a_{j,i} at the corresponding coordinates of a, for all i, j.

Proposition 4.1. There exists a non-empty Zariski open set A ⊂ C^{k_1} × · · · × C^{k_r} such that for a ∈ A, the restriction of φ_a to V(g) has finitely many critical points in U.
4.2 Proof of Proposition 4.1

For new variables L_1, . . . , L_s, we denote by S(a) the polynomials

S(a) = ( g_1, . . . , g_s,  [L_1 · · · L_s 1] · Jac(g, φ_a) ).

Lemma 4.2. For a ∈ C^{k_1} × · · · × C^{k_r}, π_z(V(S(a))) is the critical locus of the restriction of the map φ_a to V(g), where π_z denotes the projection on the z-space.

Proof. For any a ∈ C^{k_1} × · · · × C^{k_r}, we denote by C(φ_a, g) the set of critical points of the restriction of φ_a to V(g). Since g satisfies condition (A), the set C(φ_a, g) is given by

{ z | g_1(z) = · · · = g_s(z) = 0,  rank(Jac(g, φ_a)(z)) ≤ s }.

Consider z in C(φ_a, g) and a nonzero vector v in the left kernel of Jac(g, φ_a)(z), of the form v = (v_1, . . . , v_s, v_{s+1}). The last coordinate v_{s+1} cannot vanish, as otherwise (v_1, . . . , v_s) would be a nonzero vector in the left kernel of Jac(g)(z) (which is ruled out by condition (A)). Dividing through by v_{s+1}, the point (v′, z), with v′_i = v_i / v_{s+1} for i = 1, . . . , s, is a solution of S(a). Conversely, take (l, z) in V(S(a)). Then z cancels g, and Jac(g, φ_a) has rank less than s + 1 at z, so that π_z(V(S(a))) is contained in C(φ_a, g). □
Let φ and P be defined as in (5) and Lemma 2.2, respectively. For i = 1, . . . , r, let q_i be the polynomial such that p_{d+ε,i} = q_i(p_{1,i}, . . . , p_{k_i,i}), and let h_1, . . . , h_s be such that g_i = h_i ∘ P for i = 1, . . . , s. In particular, Lemma 2.2 implies that φ is given by

\sum_{i=1}^{r} q_i + \sum_{i=1}^{r} \sum_{j=1}^{k_i} a_{j,i} c_{j,i},

as a polynomial in new variables c_{j,i} standing for the Newton sums p_{j,i}. The sequence S(a) can then be rewritten as

( h_1 ∘ P, . . . , h_s ∘ P,  [L_1 · · · L_s 1] · N(P) · V ),  with

N =
[ ∂h_1/∂c_{1,1}             · · ·   ∂h_1/∂c_{k_r,r}             ]
[                         . . .                                  ]
[ ∂h_s/∂c_{1,1}             · · ·   ∂h_s/∂c_{k_r,r}             ]
[ ∂q_1/∂c_{1,1} + a_{1,1}   · · ·   ∂q_r/∂c_{k_r,r} + a_{k_r,r} ],

where V is a multi-row-scaled Vandermonde matrix which is the Jacobian matrix of P with respect to z. This matrix has full rank at any point in the open set U defined in Subsection 4.1.
In particular, for any ∈ C 1 × · · · × C , the intersection of (S ) with C × U is contained in the preimage by the map Id × of the vanishing set of the sequence : ℎ 1 , . . . , ℎ ,
[ 1 · · · 1] ℎ 1 1,1 · · · ℎ 1 , . . . . . . ℎ 1,1 · · · ℎ , 1 1 1,1 + 1,1 · · · , + , .
Since for all 1 ≤ ≤ , defines a map with finite fibers (by Newton identities and Vieta's formula, the preimage by of some point is the set of roots of some polynomial of degree ), we deduce that and consequently Id × define maps with finite fibers. Thus It remains to investigate finiteness properties of ( ). . , ] is a radical ideal whose zero-set is finite.
Proof. Let H ⊂ C^{ℓ_1} × · · · × C^{ℓ_r} be the vanishing set of (h_1, . . . , h_s). Using techniques from [23], one could give a simple exponential upper bound on the degree of a hypersurface containing the complement of A.
Finding extrema using proper maps
Let f = f_m(x_1, . . . , x_n) + f_{m−1}(x_1, . . . , x_n) + · · · + f_0(x_1, . . . , x_n) : R^n → R be a real polynomial, where f_j is the homogeneous component of degree j of f. Assume further that the leading form f_m of f is positive definite; then f is proper. In particular, the map
$$p_{2m} + \sum_{j=0}^{2m-1} c_j\, p_j,$$
with the p_j the Newton sums in x_1, . . . , x_n and all c_j in Q, is proper. We can extend this to blocks of variables.
Lemma 4.5. Let z_1, . . . , z_r be blocks of ℓ_1, . . . , ℓ_r variables, respectively. If p_{k,i} := z_{1,i}^k + · · · + z_{ℓ_i,i}^k, then for any m_1, . . . , m_r ≥ 1 and coefficients c_{j,i} in Q, the map
$$\sum_{i=1}^{r} p_{2m_i,\,i} + \sum_{i=1}^{r}\sum_{j=0}^{2m_i-1} c_{j,i}\, p_{j,i}$$
is proper.
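A quick numerical sanity check of this properness claim in a tiny case, using numpy: along points pushed out to larger and larger radii, the even leading Newton sum p_4 dominates the lower-degree terms, so the map grows without bound. The coefficients and block size here are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.uniform(-2, 2, size=4)             # arbitrary coefficients c_0, ..., c_3

def proper_map(z):
    p = [np.sum(z**k) for k in range(5)]   # Newton sums p_0, ..., p_4
    return p[4] + sum(c[k] * p[k] for k in range(4))

for radius in [1e1, 1e2, 1e3]:
    z = rng.uniform(-1, 1, size=3)
    z *= radius / np.linalg.norm(z)        # place the point at norm = radius
    print(radius, proper_map(z))           # values grow with the radius
```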
MAIN RESULT
Let f = (f_1, . . . , f_s) be a sequence of symmetric polynomials in Q[x_1, . . . , x_n] that satisfies condition (A). In this section we present an algorithm, and its complexity, to decide whether the real locus of V(f) is empty or not.
To exploit the symmetry of f and to decide whether the set V(f) ∩ R^n is empty or not, our main idea is to slice the variety V(f) with hyperplanes encoded by a partition λ of n. In this way we obtain a new polynomial system which is invariant under the action S_λ := S_{ℓ_1} × · · · × S_{ℓ_r} of symmetric groups. We proved in Lemma 3.4 that this new system also satisfies condition (A). We then use the critical point method to decide whether the real locus of the algebraic variety defined by this new system is empty or not, taking an S_λ-invariant map as defined in the previous section.
Critical points along S_λ-orbits
Let g = (g_1, . . . , g_s) be a sequence of S_λ-invariant polynomials and φ be an S_λ-invariant map in Q[z_1, . . . , z_r], with z_i = (z_{1,i}, . . . , z_{ℓ_i,i}) for all i. As before, we set ℓ = ℓ_1 + · · · + ℓ_r, and we assume that s ≤ ℓ. Assume further that the sequence g satisfies condition (A).
Lemma 5.1. Let g, φ and λ be as above. Assume further that φ has finitely many critical points on V(g). Then there exists a randomized algorithm Critical_points(g, φ, λ) which returns a zero-dimensional parametrization of the critical points of φ restricted to V(g). The algorithm uses
$$\tilde{O}\big(c^2 (C + c^5)\, s^4\, \Gamma\big)$$
operations in Q, where
$$c = \deg(g_1)\cdots\deg(g_s)\cdot\frac{e_{\ell-s}(\delta-1, \ldots, \delta-\ell)}{\ell_1!\cdots\ell_r!}, \qquad \Gamma = s^2 + s\ell + s^4 + 1,$$
and
$$C = (\deg(g_1)+1)\cdots(\deg(g_s)+1)\cdot\frac{e_{\ell-s}(\delta, \ldots, \delta-\ell+1)}{\ell_1!\cdots\ell_r!},$$
with δ = max(deg(g), deg(φ)). The number of solutions is at most c.
Proof. The Critical_points procedure consists of two steps: first finding ĝ and φ̂ from g and φ, and then computing a representation for the set W(ĝ, φ̂) of critical points of φ̂ on V(ĝ). The first step can be done using the algorithm Symmetric_Coordinates from [25, Lemma 9], which uses $\tilde{O}\big(\binom{\ell+\delta}{\delta}^2\big)$ operations in Q.
Since the sequence g satisfies condition (A), Lemma 3.6 implies that ĝ also satisfies condition (A). Then, the set W(ĝ, φ̂) is the zero set of ĝ and all the (s + 1)-minors of Jac(ĝ, φ̂). In particular, when ℓ = s, W(ĝ, φ̂) = V(ĝ).
Since each e_{j,i} has degree j, it is natural to assign a weight j to the variable e_{j,i}, so that the polynomial ring Q[e_1, . . . , e_r] is weighted with weights (1, . . . , ℓ_1, . . . , 1, . . . , ℓ_r). The weighted degrees of ĝ and φ̂ are then equal to the degrees of g and φ, respectively. To compute a zero-dimensional parametrization for W(ĝ, φ̂) we use the symbolic homotopy method for weighted domains given in [36, Thm 5.3] (see also [25, Sec 5.2] for a detailed complexity analysis). This procedure is randomized and requires $\tilde{O}(c^2(C + c^5)s^4\Gamma)$ operations in Q.
Furthermore, results from [36, Thm 5.3] also imply that the number of points in the output is at most c.
Thus, the total complexity of the Critical_points algorithm is $\tilde{O}(c^2(C + c^5)s^4\Gamma)$ operations in Q. Recall the map
$$E: z \mapsto \big(e_{1,i}(z_{1,i}, \ldots, z_{\ell_i,i}), \ \ldots, \ e_{\ell_i,i}(z_{1,i}, \ldots, z_{\ell_i,i})\big)_{1\le i\le r}, \tag{6}$$
where e_{j,i}(z_{1,i}, . . . , z_{ℓ_i,i}) is the j-th elementary symmetric function in z_{1,i}, . . . , z_{ℓ_i,i}, for i = 1, . . . , r and j = 1, . . . , ℓ_i.
The Decide procedure
Let Z be the preimage of W by E. In this subsection we present a procedure called Decide(R) which takes as input R and decides whether the set Z contains real points. A straightforward strategy would be to solve the polynomial system in order to invert the map E. Because of the group action of S_{ℓ_1} × · · · × S_{ℓ_r}, we would then obtain ℓ_1! · · · ℓ_r! points in the preimage of a single point in W: we would lose the benefit of all that had been done before.
This difficulty can be bypassed by encoding one single point per orbit in the preimage of the points in W. This can be done via the following steps.
(i) Group together the variables e_i = (e_{1,i}, . . . , e_{ℓ_i,i}), which encode the values taken by the elementary symmetric functions e_{1,i}, . . . , e_{ℓ_i,i} (see Sec. 2.2), and denote by v_{1,i}, . . . , v_{ℓ_i,i} the parametrizations corresponding to e_{1,i}, . . . , e_{ℓ_i,i};
(ii) Make a reduction to a bivariate polynomial system by considering the polynomial with coefficients in Q[y]
$$f_i = q'\,T^{\ell_i} - v_{1,i}\,T^{\ell_i-1} + \cdots + (-1)^{\ell_i}\, v_{\ell_i,i} \in \mathbf{Q}[y][T]$$
and "solving" the system q = f_i = 0. Here we recall that q ∈ Q[y] is square-free, so that q and q′ are coprime.
(iii) It remains to decide whether, for all 1 ≤ i ≤ r, there is a real root τ of q such that, when replacing y by τ in f_i, the resulting polynomial has all its roots real. To do this we proceed by performing the following steps for 1 ≤ i ≤ r: (1) first we compute the Sturm-Habicht sequence associated to f_i and its derivative ∂f_i/∂T in Q[y] (the Sturm-Habicht sequence is a signed subresultant sequence, see [9, Chap. 9, Algo. 8.21]);
(2) next, we compute the Thom encodings of the real roots of q, which is a way to uniquely determine the roots of a univariate polynomial with real coefficients by means of the signs of its derivatives at the considered real root (see e.g. [9, Chap. 10, Algo. 10.14]); (3) finally, for each real root τ of q, we evaluate the signed subresultant sequence at τ [9, Chap. 10, Algo. 10.15] and compute the associated Cauchy index to deduce the number of real roots of f_i(τ, ·) (see [9, Cor. 9.5]). (iv) For a given real root τ of q, the corresponding point lifts to a real point of Z if and only if, for all 1 ≤ i ≤ r, the number of real roots of f_i(τ, ·) equals its degree; Z is non-empty if and only if such a real root exists.
The above steps describe our Decide procedure, which returns false if Z contains real points, and true otherwise. Here E_i = (e_{1,i}, . . . , e_{ℓ_i,i}) denotes the vector of elementary symmetric polynomials in the variables z_i. In the next step, we compute a zero-dimensional parametrization R of the critical set W := W(φ_u, f^λ) of φ_u restricted to V(f^λ), using the Critical_points algorithm from Lemma 5.1. The parametrization R is given by a sequence of polynomials (q, v_{1,1}, . . . , v_{ℓ_1,1}, . . . , v_{1,r}, . . . , v_{ℓ_r,r}) in Q[y] and a linear form μ.
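The core test of step (iii) can be sketched with sympy's real-root counting standing in for the signed subresultant machinery of [9]: for each real root τ of q, count the real roots of f_1(τ, T) and compare with its degree. The data (q, v1, v2) below is a toy stand-in for the output of a zero-dimensional parametrization, not an output of the algorithm.

```python
import sympy as sp

y, T = sp.symbols("y T")
q = y**2 - 3*y + 2                       # square-free, real roots 1 and 2
v1, v2 = y, y - 3                        # toy coordinate parametrizations

f1 = sp.diff(q, y)*T**2 - v1*T + v2      # the bivariate polynomial of step (ii)

real_point_exists = False
for tau in sp.real_roots(q):
    spec = sp.Poly(f1.subs(y, tau), T)
    if spec.count_roots() == spec.degree():  # all roots of f_1(tau, .) real?
        real_point_exists = True
print(real_point_exists)  # True: at tau = 2, T**2 - 2*T - 1 has two real roots
```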
The main algorithm
At the final step, we run Decide(R) in order to determine whether the preimage of W by the map E contains real points. For each partition λ of n of length at least s, the algorithm thus proceeds as follows: (a) compute f^λ = T_λ(f); (b) using a chosen u ∈ A_λ, construct φ_u as in (5); (c) compute R = Critical_points(f^λ, φ_u); (d) run Decide(R); (e) if Decide(R) is false, return false. If no partition produces a real point, (2) return true.
Proposition 5.2.
Assume that, on input f symmetric as above and satisfying condition (A), for all partitions λ of length at least s, u is chosen in A_λ, and that all calls to the randomized algorithm Critical_points return the correct result. Then Algorithm Real_emptiness returns true if V(f) ∩ R^n is empty, and otherwise it returns false.
Proof. Since f satisfies condition (A), Lemma 3.4 implies that f^λ also satisfies this condition. Then, by the Jacobian criterion [22, Thm 16.19], V(f^λ) is smooth and equidimensional of dimension ℓ − s, where ℓ is the length of λ. Therefore, if ℓ < s, then the algebraic set V(f^λ) is empty. Thus, the union of the sets V(f^λ) ∩ U_λ, where U_λ is the open set defined in Subsection 4.1 and λ runs over the partitions of n of length at least s, forms a partition of V(f). Hence, V(f) ∩ R^n is non-empty if and only if there exists at least one such partition λ for which V(f^λ) ∩ U_λ ∩ R^ℓ is non-empty.
We already observed that, for all λ, f^λ satisfies condition (A). Since we have assumed that each time Step 1b is performed, u is chosen in A_λ, we can apply Proposition 4.1 to deduce that the conditions of Lemma 5.1 are satisfied. Hence, all calls to Critical_points are valid.
Since we assume that all these calls return the correct result, their outputs encode points which all lie in V(f^λ). Hence, if V(f) ∩ R^n is empty, applying the routine Decide on these outputs will always return true and, all in all, our algorithm returns true when V(f) ∩ R^n is empty.
It remains to prove that it returns false when V(f) ∩ R^n is non-empty. Note that there is then a partition λ such that V(f^λ) ∩ R^ℓ is non-empty and has an empty intersection with the complement of U_λ; that is, all connected components of V(f^λ) ∩ R^ℓ are contained in U_λ.
Let C be such a connected component. By Lemma 4.5, the map φ_u is proper. Hence, its restriction to V(f^λ) ∩ R^ℓ reaches an extremum on each connected component of V(f^λ) ∩ R^ℓ. This implies that the restriction of φ_u to V(f^λ) has real critical points contained in C (and by Proposition 4.1 there are finitely many). Those critical points are then encoded by the output of the call to Critical_points (Step 1c), and false is returned.
Complexity analysis
Let d = max(deg(f_i)). First, for a partition λ, applying T_λ to f takes time linear in the number of monomials of f, and the cost of Step 1b is negligible. At the core of the algorithm, computing R at Step 1c requires $\tilde{O}(c^2(C + c^5)s^4\Gamma)$ operations in Q by Lemma 5.1, where now δ = max(d, deg(φ_u)). Also, the degree of R is at most c.
In order to determine the cost of the Decide process at Step 1d, let σ be the degree of q and ρ be the maximum of the partial degrees of the f_i with respect to T. By the complexity analysis of [9, Algo. 8.21; Sec. 8.3.6], Step (1) above is performed within $\tilde{O}(\rho^4\sigma)$ arithmetic operations in Q, using a classical evaluation-interpolation scheme (the polynomials to interpolate all have degree at most 2ρσ). Step (2) requires $\tilde{O}(\sigma^4\log\sigma)$ arithmetic operations in Q (see the complexity analysis of [9, Algo. 10.14; Sec. 10.4]).
Finally, in Step (3), we evaluate the signs of polynomials of degree at most 2ρσ at the real roots of q (of degree σ) whose Thom encodings were just computed. This is performed using $\tilde{O}(\sigma^3\rho(\log\sigma + \rho))$ arithmetic operations in Q, following the complexity analysis of [9, Algo. 10.15; Sec. 10.4]. The sum of these estimates lies in $\tilde{O}(\rho^4\sigma + \sigma^4\rho(\log\sigma + \rho))$. Now, recall that the degree σ of q is the degree of R, so σ ≤ c. The degree of f_i with respect to T equals ℓ_i, and ℓ_i ≤ n, so ρ ≤ n. All in all, we deduce that the total cost of this final step lies in $\tilde{O}(n^4 c + n^2 c^4)$, which is negligible compared to the previous costs.
In the worst case, one needs to consider all the partitions of n of length at least s. Thus the total complexity of Real_emptiness is $\sum_{\lambda:\,\ell \ge s} \tilde{O}\big(c_\lambda^2 (C_\lambda + c_\lambda^5)\, s^4\, \Gamma\big)$ operations in Q. In addition, Lemma 34 in [25] implies further that $c_\lambda + C_\lambda \le (\delta+1)^s \binom{n+s-1}{s-1}$, and Γ ≤ n^5 for n ≥ 2. Since deg(φ_u) ≤ max(ℓ_i) + 1 ≤ n, the total cost of our algorithm lies in $\tilde{O}\big(\delta^{6s+2}\, n^{11s+6}\big)$ operations in Q.
An example
Let n = 4 and s = 1 with f = (f), where f = x_1² + x_2² + x_3² + x_4² − 6x_1x_2x_3x_4 − 1.
Consider first the partition λ = (4¹). Then f^λ := T_λ(f) = −6z_{1,1}⁴ + 4z_{1,1}² − 1, which has no real solution, since f^λ = −2z_{1,1}⁴ − (2z_{1,1}² − 1)² < 0 for all z_{1,1} ∈ R.
Next we consider λ = (2²). Then f^{(2²)} = 2z_{1,1}² + 2z_{2,1}² − 6z_{1,1}²z_{2,1}² − 1, and we take φ = 5(z_{1,1}² + z_{2,1}²) − 9(z_{1,1} + z_{2,1}) − 3. In this case, in the elementary symmetric coordinates, f̂^{(2²)} = 2e_{1,1}² − 6e_{2,1}² − 4e_{2,1} − 1 and φ̂ = 5e_{1,1}² − 9e_{1,1} − 10e_{2,1} − 3. The critical points of φ restricted to V(f^{(2²)}) are the solutions of f̂^{(2²)} = det Jac(f̂^{(2²)}, φ̂) = 0, that is,
$$2e_{1,1}^2 - 6e_{2,1}^2 - 4e_{2,1} - 1 = 120\,e_{1,1}e_{2,1} - 108\,e_{2,1} - 36 = 0.$$
A zero-dimensional parametrization of these critical points is given by ((q, v_{1,1}, v_{2,1}), μ). At the final step, we check that the system f_1 = q = 0, with f_1 = q′T² − v_{1,1}T + v_{2,1} ∈ Q[y, T], has real solutions. This implies that V(f) ∩ R⁴ is non-empty.
The output of our algorithm is consistent with the fact that the point (1, 1, 1/2, 1/2) is in V(f) ∩ R⁴.
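The computations in this example can be replayed with sympy. The sketch below verifies the stated real point on V(f), rewrites the slice in the coordinates e_1 = e_{1,1}, e_2 = e_{2,1}, and recomputes the 2 × 2 minor that cuts out the critical points.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1:5")
f = x1**2 + x2**2 + x3**2 + x4**2 - 6*x1*x2*x3*x4 - 1
half = sp.Rational(1, 2)
assert f.subs({x1: 1, x2: 1, x3: half, x4: half}) == 0   # (1, 1, 1/2, 1/2) lies on V(f)

e1, e2 = sp.symbols("e1 e2")
g = 2*e1**2 - 6*e2**2 - 4*e2 - 1        # f^(2,2) written in e_1, e_2
phi = 5*e1**2 - 9*e1 - 10*e2 - 3        # the invariant map, in e-coordinates

minor = sp.Matrix([g, phi]).jacobian([e1, e2]).det()
print(sp.expand(minor))                  # 120*e1*e2 - 108*e2 - 36
print(sp.solve([g, minor], [e1, e2]))    # finitely many critical points
```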
TOPICS FOR FUTURE RESEARCH
Determining topological properties of a real variety V(f) ∩ R^n is an important algorithmic problem. Here we have presented an efficient algorithm to decide whether V(f) ∩ R^n is empty or not. More generally, we expect that the ideas presented here may lead to algorithmic improvements also in more refined questions, such as computing one point per connected component or the Euler characteristic of a real symmetric variety. Furthermore, while our complexity gains are significant for symmetric input, we conjecture that we can do better in certain cases. In particular, when the degree of the polynomials is at most d, we expect that a combination with the topological properties of symmetric semi-algebraic sets found in [12, Prop 9] can reduce the number of orbits considered; for example, instead of n we might only need n/2 for fixed d. Finally, a generalization to general symmetric semi-algebraic sets should be possible.
For i = 1, . . . , r, let e_i = (e_{1,i}, . . . , e_{ℓ_i,i}) be a set of new variables and let p_i = (p_{1,i}, . . . , p_{ℓ_i,i}); we write e = (e_1, . . . , e_r) and p = (p_1, . . . , p_r).
Lemma 2.1. Let g ∈ Q[z_1, . . . , z_r] be invariant under the action of S_{ℓ_1} × · · · × S_{ℓ_r}. Then there exists a unique ĝ in Q[e] such that g = ĝ(E). Similarly, consider the sequences of Newton sums p_i = (p_{1,i}, . . . , p_{ℓ_i,i}) and p = (p_1, . . . , p_r), together with their polynomial counterparts P_i = (P_{1,i}, . . . , P_{ℓ_i,i}) and P = (P_1, . . . , P_r).
The map E sends z to (e_{1,i}(z_{1,i}, . . . , z_{ℓ_i,i}), . . . , e_{ℓ_i,i}(z_{1,i}, . . . , z_{ℓ_i,i}))_{1≤i≤r}, where e_{j,i}(z_{1,i}, . . . , z_{ℓ_i,i}) is the j-th elementary symmetric function in z_{1,i}, . . . , z_{ℓ_i,i}, for i = 1, . . . , r and j = 1, . . . , ℓ_i. One can think of the map E as a compression of orbits: by applying this map, we can represent an S_λ-orbit O of type λ by the single point E(O).
A zero-dimensional parametrization also involves a linear form μ in the variables such that μ(v_{1,1}, . . . , v_{ℓ_r,r}) = y q′ (so the roots of q are the values taken by μ on the encoded set).
Mapping to orbits: the map T_λ. For a partition λ = (n_1^{ℓ_1} · · · n_r^{ℓ_r}) of n, we define the Q-algebra homomorphism T_λ : Q[x_1, . . . , x_n] → Q[z_1, . . . , z_r], with z_i = (z_{1,i}, . . . , z_{ℓ_i,i}) for all i, which maps the variables x_1, . . . , x_n to
$$z_{1,1}, \ldots, z_{1,1}, \ \ldots, \ z_{\ell_1,1}, \ldots, z_{\ell_1,1}, \ \ldots, \ z_{1,r}, \ldots, z_{1,r}, \ \ldots, \ z_{\ell_r,r}, \ldots, z_{\ell_r,r}, \tag{2}$$
where each z_{j,i} is repeated n_i times.
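The substitution T_λ is a plain variable replacement and is easy to reproduce; the sympy sketch below applies it to the example polynomial f from the previous section for the partition λ = (2²) of n = 4 (so each block variable is repeated twice). The names z11, z21 are illustrative.

```python
import sympy as sp

x = sp.symbols("x1:5")
z11, z21 = sp.symbols("z11 z21")

def T_lambda(poly):
    # lambda = (2^2): x1, x2 -> z11 and x3, x4 -> z21
    return sp.expand(poly.subs({x[0]: z11, x[1]: z11, x[2]: z21, x[3]: z21}))

f = x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 - 6*x[0]*x[1]*x[2]*x[3] - 1
print(T_lambda(f))   # = 2*z11**2 + 2*z21**2 - 6*z11**2*z21**2 - 1
```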
Lemma 3.4. Under the assumptions of the previous lemma, if f satisfies condition (A), then T_λ(f) ⊂ Q[z_1, . . . , z_r] does as well.
Proof. Let ζ = (ζ_{1,1}, . . . , ζ_{ℓ_1,1}, . . . , ζ_{1,r}, . . . , ζ_{ℓ_r,r}) be a zero of T_λ(f) in C^ℓ. We have to prove that Jac_{z_1,...,z_r}(T_λ(f))(ζ) has a trivial left kernel. Consider the point
$$\xi = \big(\zeta_{1,1}, \ldots, \zeta_{1,1}, \ \ldots, \ \zeta_{\ell_r,r}, \ldots, \zeta_{\ell_r,r}\big) \in \mathbf{C}^n, \tag{3}$$
where each ζ_{j,i} is repeated n_i times; it lies in V(f). In particular, for any g in Q[x_1, . . . , x_n], we have T_λ(g)(ζ) = g(ξ). Applying this to the Jacobian matrix of f, we obtain T_λ(Jac(f))(ζ) = Jac(f)(ξ). Since by assumption f is symmetric, the previous lemma implies that Jac(f)(ξ) and Jac_{z_1,...,z_r}(T_λ(f))(ζ) differ by right multiplication by a matrix of full rank, so a nonzero vector in the left kernel of the latter would yield one in the left kernel of the former, which is ruled out by condition (A).
Lemma 3.5. Assume (g_1, . . . , g_s) ⊂ Q[z_1, . . . , z_r] is S_{ℓ_1} × · · · × S_{ℓ_r}-invariant and satisfies condition (A). If we set h_j to be such that g_j = h_j(p) for all j, then (h_1, . . . , h_s) also satisfies condition (A).
Lemma 3.6. Assume (g_1, . . . , g_s) ⊂ Q[z_1, . . . , z_r] is S_{ℓ_1} × · · · × S_{ℓ_r}-invariant and satisfies condition (A). Then the sequence of polynomials (ĝ_1, . . . , ĝ_s) also satisfies condition (A).
Let g = (g_1, . . . , g_s) in Q[z_1, . . . , z_r] with each g_j invariant under the action of S_{ℓ_1} × · · · × S_{ℓ_r}.
Further, we denote by U ⊂ C^ℓ the open set consisting of points ζ = (ζ_1, . . . , ζ_r) such that the coordinates of ζ_i are pairwise distinct for i = 1, . . . , r. Note that U depends on the partition λ = (n_1^{ℓ_1} · · · n_r^{ℓ_r}); when needed because of the use of different partitions, we will denote it by U_λ.
Proposition 4.1. Let g = (g_1, . . . , g_s) be S_{ℓ_1} × · · · × S_{ℓ_r}-invariant polynomials in Q[z_1, . . . , z_r]. Suppose further that g satisfies condition (A).
For u = (u_1, . . . , u_r) in C^{ℓ_1} × · · · × C^{ℓ_r}, with each u_i in C^{ℓ_i}, we denote by S_u the polynomials in C[L_1, . . . , L_s, z_1, . . . , z_r] obtained by evaluating the indeterminates u at u, for all i. Finally, denote by π_z the projection from the (L, z)-space C^{s+ℓ} to the z-space C^ℓ.
Lemma 4.2. Suppose that g satisfies condition (A). Then, for any u, π_z(V(S_u)) is the critical locus of the restriction of φ_u to V(g).
Lemma 4.3. If the zero-set of S̃_u is finite, then V(S_u) ∩ (C^s × U) is finite.
Proposition 4.4. Suppose that g satisfies condition (A). Then, there exists a non-empty Zariski open set A ⊂ C^{ℓ_1} × · · · × C^{ℓ_r} such that for any u ∈ A, the ideal generated by S̃_u in C[L_1, . . . , L_s, z_1, . . . , z_r] is a radical ideal whose zero-set is finite.
Proof. By Sard's theorem [49, Chap. 2, Sec. 6.2, Thm 2], the set of critical values of this map is contained in a proper Zariski closed set B of C^{ℓ_1} × · · · × C^{ℓ_r}. Since g satisfies condition (A), for u outside B, the Jacobian matrix of S̃_u has full rank at any point of its zero-set. Hence, by the Jacobian criterion [22, Thm 16.19], the ideal generated by S̃_u in C[L_1, . . . , L_s, z_1, . . . , z_r] is radical and of dimension at most zero.
Proof of Proposition 4.1. Let A be the non-empty Zariski open set defined in Prop. 4.4. Since g satisfies condition (A), Lemma 4.2 implies that, for any u ∈ A, the critical locus of the map φ_u restricted to V(g) is equal to π_z(V(S_u)). In addition, the sequence (h_1, . . . , h_s) also satisfies condition (A), by Lemma 3.5. Then, by Prop. 4.4, for any u ∈ A, the algebraic set defined by S̃_u is finite. By Lemma 4.3, this implies that V(S_u) contains finitely many points in C^s × U. This finishes our proof of Prop. 4.1.
A real valued function f : R^m → R is proper at y ∈ R if there exists an ε > 0 such that f^{−1}([y − ε, y + ε]) is compact. Such functions are of interest because a proper polynomial restricted to a real algebraic set V reaches extrema on each connected component of V. Using [48, Thm 2.1 and Cor 2.2] one can construct proper polynomials in the following way.
Let φ be an S_λ-invariant map in Q[z_1, . . . , z_r]. Let ĝ and φ̂ in Q[e_1, . . . , e_r], where e_i = (e_{1,i}, . . . , e_{ℓ_i,i}) is a set of new variables, be such that g = ĝ(E_1, . . . , E_r) and φ = φ̂(E_1, . . . , E_r). Here E_i denotes the vector of elementary symmetric polynomials in the variables z_i, with each e_{j,i} having degree j, for all i, j.
Let λ = (n_1^{ℓ_1} · · · n_r^{ℓ_r}) be a partition of n, and let R = ((q, v_{1,1}, . . . , v_{ℓ_1,1}, . . . , v_{1,r}, . . . , v_{ℓ_r,r}), μ) be a parametrization which encodes a finite set W ⊂ C^ℓ. This set lies in the target space of the algebraic map E : C^ℓ → C^ℓ defined in Subsection 2.2 as E = (e_{1,1}, . . . , e_{ℓ_1,1}, . . . , e_{1,r}, . . . , e_{ℓ_r,r}).
Our main algorithm Real_emptiness takes symmetric polynomials f = (f_1, . . . , f_s) in Q[x_1, . . . , x_n], with s < n, which satisfy condition (A), and decides whether V(f) ∩ R^n is empty. For a partition λ, we first find the polynomials f^λ := T_λ(f), which are S_λ-invariant in Q[z_1, . . . , z_r], where T_λ is defined as in (2). By Corollary 3.4, f^λ satisfies condition (A), so we can apply the results of Section 4. Let φ_u be the map defined in (5) and A_λ ⊂ C^{ℓ_1} × · · · × C^{ℓ_r} be the non-empty Zariski open set defined in Proposition 4.1. Assume u is chosen in A_λ (this is one of the probabilistic aspects of our algorithm) at Step 1b. Then the critical locus of the restriction of φ_u to V(f^λ) is of dimension at most zero (by Proposition 4.1). In addition, the map φ_u is invariant under the action of the group S_λ. Let ĝ and φ̂ in Q[e_1, . . . , e_r] be such that f^λ = ĝ(E_1, . . . , E_r) and φ_u = φ̂(E_1, . . . , E_r).
Algorithm 1 Real_emptiness(f)
Input: symmetric polynomials f = (f_1, . . . , f_s) in Q[x_1, . . . , x_n] with s < n such that f satisfies (A)
Output: false if V(f) ∩ R^n is non-empty; true otherwise
(1) for all partitions λ = (n_1^{ℓ_1} · · · n_r^{ℓ_r}) of n of length at least s, do
(a) compute f^λ = T_λ(f), where T_λ is defined in (2)
(b) using a chosen u ∈ A_λ, where A_λ is defined as in Prop. 4.1, construct φ_u as in (5)
(c) compute R = Critical_points(f^λ, φ_u)
(d) run Decide(R)
(e) if Decide(R) is false, return false
(2) return true.
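To make the control flow of Algorithm 1 concrete, here is a minimal Python skeleton. The helper callables are stand-ins for T_λ, the construction of φ_u, Critical_points and Decide; they are assumptions of this sketch, not part of the paper's implementation.

```python
from sympy.utilities.iterables import ordered_partitions

def real_emptiness(f, n, s, t_lambda, build_phi, critical_points, decide):
    for lam in ordered_partitions(n):
        if len(lam) < s:                   # only partitions of length >= s
            continue
        f_lam = t_lambda(f, lam)           # step (a)
        phi_u = build_phi(lam)             # step (b), u chosen generically
        R = critical_points(f_lam, phi_u)  # step (c)
        if decide(R) is False:             # steps (d)-(e)
            return False                   # a real point of V(f) exists
    return True                            # V(f) has no real point
```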
where c = deg(φ) + δ − 1 and C = (deg(φ) + 1) + δ. Notice that, for instance, for r = 2 and ℓ_1 = ℓ_2 = 2, one has ĝ = (3e_{1,1}² − e_{1,2}) e_{2,2} ∈ Q[e_{1,1}, e_{1,2}, e_{2,1}, e_{2,2}].
REFERENCES
[1] B. Bank, M. Giusti, J. Heintz, G. Lecerf, G. Matera, and P. Solernó. Degeneracy loci and polynomial equation solving. Foundations of Computational Mathematics 15(1):159-184, 2015.
[2] B. Bank, M. Giusti, J. Heintz, and G.-M. Mbakop. Polar varieties and efficient real equation solving: the hypersurface case. Journal of Complexity 13(1):5-27, 1997.
[3] B. Bank, M. Giusti, J. Heintz, and G.-M. Mbakop. Polar varieties and efficient real elimination. Mathematische Zeitschrift 238(1):115-144, 2001.
[4] B. Bank, M. Giusti, J. Heintz, and L. M. Pardo. On the intrinsic complexity of point finding in real singular hypersurfaces. Inform. Process. Lett. 109(19):1141-1144, 2009.
[5] B. Bank, M. Giusti, J. Heintz, and L.-M. Pardo. Generalized polar varieties and efficient real elimination procedure. Kybernetika 40(5):519-550, 2004.
[6] B. Bank, M. Giusti, J. Heintz, and L.-M. Pardo. Generalized polar varieties: geometry and algorithms. Journal of Complexity, 2005.
[7] B. Bank, M. Giusti, J. Heintz, and M. Safey El Din. Intrinsic complexity estimates in polynomial optimization. Journal of Complexity 30(4):430-443, 2014. https://doi.org/10.1016/j.jco.2014.02.005
[8] S. Basu, R. Pollack, and M.-F. Roy. On the combinatorial and algebraic complexity of quantifier elimination. Journal of the ACM 43(6):1002-1045, 1996.
[9] S. Basu, R. Pollack, and M.-F. Roy. Algorithms in Real Algebraic Geometry (second edition). Springer-Verlag, 2006; online version 2008.
[10] S. Basu and C. Riener. Bounding the equivariant Betti numbers of symmetric semi-algebraic sets. Advances in Mathematics 305:803-855, 2017. https://doi.org/10.1016/j.aim.2016.09.015
[11] S. Basu and C. Riener. Efficient algorithms for computing the Euler-Poincaré characteristic of symmetric semi-algebraic sets. In Ordered Algebraic Structures and Related Topics (CIRM, Luminy, 2015), Contemporary Mathematics 697, pages 53-81. American Mathematical Society, Providence, RI, 2017.
[12] S. Basu and C. Riener. Vandermonde varieties, mirrored spaces, and the cohomology of symmetric semi-algebraic sets. Foundations of Computational Mathematics 22(5):1395-1462, 2022.
[13] C. W. Brown and J. H. Davenport. The complexity of quantifier elimination and cylindrical algebraic decomposition. In Proceedings of ISSAC 2007, pages 54-60.
[14] L. Busé and A. Karasoulou. Resultant of an equivariant polynomial system with respect to the symmetric group. Journal of Symbolic Computation 76:142-157, 2016.
[15] J. Canny. The Complexity of Robot Motion Planning. MIT Press, 1987.
[16] C. Chen, J. H. Davenport, J. P. May, M. Moreno Maza, B. Xia, and R. Xiao. Triangular decomposition of semi-algebraic systems. In Proceedings of ISSAC 2010, pages 187-194.
[17] C. Chen, M. Moreno Maza, B. Xia, and L. Yang. Computing cylindrical algebraic decomposition via triangular decomposition. In Proceedings of ISSAC 2009, pages 95-102.
[18] A. Colin. Solving a system of algebraic equations with symmetries. Journal of Pure and Applied Algebra 117-118:195-215, 1997. https://doi.org/10.1016/S0022-4049(97)00011-X
[19] G. E. Collins. Quantifier elimination for real closed fields by cylindrical algebraic decomposition. Lecture Notes in Computer Science 33:515-532, 1975.
[20] J. H. Davenport and J. Heintz. Real quantifier elimination is doubly exponential. Journal of Symbolic Computation 5(1-2):29-35, 1988.
[21] L. L. Dines. Systems of linear inequalities. Annals of Mathematics, pages 191-199, 1919.
[22] D. Eisenbud. Commutative Algebra: with a View Toward Algebraic Geometry. Graduate Texts in Mathematics 150. Springer, 2013.
[23] J. Elliott, M. Giesbrecht, and É. Schost. On the bit complexity of finding points in connected components of a smooth real hypersurface. In Proceedings of ISSAC 2020, pages 170-177. ACM.
[24] M. England, R. Bradford, and J. H. Davenport. Cylindrical algebraic decomposition with equational constraints. Journal of Symbolic Computation 100:38-71, 2020.
[25] J.-C. Faugère, G. Labahn, M. Safey El Din, É. Schost, and T. X. Vu. Computing critical points for invariant algebraic systems. Journal of Symbolic Computation 116:365-399, 2023.
[26] J.-C. Faugère and S. Rahmany. Solving systems of polynomial equations with symmetries using SAGBI-Gröbner bases. In Proceedings of ISSAC 2009. https://hal.archives-ouvertes.fr/hal-01294702
[27] J.-C. Faugère, M. Safey El Din, and P.-J. Spaenlehauer. Critical points and Gröbner bases: the unmixed case. In Proceedings of ISSAC 2012, pages 162-169. ACM. https://doi.org/10.1145/2442829.2442855
[28] J.-C. Faugère and J. Svartz. Solving polynomial systems globally invariant under an action of the symmetric group and application to the equilibria of N vortices in the plane. In Proceedings of ISSAC 2012, pages 170-178. ACM. https://doi.org/10.1145/2442829.2442856
[29] J. B. J. Fourier. Solution d'une question particulière du calcul des inégalités. Nouveau Bulletin des Sciences par la Société philomatique de Paris 99:100, 1826.
[30] K. Gatermann and P. A. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Applied Algebra 192(1-3):95-128, 2004.
[31] M. Giusti, G. Lecerf, and B. Salvy. A Gröbner free alternative for polynomial system solving. Journal of Complexity 17(1):154-211, 2001.
[32] D. Grigoriev and N. Vorobjov. Solving systems of polynomial inequalities in subexponential time. Journal of Symbolic Computation 5:37-64, 1988.
[33] J. D. Hauenstein, M. Safey El Din, É. Schost, and T. X. Vu. Solving determinantal systems using homotopy techniques. Journal of Symbolic Computation 104:754-804, 2021. https://doi.org/10.1016/j.jsc.2020.09.008
[34] J. Heintz, M.-F. Roy, and P. Solernò. On the theoretical and practical complexity of the existential theory of the reals. The Computer Journal 36(5):427-431, 1993.
[35] H. Hong. Heuristic search strategies for cylindrical algebraic decomposition. In Proceedings of Artificial Intelligence and Symbolic Mathematical Computing, Lecture Notes in Computer Science 737, pages 152-165. Springer, 1992.
[36] G. Labahn, M. Safey El Din, É. Schost, and T. X. Vu. Homotopy techniques for solving sparse column support determinantal polynomial systems. Journal of Complexity 66:101557, 2021.
[37] S. McCallum. An Improved Projection Operator for Cylindrical Algebraic Decomposition. PhD dissertation, University of Wisconsin-Madison, 1984.
[38] S. McCallum. On projection in CAD-based quantifier elimination with equational constraint. In Proceedings of ISSAC 1999, pages 145-149. ACM.
[39] N. Perminov and Sh. Shakirov. Discriminants of symmetric polynomials. arXiv preprint arXiv:0910.5757, 2009.
[40] J. Renegar. On the computational complexity and geometry of the first-order theory of the reals. Journal of Symbolic Computation 13(3):255-352, 1992.
[41] C. Riener. On the degree and half-degree principle for symmetric polynomials. Journal of Pure and Applied Algebra 216(4):850-856, 2012. https://doi.org/10.1016/j.jpaa.2011.08.012
[42] C. Riener. Symmetric semi-algebraic sets and non-negativity of symmetric polynomials. Journal of Pure and Applied Algebra 220(8):2809-2815, 2016. https://doi.org/10.1016/j.jpaa.2015.12.010
[43] C. Riener and M. Safey el Din. Real root finding for equivariant semi-algebraic systems. In Proceedings of ISSAC 2018, pages 335-342. ACM. https://doi.org/10.1145/3208976.3209023
[44] C. Riener, T. Theobald, L. J. Andrén, and J. B. Lasserre. Exploiting symmetries in SDP-relaxations for polynomial optimization. Mathematics of Operations Research 38(1):122-141, 2013.
[45] M. Safey El Din and É. Schost. Polar varieties and computation of one point in each connected component of a smooth real algebraic set. In Proceedings of ISSAC 2003, pages 224-231. ACM.
[46] M. Safey El Din and É. Schost. Bit complexity for multi-homogeneous polynomial system solving: application to polynomial minimization. Journal of Symbolic Computation 87:176-206, 2018. https://doi.org/10.1016/j.jsc.2017.08.001
[47] M. Safey El Din and P.-J. Spaenlehauer. Critical point computations on smooth varieties: degree and complexity bounds. In Proceedings of ISSAC 2016, pages 183-190. ACM. https://doi.org/10.1145/2930889.2930929
[48] T. Sakkalis. A note on proper polynomial maps. Communications in Algebra 33(9):3359-3365, 2005.
[49] I. R. Shafarevich and M. Reid. Basic Algebraic Geometry, Vol. 2. Springer, 1994.
[50] P.-J. Spaenlehauer. On the complexity of computing critical points with Gröbner bases. SIAM Journal on Optimization 24(3):1382-1401, 2014.
[51] A. W. Strzeboński. Cylindrical algebraic decomposition using validated numerics. Journal of Symbolic Computation 41(9):1021-1038, 2006.
[52] A. W. Strzeboński. Cylindrical algebraic decomposition using local projections. In Proceedings of ISSAC 2014, pages 389-396.
[53] B. Sturmfels. Algorithms in Invariant Theory (2nd ed.). Texts and Monographs in Symbolic Computation. Springer, 2008.
[54] A. Tarski. A Decision Method for Elementary Algebra and Geometry. The RAND Corporation, Santa Monica, CA, 1948.
[55] V. Timofte. On the positivity of symmetric polynomial functions. Part I: General results. J. Math. Anal. Appl. 284(1):174-190, 2003. https://doi.org/10.1016/S0022-247X(03)00301-9
| [] |
[] | [] | [] | [] | UDC 681.5.015. APPLICATION OF A NONLINEAR OPERATOR FOR THE IDENTIFICATION OF AN UNKNOWN PARAMETER IN A SCALAR REGRESSION EQUATION WITH A DISTURBANCE IN THE MEASUREMENT CHANNEL. V. S. Vorobiev, A. A. Bobtsov, N. A. Nikolaev, A. A. Pyrkin. ITMO University, Saint Petersburg, 197101, Russian Federation. Corresponding author: [email protected] Abstract. Subject of research. The article investigates an algorithm for identifying an unknown constant parameter of a scalar regression model using a nonlinear operator that yields a new regression equation (with an extended number of unknown parameters) for which the influence of measurement noise or of a disturbance is minimal. The goal of the work is to develop a new method for identifying an unknown constant parameter of the classical linear regression model in the presence of measurement noise or disturbances. The problem is solved under the condition that the magnitude of the noise or disturbance is less than one in absolute value and, moreover, does not exceed the useful component of the measured signal. Methods. A method is proposed for estimating the parameter of a linear scalar regression model based on a nonlinear operator (in this article, an exponential function/operator was chosen) that extends the regression equation to several unknown parameters while mitigating the influence of noise or disturbances. The method is based on the Taylor-series expansion of the exponential function with truncation of the terms of negligibly small amplitude, followed by the dynamic regressor extension and mixing (DREM) procedure, which returns the problem to a scalar regression model and thereby provides uniform convergence and fast transients when gradient-based identification is used. Main results. In the presence of measurement noise, the proposed method improves the accuracy of the regression-model parameter estimate compared with the classical method. Moreover, the proposed approach provides a methodology for extending the regression equation so as to reduce the influence of noise: the more parameters the new regression model contains (obtained by applying the nonlinear operator to the original regression), the smaller the value of the noise or disturbance. Practical relevance. The method proposed in the article is a new working tool for the identification of unknown constant parameters. Its application area is the identification of parameters of mathematical models of systems reducible to linear regression equations containing measurement noise or disturbances. The approach can be used for a wide class of technical control problems in which the identification of unknown constant parameters is relevant. Keywords: parameter identification, linear regression, nonlinear operator, measurement noise. | 10.48550/arxiv.2305.16359 | [
"https://export.arxiv.org/pdf/2305.16359v1.pdf"
] | 258,947,516 | 2305.16359 | a3b776d9b7ee993bbeb26c8ab02303ecf6ef9631 |
UDC 681.5.015. APPLICATION OF A NONLINEAR OPERATOR FOR THE IDENTIFICATION OF AN UNKNOWN PARAMETER IN A SCALAR REGRESSION EQUATION WITH A DISTURBANCE IN THE MEASUREMENT CHANNEL. V. S. Vorobiev, A. A. Bobtsov, N. A. Nikolaev, A. A. Pyrkin. ITMO University, Saint Petersburg, 197101, Russian Federation. Corresponding author: [email protected] Abstract. Subject of research. The article investigates an algorithm for identifying an unknown constant parameter of a scalar regression model using a nonlinear operator that yields a new regression equation (with an extended number of unknown parameters) for which the influence of measurement noise or of a disturbance is minimal. The goal of the work is to develop a new method for identifying an unknown constant parameter of the classical linear regression model in the presence of measurement noise or disturbances. The problem is solved under the condition that the magnitude of the noise or disturbance is less than one in absolute value and, moreover, does not exceed the useful component of the measured signal. Methods. A method is proposed for estimating the parameter of a linear scalar regression model based on a nonlinear operator (in this article, an exponential function/operator was chosen) that extends the regression equation to several unknown parameters while mitigating the influence of noise or disturbances. The method is based on the Taylor-series expansion of the exponential function with truncation of the terms of negligibly small amplitude, followed by the dynamic regressor extension and mixing (DREM) procedure, which returns the problem to a scalar regression model and thereby provides uniform convergence and fast transients when gradient-based identification is used. Main results. In the presence of measurement noise, the proposed method improves the accuracy of the regression-model parameter estimate compared with the classical method. Moreover, the proposed approach provides a methodology for extending the regression equation so as to reduce the influence of noise: the more parameters the new regression model contains (obtained by applying the nonlinear operator to the original regression), the smaller the value of the noise or disturbance. Practical relevance. The method proposed in the article is a new working tool for the identification of unknown constant parameters. Its application area is the identification of parameters of mathematical models of systems reducible to linear regression equations containing measurement noise or disturbances. The approach can be used for a wide class of technical control problems in which the identification of unknown constant parameters is relevant. Keywords: parameter identification, linear regression, nonlinear operator, measurement noise.
Introduction
The article addresses the classical problem of parameter identification for a scalar linear regression model, that is, a static equation whose left-hand side is known and whose right-hand side is the sum of n unknown constant parameters multiplied by n known functions (regressors). Many approaches exist for estimating the parameters of a linear regression model (most of them can be found in [1]). If the linear regression equation contains no measurement noise or disturbance on its right-hand side, then, under a persistency-of-excitation condition on the regressors (see, e.g., [2] and [3]), the gradient descent method recovers the parameters asymptotically exactly. In the presence of measurement noise or disturbances, however, the parameters are estimated with an error.
Broadly, parameter identification methods can be divided into two large groups: methods with data post-processing and real-time methods. Methods that use post-processing admit more complex algorithms and presuppose knowledge of statistical quantities obtained from numerous observations of some process (see [4], [5]). For example, an observer built on the Kalman filter requires knowledge of the disturbance variance [6]. Real-time identification methods do not require a set of experimental/statistical data and are therefore widely used in parameter estimation problems (see, e.g., [7], [8] and [9]).
As noted above, one of the classical and widely used approaches to identification is the gradient descent method, which requires a persistency-of-excitation condition for asymptotic convergence of the adjustable parameters [10]. In [11] it is noted that tuning a gradient identification scheme usually amounts to selecting the gain and requires many attempts. At the same time, as shown in [12], increasing the gain in the gradient descent method does not always speed up the transient, while it does increase ripple and sharpen overshoots. In the presence of noise or disturbances, however, changing the gain has no significant effect on how accurately the parameter estimates converge to their true values. For scalar regression models (that is, equations with a single unknown parameter) and high-frequency noise, decreasing the gain may improve accuracy at the cost of a longer convergence time, whereas increasing the gain speeds up convergence but degrades accuracy.
This work proposes a new approach that improves the accuracy of estimating the parameter of a scalar regression model in the presence of a disturbance, based on the application of a nonlinear exponential operator and a reparametrization of the original regression model. The proposed approach eliminates the growth of the disturbance influence as the gain is increased when the parameter is identified by the gradient descent method.
Problem statement
Consider a scalar regression model of the form
$$y(t) = \varphi(t)\theta + \delta(t), \tag{1}$$
where y(t) and φ(t) are measurable signals, θ is an unknown constant parameter, and δ(t) is some unknown bounded function of time (measurement noise).
The problem is to design an algorithm estimating the unknown parameter θ such that:
1) for δ(t) = 0,
$$\lim_{t\to\infty} |\theta - \hat{\theta}(t)| = 0, \tag{2}$$
where |·| is the Euclidean norm;
2) for a nonzero function δ(t), the inequality
$$|\theta - \hat{\theta}(t)| \le \varepsilon_0 \tag{3}$$
holds, where ε_0 is, in the general case, some small number. The formulated problem is solved under the following assumptions.
Assumption 1. The measurement noise δ(t) is strictly smaller in modulus than the useful signal φ(t)θ.
Assumption 2. The function δ(t) is such that |δ(t)| ≤ 1.
Main result
Apply to the regression model (1) the linear operator (filter) k/(p + k), where p := d/dt is the differentiation operator and k > 0 is the filter gain. Equation (1) then takes the form
$$\bar{y}(t) = \bar{\varphi}(t)\theta + \bar{\delta}(t), \tag{4}$$
where ȳ(t) = [k/(p + k)] y(t), φ̄(t) = [k/(p + k)] φ(t), and δ̄(t) = [k/(p + k)] δ(t).
To find an estimate of the unknown constant parameter θ, apply to equation (4) the nonlinear operator
$$x(t) := e^{\bar{y}(t)}. \tag{5}$$
Remark 1. It should be noted that in this article an exponential function is used as the nonlinear operator, although in the authors' view this choice is not the only one: any operator that allows the influence of the signal δ̄(t) to be mitigated may be used to transform equation (4).
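The filtering step (4) and the operator (5) are easy to reproduce numerically. Below is a minimal Python sketch with assumed example signals: the filter k/(p + k) is integrated as ẏ_f = k(y − y_f) by forward Euler, and x = exp(y_f). All numeric values are illustrative choices.

```python
import numpy as np

dt, k, theta = 1e-3, 10.0, 2.0
t = np.arange(0.0, 5.0, dt)
phi = np.sin(t) + 2.0                    # example regressor
delta = 0.5 * np.sin(10.0 * t)           # disturbance with |delta| <= 1
y = phi * theta + delta                  # measured signal, model (1)

y_f = np.zeros_like(t)                   # filtered measurement, eq. (4)
for i in range(1, len(t)):
    y_f[i] = y_f[i - 1] + dt * k * (y[i - 1] - y_f[i - 1])

x = np.exp(y_f)                          # the nonlinear operator (5)
print(x[-1], np.exp(y[-1]))              # filtered vs raw exponentials
```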
Substituting (4) into equation (5), we obtain
$$x = e^{\bar{\varphi}\theta + \bar{\delta}} = e^{\bar{\varphi}\theta} e^{\bar{\delta}} \approx e^{\bar{\varphi}\theta}\Big(1 + \bar{\delta} + \frac{\bar{\delta}^2}{2}\Big), \tag{6}$$
where the term e^{δ̄} has been represented by its Taylor expansion about zero, and all terms of the expansion of degree higher than two have been dropped (regarding them as negligibly small by Assumption 2).
For compactness of notation, introduce
$$\alpha := e^{\bar{\varphi}\theta}, \qquad \tau := \bar{\delta}, \qquad \rho := \frac{\bar{\delta}^2}{2}. \tag{7}$$
With (7), expression (6) takes the form
$$x = \alpha + \alpha\tau + \alpha\rho. \tag{8}$$
From this it is easy to see that α = x − ατ − αρ. Differentiating (8) gives
$$\dot{x} = \dot{\alpha} + \dot{\alpha}\tau + \alpha\dot{\tau} + \dot{\alpha}\rho + \alpha\dot{\rho}. \tag{9}$$
Substituting (7) and (8) into equation (9), and using α̇ = φ̄̇ θ e^{φ̄θ} = φ̄̇ θ α, we obtain, after expanding the brackets, relations (10)-(13), in which new notations are introduced for the measurable combinations of x, ẋ, ȳ, φ̄ and their derivatives; since ẋ = ȳ̇ x (because x = e^{ȳ}), all of these quantities are computable from the measured signals. Here the symbol ′ denotes the derivative. Thus, after all transformations, we obtain a new regression model of the form
$$q(t) = \psi_1(t)\theta + \psi_2(t)\theta^2 + \psi_3(t)\theta^3, \tag{15}$$
where Ψ = (ψ_1 ψ_2 ψ_3)^T is the new regressor vector, Θ = (θ θ² θ³)^T is the vector composed of powers of the estimated parameter, and q(t) is a measurable signal.
Remark 2. It should be noted that the new regression model (15) contains no unaccounted noise or disturbance caused by the signal δ(t): as shown above, all expansion terms of degree higher than two were taken to be negligibly small. Thus, instead of the regression models (1) and (4), which contain the disturbance, we have obtained equation (15), which involves three unknown parameters that are nonlinear combinations of θ.
To obtain an estimate of the unknown parameter θ, we apply the DREM procedure (Dynamic Regressor Extension and Mixing), first proposed in [13] (see also [14] and [15] for details and extensions).
According to the DREM procedure, the regression model (15) is first extended: applying linear stable filters to (15) yields three equations q_i = Ψ_i^T Θ, i = 1, 2, 3, which are stacked into the extended system
$$Q_e = \Phi_e \Theta, \qquad \Psi = (\psi_1\ \psi_2\ \psi_3)^T, \qquad \Theta = (\theta\ \theta^2\ \theta^3)^T, \tag{16}$$
where Φ_e is the 3 × 3 matrix with rows Ψ_i^T and Q_e = (q_1 q_2 q_3)^T. Multiplying (16) on the left by the adjugate matrix adj(Φ_e) mixes the regression into the decoupled scalar form
$$Z = \Delta\Theta, \qquad \Delta = \det(\Phi_e), \qquad Z = \mathrm{adj}(\Phi_e)\, Q_e. \tag{17}$$
The estimate can then be found by gradient descent from equation (17) as follows:
$$\dot{\hat{\theta}} = \kappa\,\Delta\,(z_1 - \Delta\hat{\theta}), \tag{18}$$
where κ > 0 is a tunable gain and z_1 is the first element of the vector Z, corresponding to the first element θ of the vector Θ.
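The mixing step of DREM can be illustrated with toy numbers: three stacked copies of the regression (15) form Φ_e Θ = Q_e, and multiplication by the adjugate of Φ_e decouples the unknowns. The matrix entries below are arbitrary example values, not signals produced by the filters of the paper.

```python
import numpy as np

theta = 2.0
Theta = np.array([theta, theta**2, theta**3])

Phi_e = np.array([[1.0, 0.3, 0.1],        # rows: regressors Psi_i^T at some t
                  [0.4, 1.2, 0.2],
                  [0.2, 0.5, 1.1]])
Q_e = Phi_e @ Theta                        # stacked measurements, system (16)

Delta = np.linalg.det(Phi_e)
adj = Delta * np.linalg.inv(Phi_e)         # adjugate of Phi_e
Z = adj @ Q_e                              # system (17): Z = Delta * Theta

print(Z[0] / Delta)                        # recovers theta = 2.0
```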
Note that the application of the new approach does not rule out the classical solution of the identification problem for model (1). Indeed, an estimate of θ from the scalar regression (4) can be found by the gradient method
$$\dot{\hat{\theta}} = \kappa_1\,\bar{\varphi}\,(\bar{y} - \bar{\varphi}\hat{\theta}), \tag{19}$$
where κ_1 > 0 is a tunable gain. However, as will be shown below by computer simulation, the accuracy of parameter estimation with the new approach exceeds that of the classical gradient scheme (19).
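For reference, here is a minimal numerical sketch of the classical estimator (19) on the filtered regression (4) under a sinusoidal disturbance; the DREM-based update (18) has the same structure with (φ̄, ȳ) replaced by (Δ, z_1). Signals, gains and the true parameter are assumed example values.

```python
import numpy as np

dt, k, kappa1, theta = 1e-3, 10.0, 5.0, 2.0
t = np.arange(0.0, 20.0, dt)
phi = np.sin(t) + 2.0
delta = 0.5 * np.sin(10.0 * t)
y = phi * theta + delta

y_f = np.zeros_like(t); phi_f = np.zeros_like(t)
theta_hat = np.zeros_like(t)
for i in range(1, len(t)):
    y_f[i] = y_f[i - 1] + dt * k * (y[i - 1] - y_f[i - 1])
    phi_f[i] = phi_f[i - 1] + dt * k * (phi[i - 1] - phi_f[i - 1])
    err = y_f[i - 1] - phi_f[i - 1] * theta_hat[i - 1]
    theta_hat[i] = theta_hat[i - 1] + dt * kappa1 * phi_f[i - 1] * err

print(theta_hat[-1])   # close to theta = 2.0, but biased by the disturbance
```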
Simulation
To illustrate the performance of the new approach, numerical simulation was carried out in the Matlab software package. The new method for estimating the unknown parameter of the regression model (1) is compared with the gradient method (19). The simulation parameters are chosen as follows: θ is the estimated parameter; θ̂(0) is the initial condition of the parameter estimate; φ is the regressor; k is the gain of the filter k/(p + k); b_i are the gains of the filters H_i(p) = b_i/(p + b_i), i = 1, 2, used in the DREM procedure; κ_1 is the gain of the gradient method (19); and κ is the gain in (18). During the simulation, the estimates of the parameter of the regression model obtained by the method proposed in this work (denoted θ̂ in the figures) are compared with those of the gradient descent method (19) (denoted θ̂_filtered). The operation of the estimation algorithms is studied for various types of measurement noise. Fig. 1 shows the measurement noise δ(t) = sin 10t and the signal y(t).
Fig. 1 - Measurement noise δ(t) = sin 10t and the signal y(t). Fig. 2 shows the operation of the algorithms for this measurement noise and the chosen gains. Fig. 2 - Estimates of the parameter θ obtained by the method proposed in this work and by the gradient method in the presence of the measurement noise. Fig. 3 shows a second measurement noise and the corresponding signal. Fig. 3 - Measurement noise and signal. Fig. 4 - Estimates of the parameter θ obtained by the proposed method and by the gradient method in the presence of this noise. Fig. 5 shows a measurement noise given as a uniform random distribution on an interval, together with the corresponding signal. Fig. 5 - Measurement noise given as a uniform random distribution on an interval, and the signal. Fig. 6 shows the operation of the algorithms for this noise. Fig. 6 - Estimates of the parameter θ obtained by the proposed method and by the gradient method in the presence of the uniformly distributed random noise.
As follows from the transient plots, in all three cases studied the estimates of the parameter θ obtained by the method proposed in this work are more accurate (in some cases substantially so) than the estimates θ̂_filtered obtained by the gradient descent method (19).
Conclusion
The article has presented a new method for estimating the parameter of a scalar regression model of the form (1) containing measurement noise or a disturbance. The parameter identification problem was solved by deriving a new regression model through the application of a nonlinear operator, namely an exponential function. As shown in the article, this transformation substantially mitigates the influence of the measurement noise on the estimate of the unknown parameter.
To illustrate the performance of the proposed approach, computer simulation was carried out using the Matlab package. The simulation demonstrated the advantage in parameter-estimation accuracy of the proposed method over the gradient descent method.
References
... Twente: Enschede, The Netherlands, 2012.
Ioannou P., Fidan B. Adaptive Control Tutorial. Society for Industrial and Applied Mathematics, 2006.
Sastry S., Bodson M., Bartram J. F. Adaptive Control: Stability, Convergence, and Robustness. 1990.
Korotina M. et al. On parameter tuning and convergence properties of the DREM procedure. 2020 European Control Conference (ECC), IEEE, 2020, pp. 53-58.
Efimov D., Fradkov A. Design of impulsive adaptive observers for improvement of persistency of excitation. International Journal of Adaptive Control and Signal Processing, 2015, vol. 29, no. 6, pp. 765-782.
Aranovskiy S. et al. Performance enhancement of parameter estimators via dynamic regressor extension and mixing. IEEE Transactions on Automatic Control, 2017, vol. 62, no. 7, pp. 3546-3550.
Ortega R. et al. New results on parameter estimation via dynamic regressor extension and mixing: continuous and discrete-time cases. IEEE Transactions on Automatic Control, 2020, vol. 66, no. 5, pp. 2265-2272.
Wang J. et al. Fixed-time estimation of parameters for non-persistent excitation. European Journal of Control, 2020, vol. 55, pp. 24-32.
| [] |
[
"A Unified Approach for Maximizing Continuous DR-submodular Functions",
"A Unified Approach for Maximizing Continuous DR-submodular Functions"
] | [
"Mohammad Pedramfar [email protected] ",
"Christopher John Quinn [email protected] ",
"Vaneet Aggarwal [email protected] "
] | [] | [] | This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the general convex set. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where the oracle access is either deterministic or stochastic. We determine the number of required oracle accesses in all cases. Our approach gives new/improved results for nine out of the sixteen considered cases, avoids computationally expensive projections in two cases, with the proposed framework matching performance of state-of-the-art approaches in the remaining five cases. Notably, our approach for the stochastic function value-based oracle enables the first regret bounds with bandit feedback for stochastic DR-submodular functions. | 10.48550/arxiv.2305.16671 | [
"https://export.arxiv.org/pdf/2305.16671v1.pdf"
] | 258,947,711 | 2305.16671 | f910bee4a4102be291ddfaaaebc1380e163efb9b |
A Unified Approach for Maximizing Continuous DR-submodular Functions
26 May 2023
Mohammad Pedramfar [email protected]
Christopher John Quinn [email protected]
Vaneet Aggarwal [email protected]
A Unified Approach for Maximizing Continuous DR-submodular Functions
26 May 2023
This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the general convex set. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where the oracle access is either deterministic or stochastic. We determine the number of required oracle accesses in all cases. Our approach gives new/improved results for nine out of the sixteen considered cases, avoids computationally expensive projections in two cases, with the proposed framework matching performance of state-of-the-art approaches in the remaining five cases. Notably, our approach for the stochastic function value-based oracle enables the first regret bounds with bandit feedback for stochastic DR-submodular functions.
Introduction
The problem of optimizing DR-submodular functions over a convex set has attracted considerable interest in both the machine learning and theoretical computer science communities (Bach, 2019; Bian et al., 2019a; Hassani et al., 2017; Niazadeh et al., 2020). This is due to its many practical applications in modeling real-world problems, as demonstrated in works such as (Djolonga and Krause, 2014; Ito and Fujimaki, 2016; Gu et al., 2023; Li et al., 2023). Numerous studies have investigated approximation algorithms for constrained DR-submodular maximization, utilizing a variety of algorithms and proof techniques. These studies have addressed both monotone and non-monotone functions and considered various types of constraints on the feasible region. The studies have also considered different types of oracles, namely gradient oracles and value oracles, where the oracles could be exact (deterministic) or stochastic. Lastly, for some of the aforementioned offline problem settings, some studies have also considered analogous online optimization problem settings, where performance is measured in regret over a horizon. This paper aims to unify these disparate offline problems under a single framework by providing a comprehensive algorithm and analysis approach that covers a broad range of setups. By providing a unified framework, this paper presents novel results for several cases where previous research was either limited or non-existent, both for offline optimization problems and for extensions to related stochastic online optimization problems. This paper presents a Frank-Wolfe based meta-algorithm for (offline) constrained DR-submodular maximization, where the oracle may only be queried within the constraint set, with sixteen variants for sixteen problem settings. The algorithm is designed to handle settings where (i) the function is monotone or non-monotone, (ii) the feasible region is a downward-closed (d.c.) set (extended to include 0 for monotone functions) or a general convex set, (iii) gradient or value oracle access is available, and (iv) the oracle is exact or stochastic. Table 1 enumerates the cases and the corresponding results on oracle complexity (further details are provided in Appendix A). We derive the first oracle complexity guarantees for nine cases, derive the oracle complexity in two cases where the previous results required a computationally expensive projection step (Hassani et al., 2017) (and we obtain matching complexity in one of these), and obtain matching guarantees in the remaining five cases.
In addition to proving approximation ratios and oracle complexities for several (challenging) settings that are either the first of their kind or improvements over the state of the art, the technical novelties of our approach include (i) a new construction procedure for a shrunken constraint set that allows us to work with lower-dimensional feasible sets when given a value oracle, yielding the first results on general lower-dimensional feasible sets in this setting, and (ii) the first Frank-Wolfe type analysis for monotone functions over a general convex set when the oracle may only be queried within the feasible set, for any type of oracle.
Furthermore, we also consider online stochastic DR-submodular optimization with bandit feedback, where an agent sequentially picks actions (from a convex feasible region), receives stochastic rewards (in expectation a DR-submodular function) but no additional information, and seeks to maximize the expected cumulative reward. Performance is measured against the best action in expectation (or a near-optimal baseline when the offline problem is NP-hard but can be approximated to within α in polynomial time), the difference being denoted as the expected α-regret. For each of the offline setups, we extend the offline algorithm (the respective variants for the stochastic value oracle) and its oracle query guarantees to provide algorithms and α-regret bounds in the bandit feedback scenario. Table 2 enumerates the problem settings and expected regret bounds with bandit and semi-bandit feedback.

Table 1 (oracle complexities): This table compares the different results for the number of oracle calls (complexity) within the feasible set for DR-submodular maximization. Shaded rows indicate problem settings for which our work has the first guarantees or beats the SOTA. The different columns enumerate properties of the function, the convex feasible region (downward-closed, includes the origin, or general), and the oracle, as well as the approximation ratios and oracle complexity (the number of queries needed to achieve the stated approximation ratio with at most ǫ > 0 additive error). (See Theorem 8 in Appendix A.1 regarding (Mokhtari et al., 2020).) † When the oracle can be queried at any point in [0, 1]^d (even outside the feasible region K), the problem of optimizing monotone DR-submodular functions over a general convex set simplifies: (Bian et al., 2017b) and (Mokhtari et al., 2020) achieve the same ratios and complexity bounds as listed above for 0 ∈ K; (Chen et al., 2020) can achieve an approximation ratio of 1 − 1/e with O(1/ǫ^3) and O(1/ǫ^5) complexity for exact and stochastic value oracles, respectively. (*) The rows marked with a blue star correspond to cases where Algorithm 2 generalizes the corresponding algorithm and therefore has the same performance. ‡ (Hassani et al., 2017) uses gradient ascent, requiring potentially computationally expensive projections.

Table 2 (expected α-regret bounds): This table compares the different results for the expected α-regret for online stochastic DR-submodular maximization under bandit and semi-bandit feedback. Shaded rows indicate problem settings for which our work has the first guarantees or beats the SOTA. † The analysis in (Chen et al., 2018a) has an error (see the supplementary material for details). ‡ (Chen et al., 2018b) uses gradient ascent, requiring potentially computationally expensive projections.

The key contributions of this work can be summarized as follows:

1. This paper proposes a unified approach for maximizing continuous DR-submodular functions in a range of settings with different oracle access types, feasible region properties, and function properties. A Frank-Wolfe based algorithm is introduced which, compared to SOTA methods for each of the sixteen settings, achieves the best-known approximation coefficients for each case while providing (i) the first guarantees in nine cases, (ii) reduced computational complexity by avoiding projections in two cases, and (iii) matching guarantees in the remaining five cases.
2. In particular, this paper gives the first results on offline DR-submodular maximization (for both monotone and non-monotone functions) over general convex sets, and even over downward-closed convex sets, when only a value oracle is available over the feasible set. Most prior works on offline DR-submodular maximization require access to a gradient oracle.

3. The results, summarized in Table 2, are presented for two feedback models: bandit feedback, where only the (stochastic) reward value is available, and semi-bandit feedback, where a single stochastic sample of the gradient at the queried point is provided. This paper presents the first regret analysis with bandit feedback for stochastic DR-submodular maximization, for both monotone and non-monotone functions. For the semi-bandit feedback case, we provide the first result in one case, improve the state-of-the-art results in two cases, and give a result that avoids computationally intensive projections in one case.
Related Work:
The key related works are summarized in Tables 1 and 2, with comparisons to the proposed results. For online DR-submodular optimization with bandit feedback, there has been some prior work in the adversarial setup (Zhang et al., 2019, 2023; Niazadeh et al., 2021), which is not included in Table 2 as we consider the stochastic setup. (Zhang et al., 2019) considered monotone DR-submodular functions over downward-closed convex sets and achieved (1 − 1/e)-regret of O(T^{8/9}) in the adversarial setting. (Zhang et al., 2023) considered non-monotone DR-submodular functions over downward-closed convex sets and achieved 1/e-regret of O(T^{8/9}) in the adversarial setting. We note that in both cases, the stochastic setup leads to improved regret bounds. Further, we note that the regret analysis in (Niazadeh et al., 2021) for the adversarial case has errors (see Appendix B), and is thus not compared, while our results for the stochastic case are still better than theirs for the adversarial case. Further details on the prior works given in Tables 1 and 2 are provided in the supplementary materials.
Background and Notation
We introduce some basic notions, concepts and assumptions which will be used throughout the paper. For any vector x ∈ R^d, [x]_i is the i-th entry of x. We consider the partial order on R^d where x ≤ y if and only if [x]_i ≤ [y]_i for all 1 ≤ i ≤ d. For two vectors x, y ∈ R^d, the join of x and y, denoted by x ∨ y, and the meet of x and y, denoted by x ∧ y, are defined by

x ∨ y := (max{[x]_i, [y]_i})_{i=1}^{d} and x ∧ y := (min{[x]_i, [y]_i})_{i=1}^{d}, (1)
respectively. Clearly, we have x ∧ y ≤ x ≤ x ∨ y. We use ‖·‖ to denote the Euclidean norm and ‖·‖_∞ to denote the supremum norm. In the paper, we consider a bounded convex domain K and w.l.o.g. assume that K ⊆ [0, 1]^d. We say that K is down-closed (d.c.) if there is a point u ∈ K such that for all z ∈ K, we have {x | u ≤ x ≤ z} ⊆ K. The diameter D of the convex domain K is defined as D := sup_{x,y∈K} ‖x − y‖. We use B_r(x) to denote the open ball of radius r centered at x. More generally, for a subset X ⊆ R^d, we define B_r(X) := ∪_{x∈X} B_r(x). If A is an affine subspace of R^d, then we define B^A_r(X) := A ∩ B_r(X). We will use R^d_+ to denote the set {x ∈ R^d | x ≥ 0}. For any set X ⊆ R^d, the affine hull of X, denoted by aff(X), is defined to be the intersection of all affine subsets of R^d that contain X. The relative interior of a set X is defined by

relint(X) := {x ∈ X | ∃ε > 0, B^{aff(X)}_ε(x) ⊆ X}.
It is well known that for any non-empty convex set K, the set relint(K) is always nonempty. We will always assume that the feasible set contains at least two points and therefore dim(aff(K)) ≥ 1, otherwise the optimization problem is trivial and there is nothing to solve.
A set function f : {0, 1}^d → R_+ is called submodular if for all x, y ∈ {0, 1}^d with x ≥ y, we have

f(x ∨ a) − f(x) ≤ f(y ∨ a) − f(y), ∀a ∈ {0, 1}^d. (2)

Submodular functions can be generalized over continuous domains. A function F : [0, 1]^d → R_+ is called DR-submodular if for all vectors x, y ∈ [0, 1]^d with x ≤ y, any basis vector e_i = (0, ..., 0, 1, 0, ..., 0) and any constant c > 0 such that x + ce_i ∈ [0, 1]^d and y + ce_i ∈ [0, 1]^d, it holds that

F(x + ce_i) − F(x) ≥ F(y + ce_i) − F(y). (3)
Note that if the function F is differentiable, then the diminishing-return (DR) property (3) is equivalent to ∇F(x) ≥ ∇F(y) for all x ≤ y with x, y ∈ [0, 1]^d. A function F : D → R_+ is G-Lipschitz continuous if for all x, y ∈ D, |F(x) − F(y)| ≤ G‖x − y‖. A differentiable function F : D → R_+ is L-smooth if for all x, y ∈ D, ‖∇F(x) − ∇F(y)‖ ≤ L‖x − y‖.
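The gradient characterization above suggests a quick numerical sanity check. The sketch below uses an arbitrarily chosen quadratic F(x) = a·x − xᵀHx/2 with an entrywise non-negative H (so all second partial derivatives are non-positive) and verifies ∇F(x) ≥ ∇F(y) on random ordered pairs x ≤ y; the function and tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
H = rng.random((d, d))
H = (H + H.T) / 2                      # symmetric with non-negative entries
a = rng.random(d)
grad = lambda x: a - H @ x             # gradient of F(x) = a.x - x.T H x / 2

# For differentiable F, DR-submodularity is equivalent to the gradient being
# antitone: x <= y (entrywise) implies grad F(x) >= grad F(y) (entrywise).
for _ in range(1000):
    x = rng.random(d)
    y = x + rng.random(d) * (1 - x)    # guarantees x <= y inside [0, 1]^d
    assert np.all(grad(x) >= grad(y) - 1e-12)
print("gradient antitonicity holds on all sampled ordered pairs")
```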
A (possibly randomized) offline algorithm is said to be an α-approximation algorithm (for a constant α ∈ (0, 1]) with ǫ ≥ 0 additive error for a class of maximization problems over non-negative functions if, for any problem instance max_{z∈K} F(z), the algorithm outputs x satisfying the following relation with the optimal solution z*:

αF(z*) − E[F(x)] ≤ ǫ, (4)

where the expectation is with respect to the (possible) randomness of the algorithm. Further, we assume an oracle that can query the value F(x) or the gradient ∇F(x). The number of calls to the oracle needed to achieve the error bound in (4) is called the evaluation complexity.
Offline Algorithms and Guarantees
In this section, we consider the problem of maximizing a DR-submodular function over a general convex set in sixteen different cases, enumerated in Table 1. After setting up the problem in Section 3.1, we then explain two key elements of our proposed algorithm when we only have access to a value oracle, (i) the Black Box Gradient Estimate (BBGE) procedure (Algorithm 1) to balance bias and variance in estimating gradients (Section 3.2) and (ii) the construction of a shrunken feasible region to avoid infeasible value oracle queries during the BBGE procedure (Section 3.3). Our main algorithm is proposed in Section 3.4 and analyzed in Section 3.5.
Problem Setup
We consider a general non-oblivious constrained stochastic optimization problem

max_{z∈K} F(z) := max_{z∈K} E_{x∼p(x;z)}[F̂(z, x)], (5)

where F is a DR-submodular function and F̂ : K × X → R is determined by z and the random variable x, which is independently sampled according to x ∼ p(x; z). We say the oracle has variance σ² if sup_{z∈K} var_{x∼p(x;z)}[F̂(z, x)] = σ². In particular, when σ = 0, we say we have access to an exact (deterministic) value oracle. Similarly, we say we have access to a stochastic gradient oracle if we can sample from a function Ĝ : K × Y → R^d such that ∇F(z) = E_{y∼q(y;z)}[Ĝ(z, y)], where Ĝ is determined by z and the random variable y, which is sampled according to y ∼ q(y; z). Note that the oracles are only defined on the feasible set.
Assumption 1 We assume that F : [0, 1]^d → R is DR-submodular, first-order differentiable, non-negative, G-Lipschitz for some G < ∞, and L-smooth for some L < ∞. We also assume the feasible region K is a closed convex domain in [0, 1]^d with at least two points. Moreover, we assume that we either have access to a value oracle with variance σ₀² ≥ 0 or a gradient oracle with variance σ₁² ≥ 0.

Remark 1 The proposed algorithm does not need to know the values of L, G, σ₀ or σ₁. However, these constants appear in the final expressions for the number of oracle calls and the regret bounds.
Black Box Gradient Estimate
Without access to a gradient oracle (i.e., first-order information), we estimate gradient information using samples from a value oracle. We will use a variation of the "smoothing trick" technique (Flaxman et al., 2005; Hazan et al., 2016; Agarwal et al., 2010; Shamir, 2017; Zhang et al., 2019; Chen et al., 2020; Zhang et al., 2023), which involves averaging through spherical sampling around a given point.
Definition 2 (Smoothing Trick) For a function F : D → R defined on D ⊆ R^d, its δ-smoothed version F̃_δ is given as

F̃_δ(x) := E_{z∼B^{aff(D)}_δ(x)}[F(z)] = E_{v∼B^{aff(D)−x}_1(0)}[F(x + δv)], (6)

where v is chosen uniformly at random from the dim(aff(D))-dimensional ball B^{aff(D)−x}_1(0). Thus, the function value F̃_δ(x) is obtained by "averaging" F over a sliced ball of radius δ around x.

When the value of δ is clear from the context, we may drop the subscript and simply use F̃ to denote the smoothed version of F. It can be easily seen that if F is DR-submodular, G-Lipschitz continuous, and L-smooth, then so is F̃, and |F̃(x) − F(x)| ≤ δG for any point in the domain of both functions. Moreover, if F is monotone, then so is F̃ (Lemma 11). Therefore F̃_δ is an approximation of the function F, and a maximizer of F̃_δ also maximizes F approximately.
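For intuition, the δ-smoothed value of Definition 2 can be approximated by direct Monte Carlo sampling. Below is a minimal sketch for the full-dimensional case aff(D) = R^d; the sample count and the Gaussian-direction trick for uniform ball sampling are implementation choices, not taken from the paper.

```python
import numpy as np

def smoothed_value(F, x, delta, n_samples=20000, seed=0):
    """Monte Carlo estimate of F_delta(x) = E_{v ~ B_1(0)} [F(x + delta * v)],
    assuming aff(D) = R^d so the full-dimensional unit ball is used."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = rng.normal(size=(n_samples, d))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform directions
    radii = rng.random(n_samples) ** (1.0 / d)         # uniform-in-ball radii
    return float(np.mean([F(x + delta * r * ui) for r, ui in zip(radii, u)]))
```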
Algorithm 1 Black Box Gradient Estimate (BBGE)
1: Input: Point z, sampling radius δ, constraint linear space L, batch size B
2: Sample u_1, ..., u_B i.i.d. from S^{d−1} ∩ L
3: For i = 1 to B, let y⁺_i ← z + δu_i, y⁻_i ← z − δu_i, and evaluate F̂(y⁺_i), F̂(y⁻_i)
4: g ← (1/B) ∑_{i=1}^{B} (d/2δ) [F̂(y⁺_i) − F̂(y⁻_i)] u_i
5: Output g
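A direct translation of Algorithm 1 into code might look as follows. This is only a sketch: `F_hat` stands for the (possibly stochastic) value oracle and `basis` is an assumed orthonormal basis of the constraint linear space L, neither of which is specified by the pseudocode itself.

```python
import numpy as np

def bbge(F_hat, z, delta, basis, B, seed=0):
    """Black Box Gradient Estimate (Algorithm 1): two-point spherical sampling
    restricted to L = span(basis), averaged over a batch of size B."""
    rng = np.random.default_rng(seed)
    d = z.size
    g = np.zeros(d)
    for _ in range(B):
        w = rng.normal(size=basis.shape[1])
        u = basis @ (w / np.linalg.norm(w))        # uniform on S^{d-1} ∩ L
        g += (d / (2 * delta)) * (F_hat(z + delta * u) - F_hat(z - delta * u)) * u
    return g / B
```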
Our definition of the smoothing trick differs from the standard usage by accounting for the affine hull containing D. This will be of particular importance when the feasible region has (affine) dimension less than d, such as when there are equality constraints. When aff(D) = R^d, our definition reduces to the standard smoothing trick. In this case, it is well known that the gradient of the smoothed function F̃_δ admits an unbiased one-point estimator (Flaxman et al., 2005; Hazan et al., 2016). Using a two-point estimator instead of the one-point estimator results in smaller variance (Agarwal et al., 2010; Shamir, 2017). In Algorithm 1, we adapt the two-point estimator to this general setting.
Construction of K_δ

We want to run Algorithm 1 as a subroutine within the main algorithm to estimate the gradient. However, in order to run Algorithm 1, we need to be able to query the oracle within the set B^{aff(K)}_δ(x). Since the oracle can only be queried at points within the feasible set, we need to restrict our attention to a set K_δ such that B^{aff(K)}_δ(K_δ) ⊆ K. On the other hand, we want the optimal point of F within K_δ to be close to the optimal point of F within K. One way to ensure this is to require that K_δ not be too small. More formally, we want B^{aff(K)}_{δ′}(K_δ) ⊇ K for some value of δ′ ≥ δ that is not too large. The constraint boundary can have a complex geometry, and simply maintaining a margin of size δ away from the boundary can result in big gaps between the boundaries of K and K_δ. For example, in two dimensions, if K is polyhedral and has an acute angle, maintaining a δ margin away from both edges adjacent to the acute angle means the closest point of K_δ to the corner may be much farther away than δ. For this construction, we choose a point c ∈ relint(K) and a real number r > 0 such that B^{aff(K)}_r(c) ⊆ K. For any δ < r, we define

K^{c,r}_δ := (1 − δ/r)K + (δ/r)c. (7)

Clearly, if K is downward-closed, then so is K^{c,r}_δ. Lemma 15 shows that for any such choice of c and r > 0, we have δ′/δ ≤ D/r. See Appendix E for more details about the choice of c and r. We drop the superscripts in the rest of the paper when there is no ambiguity.
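To make the construction concrete, the sketch below instantiates (7) for a thin triangle with an acute corner (the shape mentioned above) and empirically checks the inclusion B_δ(K_δ) ⊆ K guaranteed by Lemma 15; the particular triangle, center c and radius r are hand-picked assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# K is the thin triangle {x >= 0, y >= 0, x + 10*y <= 1}, which has an acute corner.
inside_K = lambda p: p[0] >= -1e-9 and p[1] >= -1e-9 and p[0] + 10 * p[1] <= 1 + 1e-9
c, r, delta = np.array([0.2, 0.035]), 0.03, 0.01     # B_r(c) ⊆ K, checked by hand

def shrink(x):
    """The map psi: K -> K_delta from Eq. (7)."""
    return (1 - delta / r) * x + (delta / r) * c

for _ in range(5000):                                # empirical B_delta(K_delta) ⊆ K
    x = rng.random(2) * np.array([1.0, 0.1])
    if not inside_K(x):
        continue
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    assert inside_K(shrink(x) + delta * v)
print("all sampled delta-perturbations of K_delta stayed inside K")
```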
Remark 3 This construction is similar to the one carried out in (Zhang et al., 2019), which was for d-dimensional downward-closed sets. Here we impose no restrictions on K beyond Assumption 1. A simpler construction of a shrunken constraint set was proposed in (Chen et al., 2020); however, as we discuss in Appendix B, it requires the ability to query outside of the constraint set.
Generalized DR-Submodular Frank-Wolfe
Algorithm 2 Generalized DR-Submodular Frank-Wolfe
1: Input: Constraint set K, iteration limit N ≥ 4, sampling radius δ, gradient step-sizes {ρ_n}_{n=1}^{N}
2: Construct K_δ
3: Pick any z_1 ∈ argmin_{z∈K_δ} ‖z‖_∞
4: ḡ_0 ← 0
5: for n = 1 to N do
6:   g_n ← estimate-grad(z_n, δ, L = aff(K) − z_1)
7:   ḡ_n ← (1 − ρ_n)ḡ_{n−1} + ρ_n g_n
8:   v_n ← optimal-direction(ḡ_n, z_n)
9:   z_{n+1} ← update(z_n, v_n, ε)
10: end for
11: Output z_{N+1}
The pseudocode of our proposed offline algorithm, Generalized DR-Submodular Frank-Wolfe, is shown in Algorithm 2. At a high level, it follows the basic template of Frank-Wolfe type methods: over the course of a pre-specified number of iterations, the gradient (or a surrogate thereof) is calculated, an optimization sub-routine with a linear objective is solved to find a feasible point whose difference (with respect to the current solution) has the largest inner product with the gradient, and the current solution is then updated to move in the direction of that feasible point.
However, there are a number of important modifications to handle the properties of the objective function, constraint set, and oracle type. Regarding the oracle type, for instance, standard Frank-Wolfe methods assume access to a deterministic gradient oracle. Frank-Wolfe methods are known to be sensitive to errors in estimates of the gradient (e.g., see (Hassani et al., 2017)). Thus, when only a stochastic gradient oracle, or even more challenging, only a stochastic value oracle is available, the gradient estimators must be carefully designed to balance query complexity on the one hand and output error on the other. The Black Box Gradient Estimate (BBGE) sub-routine, presented in Algorithm 1, utilizes spherical sampling to produce an unbiased gradient estimate. This estimate is then combined with past estimates using momentum, as seen in (Mokhtari et al., 2020), to control and reduce the variance. Our algorithm design is influenced by state-of-the-art methods that have been developed for specific settings. One of the most closely related works is (Chen et al., 2020), which also dealt with using value oracle access for optimizing monotone functions. They used momentum and spherical sampling techniques similar to the ones we use in our Algorithm 1. However, we modified the sampling procedure and the solution update step. In their work, (Chen et al., 2020) also considered a shrunken feasible region to avoid sampling close to the boundary; however, they assumed that the value oracle could be queried outside the feasible set (see Appendix B for details).
In Algorithm 2, we consider the following cases for the function and the feasible set.

(A) If F is monotone DR-submodular and 0 ∈ K, we choose optimal-direction(ḡ_n, z_n) = argmax_{v∈K_δ−z_1} ⟨v, ḡ_n⟩, update(z_n, v_n, ε) = z_n + εv_n, and ε = 1/N. We start at a point near the origin and always move to points that are bigger with respect to the partial order on R^d. In this case, since the function is monotone, the optimal direction is a maximal point with respect to the partial order. The choice of ε = 1/N guarantees that after N steps we arrive at a convex combination of points in the feasible set, and therefore the final point is also in the feasible set. The fact that the origin is also in the feasible set shows that the intermediate points belong to the feasible set as well.

(B) If F is non-monotone DR-submodular and K is a downward-closed set containing 0, we choose optimal-direction(ḡ_n, z_n) = argmax_{v∈K_δ−z_1, v≤1−z_n} ⟨v, ḡ_n⟩, update(z_n, v_n, ε) = z_n + εv_n, and ε = 1/N. This case is similar to (A); however, since F is not monotone, we need to choose the optimal direction more conservatively.

(C) If F is monotone DR-submodular and K is a general convex set, we choose optimal-direction(ḡ_n, z_n) = argmax_{v∈K_δ} ⟨v, ḡ_n⟩, update(z_n, v_n, ε) = (1 − ε)z_n + εv_n, and ε = log(N)/2N. In this case, if we update as in cases (A) and (B), we have no guarantee of ending up in the feasible set, so we choose the update function to be a convex combination. Unlike (B), we do not need to limit ourselves in choosing the optimal direction, and we simply choose ε to obtain the best approximation coefficient.

(D) If F is non-monotone DR-submodular and K is a general convex set, we choose optimal-direction(ḡ_n, z_n) = argmax_{v∈K_δ} ⟨v, ḡ_n⟩, update(z_n, v_n, ε) = (1 − ε)z_n + εv_n, and ε = log(2)/N. This case is similar to (C), and we choose ε to obtain the best approximation coefficient.

The choice of the subroutine estimate-grad and of ρ_n depends on the oracle. If we have access to a gradient oracle Ĝ, we set estimate-grad(z, δ, L) to be the average of B evaluations of Ĝ(z)|_L. Otherwise, we run Algorithm 1 with input (z, δ, L). If we have access to a deterministic gradient oracle, then there is no need to use any momentum and we set ρ_n = 1. In the other cases, we choose ρ_n = 2/(n + 3)^{2/3}. A Python sketch of the resulting loop for case (A) is given below.
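In the sketch below, `lmo` (a linear maximization oracle over K_δ − z_1) and `grad_est` (step 6 of Algorithm 2, e.g., the BBGE sketch above) are assumed to be supplied by the caller; both names are illustrative.

```python
import numpy as np

def generalized_dr_fw(lmo, grad_est, z1, N):
    """Sketch of Algorithm 2, case (A): monotone F with 0 in K.
    lmo(g) should solve argmax over v in K_delta - z1 of <v, g>."""
    z = z1.astype(float).copy()
    g_bar = np.zeros_like(z)
    eps = 1.0 / N
    for n in range(1, N + 1):
        rho = 2.0 / (n + 3) ** (2.0 / 3.0)     # momentum weight (stochastic oracles)
        g_bar = (1 - rho) * g_bar + rho * grad_est(z)
        z = z + eps * lmo(g_bar)               # update rule of cases (A)/(B)
    return z
```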
Approximation Guarantees for the Proposed Offline Algorithm
Theorem 4 Suppose Assumption 1 holds. Let N ≥ 4, B ≥ 1 and choose c ∈ K and r > 0 according to Section 3.3. If we have access to a gradient oracle, we choose δ = 0; otherwise we choose δ ∈ (0, r). Then the following results hold for the output z_{N+1} of Algorithm 2.

(A) If F is monotone DR-submodular and 0 ∈ K, then

(1 − e^{−1})F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N + δG(2 + √d + D/r). (8)

(B) If F is non-monotone DR-submodular and K is a downward-closed set containing 0, then

e^{−1}F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N + δG(2 + √d + D/r). (9)

(C) If F is monotone DR-submodular and K is a general convex set, then

(1/2)F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}log(N)/2N^{1/3} + (4DG + LD²log(N)²)/8N + δG(2 + D/r). (10)

(D) If F is non-monotone DR-submodular and K is a general convex set, then

(1/4)(1 − ‖z_1‖_∞)F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + (DG + 2LD²)/4N + δG(2 + D/r). (11)

In all these cases, we have

Q = 0 for a deterministic gradient oracle;
Q = max{4^{2/3}G², 6L²D² + 4σ₁²/B} for a stochastic gradient oracle with variance σ₁² > 0;
Q = max{4^{2/3}G², 6L²D² + (4CdG² + 2d²σ₀²/δ²)/B} for a value oracle with variance σ₀² ≥ 0;

where C is a constant, D = diam(K), and z* is the global maximizer of F on K.
Theorem 4 characterizes the worst-case approximation ratio α and additive error bounds for different properties of the function and feasible region, where the additive error bounds depend on selected parameters N for the number of iterations, batch size B, and sampling radius δ.
The proofs of Parts (A)-(D) are provided in Appendices G-J, respectively. The proofs of parts (A), (B) and (D) when we have access to an exact gradient oracle are similar to the proofs presented in (Bian et al., 2017b), (Bian et al., 2017a) and (Mualem and Feldman, 2023), respectively. Part (C) is the first analysis of a Frank-Wolfe type algorithm over general convex sets when the oracle can only be queried within the feasible set. When we have access to a stochastic gradient oracle, directly using a gradient sample can result in arbitrarily bad performance, as shown in Appendix B of (Hassani et al., 2017). The momentum technique, first used in continuous submodular maximization in (Mokhtari et al., 2020), is used when we have access to a stochastic gradient oracle. The control of the gradient estimate is deferred to Lemma 17. Since the momentum technique is robust to noise in the gradient, when we only have access to a value oracle we can use Algorithm 1, similar to (Chen et al., 2020), to obtain an unbiased estimate of the gradient and complete the proof.
Theorem 5 converts those bounds to characterize the oracle complexity for a user-specified additive error tolerance ǫ, based on the oracle properties (deterministic/stochastic, gradient/value); the sixteen combinations of problem settings listed in Table 1 reduce to the four oracle cases below.

Theorem 5 The number of oracle calls needed by Algorithm 2 to achieve an α-approximation error smaller than ǫ is:

Case 1 (deterministic gradient oracle): Õ(1/ǫ);
Case 2 (stochastic gradient oracle): Õ(1/ǫ³);
Case 3 (deterministic value oracle): Õ(1/ǫ³);
Case 4 (stochastic value oracle): Õ(1/ǫ⁵).

Moreover, in all of the cases above, if F is non-monotone or 0 ∈ K, we may replace Õ with O.
See Appendix K for proof.
Online DR-submodular optimization under bandit or semi-bandit feedback
In this section, we first describe the Black-box Explore-Then-Commit algorithm, which uses the offline algorithm for exploration and then commits to the solution of the offline algorithm for exploitation. This is followed by a regret analysis of the proposed algorithm. This is the first algorithm for stochastic continuous DR-submodular maximization under bandit feedback, and it obtains state-of-the-art regret bounds under semi-bandit feedback.
Problem Setup
There are typically two settings considered in online optimization with bandit feedback. The first is the adversarial setting, where the environment chooses a sequence of functions F_1, ..., F_N, and in each iteration n the agent chooses a point z_n in the feasible set K, observes F_n(z_n) and receives the reward F_n(z_n). The goal is to choose the sequence of actions that minimizes the following notion of expected α-regret:

R_adv := α max_{z∈K} ∑_{n=1}^{N} F_n(z) − E[∑_{n=1}^{N} F_n(z_n)]. (12)

In other words, the agent's cumulative reward is compared to α times the reward of the best constant action in hindsight. Note that, in this case, the randomness is over the actions of the policy. The second is the stochastic setting, where the environment chooses a function F : K → R and a stochastic value oracle F̂. In each iteration n, the agent chooses a point z_n in the feasible set K, receives the reward (F̂(z_n))_n by querying the oracle at z_n, and observes this reward. Here the outer subscript n indicates the result of querying the oracle at time n, since the oracle is stochastic. The goal is to choose the sequence of actions that minimizes the following notion of expected α-regret:

R_stoch := αN max_{z∈K} F(z) − E[∑_{n=1}^{N} (F̂(z_n))_n] = αN max_{z∈K} F(z) − E[∑_{n=1}^{N} F(z_n)]. (13)
Further, two feedback models are considered: bandit and semi-bandit feedback. In the bandit feedback setting, the agent only observes the value of the function F_n at the point z_n. In the semi-bandit setting, the agent has access to a gradient oracle instead of a value oracle and observes Ĝ(z_n) at the point z_n, where Ĝ is an unbiased estimator of ∇F.

In unstructured multi-armed bandit problems, any regret bound for the adversarial setup can be translated into a bound for the stochastic setup. However, a non-trivial correlation between the actions of different arms complicates the relation between the stochastic and adversarial settings. Even in linear bandits, the relation between adversarial and stochastic linear bandits is not trivial (e.g., see Section 29 in (Lattimore and Szepesvári, 2020)). While it is intuitively reasonable to assume that the optimal regret bounds for the stochastic case are better than those of the adversarial case, such a result has not yet been proven for DR-submodular functions. Thus, while bandit feedback has been studied in the adversarial setup, those results do not carry over to the stochastic setup. We also note that in the cases where adversarial results exist, this paper finds that the results in the stochastic setup achieve improved regret bounds (see Table 3 in the Supplementary for the comparison).
Algorithm for DR-submodular maximization with Bandit Feedback
Algorithm 3 DR-Submodular Explore-Then-Commit
1: Input: Horizon T, inner time horizon T_0
2: Run Algorithm 2 for T_0 queries, with parameters as described in Theorem 5.
3: for the remaining time do
4:   Repeat the last action of Algorithm 2.
5: end for

The proposed algorithm is described in Algorithm 3. If there is semi-bandit feedback in the form of a stochastic gradient sample for each action z_n, we run the offline algorithm (Algorithm 2) with the parameters from the proof of Case 2 of Theorem 5 for T_0 = ⌈T^{3/4}⌉ total queries. If only the stochastic reward for each action z_n is available (bandit feedback), we run the offline algorithm (Algorithm 2) with the parameters from the proof of Case 4 of Theorem 5 for T_0 = ⌈T^{5/6}⌉ total queries. Then, for the remaining time (the exploitation phase), we repeat the last action of the exploration phase.
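A compact sketch of Algorithm 3 follows; `offline_step` (one oracle query of Algorithm 2, returning its current action) and `play` (committing an action and collecting its reward) are placeholder names for the environment interface, which the pseudocode leaves abstract.

```python
import math

def dr_submodular_etc(offline_step, play, T, bandit=True):
    """Explore-then-commit: run Algorithm 2 for T0 queries, then repeat its
    last action for the remaining rounds."""
    T0 = math.ceil(T ** (5.0 / 6.0) if bandit else T ** (3.0 / 4.0))
    z = None
    for t in range(min(T0, T)):        # exploration phase
        z = offline_step(t)
    total_reward = 0.0
    for _ in range(T - T0):            # exploitation phase
        total_reward += play(z)
    return z, total_reward
```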
Regret Analysis for DR-submodular maximization with Bandit Feedback
In this section, we provide the regret analysis for the proposed algorithm. We note that by Theorem 5, Algorithm 2 requires a sample complexity of Õ(1/ǫ⁵) with a stochastic value oracle for the offline problems (any of (A)-(D) in Theorem 4). Thus, the parameters and the results with bandit feedback are the same for all four setups (A)-(D). Likewise, when a stochastic gradient oracle is available, Algorithm 2 requires a sample complexity of O(1/ǫ³). Based on these sample complexities, the overall regret of the online DR-submodular maximization problem is given as follows.

Theorem 6 For an online constrained DR-submodular maximization problem over a horizon T, where the expected reward function F, feasible region type K, and approximation ratio α correspond to any of the four cases (A)-(D) in Theorem 4, Algorithm 3 achieves an α-regret (13) that is upper-bounded as:

Semi-bandit feedback (Case 2): Õ(T^{3/4});
Bandit feedback (Case 4): Õ(T^{5/6}).

Moreover, for either type of feedback, if F is non-monotone or 0 ∈ K, we may replace Õ with O.
See Appendix L for the proof.
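For intuition, the exponents in Theorem 6 come from balancing the two phases. A sketch of the calculation for bandit feedback, under the assumption that each exploration round contributes at most a bounded per-round regret (F is bounded on K) and that, by the Case 4 complexity of Theorem 5, T_0 exploration queries leave an additive error ε = Õ(T_0^{−1/5}):

```latex
\mathcal{R} \lesssim \underbrace{T_0}_{\text{exploration}}
  + \underbrace{(T - T_0)\,\tilde{O}\!\left(T_0^{-1/5}\right)}_{\text{exploitation}}
  \le T_0 + \tilde{O}\!\left(T\,T_0^{-1/5}\right),
```

so setting T_0 = ⌈T^{5/6}⌉ balances the two terms at Õ(T^{5/6}). The same computation with ε = Õ(T_0^{−1/3}) (Case 2) and T_0 = ⌈T^{3/4}⌉ gives the semi-bandit bound Õ(T^{3/4}).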
Conclusion
This work provides a novel and unified approach for maximizing continuous DR-submodular functions across various assumptions on the function, the constraint set, and the oracle access type. The proposed Frank-Wolfe based algorithm improves upon existing results for nine out of the sixteen cases considered, and presents new results for offline DR-submodular maximization with only a value oracle. Moreover, this work presents the first regret analysis with bandit feedback for stochastic DR-submodular maximization, covering both monotone and non-monotone functions. These contributions significantly advance the field of DR-submodular optimization (with multiple applications) and open up new avenues for future research in this area.
References
Appendix A. Details of Related Works
A.1 Offline DR-submodular maximization
The authors of (Bian et al., 2017b) considered the problem of maximizing a monotone DR-submodular function over a downward-closed convex set given a deterministic gradient oracle. They showed that a variant of the Frank-Wolfe algorithm guarantees an optimal (1 − 1/e)-approximation for this problem. While they only claimed their result for downward-closed convex sets, their result holds in the more general setting where the convex set contains the origin. In (Bian et al., 2017a), a non-monotone variant of the algorithm for downward-closed convex sets with a 1/e-approximation was proposed. The authors of (Hassani et al., 2017) used gradient ascent to obtain 1/2-guarantees for the maximization of a monotone DR-submodular function over a general convex set given a gradient oracle, which could be stochastic. They proved that gradient ascent cannot guarantee better than a 1/2-approximation by constructing a convex set K and a function F : K → R such that F has a local maximum that is a 1/2-approximation of its optimal value on K. They also showed that a Frank-Wolfe type algorithm similar to (Bian et al., 2017b) cannot be directly used when we only have access to a stochastic gradient oracle. Later, (Mokhtari et al., 2020) resolved the issue of stochastic gradient oracles with a momentum technique and obtained a (1 − 1/e)-approximation in the case of monotone functions over sets that contain the origin, and a 1/e-approximation in the case of non-monotone functions over downward-closed sets. In the first case, while they consider monotone DR-submodular functions over general convex sets K, they query the oracle over the convex hull of K ∪ {0} (see Remark 8).
For non-monotone maps over general convex sets, no constant approximation ratio can be guaranteed in sub-exponential time, due to a hardness result of (Vondrák, 2013). However, (Dürr et al., 2019) bypassed this issue by finding an approximation guarantee that depends on the geometry of the convex set. Specifically, they showed that given a deterministic gradient oracle for a non-monotone function over a general convex set K ⊆ [0, 1]^d, their proposed algorithm obtains a (1/(3√3))(1 − h)-approximation of the optimal value, where h := min_{z∈K} ‖z‖_∞. An improved sub-exponential algorithm was proposed by (Du et al., 2022) that obtained a (1/4)(1 − h)-approximation guarantee, which is optimal. Later, (Du, 2022) provided the first polynomial-time algorithm for this setting with the same approximation coefficient.
Remark 7 In the special case of maximizing a non-monotone continuous DR-submodular function over a box, i.e. [0, 1]^d, one can discretize the problem and use discrete algorithms to solve the continuous version. This technique has been employed in (Bian et al., 2017a) to obtain a 1/3-approximation and in (Bian et al., 2019b; Niazadeh et al., 2020) to obtain 1/2-approximations of the optimal value. We have not included these results in Table 1 since discretization has only been successfully applied to the case where the convex set is a box and cannot be directly used in more general settings.
Remark 8 Let K ⊆ [0, 1]^d be a convex set, and define K* as the convex hull of K ∪ {0}. For a problem in the setting of monotone functions over a general set K, we can consider the same problem on K*. Since the function is monotone, the optimal solution in K* is the same as the optimal solution in K. However, solving the problem on K* may require evaluating the function on the larger set K*, which may not always be possible. In fact, the result of (Mokhtari et al., 2020) mentioned in Table 1 is for monotone functions over general convex sets K, but their algorithms require evaluating the function on K*. This is why we have classified their results as algorithms for convex sets that contain the origin. The problem of offline DR-submodular maximization with only a value oracle was first considered by (Chen et al., 2020) for monotone maps over convex sets that contain the origin. However, their result requires evaluating queries in a neighborhood of K*, which violates our requirement to only query the oracle within the feasible set (see Appendix B).
A.2 Online DR-submodular maximization with bandit feedback
There has been growing interest in online DR-submodular maximization in recent years. In the adversarial setting, the environment chooses a sequence of functions F_1, ..., F_N, and in each iteration n the agent chooses a point z_n in the feasible set K, observes F_n and receives the reward F_n(z_n). For the regret bound, the agent's reward is compared to α times the reward of the best constant action in hindsight. With full-information feedback, when the agent observes F_n at each iteration, it may be allowed to query the value of ∇F_n or of F_n at any number of arbitrary points within the feasible set. Further, we consider the stochastic setting, where the environment chooses a function F : K → R and a sequence of independent zero-mean noise functions η_n : K → R. In each iteration n, the agent chooses a point z_n in the feasible set K, receives the reward (F + η_n)(z_n) and observes this reward. For the regret bound, the agent's reward is compared to α times the reward of the best action. Detailed formulations of the adversarial and stochastic setups, and of why adversarial results cannot be reduced to stochastic results, are given in Section 4.1. In this paper, we consider two feedback models: bandit feedback, where only the (stochastic) reward value is available, and semi-bandit feedback, where a single stochastic sample of the gradient at the queried point is provided. Bandit feedback: We note that this paper is the first work on bandit feedback for stochastic online DR-submodular maximization. The prior works on this topic have been in the adversarial setup (Zhang et al., 2019, 2023; Niazadeh et al., 2021); the results in this work are compared with theirs in Table 3, and the stochastic case is found to lead to improved regret. In (Zhang et al., 2019), the adversarial online setting with bandit feedback was studied for monotone DR-submodular functions over downward-closed convex sets. Later, (Zhang et al., 2023) extended this framework to the setting with non-monotone DR-submodular functions over downward-closed convex sets. (Niazadeh et al., 2021) described a framework for converting certain greedy-type offline algorithms with robustness guarantees into adversarial online algorithms for both full-information and bandit feedback. They apply their framework to obtain algorithms for non-monotone functions over a box, with 1/2-regret of Õ(T^{4/5}), and for monotone functions over downward-closed convex sets. The offline algorithm they use for downward-closed convex sets is the one described in (Bian et al., 2017b), which only requires the convex set to contain the origin. However, as we describe in Appendix B, their application is not correct.
Semi-bandit feedback: In semi-bandit feedback, a single stochastic sample of the gradient is available. The problem has been considered in (Chen et al., 2018a), but the results have an error (see Appendix B). Further, they only obtain 1/e-regret in the monotone case. One can consider a generalization of the adversarial and stochastic settings in the following manner. The environment chooses a sequence of functions F_n and a sequence of value oracles F̂_n such that F̂_n estimates F_n. In each iteration n, the agent chooses a point z_n in the feasible set K, receives the reward (F̂_n(z_n))_n by querying the oracle at z_n, and observes this reward. The goal is to choose the sequence of actions that minimizes the following notion of expected α-regret:

R_stoch-adv := α max_{z∈K} ∑_{n=1}^{N} F_n(z) − E[∑_{n=1}^{N} (F̂_n(z_n))_n] = α max_{z∈K} ∑_{n=1}^{N} F_n(z) − E[∑_{n=1}^{N} F_n(z_n)]. (14)

Algorithm 3 of (Chen et al., 2018b) solves this problem in the semi-bandit feedback setting with a deterministic value oracle and stochastic gradient oracles. Any bound for a problem in this setting implies bounds for the stochastic semi-bandit and adversarial semi-bandit settings. The same is true for the Mono-Frank-Wolfe algorithms in (Zhang et al., 2019, 2023). We have included these results in Table 2 as benchmarks to compare with results in the stochastic setting.
Appendix B. Comments on previous results in literature
Construction of K′ and error estimate in (Chen et al., 2020). In (Chen et al., 2020), the set K′ + δ1 plays a role similar to that of the set K_δ defined in this paper. Algorithm 2, in the case with access to a value oracle for a monotone DR-submodular function with constraint set K such that aff(K) = R^d and 0 ∈ K, reduces to the BBCG algorithm of (Chen et al., 2020) if we replace K_δ with their construction K′ + δ1. In their paper, K′ is defined by

K′ := (K − δ1) ∩ [0, 1 − 2δ]^d. (15)

There are a few issues with this construction and the subsequent analysis that require more care.
1. The BBCG algorithm almost always needs to be able to query the value oracle outside the feasible set.

We have K′ + δ1 = K ∩ [δ, 1 − δ]^d. The BBCG algorithm starts at δ1 and behaves similarly to Algorithm 2 in the monotone 0 ∈ K case. It follows that the set of points that BBCG requires to be able to query is

Q_δ := B_δ(convex-hull((K′ + δ1) ∪ {δ1})) = B_δ(convex-hull(K ∪ {δ1}) ∩ [δ, 1 − δ]^d).

If 1 ∈ K, then the problem becomes trivial since F is monotone. If K is contained in the boundary of [0, 1]^d, then we need to restrict ourselves to the affine subspace containing K and solve the problem in a lower dimension in order to be able to use the BBCG algorithm, as K′ would be empty otherwise. We want to show that in all other cases, Q_δ \ K ≠ ∅. If K′ is non-empty and 1 ∉ K, let x_δ be a maximizer of ‖·‖_∞ over K′ + δ1. If x_δ ≠ (1 − δ)1, then there is a point y ∈ B_δ(x_δ) ∩ [δ, 1 − δ]^d ⊆ Q_δ such that y > x_δ, which implies that y ∉ K (otherwise y would contradict the maximality of x_δ). Therefore, we only need to prove the statement when (1 − δ)1 ∈ K ∩ [δ, 1 − δ]^d for all small δ. In this case, since K is closed, we see that (1 − δ)1 → 1 ∈ K, a contradiction. In other words, except in trivial cases, BBCG always requires being able to query outside the feasible set.
2. The exact error bound can be arbitrarily far away from the correct error bound, depending on the geometry of the constraint set.

In Equation (69) in the appendix of (Chen et al., 2020), it is stated that

F̃(x*_δ) ≥ F̃(x*) − δG√d, (16)

where x* is the optimal solution, x*_δ is the optimal solution within K′ + δ1, and G is the Lipschitz constant. Next we construct an example where this inequality does not hold. Consider the set K = {(x, y) ∈ [0, 1]² | x + λy ≤ 1} for some value of λ to be specified, and let F((x, y)) = Gx. Clearly we have x* = (1, 0). Thus, for any δ > 0, we have

K′ + δ1 = {(x, y) ∈ [δ, 1 − δ]² | x + λy ≤ 1}.

It follows that when λ ≤ 1/δ − 1, the set K′ is non-empty and x*_δ = (1 − λδ, δ). Then we have

F̃(x*_δ) − F̃(x*) = −λδG.

Therefore, (16) is correct if and only if λ ≤ √d = √2. Since this does not hold in general, as λ depends on the geometry of the convex set, the equation is not true in general, making the overall proof incorrect. The issue here is that λ, which depends on the geometry of the convex set K, should appear in (16). Without restricting ourselves to convex sets with "controlled" geometry, and without including a term such as 1/r as in Theorem 4, we would not be able to use this method to obtain an error bound. We note that while their analysis has an issue, the algorithm itself is still fine. Using a proof technique similar to ours, their proof can be fixed; more precisely, we can modify (16) in a manner similar to (24) and (31), depending on the case, and that will fix their proofs. A small numeric check of this counterexample is given below.
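A minimal numeric check of the counterexample; G, λ and δ are arbitrary illustrative values, and since F is linear its δ-smoothed version coincides with F here.

```python
import numpy as np

G, lam, delta, d = 1.0, 10.0, 0.05, 2
F = lambda p: G * p[0]                              # linear, so F_tilde == F
x_star = np.array([1.0, 0.0])                       # maximizer of F over K
x_star_delta = np.array([1 - lam * delta, delta])   # maximizer over K' + delta*1
lhs = F(x_star_delta)                               # = G (1 - lam * delta)
rhs = F(x_star) - delta * G * np.sqrt(d)            # lower bound claimed in (16)
print(lhs, rhs, lhs >= rhs)                         # prints False whenever lam > sqrt(2)
```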
Approximation guarantee of Algorithm 10 in (Niazadeh et al., 2021). In (Niazadeh et al., 2021), a framework is introduced for creating online algorithms from offline ones under certain robustness conditions. As mentioned in Algorithm 7 of that paper, the offline algorithm used for monotone DR-submodular maximization over downward-closed convex sets is the one proposed in (Bian et al., 2017b). While this algorithm works well with exact gradients, it was shown in Appendix B of (Hassani et al., 2017) that directly replacing the gradient with an unbiased estimator cannot guarantee the (1 − 1/e)-approximation coefficient and can in fact result in arbitrarily poor output. In Algorithm 10 of (Niazadeh et al., 2021), as described in Section F.3 of their appendix, they replace the exact gradient with an unbiased estimator before converting the offline algorithm into an online one. Since the approximation guarantees no longer hold for the offline algorithm after such a replacement, the approximation guarantees for the online algorithm also fail.
One-Shot Frank-Wolfe algorithm in (Chen et al., 2018a). In (Chen et al., 2018a), the authors claim that their proposed algorithm, One-Shot Frank-Wolfe (OSFW), achieves (1 − 1/e)-regret for monotone DR-submodular maximization under semi-bandit feedback over a general convex set, with oracle access over the entire domain of F, i.e. [0, 1]^d. In their regret analysis on the last page of the supplementary material, the inequality (1 − 1/T)^t ≤ 1/e is used for all 0 ≤ t ≤ T − 1. This inequality holds for t = T, but as t decreases, the value of (1 − 1/T)^t approaches 1 and the inequality fails. If we do not use this inequality and continue with the proof, we end up with the following approximation coefficient:

1 − (1/T) ∑_{t=0}^{T−1} (1 − 1/T)^t = 1 − (1/T) · (1 − (1 − 1/T)^T)/(1 − (1 − 1/T)) = 1 − (1 − (1 − 1/T)^T) = (1 − 1/T)^T ∼ 1/e.
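The geometric series above is easy to confirm numerically; a one-off check with an arbitrary horizon:

```python
T = 1000
coef = 1 - (1 / T) * sum((1 - 1 / T) ** t for t in range(T))
print(coef, (1 - 1 / T) ** T)  # both ≈ 0.3677 ≈ 1/e, not 1 - 1/e ≈ 0.632
```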
Appendix C. Useful lemmas
Here we state some lemmas from the literature that we will need in our analysis of DR-submodular functions.
Lemma 9 (Lemma 2.2 of (Mualem and Feldman, 2023)) For any two vectors x, y ∈ [0, 1]^d and any continuously differentiable non-negative DR-submodular function F, we have

F(x ∨ y) ≥ (1 − ‖x‖_∞)F(y).

The following lemma can be traced back to (Hassani et al., 2017) (see Inequality 7.5 in the arXiv version), and is also explicitly stated and proved in (Dürr et al., 2019).

Lemma 10 (Lemma 1 of (Dürr et al., 2019)) For every two vectors x, y ∈ [0, 1]^d and any continuously differentiable non-negative DR-submodular function F, we have

⟨∇F(x), y − x⟩ ≥ F(x ∨ y) + F(x ∧ y) − 2F(x).
Appendix D. Smoothing trick
The following lemma is well known when aff(D) = R^d (e.g., Lemma 1 in (Chen et al., 2020), Lemma 7 in (Zhang et al., 2019)). The proof in the general case is similar to the special case aff(D) = R^d.

Lemma 11 If F : D → R is DR-submodular, G-Lipschitz continuous, and L-smooth, then so is F̃_δ, and for any x ∈ D such that B^{aff(D)}_δ(x) ⊆ D, we have

|F̃_δ(x) − F(x)| ≤ δG.

Moreover, if F is monotone, then so is F̃_δ.
Proof Let A := aff(D) and A₀ := aff(D) − x for some x ∈ D. Using the assumption that F is G-Lipschitz continuous, we have

|F̃(x) − F̃(y)| = |E_{v∼B^{A₀}_1(0)}[F(x + δv) − F(y + δv)]| ≤ E_{v∼B^{A₀}_1(0)}[|F(x + δv) − F(y + δv)|] ≤ E_{v∼B^{A₀}_1(0)}[G‖(x + δv) − (y + δv)‖] = G‖x − y‖,

and

|F̃(x) − F(x)| = |E_{v∼B^{A₀}_1(0)}[F(x + δv) − F(x)]| ≤ E_{v∼B^{A₀}_1(0)}[|F(x + δv) − F(x)|] ≤ E_{v∼B^{A₀}_1(0)}[Gδ‖v‖] ≤ δG.

If F is G-Lipschitz continuous and monotone continuous DR-submodular, then F is differentiable and we have ∇F(x) ≥ ∇F(y) for all x ≤ y. By the definition of F̃, we see that F̃ is differentiable and

∇F̃(x) − ∇F̃(y) = ∇E_{v∼B^{A₀}_1(0)}[F(x + δv)] − ∇E_{v∼B^{A₀}_1(0)}[F(y + δv)] = E_{v∼B^{A₀}_1(0)}[∇F(x + δv) − ∇F(y + δv)] ≥ E_{v∼B^{A₀}_1(0)}[0] = 0,

for all x ≤ y. If F is also monotone, then we have F(x) ≤ F(y) for all x ≤ y. Therefore

F̃(x) − F̃(y) = E_{v∼B^{A₀}_1(0)}[F(x + δv)] − E_{v∼B^{A₀}_1(0)}[F(y + δv)] = E_{v∼B^{A₀}_1(0)}[F(x + δv) − F(y + δv)] ≤ E_{v∼B^{A₀}_1(0)}[0] = 0,

for all x ≤ y. Hence F̃ is also monotone.
Lemma 12 (Lemma 10 of (Shamir, 2017)) Let D ⊆ R^d be such that aff(D) = R^d. Assume F : D → R is a G-Lipschitz continuous function and let F̃ be its δ-smoothed version. For any z ∈ D such that B_δ(z) ⊆ D, we have

E_{u∼S^{d−1}}[(d/2δ)(F(z + δu) − F(z − δu))u] = ∇F̃(z),
E_{u∼S^{d−1}}[‖(d/2δ)(F(z + δu) − F(z − δu))u − ∇F̃(z)‖²] ≤ CdG²,

where C is a constant.
When the convex feasible region K lies in an affine subspace, we cannot employ the standard spherical sampling method. We extend Lemma 12 to that case.
Lemma 13 Let D ⊆ R^d and A := aff(D). Also let A₀ be the translation of A that contains 0. Assume F : D → R is a G-Lipschitz continuous function and let F̃ be its δ-smoothed version. For any z ∈ D such that B^A_δ(z) ⊆ D, we have

E_{u∼S^{d−1}∩A₀}[(d/2δ)(F(z + δu) − F(z − δu))u] = ∇F̃(z),
E_{u∼S^{d−1}∩A₀}[‖(d/2δ)(F(z + δu) − F(z − δu))u − ∇F̃(z)‖²] ≤ CdG²,

where C is the constant in Lemma 12.
Note that the function F is defined only on D, and therefore the gradient ∇F̃ lives within the linear space A₀.

Proof Let k = dim(A). First consider the case where A = R^k × {(0, ..., 0)}. In this case, we restrict ourselves to the first k coordinates and see that the problem reduces to Lemma 12. For the general case, let O be an orthonormal transformation that maps R^k × {(0, ..., 0)} into A₀. Now define D′ = O^{−1}(D − z) and F′ : D′ → R : x ↦ F(O(x) + z). Let F̃′ be the δ-smoothed version of F′. Note that O(∇F̃′(0)) = ∇F̃(z). On the other hand, we have aff(D′) = O^{−1}(A − z) = O^{−1}(A₀) = R^k × {(0, ..., 0)}. Therefore

E_{u∼S^{d−1}∩(R^k×{0})}[(d/2δ)(F′(δu) − F′(−δu))u] = ∇F̃′(0),

and

E_{u∼S^{d−1}∩(R^k×{0})}[‖(d/2δ)(F′(δu) − F′(−δu))u − ∇F̃′(0)‖²] ≤ CdG².

Hence, if we set v = O^{−1}(u), we have

E_{u∼S^{d−1}∩A₀}[(d/2δ)(F(z + δu) − F(z − δu))u] = E_{v∼S^{d−1}∩(R^k×{0})}[(d/2δ)(F′(δv) − F′(−δv))O(v)] = O(E_{v∼S^{d−1}∩(R^k×{0})}[(d/2δ)(F′(δv) − F′(−δv))v]) = O(∇F̃′(0)) = ∇F̃(z).

Similarly, since O preserves norms,

E_{u∼S^{d−1}∩A₀}[‖(d/2δ)(F(z + δu) − F(z − δu))u − ∇F̃(z)‖²] = E_{v∼S^{d−1}∩(R^k×{0})}[‖(d/2δ)(F′(δv) − F′(−δv))v − ∇F̃′(0)‖²] ≤ CdG².
Appendix E. Construction of K_δ

Lemma 14 Let K ⊆ [0, 1]^d be a convex set containing the origin. Then for any choice of c and r with B^{aff(K)}_r(c) ⊆ K, we have

argmin_{z∈K_δ} ‖z‖_∞ = (δ/r)c and min_{z∈K_δ} ‖z‖_∞ ≤ δ/r.

Proof The claim follows immediately from the definition and the fact that ‖c‖_∞ ≤ 1.
Lemma 15 Let K be an arbitrary convex set, D := diam(K) and δ′ := δD/r. We have

B^{aff(K)}_δ(K_δ) ⊆ K ⊆ B^{aff(K)}_{δ′}(K_δ).

Proof Define ψ : K → K_δ : x ↦ (1 − δ/r)x + (δ/r)c. Let y ∈ K_δ and x = ψ^{−1}(y). Then

B^{aff(K)}_δ(y) = B^{aff(K)}_δ(ψ(x)) = B^{aff(K)}_δ((1 − δ/r)x + (δ/r)c) = (1 − δ/r)x + B^{aff(K)}_δ((δ/r)c) = (1 − δ/r)x + (δ/r)B^{aff(K)}_r(c) ⊆ K,

where the last inclusion follows from the fact that K is convex and contains both x and B^{aff(K)}_r(c). On the other hand, for any x ∈ K ⊆ aff(K), we have

‖ψ(x) − x‖ = (δ/r)‖x − c‖ < (δ/r)D = δ′.

Therefore x ∈ B_{δ′}(ψ(x)) ∩ aff(K) = B^{aff(K)}_{δ′}(ψ(x)) ⊆ B^{aff(K)}_{δ′}(K_δ).
Choice of c and r. While the results hold for any choice of c ∈ K and r with B^{aff(K)}_r(c) ⊆ K, as can be seen in Theorem 4 the approximation errors depend linearly on 1/r. Therefore, it is natural to choose the point c that maximizes the value of r, i.e., the Chebyshev center of K.

Analytic Constraint Model (Polytope). When the feasible region K is characterized by a set of q linear constraints Ax ≤ b with a known coefficient matrix A ∈ R^{q×d} and vector b ∈ R^q, so that K is a polytope, by the linearity of the transformation (7) the shrunken feasible region K_δ is similarly characterized by a (translated) set of q linear constraints Ax ≤ (1 − δ/r)b + (δ/r)Ac.
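A sketch of this analytic model: the Chebyshev center and inradius of the polytope are obtained from the standard linear program max r s.t. aᵢᵀc + r‖aᵢ‖ ≤ bᵢ, after which the shrunken constraints follow from the displayed formula. It assumes K is bounded with nonempty interior; the helper name and the use of scipy are implementation choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def shrunken_polytope(A, b, delta):
    """Return the Chebyshev center c, inradius r, and the right-hand side of
    the shrunken constraints A x <= (1 - delta/r) b + (delta/r) A c."""
    q, d = A.shape
    norms = np.linalg.norm(A, axis=1)
    # variables: (c, r); maximize r subject to A c + r * ||a_i|| <= b_i
    res = linprog(c=np.concatenate([np.zeros(d), [-1.0]]),
                  A_ub=np.hstack([A, norms[:, None]]), b_ub=b,
                  bounds=[(None, None)] * d + [(0, None)])
    center, r = res.x[:d], res.x[d]
    assert delta < r, "the sampling radius must satisfy delta < r"
    b_shrunk = (1 - delta / r) * b + (delta / r) * (A @ center)
    return center, r, b_shrunk
```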
Appendix F. Variance reduction via momentum

In order to prove the main regret bounds, we need the following variance reduction lemma, which is crucial in characterizing how much the variance of the gradient estimator can be reduced by using momentum. This lemma appears in (Chen et al., 2018a) and is a slight improvement of Lemma 2 in (Mokhtari et al., 2018) and Lemma 5 in (Mokhtari et al., 2020).

Lemma 16 (Theorem 3 of (Chen et al., 2018a)) Let {a_n}_{n=0}^{N} be a sequence of points in R^d such that ‖a_n − a_{n−1}‖ ≤ G₀/(n + s) for all 1 ≤ n ≤ N, with fixed constants G₀ ≥ 0 and s ≥ 3. Let {ã_n}_{n=1}^{N} be a sequence of random variables such that E[ã_n | F_{n−1}] = a_n and E[‖ã_n − a_n‖² | F_{n−1}] ≤ σ² for every n ≥ 1, where F_{n−1} is the σ-field generated by {ã_i}_{i=1}^{n−1} and F_0 = ∅. Let {d_n}_{n=0}^{N} be a sequence of random variables where d_0 is fixed and subsequent d_n are obtained by the recurrence

d_n = (1 − ρ_n)d_{n−1} + ρ_n ã_n (17)

with ρ_n = 2/(n + s)^{2/3}. Then we have

E[‖a_n − d_n‖²] ≤ Q/(n + s + 1)^{2/3}, (18)

where Q := max{‖a_0 − d_0‖²(s + 1)^{2/3}, 4σ² + 3G₀²/2}.
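A small simulation of the recurrence (17) illustrates the claim; the drifting target a_n = (1/(n+s))·1 (which satisfies the increment condition with G₀ = √dims) and the noise level are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, s, sigma, dims = 2000, 3, 1.0, 5
a = lambda n: np.full(dims, 1.0 / (n + s))          # slowly drifting target
d_n, sq_errs = np.zeros(dims), []
for n in range(1, N + 1):
    rho = 2.0 / (n + s) ** (2.0 / 3.0)
    a_tilde = a(n) + sigma * rng.normal(size=dims)  # unbiased noisy observation
    d_n = (1 - rho) * d_n + rho * a_tilde           # the recurrence (17)
    sq_errs.append(float(np.sum((a(n) - d_n) ** 2)))
# the tail of the squared error should be on the order of (n + s + 1)^{-2/3}
print(np.mean(sq_errs[-100:]), (N + s + 1) ** (-2.0 / 3.0))
```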
We now analyze the variance of our gradient estimator which, in the case when we only have access to zeroth-order information, uses batched spherical sampling and momentum for gradient estimation. Calculations similar to the proof of the following lemma, in the value oracle case, appear in the proof of Theorem 2 in (Chen et al., 2020). The main difference is that here we consider a more general smoothing trick and therefore estimate the gradient along the affine hull of K.

Lemma 17 Under the assumptions of Theorem 4, in Algorithm 2 we have

E[‖∇(F̃|_L)(z_n) − ḡ_n‖²] ≤ Q/(n + 4)^{2/3},

for all 1 ≤ n ≤ N, where L = aff(K),

Q = 0 for a deterministic gradient oracle;
Q = max{4^{2/3}G², 6L²D² + 4σ₁²/B} for a stochastic gradient oracle with variance σ₁² > 0;
Q = max{4^{2/3}G², 6L²D² + (4CdG² + 2d²σ₀²/δ²)/B} for a value oracle with variance σ₀² ≥ 0;

C is a constant and D = diam(K).

Remark 18 As we will see in the proof of Theorem 4, except for the case with a deterministic gradient oracle, the dominating term in the approximation error is a constant multiple of

(1/N) ∑_{n=1}^{N} E[‖∇(F̃|_L)(z_n) − ḡ_n‖²].

Therefore, any improvement in Lemma 17 will result in a direct improvement of the approximation error.
Proof If we have access to a deterministic gradient oracle, then the claim is trivial. Let F_1 := ∅, let F_n be the σ-field generated by {ḡ_1, ..., ḡ_{n−1}}, and let

σ² = σ₁²/B for a stochastic gradient oracle with variance σ₁² > 0;
σ² = (CdG² + d²σ₀²/2δ²)/B for a value oracle with variance σ₀² ≥ 0.

If we have access to a stochastic gradient oracle, then g_n is computed by taking the average of B gradient samples of Ĝ(z)|_L. Let P denote the projection onto the linear space L − x for some x ∈ L. Since P is a 1-Lipschitz linear map, we see that

E[Ĝ(z)|_L] = E[P(Ĝ(z))] = P(∇F(z)) = ∇(F|_L)(z)

and

E[‖Ĝ(z)|_L − ∇(F|_L)(z)‖²] = E[‖P(Ĝ(z) − ∇F(z))‖²] ≤ E[‖Ĝ(z) − ∇F(z)‖²] ≤ σ₁².

Note that, in the cases where we have access to a gradient oracle, we have δ = 0 and F̃ = F. Therefore

E[g_n | F_{n−1}] = ∇(F̃|_L)(z_n) and E[‖g_n − ∇(F̃|_L)(z_n)‖² | F_{n−1}] ≤ σ₁²/B = σ².

Next we assume that we have access to a value oracle. By the unbiasedness of F̂ and Lemma 13, we have

E[(d/2δ)(F̂(y⁺_{n,i}) − F̂(y⁻_{n,i}))u_{n,i} | F_{n−1}] = E[(d/2δ)(F(y⁺_{n,i}) − F(y⁻_{n,i}))u_{n,i} | F_{n−1}] = ∇(F̃|_L)(z_n).

For the second moment, we decompose the estimator into the exact two-point term plus the two oracle-noise terms. Since the oracle noise at y⁺_{n,i} and y⁻_{n,i} has zero mean given u_{n,i}, the cross terms vanish in expectation, so that

E[‖(d/2δ)(F̂(y⁺_{n,i}) − F̂(y⁻_{n,i}))u_{n,i} − ∇(F̃|_L)(z_n)‖² | F_{n−1}]
≤ E[‖(d/2δ)(F(y⁺_{n,i}) − F(y⁻_{n,i}))u_{n,i} − ∇(F̃|_L)(z_n)‖² | F_{n−1}]
+ (d²/4δ²) E[|F̂(y⁺_{n,i}) − F(y⁺_{n,i})|² ‖u_{n,i}‖² | F_{n−1}]
+ (d²/4δ²) E[|F̂(y⁻_{n,i}) − F(y⁻_{n,i})|² ‖u_{n,i}‖² | F_{n−1}]
≤ CdG² + (d²/4δ²)σ₀² + (d²/4δ²)σ₀² = CdG² + (d²/2δ²)σ₀²,

where the last inequality uses Lemma 13 and the variance bound of the oracle. So we have

E[g_n | F_{n−1}] = E[(1/B) ∑_{i=1}^{B} (d/2δ)(F̂(y⁺_{n,i}) − F̂(y⁻_{n,i}))u_{n,i} | F_{n−1}] = ∇(F̃|_L)(z_n),

and

E[‖g_n − ∇(F̃|_L)(z_n)‖² | F_{n−1}] = (1/B²) ∑_{i=1}^{B} E[‖(d/2δ)(F̂(y⁺_{n,i}) − F̂(y⁻_{n,i}))u_{n,i} − ∇(F̃|_L)(z_n)‖² | F_{n−1}] ≤ (CdG² + d²σ₀²/2δ²)/B = σ².
Using Lemma 16 with d_n = ḡ_n, ã_n = g_n, a_n = ∇(F̃|_L)(z_n) for all n ≥ 1, a_0 = ∇(F̃|_L)(z_1), G_0 = 2LD and s = 3, we have

E[‖∇(F̃|_L)(z_n) − ḡ_n‖²] ≤ Q′/(n + 4)^{2/3},    (19)

where Q′ = max{‖∇(F̃|_L)(z_1)‖² 4^{2/3}, 6L²D² + 4σ²}. Note that by Theorem 11, we have ‖∇(F̃|_L)(x)‖ ≤ G, thus we have Q′ ≤ Q.
Appendix G. Proof of Theorem 4 for monotone maps over convex sets containing zero
Proof By the definition of z_n, we have z_n = z_1 + Σ_{i=1}^{n−1} v_i/N. Therefore z_n − z_1 is a convex combination of the v_i's and 0, which all belong to K_δ − z_1, and therefore z_n − z_1 ∈ K_δ − z_1. Hence we have z_n ∈ K_δ ⊆ K for all 1 ≤ n ≤ N + 1.
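To fix ideas, the update analysed in this appendix can be sketched as follows. This is only an illustrative sketch under assumptions: a toy separable concave (hence DR-submodular) objective and a box feasible region stand in for F and K, so the linear maximization step has a closed form; in general that step is a call to an LP solver, and all names here are hypothetical.

```python
import numpy as np

d, N = 4, 200
eps = 1.0 / N
delta = 0.01                     # smoothing radius; K_delta is the shrunk box
lo, hi = delta, 1.0 - delta

z1 = np.full(d, lo)              # starting point z_1 in K_delta

def lmo_box(g):
    """Linear maximization oracle over K_delta - z1: argmax_v <g, v>."""
    return np.where(g > 0, hi, lo) - z1

F = lambda z: np.sum(np.log1p(z))      # toy monotone DR-submodular objective
grad = lambda z: 1.0 / (1.0 + z)

z = z1.copy()
g_bar = np.zeros(d)
for n in range(1, N + 1):
    rho = 2.0 / (n + 3) ** (2.0 / 3.0)          # rho_n with s = 3
    g_bar = (1 - rho) * g_bar + rho * grad(z)   # momentum; a stochastic estimate would go here
    z = z + eps * lmo_box(g_bar)                # z_{n+1} = z_n + eps * v_n

print(F(z), F(np.full(d, hi)))
```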
Let L := aff(K). According to Lemma 11, the function F̃ is L-smooth. So we have

F̃(z_{n+1}) − F̃(z_n) ≥ ⟨∇(F̃|_L)(z_n), z_{n+1} − z_n⟩ − (L/2)‖z_{n+1} − z_n‖²
 = ε⟨∇(F̃|_L)(z_n), v_n⟩ − ε²(L/2)‖v_n‖²
 ≥ ε⟨∇(F̃|_L)(z_n), v_n⟩ − ε²LD²/2
 = ε(⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩) − ε²LD²/2.    (20)
Let z*_δ := argmax_{z∈K_δ−z_1} F̃(z). We have z*_δ ∈ K_δ − z_1, which implies that ⟨ḡ_n, v_n⟩ ≥ ⟨ḡ_n, z*_δ⟩. Therefore

⟨ḡ_n, v_n⟩ ≥ ⟨ḡ_n, z*_δ⟩ = ⟨∇(F̃|_L)(z_n), z*_δ⟩ + ⟨ḡ_n − ∇(F̃|_L)(z_n), z*_δ⟩.
Hence we obtain

⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩ ≥ ⟨∇(F̃|_L)(z_n), z*_δ⟩ − ⟨∇(F̃|_L)(z_n) − ḡ_n, z*_δ − v_n⟩.

Using the Cauchy–Schwarz inequality, we have

⟨∇(F̃|_L)(z_n) − ḡ_n, z*_δ − v_n⟩ ≤ ‖∇(F̃|_L)(z_n) − ḡ_n‖ ‖z*_δ − v_n‖ ≤ D‖∇(F̃|_L)(z_n) − ḡ_n‖.

Therefore

⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩ ≥ ⟨∇(F̃|_L)(z_n), z*_δ⟩ − D‖∇(F̃|_L)(z_n) − ḡ_n‖.
Plugging this into (20), we see that

F̃(z_{n+1}) − F̃(z_n) ≥ ε⟨∇(F̃|_L)(z_n), z*_δ⟩ − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.    (21)
On the other hand, we have z*_δ ≥ (z*_δ − z_n) ∨ 0. Since F is monotone continuous DR-submodular, by Lemma 11, so is F̃. Moreover, monotonicity of F̃ implies that ∇(F̃|_L) is non-negative in positive directions. Therefore we have

⟨∇(F̃|_L)(z_n), z*_δ⟩ ≥ ⟨∇(F̃|_L)(z_n), (z*_δ − z_n) ∨ 0⟩   (monotonicity)
 ≥ F̃(z_n + ((z*_δ − z_n) ∨ 0)) − F̃(z_n)   (DR-submodularity)
 = F̃(z*_δ ∨ z_n) − F̃(z_n)
 ≥ F̃(z*_δ) − F̃(z_n).
After plugging this into (21) and re-arranging terms, we obtain

h_{n+1} ≤ (1 − ε)h_n + εD‖∇(F̃|_L)(z_n) − ḡ_n‖ + ε²LD²/2,

where h_n := F̃(z*_δ) − F̃(z_n). After taking the expectation and using Lemma 17, we see that

E[h_{n+1}] ≤ (1 − ε)E[h_n] + εDQ^{1/2}/(n + 4)^{1/3} + ε²LD²/2.
Using the above inequality recursively and 1 − ε ≤ 1, we have

E[h_{N+1}] ≤ (1 − ε)^N E[h_1] + Σ_{n=1}^N εDQ^{1/2}/(n + 4)^{1/3} + Nε²LD²/2.    (22)

Note that we have ε = 1/N. Using the fact that (1 − 1/N)^N ≤ e^{−1} and

Σ_{n=1}^N DQ^{1/2}/(n + 4)^{1/3} ≤ DQ^{1/2} ∫_0^N dx/(x + 4)^{1/3} ≤ DQ^{1/2}(3/2)(N + 4)^{2/3} ≤ DQ^{1/2}(3/2)(2N)^{2/3} ≤ 3DQ^{1/2}N^{2/3},
we see that

E[h_{N+1}] ≤ e^{−1}E[h_1] + 3DQ^{1/2}/N^{1/3} + LD²/2N.
By re-arranging the terms and using the fact that F̃ is non-negative, we conclude

(1 − e^{−1})F̃(z*_δ) − E[F̃(z_{N+1})] ≤ −e^{−1}F̃(z_1) + 3DQ^{1/2}/N^{1/3} + LD²/2N ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N.    (23)
According to Lemma 11, we have F̃(z_{N+1}) ≤ F(z_{N+1}) + δG. Moreover, using Lemma 15, we see that z* ∈ B_{δ′}(K_δ) where δ′ = δD/r. Therefore, there is a point y* ∈ K_δ such that ‖y* − z*‖ ≤ δ′. It follows that

F̃(z*_δ) ≥ F̃(y* − z_1) ≥ F̃(y*) − G‖z_1‖ ≥ F(y*) − (‖z_1‖ + δ)G ≥ F(z*) − (‖z_1‖ + δ + δD/r)G.

According to Lemma 14, we have ‖z_1‖ ≤ √d ‖z_1‖_∞ ≤ δ√d/r. Hence

F̃(z*_δ) ≥ F(z*) − (1 + √d + D/r)δG.    (24)
After plugging these into (23), we see that

(1 − e^{−1})F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N + δG(2 + √d + D/r).
Next we show that
1 − ‖z_n‖_∞ ≥ (1 − ε)^{n−1},    (27)

for all 1 ≤ n ≤ N + 1. We use induction on n to show that for each coordinate 1 ≤ i ≤ d, we have 1 − [z_n]_i ≥ (1 − ε)^{n−1}. The claim is obvious for n = 1. Assuming that the inequality is true for n, using the fact that v_n ≤ 1 − z_n, we have

1 − [z_{n+1}]_i = 1 − [z_n]_i − ε[v_n]_i ≥ 1 − [z_n]_i − ε(1 − [z_n]_i) = (1 − ε)(1 − [z_n]_i) ≥ (1 − ε)^n,
which completes the proof by induction.
Since F̃ is DR-submodular, it is concave along non-negative directions. Therefore, using Lemma 9 and Equation (27), we have

⟨∇(F̃|_L)(z_n), z*_δ ∨ z_n − z_n⟩ ≥ F̃(z*_δ ∨ z_n) − F̃(z_n) ≥ (1 − ‖z_n‖_∞)F̃(z*_δ) − F̃(z_n) ≥ (1 − ε)^{n−1}F̃(z*_δ) − F̃(z_n).

Plugging this into Equation (26), we get

F̃(z_{n+1}) − F̃(z_n) ≥ ε((1 − ε)^{n−1}F̃(z*_δ) − F̃(z_n)) − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.
Taking expectations of both sides and using Lemma 17, we see that

E[F̃(z_{n+1})] ≥ (1 − ε)E[F̃(z_n)] + ε(1 − ε)^{n−1}F̃(z*_δ) − εDQ^{1/2}/(n + 4)^{1/3} − ε²LD²/2.
Using this inequality recursively and Equation (22), we get

E[F̃(z_{N+1})] ≥ (1 − ε)^N E[F̃(z_1)] + Nε(1 − ε)^{N−1}F̃(z*_δ) − Σ_{n=1}^N εDQ^{1/2}/(n + 4)^{1/3} − Nε²LD²/2
 ≥ (1 − ε)^N E[F̃(z_1)] + Nε(1 − ε)^{N−1}F̃(z*_δ) − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2.
Since F̃ is non-negative, this implies that

E[F̃(z_{N+1})] ≥ Nε(1 − ε)^{N−1}F̃(z*_δ) − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2.
After setting ε = 1/N and using (1 − 1/N)^{N−1} ≥ e^{−1}, we see that

e^{−1}F̃(z*_δ) − E[F̃(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N.
Using the argument presented in Appendix G, i.e. Lemma 11 and Equation (24), we conclude that

e^{−1}F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + LD²/2N + δG(2 + √d + D/r).
Using this inequality recursively together with Equation (22) and the fact that F̃ is non-negative, we get

E[F̃(z_{N+1})] ≥ (1 − 2ε)^N E[F̃(z_1)] + εF̃(z*_δ) Σ_{n=1}^N (1 − 2ε)^{N−n} − Σ_{n=1}^N εDQ^{1/2}/(n + 4)^{1/3} − Nε²LD²/2
 ≥ (1/2)(1 − 2ε)^N E[F̃(z_1)] + εF̃(z*_δ) Σ_{n=1}^N (1 − 2ε)^{N−n} − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2
 = (1/2)(1 − 2ε)^N E[F̃(z_1)] + (1/2)(1 − (1 − 2ε)^N)F̃(z*_δ) − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2
 = (1/2)F̃(z*_δ) − (1/2)(1 − 2ε)^N (F̃(z*_δ) − E[F̃(z_1)]) − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2
 ≥ (1/2)F̃(z*_δ) − (1/2)(1 − 2ε)^N DG − 3εDQ^{1/2}N^{2/3} − Nε²LD²/2.
Note that (1 − log(N)/N)^N ≤ e^{−log(N)} = 1/N. Therefore, since ε = log(N)/2N, we have

E[F̃(z_{N+1})] ≥ (1/2)F̃(z*_δ) − DG/2N − 3DQ^{1/2}log(N)/(2N^{1/3}) − LD²log(N)²/(8N).    (30)
According to Lemma 11, we have F̃(z_{N+1}) ≤ F(z_{N+1}) + δG. Moreover, using Lemma 15, we see that z* ∈ B_{δ′}(K_δ) where δ′ = δD/r. Therefore, there is a point y* ∈ K_δ such that ‖y* − z*‖ ≤ δ′. It follows that

F̃(z*_δ) ≥ F̃(y*) ≥ F(y*) − δG ≥ F(z*) − (δ + δD/r)G.    (31)
After plugging these into (30), we see that

(1/2)F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}log(N)/(2N^{1/3}) + (4DG + LD²log(N)²)/(8N) + δG(2 + D/r),
which completes the proof.
Appendix J. Proof of Theorem 4 for non-monotone maps over general convex sets
Proof First we show that
1 − ‖z_n‖_∞ ≥ (1 − ε)^{n−1}(1 − ‖z_1‖_∞),    (32)
for all 1 ≤ n ≤ N + 1. We use induction on n to show that for each coordinate 1 ≤ i ≤ d, we have 1 − [z_n]_i ≥ (1 − ε)^{n−1}(1 − [z_1]_i). The claim is obvious for n = 1. Assuming that the inequality is true for n, we have

1 − [z_{n+1}]_i = 1 − (1 − ε)[z_n]_i − ε[v_n]_i ≥ 1 − (1 − ε)[z_n]_i − ε = (1 − ε)(1 − [z_n]_i) ≥ (1 − ε)^n (1 − [z_1]_i),
which completes the proof by induction. Let z*_δ := argmax_{z∈K_δ} F̃(z). Using the same arguments as in Appendix I, we see that

F̃(z_{n+1}) − F̃(z_n) ≥ ε⟨∇(F̃|_L)(z_n), z*_δ − z_n⟩ − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.
Using Lemmas 10 and 9 and Equation (32), we have

⟨∇(F̃|_L)(z_n), z*_δ − z_n⟩ ≥ F̃(z*_δ ∨ z_n) + F̃(z*_δ ∧ z_n) − 2F̃(z_n)
 ≥ (1 − ‖z_n‖_∞)F̃(z*_δ) + F̃(z*_δ ∧ z_n) − 2F̃(z_n)
 ≥ (1 − ε)^{n−1}(1 − ‖z_1‖_∞)F̃(z*_δ) + F̃(z*_δ ∧ z_n) − 2F̃(z_n)
 ≥ (1 − ε)^{n−1}(1 − ‖z_1‖_∞)F̃(z*_δ) − 2F̃(z_n).

Therefore

F̃(z_{n+1}) − F̃(z_n) ≥ ε(1 − ε)^{n−1}(1 − ‖z_1‖_∞)F̃(z*_δ) − 2εF̃(z_n) − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.
After taking the expectation, using Lemma 17 and re-arranging the terms, we see that

E[F̃(z_{n+1})] ≥ (1 − 2ε)E[F̃(z_n)] + ε(1 − ε)^{n−1}(1 − ‖z_1‖_∞)F̃(z*_δ) − εDQ^{1/2}/(n + 4)^{1/3} − ε²LD²/2.    (33)
Using this inequality recursively together with Equation (22), we see that

E[F̃(z_{N+1})] ≥ ε(1 − ‖z_1‖_∞)F̃(z*_δ) Σ_{n=1}^N (1 − ε)^{n−1}(1 − 2ε)^{N−n} + (1 − 2ε)^N E[F̃(z_1)] − Σ_{n=1}^N εDQ^{1/2}/(n + 4)^{1/3} − Nε²LD²/2.    (34)
Since ε = log(2)/N, using footnote 1 we have

(1 − 2ε)^N ≥ e^{−2log(2)}(1 − 2ε) = (1/4)(1 − 2log(2)/N) ≥ 1/(4N).    (35)
On the other hand,

ε Σ_{n=1}^N (1 − 2ε)^{N−n}(1 − ε)^{n−1} = ε(1 − 2ε)^{N−1} Σ_{n=1}^N ((1 − ε)/(1 − 2ε))^{n−1} ≥ ε(1 − 2ε)^{N−1} Σ_{n=1}^N (1 + ε)^{n−1} = (1 − 2ε)^{N−1}((1 + ε)^N − 1).
We have (1 + c/N)^N ≥ e^c(1 − c²/2N) for c ≥ 0 and N ≥ 1.² Therefore

ε Σ_{n=1}^N (1 − 2ε)^{N−n}(1 − ε)^{n−1} = (1 − 2ε)^{N−1}((1 + ε)^N − 1)
 ≥ e^{−2log(2)}((1 + log(2)/N)^N − 1)
 ≥ e^{−2log(2)}(e^{log(2)}(1 − log(2)²/2N) − 1)
 = (1/4)(1 − log(2)²/N) ≥ 1/4 − 1/4N.
Plugging this and (35) into (34) and using the fact that F̃(z_1) is non-negative, we get

E[F̃(z_{N+1})] ≥ (1/4 − 1/4N)(1 − ‖z_1‖_∞)F̃(z*_δ) + (1/4N)E[F̃(z_1)] − 3DQ^{1/2}/N^{1/3} − LD²/2N
 ≥ (1/4)(1 − ‖z_1‖_∞)F̃(z*_δ) + (1/4N)(E[F̃(z_1)] − F̃(z*_δ)) − 3DQ^{1/2}/N^{1/3} − LD²/2N
 ≥ (1/4)(1 − ‖z_1‖_∞)F̃(z*_δ) − 3DQ^{1/2}/N^{1/3} − (DG + 2LD²)/4N.
Using the same argument as in Appendix I, we obtain

(1/4)(1 − ‖z_1‖_∞)F(z*) − E[F(z_{N+1})] ≤ 3DQ^{1/2}/N^{1/3} + (DG + 2LD²)/4N + δG(2 + D/r).
1. For 0 ≤ x ≤ 1/2, we have log(1 − x) ≥ −x − x²/2 − x³. Therefore, for 0 ≤ c ≤ 2 and N ≥ 4, we have log(1 − c/N) ≥ −c/N − c²/2N² − c³/N³ ≥ −c/(N − 1).
2. For x ≥ 0, we have log(1 + x) ≥ x − x²/2 and −x ≥ log(1 − x). Therefore N log(1 + c/N) ≥ N(c/N − c²/2N²) = c − c²/2N ≥ c + log(1 − c²/2N).
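Both footnote inequalities can be spot-checked numerically for the values of c actually used in the proofs (c = log 2 and c = 2 log 2); this is only an illustration, under those assumed parameter values:

```python
import numpy as np

cs = [np.log(2), 2 * np.log(2)]

# Footnote 1: log(1 - c/N) >= -c/(N-1), checked for the c used above and N >= 4.
for c in cs:
    for N in [4, 8, 32, 1024]:
        assert np.log(1 - c / N) >= -c / (N - 1) - 1e-12

# Footnote 2: N*log(1 + c/N) >= c + log(1 - c^2/(2N)) for c >= 0.
for c in cs:
    for N in [4, 13, 50, 1000]:
        assert N * np.log(1 + c / N) >= c + np.log(1 - c**2 / (2 * N)) - 1e-9

print("footnote inequalities verified on the sampled grid")
```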
Appendix K. Proof of Theorem 5

Recall that the settings of Theorem 5 are enumerated by the four cases (A)–(D) in Theorem 4 of function and feasible region properties (resulting in different approximation ratios) and the four cases 1–4 of oracle properties: (Case 1) deterministic gradient oracle, (Case 2) stochastic gradient oracle, (Case 3) deterministic value oracle, and (Case 4) stochastic value oracle.

Proof Let T = O(BN) denote the number of evaluations³ and let E_α := αF(z*) − E[F(z_{N+1})] denote the α-approximation error. We prove Cases 1–4 separately. Note that F being non-monotone or 0 ∈ K corresponds to cases (A), (B) and (D) of Theorem 4, where log(N) does not appear in the approximation error bound, which is why Õ can be replaced with O.

Case 1 (deterministic gradient oracle): In this case, we have Q = δ = 0. According to Theorem 4, in cases (A), (B) and (D), the approximation error is bounded by (DG + 2LD²)/4N = O(N^{−1}), and thus we choose T = N = Θ(1/ε) to get E_α = O(ε). Similarly, in case (C), we have E_α ≤ (4DG + LD²log(N)²)/8N = O(N^{−1}log(N)²). We choose T = N = Θ(log²(ε)/ε) to bound the α-approximation error by O(ε).

Case 2 (stochastic gradient oracle): In this case, we have Q = Θ(1) and δ = 0. According to Theorem 4, in cases (A), (B) and (D), the approximation error is bounded by 3DQ^{1/2}/N^{1/3} + (DG + 2LD²)/4N = O(N^{−1/3} + N^{−1}) = O(N^{−1/3}), so we choose N = Θ(1/ε³), B = 1 and T = Θ(1/ε³) to get E_α = O(ε). Similarly, in case (C), we have E_α ≤ 3DQ^{1/2}log(N)/(2N^{1/3}) + (4DG + LD²log(N)²)/8N = O(N^{−1/3}log(N) + N^{−1}log(N)²) = O(N^{−1/3}log(N)²). We choose N = Θ(log⁶(ε)/ε³), B = 1 and T = Θ(log⁶(ε)/ε³) to bound the α-approximation error by O(ε).

Case 3 (deterministic value oracle): In this case, we have Q = Θ(1) and σ_0 = 0. According to Theorem 4, in cases (A), (B) and (D), the approximation error is bounded by 3DQ^{1/2}/N^{1/3} + (DG + 2LD²)/4N + O(δ) = O(N^{−1/3} + δ), so we choose δ = Θ(ε), N = Θ(1/ε³), B = 1 and T = Θ(1/ε³) to get E_α = O(ε). Similarly, in case (C), we have E_α = O(N^{−1/3}log(N)² + δ). We choose δ = Θ(ε), N = Θ(log⁶(ε)/ε³), B = 1 and T = Θ(log⁶(ε)/ε³) to bound the α-approximation error by O(ε).

Case 4 (stochastic value oracle): In cases (A), (B) and (D), we choose δ = Θ(ε), N = Θ(1/ε³), B = Θ(1/ε²) and T = Θ(1/ε⁵) to get E_α = O(ε). Similarly, in case (C), we have E_α = O(N^{−1/3}log(N)² + δ^{−1}B^{−1/2}N^{−1/3}log(N)² + δ). We choose δ = Θ(ε), N = Θ(log⁶(ε)/ε³), B = Θ(1/ε²) and T = Θ(log⁶(ε)/ε⁵) to bound the α-approximation error by O(ε).

Appendix L. Proof of Theorem 6

Proof Since the parameters of Algorithm 2 are chosen according to Theorem 5, we see that the α-approximation error is bounded by Õ(T_0^{−β}), where β = 1/3 in case 2 (stochastic gradient oracle) and β = 1/5 in case 4 (stochastic value oracle).

Recall that F is G-Lipschitz and the feasible region K has diameter D. Thus, during the first T_0 time-steps, the per-step α-regret can be bounded by

sup_{z,z′∈K} αF(z) − F(z′) ≤ sup_{z,z′∈K} F(z) − F(z′) ≤ DG.

Therefore the total α-regret is bounded by

T_0 DG + (T − T_0)Õ(T_0^{−β}) ≤ T_0 DG + T Õ(T_0^{−β}).

If F is non-monotone or 0 ∈ K, the exact same argument applies with Õ replaced by O.

Most of the related results — (Chen et al., 2018b), (Chen et al., 2018a), (Zhang et al., 2019), (Thang and Srivastav, 2021), (Niazadeh et al., 2021), (Zhang et al., 2023), (Mualem and Feldman, 2023) — are focused on adversarial online full-information feedback.
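The parameter choices above can be summarised programmatically. The helper below is only an illustrative restatement of the choices in the proof of Theorem 5 (the function name, the ceil-based rounding, and treating Θ(·) constants as 1 are assumptions, not code from the paper):

```python
import math

def theorem5_params(oracle, stochastic, case_C=False, eps=0.01):
    """Return (N, B, delta, T) realising the choices in the proof above.
    oracle: 'gradient' or 'value'; stochastic: whether the oracle is noisy;
    case_C: True for case (C), where extra log factors appear."""
    L = math.log(1 / eps)
    if oracle == 'gradient' and not stochastic:          # Case 1
        N, B, delta = math.ceil((L**2 if case_C else 1) / eps), 1, 0.0
    elif oracle == 'gradient':                            # Case 2
        N, B, delta = math.ceil((L**6 if case_C else 1) / eps**3), 1, 0.0
    elif not stochastic:                                  # Case 3
        N, B, delta = math.ceil((L**6 if case_C else 1) / eps**3), 1, eps
    else:                                                 # Case 4
        N = math.ceil((L**6 if case_C else 1) / eps**3)
        B, delta = math.ceil(1 / eps**2), eps
    # two value queries per sample when only a value oracle is available (footnote 3)
    T = (1 if oracle == 'gradient' else 2) * N * B
    return N, B, delta, T

print(theorem5_params('value', True, eps=0.1))
```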
Table 1: Offline DR-submodular optimization results.

| F            | Set       | Oracle | Setting | Reference                    | Appx.       | Complexity     |
|--------------|-----------|--------|---------|------------------------------|-------------|----------------|
| Monotone     | 0 ∈ K     | ∇F     | det.    | (Bian et al., 2017b), (*)    | 1 − 1/e     | O(1/ε)         |
| Monotone     | 0 ∈ K     | ∇F     | stoch.  | (Mokhtari et al., 2020), (*) | 1 − 1/e     | O(1/ε³)        |
| Monotone     | 0 ∈ K     | F      | det.    | This paper                   | 1 − 1/e     | O(1/ε³)        |
| Monotone     | 0 ∈ K     | F      | stoch.  | This paper                   | 1 − 1/e     | O(1/ε⁵)        |
| Monotone     | general † | ∇F     | det.    | (Hassani et al., 2017) ‡     | 1/2         | O(1/ε)         |
| Monotone     | general † | ∇F     | det.    | This paper                   | 1/2         | Õ(1/ε)         |
| Monotone     | general † | ∇F     | stoch.  | (Hassani et al., 2017) ‡     | 1/2         | O(1/ε²)        |
| Monotone     | general † | ∇F     | stoch.  | This paper                   | 1/2         | Õ(1/ε³)        |
| Monotone     | general † | F      | det.    | This paper                   | 1/2         | Õ(1/ε³)        |
| Monotone     | general † | F      | stoch.  | This paper                   | 1/2         | Õ(1/ε⁵)        |
| Non-monotone | d.c.      | ∇F     | det.    | (Bian et al., 2017a), (*)    | 1/e         | O(1/ε)         |
| Non-monotone | d.c.      | ∇F     | stoch.  | (Mokhtari et al., 2020), (*) | 1/e         | O(1/ε³)        |
| Non-monotone | d.c.      | F      | det.    | This paper                   | 1/e         | O(1/ε³)        |
| Non-monotone | d.c.      | F      | stoch.  | This paper                   | 1/e         | O(1/ε⁵)        |
| Non-monotone | general   | ∇F     | det.    | (Dürr et al., 2019)          | (1−h)/(3√3) | O(e^{√(dL)/ε}) |
| Non-monotone | general   | ∇F     | det.    | (Du et al., 2022)            | (1−h)/4     | O(e^{√(dL)/ε}) |
| Non-monotone | general   | ∇F     | det.    | (Du, 2022), (*)              | (1−h)/4     | O(1/ε)         |
| Non-monotone | general   | ∇F     | stoch.  | This paper                   | (1−h)/4     | O(1/ε³)        |
| Non-monotone | general   | F      | det.    | This paper                   | (1−h)/4     | O(1/ε³)        |
| Non-monotone | general   | F      | stoch.  | This paper                   | (1−h)/4     | O(1/ε⁵)        |
Table 2: Online stochastic DR-submodular optimization.

| F            | Set     | Feedback | Reference              | Coef. α | α-Regret   |
|--------------|---------|----------|------------------------|---------|------------|
| Monotone     | 0 ∈ K   | ∇F       | (Chen et al., 2018a) † | 1/e     | O(T^{2/3}) |
| Monotone     | 0 ∈ K   | ∇F       | (Zhang et al., 2019)   | 1 − 1/e | O(T^{4/5}) |
| Monotone     | 0 ∈ K   | ∇F       | This paper             | 1 − 1/e | O(T^{3/4}) |
| Monotone     | 0 ∈ K   | F        | This paper             | 1 − 1/e | O(T^{5/6}) |
| Monotone     | general | ∇F       | (Chen et al., 2018b) ‡ | 1/2     | O(T^{1/2}) |
| Monotone     | general | ∇F       | This paper             | 1/2     | Õ(T^{3/4}) |
| Monotone     | general | F        | This paper             | 1/2     | Õ(T^{5/6}) |
| Non-mono.    | d.c.    | ∇F       | (Zhang et al., 2023)   | 1/e     | O(T^{4/5}) |
| Non-mono.    | d.c.    | ∇F       | This paper             | 1/e     | O(T^{3/4}) |
| Non-mono.    | d.c.    | F        | This paper             | 1/e     | O(T^{5/6}) |
| Non-mono.    | general | ∇F       | This paper             | (1−h)/4 | O(T^{3/4}) |
| Non-mono.    | general | F        | This paper             | (1−h)/4 | O(T^{5/6}) |
Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Adam Tauman Kalai and Mehryar Mohri, editors, Proceedings of the 23rd Annual Conference on Learning Theory (COLT 2010), pages 28-40, 2010.

Francis Bach. Submodular functions: from discrete to continuous domains. Mathematical Programming, 175:419-459, 2019.

An Bian, Kfir Levy, Andreas Krause, and Joachim M. Buhmann. Continuous DR-submodular maximization: Structure and algorithms. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017a.

Andrew An Bian, Baharan Mirzasoleiman, Joachim Buhmann, and Andreas Krause. Guaranteed non-convex optimization: Submodular maximization over continuous domains. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 111-120. PMLR, 2017b. URL https://proceedings.mlr.press/v54/bian17a.html.

Yatao Bian, Joachim Buhmann, and Andreas Krause. Optimal continuous DR-submodular maximization and applications to provable mean field inference. In International Conference on Machine Learning, pages 644-653. PMLR, 2019a.

Yatao Bian, Joachim Buhmann, and Andreas Krause. Optimal continuous DR-submodular maximization and applications to provable mean field inference. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 644-653. PMLR, 2019b. URL https://proceedings.mlr.press/v97/bian19a.html.

Lin Chen, Christopher Harshaw, Hamed Hassani, and Amin Karbasi. Projection-free online optimization with stochastic gradient: From convexity to submodularity. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 814-823. PMLR, 2018a. URL https://proceedings.mlr.press/v80/chen18c.html.

Lin Chen, Hamed Hassani, and Amin Karbasi. Online continuous submodular maximization. In Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1896-1905. PMLR, 2018b. URL https://proceedings.mlr.press/v84/chen18f.html.

Lin Chen, Mingrui Zhang, Hamed Hassani, and Amin Karbasi. Black box submodular maximization: Discrete and continuous settings. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 1058-1070. PMLR, 2020. URL https://proceedings.mlr.press/v108/chen20c.html.

Josip Djolonga and Andreas Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In Advances in Neural Information Processing Systems, volume 27, 2014.

Donglei Du. Lyapunov function approach for approximation algorithm design and analysis: with applications in submodular maximization. arXiv preprint arXiv:2205.12442, 2022.

Donglei Du, Zhicheng Liu, Chenchen Wu, Dachuan Xu, and Yang Zhou. An improved approximation algorithm for maximizing a DR-submodular function over a convex set. arXiv preprint arXiv:2203.14740, 2022.

Christoph Dürr, Nguyen Kim Thang, Abhinav Srivastav, and Léo Tible. Non-monotone DR-submodular maximization: Approximation and regret guarantees. arXiv preprint arXiv:1905.09595, 2019.

Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, pages 385-394, 2005.

Shuyang Gu, Chuangen Gao, Jun Huang, and Weili Wu. Profit maximization in social networks and non-monotone DR-submodular maximization. Theoretical Computer Science, 957:113847, 2023.

Hamed Hassani, Mahdi Soltanolkotabi, and Amin Karbasi. Gradient methods for submodular maximization. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Elad Hazan et al. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157-325, 2016.

Shinji Ito and Ryohei Fujimaki. Large-scale price optimization via network flow. In Advances in Neural Information Processing Systems, volume 29, 2016.

Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.

Yuanyuan Li, Yuezhou Liu, Lili Su, Edmund Yeh, and Stratis Ioannidis. Experimental design networks: A paradigm for serving heterogeneous learners under networking constraints. IEEE/ACM Transactions on Networking, 2023.

Aryan Mokhtari, Hamed Hassani, and Amin Karbasi. Conditional gradient method for stochastic submodular maximization: Closing the gap. In International Conference on Artificial Intelligence and Statistics, pages 1886-1895. PMLR, 2018.

Aryan Mokhtari, Hamed Hassani, and Amin Karbasi. Stochastic conditional gradient methods: From convex minimization to submodular maximization. The Journal of Machine Learning Research, 21(1):4232-4280, 2020.

Loay Mualem and Moran Feldman. Resolving the approximability of offline and online non-monotone DR-submodular maximization over general convex sets. In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 2542-2564. PMLR, 2023. URL https://proceedings.mlr.press/v206/mualem23a.html.

Rad Niazadeh, Tim Roughgarden, and Joshua R. Wang. Optimal algorithms for continuous non-monotone submodular and DR-submodular maximization. The Journal of Machine Learning Research, 21(1):4937-4967, 2020.

Rad Niazadeh, Negin Golrezaei, Joshua R. Wang, Fransisca Susan, and Ashwinkumar Badanidiyuru. Online learning via offline greedy algorithms: Applications in market design and optimization. In Proceedings of the 22nd ACM Conference on Economics and Computation (EC '21), pages 737-738. Association for Computing Machinery, 2021. doi: 10.1145/3465456.3467571.

Ohad Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. Journal of Machine Learning Research, 18(52):1-11, 2017. URL http://jmlr.org/papers/v18/16-632.html.

Nguyen Kim Thang and Abhinav Srivastav. Online non-monotone DR-submodular maximization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11):9868-9876, 2021. doi: 10.1609/aaai.v35i11.17186.

Jan Vondrák. Symmetry and approximability of submodular maximization problems. SIAM Journal on Computing, 42(1):265-304, 2013. doi: 10.1137/110832318.

Mingrui Zhang, Lin Chen, Hamed Hassani, and Amin Karbasi. Online continuous submodular maximization: From full-information to bandit feedback. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

Qixin Zhang, Zengde Deng, Zaiyi Chen, Kuangqi Zhou, Haoyuan Hu, and Yu Yang. Online learning for non-monotone DR-submodular maximization: From full information to bandit feedback. In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 3515-3537. PMLR, 2023. URL https://proceedings.mlr.press/v206/zhang23f.html.
Table 3: This table presents the different results for the regret for DR-submodular maximization under bandit feedback, and gives the related works and regret bounds in the adversarial case.

3. We have T = BN when we have access to a gradient oracle and T = 2BN otherwise.
Appendix H. Proof of Theorem 4 for non-monotone maps over downward-closed convex sets

Proof Similar to Appendix G, we see that z_n ∈ K_δ for all 1 ≤ n ≤ N + 1 and

F̃(z_{n+1}) − F̃(z_n) ≥ ε(⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩) − ε²LD²/2.    (25)

On the other hand, z*_δ ∨ z_n − z_n ≤ 1 − z_n. Therefore, we have ⟨ḡ_n, v_n⟩ ≥ ⟨ḡ_n, z*_δ ∨ z_n − z_n⟩, which implies that

⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩ ≥ ⟨∇(F̃|_L)(z_n), z*_δ ∨ z_n − z_n⟩ − ⟨∇(F̃|_L)(z_n) − ḡ_n, z*_δ ∨ z_n − z_n − v_n⟩.

Using the Cauchy–Schwarz inequality, we see that

⟨∇(F̃|_L)(z_n) − ḡ_n, z*_δ ∨ z_n − z_n − v_n⟩ ≤ ‖∇(F̃|_L)(z_n) − ḡ_n‖ ‖z*_δ ∨ z_n − z_n − v_n‖ ≤ D‖∇(F̃|_L)(z_n) − ḡ_n‖,

where the last inequality follows from the fact that both v_n and z*_δ ∨ z_n − z_n belong to K_δ. Therefore

⟨ḡ_n, v_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n⟩ ≥ ⟨∇(F̃|_L)(z_n), z*_δ ∨ z_n − z_n⟩ − D‖∇(F̃|_L)(z_n) − ḡ_n‖.

Plugging this into Equation (25), we get

F̃(z_{n+1}) − F̃(z_n) ≥ ε⟨∇(F̃|_L)(z_n), z*_δ ∨ z_n − z_n⟩ − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.    (26)

Appendix I. Proof of Theorem 4 for monotone maps over general convex sets

Proof Using the fact that F̃ is L-smooth, we have

F̃(z_{n+1}) − F̃(z_n) ≥ ε(⟨ḡ_n, v_n − z_n⟩ + ⟨∇(F̃|_L)(z_n) − ḡ_n, v_n − z_n⟩) − ε²LD²/2.    (28)

Let z*_δ := argmax_{z∈K_δ} F̃(z). Using the fact that ⟨ḡ_n, v_n⟩ ≥ ⟨ḡ_n, z*_δ⟩, we have

⟨ḡ_n, v_n − z_n⟩ ≥ ⟨ḡ_n, z*_δ − z_n⟩ = ⟨∇(F̃|_L)(z_n), z*_δ − z_n⟩ + ⟨ḡ_n − ∇(F̃|_L)(z_n), z*_δ − z_n⟩.

Using the Cauchy–Schwarz inequality, we see that

⟨∇(F̃|_L)(z_n) − ḡ_n, (z*_δ − z_n) − (v_n − z_n)⟩ ≤ D‖∇(F̃|_L)(z_n) − ḡ_n‖.

Plugging this into (28), we get

F̃(z_{n+1}) − F̃(z_n) ≥ ε⟨∇(F̃|_L)(z_n), z*_δ − z_n⟩ − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.    (29)

Using Lemma 10 and the fact that F̃ is monotone, we see that

⟨∇(F̃|_L)(z_n), z*_δ − z_n⟩ ≥ F̃(z*_δ ∨ z_n) + F̃(z*_δ ∧ z_n) − 2F̃(z_n) ≥ F̃(z*_δ) − 2F̃(z_n).

After plugging this into (29), we get

F̃(z_{n+1}) − F̃(z_n) ≥ ε(F̃(z*_δ) − 2F̃(z_n)) − εD‖∇(F̃|_L)(z_n) − ḡ_n‖ − ε²LD²/2.

After taking the expectation, using Lemma 17 and re-arranging the terms, we see that

E[F̃(z_{n+1})] ≥ (1 − 2ε)E[F̃(z_n)] + εF̃(z*_δ) − εDQ^{1/2}/(n + 4)^{1/3} − ε²LD²/2.
Redshift evolution of the dark matter haloes shapes

P. Cataldi¹★, S. E. Pedrosa¹, P. B. Tissera²,³, M. C. Artale⁴,⁵,⁶, N. D. Padilla⁷, R. Dominguez-Tenreiro⁸,⁹, L. Bignone¹, R. Gonzalez³ and L. J. Pellizza¹

¹ Instituto de Astronomía y Física del Espacio, CONICET-UBA, Casilla de Correos 67, Suc. 28, 1428, Buenos Aires, Argentina
² Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago, Chile
³ Centro de Astro-Ingeniería, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago, Chile
⁴ Physics and Astronomy Department Galileo Galilei, University of Padova, Vicolo dell'Osservatorio 3, I-35122 Padova, Italy
⁵ INFN - Padova, Via Marzolo 8, I-35131 Padova, Italy
⁶ Department of Physics and Astronomy, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907, USA
⁷ Instituto de Astronomía Teórica y Experimental, UNC-CONICET, Laprida 854, X5000BGR, Córdoba, Argentina
⁸ Departamento de Física Teórica, Universidad Autónoma de Madrid, Cantoblanco, E-28049 Madrid, Spain
⁹ Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, Cantoblanco, E-28049 Madrid, Spain

★ Contact e-mail: [email protected]

MNRAS 000 (2022). Accepted XXX. Received YYY; in original form ZZZ. Preprint 20 February 2023. Compiled using MNRAS LaTeX style file v3.0. doi: 10.1093/mnras/stad1601. arXiv:2302.08853.

ABSTRACT
In this work, we aim at investigating the morphology evolution of Milky Way mass-like dark matter haloes selected from the CIELO and IllustrisTNG projects. The connection between halo shapes and their environment has been studied in previous works at z = 0, but it remains yet to be fully understood. We focus on the evolution across cosmic time of the halo shapes and their relation with the infalling material, using hydrodynamical simulations. Our findings show that haloes tend to be more triaxial at earlier times as a consequence of stronger accretion in the direction of the filaments. As the haloes evolve towards a dominant isotropic accretion mode and relaxation, their shape at 20 percent of the virial radius becomes more spherical. In agreement with previous results, baryons have an important effect within the inner regions of the haloes, driving them from triaxial to rounder shapes. We also find a correlation between the strength of the quadrupole infalling mode and the degree of ellipticity of the haloes: as the filament strength decreases steadily with redshift, the haloes become more spherical and less elliptical.

Key words: galaxies: clusters: general – galaxies: haloes – cosmology: theory – dark matter – methods: numerical
INTRODUCTION
In the current cosmological paradigm Λ-CDM model, dark matter (DM) and dark energy are the main ingredients that drive the formation and evolution of cosmic structures. In particular, DM haloes grow hierarchically and continuously by successive mergers and accretion (Zel'dovich 1970), embedded within the filamentary structures of the cosmic web (e.g. White & Rees 1978;Peebles 1980;Ghigna et al. 1998;Springel et al. 2008).
The characteristic structures of DM haloes have been studied extensively using cosmological simulations. Early N-body simulations showed that DM haloes can be described by a universal radial density profile (NFW, Navarro et al. 1997), while their shape tends to be triaxial and prolate in the inner regions (e.g. Frenk et al. 1988; Jing & Suto 2002; Allgood 2005; Stadel et al. 2009). Cosmological hydrodynamical simulations, which account for the DM assembly together with galaxy formation, have proved to be of great help to investigate the impact of baryons on different properties of the haloes. In fact, baryons and the physical processes associated to their evolution can modify the shape of DM haloes, sphericalizing them (Tissera & Dominguez-Tenreiro 1998; Kazantzidis et al. 2004; Zemp et al. 2012; Zhu et al. 2017; Chua et al. 2019; Cataldi et al. 2020), change the distribution of the DM in the inner regions (Governato et al. 2012; Di Cintio et al. 2014; Artale et al. 2019) and change their specific angular momentum (Zavala et al. 2016; Lagos et al. 2017).
Regarding the mass density profile, Tollet et al. (2016) investigated the properties of haloes with virial masses in the range 10^10 − 10^12 M☉, finding that in the central regions of the halo the inner DM density slope depends on the stellar-to-halo mass ratio at all the analysed redshifts, in agreement with Di Cintio et al. (2014), who investigated this dependence at z = 0.
More recently, Artale et al. (2019), inspecting the mass accretion history (MAH) of DM haloes selected from a cosmological hydrodynamical simulation, showed that they assemble earlier than their dark matter only (DMo) counterparts. This change in formation history arises because baryons make haloes more concentrated and in turn more massive than their DMo counterparts; the authors report a close connection between the MAH, the amount of baryons, and the evolution of the DM density profiles.
The spherical collapse and Gaussian random fields model of halo formation (Press & Schechter 1974) needs extending, as the mass accretion onto haloes falls along a preferential direction (mostly along the filaments; Zel'dovich 1970) and tends to be clumpy. Given this preferential direction of accretion along filaments, DM haloes are mostly non-spherical, especially if their relaxation times are not larger than the time between mergers or accretion events. In contrast, early-formed objects were weakly connected to their environment and were highly relaxed (Gouin et al. 2021). Ludlow et al. (2014) and Bonamigo et al. (2015), studying DMo simulations, used the dimensionless peak height parameter, ν(M, z), to characterise the shape of haloes within a wide range of masses and redshifts. They found that DM haloes are triaxial with a tendency to be prolate. In particular, more massive objects are less spherical than low-mass haloes, essentially because high-mass haloes formed later (Despali et al. 2017). This increase in triaxiality correlates both with mass and with redshift, because haloes seem to be affected by the direction of the last major merger accreted along the filaments around them (Jing & Suto 2002; Allgood 2005; Vega-Ferrero et al. 2017). Following the same approach as Bonamigo et al. (2015), Vega-Ferrero et al. (2017) found that the minor-to-major axis ratio can be expressed by a universal function in terms of ν(M, z).
A close connection has been reported between the triaxiality of DM haloes and the cluster mass, the concentration and the inner slope of the DM density profile (Oguri et al. 2005; Giocoli et al. 2012a,b; Wojtak 2013; Lau et al. 2021), with consequences for the strong lensing cross-sections. Satellite galaxies are preferentially accreted along filaments (Libeskind et al. 2014; Tempel et al. 2015). Because infall is driven by the surrounding large-scale structure, we expect a significant correlation between halo shapes and their environment (Vera-Ciro et al. 2011, and references within). Governato et al. (2012) found that haloes tend to point their minor axes perpendicular to the infall (filament) direction. Additionally, subhaloes are predominantly accreted along the major axis of the host halo, and the alignment increases with the host halo mass (Kang & Wang 2015).
At later times the cross-section of the filaments becomes larger than the typical size of MW-mass haloes and, as a result, accretion turns more isotropic and the objects evolve into a more oblate configuration. Interestingly, haloes retain memory of their structure at earlier times (Vera-Ciro et al. 2011). This is imprinted in their present-day shape dependence with radius, which changes from typically prolate in the inner (earlier collapsed) regions to triaxial in the outskirts (corresponding to the shells that have collapsed last and are now at about the virial radius).
In the case of MW-like galaxies, Shao et al. (2021) investigated how the disc of satellite galaxies can be used to infer the orientation and some aspects of the formation history of the Galactic DM halo, using the EAGLE simulation. These authors found that the normal to the common orbital plane of the satellites, as well as the central stellar disc, is well aligned with the minor axis of the DM host halo. Also, Shao et al. (2021) found that the DM halo of each of their MW analogues is 'twisted', such that the orientation of the outer halo is perpendicular to that of the inner halo. This occurs because the inner halo is aligned with the central disc, whereas the outer halo is nearly perpendicular to the stellar disc, with a tight alignment towards the filamentary network along which mass is accreted.
Observational studies based on X-ray data (Fabricant et al. 1984; Buote & Canizares 1996; Kawahara 2010; Lau et al. 2013), the Sunyaev-Zel'dovich effect (Sayers et al. 2011) and strong and weak gravitational lensing methods (Soucail et al. 1987; Evans & Bridle 2009; Oguri et al. 2010, 2012) indicate that cluster-size DM haloes are often not spherical. However, the observational determination of the shapes of DM haloes is quite challenging. Few studies have attempted to infer the shape and orientation of the Galactic DM halo. Preferably, dynamical tracers at large radii are to be used, which in many cases are, by definition, rare (Vera-Ciro et al. 2011). Dynamical tracers such as the kinematics and morphology of the HI layer have been used to impose constraints on the halo morphology (Becquaert & Combes 1997; Swaters et al. 1997), as well as the temperature profile of X-ray isophotes (Buote & Canizares 1998; Buote et al. 2002), gravitational lensing (Hoekstra et al. 2004) and the spatial distribution of galaxies within groups (Paz et al. 2006; Robotham et al. 2008). The general trend of all these studies is that haloes tend to be roughly oblate, with the smallest axis pointing perpendicular to the symmetry plane defined by the stellar component.

In the case of the MW, shape constraints often rely on the kinematics of stars, including the proper motions of hypervelocity stars (Gnedin et al. 2005) or the dynamics of stellar streams (Koposov et al. 2010). These studies show a nearly spherical Galactic halo (Ibata et al. 2001; Law et al. 2005, 2009; Law & Majewski 2010; Bovy et al. 2016; Malhan & Ibata 2019), in agreement with numerical studies (e.g. Chua et al. 2019; Cataldi et al. 2020).
Studying the shape, the MAH and the concentration of haloes provides an opportunity to learn about individual growth histories and the connection between them and the properties of the host galaxy (Drakos et al. 2019). Measurements of structural properties for large, well-defined samples of haloes may also provide new cosmological tests (see e.g. Taylor 2011, for discussion).
In this paper, we use a set of haloes identified in the CIELO and IllustrisTNG projects to study the impact that different sub-grid models and different cosmic environments might have on DM halo morphologies. These results extend and strengthen previous studies on halo shapes (Cataldi et al. 2020; Cataldi et al. 2022) by including the temporal evolution of the analysed properties.
The paper is structured as follows. Section 2 reviews the simulation setups and sample selection. Section 3 presents the results on the evolution of the DM structure: subsection 3.1 analyses the mass accretion history and halo size evolution, subsection 3.2 analyses the shape profile evolution and its dependence on merger events, and subsection 3.3 studies the impact of the infalling matter configuration on the shape analysis. Finally, conclusions are summarized in Section 4.
SIMULATIONS
Here we use two simulation suites, namely the CIELO and IllustrisTNG (Nelson et al. 2019) simulations, which were run with different prescriptions for the baryonic processes involved in galaxy formation. We summarize the main features of each case.
CIELO simulations

The Chemo-dynamIcal propertiEs of gaLaxies and the cOsmic web (CIELO) project aims to study the formation of galaxies in different environments, with virial halo masses within the range M_200 = 10^10 − 10^12 M☉ (Rodríguez et al. 2022). It also includes two Local Group (LG) analogues. The simulations assume a Λ-CDM universe model with a cosmology consistent with Planck Collaboration et al. (2014), given by Ω_0 = 0.317, Ω_Λ = 0.6825, Ω_b = 0.049 and h = 0.6711.
CIELO was run with a version of GADGET-3 based on GADGET-2 (Springel & Hernquist 2003; Springel 2005). It includes a multiphase model for the gas component, metal-dependent cooling, star formation and energy feedback from Type II and Type Ia Supernovae (SNeII and SNeIa, respectively), as described by Scannapieco et al. (2005) and Scannapieco et al. (2006). The simulations assume an Initial Mass Function of Chabrier (2003). The chemical evolution model follows the enrichment by SNII and SNIa types, keeping track of 12 different chemical elements (Mosconi et al. 2001). This version of GADGET-3 has been previously used by Pedrosa & Tissera (2015) to study the mass-size relation and specific angular momentum content of galaxies, and by Tissera et al. (2016a,b) to investigate the origin of the metallicity gradients of the gas-phase components and stellar populations of galaxies in the Fenix simulation.
The initial conditions of the CIELO simulations were taken from a DMo run of a cosmological periodic cubic box of side length L = 100 Mpc h⁻¹. The MUSIC code (Hahn & Abel 2011), which computes multi-scale cosmological initial conditions under different approximations and transfer functions, was applied to extract the objects and increase the numerical resolution. A first set of 20 LG analogues was initially selected, and two pairs of them (LG1 and LG2) were chosen by imposing constraints on the relative velocity, separation and mass of the DM haloes (see Rodríguez et al. 2022). The two selected LGs were re-run with a DM particle resolution of m_dm = 1.2 × 10^6 M☉ h⁻¹. Baryons were added with an initial gas mass of m_baryon = 2.0 × 10^5 M☉ h⁻¹.
IllustrisTNG simulations

The Next Generation Illustris Simulation (IllustrisTNG) suite is a set of cosmological simulations that were run with the AREPO code (Weinberger et al. 2020). IllustrisTNG follows an updated model of galaxy formation based on the results from the original Illustris simulation, which includes subgrid models to account for different baryonic processes such as star formation, stellar feedback, gas cooling and AGN feedback (see Weinberger et al. 2017; Pillepich et al. 2018; Nelson et al. 2019). The initial conditions of IllustrisTNG were generated with the Zel'dovich approximation (Zel'dovich 1970). Here we use the highest resolution available, IllustrisTNG50 (hereafter, TNG50). TNG50 consists of a periodic box of side 35 Mpc h⁻¹, containing 2160³ DM particles and the same initial number of gas cells. The mass of the DM particles is uniform, m_dm = 3.0 × 10^5 M☉ h⁻¹, and the average mass of the gas cells (and stellar particles) is m_baryon = 5.8 × 10^4 M☉ h⁻¹.
Haloes selection

In CIELO simulations, galaxies are identified using Friends-of-Friends and SUBFIND algorithms. Among them, we select for this study the most massive central galaxies from the simulated LG1 and LG2. In particular, LG1 was previously analysed by Rodríguez et al. (2022) to study the evolution of infalling disc satellites and by Tapia et al. (2022) to analyse the metallicity gradients of the central galaxies.

In the case of TNG50, we select haloes from the simulated box, restricting the stellar masses of the host galaxies to the range 4 × 10^9 < M_star/M☉ < 6 × 10^10, with a star formation rate and metallicity above zero. In this work, we use the virial radius, r_200, and the virial mass, M_200, as the radius and mass within a sphere containing ∼ 200 times the cosmic critical matter density at the corresponding redshift.

Table 1. Main properties of the selected MW-like analogues at z = 0.

| CIELO   | M_tot_200 [M☉/h] | M_star_200 [M☉/h] | M_DM_200 [M☉/h] | r_200 [kpc/h] |
|---------|------------------|-------------------|-----------------|---------------|
| h4337   | 3.6 × 10^11      | 6.1 × 10^9        | 3.4 × 10^11     | 137.0         |
| h87     | 3.6 × 10^11      | 2.8 × 10^9        | 3.4 × 10^11     | 118.8         |
| h115    | 2.0 × 10^11      | 3.2 × 10^9        | 1.9 × 10^11     | 97.0          |

| TNG50   | M_tot_200 [M☉/h] | M_star_200 [M☉/h] | M_DM_200 [M☉/h] | r_200 [kpc/h] |
|---------|------------------|-------------------|-----------------|---------------|
| h476266 | 1.1 × 10^12      | 5.0 × 10^10       | 9.4 × 10^11     | 161.3         |
| h533590 | 5.6 × 10^11      | 2.3 × 10^10       | 4.9 × 10^11     | 125.0         |
| h593480 | 4.7 × 10^11      | 2.1 × 10^10       | 4.1 × 10^11     | 112.6         |
| h631558 | 2.9 × 10^11      | 1.2 × 10^10       | 2.5 × 10^11     | 95.6          |
| h649627 | 2.3 × 10^11      | 8.0 × 10^9        | 2.0 × 10^11     | 88.7          |
| h656142 | 2.4 × 10^11      | 6.8 × 10^9        | 2.2 × 10^11     | 89.1          |
From the 517 haloes fulfilling the aforementioned conditions in TNG50, we remove those with a recent major merger. For this purpose, we select galaxies with only minor mergers since z = 2, with stellar mass ratios < 1/4. This constraint follows recent studies of Helmi et al. (2018) and Belokurov et al. (2018), who inferred that the Galaxy underwent its last major merger event at z = 1 − 2, using data from the Gaia mission (Gaia Collaboration et al. 2018). From our main sample, we found 15 MW-like haloes that fulfil all the adopted selection criteria. For the TNG50 selection, we do not impose environmental constraints, such as physical separation or relative velocity, with respect to other nearby simulated galaxies.

Since the main purpose of this work is to follow in time the evolution of the halo morphologies, we made use of the subhalo merger tree catalogue available in each simulation. The CIELO simulations have a merger tree computed using the MergerTree routine of the AHF halo finder (Knollmann & Knebe 2009), and in the case of TNG50, the IllustrisTNG simulation database provides merger trees computed with the SubLink algorithm (Rodriguez-Gomez et al. 2015), which we use in order to select and follow back the main branch of the subhaloes chosen at z = 0. The CIELO haloes analysed were therefore h4337 (LG1), h87 (LG2) and h115 (LG2). In the case of TNG50, we opted to analyse one-third of the selection, 5 of them, with halo IDs h533690, h593480, h631558, h649627 and h656142, as we want to focus on the detailed individual changes of their structure evolution. Table 1 summarises the main properties of the MW-like analogues. We kept for comparison two haloes without constraints on merger activity: from CIELO, LG1-h4469, which had a major mass accretion due to a close interaction at z = 0.38, and from TNG50, h476266, which presents intermediate merger events since z = 2. Both haloes are indicated in Table 1 with boldface.

Although the cosmologies of TNG50 and CIELO are nearly identical, we chose to express all quantities in terms of h to compare results between the simulations.
RESULTS
The mass density profile evolution
Firstly, we study the DM density profile of the individual haloes at different redshifts. We estimate the halo structural parameters by fitting the NFW model (Navarro et al. 1997) to the mass profile:
ρ(r) = ρ_s / [ (r/r_s) (1 + r/r_s)² ],    (1)
where r_s and ρ_s are the scale radius and characteristic density. In general, the NFW profile provides a good fit to the spherically-averaged ρ(r) profiles. In fact, for all the redshifts analysed, we estimate the standard error associated with the parameter r_s; in both simulations, the fitting errors are below 1.6%. The best-fit NFW profiles yield estimates of the halo structural parameters r_s and ρ_s for each halo in our sample, which we use in turn to estimate the concentration parameter c_200 = r_200/r_s, shown in Fig. 1.
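As an illustration of this fitting procedure, the following sketch fits equation (1) to a binned, spherically-averaged density profile and returns c_200. The function names, the log-space fitting choice and the synthetic profile are assumptions for the example, not the exact pipeline used in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def nfw(r, rho_s, r_s):
    """NFW density profile, equation (1)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def fit_concentration(r, rho, r200):
    """Fit (rho_s, r_s) in log-space; return c200 = r200 / r_s
    and the standard error on r_s from the covariance matrix."""
    log_model = lambda r, log_rho_s, r_s: np.log10(nfw(r, 10.0**log_rho_s, r_s))
    popt, pcov = curve_fit(log_model, r, np.log10(rho),
                           p0=[np.log10(rho[0]), 0.2 * r200])
    r_s, r_s_err = popt[1], np.sqrt(pcov[1, 1])
    return r200 / r_s, r_s, r_s_err

# Toy usage: a synthetic NFW halo with c200 = 10
r200 = 140.0
r = np.logspace(np.log10(0.02 * r200), np.log10(r200), 30)
rho = nfw(r, 1.0e7, r200 / 10.0)
print(fit_concentration(r, rho, r200))
```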
Overall, we find that the concentration parameter c_200 evolves towards higher values (i.e. more concentrated haloes) with time for both the CIELO and TNG50 haloes, in agreement with previous works (Gao et al. 2008; Ludlow et al. 2014). Fig. 1 also shows, in terms of the line widths in the plots, the relation of the halo mass with c_200. We do not find a clear dependence between halo mass and concentration in either halo selection. Ludlow et al. (2014) found that the concentration is a monotonic but weak function of mass, varying by only a factor of ∼ 4 over a mass range of M_200 = 10^10 − 10^15 M☉ at z = 0. Our halo selection only spans one decade in mass, which can explain our results. This seemingly complex mass-redshift-concentration dependence has been described using the dimensionless 'peak height' mass parameter ν(M, z) = δ_crit(z)/σ(M, z), where σ(M, z) is the linear fluctuation at redshift z in spheres of mass M_halo (Ludlow et al. 2014).

Additionally, the evolution of baryon accretion contributes to contracting the inner region of the haloes (see Fig. A2 for the MAH of the baryons). The evolution of the halo concentration is better reproduced by models that link the concentration of a halo with its mass accretion history: the concentration is empirically found to trace the time when haloes transition from a period of 'fast growth' to another where mass is accreted more gradually (Wechsler et al. 2002; Zhao et al. 2003; Lu et al. 2006).
Motivated by these findings, in Fig. 2 we show the MAH of the DM within the virial radius (solid lines) and the corresponding MAH inside 20% r_200 (dotted lines). The black horizontal lines indicate when each halo reaches half of its final mass at z = 0 (z_form,50). The effects of major accretion due to material stripped from a close satellite (h4469) or due to merger activity (h476266) appear as sudden peaks in the MAH curve, the product of a gain or loss of mass. Merger events can also be spotted at 20% r_200, with smoother changes with respect to the outer regions of the haloes. There is a general trend of haloes increasing their mass over time, as expected, with haloes with lower merger activity at recent times reaching z_form,50 sooner than the ones with major accretion at z ∼ 0 (h4469 and h476266). The latter gain most of their final mass at late times.
The increment of mass at recent times can be studied in Fig. A3 and in Tab. A1 in the Appendix, inspecting the instantaneous halo growth at z ∼ 0 (i.e. dlog(M)/dt at z ∼ 0) and the formation redshift at 70% of the final mass (z_form,70). Haloes with merger activity at late times report the highest dlog(M)/dt at z ∼ 0 and the latest formation redshifts, z_form,70, among the halo selection.
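For concreteness, a formation redshift such as z_form,50 or z_form,70 can be extracted from a tabulated MAH as sketched below; the toy exponential MAH and the function names are illustrative assumptions, not the actual analysis code:

```python
import numpy as np

def z_form(z, M, frac=0.5):
    """Formation redshift: the redshift at which the mass history M(z)
    (tabulated from high z down to z = 0) first reaches frac * M(z=0),
    obtained by linear interpolation between the bracketing snapshots."""
    Mf = frac * M[-1]
    i = np.argmax(M >= Mf)          # first snapshot above the threshold
    if i == 0:
        return z[0]
    return np.interp(Mf, [M[i - 1], M[i]], [z[i - 1], z[i]])

# Toy MAH: exponential growth M(z) = M0 * exp(-alpha * z)
z = np.linspace(8, 0, 60)
M = 1e12 * np.exp(-0.6 * z)
print(z_form(z, M, 0.5), z_form(z, M, 0.7))   # z_form,50 and z_form,70
```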
In agreement with Zavala et al. (2016) and Lagos et al. (2017), we observe two different regimes in the evolution of the virial radius r_200, as can be seen in Fig. A1 in the Appendix: first, a slow increase of the halo size over time up to z = 2, followed by an acceleration in the growth of r_200, marking the turnaround point. Before that point in time, the DM haloes gain angular momentum through tidal torques from their environment until maximum expansion (the turnaround point); afterwards, the haloes collapse into virialized structures that conserve their angular momentum (Doroshkevich 1970; White 1984; Catelan & Theuns 1996b,a).
Halo morphology evolution
In Cataldi et al. (2020) we found that for EAGLE and Fenix haloes, baryons have a significant impact on the shape of the inner halo, mainly within ∼ 20 percent of the virial radius. In order to dig into the evolution of the halo shape, we compute the shapes of our selected halo sample and focus the analysis mainly at 20% r_200.
We describe them using the semi-axes of the triaxial ellipsoids, a > b > c, where a, b and c are the major, intermediate and minor axes, respectively, of the shape tensor S_ij (e.g. Bailin & Steinmetz 2005; Zemp et al. 2011). Here we use an iterative method that starts with particles within a spherical shell (i.e. q = s = 1; Dubinski & Carlberg 1991; Curir et al. 1993).

To obtain the ratios q ≡ b/a and s ≡ c/a, we diagonalise the reduced inertia tensor to compute the eigenvectors and eigenvalues, as in Tissera & Dominguez-Tenreiro (1998). Traditionally, the s shape parameter has been used as a measure of halo sphericity (e.g. Allgood 2005). We also quantify the halo shapes through the ellipticity,

e = (1 − (c/a)²) / (2L),

where L ≡ 1 + (b/a)² + (c/a)². The condition a ≥ b ≥ c implies that the domain of e is the range [0, 1/2]. In Fig. 3, we show the evolution of the ellipticity as a function of the distance to the halo centre. To inspect the evolution in further detail, we extend the redshift range up to 0 < z < 8. At more recent redshifts, the haloes become less elliptical and, correspondingly, more spherical. At earlier times, haloes present triaxial shapes, while as they evolve in time their ellipticity decreases. This result is in agreement with previous works (Allgood 2005; Chua et al. 2019; Cataldi et al. 2020). Interestingly, the morphologies of the CIELO haloes present a more ordered trend in their decrease of ellipticity with redshift in comparison with the TNG50 haloes, which report a weaker trend in their decrease of ellipticity.
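A minimal sketch of the iterative shape computation described above is given below, assuming centred particle coordinates and a starting spherical window; the weighting, iteration count and toy data are illustrative assumptions rather than the exact implementation used in this work:

```python
import numpy as np

def halo_shape(pos, R, n_iter=20):
    """Axis ratios q = b/a, s = c/a from the reduced inertia tensor,
    iterated over an ellipsoidal window of (initially spherical) radius R."""
    q = s = 1.0
    axes = np.eye(3)
    for _ in range(n_iter):
        y = pos @ axes                                   # current principal frame
        r_ell2 = y[:, 0]**2 + (y[:, 1] / q)**2 + (y[:, 2] / s)**2
        sel = r_ell2 < R**2
        w = 1.0 / r_ell2[sel]                            # "reduced" 1/r^2 weighting
        S = (pos[sel].T * w) @ pos[sel] / w.sum()        # shape tensor S_ij
        evals, evecs = np.linalg.eigh(S)                 # ascending eigenvalues
        c2, b2, a2 = evals
        q, s = np.sqrt(b2 / a2), np.sqrt(c2 / a2)
        axes = evecs[:, ::-1]                            # major, intermediate, minor
    L = 1.0 + q**2 + s**2
    e = (1.0 - s**2) / (2.0 * L)                         # ellipticity as defined above
    return q, s, e

# Toy usage: a triaxial Gaussian blob with axis ratios (1, 0.7, 0.5)
rng = np.random.default_rng(3)
pos = rng.standard_normal((200_000, 3)) * np.array([1.0, 0.7, 0.5])
print(halo_shape(pos, R=2.0))
```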
A useful way to study the shape evolution of the DM haloes with time and radius is the plane proposed by Trayford et al. (2019), also applied in previous studies (see Cataldi et al. 2020; Cataldi et al. 2022). However, we modify it to visualize the evolution of the halo axis ratios at different radii. Fig. 4 and Fig. 5 present the results for the CIELO and TNG50 halo samples, respectively, at z = 0. In our representation, the upper right corner, b/a ∼ 1.0 and c/b ∼ 1.0, corresponds to spherical haloes. The spherical, prolate, triaxial, and oblate regions are labeled correspondingly, and the colour code indicates the distance to the halo centre.
For CIELO, all halo morphologies tend to be more triaxial at the outer radii and more spherical in the inner regions. In the case of TNG50, haloes in the central regions are distributed between spherical and oblate (h593480) shapes. However, the evolution path to these shape configurations is remarkably different in each case. Chua et al. (2022) studied the dependence of halo morphology on radius and on mass resolution in the IllustrisTNG suite. These authors reported spherical and oblate shapes for haloes at central radii, and a small tendency for TNG50 haloes to be more spherical and oblate than in the lower mass resolution simulations of the suite.
The configuration goes from typically spherical in the inner regions (related to the collapse of matter at earlier times) to a triaxial shape in the outskirts (corresponding to the shells that have collapsed more recently along a preferred direction). As the haloes evolve with time, there is also the effect of baryonic condensation forming the galaxy in the inner region of the halo (see Fig. A2). This baryonic concentration contributes to rounding up the halo morphology.
The halo shape structure at earlier times is imprinted in the present-day (z = 0) shape trends with radius (Vera-Ciro et al. 2011). Therefore, the different paths to sphericity can be studied by analysing the evolution of the halo morphologies. In Fig. 4 and Fig. 5 we also show the halo shape at 20% r_200 with cross symbols. We compare the shapes at the galactocentric radii where the stellar disc is located, which is the radius that maximizes the effects of baryonic concentration (Cataldi et al. 2020).
To shed light on this, we analyse the correlation between merger events and the resulting morphology in Fig. 6 and Fig. 7. We show the axial ratios c/b vs. b/a within 20% r_200. Here each point is coloured according to the redshift, and the size of the dots is proportional to the merger stellar mass ratio, a quantitative measure of the merger events at a given redshift.
The morphology of the DM haloes, although it varies greatly, presents a trend in the sense of haloes being more spherical at more recent times. These trends are clearer in the CIELO than in the TNG50 samples. The effects of recent major accretion (h4469) and recent merger activity (h476266) appear as a larger dispersion extending down to lower redshifts, although with a path towards more spherical shapes. Fig. 8 shows the evolution of the merger stellar ratio across cosmic time for all the selected haloes in each simulation. The last major events (ratio > 0.25) occurred at z ∼ 3 in the case of h4337 and at z ∼ 6 for the rest of the CIELO haloes. In the case of TNG50, the last major event is reported at z ∼ 4 for h593480 and at z ∼ 7 for the rest of the selection.
The differences in the merger histories between CIELO and TNG50 could be explained by the different environments of the halo selections.

CIELO haloes were chosen to reside in an analogue environment of the Local Group. In the case of the TNG50 haloes, there were no environmental constraints. The difference in where the haloes are embedded in the cosmic web leads to a higher merger activity for TNG50 at late times, which results in a much wider variety of morphological configurations at z ∼ 0.

Although it is evident that recent major mergers affect DM shapes, the exact connection is not yet fully understood. However, it is clear that the structure of individual haloes is closely related to their merger history. Shape changes, for instance, have been linked to the properties of the last major merger (e.g. Despali et al. 2017), and the remnant has been found to be elongated along the merger axis (e.g. Macciò et al. 2007; Vera-Ciro et al. 2011).
We also find that haloes with greater changes in morphology through redshift are correlated with the number and importance of the merger events. This effect is even more significant for h476266 in TNG50, with recent merger activity. Mergers can be followed by slow accretion along filaments until the cluster ends up in a relatively virialized final phase with a nearly regular and spherical shape. At recent redshifts (see blue dots in Fig. 6 and Fig. 7), the merger activity weakens in both simulations and, in consequence, the relaxation times increase, contributing to more spherical shapes.
Effects of mass infall
In the previous section, we discussed two main physical processes that contribute to the halo shape evolution: the condensation of baryons in the inner regions and the infall of matter onto the outer shells of the halo. While the presence of baryons tends to round up the halo, the infall through filaments produces differences in axis lengths and, in consequence, contributes to more elongated shapes (Despali et al. 2014; Gouin et al. 2021). In this section, we discuss the influence of the cosmic web environment on the DM halo shapes.
Using N-body simulations, previous studies investigated the correlation between the environment and the shape of haloes (e.g. Libeskind et al. 2011; Vera-Ciro et al. 2011). DM haloes grow over time, fed by the surrounding density field through a continuous injection of matter. This accretion may be secular or occur in a series of more violent mergers.
Through the virialization process, each halo acquires a new equilibrium configuration as new material is accreted into the gravitational potential well. The preferential direction of the infalling material in the cosmic environment is usually given by the filaments, whereas a more isotropic mode is expected when the halo is embedded in a large structure (e.g. Wang et al. 2011; Vera-Ciro et al. 2011; Shao et al. 2021; Baptista et al. 2022).

Figure 6. Same plane as Fig. 4 but for the inner axial ratios at 20% r_200 at different redshifts, for the zoom-in haloes. The size of the circles is proportional to the merger stellar ratio. We find that haloes tend to be more spherical at lower redshift. Inspecting the merger rate, greater merger events at a given redshift (marked with bigger symbols) correspond to bigger changes in halo morphology. In this figure, we extend the redshift range up to 0 < z < 8. Black symbols on the top left panel are the observational constraints for the DM halo shape of the Milky Way by Law & Majewski (2010), Bovy et al. (2016), and Malhan & Ibata (2019). The overall evolution across the shape parameter space is for haloes to evolve from triaxial to spherical configurations. The effect of recent major accretion can be observed in h4469, with a backward tendency to be less spherical at z ∼ 0.
As an example of the aforementioned, we show in Fig. 9 the Aitoff map of the projected DM particle infall at z = 0 for two haloes, h4337 from the zoom-in simulation (top panel) and h593480 from TNG50 (bottom panel). The projected map is centred on the stellar disc frame. At each output time, we select particles with negative radial velocity pointing towards the centre of mass of the halo, v_r < 0 (infalling particles), in different spherical shells: 1.0 < r/r_opt < 1.2 (with r_opt the optical radius, defined as the radius that encloses 80 per cent of the baryonic mass, gas and stars, of the galaxy), 0.15 < r/r_200 < 0.25, 0.45 < r/r_200 < 0.55, and 1.0 < r/r_200 < 1.2.
Regions where the infalling material is larger at a given redshift correspond to regions with a major overdensity (e.g., Libeskind et al. 2011). As a result, the density retains the configuration of where the mass was accreted. Albeit weaker with decreasing radius, the self-similar pattern across the four shells is present, as the accreted material travels from the outer radii to more central regions.
This different form of material infall onto the halo structure follows a specific distribution on the sky. For instance, whereas isotropic accretion would indicate a uniform signal in the sky, a bi-modal distribution of points in two opposite directions would indicate the presence of a thin filament (Tormen 1997; Colberg et al. 1999; Libeskind et al. 2011; Vera-Ciro et al. 2011). A multipole expansion of the infalling particles on the sky at a given time can be quantified through the power spectrum for the mode ℓ:
C_ℓ = [1/(2ℓ + 1)] ∑_{m=−ℓ}^{ℓ} |a_ℓ^m|²,  (3)
where the expansion coefficients are,
a_ℓ^m = (1/N) ∑_{i=1}^{N} Y_ℓ^m(θ_i, φ_i),  (4)
where the subscript i indicates the i-th particle crossing the chosen shell at angular position (θ_i, φ_i), with a negative radial velocity (v_r < 0). The number of particles, N, can be represented as N = 4m/(r_200²). The ℓ = 0 term is the monopole, representing in this scheme the isotropic accretion. The ℓ = 2 term corresponds to the quadrupolar moment, meaning that the accretion occurs along a well-defined direction in space. Similarly, accretion along more than one preferential direction will shift the power towards higher moments. When a satellite occupies a large area of the sky, the configuration will resemble a dipole and the power spectrum will exhibit higher power in the ℓ = 1 mode (Vera-Ciro et al. 2011).
We describe the infall of DM particles and their self-similar distribution across a large radial extent. We take the outskirts of the chosen shells (at r = r_200) and compute the corresponding amplitudes C_ℓ of the spherical distribution. For a given infalling DM particle with coordinates φ_i (longitude) and θ_i (latitude), we evaluate the spherical harmonic function for a given ℓ. The distribution of the infalling particles can be approximated by a smooth angular surface density, constructed by summing over all m and ℓ up to ℓ_max according to the following equation:
Σ(θ, φ) = ∑_{ℓ=0}^{ℓ_max} ∑_{m=−ℓ}^{ℓ} a_ℓ^m Y_ℓ^m(θ, φ).  (5)
In Fig. 10 and Fig. 11 we show the multipole expansion of the infalling material (v_r < 0) in the region 1.0 < r/r_200 < 1.2 as a function of redshift for the zoom-in and TNG50 haloes. The colour maps are coded in terms of log(C_ℓ/C_0). In all haloes, the monopole mode is the dominant configuration of infalling material, followed by the quadrupole (ℓ = 2, i.e. filaments). The strength of the filament varies across haloes and redshift. Compared to the smooth accretion, we expect satellite infall events to excite a wide range of modes with similar power. Major accretion, such as that of halo h4469 at z ∼ 0.4, is reflected in a rapid peak excitement across all modes (see also Fig. 8). Fig. 12 and Fig. 13 show the evolution of the filament strength by quantifying the relative contribution of the ℓ = 2 mode to the total power spectrum (C_2/∑_ℓ C_ℓ) as a function of r/r_200. In each panel, the lines represent different redshifts. The quadrupole strength decreases with radius; in the central regions, the motion of the DM particles is expected to be dominated by other velocity configurations (e.g. tube orbits; Zhu et al. 2017). Additionally, in Fig. 12 and Fig. 13, at r ∼ 50% r_200, the quadrupole strength decreases steadily with redshift.
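To make the estimators of Eqs. (3)-(4) concrete, the following minimal Python sketch (our own naming, not the authors' pipeline) computes C_ℓ and the quadrupole strength C_2/∑_ℓ C_ℓ from the angular positions of infalling particles; the bi-modal toy input is hypothetical.

import numpy as np
from scipy.special import sph_harm

def multipole_power(az, pol, lmax=10):
    # az: azimuthal angles in [0, 2pi); pol: polar angles in [0, pi]
    # of the particles with v_r < 0 crossing a shell. Returns C_0..C_lmax.
    n_part = len(az)
    c_l = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        # a_l^m = (1/N) sum_i Y_l^m(theta_i, phi_i)   [Eq. (4)]
        a_lm = np.array([sph_harm(m, l, az, pol).sum() / n_part
                         for m in range(-l, l + 1)])
        # C_l = (1/(2l+1)) sum_m |a_l^m|^2            [Eq. (3)]
        c_l[l] = np.sum(np.abs(a_lm) ** 2) / (2 * l + 1)
    return c_l

# Toy check: bi-modal infall along +/-z (a thin filament) excites l = 2.
rng = np.random.default_rng(1)
pol = np.clip(np.concatenate([rng.normal(0.3, 0.1, 5000),
                              rng.normal(np.pi - 0.3, 0.1, 5000)]), 0, np.pi)
az = rng.uniform(0, 2 * np.pi, pol.size)
c_l = multipole_power(az, pol)
print('quadrupole strength C_2/sum(C_l) =', c_l[2] / c_l.sum())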
The filamentary structures and the connectivity of the more massive (hence largest and latest-formed) haloes decrease over time (Choi et al. 2010; Codis et al. 2018; Kraljic et al. 2020). For our selected sample, the accretion tends to be more isotropic over time, with a tendency for the haloes to become more spherical. Even though the importance of filamentary accretion weakens with time, haloes with recent merger activity still accrete at a higher rate than the rest of the haloes (see the instantaneous logarithmic growth rate and the formation redshift presented in Fig. A3 in Appendix A).
Different physical processes intervene in the evolution of the mass accretion and the merger rate: the initial mass and statistics of the primordial density field (Bond et al. 1991), the mass and kinematics of subhaloes (e.g. Zhao et al. 2003), and tidal forces (Lapi & Cavaliere 2011), possibly conditioned by dark energy (Pace et al. 2019). The higher strength of the filament (ℓ = 2) at higher redshifts can be explained by the effect of satellite haloes closer to the main one during the late stage of evolution before virialisation (Schimd & Sereno 2021).
In order to find a connection between filamentary accretion and ellipticity, we shift our analysis from 20% r_200 to the outer region at 50% r_200, where the filamentary accretion still has a clear signal. In the central regions, particle motions are dominated by other physical processes, with a subsequent loss of the quadrupole signal. Accordingly, in Fig. 14, we inspect how the halo shape is influenced by the filament strength at 50% r_200. We perform a logarithmic regression between the ellipticity and C_2/∑_ℓ C_ℓ. The linear regression yields a positive slope for all haloes. The relation between ellipticity and filamentary accretion suggests that the accumulation of matter along a particular orientation increases the ellipticity of the halo. This connection also has a dependence on time, reflected by the colour of the linear regression line in Fig. 14. As the filament strength decreases steadily with redshift, the haloes become more spherical and less elliptical.
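A minimal version of this fit (our sketch; the binning and weighting details of the paper are omitted, and the arrays below are hypothetical stand-ins for the measured halo data) is a linear regression of the ellipticity against the logarithm of the quadrupole strength:

import numpy as np

quad_strength = np.array([0.30, 0.22, 0.15, 0.10, 0.06])  # C_2 / sum_l C_l
ellipticity = np.array([0.28, 0.24, 0.20, 0.17, 0.12])    # e at 50% r_200

slope, intercept = np.polyfit(np.log10(quad_strength), ellipticity, 1)
print('slope = %.3f (positive: filaments elongate haloes)' % slope)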
CONCLUSIONS
In this work, we studied the properties of DM halo shapes and their interconnection with the halo assembly history. For this purpose, we analysed two samples from simulations with different subgrid physics implementations: a subsample of MW-like haloes from the highest-resolution box of the IllustrisTNG project (TNG50) and from the zoom-in simulation. We investigated the evolution of the halo shape with redshift and the physical processes affecting it. Our main findings can be summarized as follows:
(i) The concentration of the halo density profiles increases at lower redshifts and in more massive haloes. This increase in concentration, a product of both the baryonic condensation in the inner regions and the relaxation of haloes, decreases when merger events happen. The evolution of halo size experiences two different regimes, before and after the turnaround point at z ∼ 2. The MAH also shows the effects of mergers: haloes with major mergers in recent times reach z_form,50 later.
(ii) We find that at more recent redshifts, haloes become less elliptical and, correspondingly, more spherical in the central regions. Haloes also evolve to be more spherical/oblate in the inner regions and more triaxial in the outer ones. In the case of the TNG50 haloes, this evolution is also present, albeit weaker.
(iii) For all analysed samples, we find that the morphology tends to be more spherical towards the inner regions at z = 0, with different paths through the shape parameter space for each halo. Focusing on 20% r_200 (where we expect the halo morphology to maximize its changes; Cataldi et al. 2020), the evolution of the shapes shows a tendency toward more spherical configurations, although this evolution depends on the merger force through time. In the case of the TNG50 haloes, the path through the shape parameter space is more diverse and has a weaker tendency to sphericalization with time than for the zoom-in haloes. In particular, h593480 shows a final oblate configuration at z = 0. The effects of recent major accretion (h4469 and h476266) can be spotted in the figures as a larger dispersion reaching lower redshifts.
(iv) Exploring the halo assembly history can provide insight into the connection between mergers and halo shapes. We find that all haloes accrete matter with a dominant isotropic (i.e., monopole) accretion mode. The quadrupole mode (i.e., filaments) makes the next dominant contribution to the accretion, with a preferential direction. We find that the strength of the quadrupole mode decreases with radius and also with redshift, as haloes lose their connection to the cosmic web.
(v) We find a strong connection between the strength of the filament and the degree of ellipticity of the halo shape at 50% r_200. The filaments, with a given preferential direction, accumulate mass so that the halo axis becomes elongated in the same direction as the infalling matter. This results in haloes being more elliptical. With the weakening of the preferential direction of accretion in recent times, the accretion of mass becomes more isotropic, with a subsequent transformation to more spherical shapes. We find that this connection is well described by a logarithmic regression fit for all our halo selections.
Our results show that the assembly history plays a key role in understanding the resulting halo morphology at z = 0. There is an interconnection between the halo shape driven by the cosmic web at the outskirts and the assembly of baryons in the inner regions. The shape evolution is an important ingredient for understanding the halo assembly history as well as the merger history.
APPENDIX A: EXTENDED ANALYSIS OF HALO SHAPE EVOLUTION
In Fig. A1 we show the evolution of the virial radius r_200. We observe two different regimes in the evolution: first, a slow increase of the halo size over time, up to z = 2, followed by an acceleration in the growth of r_200. The effects of recent merger activity (h476266) or major accretion (h4469) can be spotted as a sudden increment of the halo size at late times. We also study the evolution of the baryonic MAH for the selected haloes in Fig. A2. The accretion of baryons (stars and gas) follows the same trend as the DM MAH. Inspecting Fig. A2, the baryons have a MAH slope that increases less monotonically than the DM MAH in Fig. 2, a product of star formation and the complex gas dynamics within each galaxy.
By computing the mass growth of haloes over cosmic time, we can calculate different proxies of their mass assembly history. Several studies describe the instantaneous mass accretion rate (e.g. Rodríguez-Puebla et al. 2016; Gouin et al. 2021). In Fig. A3 we estimate the instantaneous logarithmic halo growth rate, choosing to compute the mass at 20% r_200, where we studied the changes in morphology.
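A minimal way to obtain such an instantaneous rate from a tabulated MAH (a sketch with hypothetical input arrays, not the paper's pipeline) is a finite-difference derivative of log M with respect to cosmic time:

import numpy as np

t_gyr = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 13.8])      # cosmic time [Gyr]
m_20 = np.array([2.1, 3.5, 5.0, 6.2, 6.9, 7.1]) * 1e10   # mass at 20% r_200

dlogm_dt = np.gradient(np.log10(m_20), t_gyr)            # [dex / Gyr]
print('dlog(M)/dt at z ~ 0:', dlogm_dt[-1])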
In Tab. A1 we present the instantaneous logarithmic growth rate, dlog(M)/dt at z ∼ 0, together with the formation redshift at 70%, z_form,70, defined as the redshift at which the mass of the halo main progenitor at 20% r_200 equals 70% of the mass enclosed at the same radius at z = 0. Haloes with recent major accretion (h4469) and recent merger activity (h476266) are the last to reach 70% of their z = 0 mass, and consequently they are the haloes with the greatest instantaneous growth rate at z = 0.

Figure 10. Multipole expansion of the infalling DM particles (v_r < 0) in the region 1.0 < r/r_200 < 1.2 as a function of redshift for the haloes in the zoom-in simulation. The colour map indicates the ratio log(C_ℓ/C_0). After the monopole (ℓ = 0), the quadrupole mode (ℓ = 2) is the main component of the DM particle infall, suggesting that filaments are a fundamental ingredient to understanding the mass accretion history and halo shape.

Figure 11. Same as Fig. 10 for the TNG50 haloes. TNG50 haloes have a deeper contribution from filamentary accretion than the zoom-in haloes.
Figure 12. Relative contribution of the ℓ = 2 mode to the total power spectrum as a function of r/r_200 for the haloes in the zoom-in simulation at different redshifts. C_2/∑_ℓ C_ℓ provides information about the material infalling along a filament. The strength of the quadrupole, a privileged direction of infalling mass, decreases with radius, as in the central regions the motion of the particles is dominated by other physical processes. At half the virial radius (∼ 50% r_200) we find a trend for the quadrupole strength to decrease steadily with time. This trend weakens at other radii.
The updated model of IllustrisTNG includes cosmic magnetic fields and adopts a cosmology consistent with Planck Collaboration et al. (2016), given by Ω_m = 0.3089, Ω_Λ = 0.6911, Ω_b = 0.0486, σ_8 = 0.8159, n_s = 0.9667, h = 0.6774.
Figure 1. Halo concentration as a function of redshift, z, for the zoom-in (left panel) and TNG50 (right panel) haloes. The line widths represent the total halo mass M^tot_200, where thicker lines indicate the most massive systems (see Tab. 1). Dashed lines indicate the haloes with recent mergers. Our results show that the halo concentration increases as the redshift decreases.
Figure 2. The cosmic evolution of the mass accretion history (MAH), normalized by the halo mass at z = 0, for the DM haloes of the zoom-in simulation (left) and TNG50 (right). Dashed lines indicate the haloes with recent mergers, while dotted lines indicate the MAH within r < 20% r_200 for each halo. Black horizontal lines are used as reference to estimate the formation time of the haloes, defined as the redshift at which the mass of the halo reaches half of its z = 0 value (z_form,50).
Figure 3. The evolution of the shape parameters as a function of the distance to the halo centre. Each panel represents a different halo, as indicated by the label. We show the ellipticity e ≡ [1 − (c/a)²]/2L for the zoom-in (top panels) and TNG50 (bottom panels) haloes versus r/r_200, coloured by redshift in the range 0 < z < 8. We also show the median values for four subsamples in redshift bins: blue lines for 0 < z < 0.5, cyan for 0.5 < z < 1.0, orange for 1.0 < z < 2.0, and red lines for 2.0 < z < 8.0, according to the redshift colour coding. In all cases, the ellipticity of our selected haloes increases at outer radii and at higher redshifts.

We adopted the triaxiality parameter (e.g. Allgood 2005; Vera-Ciro et al. 2014; Chua et al. 2019), defined as T ≡ (1 − q²)/(1 − s²), which quantifies the degree of prolateness or oblateness: T = 1 describes a completely prolate halo (a > b ≈ c), while T = 0 describes a completely oblate halo (a ≈ b > c). Haloes with T > 0.67 are considered prolate and haloes with T < 0.33 oblate, while those with 0.33 < T < 0.67 are considered triaxial (Allgood 2005; Artale et al. 2019). We define the ellipticity as e ≡ [1 − (c/a)²]/2L.
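As an illustration of these definitions, the short sketch below (our code; the reduced-inertia-tensor computation of the axis lengths is not reproduced) classifies a halo from its semi-axes a ≥ b ≥ c:

def halo_shape(a, b, c):
    # Triaxiality T = (1 - q^2)/(1 - s^2) with q = b/a, s = c/a;
    # T is ill-defined for a perfect sphere (a = b = c).
    q, s = b / a, c / a
    t = (1.0 - q**2) / (1.0 - s**2)
    label = 'prolate' if t > 0.67 else 'oblate' if t < 0.33 else 'triaxial'
    return t, label

print(halo_shape(1.0, 0.95, 0.6))   # -> (~0.15, 'oblate')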
Figure 4. The distribution of haloes as a function of their inner halo axial ratios, measured for −2.5 < log(r/r_200) < 0, at z = 0. When b/a ∼ 1.0 and c/b ∼ 1.0, the haloes become more spherical (upper right corner). The regions of parameter space corresponding to spherical, prolate, triaxial, and oblate haloes are indicated in each panel. For all the haloes, the morphology tends to be more spherical at the inner regions and more triaxial at the outer radii. We indicate the shape ratios at 20% r_200 with a cross symbol. We compare our results with observational constraints for the DM halo shape of the Milky Way by Law & Majewski (2010), Bovy et al. (2016), and Malhan & Ibata (2019) (see legend and symbols on the top left panel).
Figure 5. Same as Fig. 4 but for the TNG50 selected haloes at z = 0. We find that the halo shapes are generally distributed between spherical and oblate configurations.
Figure 7. Same as Fig. 6 but for the TNG50 selected haloes. In contrast to the zoom-in haloes, not all TNG50 haloes end up in a spherical configuration. Interestingly, the halo h593480 stabilizes its configuration with an oblate morphology, and the halo h476266 (with recent merger activity) ends with a final spherical shape but with a great dispersion.
Figure 8. The evolution of the merger stellar ratio for the zoom-in (left panel) and TNG50 haloes (right panel). Dashed lines indicate the haloes with no constraints on merger activity. Black horizontal lines correspond to a ratio of 0.01, while dotted and dashed lines correspond to 0.25 and 0.50, respectively.
Figure 9. The projected infalling DM particles for the haloes h4337 (top panels) and h593480 (bottom panels). From left to right, each panel is computed on concentric shells of radius r_opt, 20% r_200, 50% r_200, and r_200. Each time a DM particle is accreted across one of these shells, its entry point is recorded and plotted. Red regions indicate high density, while blue ones indicate low density.
Figure 13. Same as Fig. 12 but for the TNG50 selected haloes.
Figure 14. The relation between quadrupole strength and ellipticity at 50% r_200. The accumulation of matter induces an increasing ellipticity in the direction of the infalling matter. The filament strength decreases at recent times, with subsequently fewer merger events, which makes the haloes more spherical and less elliptical. A logarithmic regression fit is included, coloured according to the median redshift in each bin, along with its 1σ dispersion (dashed black lines). Each panel shows the linear regression slope obtained.
Figure A1. The cosmic evolution of the virial radius, r_200, for the DM haloes of the zoom-in simulation (left panel) and TNG50 (right panel). Dashed lines indicate the haloes with recent merger activity.
Figure A2. The baryonic MAH versus redshift for the zoom-in (left panel) and TNG50 (right panel) haloes. Dashed lines indicate the haloes with no constraints on merger activity. The black horizontal line (z_form,50) in the baryonic MAH panels is used as reference to estimate the formation time of the haloes, defined as the redshift at which the mass of the halo reaches half of its z = 0 value.
Figure A3. The instantaneous halo growth rate for the zoom-in (left panel) and TNG50 (right panel) haloes as a function of lookback time.
Table 1. An overview of the main characteristics of the selected zoom-in and TNG50 haloes at z = 0. From left to right, we show the halo IDs, the total virial mass (M^tot_200), the total stellar and DM virial masses (M^star_200, M^DM_200), and the virial radius (r_200). In bold, the haloes with a recent major mass accretion.

Halo     M^tot_200 [M_⊙/h]   M^star_200 [M_⊙/h]   M^DM_200 [M_⊙/h]   r_200 [kpc/h]
h4337    9.4 × 10^11         4.0 × 10^10          8.7 × 10^11        159.9
h4469
Table A1. The formation redshift at 70%, z_form,70, and the instantaneous logarithmic growth rate, dlog(M)/dt at z ∼ 0, at z = 0 for the selected haloes.

zoom-in    z_form,70    dlog(M)/dt (z ∼ 0)
h4337      0.75         -0.02
h4469      0.44          0.40
h87        0.88          0.09
h115       0.55          0.03

TNG50      z_form,70    dlog(M)/dt (z ∼ 0)
h476266    0.48          0.27
h533590    0.82          0.05
h593480    0.68          0.11
h631558    0.70          0.14
h649627    0.85          0.11
h656142    0.62          0.06
This paper has been typeset from a TeX/LaTeX file prepared by the author.
ACKNOWLEDGEMENTS

PC and SP acknowledge partial support from MinCyT through BID PICT 2020 00582. PBT acknowledges partial support from Fondecyt Regular 20201200703 and ANID BASAL project ACE210002 (Chile). This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 734374 (LACEGAL) and the GALNET Network (ANID, Chile). The project was run on Marenostrum (Barcelona Supercomputing Centre, Spain), Ladgerda (IA, PUC), and the National Laboratory for High Performance Computing (NLHPC, Chile). We thank the Ministerio de Ciencia e Innovación (Spain) for financial support under project grants PGC2018-094975-C21 and PID2021-122603NB-C21. MCA acknowledges financial support from the Seal of Excellence @UNIPD 2020 programme under the ACROGAL project.

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding authors.
REFERENCES

Allgood B. A., 2005, PhD thesis, University of California, Santa Cruz, California, USA
Artale M. C., Pedrosa S. E., Tissera P. B., Cataldi P., Di Cintio A., 2019, A&A, 622, A197
Bailin J., Steinmetz M., 2005, ApJ, 627, 647
Baptista J., et al., 2022, arXiv e-prints, p. arXiv:2211.16382
Becquaert J. F., Combes F., 1997, A&A, 325, 41
Belokurov V., Erkal D., Evans N. W., Koposov S. E., Deason A. J., 2018, MNRAS, 478, 611
Bonamigo M., Despali G., Limousin M., Angulo R., Giocoli C., Soucail G., 2015, MNRAS, 449, 3171
Bond J. R., Cole S., Efstathiou G., Kaiser N., 1991, ApJ, 379, 440
Bovy J., Bahmanyar A., Fritz T. K., Kallivayalil N., 2016, ApJ, 833, 31
Buote D. A., Canizares C. R., 1996, ApJ, 457, 565
Buote D. A., Canizares C. R., 1998, MNRAS, 298, 811
Buote D. A., Jeltema T. E., Canizares C. R., Garmire G. P., 2002, ApJ, 577, 183
Cataldi P., Pedrosa S. E., Tissera P. B., Artale M. C., 2020, MNRAS, 501, 5679
Cataldi P., Pedrosa S., Padilla N., Landau S., Arnold C., Li B., 2022, MNRAS, 515, 5358
Catelan P., Theuns T., 1996a, MNRAS, 282, 436
Catelan P., Theuns T., 1996b, MNRAS, 282, 455
Chabrier G., 2003, PASP, 115, 763
Choi E., Bond N. A., Strauss M. A., Coil A. L., Davis M., Willmer C. N. A., 2010, MNRAS, 406, 320
Chua K. T. E., Pillepich A., Vogelsberger M., Hernquist L., 2019, MNRAS, 484, 476
Chua K. T. E., Vogelsberger M., Pillepich A., Hernquist L., 2022, MNRAS, 515, 2681
Codis S., Pogosyan D., Pichon C., 2018, MNRAS, 479, 973
Colberg J. M., White S. D. M., Jenkins A., Pearce F. R., 1999, MNRAS, 308, 593
Curir A., Diaferio A., de Felice F., 1993, ApJ, 413, 70
Despali G., Giocoli C., Tormen G., 2014, MNRAS, 443, 3208
Despali G., Giocoli C., Bonamigo M., Limousin M., Tormen G., 2017, MNRAS, 466, 181
Di Cintio A., Brook C. B., Dutton A. A., Macciò A. V., Stinson G. S., Knebe A., 2014, MNRAS, 441, 2986
Doroshkevich A. G., 1970, Astrofizika, 6, 581
Drakos N. E., Taylor J. E., Berrouet A., Robotham A. S. G., Power C., 2019, MNRAS, 487, 993
Dubinski J., Carlberg R. G., 1991, ApJ, 378, 496
Evans A. K. D., Bridle S., 2009, ApJ, 695, 1446
Fabricant D., Rybicki G., Gorenstein P., 1984, ApJ, 286, 186
Frenk C. S., White S. D. M., Davis M., Efstathiou G., 1988, ApJ, 327, 507
Gaia Collaboration et al., 2018, A&A, 616, A1
Gao L., Navarro J. F., Cole S., Frenk C. S., White S. D. M., Springel V., Jenkins A., Neto A. F., 2008, MNRAS, 387, 536
Ghigna S., Moore B., Governato F., Lake G., Quinn T., Stadel J., 1998, MNRAS, 300, 146
Giocoli C., Meneghetti M., Bartelmann M., Moscardini L., Boldrin M., 2012a, MNRAS, 421, 3343
Giocoli C., Meneghetti M., Ettori S., Moscardini L., 2012b, MNRAS, 426, 1558
Gnedin O. Y., Gould A., Miralda-Escudé J., Zentner A. R., 2005, ApJ, 634, 344
Gouin C., Bonnaire T., Aghanim N., 2021, A&A, 651, A56
Governato F., et al., 2012, MNRAS, 422, 1231
Hahn O., Abel T., 2011, MNRAS, 415, 2101
Helmi A., Babusiaux C., Koppelman H. H., Massari D., Veljanoski J., Brown A. G. A., 2018, Nature, 563, 85
Hoekstra H., Yee H. K. C., Gladders M. D., 2004, ApJ, 606, 67
Ibata R., Lewis G. F., Irwin M., Totten E., Quinn T., 2001, ApJ, 551, 294
Jing Y. P., Suto Y., 2002, ApJ, 574, 538
Kang X., Wang P., 2015, ApJ, 813, 6
Kawahara H., 2010, ApJ, 719, 1926
Kazantzidis S., Mayer L., Mastropietro C., Diemand J., Stadel J., Moore B., 2004, ApJ, 608, 663
Knollmann S. R., Knebe A., 2009, ApJS, 182, 608
Koposov S. E., Rix H.-W., Hogg D. W., 2010, ApJ, 712, 260
Kraljic K., et al., 2020, MNRAS, 491, 4294
Lagos C. d. P., Theuns T., Stevens A. R. H., Cortese L., Padilla N. D., Davis T. A., Contreras S., Croton D., 2017, MNRAS, 464, 3850
Lapi A., Cavaliere A., 2011, ApJ, 743, 127
Lau E. T., Nagai D., Nelson K., 2013, ApJ, 777, 151
Lau E. T., Hearin A. P., Nagai D., Cappelluti N., 2021, MNRAS, 500, 1029
Law D. R., Majewski S. R., 2010, ApJ, 714, 229
Law D. R., Johnston K. V., Majewski S. R., 2005, ApJ, 619, 807
Law D. R., Majewski S. R., Johnston K. V., 2009, ApJ, 703, L67
Libeskind N. I., Knebe A., Hoffman Y., Gottlöber S., Yepes G., Steinmetz M., 2011, MNRAS, 411, 1525
Libeskind N. I., Knebe A., Hoffman Y., Gottlöber S., 2014, MNRAS, 443, 1274
Lu Y., Mo H. J., Katz N., Weinberg M. D., 2006, MNRAS, 368, 1931
Ludlow A. D., Navarro J. F., Angulo R. E., Boylan-Kolchin M., Springel V., Frenk C., White S. D. M., 2014, MNRAS, 441, 378
Macciò A. V., Dutton A. A., van den Bosch F. C., Moore B., Potter D., Stadel J., 2007, MNRAS, 378, 55
Malhan K., Ibata R. A., 2019, MNRAS, 486, 2995
Montero-Dorta A. D., Chaves-Montero J., Artale M. C., Favole G., 2021, MNRAS, 508, 940
Mosconi M. B., Tissera P. B., Lambas D. G., Cora S. A., 2001, MNRAS, 325, 34
Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
Nelson D., et al., 2019, Computational Astrophysics and Cosmology, 6, 2
Oguri M., Takada M., Umetsu K., Broadhurst T., 2005, ApJ, 632, 841
Oguri M., Takada M., Okabe N., Smith G. P., 2010, MNRAS, 405, 2215
Oguri M., Bayliss M. B., Dahle H., Sharon K., Gladders M. D., Natarajan P., Hennawi J. F., Koester B. P., 2012, MNRAS, 420, 3213
Pace F., Schimd C., Mota D. F., Del Popolo A., 2019, J. Cosmology Astropart. Phys., 2019, 060
Paz D. J., Lambas D. G., Padilla N., Merchán M., 2006, MNRAS, 366, 1503
Pedrosa S. E., Tissera P. B., 2015, A&A, 584, A43
Pedrosa S., Tissera P. B., Scannapieco C., 2010, MNRAS, 402, 776
Peebles P. J. E., 1980, The Large-Scale Structure of the Universe
Pillepich A., et al., 2018, MNRAS, 473, 4077
Planck Collaboration et al., 2014, A&A, 571, A1
Planck Collaboration et al., 2016, A&A, 594, A13
Press W. H., Schechter P., 1974, ApJ, 187, 425
Robotham A., Phillipps S., De Propris R., 2008, ApJ, 672, 834
Rodriguez-Gomez V., et al., 2015, MNRAS, 449, 49
Rodríguez S., Garcia Lambas D., Padilla N. D., Tissera P., Bignone L., Dominguez-Tenreiro R., Gonzalez R., Pedrosa S., 2022, MNRAS, 514, 6157
Rodríguez-Puebla A., Behroozi P., Primack J., Klypin A., Lee C., Hellinger D., 2016, MNRAS, 462, 893
Sayers J., Golwala S. R., Ameglio S., Pierpaoli E., 2011, ApJ, 728, 39
Scannapieco C., Tissera P. B., White S. D. M., Springel V., 2005, MNRAS, 364, 552
Scannapieco C., Tissera P. B., White S. D. M., Springel V., 2006, MNRAS, 371, 1125
Schimd C., Sereno M., 2021, MNRAS, 502, 3911
Shao S., Cautun M., Deason A., Frenk C. S., 2021, MNRAS, 504, 6033
Soucail G., Fort B., Mellier Y., Picat J. P., 1987, A&A, 172, L14
Springel V., 2005, MNRAS, 364, 1105
Springel V., Hernquist L., 2003, MNRAS, 339, 289
Springel V., et al., 2008, MNRAS, 391, 1685
Stadel J., Potter D., Moore B., Diemand J., Madau P., Zemp M., Kuhlen M., Quilis V., 2009, MNRAS, 398, L21
Swaters R. A., Sancisi R., van der Hulst J. M., 1997, ApJ, 491, 140
Tapia B., Tissera P. B., Sillero E., Casanueva C., Pedrosa S., Bignone L., Dominguez Tenreiro R., Padilla N., 2022, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 63, 256
Taylor J. E., 2011, Advances in Astronomy, 2011, 604898
Tempel E., Guo Q., Kipper R., Libeskind N. I., 2015, MNRAS, 450, 2727
Tissera P. B., Dominguez-Tenreiro R., 1998, MNRAS, 297, 177
Tissera P. B., White S. D. M., Pedrosa S., Scannapieco C., 2010, MNRAS, 406, 922
Tissera P. B., Pedrosa S. E., Sillero E., Vilchez J. M., 2016a, MNRAS, 456, 2982
Tissera P. B., Machado R. E. G., Sanchez-Blazquez P., Pedrosa S. E., Sánchez S. F., Snaith O., Vilchez J., 2016b, A&A, 592, A93
Tollet E., et al., 2016, MNRAS, 456, 3542
Tormen G., 1997, MNRAS, 290, 411
Trayford J. W., Frenk C. S., Theuns T., Schaye J., Correa C., 2019, MNRAS, 483, 744
Vega-Ferrero J., Yepes G., Gottlöber S., 2017, MNRAS, 467, 3226
Vera-Ciro C. A., Sales L. V., Helmi A., Frenk C. S., Navarro J. F., Springel V., Vogelsberger M., White S. D. M., 2011, MNRAS, 416, 1377
Vera-Ciro C. A., Sales L. V., Helmi A., Navarro J. F., 2014, MNRAS, 439, 2863
Wang J., et al., 2011, MNRAS, 413, 1373
Wechsler R. H., Bullock J. S., Primack J. R., Kravtsov A. V., Dekel A., 2002, ApJ, 568, 52
Weinberger R., et al., 2017, MNRAS, 465, 3291
Weinberger R., Springel V., Pakmor R., 2020, ApJS, 248, 32
White S. D. M., 1984, ApJ, 286, 38
White S. D. M., Rees M. J., 1978, MNRAS, 183, 341
Wojtak R., 2013, A&A, 559, A89
Zavala J., et al., 2016, MNRAS, 460, 4466
Zel'dovich Y. B., 1970, A&A, 5, 84
Zemp M., Gnedin O. Y., Gnedin N. Y., Kravtsov A. V., 2011, ApJS, 197, 30
Zemp M., Gnedin O. Y., Gnedin N. Y., Kravtsov A. V., 2012, ApJ, 748, 54
Zhao D. H., Mo H. J., Jing Y. P., Börner G., 2003, MNRAS, 339, 12
Zhu Q., Hernquist L., Marinacci F., Springel V., Li Y., 2017, MNRAS, 466, 3876
| [] |
[
"Parallel tomography of quantum non-demolition measurements in multi-qubit devices",
"Parallel tomography of quantum non-demolition measurements in multi-qubit devices"
] | [
"L Pereira \nInstituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain\n",
"J J García-Ripoll \nInstituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain\n",
"T Ramos \nInstituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain\n"
] | [
"Instituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain",
"Instituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain",
"Instituto de Física Fundamental IFF-CSIC\nCalle Serrano 113b28006MadridSpain"
] | [] | An efficient characterization of QND measurements is an important ingredient towards certifying and improving the performance and scalability of quantum processors. In this work, we introduce a parallel tomography of QND measurements that addresses single-and two-qubit readout on a multi-qubit quantum processor. We provide an experimental demonstration of the tomographic protocol on a 7-qubit IBM-Q device, characterizing the quality of conventional qubit readout as well as generalized measurements such as parity or measurement-and-reset schemes. Our protocol reconstructs the Choi matrices of the measurement processes, extracts relevant quantifiers-fidelity, QND-ness, destructiveness-and identifies sources of errors that limit the performance of the device for repeated QND measurements. We also show how to quantify measurement cross-talk and use it to certify the quality of simultaneous readout on multiple qubits. | 10.1038/s41534-023-00688-7 | [
"https://export.arxiv.org/pdf/2204.10336v3.pdf"
] | 257,312,776 | 2204.10336 | b8cc7c9b8526f9528202159a8629be81a78a527c |
Parallel tomography of quantum non-demolition measurements in multi-qubit devices
L Pereira
Instituto de Física Fundamental IFF-CSIC
Calle Serrano 113b28006MadridSpain
J J García-Ripoll
Instituto de Física Fundamental IFF-CSIC
Calle Serrano 113b28006MadridSpain
T Ramos
Instituto de Física Fundamental IFF-CSIC
Calle Serrano 113b28006MadridSpain
Parallel tomography of quantum non-demolition measurements in multi-qubit devices
An efficient characterization of QND measurements is an important ingredient towards certifying and improving the performance and scalability of quantum processors. In this work, we introduce a parallel tomography of QND measurements that addresses single-and two-qubit readout on a multi-qubit quantum processor. We provide an experimental demonstration of the tomographic protocol on a 7-qubit IBM-Q device, characterizing the quality of conventional qubit readout as well as generalized measurements such as parity or measurement-and-reset schemes. Our protocol reconstructs the Choi matrices of the measurement processes, extracts relevant quantifiers-fidelity, QND-ness, destructiveness-and identifies sources of errors that limit the performance of the device for repeated QND measurements. We also show how to quantify measurement cross-talk and use it to certify the quality of simultaneous readout on multiple qubits.
INTRODUCTION
Quantum non-demolition (QND) measurements allow the repeated evaluation of an observable without changing its expected value [1,2]. They have been implemented in many quantum platforms, such as atomic [3-8] or solid-state systems [9-14]. In superconducting quantum processors, in particular, the most widespread qubit measurement is, in its ideal form, also a QND measurement [15-17]. In practice, this qubit readout is not yet perfectly QND and has larger errors than single- and two-qubit gates [18,19]. The origin of these measurement errors is diverse: non-dispersive interactions [20,21], leakage to excited states [22,23], decoherence [17,24], or cross-talk [25,26], and they accumulate exponentially with repeated measurements.
While the state of the art is adequate for restricted models of computation, e.g., variational quantum algorithms [27,28] or proof-of-principle quantum error correction [18,29,30], large-scale and fault-tolerant quantum computing schemes [31-35] require that we improve the quality of QND measurements through efficient, reliable, and self-consistent characterization techniques, which also help us identify and mitigate experimental errors [36].
Quantum tomography (QT) is a powerful and general technique to characterize the evolution of a physical system [37], used e.g. in superconducting qubits [38-40], trapped ions [41-43], and photonic systems [44-46]. We proposed QND measurement tomography (QND-MT) [47] as a self-consistent reconstruction of the Choi operators of a general QND detector, describing the measurement process, its dynamics, relevant quantifiers, and sources of error. A similar approach based on gate set tomography has also been developed recently [48]. QND-MT is more informationally complete than a direct estimation of readout fidelity and QND-ness [15,49,50], or a standard measurement tomography (MT) [26,51-53] of the positive operator-valued measure (POVM).
In this work, we experimentally implement an efficient parallel QND-MT to characterize the most important measurement properties of a 7-qubit IBM-Q quantum computer [19]. The protocol exploits the low correlations between the qubit readouts to implement a cheap parallel single-qubit characterization of each measurement, obtaining relevant quantifiers from the Choi operators such as the readout fidelity, QND-ness, and destructiveness [47]. We observe that the device is optimized to maximize the fidelity, calibrated at around ∼ 98% for every qubit, but not the QND-ness, which varies more across the device and is lower on average (∼ 96.7%). QND-MT also reveals that bit-flip errors are the main source of imperfections. Using two-qubit QND-MT we quantify measurement cross-talk across the device. We find a similar correlation strength between local and non-local pairs of qubits, which introduces an error of less than 1% in the simultaneous execution of qubit readout. This validates the parallel application of the single- and two-qubit tomographic protocols on the IBM-Q device, which can be executed with a constant number of circuits, avoiding the exponential scaling of a full QT. This parallelization also extends to the post-processing of data on classical computers.
Finally, we demonstrate the generality of QND-MT by reconstructing composite measurement processes relevant to quantum error correction protocols, such as parity measurements and measurement-and-reset schemes with classical feedback. Our experiment shows that the parity measurement involves more errors, mainly non-dispersive ones, than a direct QND measurement, due to the presence of an entangling gate. In addition, we observe that the measurement-and-reset scheme can enhance the QND nature of the readout.
RESULTS
QND measurement tomography on a multi-qubit device
A generalized quantum measurement of an N-qubit system in a state ρ is described by a set of non-trace-preserving quantum processes E_n, which add up to a trace-preserving one, E = ∑_n E_n [2]. Each individual process determines a post-measurement state ρ_n = E_n(ρ)/p(n), conditioned on the measurement outcome n occurring with probability p(n) = Tr(E_n(ρ)). A representation of quantum processes commonly used in quantum tomography is the Choi matrix [54]. In this representation, a measurement is described by a set of Choi operators {Υ_n} whose matrix elements are given by [47]

Υ_n^{ijkl} = ⟨ij|Υ_n|kl⟩ = ⟨i|E_n(|k⟩⟨l|)|j⟩,  (1)

with {|i⟩} the basis of the measured system of dimension d. In terms of these matrices we can conveniently determine the dynamics of the post-measurement states, E_n(ρ) = ∑_{ijkl} Υ_n^{ijkl} ρ_{kl} |i⟩⟨j|, the POVM elements, Π_n = ∑_{ijk} Υ_n^{kjki} |i⟩⟨j|, and the measurement statistics, p(n) = Tr{Π_n ρ}, where ρ_{kl} = ⟨k|ρ|l⟩ are the components of the density matrix before the measurement. Note that Υ_n is the transpose of the positive Choi operator Υ̃_n, whose components are related by ⟨ij|Υ̃_n|kl⟩ = ⟨ik|Υ_n|jl⟩ [54].
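As a concrete illustration of these relations, the following minimal sketch (our own naming and index ordering, which may differ from the transposition conventions of Ref. [47]) recovers the post-measurement state, the POVM element, and p(n) from a Choi matrix; an ideal QND qubit readout has Choi matrices |nn⟩⟨nn|.

import numpy as np

d = 2

def choi_ideal(n):
    # Choi matrix |nn><nn| of an ideal projective QND readout of |n>.
    v = np.zeros(d * d)
    v[n * d + n] = 1.0                      # the vector |nn>
    return np.outer(v, v)

def povm(choi):
    # POVM element Pi_n = (Tr_out Choi_n)^T in this convention.
    j4 = choi.reshape(d, d, d, d)           # <k i| Choi |l j>
    return np.einsum('kili->lk', j4)

def post_state(choi, rho):
    # Unnormalized post-measurement state E_n(rho).
    j4 = choi.reshape(d, d, d, d)
    return np.einsum('kl,kilj->ij', rho, j4)

rho = np.array([[0.7, 0.3], [0.3, 0.3]])    # an example qubit state
for n in (0, 1):
    p_n = np.trace(povm(choi_ideal(n)) @ rho).real
    print('p(%d) = %.2f' % (n, p_n))        # -> p(0) = 0.70, p(1) = 0.30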
The Choi matrices Υ_n^{ijkl} are a complete description of the quantum processes of a system, and from them we can extract all the relevant physical properties of the measurements. We discuss three relevant quantifiers of the measurement: the readout fidelity F, the QND-ness Q, and the destructiveness D [47] (see Methods). By comparing them, we can quantify the quality of the measurement for particular tasks and discriminate between different types of measurements. The readout fidelity F describes the efficiency of the readout irrespective of the post-measurement state, and it is thus maximal when the POVMs are projectors, Π_n = |n⟩⟨n|. Operationally, it is defined as the average probability of successfully detecting a state |n⟩ of the computational basis after preparing the system in the same state. The QND-ness Q is the fidelity with respect to an ideal measurement of an observable O, that is, a measurement that projects the states onto the eigenvectors |n⟩ of O and whose Choi matrices are projectors, Υ_n = |nn⟩⟨nn|. QND-ness incorporates information about the post-measurement states and can be determined from the average probability that states of the computational basis |n⟩ are preserved in two consecutive measurements. Finally, the destructiveness D quantifies the back-action introduced by the measurement [47]. For D = 0, the measurement is exactly QND, which means that it preserves the expected value of the observable O after consecutive measurements, ⟨O⟩ = Tr[Oρ] = Tr[O E(ρ)]. For D > 0, the destructiveness signals a deviation from the QND condition, which can occur independently of how ideal the measurement is. Therefore, it is convenient to know all three quantifiers F, Q, and D to provide a more complete analysis of general non-destructive measurements.
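Following these operational definitions, a short sketch of the first two quantifiers (our code, reusing choi_ideal, povm, and d from the sketch above; the destructiveness D requires an additional optimization over observables [47] and is omitted) is:

def fidelity(povms):
    # F: average probability of detecting |n> after preparing |n>.
    return sum(povms[n][n, n].real for n in range(len(povms))) / len(povms)

def qndness(chois):
    # Q = (1/d) sum_n <nn|Choi_n|nn>: survival over two measurements.
    return sum(chois[n][n * d + n, n * d + n].real
               for n in range(len(chois))) / len(chois)

chois = [choi_ideal(0), choi_ideal(1)]
print(fidelity([povm(c) for c in chois]), qndness(chois))   # -> 1.0 1.0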
Our QND-MT protocol [47] reconstructs the Choi matrices of a QND measurement self-consistently. As shown in Fig. 1(a), it consists of two applications of the measurement interspersed with a unitary gate V_i that prepares a complete basis of initial states, and a second gate U_j that enables a complete set of measurements. Given sufficient statistics, this protocol provides the conditional probability distributions of single QND measurements, p(n|i), and of consecutive measurements, p(nm|ij). A maximum-likelihood-based classical post-processing [55-57] transforms p(n|i) and p(nm|ij) into a physically admissible set of POVM elements {Π_n} and Choi matrices {Υ_n}, requiring the solution of 2^N + 1 optimization problems, with 2^N the number of outcomes. The full characterization of a QND detector with N qubits thus demands reconstructing 2^N Choi operators of size 4^N. In the general case, using a strategy based on Pauli observables, QND-MT requires a total of 18^N circuits, corresponding to 6^N initial gates prepared with tensor products of V_i ∈ {I, σ_x, e^{∓iπσ_y/4}, e^{∓iπσ_x/4}} and 3^N intermediate unitaries given by tensor products of U_i ∈ {I, e^{−iπσ_y/4}, e^{−iπσ_x/4}}, with I and σ_j the identity and Pauli operators. Figures 1(b)-(d) show the circuits for the particular cases of a single qubit and two qubits, as explained below.
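As a sanity check of this counting, a short sketch (our code, with schematic gate labels standing in for the unitaries listed above) enumerates the 18^N gate settings:

from itertools import product

V_GATES = ['I', 'X', '+Y90', '-Y90', '+X90', '-X90']   # 6 preparations V_i
U_GATES = ['I', '-Y90', '-X90']                         # 3 rotations U_j

def qnd_mt_settings(n_qubits):
    # All (V, U) gate settings of the N-qubit QND-MT protocol.
    return [(v, u)
            for v in product(V_GATES, repeat=n_qubits)
            for u in product(U_GATES, repeat=n_qubits)]

print(len(qnd_mt_settings(1)), len(qnd_mt_settings(2)))   # -> 18 324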
The exponential scaling in the number of circuits makes quantum tomography, in the QND-MT or in any other form, unfeasible for systems with a large number of qubits. This scaling may be avoided if the measurements of separate quantum subsystems are shown to be independent. QND measurements in superconducting circuits are implemented via dispersive readout [15-17], where each qubit is coupled to an off-resonant cavity, on which one performs homodyne detection to individually extract each qubit's state. These readouts are thus built to be independent of each other, but imperfections in the device can lead to cross-talk between the qubit measurements [26,58]. Nevertheless, these correlations can be characterized by a two-qubit QND-MT of each pair of qubits. If the correlations are weak enough, it is possible to execute a highly parallelized and scalable QND-MT for multi-qubit detectors, reducing the number of circuits and the classical post-processing time.
Parallel single-qubit QND measurement tomography

Let us first discuss the tomographic reconstruction of every single-qubit measurement of a quantum processor with N qubits. This means using QND-MT to reconstruct 2N single-qubit Choi matrices Υ_n^α, for qubits α = 1, ..., N and outcomes n = 0, 1. Single-qubit QND-MT applies two measurements interspersed with single-qubit gates V_i and U_j, as shown in Fig. 1(c), requiring the evaluation of 18 different circuits. For each pair of measurement outcomes, the associated Choi matrix Υ_n^α is estimated as the solution of a maximum-likelihood optimization problem.
Initialization and gate errors are accounted for in this optimization by means of single-qubit gate set tomography (GST) [48,59-61]. GST self-consistently characterizes the initial state, the final POVM measurement, and a complete set of linearly independent gates G_i of a device. In our case, G_i ∈ {I, σ_x, e^{−iπσ_y/4}, e^{−iπσ_x/4}} are the gates used to implement all the V_i and U_i operations of QND-MT. Note that GST requires 64 circuits, each composed of three gates G_i and a measurement, as shown in Fig. 1(b).
The execution of the N single-qubit QND-MT and GST circuits can be efficiently parallelized, applying the single-qubit operations simultaneously, as sketched in Figs. 1(b)-(c). This reduces the total number of experiments from O(N) down to the sum of 18 QND-MT and 64 GST circuits. With this refinement, we studied the readout of all qubits in the IBM quantum device ibm_perth. This processor has a quantum volume [62] of 32, a CLOPS (circuit layer operations per second) value of 2.9 × 10^3 [63], and the qubit connectivity graph shown in Fig. 1(e). Each circuit was evaluated with 2^13 shots, resulting in an experiment that takes approximately 2 minutes, with a classical post-processing of 30 seconds on a Ryzen-7 5800H processor with 8 cores.
From the reconstructed Choi matrices of each qubit, we derived the three quantifiers (readout fidelity F, QND-ness Q, and indestructiveness 1 − D) shown in Fig. 2(a). The ibm_perth processor exhibits readout fidelities between 0.969 and 0.992, with an average of F̄ = 0.98. QND-ness varies much more along the device, ranging from 0.951 in qubit α = 0 to 0.987 in α = 6, with an average of Q̄ = 0.967. Indestructiveness behaves similarly to the fidelity, except for qubit α = 1, and ranges between 0.97 and 0.991. The arithmetic mean of F, Q, and 1 − D (see the colormap in Fig. 2(a)) characterizes the performance across the device: qubits in the upper sector (α = 0, 1, 2) perform notably worse than those in the lower half of the chip.
The Choi matrices not only provide individual qubit metrics (F, Q, D) but also hint at the physical processes behind measurement errors. The Choi matrix element p_n^{a→b} = ⟨bb|Υ_n|aa⟩ quantifies the probability that the state flips from |a⟩ to |b⟩ when the outcome n is detected. Each element informs about deviations from the ideal projective measurement, p_a^{a→a} = 1, as well as the possible origins of those deviations.
Let us first put this into practice using the averaged Choi matrices Ῡ_n = (1/N) ∑_α Υ_n^α, shown in Fig. 2(b). Note how the readout of the |0⟩ state (p_0^{0→0} = 0.975) is implemented with better quality than that of |1⟩ (p_1^{1→1} = 0.960). Bit-flip noise is identified as the main source of errors, dominated by the qubit decay process |1⟩ → |0⟩ (p_1^{1→0} = 0.02), and slightly less influenced by the excitation channel |0⟩ → |1⟩ (p_1^{0→1} = 0.016). Considering that ibm_perth has a relaxation time T_1 ≈ 100 µs and a measurement time T = 700 ns [19], we estimate a baseline probability of qubit relaxation p_th = 1 − e^{−T/T_1} ≈ 0.007, which accounts for 35% of the observed decay error. The remaining bit-flip error may be due to Purcell-induced decay and other non-dispersive errors that occur during the measurement process itself [47].
This analysis may also be done qubit by qubit. For the worst-performing qubit α = 0, for instance, the Choi matrices in Fig. 2(d) reveal a larger decay error, p^{1→0}_1 = 0.021, in the outcome |1⟩, and non-dispersive errors given by elements ⟨ab|Υ_0|00⟩ and ⟨ab|Υ_1|11⟩, with a ≠ b. The projection in this outcome is thus not done correctly, which explains the reduction in the QND-ness and indestructiveness.
The parallelized tomography of the qubits has obvious performance advantages, but it could increase the error of the operations [25]. To quantify potential deviations, we have compared the outcome of the parallel tomography on ibm_perth with the independent characterization of those qubits, running the O(N) circuits separately. As shown in Fig. 2(e), the differences in the three quantifiers (fidelity |∆F| = |F_ind − F_par|, QND-ness |∆Q| = |Q_ind − Q_par|, and destructiveness |∆D| = |D_ind − D_par|) lie below 10⁻², and are smaller than the non-idealities of those quantifiers (see Fig. 2(a)). Similarly, we have quantified the average distance in diamond norm [64, 65] between the Choi operators computed with both strategies, |∆Υ|_⋄ = |Υ_ind − Υ_par|_⋄ ≤ 1, which lies below 1.4 × 10⁻² (see Fig. 2(e)), validating the use of the parallelized strategy.
Two-qubit QND measurement tomography and cross-talk quantification
The low distinguishability between parallel and independent single-qubit QND-MT suggests that measurement correlations are weak across the device. We can further quantify such correlations by comparing the joint measurement process of pairs of qubits (α, β), given by Υ^{αβ}_{mn} for outcomes m, n = 0, 1, with the product of the individual measurement processes, Υ^α_m ⊗ Υ^β_n. The two-qubit QND-MT requires the evaluation of 324 circuits: two measurement processes interspersed with layers of gates V_i and U_j (cf. Fig. 1(d)). Since characterizing all N(N − 1)/2 pairs on an N-qubit device is very costly, we first focused on neighboring qubits, which we expect to exhibit the greatest correlations. More precisely, for a device whose connectivity graph C contains M physical connections (α, β) ∈ C, we aim to reconstruct the 4M two-qubit Choi matrices.
This two-qubit QND-MT can be parallelized by executing similar circuits on non-overlapping pairs of physically connected qubits. This requires dividing the quantum processor into sets of edges that do not share a common qubit. For the 7-qubit ibm_perth and the 65-qubit ibm_brooklyn quantum processors, illustrated in Figs. 1(e)-(f), we only need three sets. For a generic planar graph with M edges, coloring theorems [66] ensure that the number of sets is never larger than 4, setting a bound of 4 × 18² on the number of circuits that does not grow with the processor's size. Finally, the protocol requires solving 5M optimization problems, a task that can be efficiently parallelized on classical computers.

Here, we employ parallel two-qubit QND-MT to characterize the readout of physically connected qubits on the IBM quantum device ibm_perth. The experiment runs in approximately 38 minutes (using 2¹³ shots per circuit) and the post-processing in 3 minutes. Fig. 3(a) shows the experimental results for the quantifiers F, Q, and 1 − D describing the two-qubit measurement. We see an overall decrease in the readout performance of pairs of qubits with respect to the single-qubit results in Fig. 2(a), but we still identify the same qualitative behavior: F and Q increase from top to bottom of the device (see the inset of Fig. 3(a)), and QND-ness is the worst and most fluctuating quantifier. As discussed below, the two-qubit quantifiers of all connected pairs are very well approximated as products of the single-qubit ones, e.g., F^{αβ} ≈ F^α F^β and Q^{αβ} ≈ Q^α Q^β. This explains the reduction in the average two-qubit fidelity and QND-ness between pairs to F̄ = 0.958 and Q̄ = 0.937. Indestructiveness 1 − D is the most stable quantifier, ranging between 0.955 and 0.97, and its average reduces by a similar amount, to 1 − D̄ = 0.963. As in the single-qubit case, we estimate the errors introduced by parallelization by comparing the parallelized two-qubit QND-MT with the independent tomography of each pair. Fig. 3(c) shows the error in fidelity, QND-ness, destructiveness, and Choi operators for each pair of physically connected qubits, as defined in the previous section. Parallelization introduces an error below 2 × 10⁻² in all quantifiers and Choi operators.
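A minimal sketch of this edge-grouping step is given below; it uses a greedy matching (not an optimal edge coloring), and the ibm_perth-like connectivity list is an assumption based on Fig. 1(e).

```python
def parallel_edge_groups(edges):
    """Greedily partition device edges (pairs of connected qubits) into
    groups that share no qubit, so each group can be measured in one
    parallel run of the two-qubit QND-MT."""
    groups, remaining = [], list(edges)
    while remaining:
        group, used, left = [], set(), []
        for a, b in remaining:
            if a in used or b in used:
                left.append((a, b))
            else:
                group.append((a, b))
                used.update((a, b))
        groups.append(group)
        remaining = left
    return groups

# Assumed connectivity of the 7-qubit ibm_perth (cf. Fig. 1(e)):
edges = [(0, 1), (1, 2), (1, 3), (3, 5), (4, 5), (5, 6)]
print(parallel_edge_groups(edges))  # three disjoint groups, as in the text
```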
We quantify the measurement crosstalk by comparing the measurement processes of individual and pairs of qubits. This is done at the level of quantifiers, introducing heuristic measures of separability for the fidelity, C[F^{αβ}] = |F^{αβ} − F^α F^β|, and for the QND-ness, C[Q^{αβ}] = |Q^{αβ} − Q^α Q^β|. It is also done at the level of operators, with estimates of the POVM correlation C[Π^{αβ}] and the Choi correlation C[Υ^{αβ}] (see Methods). As hinted above, we observe a good separability of the quantifiers. In Fig. 4(a) we see correlations of the ibm_perth device below 10⁻² for all pairs, allowing us to estimate the fidelity and QND-ness of pairs of qubits as products of the properties of individual qubits. Figure 4(a) also shows the POVM and Choi correlations for the ibm_perth device. We certify the presence of measurement crosstalk between all physically connected pairs of qubits: all POVM elements and Choi matrices are non-separable, with correlations on the order of 10⁻², which exceed the statistical error bars from the tomography for most of the qubits. This represents a crosstalk error of about 1%, which is smaller than the physical error found in single- and two-qubit tomography.
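At the level of quantifiers these separability measures are one-liners; for example, with joint and single-qubit fidelities of the magnitude reported here (illustrative numbers, not measured values):

```python
def fidelity_correlation(F_ab, F_a, F_b):
    """C[F^{ab}] = |F_ab - F_a * F_b|, and analogously for the QND-ness."""
    return abs(F_ab - F_a * F_b)

# Illustrative values of the order measured on ibm_perth:
print(fidelity_correlation(0.958, 0.980, 0.978))  # ~1e-3, i.e., weak crosstalk
```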
QND-MT is not restricted to nearest-neighbor correlations. As an example, we have analyzed the correlations between all qubits of ibm_perth and the central qubit α = 3, in 6 sets of separate experiments. This produces pairs at two different distances: the first neighbors (1, 3) and (3, 5), and the second neighbors (0, 3), (2, 3), (3, 4), and (3, 6). Fig. 4(b) shows the correlations obtained for those pairs. We can see that the correlations C[F^{αβ}] and C[Q^{αβ}] are of order 10⁻³ for all qubits, while C[Π^{αβ}] and C[Υ^{αβ}] are approximately 10⁻². In this small device we do not observe a clear decay of correlations with distance, but we verify that all correlations are smaller than the measurement errors detected in independent single-qubit tomography.
Scaling of QND-MT on larger devices
Parallel single-qubit QND-MT is an efficient technique to characterize large devices, requiring a fixed number of circuits (82 including GST) independently of the device size. Using the execution times obtained in the experiments on ibm_perth, we can extrapolate the performance on larger devices. For the 65-qubit ibm_brooklyn, with the degree-3 connectivity shown in Fig. 1(f) and a smaller CLOPS number of 1.5 × 10³ [63], we estimate 4 minutes for the single-qubit characterization and 5 minutes of post-processing on a Ryzen-7 5800H processor with 8 cores. Notice that the experimental execution times do not depend on the size of the device; they are limited by the number of CLOPS, which is typically lower for larger devices.
We have also discussed three strategies to certify the errors of parallel QND-MT. One strategy is the application of QND-MT to individual qubits in separate, non-parallel experiments. This has a cost that grows linearly, O(N), with the number of sampled qubits, but it is a routine that may be applied less frequently than the complete calibration. This method enables the development of heat maps of the chip and suggests the order of magnitude of the underlying correlations.
The second strategy is the parallelized QND-MT of pairs of neighboring qubits, a method that provides results consistent with the previous methodology, but also gives information about the strength of the crosstalk. In the two-qubit parallelized strategy, our estimates give a total of 1296 independent circuits for any device size, taking 63 minutes for the two-qubit circuit evaluation on the 65-qubit ibm_brooklyn processor, and 30 minutes of post-processing on a Ryzen-7 5800H processor with 8 cores.
The third and most expensive strategy is to implement two-qubit QND-MT for all qubit pairs of a large device, with O(log(N)) parallel groups [67] and O(N²) optimization problems. In this case, we estimate 2.5 hours for the experiment and a similar amount of post-processing to characterize the ibm_brooklyn device. This is an efficient scaling that enables a very robust calibration of the complete device, to be done only sporadically.
Finally, for larger devices, the execution and post-processing times could be too long for a complete two-qubit measurement tomography, extending to days for devices with more than 1000 qubits. In that case it makes sense to either randomly sample the pairs, or to concentrate the study on the specific regions of the chip that appeared most problematic in the first two methods.
QND measurement tomography of generalized measurements
The QND-MT protocol we introduce can be applied to any kind of generalized measurement [2]. This includes synthetic measurements that combine standard detectors with other computing elements, such as local and entangling gates, auxiliary qubits, and resets.
In this work we discuss the application to stabilizer measurements, a relevant example widely used in quantum error correction protocols [18, 29, 30]. Such measurements are usually implemented with controlled operations over an auxiliary qubit, which is finally measured and reset, to discriminate states with different stabilizer values. If we trace over the auxiliary qubit, the generalized measurement is, up to implementation errors, QND, enabling the repetitive monitoring of error syndromes.
As an illustration of how QND-MT works with a generalized measurement, we discuss a single-qubit parity measurement (PM). As shown in Fig. 5(a), this protocol includes an auxiliary qubit, a CNOT operation, and a single-qubit readout and reset. Note that, unlike all higher parity measurements, the single-qubit PM does not entangle multiple system qubits and is thus not directly applicable to quantum error correction codes. However, it already includes all the underlying operations supporting multi-qubit PM, and it can be scaled to characterize multi-qubit measurement errors in practical error correction codes. Here, we study the performance of the single-qubit PM using two fixed qubits of the ibm_perth quantum processor, and we compare it with the performance of the direct measurement (DM) on the same system qubit, as shown in Fig. 5(b). The Choi matrices and the quantifiers obtained for the parity and direct measurements are shown in Figs. 5(d) and (e), respectively.
In this study we observe a decrease of the fidelity and QND-ness of the PM with respect to the DM. The readout fidelity of the parity measurement, F_PM = 0.958, is close to the product of F_DM = 0.973 and the fidelity of the CNOT provided by IBM, F_CNOT = 0.9897. Therefore, we can conclude that this decrease is mainly due to the CNOT gate, as the error from the reset is expected to be smaller than 1% [68]. The indestructiveness is the same for parity and direct measurements, 1 − D_PM = 1 − D_DM = 0.969, which is consistent with the fact that the CNOT and reset operations do not add measurement back-action on the system. In the Choi operators, we can also see the appearance of new bars that describe the noise introduced by the CNOT gate, as well as an increase in the overall error bars and fluctuations.
Another interesting example of a generalized measurement is the measure-reset-feedback (MRF) operation, shown in Fig. 5(c). It consists of a QND measurement followed by a reset and a classically conditioned NOT operation that brings the measured qubit exactly to the quantum state selected as the measurement outcome, i.e., the qubit is reset to state |0⟩ or |1⟩ when the measurement outcome was n = 0 or 1, respectively. If the reset and NOT operations have high fidelities, measure-and-reset should fix the QND nature of a measurement, bringing the errors 1 − Q and D closer to the measurement infidelity 1 − F. We applied QND-MT to this generalized measurement using a single qubit of the IBM-Q ibm_perth processor. The resulting Choi matrices and quantifiers are shown in Figure 5(f). The MRF scheme has a better performance than the DM on the same qubit, having approximately the same fidelity F_DM ≈ F_MRF ≈ 0.975 and indestructiveness 1 − D_DM ≈ 1 − D_MRF ≈ 0.969, but with an increase of the QND-ness from Q_DM = 0.954 to Q_MRF = 0.960. Considering the error bars of the QND-ness and indestructiveness, we find that in the worst case the MRF provides a QND readout with the same quality as a direct measurement. Moreover, we also witness a reduction in the noisy components of the Choi operator, such as those describing bit-flip errors.
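For reference, the three schemes of Fig. 5 can be written down in a few lines of qiskit-style code. This is an illustrative sketch (register names are arbitrary, and c_if follows the classic qiskit interface), not the exact circuits executed on ibm_perth:

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2, "q")  # q[0]: system qubit, q[1]: auxiliary qubit
c = ClassicalRegister(1, "c")

# (b) direct measurement (DM)
dm = QuantumCircuit(q, c)
dm.measure(q[0], c[0])

# (a) single-qubit parity measurement (PM): CNOT to the ancilla,
# measure the ancilla, then reset it
pm = QuantumCircuit(q, c)
pm.cx(q[0], q[1])
pm.measure(q[1], c[0])
pm.reset(q[1])

# (c) measure-reset-feedback (MRF): measure, reset, and re-prepare
# the state selected by the outcome with a classically conditioned NOT
mrf = QuantumCircuit(q, c)
mrf.measure(q[0], c[0])
mrf.reset(q[0])
mrf.x(q[0]).c_if(c, 1)
```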
DISCUSSION
In this work we have demonstrated an efficient, highly parallelizable protocol for QND measurement tomography of a state-of-the-art multi-qubit quantum computer, which works both with single-qubit measurements and with generalized measurements, e.g., error syndrome measurements, parity measurements, etc. Our method is based on a self-consistent reconstruction of the Choi matrices of single-qubit and two-qubit measurements, which provides information about the measurement quality, the QND nature of the measurement, and the strength and type of errors.
In the single-qubit scenario, we have developed strategies to massively parallelize the tomography, an approximation that works when multiple measurements can be executed with small crosstalk or correlation. We have applied this protocol in experiments with a 7-qubit IBM quantum computer, obtaining valuable insight into the performance of the device. First of all, we have found that the chip is well tuned to high-fidelity measurements, with weak and long pulses (much longer than single- or two-qubit gates) that mitigate non-dispersive and discrimination errors, at the expense of increasing incoherent errors, in particular single-qubit bit flips. This limits the QND nature of the measurement, which fluctuates along the different qubits of the device.
We have also developed different strategies to determine whether single-qubit measurements are independent and can be parallelized. The most sophisticated strategy involves applying QND-MT to the simultaneous measurement of two qubits, to reconstruct the joint Choi matrices and quantify the degree of correlation. In the setup considered, these correlations lie below 1% and validate the parallelization strategy which, as discussed above, can be efficiently scaled to large multi-qubit processors at an almost fixed cost.
Finally, we have also demonstrated how QND-MT can be generalized to custom measurements, in particular to parity-type measurements relevant to quantum error correcting codes and to measure-and-reset schemes with classical feedback. We used the Choi matrices to identify coherent errors introduced by the CNOT gate in parity measurements, and we provided evidence that the reset operation with classical feedback is an appealing way to improve the QND quality of a measurement.

This work opens several avenues for further research. The obvious one is to use QND-MT as an input for a systematic optimization of the measurement pulses. The goal here is to optimize the driving amplitude and the measurement time, minimizing the errors that manifest in the Choi matrices. This would allow us to reduce the decay channels found in the experiment, while keeping other sources of error at bay, e.g., non-dispersive effects [21], discrimination errors [47, 69], decoherence [20], leakage to higher levels of the transmon [23], or rotating-wave corrections [22]. Another approach is to design alternative schemes for qubit readout that may be more QND [49, 50, 70], but these would add new error sources that could be similarly identified and characterized with the application of QND-MT.
An additional research avenue is to further understand and mitigate the correlations between simultaneous measurements. In this work we have explored two-qubit correlations, but higher-order correlations, involving 3 or more qubits, could also be analyzed with the help of better tomography methods, such as compressed sensing [71, 72]. These methods could also be used to quantify the readout and crosstalk errors occurring in multi-qubit stabilizer measurements involving plaquettes of 4 or more qubits, as required in practical quantum error correction codes such as the surface [18, 29, 30] or color codes [73].
METHODS
QND measurement quantifiers
To characterize the most important properties of non-destructive measurements we employ three quantifiers: the readout fidelity, the QND-ness, and the destructiveness. Here we show how to obtain these quantifiers from the reconstructed Choi matrices, as introduced in [47].
The fidelity F is the standard quantifier of a detector's readout performance, measured by the probability that an initially prepared eigenstate |n⟩ is successfully identified,
F = (1/2^N) Σ_{n=0}^{2^N − 1} ⟨n|Π_n|n⟩.    (2)
The fidelity can be interpreted as the efficiency of the readout, as it can be related to the signal-to-noise ratio of the measurement [15, 70]. It ignores any information about the post-measurement state and the QND nature of the measurement. The QND-ness Q incorporates information from the post-measurement state and quantifies how close the Choi matrices are to those of an ideal projective measurement. In quantitative terms, it is the probability that an initially prepared eigenstate |n⟩ is preserved and successfully identified in two consecutive measurements,
Q = (1/2^N) Σ_{n=0}^{2^N − 1} ⟨nn|Υ_n|nn⟩.    (3)
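Given reconstructed POVM elements and Choi matrices stored as numpy arrays, Eqs. (2) and (3) reduce to reading out diagonal entries, as in this sketch (using the same assumed |ij⟩ → d·i + j basis ordering as above):

```python
import numpy as np

def readout_fidelity(povm):
    """F of Eq. (2) for a list of 2^N POVM matrices Pi_n."""
    d = len(povm)
    return sum(np.real(povm[n][n, n]) for n in range(d)) / d

def qndness(chois):
    """Q of Eq. (3) for a list of 2^N Choi matrices Upsilon_n."""
    d = len(chois)
    return sum(np.real(chois[n][d * n + n, d * n + n]) for n in range(d)) / d
```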
The destructiveness D [47] precisely assesses the QND nature of generic measurements by verifying the preservation of the expectation value ⟨O⟩ after the measurement. Operationally, it is defined as the largest change suffered by any observable compatible with O:
D = (1/2) max_{||O_c|| = 1} ||O_c − E†(O_c)||,  [O, O_c] = 0,    (4)
where ||·|| is the Hilbert-Schmidt norm. Unlike F and Q, computing D requires a complete tomographic reconstruction of the measurement process, E†(O_c) = Σ_{ijkln} (Υ^{klij}_n)* O^{kl}_c |i⟩⟨j|, but it allows us to quantify the measurement back-action without the bias of Q towards ideal measurements [47]. Note that equation (4) is motivated by the definition of a QND measurement, Tr(Oρ) = Tr(O E(ρ)). Moving into the Heisenberg picture, this condition becomes Tr(Oρ) = Tr(E†(O)ρ), where E† is the adjoint process of E. Therefore, we can quantify how QND a measurement is by the deviation between O and E†(O), that is, ||O − E†(O)||. The last step to obtain Eq. (4) consists in searching for the largest disagreement over the set of all normalized observables compatible with O, so that we ensure that D is an upper bound for the back-action of the measurement.
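A numpy sketch of this adjoint map, under the assumed index convention Υ[k, l, i, j] = ⟨kl|Υ|ij⟩, is shown below. For a fixed observable O, half of the returned deviation lower-bounds D, since the full destructiveness maximizes over all compatible O_c:

```python
import numpy as np

def heisenberg_adjoint(chois, O):
    """E^dag(O) = sum_{ijkln} conj(Upsilon_n^{klij}) O^{kl} |i><j|."""
    d = O.shape[0]
    out = np.zeros_like(O, dtype=complex)
    for choi in chois:
        Y = choi.reshape(d, d, d, d)  # Y[k, l, i, j] = <kl|Upsilon|ij> (assumed)
        out += np.einsum("klij,kl->ij", np.conj(Y), O)
    return out

def backaction_deviation(chois, O):
    """||O - E^dag(O)|| in the Hilbert-Schmidt (Frobenius) norm."""
    return np.linalg.norm(O - heisenberg_adjoint(chois, O))
```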
Quantification of measurement cross-talk and correlations
To quantify the correlations in the measurement of pairs of qubits, we introduce heuristic metrics that compare the POVM and Choi matrices derived from two-qubit tomography with the tensor product of the operators obtained from single-qubit tomography. Although this is a comparison between two detector models rather than an intrinsic property of the operators, the outcome provides information about the operation of the device and the effect of including higher-order interactions in the QND-MT. We also use these correlations to quantify the distinguishability error of performing the tomography in parallel or independently, as shown below.
First, we define the correlation of two-qubit Choi operators, C[Υ^{αβ}]. Let Υ^α_n and Υ^β_m be the process operators of two single qubits α and β, and let Υ^{αβ}_{nm} be the joint measurement of both qubits. We define the Choi matrix correlation as
C[Υ^{αβ}] = (1/8) Σ_{nm} ||Υ^{αβ}_{nm} − Υ^α_n ⊗̃ Υ^β_m||_⋄,    (5)
where ⊗̃ is the tensor product operation in superoperator space and ||·||_⋄ is the diamond norm [64, 65]. This quantity not only evaluates the distance between the processes Υ^α_n ⊗̃ Υ^β_m and Υ^{αβ}_{nm}, but is also related to the probability of discriminating the quantum states generated by them. This probability is given by P_d = (1 + C[Υ^{αβ}])/2. Therefore, a small C[Υ^{αβ}] ≪ 1 means that the post-measurement states are nearly indistinguishable. Notice that the pre-factor in C[Υ^{αβ}] is chosen to normalize the correlation between 0 and 1.
Conveniently, we can use the same definition of Eq. (5) to evaluate the distinguishability of Choi matrices reconstructed in parallel, Υ_par, or independently, Υ_ind, as C[∆Υ] = C[Υ_par − Υ_ind], and thereby quantify the error introduced by the parallelization. This is done in Figs. 2(e) and 3(c).
In a similar spirit to C[Υ^{αβ}], we can define the correlation C[Π^{αβ}] of the POVMs of two-qubit measurements. Let Π^α_n and Π^β_m be the POVM elements of two single qubits α and β, and Π^{αβ}_{nm} be the joint POVM element of both qubits. We define the POVM correlation as
C[Π^{αβ}] = (1/4) Σ_{nm} ||Π^{αβ}_{nm} − Π^α_n ⊗ Π^β_m||_2,    (6)
where ||·||_2 is the 2-norm, that is, the largest singular value. This quantity establishes an upper bound on the average error of the probability distribution predicted by the single-qubit reconstruction, P^S_{nm} = Tr(ρ[Π^α_n ⊗ Π^β_m]), compared with that of the joint measurement, P^J_{nm} = Tr(ρ Π^{αβ}_{nm}): we have Σ_{nm} |P^S_{nm} − P^J_{nm}|/4 ≤ C[Π^{αβ}] for any density matrix ρ. Notice that C[Π^{αβ}] is normalized between 0 and 1, as is C[Υ^{αβ}].
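Equation (6) is straightforward to evaluate numerically; a sketch with the spectral norm computed by numpy:

```python
import numpy as np

def povm_correlation(joint_povm, povm_a, povm_b):
    """C[Pi^{ab}] of Eq. (6); joint_povm[n][m] is the joint element
    Pi^{ab}_{nm}, while povm_a and povm_b are the single-qubit POVM lists."""
    total = 0.0
    for n, Pa in enumerate(povm_a):
        for m, Pb in enumerate(povm_b):
            diff = joint_povm[n][m] - np.kron(Pa, Pb)
            total += np.linalg.norm(diff, ord=2)  # largest singular value
    return total / 4
```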
Maximum Likelihood estimation for QND measurement tomography
Maximum likelihood estimation (MLE) is a statistical inference method widely used in quantum tomography. MLE allows us to recover density matrices, POVMs, or Choi matrices that are meaningful and satisfy all the physical constraints of a measurement. It achieves this goal by optimizing the likelihood function L(θ|f̂) of the experimental data f̂ for a given parametric model M(θ). We employ a Gaussian distribution as the likelihood function,
L(θ|f̂) = Σ_i [f̂(i) − p(i)]² / p(i),    (7)
where f̂(i) are the estimated probabilities obtained from the experiment and p(i) are the theoretical probabilities predicted by the model M(θ). We minimize this likelihood function (7) for both the QND-MT and GST. Notice that, for simplicity, the notation of the theoretical probabilities p(i) omits the dependence on the parameters θ, and that the index i may refer to a group of indices, as shown below. The QND-MT consists of two steps: first a measurement tomography of the POVM, and then a process tomography of each Choi matrix. We reconstruct the POVM {Π_n} by first obtaining the theoretical probabilities
p(n|k) = Tr(Π_n V_k(ρ)),    (8)
of obtaining the outcome n conditioned on the application of gate V_k. We then minimize the likelihood function of the form (7) over the set of feasible matrices {Π_n} satisfying Π_n ≥ 0 and Σ_n Π_n = 𝟙. Finally, we estimate the Choi matrices Υ_n by obtaining the theoretical probabilities
p(mn|jk) = Tr[(U†_j(Π_m) ⊗ V_k(ρ)^T) Υ_n],    (9)
of obtaining the outcome n in the second measurement and the outcome m in the first measurement, conditioned on the application of gates j and k. We then minimize the corresponding likelihood function of the form (7) over the set of Choi matrices Υ_n satisfying Υ̃_n ≥ 0 and the POVM constraint Tr_1(Υ_n) = Π_n. Here, Tr_1(·) is the partial trace over the first subsystem, and Υ̃ is the transposed Choi matrix, which is a positive matrix with elements ⟨ik|Υ̃|jl⟩ = ⟨ij|Υ|kl⟩.
To separate experimental errors in gates and state preparation from the measurement errors that we want to characterize, we can apply GST prior to the QND-MT. The GST gives us an experimental estimate of the set {ρ, Π_i, G_j}, composed of estimators of the initial state, the POVM elements, and the gates, respectively. Here, {G_j} are generic trace-preserving processes and not necessarily unitary operations. The theoretical probabilities of obtaining the outcome l are
p(l|ijk) = Tr(Π_l G_k G_j G_i(ρ)),    (10)
conditioned on the application of gates i, j, k, as shown by the circuits in Fig. 1(b). We then minimize (7) by comparing the probabilities (10) with the experimental data, to obtain a physically meaningful set {ρ, Π_i, G_j} that self-consistently accounts for state preparation, gate, and measurement errors. Notice that when we use GST, we can omit the first step of QND-MT, as we already have an experimental estimate of the POVM {Π_i}. In addition, the gates {U_j} and {V_k} needed for the second step of QND-MT must be formed as concatenations of the {G_k} processes in order to account for gate errors.
In total, QND measurement tomography of the device requires solving 3N + 5M optimization problems. We solve them using sequential least-squares programming (SLSQP), enforcing the positivity of the operators via a Cholesky decomposition and the completeness constraints via Lagrange multipliers.
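To make the optimization concrete, here is a toy version of the first MLE step for a single-qubit, two-outcome POVM. It uses SLSQP and a Cholesky-like parametrization to enforce positivity, but it omits the completeness constraint (handled via Lagrange multipliers in the full solver); the preparation states and frequencies are made-up inputs:

```python
import numpy as np
from scipy.optimize import minimize

def povm_from_params(t):
    """Pi_0 = T^dag T with T lower triangular: positive by construction."""
    T = np.array([[t[0], 0.0], [t[2] + 1j * t[3], t[1]]], dtype=complex)
    return T.conj().T @ T

def likelihood(t, freqs, states):
    """Gaussian likelihood of Eq. (7) summed over both outcomes,
    with Pi_1 taken as 1 - Pi_0 for this two-outcome toy model."""
    Pi0 = povm_from_params(t)
    L = 0.0
    for f, rho in zip(freqs, states):
        p = np.clip(np.real(np.trace(Pi0 @ rho)), 1e-9, 1 - 1e-9)
        L += (f - p) ** 2 / p + ((1 - f) - (1 - p)) ** 2 / (1 - p)
    return L

# Hypothetical preparations |0>, |1>, |+> and measured outcome-0 frequencies:
states = [np.diag([1, 0]).astype(complex),
          np.diag([0, 1]).astype(complex),
          np.full((2, 2), 0.5, dtype=complex)]
freqs = [0.97, 0.03, 0.51]
res = minimize(likelihood, x0=[0.9, 0.2, 0.0, 0.0],
               args=(freqs, states), method="SLSQP")
Pi0_hat = povm_from_params(res.x)
```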
To quantify the goodness of fit of our estimators, we employ the χ² test [74, 75]. This is a standard tool for statistical hypothesis testing, that is, for rigorously deciding whether there is enough evidence to reject a model. In our work, we apply this test to all single- and two-qubit Choi matrix reconstructions and demonstrate that the fits and models agree with the experimental data within a standard confidence interval of 95%. See Supplementary Methods 1 for a detailed analysis and a description of the method. Let c(mn|jk) be the counts obtained from the QND-MT of the measurement process, which are used to obtain an estimator {Υ_n}. The goodness-of-fit χ² test for these data reads:

1. Compute the predicted probabilities,
p(nm|jk) = Tr[(F̂†_j(Π̂_m) ⊗ F̂_k(ρ̂)^T) Υ_n],    (11)
where {ρ̂, Π̂_n, F̂_j} is the gate set estimated with GST.
2. Compute the test statistic,
χ² = Σ_i (c_i − N_s p_i)² / (N_s p_i),    (12)
where N s is the number of shots used to evaluate the probabilities.
3. Set an error probability q (typically 0.05) and compute χ²_q, implicitly defined by
q = ∫_{χ²_q}^{∞} P_r(x) dx,    (13)
where P_r is the probability density function of a χ² variable with mean r,

P_r(x) = x^{(r−2)/2} e^{−x/2} / (2^{r/2} Γ(r/2)).
The mean value r is given by the number of independent measured probabilities minus the number of free parameters of the model. For an N-qubit detector, this mean value is given by
r_N = 18^N × (4^N − 1) − (2^N × 16^N − 4^N).    (16)
Each term, from left to right, corresponds to: the number of circuits (18^N), the number of independent probabilities per circuit (4^N − 1), the number of free parameters of the Choi matrices (2^N × 16^N), and the completeness constraints (4^N). For single- and two-qubit detectors these are r_1 = 26 and r_2 = 3852, respectively; a numerical check of these values is sketched after this procedure.
4. Reject the hypothesis if χ² ≥ χ²_q.
We apply this procedure to quantify the goodness of fit of our characterization of the ibm_perth device with QND-MT. As shown in Supplementary Figure 19, both the single-qubit (a) and two-qubit (b) characterizations have a χ² value below the threshold χ²_q (black horizontal line) and are therefore consistent with the experimental data at 95% confidence (q = 0.05).
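The degrees of freedom of Eq. (16) and the rejection threshold χ²_q can be checked numerically; a minimal sketch using scipy:

```python
from scipy.stats import chi2

def dof(N):
    """Mean r_N of the chi^2 statistic for an N-qubit detector, Eq. (16)."""
    return 18**N * (4**N - 1) - (2**N * 16**N - 4**N)

assert dof(1) == 26 and dof(2) == 3852
q = 0.05
threshold_1q = chi2.ppf(1 - q, dof(1))  # chi^2_q solving Eq. (13) for one qubit
```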
FIG. 1. Quantum circuits and parallelization strategies in the QND measurement tomography. Quantum circuits for (a) QND measurement tomography of generic measurements, (b) single-qubit gate set tomography, and (c) single-qubit and (d) two-qubit QND measurement tomography. (e)-(f) Schemes of the IBM quantum devices (e) ibm_perth (7 qubits) and (f) ibm_brooklyn (65 qubits). Measurements of pairs of qubits connected by a bar of the same color (red, green, and yellow) are characterized simultaneously in one run of the parallel tomography.
FIG. 2. Experimental tomographic characterization of single-qubit measurements in parallel. (a) Measurement quantifiers: fidelity F (green), QND-ness Q (orange), and indestructiveness 1 − D (blue) for each qubit. The inset represents the average performance (F + Q + 1 − D)/3 of each qubit, as they are located on the ibm_perth quantum processor. (b) Average Choi matrices Ῡ_n of all qubits for both measurement outcomes n = 0, 1. (c)-(d) Specific Choi matrices Υ⁶_n and Υ⁰_n corresponding to the qubits with the best and worst readout performance, respectively. (e) Error in the quantifiers and the Choi operators introduced by the parallelization of the QND-MT. Error bars are the standard deviation estimated with 5 realizations of the experiment.
Figures 2(c)-(d) show the experimental Choi matrices for qubits α = 6 and α = 0, respectively (all others being included in the Supplementary Figures 1). Qubit α = 6 is the best on the device, with F = 0.991, Q = 0.985, and 1 − D = 0.988. Its Choi matrices are also the closest to those of an ideal projective measurement.
FIG. 3. Experimental tomographic characterization of two-qubit measurements over every pair of connected qubits in parallel. (a) Measurement quantifiers F, Q, and 1 − D for each nearest-neighbor pair. The inset represents the average performance (F + Q + 1 − D)/3 in the device of each qubit (blue color code in circles) and of every pair of connected qubits (green color code in bars). (b) Reconstructed two-qubit Choi matrices Ῡ_{nm} averaged over all connected pairs of qubits for the four possible outcomes with n, m = 0, 1. (c) Error in the quantifiers F, Q, and 1 − D and the Choi operators introduced by the parallelization. Error bars are the standard deviation estimated with 5 realizations of the experiment.
Figure 3(b) shows the two-qubit Choi matrices averaged over all pairs of connected qubits, Ῡ_{nm} = Σ_{(α,β)∈C} Υ^{αβ}_{nm} / M. The Choi matrices for all pairs are included in the Supplementary Figures 2. The largest probabilities, of type p^{ab→ab}_{ab} = ⟨ab,ab|Ῡ_{ab}|ab,ab⟩, show the same behavior as p^{a→a}_a = ⟨aa|Υ_a|aa⟩ in the single-qubit case. The lowest deviation from the ideal measurement corresponds to the state |00⟩, and the largest to the state |11⟩ (p^{11→11}_{11} ≈ (p^{1→1}_1)²), which suffers more from bit-flip errors.
FIG. 4. Correlations in the joint measurement of pairs of qubits obtained by QND-MT. (a) Correlations in fidelity C[F], QND-ness C[Q], and Choi operators C[Υ] for the joint readout of each pair of physically connected qubits. (b) The same correlations C[F], C[Q], and C[Υ] for the readout of qubit α = 3 with every other qubit of the device.
FIG. 5. Experimental QND-MT characterization of generalized measurements. Circuits depicting (a) a single-qubit parity measurement via an auxiliary qubit, (b) a direct qubit measurement, and (c) a measurement-and-reset scheme with classical feedback. (d)-(f) Choi matrices and quantifiers corresponding to each scheme. Error bars are the standard deviation estimated from 5 realizations of the experiments.
ACKNOWLEDGMENTS
This work has been supported by funding from Spanish project PGC2018-094792-B-I00 (MCIU/AEI/FEDER, UE), CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM), and the Proyecto Sinérgico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM). L.P. was supported by ANID-PFCHA/DOCTORADO-BECAS-CHILE/2019-772200275. T.R. further acknowledges support from the Juan de la Cierva fellowship IJC2019-040260-I. We thank the IBM Quantum Team for making multiple devices available to the CSIC-IBM Quantum Hub via the IBM Quantum Experience. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.

SUPPLEMENTARY METHODS I. Goodness-of-fit via the χ² test

FIG. 19. Goodness-of-fit of the single- and two-qubit QND-MT of the ibm_perth device. All the χ² values are within the 95% confidence threshold indicated by the black horizontal line. Error bars are the standard deviation estimated with 5 realizations of the experiment.
[1] V. B. Braginsky and F. Y. Khalili, Quantum Measurement (Cambridge University Press, 1992).
[Supplementary Figures 1-11: absolute values of the reconstructed single-qubit Choi matrices Υ^α_n, for both outcomes n = 0, 1 of every qubit α = 0, ..., 6 of the ibm_perth chip shown in Fig. 1(e) of the main text, of the two-qubit Choi matrices of all connected pairs, and of the joint measurements of qubit α = 3 with all other qubits (used for Fig. 4(b) of the main text). Error bars are the standard deviation estimated with 5 realizations of the experiment.]
[2] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[3] D. Leibfried, R. Blatt, C. Monroe and D. Wineland, Quantum dynamics of single trapped ions, Rev. Mod. Phys. 75, 281 (2003).
[4] M. Raha et al., Optical quantum nondemolition measurement of a single rare earth ion qubit, Nat. Commun. 11, 1605 (2020).
[5] P. Grangier, J. A. Levenson and J.-P. Poizat, Quantum non-demolition measurements in optics, Nature 396, 537 (1998).
[6] S. Gleyzes et al., Quantum jumps of light recording the birth and death of a photon in a cavity, Nature 446, 297 (2007).
[7] E. Distante et al., Detecting an Itinerant Optical Photon Twice without Destroying It, Phys. Rev. Lett. 126, 253603 (2021).
[8] S. R. Sathyamoorthy et al., Quantum Nondemolition Detection of a Propagating Microwave Photon, Phys. Rev. Lett. 112, 093601 (2014).
[9] P. Neumann et al., Single-Shot Readout of a Single Nuclear Spin, Science 329, 542 (2010).
[10] L. Robledo et al., High-fidelity projective read-out of a solid-state spin quantum register, Nature 477, 574 (2011).
[11] T. Nakajima et al., Quantum non-demolition measurement of an electron spin qubit, Nat. Nanotechnol. 14, 555 (2019).
[12] X. Xue et al., Repetitive Quantum Nondemolition Measurement and Soft Decoding of a Silicon Spin Qubit, Phys. Rev. X 10, 021006 (2020).
[13] A. Wallraff et al., Approaching Unit Visibility for Control of a Superconducting Qubit with Dispersive Readout, Phys. Rev. Lett. 95, 060501 (2005).
[14] A. Gómez-León, F. Luis and D. Zueco, Dispersive Readout of Molecular Spin Qudits, Phys. Rev. Applied 17, 064030 (2022).
[15] A. Blais, A. L. Grimsmo, S. M. Girvin and A. Wallraff, Circuit quantum electrodynamics, Rev. Mod. Phys. 93, 025005 (2021).
[16] R. Vijay, D. H. Slichter and I. Siddiqi, Observation of Quantum Jumps in a Superconducting Artificial Atom, Phys. Rev. Lett. 106, 110502 (2011).
[17] T. Walter et al., Rapid High-Fidelity Single-Shot Dispersive Readout of Superconducting Qubits, Phys. Rev. Applied 7, 054020 (2017).
[18] Google Quantum AI, Exponential suppression of bit or phase errors with cyclic error correction, Nature 595, 383 (2021).
[19] IBM Quantum, https://quantum-computing.ibm.com/ (2021).
[20] M. Boissonneault, J. M. Gambetta and A. Blais, Dispersive regime of circuit QED: Photon-dependent qubit dephasing and relaxation rates, Phys. Rev. A 79, 013819 (2009).
[21] L. C. G. Govia and F. K. Wilhelm, Entanglement generated by the dispersive interaction: The dressed coherent state, Phys. Rev. A 93, 012316 (2016).
[22] D. Sank et al., Measurement-Induced State Transitions in a Superconducting Qubit: Beyond the Rotating Wave Approximation, Phys. Rev. Lett. 117, 190503 (2016).
[23] C. Wang, M.-C. Chen, C.-Y. Lu and J.-W. Pan, Optimal readout of superconducting qubits exploiting high-level states, Fundam. Res. 1, 16 (2021).
[24] D. H. Slichter et al., Measurement-Induced Qubit State Mixing in Circuit QED from Up-Converted Dephasing Noise, Phys. Rev. Lett. 109, 153601 (2012).
[25] K. Rudinger et al., Experimental Characterization of Crosstalk Errors with Simultaneous Gate Set Tomography, PRX Quantum 2, 040338 (2021).
[26] S. Seo and J. Bae, Measurement Crosstalk Errors in Cloud-Based Quantum Computing, IEEE Internet Comput. 26, 26 (2022).
[27] M. Cerezo et al., Variational quantum algorithms, Nat. Rev. Phys. 3, 625 (2021).
[28] K. Bharti et al., Noisy intermediate-scale quantum algorithms, Rev. Mod. Phys. 94, 015004 (2022).
[29] S. Krinner et al., Realizing repeated quantum error correction in a distance-three surface code, Nature 605, 669 (2022).
[30] Y. Zhao et al., Realization of an Error-Correcting Surface Code with Superconducting Qubits, Phys. Rev. Lett. 129, 030501 (2022).
[31] A. R. Calderbank and P. W. Shor, Good quantum error-correcting codes exist, Phys. Rev. A 54, 1098 (1996).
[32] A. M. Steane, Error Correcting Codes in Quantum Theory, Phys. Rev. Lett. 77, 793 (1996).
[33] A. G. Fowler, M. Mariantoni, J. M. Martinis and A. N. Cleland, Surface codes: Towards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012).
[34] E. T. Campbell, B. M. Terhal and C. Vuillot, Roads towards fault-tolerant universal quantum computation, Nature 549, 172 (2017).
[35] C. Chamberland et al., Building a Fault-Tolerant Quantum Computer Using Concatenated Cat Codes, PRX Quantum 3, 010329 (2022).
[36] J. Eisert et al., Quantum certification and benchmarking, Nat. Rev. Phys. 2, 382 (2020).
[37] K. Banaszek, M. Cramer and D. Gross, Focus on quantum tomography, New J. Phys. 15, 125020 (2013).
[38] M. Steffen et al., Measurement of the Entanglement of Two Superconducting Qubits via State Tomography, Science 313, 1423 (2006).
[39] L. Pereira, L. Zambrano and A. Delgado, Scalable estimation of pure multi-qubit states, npj Quantum Inf. 8, 57 (2022).
[40] A. Gaikwad, K. Shende, Arvind and K. Dorai, Implementing efficient selective quantum process tomography of superconducting quantum gates on IBM quantum experience, Sci. Rep. 12 (2022).
[41] H. Häffner et al., Scalable multiparticle entanglement of trapped ions, Nature 438, 643 (2005).
[42] T. Monz et al., Realization of the Quantum Toffoli Gate with Trapped Ions, Phys. Rev. Lett. 102, 040501 (2009).
[43] A. B. Klimov, C. Muñoz, A. Fernández and C. Saavedra, Optimal quantum-state reconstruction for cold trapped ions, Phys. Rev. A 77, 060303 (2008).
[44] M. Agnew, J. Leach, M. McLaren, F. S. Roux and R. W. Boyd, Tomography of the quantum state of photons entangled in high dimensions, Phys. Rev. A 84, 062101 (2011).
[45] R. J. Chapman, C. Ferrie and A. Peruzzo, Experimental Demonstration of Self-Guided Quantum Tomography, Phys. Rev. Lett. 117, 040402 (2016).
[46] L. Zambrano et al., Estimation of Pure States Using Three Measurement Bases, Phys. Rev. Applied 14, 064004 (2020).
[47] L. Pereira, J. J. García-Ripoll and T. Ramos, Complete Physical Characterization of Quantum Nondemolition Measurements via Tomography, Phys. Rev. Lett. 129, 010402 (2022).
[48] K. Rudinger et al., Characterizing Midcircuit Measurements on a Superconducting Qubit Using Gate Set Tomography, Phys. Rev. Applied 17, 014014 (2022).
[49] S. Touzard et al., Gated Conditional Displacement Readout of Superconducting Qubits, Phys. Rev. Lett. 122, 080502 (2019).
[50] R. Dassonneville et al., Fast High-Fidelity Quantum Nondemolition Qubit Readout via a Nonperturbative Cross-Kerr Coupling, Phys. Rev. X 10, 011045 (2020).
[51] J. S. Lundeen et al., Tomography of quantum detectors, Nature Phys. 5, 27 (2008).
[52] J. Fiurášek, Maximum-likelihood estimation of quantum measurement, Phys. Rev. A 64, 024102 (2001).
[53] Y. Chen, M. Farahzad, S. Yoo and T.-C. Wei, Detector tomography on IBM quantum computers and mitigation of an imperfect measurement, Phys. Rev. A 100, 052315 (2019).
[54] S. Milz, F. A. Pollock and K. Modi, An Introduction to Operational Quantum Dynamics, Open Syst. Inf. Dyn. 24, 1740016 (2017).
[55] J. Fiurášek, Maximum-likelihood estimation of quantum measurement, Phys. Rev. A 64, 024102 (2001).
[56] D. F. V. James, P. G. Kwiat, W. J. Munro and A. G. White, Measurement of qubits, Phys. Rev. A 64, 052312 (2001).
[57] J. Shang, Z. Zhang and H. K. Ng, Superfast maximum-likelihood reconstruction for quantum tomography, Phys. Rev. A 95, 062336 (2017).
[58] F. Arute et al., Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019).
[59] J. P. Dehollain et al., Optimization of a solid-state electron spin qubit using gate set tomography, New J. Phys. 18, 103018 (2016).
[60] E. Nielsen et al., Gate Set Tomography, Quantum 5, 557 (2021).
[61] K. Rudinger et al., Experimental Characterization of Crosstalk Errors with Simultaneous Gate Set Tomography, PRX Quantum 2, 040338 (2021).
[62] A. W. Cross, L. S. Bishop, S. Sheldon, P. D. Nation and J. M. Gambetta, Validating quantum computers using randomized model circuits, Phys. Rev. A 100, 032328 (2019).
[63] A. Wack et al., Quality, Speed, and Scale: three key attributes to measure the performance of near-term quantum computers, preprint at https://doi.org/10.48550/arXiv.2110.14108 (2021).
[64] G. Benenti and G. Strini, Computing the distance between quantum channels: usefulness of the Fano representation, J. Phys. B: At. Mol. Opt. Phys. 43, 215508 (2010).
[65] R. Blume-Kohout et al., Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography, Nat. Commun. 8, 14485 (2017).
[66] R. M. R. Lewis, Guide to Graph Colouring (Springer International Publishing, 2021).
[67] J. Cotler and F. Wilczek, Quantum Overlapping Tomography, Phys. Rev. Lett. 124, 100401 (2020).
[68] P. Magnard et al., Fast and Unconditional All-Microwave Reset of a Superconducting Qubit, Phys. Rev. Lett. 121, 060502 (2018).
[69] L. A. Martinez, Y. J. Rosen and J. L. DuBois, Improving qubit readout with hidden Markov models, Phys. Rev. A 102, 062426 (2020).
[70] N. Didier, J. Bourassa and A. Blais, Fast Quantum Nondemolition Readout by Parametric Modulation of Longitudinal Qubit-Oscillator Interaction, Phys. Rev. Lett. 115, 203601 (2015).
[71] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker and J. Eisert, Quantum State Tomography via Compressed Sensing, Phys. Rev. Lett. 105, 150401 (2010).
[72] C. A. Riofrío et al., Experimental quantum compressed sensing for a seven-qubit system, Nat. Commun. 8, 15305 (2017).
[73] L. Postler et al., Demonstration of fault-tolerant universal quantum gate operations, Nature 605, 675 (2022).
[74] N. K. Langford, Errors in quantum tomography: diagnosing systematic versus statistical errors, New J. Phys. 15, 035003 (2013).
[75] K. Temme and F. Verstraete, Quantum chi-squared and goodness of fit testing, J. Math. Phys. 56, 012202 (2015).
[76] Parallel_QND_measurement_tomography, https://doi.org/10.5281/zenodo.7341393 (2022).
| []

Exploring Partial Knowledge Base Inference in Biomedical Entity Linking | Hongyi Yuan, Keming Lu, Zheng Yuan (Alibaba Group; Tsinghua University; University of Southern California) | 10.48550/arxiv.2303.10330 | https://export.arxiv.org/pdf/2303.10330v3.pdf | 257632563 | 2303.10330 |
Exploring Partial Knowledge Base Inference in Biomedical Entity Linking

Hongyi Yuan, Keming Lu, Zheng Yuan
Alibaba Group; Tsinghua University; University of Southern California
Biomedical entity linking (EL) consists of named entity recognition (NER) and named entity disambiguation (NED). EL models are trained on corpora labeled by a predefined KB. However, it is a common scenario that only entities within a subset of the KB are of value to stakeholders. We name this scenario partial knowledge base inference: training an EL model with one KB and inferring on part of it without further training. In this work, we give a detailed definition and evaluation procedures for this practically valuable but significantly understudied scenario, and we evaluate methods from three representative EL paradigms. We construct partial KB inference benchmarks and witness a catastrophic degradation in EL performance due to a dramatic precision drop. Our findings reveal that these EL paradigms cannot correctly handle unlinkable mentions (NIL), so they are not robust to partial KB inference. We also propose two simple-and-effective redemption methods to combat the NIL issue with little computational overhead. Codes are released at https://github.com/Yuanhy1997/PartialKB-EL.
Introduction
Biomedical entity linking (EL) aims to identify entity mentions from biomedical free texts and link them to a predefined knowledge base (KB, e.g., UMLS (Bodenreider, 2004)), which is an essential step for various tasks in biomedical language understanding, including relation extraction (Li et al., 2016; Lin et al., 2020b; Hiai et al., 2021) and question answering (Jin et al., 2022).
EL naturally contains two subtasks: named entity recognition (NER) and named entity disambiguation (NED). NER is designed for mention detection, while NED aims to find the best-matching
entities from the KB. One direct way to perform EL is executing NER and NED sequentially (Liu et al., 2020; Zhang et al., 2021a; Yuan et al., 2022b). Neural NER and NED models are usually trained on corpora labeled with a KB. However, potential users of biomedical EL, including doctors, patients, and developers of knowledge graphs (KGs), may only be interested in entities inside a subset of the KB, such as SNOMED-CT (Donnelly et al., 2006), one semantic type of entities in UMLS, or a KB customized by a medical institution. Besides, doctors from different medical institutions have different terminology sets: some hospitals are using ICD-10, while others are still using ICD-9 or even custom terminology sets. Patients are only interested in specific diseases, symptoms, and drugs. As for developers of KGs, they may need to build a KG for specific diseases like diabetes (Chang et al., 2021) and COVID-19 (Reese et al., 2021), or for particular relation types like drug-drug interaction (Lin et al., 2020a). All the scenarios above need to infer EL using a partial KB. Off-the-shelf models trained on a comprehensive KB will extract mentions linked to entities outside the users' KB. Although retraining models based on users' KBs can obtain satisfactory performance, it is not feasible under most scenarios, because users can have significantly different KBs and may lack the computational resources to finetune large-scale models. Therefore, we propose a scenario focusing on inference on a partial KB. We name this scenario partial knowledge base inference: train an EL model with one KB and infer on part of this KB without further training. Fig. 1 provides a case of this scenario. This scenario is widely faced in the medical industry but remains understudied.
This work reviews and evaluates current state-of-the-art EL methods under the partial KB inference scenario. Specifically, we evaluate three paradigms: (1) NER-NED (Yuan et al., 2021, 2022c), (2) NED-NER (Zhang et al., 2022), and (3) simultaneous generation (Cao et al., 2021a). The first two paradigms are pipeline methods, which differ in the order of NER and NED. The last paradigm is an end-to-end method that generates mentions and corresponding concepts with language models. We construct partial KB inference datasets based on two widely used biomedical EL datasets: BC5CDR (Li et al., 2016) and MedMentions (Mohan and Li, 2019). Our experimental findings reveal the different implicit mechanisms and performance bottlenecks within each paradigm, showing that partial KB inference is challenging.
We also propose two redemption methods based on our findings, post-pruning and thresholding, to help models improve partial KB inference performance effortlessly. Post-pruning infers with the large KB and removes entities that are in the large KB but not in the partial KB. Post-pruning is effective but memory-unfriendly, since it stores embeddings of all entities in the large KB. Thresholding removes entities with scores below a threshold. Both redemption methods are designed to reduce the impact of NIL entities and boost EL performance. To the best of our knowledge, this is the first work that studies partial KB inference in biomedical EL. Our main contributions are the following:
• We extensively investigate partial KB inference in biomedical EL. We give a detailed definition, evaluation procedures, and open-source curated datasets.
• Experiment results show that the NED-NER paradigm is more robust to partial KB inference, while the other paradigms suffer from sharp degradation caused by NIL.
• We propose two redemption techniques to address the NIL issue with little computational overhead for better partial KB inference.
Related Work
NER and NED. In the biomedical and general domains, NER and NED are two extensively studied sub-fields of NLP. As mentioned, EL can be decomposed into and approached by NER and NED. NER is often treated as a sequence labeling task (Lample et al., 2016). Neural encoders such as LSTMs (Gridach, 2017; Habibi et al., 2017; Cho and Lee, 2019) or pretrained language models (Weber et al., 2021) encode the input text and assign BIO/BIOES tags to each word. Many biomedical pretrained language models have been proposed to enhance NER performance (Beltagy et al., 2019; Peng et al., 2019; Lee et al., 2020; Gu et al., 2021; Yuan et al., 2021). Concerning NED, most methods embed mentions and concepts into a common dense space with language models and disambiguate mentions by nearest neighbor search (Bhowmik et al., 2021; Ujiie et al., 2021a; Lai et al., 2021). Angell et al. (2021) and Agarwal et al. (2021) first rerank the disambiguation targets to boost performance. To overcome the limitation of labeled NED corpora, Liu et al. (2021) and Yuan et al. (2022c,a) leverage synonyms from huge biomedical KBs for zero-shot NED. Varma et al. (2021) and Zhang et al. (2021b) use weakly supervised data generated from Wikipedia and PubMed for data augmentation. NER and NED are both essential components of EL. In this work, we further explore partial KB inference by analyzing performance in these two steps and reveal how the design and order of NER and NED affect EL performance in partial KB inference.
Entity Linking. Although EL can be handled by a direct pipeline of NER and NED, there is limited research focusing on the task as a whole in the biomedical domain. As EL may enjoy mutual benefits from the supervision of both subtasks, Zhao et al. (2019) handle biomedical EL in a multi-task setting of NER and NED. MedLinker (Loureiro and Jorge, 2020) and Ujiie et al. (2021b) approach biomedical EL by dealing with NER and NED sequentially using a shared language model, and they devise a dictionary-matching mechanism to deal with concepts absent from the training annotations.
In the general domain, GENRE (Cao et al., 2021a,b) formulates EL as a seq2seq task, detecting and disambiguating mentions with constrained language generation in an end-to-end fashion. We categorize GENRE as simultaneous-generation EL. EntQA (Zhang et al., 2022) provides a novel framework that first finds probable concepts in the text and then treats each retrieved concept as a query to detect corresponding mentions in a question-answering fashion, which we categorize as NED-NER. The simultaneous-generation and NED-NER paradigms have not been widely examined in biomedical EL, which motivates us to examine their performance on biomedical EL and partial KB inference.
Partial KB inference in EL. In the biomedical domain, there is no prior work considering this setting to the best of our knowledge. NILINKER (Ruas and Couto, 2022) is the most related work, which focuses on linking NIL entities outside the training KB, while ours aims to infer EL on part of the training KB and discard NIL entities.
Problem Definition
Entity Linking. Let $E$ denote a target KB comprising a set of biomedical concepts. Given a text $s$ of length $n$, an EL model aims to find the mentions $m$ and corresponding concepts $e \in E$. Concretely, the model can be regarded as a mapping $f: s \rightarrow P_E$, where $P_E = \{(i, j, e) \mid 0 \le i \le j \le n,\ e \in E\}$ denotes the possible target mention-concept pairs, and $i, j$ mark the start and end positions of the mention spans in $s$.
Partial KB inference
In the conventional EL scenario, the target KB is the same in training and inference. In this paper, we consider a partial KB inference scenario involving two different KBs, $E_1$ and $E_2$, with $E_1 \supsetneq E_2$. The larger KB $E_1$ is the training KB, while the smaller KB $E_2$ is the partial KB used at inference. Models are required to map a text $s$ to a different label set $P_{E_2}$ during inference, rather than $P_{E_1}$ as during training, where $P_{E_1} \supsetneq P_{E_2}$. There thus exists a label distribution shift in this scenario. We investigate whether current entity linking models are robust to partial KB inference and how they perform under the shifted target distribution.
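To make the setup concrete, the following is a minimal sketch (with hypothetical helper names, not code from the released repository) of how one can quantify the label shift a partial KB induces on a labeled corpus; annotations are (start, end, concept) triples drawn from $P_{E_1}$:

```python
def partial_label_stats(annotations, partial_kb):
    """Share of gold mention-concept pairs that survive restriction to E2.

    annotations: iterable of (start, end, concept_id) triples from P_E1;
    partial_kb: set of concept ids defining E2 (a subset of E1).
    Pairs whose concept falls in E1 - E2 become NIL under partial KB inference.
    """
    total, kept = 0, 0
    for _, _, concept_id in annotations:
        total += 1
        kept += concept_id in partial_kb
    return kept / total if total else 0.0
```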
Experiments
In this section, we introduce our experimental setup, which includes implementation details of the EL methods we investigate (§4.1) and the datasets we create for investigating partial KB inference (§4.2).
Direct Partial KB Inference
There are three widely-used paradigms for entity linking: (1) NER-NED; (2) NED-NER; (3) simultaneous generation. We introduce representative methods for each paradigm and how these methods are accommodated to partial KB inference with minimal changes. Note that these paradigms are not aware of the full KB $E_1$ during partial KB inference. The top subgraph in Fig. 2 depicts an overview of the three paradigms. We also describe how each method is directly applied to partial KB inference, which corresponds to the Direct inference setting in Fig. 2. Hyper-parameters for the experiments are reported in Appx. §A.
NER-NED
A straightforward solution for entity linking is a two-phase paradigm that first detects entity mentions with an NER model and then disambiguates the mentions to concepts in the KB with an NED model, as shown in the top-left subgraph of Fig. 2. We fine-tune a pre-trained biomedical language model for token classification as the NER model in this paradigm; specifically, we use KeBioLM (Yuan et al., 2021) as our language model backbone. We use CODER (Yuan et al., 2022b) as our NED model, a self-supervised biomedical entity normalizer pre-trained on UMLS synonyms with contrastive learning. CODER disambiguates mentions by encoding each concept synonym and each recognized mention into dense vectors and then finding the nearest concept neighbors of each mention vector by maximum inner product search (MIPS).
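The disambiguation step can be summarized by the following minimal sketch, assuming mention and concept embeddings have already been produced by the encoders (the function and variable names here are illustrative, not CODER's actual API):

```python
import numpy as np

def disambiguate(mention_vecs, concept_vecs, concept_ids):
    """Nearest-neighbor NED: link each mention to the concept whose
    embedding has the highest cosine similarity (equivalently, MIPS over
    L2-normalized vectors).

    mention_vecs: (M, d) array; concept_vecs: (C, d) array with one row per
    concept name/synonym in the inference KB; concept_ids: list of length C.
    """
    m = mention_vecs / np.linalg.norm(mention_vecs, axis=1, keepdims=True)
    c = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    scores = m @ c.T                      # (M, C) similarity matrix
    best = scores.argmax(axis=1)          # nearest concept per mention
    return [(concept_ids[k], float(scores[i, k])) for i, k in enumerate(best)]
```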
In partial KB inference, although the NER model is not aware of the change of KB, the NED model only needs to search for the nearest concept within the partial KB. A smaller inference KB is challenging for the NED model: for a mention $m$ with corresponding concept $e \in E_1$, if $e \notin E_2$, the NED model will return an incorrect or less accurate concept from $E_2$. Since the users are only interested in concepts within $E_2$, such mentions should instead be treated as unlinkable entities (NIL).
NED-NER
Figure 2: Overview of three different entity linking paradigms and settings of partial KB inference. The top sub-graph demonstrates the three EL paradigms investigated in this work (§4.1). The middle sub-graph shows the relation between the large training KB and the partial KB at inference (§3). The bottom sub-graph shows two EL models obtained from full and partial training and three partial KB inference settings. Direct partial KB inference is the naive setting described in §4.1. Thresholding and post-pruning are two simple redemption methods we propose and describe in §5.2.

NED-NER methods are also formatted as a two-phase pipeline, as shown in the top-middle subgraph of Fig. 2. This paradigm first retrieves the concepts mentioned in the text, then identifies mentions based on the retrieved concepts. This paradigm was proposed along with the method EntQA (Zhang et al., 2022). In the concept retrieval phase of EntQA, a retriever finds the top-K related concepts for a text by embedding both into a common dense space using a bi-encoder, then searches nearest neighbors for the text by MIPS within the partial KB $E_2$. This phase retrieves concepts directly from raw text, and we view it as the NED phase. Following its original setting, we initialize the retriever from BLINK (Wu et al., 2019) checkpoints and further fine-tune the bi-encoder on our datasets with its contrastive loss functions. In the following phase, a reader is trained to identify mentions in a question-answering fashion where mentions and concepts correspond to answers and queries, respectively. This phase is viewed as NER. In partial KB inference, only concepts from the partial KB are encoded into dense vectors for MIPS.
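On the retrieval side, restricting inference to a partial KB only changes which concept embeddings are indexed for MIPS. A rough sketch of this restriction follows (illustrative names, not EntQA's released code):

```python
import numpy as np

def topk_concepts(passage_vec, concept_vecs, concept_ids, partial_kb, k=100):
    """Top-k concept retrieval for one passage, restricted to E2.

    Only rows whose concept id lies in the partial KB are indexed, so no
    out-of-KB concept can be retrieved (and hence no out-of-KB mention
    can be produced by the downstream reader).
    """
    keep = [i for i, cid in enumerate(concept_ids) if cid in partial_kb]
    sub_vecs = concept_vecs[keep]          # embeddings of partial-KB concepts
    scores = sub_vecs @ passage_vec        # inner-product retrieval scores
    order = np.argsort(-scores)[:k]
    return [(concept_ids[keep[i]], float(scores[i])) for i in order]
```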
Simultaneous Generation
In the generative paradigm for entity linking, NER and NED are achieved simultaneously, as shown in the top-right subgraph of Fig. 2. Entity linking is modeled as a sequence-to-sequence (seq2seq) task where models insert special tokens and concept names into texts with a constrained decoding technique via a Trie. We follow the detailed model design in GENRE. Given an input text $s$, the target sequence is built as

$s_{tar} = \{\dots, M_B, x_i, \dots, x_j, M_E, E_B, e, E_E, \dots\}$,

where $x_i, \dots, x_j$ are the mention tokens in $s$, $e$ is the token sequence of the concept name, and $M_B, M_E, E_B, E_E$ are special tokens marking the beginning and ending of mentions and concepts. The model is trained in seq2seq fashion by maximizing the log-likelihood with respect to each token. During inference, a token prefix trie is built to constrain the model to only output concept names within the given KB. For partial KB inference, only concept names from the partial KB are added when building the prefix trie in GENRE. This ensures all entity linking results refer only to the partial KB.
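A minimal sketch of the prefix trie behind this constrained decoding is given below (illustrative code, not GENRE's implementation); `tokenize` stands in for the model's tokenizer, and during beam search the decoder masks its vocabulary to the ids returned by `allowed`:

```python
class Trie:
    """Prefix trie over tokenized concept names for constrained decoding."""

    def __init__(self):
        self.children = {}

    def add(self, token_ids):
        node = self.children
        for t in token_ids:
            node = node.setdefault(t, {})

    def allowed(self, prefix_ids):
        """Token ids that may follow the given prefix; [] means the name
        is complete and generation of the concept can stop."""
        node = self.children
        for t in prefix_ids:
            if t not in node:
                return []
            node = node[t]
        return list(node.keys())


def build_trie(concept_names, tokenize):
    """For partial KB inference only names from E2 are added, so constrained
    beam search can never emit a concept outside the partial KB."""
    trie = Trie()
    for name in concept_names:
        trie.add(tokenize(name))
    return trie
```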
Datasets
We conduct experiments on two widely-used biomedical EL datasets and select several partial KBs for inference. Selection biases of partial KBs may be introduced into our setting because different partial KBs result in different target distributions of mention-concept annotations, which may lead to different EL difficulty due to KB size, the semantics of entities, and entity occurrence frequencies in the training set. To eliminate this effect as much as possible, we evaluate not only on the partial KBs mentioned above but also on their complements with respect to the training KBs. We add ∁ to indicate the complements. The detailed statistics of the datasets are listed in Tab. 6 of Appx. §B.
BC5CDR (Li et al., 2016) is a dataset that annotates 1,500 PubMed abstracts with 4,409 chemicals, 5,818 disease entities, and 3,116 chemical-disease interactions. All annotated mentions are linked to concepts in the target knowledge base MeSH. We use MeSH as the training KB and consider a smaller KB, MEDIC (Davis et al., 2012), as the partial KB for inference. MEDIC is a manually curated KB composed of 9,700 selected disease concepts mainly from MeSH.

MedMentions (Mohan and Li, 2019) is a large-scale biomedical entity linking dataset curated from annotated PubMed abstracts. We use the st21pv subset, which comprises 4,392 PubMed abstracts and over 350,000 annotated mentions linked to concepts of 21 selected semantic types in UMLS (Bodenreider, 2004). We use UMLS as the training KB and select three representative partial KBs: the concepts of semantic types T038 (Biologic Function) and T058 (Health Care Activity) in UMLS, and SNOMED.
Results
In this section, we present the main results of partial KB inference (§5.1). Then, we provide two redemption methods for enhancing model performance under partial KB inference (§5.2). Finally, we discuss the factors behind the difficulty of partial KB inference (§5.3).
Main Results
EL. Tab. 1 shows entity linking results under different partial KB settings. First of all, we witness a significant and consistent precision drop among all methods on MedMentions. EntQA has the smallest precision drop (5.36%), while GENRE and KeBioLM+CODER show more obvious decreases of 16.68% and 14.35%, respectively. In contrast, recall on partial KBs remains the same or even slightly increases. KeBioLM+CODER shows the largest average recall increase (6.71%), followed by EntQA (2.13%), while the average recall of GENRE remains essentially unchanged (dropping only 0.11%). Due to its stable precision, the average F1 of EntQA even slightly increases (a drop of -0.7%). However, the average F1 of GENRE and KeBioLM+CODER drops significantly on partial KBs, by 12.04% and 9.96%. The same pattern appears on BC5CDR. EntQA shows extraordinary robustness in direct partial KB inference, in contrast to the degradation of GENRE and KeBioLM+CODER. For individual partial KBs, a consistent pattern of precision and F1 drops is observed for GENRE and KeBioLM+CODER, and EntQA is more robust by comparison. The F1 degradation caused by the precision decrease reflects that the models detect redundant mentions that are out of the partial KBs.
Simple Redemptions
In the former subsections, we identified that performance drops in partial KB inference are mainly due to precision drops in mention detection. We introduce two simple-yet-effective methods to redeem these drops for partial KB inference: Post-pruning and Thresholding, which are shown in Fig. 2, with an example provided in Appx. §C.1. Both methods are motivated by removing NIL mentions to improve mention detection performance.
Post-Pruning asks the model to infer using $E_1$ and then removes mention-entity pairs whose concepts fall in $E_1 - E_2$. This redemption method is naive but requires knowledge of $E_1$.
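In code, post-pruning is just a set-membership filter over the predictions made with the full KB; a minimal sketch with hypothetical names:

```python
def post_prune(predictions, partial_kb):
    """Post-pruning: run the model with the full KB E1, then drop every
    mention-concept pair whose concept lies in E1 - E2.

    predictions: iterable of (start, end, concept_id) triples;
    partial_kb: set of concept ids defining E2.
    """
    return [(i, j, e) for (i, j, e) in predictions if e in partial_kb]
```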
Thresholding uses $E_2$ for inference. After obtaining mention-entity pairs, it searches for a fixed threshold $\theta$ on the development set that maximizes F1 and removes results with scores under the threshold. This method is not aware of $E_1$. Specifically, for KeBioLM+CODER, we set a threshold on the cosine similarities between detected mentions and their most similar concepts:

$\mathrm{score} = \max_{e \in E_2} \cos(h_m, h_e)$,

where $m$ represents the mention extracted by NER and $h$ represents embeddings.
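A minimal sketch of this thresholding step for the NER-NED pipeline follows (illustrative names; `theta` is the threshold tuned on the development set):

```python
import numpy as np

def threshold_filter(mentions, mention_vecs, concept_vecs, concept_ids, theta):
    """Keep a detected mention only if its best cosine similarity against
    the partial KB clears theta; low-scoring pairs are treated as NIL."""
    m = mention_vecs / np.linalg.norm(mention_vecs, axis=1, keepdims=True)
    c = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    sims = m @ c.T
    best = sims.argmax(axis=1)
    kept = []
    for i, (span, k) in enumerate(zip(mentions, best)):
        if sims[i, k] >= theta:
            kept.append((span, concept_ids[k], float(sims[i, k])))
    return kept
```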
For EntQA, we obtain $K$ entities from the retriever and compute the score for the $k$-th entity with starting and ending indices $s, t$:

$\mathrm{score} = P_{re}(e_k \mid e_{1:K})\, P_{st}(s \mid e_k, s)\, P_{ed}(t \mid e_k, s)$,

where $P_{re}$ computes the probability of $e_k$ among all retrieved entities, and $P_{st}$ and $P_{ed}$ compute the probabilities that $s$ and $t$ are the start and end of $e_k$. The original implementation of EntQA integrates thresholding during inference, so its partial KB inference is equivalent to inference with thresholding.
For GENRE, we use the log-likelihoods of the generated mention span and concept name in the output subsequence $s^e_m = \{M_B, x_i, \dots, x_j, M_E, E_B, e, E_E\}$ as scores:

$\mathrm{score} = \frac{1}{|s^e_m|} \sum_{x \in s^e_m} \log P_{ar}(x)$,

where $P_{ar}$ represents the token's probability conditioned autoregressively on its preceding tokens. We compare the two methods with direct partial KB inference. We also include a setting where models are trained directly on the partial KB $E_2$ for comparison. We dub this 'in-domain' setting In-KB train.
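For the generative paradigm, the same filtering idea applies to the average token log-likelihood; a minimal sketch with a hypothetical input format:

```python
def keep_by_threshold(candidates, theta):
    """candidates: list of ((start, end), concept_id, token_logprobs), where
    token_logprobs are the log-probabilities of the tokens in s^e_m.
    Keep generations whose mean token log-likelihood clears theta."""
    kept = []
    for span, concept_id, logps in candidates:
        score = sum(logps) / len(logps)    # the GENRE score defined above
        if score >= theta:
            kept.append((span, concept_id, score))
    return kept
```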
Redemption Performances. Tab. 4 shows results of partial KB inference on MEDIC and MEDIC∁ of BC5CDR. We also identify the same pattern on other subset KBs (Appx. §C.2). The paradigms behave differently under these settings.
For KeBioLM+CODER, the best improvements are brought by thresholding. Mention-concept pairs with low similarities can be categorized into concepts within $E_1 - E_2$ or incorrect mention spans. Both kinds of pairs are removed by thresholding, which results in improvements in both NER and NED. Post-pruning also improves NED by removing concepts within $E_1 - E_2$, but it cannot deal with incorrect mention spans.
For EntQA, direct partial KB transfer achieves results similar to In-KB training. The strong performance of direct partial KB transfer is due to the integrated thresholding mechanism.
For GENRE, the best performance is achieved uniformly by post-pruning. Post-pruning removes concepts within $E_1 - E_2$ to boost performance. Thresholding also brings significant improvement and performs better than In-KB training. The reason thresholding performs worse than post-pruning may be that the log-likelihood is not a direct estimate of mention-entity pair validity.
Another observation is that the two redemption methods can outperform direct In-KB training, which suggests additional supervision from the KB difference $E_1 - E_2$ can benefit partial KB inference on $E_2$.
Discussion
In this section, we further investigate what causes the performance variance across different partial KBs. In the training data, annotations associated with different partial KBs take different proportions of the total annotations, and models may over-fit the frequency of mention annotations in the training samples. We visualize the F1 drops of entity linking and mention detection against the proportion of partial-KB annotations in the training data. As shown in Fig. 3(a)(b), the performance drop is negatively correlated with the annotation proportion for GENRE and KeBioLM+CODER, and the relation is more prominent for mention detection. For EntQA, performance barely changes in terms of entity linking and mention detection due to its robustness. This negative correlation suggests the mention detection of GENRE and KeBioLM+CODER over-fits annotation frequency. EntQA detects mentions according to retrieved concepts; this explicit modeling makes it more robust, since it handles out-of-KB mentions by filtering out irrelevant concepts in the retrieval stage.
For NED, as shown in Fig. 3(c), there is no obvious trend between accuracy drops and annotation proportions. For GENRE and KeBioLM+CODER, disambiguation performance improves when inferring on partial KBs. Improvements are also observed for EntQA on concept retrieval R@100. Concept spaces shrink for partial KBs, and the disambiguation problem therefore becomes easier. Contrarily, the disambiguation accuracy of EntQA drops, probably because of the distribution shift of retrieved concepts between training and inference, which serve as inputs for the reader. The distribution shifts in the sense that, for the same number of top retrieved concepts, many lower-ranked concepts may be unseen by the reader in partial KB inference. This illustrates that EntQA is still influenced by partial KB inference, although it is robust in detecting mentions.
Conclusion
In this work, we propose a practical scenario, namely partial KB inference in biomedical EL, and give a detailed definition and evaluation procedures for it. We review and categorize current state-of-the-art entity linking models into three paradigms. Through experiments, we show the NER-NED and simultaneous generation paradigms are vulnerable to partial KB inference, mainly due to a precision drop in mention detection.
The NED-NER paradigm is more robust due to its well-modeled mention-concept reliance. We also propose two methods to redeem the performance drop in partial KB inference and discover that out-of-KB annotations may enhance in-KB performance.
Post-pruning and thresholding can both improve the performance of the NER-NED and simultaneous generation paradigms. Although post-pruning is easy to use, it needs to store the large KB $E_1$ (with its embeddings or trie), which entails large memory consumption. Thresholding does not rely on the large KB $E_1$ and also performs better for the NER-NED paradigm. Our findings illustrate the importance of partial KB inference in EL and shed light on future research directions.
Limitations
We only investigate representative methods of three widely-used EL paradigms. There are more EL methods and paradigms that we do not cover, and we leave them for future work. Furthermore, more auxiliary information in the biomedical domain could be introduced to address the NIL issue we identify in this work. For example, a hierarchical structure exists for concepts in biomedical KBs; NIL mentions may therefore be handled by linking them to hypernym concepts in the partial KBs (Ruas and Couto, 2022). We consider the hierarchical mapping between NILs and in-KB concepts a potential solution for the performance degradation in partial KB inference.
Users can obtain different entity linking results based on their own KBs, which carries the potential risk of missing important clinical information from the texts.
Ethics Statement
The datasets used for building the partial KB inference benchmarks do not contain any patient privacy information.
A Hyper-parameters
We list the hyper-parameters used to train the three EL models on MedMentions and BC5CDR in Tab. 5. All other training and inference hyper-parameters not mentioned in this table follow the public code and scripts of GENRE 1, EntQA 2, KeBioLM 3, and CODER 4. Models are trained on a single NVIDIA V100 GPU with 32GB memory.
B Datasets Statistics
Tab. 6 shows the detailed statistics of the data we use for partial KB inference. We use MeSH and MEDIC in the BC5CDR corpus 5. The BC5CDR dataset has been identified as being free of known restrictions under copyright law. We use UMLS, MeSH, and SNOMED from the 2017 AA release of UMLS. To meet the assumption that MEDIC forms a subset of MeSH, we discard the concepts in MEDIC that do not exist in MeSH. We use the st21pv version of MedMentions 6. The MedMentions dataset is under a CC0 license. We follow GenBioEL 7 for preprocessing the concepts and synonyms in the original KBs. To meet the assumption that the partial KBs contain no concepts outside the training KB, we discard the concepts in partial KBs that do not exist in UMLS.
We use precision, recall, and F1 as metrics for entity linking and mention detection, and accuracy on correctly detected mentions for disambiguation performance. We also use the top 100 recall (R@100) to illustrate the performance of the EntQA retriever.
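For reference, a minimal sketch of the micro P/R/F1 computation over mention-concept triples (illustrative code; a prediction counts as correct only if both the span and the concept match the gold annotation):

```python
def prf1(predictions, gold):
    """Micro precision/recall/F1 over (start, end, concept_id) triples."""
    pred_set, gold_set = set(predictions), set(gold)
    tp = len(pred_set & gold_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```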
C Appendix for Redemption Methods
C.1 Illustrative Example
We show an entity linking result on an example from BC5CDR:
Indomethacin induced hypotension in sodium and volume depleted rats. After a single oral dose of 4 mg/kg indomethacin (IDM) to sodium and volume depleted rats plasma renin activity (PRA) and systolic blood pressure fell significantly within four hours.
The entity linking results are shown in Tab. 7. In Post-Pruning, the final results (marked blue) are those linked to a concept in the partial KB MEDIC. In Thresholding, the final results (marked blue) are those with scores larger than a fixed threshold, which is 0.8 for KeBioLM+CODER, -0.15 for GENRE, and 0.043 for EntQA.
C.2 Results on Different Datasets
Tab. 8 shows results of the same experiments described in §5.2 on MedMentions with partial KBs SNOMED and SNOMED∁. The results in this table also support the conclusions we provide in §6. We find thresholding and post-pruning benefit EntQA in these additional results, whereas we witness a significant performance drop in Tab. 4. This suggests the performance of thresholding and post-pruning on EntQA differs across partial KBs. Nevertheless, we have not seen a dramatic performance boost (as for GENRE and KeBioLM+CODER) brought by the post-processing techniques on EntQA.
Figure 1: Visual illustration of the partial KB inference scenario: partial KB inference from the training KB MeSH (left) to a partial KB MEDIC (right). Methylprednisolone is not extracted since it is not in MEDIC.
Figure 3: The x-axis is the proportion of mention-concept annotations corresponding to the partial KBs in the training data. The six points in each line represent different partial KBs in MedMentions.
Table 1: Results for entity linking in partial KB inference. The first section shows results on MedMentions with UMLS as the training KB. The last section shows results on BC5CDR with MeSH as the training KB. Eval KB represents the different partial KBs used for inference. The average drops are averaged among metrics between full evaluation (first row in each section) and partial KB evaluation (other rows).

| Train KB | Eval KB | EntQA P | EntQA R | EntQA F1 | GENRE P | GENRE R | GENRE F1 | Ke.+CO. P | Ke.+CO. R | Ke.+CO. F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UMLS | UMLS | 45.99 | 23.68 | 31.27 | 42.44 | 43.69 | 43.05 | 33.58 | 34.94 | 34.25 |
| | SNOMED | 46.04 | 27.01 | 34.05 | 34.40 | 49.40 | 40.56 | 28.19 | 48.28 | 35.59 |
| | SNOMED∁ | 36.75 | 23.12 | 28.38 | 19.82 | 39.28 | 26.35 | 14.18 | 37.54 | 20.59 |
| | T038 | 41.52 | 31.56 | 35.86 | 17.26 | 49.53 | 25.60 | 9.78 | 50.28 | 16.37 |
| | T038∁ | 43.43 | 23.24 | 30.28 | 34.97 | 42.45 | 38.35 | 26.52 | 34.59 | 30.02 |
| | T058 | 30.01 | 25.56 | 27.61 | 7.69 | 36.06 | 12.68 | 4.76 | 41.51 | 8.54 |
| | T058∁ | 46.02 | 24.34 | 31.84 | 40.45 | 44.76 | 42.50 | 31.95 | 37.74 | 34.61 |
| | Avg. Drop | 5.36 | -2.13 | -0.7 | 16.68 | 0.11 | 12.04 | 14.35 | -6.71 | 9.96 |
| MeSH | MeSH | 83.59 | 66.48 | 74.06 | 70.92 | 68.71 | 69.80 | 72.21 | 74.84 | 73.5 |
| | MEDIC | 81.92 | 70.45 | 75.75 | 31.53 | 68.19 | 43.12 | 29.24 | 68.38 | 40.96 |
| | MEDIC∁ | 87.10 | 66.92 | 75.69 | 37.55 | 65.33 | 47.69 | 42.57 | 80.67 | 55.73 |
| | Avg. Drop | -0.92 | -2.21 | -1.66 | 36.38 | 1.95 | 24.40 | 36.31 | 0.32 | 25.16 |
Table 2: Results for mention detection in partial KB inference. Table arrangements are the same as Tab. 1.

Table 3: Results for NED in partial KB inference. The disambiguation accuracies (Acc.) are calculated with respect to correctly detected mentions. For EntQA, we additionally report recall at the top 100 (R@100) to show its first-stage concept retrieval performance.

| Train KB | Eval KB | EntQA R@100 | EntQA Acc. | GENRE Acc. | Ke.+CO. Acc. |
| --- | --- | --- | --- | --- | --- |
| UMLS | UMLS | 57.26 | 75.38 | 66.03 | 48.61 |
| | SNOMED | 65.86 | 74.81 | 75.14 | 65.22 |
| | SNOMED∁ | 61.72 | 68.67 | 65.33 | 52.64 |
| | T038 | 75.10 | 65.34 | 77.26 | 65.86 |
| | T038∁ | 58.54 | 66.89 | 65.44 | 47.99 |
| | T058 | 74.28 | 57.92 | 67.63 | 61.34 |
| | T058∁ | 58.76 | 68.52 | 67.53 | 51.24 |
| | Avg. Drop | -8.45 | 8.35 | -3.69 | -8.77 |
| MeSH | MeSH | 80.34 | 92.72 | 80.97 | 83.51 |
| | MEDIC | 88.72 | 90.95 | 83.30 | 80.20 |
| | MEDIC∁ | 77.73 | 93.23 | 76.52 | 84.92 |
| | Avg. Drop | -2.89 | 0.63 | 1.06 | 0.95 |
NER. Tab. 2 shows the results for mention detection. When inferring on partial KBs, both GENRE and KeBioLM+CODER show a drastic F1 decrease on mention detection. On MedMentions, the average drops are 20.59% and 26.68% for GENRE and KeBioLM+CODER, respectively. On BC5CDR, the average drops are 29.16% and 30.24%. The large fluctuation mainly comes from the sharp decreases in mention detection precision, which are 27.08%/34.02% on MedMentions and 44.13%/43.18% for GENRE and KeBioLM+CODER, respectively. By comparison, the recall barely changes under partial KB inference. On the contrary, the fluctuations for EntQA are marginal; across metrics and datasets, the largest drop is only 3.80% in precision. EntQA shows rather robust performance on mention detection. The trend is consistent across different subset KBs. Generally, GENRE and KeBioLM+CODER are sensitive to the changes to partial KBs. These models detect mentions in $E_1 - E_2$ during inference. Therefore, these two frameworks present large precision degradation while recall barely fluctuates. EntQA detects mentions relying on retrieved concepts from the first phase. It learns to restrict mentions according to concepts, so it behaves robustly in partial KB inference. The results indicate that a main defect of the NER-NED and simultaneous generative paradigms is that the reliance between concepts and mentions is not well modeled, hence the poor NER performance in partial KB inference.

NED. Tab. 3 shows the performance on NED for correctly detected mentions. Disambiguation accuracy shows little fluctuation for all methods and even slightly increases on MedMentions. For example, the accuracy of KeBioLM+CODER increases from 48.61% to 65.86% when the KB transfers from UMLS to the T038 semantic type. These results reveal that models learn the mapping between related mentions and concepts and are not biased by the out-of-KB annotations. The shrunk concept space of partial KBs makes the disambiguation task easier and leads to performance improvement.

Conclusion. We can conclude that (1) the NER-NED and generative frameworks are not robust to direct partial KB inference, while the performance of the NED-NER framework is more stable; (2) the degradation of entity linking performance is mainly a result of drastically degenerated mention detection performance on partial KBs, while entity disambiguation abilities are stable; (3) EntQA potentially handles NILs via filtering out irrelevant entities before NER, while other methods suffer from low precision due to mislinking NILs to existing entities.
Table 4: Results of partial KB inference, In-KB training, and two redemption methods for the three investigated models. The results are evaluated on partial KBs MEDIC and MEDIC∁ in BC5CDR. The best performance for a model in each dataset is identified in bold and the second is underlined.
Table 5: The training settings for the investigated models on BC5CDR and MedMentions. We leave out CODER as CODER is not further fine-tuned on downstream samples.

BC5CDR:

| Setting | KeBioLM | GENRE | EntQA-retriever | EntQA-reader |
| --- | --- | --- | --- | --- |
| Train Length | 20 Epochs | 8000 Steps | 20 Epochs | 20 Epochs |
| Learning Rate | 1 × 10^-5 | 3 × 10^-5 | 2 × 10^-6 | 1 × 10^-5 |
| Warmup | 570 | 600 | 20% | 6% |
| Batch Size | 16 | 8 | 8 | 2 |
| Adam β | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) |
| Adam ϵ | 1 × 10^-8 | 1 × 10^-8 | 1 × 10^-8 | 1 × 10^-8 |
| Weight Decay | 0.0 | 0.01 | 0 | 0 |
| Clip Norm | 1.0 | 0.1 | - | - |
| Label Smoothing | 0.0 | 0.1 | - | - |

MedMentions:

| Setting | KeBioLM | GENRE | EntQA-retriever | EntQA-reader |
| --- | --- | --- | --- | --- |
| Train Length | 20 Epochs | 8000 Steps | 50 Epochs | 50 Epochs |
| Learning Rate | 1 × 10^-5 | 3 × 10^-5 | 5 × 10^-6 | 1 × 10^-5 |
| Warmup | 570 | 600 | 20% | 6% |
| Batch Size | 16 | 8 | 8 | 2 |
| Adam β | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) |
| Adam ϵ | 1 × 10^-8 | 1 × 10^-8 | 1 × 10^-8 | 1 × 10^-8 |
| Weight Decay | 0.0 | 0.01 | 0 | 0 |
| Clip Norm | 1.0 | 0.1 | - | - |
| Label Smoothing | 0.0 | 0.1 | - | - |
Table 6: Dataset and corresponding knowledge base statistics. Columns: Target KB, #Concepts, #Annotations, #Annotated Concepts, #Annot. in Train, #Concepts in Train (rows grouped by BC5CDR and MedMentions).
Table 7: An illustrative example of Post-Pruning and Thresholding. PP = Post-Pruning, Th = Thresholding.

| Method | PP: Mention Span | PP: Concept | In Partial KB | Th: Mention Span | Th: Concept | Score | ≥ Threshold |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ke.+CO. | (0,12) | D007213:indomethacin | False | (0,12) | C564365:ilvasc | 0.35 | False |
| | (21,32) | D007022:hypotension | True | (21,32) | D007022:hypotension | 1.00 | True |
| | (36,53) | D005441:fluids and secretions | False | (36,53) | D003681:water stress | 0.43 | False |
| | (105,117) | D007213:indomethacin | False | (105,117) | C564365:ilvasc | 0.35 | False |
| | (119,122) | D003922:iddm | True | (119,122) | D003922:iddm | 0.94 | True |
| | (127,133) | D012964:sodium | False | (127,133) | D000747:chloroses | 0.38 | False |
| GENRE | (0,12) | D007213:amuno | False | (21,32) | D007022:hypotension | -0.067 | True |
| | (21,32) | D007022:hypotension | True | (36,42) | D007022:hypotension | -1.349 | False |
| | (36,42) | D012964:sodium | False | (105,117) | C563086:amc syndrome | -1.848 | False |
| | (105,117) | D007213:amuno | False | (127,133) | D007022:hypotension | -0.491 | False |
| | (127,133) | D012964:sodium | False | | | | |
| EntQA | (0,12) | D007213:indomethacin | False | (0,12) | C564365:ilvasc | 0.087 | True |
| | (21,32) | D007022:hypotension | True | (21,32) | D007022:hypotension | 0.098 | True |
| | (127,133) | D012964:sodium | False | (127,133) | D000747:chloroses | 0.004 | False |
Acknowledgement

We would like to express our appreciation and gratitude to Professor Sheng Yu from the Center for Statistical Science, Tsinghua University, and Professor Muhao Chen from the University of Southern California, who provided computational resources for this research. Long live the friendship among the authors.
References

Dhruv Agarwal, Rico Angell, Nicholas Monath, and Andrew McCallum. 2021. Entity linking and discovery via arborescence-based supervised clustering.

Rico Angell, Nicholas Monath, Sunil Mohan, Nishant Yadav, and Andrew McCallum. 2021. Clustering-based inference for biomedical entity linking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2598-2608, Online. Association for Computational Linguistics.

Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.

Rajarshi Bhowmik, Karl Stratos, and Gerard de Melo. 2021. Fast and effective biomedical entity linking using a dual encoder. arXiv preprint arXiv:2103.05028.

Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl_1):D267-D270.

Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021a. Autoregressive entity retrieval. In International Conference on Learning Representations.

Maryam Habibi, Leon Weber, Mariana Neves, David Luis Wiegandt, and Ulf Leser. 2017. Deep learning with word embeddings improves biomedical named entity recognition. Bioinformatics, 33(14):i37-i48.

Satoshi Hiai, Kazutaka Shimada, Taiki Watanabe, Akiva Miura, and Tomoya Iwakura. 2021. Relation extraction using multiple pre-training models in biomedical domain. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 530-537.

Qiao Jin, Zheng Yuan, Guangzhi Xiong, Qianlan Yu, Huaiyuan Ying, Chuanqi Tan, Mosha Chen, Songfang Huang, Xiaozhong Liu, and Sheng Yu. 2022. Biomedical question answering: A survey of approaches and challenges. ACM Computing Surveys, 55(2).

Tuan Lai, Heng Ji, and ChengXiang Zhai. 2021. BERT might be overkill: A tiny but effective biomedical entity linker based on residual convolutional neural networks. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1631-1639, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.

Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016.

Xuan Lin, Zhe Quan, Zhi-Jie Wang, Tengfei Ma, and Xiangxiang Zeng. 2020a. KGNN: Knowledge graph neural network for drug-drug interaction prediction. In IJCAI, volume 380, pages 2739-2745.

Yucong Lin, Keming Lu, Yulin Chen, Chuan Hong, and Sheng Yu. 2020b. High-throughput relation extraction algorithm development associating knowledge articles and electronic health records. arXiv preprint arXiv:2009.03506.

Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. 2020. Self-alignment pretraining for biomedical entity representations. arXiv preprint arXiv:2010.11784.

Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. 2021. Self-alignment pretraining for biomedical entity representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4228-4238.

Daniel Loureiro and Alípio Mário Jorge. 2020. MedLinker: Medical entity linking with neural representations and dictionary matching. In Advances in Information Retrieval, pages 230-237, Cham. Springer International Publishing.

Sunil Mohan and Donghui Li. 2019. MedMentions: A large biomedical corpus annotated with UMLS concepts. In Automated Knowledge Base Construction (AKBC).

Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019), pages 58-65.

Justin T Reese, Deepak Unni, Tiffany J Callahan, Luca Cappelletti, Vida Ravanmehr, Seth Carbon, Kent A Shefchek, Benjamin M Good, James P Balhoff, Tommaso Fontana, et al. 2021. KG-COVID-19: a framework to produce customized knowledge graphs for COVID-19 response. Patterns, 2(1):100155.

Pedro Ruas and Francisco M. Couto. 2022. NILINKER: Attention-based approach to NIL entity linking. Journal of Biomedical Informatics, 132:104137.

Shogo Ujiie, Hayate Iso, and Eiji Aramaki. 2021a. Biomedical entity linking via contrastive context matching. arXiv preprint arXiv:2106.07583.

Shogo Ujiie, Hayate Iso, Shuntaro Yada, Shoko Wakamiya, and Eiji Aramaki. 2021b. End-to-end biomedical entity linking with span-based dictionary matching. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 162-167, Online. Association for Computational Linguistics.

Maya Varma, Laurel Orr, Sen Wu, Megan Leszczynski, Xiao Ling, and Christopher Ré. 2021. Cross-domain data integration for named entity disambiguation in biomedical text. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4566-4575, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Leon Weber, Mario Sänger, Jannes Münchmeyer, Maryam Habibi, Ulf Leser, and Alan Akbik. 2021. HunFlair: an easy-to-use tool for state-of-the-art biomedical named entity recognition. Bioinformatics, 37(17):2792-2794.

Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2019. Scalable zero-shot entity linking with dense entity retrieval. arXiv preprint arXiv:1911.03814.

Sheng Yu, Zheng Yuan, Jun Xia, Shengxuan Luo, Huaiyuan Ying, Sihang Zeng, Jingyi Ren, Hongyi Yuan, Zhengyun Zhao, Yucong Lin, et al. 2022. BIOS: An algorithmically generated biomedical knowledge graph. arXiv preprint arXiv:2203.09975.

Hongyi Yuan, Zheng Yuan, and Sheng Yu. 2022a. Generative biomedical entity linking via knowledge base-guided pre-training and synonyms-aware fine-tuning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4038-4048, Seattle, United States. Association for Computational Linguistics.

Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, and Fei Huang. 2021. Improving biomedical pretrained language models with knowledge. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 180-190, Online. Association for Computational Linguistics.

Zheng Yuan, Zhengyun Zhao, Haixia Sun, Jiao Li, Fei Wang, and Sheng Yu. 2022b. CODER: Knowledge-infused cross-lingual medical term embedding for term normalization. Journal of Biomedical Informatics, 126:103983.

Zheng Yuan, Zhengyun Zhao, Haixia Sun, Jiao Li, Fei Wang, and Sheng Yu. 2022c. CODER: Knowledge-infused cross-lingual medical term embedding for term normalization. Journal of Biomedical Informatics, page 103983.

Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021a. Knowledge-rich self-supervised entity linking. arXiv preprint arXiv:2112.07887.

Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021b. Knowledge-rich self-supervision for biomedical entity linking.

Wenzheng Zhang, Wenyue Hua, and Karl Stratos. 2022. EntQA: Entity linking as question answering. In International Conference on Learning Representations.

Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):817-824.
1 GitHub repository of GENRE: https://github.com/facebookresearch/GENRE
2 GitHub repository of EntQA: https://github.com/WenzhengZhang/EntQA
3 GitHub repository of KeBioLM: https://github.com/GanjinZero/KeBioLM
4 GitHub repository of CODER: https://github.com/GanjinZero/CODER
5 BC5CDR: https://biocreative.bioinformatics.
6 GitHub repository of MedMentions: https://github.com/chanzuckerberg/MedMentions
7 GitHub repository of GenBioEL: https://github.com/Yuanhy1997/GenBioEL

Table 8: Additional results of partial KB inference, In-KB training, and two redemption methods for the three investigated models. The results are evaluated on partial KBs SNOMED and SNOMED∁ in MedMentions; columns report EL-P/R, EL-F1, NER-F1, and NED-Acc. The best performance for a model in each dataset is identified in bold and the second is underlined.
| [
"https://github.com/Yuanhy1997/"
] |
[
"The warm inflation story",
"The warm inflation story"
] | [
"Arjun Berera \nSchool of Physics and Astronomy\nUniversity of Edinburgh\nEH9 3FDEdinburghUnited Kingdom\n"
] | [
"School of Physics and Astronomy\nUniversity of Edinburgh\nEH9 3FDEdinburghUnited Kingdom"
] | [] | Warm inflation has normalized two ideas in cosmology, that in the early universe the initial primordial density perturbations generally could be of classical rather than quantum origin and that during inflation, particle production from interactions amongst quantum field, and its backreaction effects, can occur concurrent with inflationary expansion. When we first introduced these ideas, both were met with resistance, but today they are widely accepted as possibilities with many models and applications based on them, which is an indication of the widespread influence of warm inflation. Open quantum field theory, which has been utilized in studies of warm inflation, is by now a relevant subject in cosmology, in part due to this early work. In this review I first discuss the basic warm inflation dynamics. I then outline how to compute warm inflation dynamics from first principles quantum field theory (QFT) and in particular how a dissipative term arises. Warm inflation models can have an inflaton mass bigger than the Hubble scale and the inflaton field excursion can remain sub-Planckian, thus overcoming the most prohibitive problems of inflation model building. I discuss the early period of my work in developing warm inflation that helped me arrive at these important features of its dynamics. Inflationary cosmology today is immersed in hypothetical models, which by now are acting as a diversion from reaching any endgame in this field. I discuss better ways to approach model selection and give necessary requirements for a well constrained and predictive inflation model. I point out a few warm inflation models that could be developed to this extent. I discuss how at this stage more progress would be made in this subject by taking a broader view on the possible early universe solutions that include not just inflation but the diverse range of options. keywords: early universe cosmology, warm inflation, quantum field theory, model building | 10.3390/universe9060272 | [
"https://export.arxiv.org/pdf/2305.10879v1.pdf"
] | 258,762,696 | 2305.10879 | 285799f6ee4d1856dfee518b2b9d611d736c368a |
The warm inflation story
18 May 2023
Arjun Berera
School of Physics and Astronomy
University of Edinburgh
EH9 3FD Edinburgh, United Kingdom
Warm inflation has normalized two ideas in cosmology, that in the early universe the initial primordial density perturbations generally could be of classical rather than quantum origin and that during inflation, particle production from interactions amongst quantum field, and its backreaction effects, can occur concurrent with inflationary expansion. When we first introduced these ideas, both were met with resistance, but today they are widely accepted as possibilities with many models and applications based on them, which is an indication of the widespread influence of warm inflation. Open quantum field theory, which has been utilized in studies of warm inflation, is by now a relevant subject in cosmology, in part due to this early work. In this review I first discuss the basic warm inflation dynamics. I then outline how to compute warm inflation dynamics from first principles quantum field theory (QFT) and in particular how a dissipative term arises. Warm inflation models can have an inflaton mass bigger than the Hubble scale and the inflaton field excursion can remain sub-Planckian, thus overcoming the most prohibitive problems of inflation model building. I discuss the early period of my work in developing warm inflation that helped me arrive at these important features of its dynamics. Inflationary cosmology today is immersed in hypothetical models, which by now are acting as a diversion from reaching any endgame in this field. I discuss better ways to approach model selection and give necessary requirements for a well constrained and predictive inflation model. I point out a few warm inflation models that could be developed to this extent. I discuss how at this stage more progress would be made in this subject by taking a broader view on the possible early universe solutions that include not just inflation but the diverse range of options. keywords: early universe cosmology, warm inflation, quantum field theory, model building
I. INTRODUCTION
Warm inflation was introduced 28 years ago. At that time the standard inflation scenario, hereafter called cold inflation, was overwhelmingly accepted as the valid description of the early phases of the universe, with much anticipation of its confirmation from the planned cosmic microwave background (CMB) experiments within the coming decades. In that time warm inflation has gone from being considered by many in cosmology as a distraction to being one of the most promising solutions. The idea stems from an elementary observation. The central theme of inflationary dynamics has been the evolution of a scalar field, which during inflation carries most of the energy of the universe and which interacts with other fields. On the one hand, in the standard inflation picture the tacit assumption made is that these interactions have no effect apart from modifying the scalar field effective potential through quantum corrections. On the other hand, in the warm inflation picture interactions not only do that but also lead to fluctuation and dissipation effects. In condensed matter systems, interactions certainly lead in general to all three of these effects (some examples in [1]). Moreover, from a statistical mechanics perspective, the scalar field would want to dissipate its energy to other fields, and the system as a whole would try to distribute the available energy equally. Ultimately a thorough dynamical calculation is needed to address the question.
In cosmology, there is one important way this scalar field dynamics differs from condensed matter systems, which is that all processes for the former occur in an expanding universe. Expansion acts to constantly alter the state of the cosmological system. For example, due to expansion, radiation energy in the universe is continually being diluted. Similarly, the configuration of any cosmological scale process is being altered over time. Thus if the quantum mechanical processes that lead to dissipation operate on a time scale much slower than the expansion rate of the universe, then these processes would be totally shut down by expansion, even if in a nonexpanding system, like a condensed matter system, the same processes operate efficiently. This is the important question that must be understood. In the early years of inflation, there was a viewpoint that inflation had to be in a supercooled phase, since expansion would be too fast for any such microphysical processes leading to dissipation to occur. However our work in warm inflation changed this point of view. Today this possibility is accepted without much question, thus one indicator of the wide influence and success of warm inflation.
The other major influence warm inflation has had is in normalizing the possibility of the initial primordial fluctuations being classical, not quantum. Again due to our timescale analysis, we showed there is considerable dynamical range in the early universe for multiparticle processes, such as those leading to thermalization or other statistical states. When Li-Zhi Fang and I first were working on this idea [2], neither of us had a full scenario in mind. We simply wanted to demonstrate that the prevailing idea of the times, that primordial fluctuations, whether during inflation or otherwise, had to be quantum, could be questioned. When I first discussed these ideas back in the mid-90s, researchers were surprised. I think one reason the initial paper by Fang and me on thermal primordial fluctuations generated interest when it came out was that it was very novel for its time, yet we presented the idea in a way that made it look familiar. On its own this paper would probably have been forgotten had it not been that a few years afterwards, with Marcelo Gleiser and Rudnei Ramos, we showed within a quantum field theory model that the relevant microphysical timescales were possible to allow for such classical fluctuations [3,4]. Over the years since then, there has been plenty of scrutiny as to whether a full warm inflation scenario is viable, but the idea that primordial fluctuations could be classical rather than quantum had permanently taken root. This changed a line of thinking, dating back well before inflation, that primordial fluctuations must be of quantum origin. By now various ideas about classical primordial fluctuations have been suggested, and one of the successes of warm inflation has been that the plausibility of such ideas is routinely accepted.
In this review I will discuss the warm inflation scenario and the history of its development. In the next two Sections II and III, I will discuss the basic scenario and how to realize this picture from first principles quantum field theory (QFT). Then in Section IV, I will turn to the background of the idea, discussing my own steps in realizing and developing warm inflation. In Section V, I will discuss density perturbations in warm inflation and the crucial difference between the weak and strong dissipative regimes of warm inflation. In Section VI, I will discuss some of the first principles models of warm inflation that have been constructed. In Section VII, I will then talk about the advantages the warm inflation scenario has over cold inflation. Intrinsic features of its dynamics allow warm inflation to occur with the inflaton mass bigger than the Hubble scale, $m_\phi > H$, and the inflaton field excursion less than the Planck scale, $\phi < m_p$. In Section VIII a critique is given of the difficulties warm inflation faced in the field for several years, arising more from attitude than from any scientific shortcoming. The particle physics consequences of warm versus cold inflation are very different. This makes it important for CMB data to be viewed with a broad perspective, with any lingering attitudes now best set aside. I discuss warm inflation within the wider context of early universe cosmology and the general direction in which this field is headed. In Section IX, I suggest ways to better select the optimum inflation models. There is a proliferation of inflation models in the literature, and some of this is taking the focus away from addressing the key quantum field theory based problems of model building. I suggest it would be useful to recognize the degree of speculation in any given inflation model as one means to separate out relevant models. In the final Discussion Section X, I point out the varied types of ideas and directions of research related to warm inflation which I was unable to cover in detail. I also give some closing comments.
II. THE BASIC DYNAMICS
Inflation in the most general terms is a phase in which the scale factor grows at an accelerating rate, $\ddot{a} > 0$. To derive inflation, one utilizes the cosmological Einstein equations, for example the scale factor equation
$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p). \qquad (1)$$
It can be seen from this equation that obtaining an accelerating scale factor, $\ddot{a} > 0$, requires $p < -\rho/3$, so a substance with negative pressure. This means a universe where the dominant form of matter produces a repulsive form of gravity. Vacuum energy has an equation of state $p_v = -\rho_v$, which when dominant will lead to inflation. If the vacuum energy is the only energy in the universe and it is constant, it leads to exponential scale factor growth, which is the most common behavior associated with inflationary expansion and is called de Sitter space. In simplest terms, a universe undergoing accelerated expansion grows much bigger in the same amount of time than a universe undergoing decelerated expansion.
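As a quick check of this statement (a standard textbook limit, stated here only for completeness): substituting the vacuum equation of state $p_v = -\rho_v$ into Eq. (1) gives

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho_v - 3\rho_v) = \frac{8\pi G}{3}\,\rho_v > 0,$$

whose growing solution for constant $\rho_v$ is the de Sitter form $a(t) \propto e^{Ht}$, with $H = \sqrt{8\pi G \rho_v/3}$ from the accompanying Friedmann equation.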
As it turns out, the equation of state of a scalar field contains terms with negative pressure. The energy and pressure densities of a scalar field are
$$\rho = \frac{\dot\Phi^2}{2} + V(\Phi) + \frac{(\nabla\Phi)^2}{2a^2}, \qquad p = \frac{\dot\Phi^2}{2} - V(\Phi) - \frac{(\nabla\Phi)^2}{6a^2}, \qquad (2)$$
so that a potential energy, $V(\Phi)$, dominated state of a scalar field has a negative pressure. The idea generally adopted for realizing inflation from particle physics is to get the potential energy of some scalar field to dominate the energy density of the universe for some short period of time in the early universe, thereby generating the requisite amount of inflation needed to solve the cosmological puzzles. After sufficient inflation has occurred, the universe must somehow be placed back into a radiation dominated hot Big Bang regime. The scalar field that performs the task of driving inflation is called the inflaton. The inflaton must perform two essential roles: to supply an appropriate energy density conducive to inflation, and to have fluctuations with the appropriate features to seed primordial density fluctuations in the universe. In both the warm and cold inflation pictures, to realize inflation the scalar inflaton field must be potential energy dominated. The difference is, cold inflation is synonymous with supercooling of the universe during inflation [5][6][7][8] (a good early review is [9]), whereas in warm inflation the inflaton is not assumed to be an isolated, noninteracting field during the inflation period. This means, rather than the universe supercooling, it instead maintains some radiation during inflation, to the extent that it noticeably alters inflaton dynamics. In particular, the dividing point between warm and cold inflation is roughly at $\rho_r^{1/4} \approx H$, where $\rho_r$ is the radiation energy density present during inflation and $H$ is the Hubble parameter, which during the potential energy dominated inflation phase is $H^2 = 8\pi V/(3m_p^2)$, where $m_p$ is the Planck mass. Here the warm inflation regime is $\rho_r^{1/4} > H$ and the cold inflation regime is $\rho_r^{1/4} \lesssim H$. These criteria are independent of thermalization, but if such were to occur, one sees that the warm inflation regime basically corresponds to when $g_*^{1/4} T > H$, where $g_*$ is the number of particle species in the universe. The relevance of this separation into these two regimes is that the typical inflaton mass during inflation is $m_\phi \approx H$, so that when $T \gtrsim H$, thermal fluctuations of the inflaton field will become important.
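To make the negative pressure statement explicit (a one-line consequence of Eq. (2), included here for completeness): in the potential dominated limit $\dot\Phi^2,\ (\nabla\Phi)^2/a^2 \ll V(\Phi)$,

$$\rho \approx V(\Phi), \qquad p \approx -V(\Phi) \quad \Rightarrow \quad p \approx -\rho < -\rho/3,$$

so the acceleration condition following from Eq. (1) is satisfied.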
The interaction of the inflaton with other fields in general implies its effective evolution equation has terms representing dissipation of energy going out of the inflaton system into other particles. This can be expressed through a simple phenomenological Langevin type equation
$$\ddot\phi + [3H + \Upsilon]\dot\phi - \frac{1}{a^2(t)}\nabla^2\phi + \frac{\partial V}{\partial\phi} = \zeta. \qquad (3)$$
In this equation, $\zeta$ is a fluctuating random force and $\Upsilon\dot\phi$ is a dissipative term. Both are effective terms that arise due to the interaction of the inflaton with other fields. A fluctuation-dissipation relation in general will relate these two terms, with details depending on the microscopic dynamics and the statistical state of the system. For warm inflation to occur, the potential energy $\rho_v$ must be larger than both the radiation energy density $\rho_r$ and the inflaton's kinetic energy. A major difference from cold inflation is the evolution of the energy densities. In warm inflation, vacuum energy is continuously being dissipated at the rate $\dot\rho_v = -\Upsilon\dot\phi^2$, thereby preventing the radiation energy from vanishing. The General Relativity (GR) cosmological energy conservation equation,
$$\dot\rho = -3H(\rho + p), \qquad (4)$$
for this system of vacuum and radiation becomes

$$\dot\rho_r = -4H\rho_r + \Upsilon\dot\phi^2. \qquad (5)$$
The first term on the right-hand side is a sink term that is depleting radiation energy, whereas the second term is sourcing this energy. These equations are meant to demonstrate the basic idea, so the notation is kept simple; but to be clear, in the above the field $\phi$ is just the background mode, whereas in Eq. (3) it represents both the background and fluctuating modes. Noting from the sink term that the rate of depletion is proportional to the amount of radiation present, in general there will be a nonzero approximate steady state point for $\rho_r$ controlled by the source term. This holds when $\dot\phi$, $H$, and $\Upsilon$ are slowly varying, which is a good approximation during inflation. As an example, if the source term is just a constant, which is a good approximation during the slow roll evolution of the inflaton, then $\Upsilon\dot\phi^2 = {\rm const.} \equiv c_0$. In that case the solution to Eq. (5) will be $\rho_r \approx c_0/(4H) + \left(\rho_{r0} - c_0/(4H)\right)\exp(-4Ht)$. The second term on the RHS of this solution decays away any initial radiation, but at large time the radiation does not entirely vanish because of the first term on the RHS. Thus at large time, the radiation in the universe depends only on the rate at which the source is producing it and so becomes independent of initial conditions. As already noted, the presence of radiation during inflation is fully consistent with the equations of General Relativity, since the single requirement from the scale factor equation to realize inflationary scale factor growth is that the vacuum energy density is the dominant component of energy in the universe. This means supercooling is only one special limiting regime of this general case implied by these equations. Thus inflation would still happen if there was, say, a 10% or 1% etc... admixture of radiation in addition to the vacuum energy. This is an important point. To appreciate it, note that there are at least five scales in inflation: the vacuum energy $E_v \equiv \rho_v^{1/4}$, the radiation energy $E_r \equiv \rho_r^{1/4}$, the Hubble scale $H$, the inflaton mass $m_\phi^2 \equiv V''(\phi)$, and the dissipative coefficient $\Upsilon$. In the cold inflation picture, these five energy scales are related as (i) $E_v \gg E_r$, (ii) $H > m_\phi$, (iii) $m_\phi > E_r$, and (iv) $H \gg \Upsilon$. Condition (i) is simply a minimal General Relativity requirement to have inflation. Condition (ii) is necessary to be in the slow roll regime. Condition (iii) implies the universe is in a low-temperature regime where radiation has an insignificant effect on inflaton fluctuations. Finally, condition (iv) implies dissipative effects have an insignificant effect on inflaton evolution.
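For readers who prefer to see this relaxation explicitly, the following is a minimal numerical sketch of Eq. (5) with a constant source; the values of $H$, $c_0$, and the initial $\rho_r$ are arbitrary placeholders (not drawn from any model in this review), and the units are arbitrary.

```python
# Minimal sketch: integrate Eq. (5), d(rho_r)/dt = -4*H*rho_r + c0,
# with H and the source c0 = Upsilon*phidot^2 held constant (slow roll).
# All parameter values below are arbitrary placeholders.

H = 1.0        # Hubble rate (arbitrary units)
c0 = 4.0       # constant source term
rho_r = 10.0   # initial radiation energy density
dt = 1e-3      # Euler time step, small compared to 1/(4H)

t = 0.0
while t < 5.0 / H:                     # evolve for ~5 Hubble times
    rho_r += (-4.0 * H * rho_r + c0) * dt
    t += dt

print(f"rho_r at t = 5/H : {rho_r:.6f}")
print(f"attractor c0/(4H): {c0 / (4.0 * H):.6f}")
# The initial radiation is diluted away (the exp(-4Ht) piece), leaving
# rho_r pinned at the steady state c0/(4H), independent of the initial
# condition -- the behavior described in the text.
```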
For warm inflation, there are two regimes that must be addressed, weak and strong dissipative warm inflation. In both these regimes the following energy scales are the same: (i) $E_v > E_r$, (ii) $\max(\Upsilon, H) > m_\phi$, and (iii) $E_r > m_\phi$. Condition (i) is required again by General Relativity to realize inflation. Condition (ii) is the warm inflation equivalent of the slow roll regime. Condition (iii) implies the inflaton fluctuations are no longer in a zero temperature state, so that radiation will have nontrivial effects on inflaton dynamics and fluctuations. Finally the last condition, and the one that leads to two regimes of warm inflation, is (iv) $\Upsilon > 3H$, strong dissipative warm inflation, and $\Upsilon \leq 3H$, weak dissipative warm inflation. The notation here is almost self-explanatory: in the strong dissipative regime the dissipative coefficient $\Upsilon$ controls the damped evolution of the inflaton field, and in the weak dissipative regime the Hubble damping is still the dominant term.
Even if the presence of radiation does not hinder inflationary growth, it can still influence inflaton dynamics. For example, consider inflation at the Grand Unified Theory (GUT) scale, so $V^{1/4} \equiv E_v \sim 10^{15}$ GeV, which means the Hubble parameter is $H \sim V^{1/2}/m_p \sim 10^{11}$ GeV. For cold inflation and weak dissipative warm inflation, since the Hubble damping term $3H\dot\phi$ must be adequate to produce slow roll inflaton evolution, it requires that the inflaton mass $m_\phi \sim 10^{9-10}\,{\rm GeV} \lesssim 3H$. The key point to appreciate is that there are five orders of magnitude difference here between the vacuum energy scale and the scale of the inflaton mass. In other words, there is a huge difference in scales between the energy scale $m_\phi$ governing inflaton dynamics and the energy scale $E_v$ driving inflation. This implies, for example, that in order to excite the inflaton fluctuations above their ground state, it only requires a minuscule fraction of vacuum energy dissipated, at a level as low as 0.001%. This is a good indication that dissipative effects during inflation can play a noticeable role. This is only an energetic assessment, but it is suggestive of interesting physics. It leaves a question for a full dynamical calculation to answer. In particular, the universe is expanding rapidly during inflation at a rate characterized by the Hubble parameter $H$. The question then is whether the fundamental dynamics responsible for dissipation occurs at a rate faster than Hubble expansion.
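The order-of-magnitude estimate for $H$ quoted above can be checked directly (dropping the ${\cal O}(1)$ factor $\sqrt{8\pi/3}$ from $H^2 = 8\pi V/(3m_p^2)$):

$$H \sim \frac{V^{1/2}}{m_p} = \frac{(10^{15}\,{\rm GeV})^2}{\sim 10^{19}\,{\rm GeV}} \sim 10^{11}\,{\rm GeV},$$

and an inflaton mass somewhat below $3H$ then sits at $m_\phi \sim 10^{9-10}$ GeV, five to six orders of magnitude below $E_v \sim 10^{15}$ GeV.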
The other difference between warm and cold inflation is how dissipation affects the parameters of the underlying first principles model, which becomes most evident in the strong dissipative regime, $\Upsilon > 3H$. To understand this point, recall that in cold inflation the inflaton motion is damped by only the $3H\dot\phi$ term. Thus slow roll evolution requires the inflaton mass, $\sim\sqrt{V''}$, to be less than $\sim H$. However, in typical quantum field theory models of inflation it is very difficult to maintain such a tiny inflaton mass, a point which is further addressed in Sect. VI. In one form this is called the "η-problem" [10,11]. In contrast, slow roll motion in warm inflation only requires $V'' < (3H + \Upsilon)^2$, so for $\Upsilon > 3H$ the inflaton mass can be bigger than in the cold inflation case, and in particular bigger than the Hubble scale. This relaxation of the inflaton mass constraint permits much greater freedom in building realistic inflaton models, since this "η-problem", infra-red, and/or swampland problem is comfortably eliminated.
Another model building feature that differs between warm and cold inflation is the region of the scalar field background mode amplitude, the zero-mode $\phi \equiv \langle\Phi\rangle$, in which inflation occurs. For cold inflation with the simplest types of potentials, which also are the most commonly used, $V = \lambda\Phi^4/4!$ and $V = m_\phi^2\Phi^2/2$, calculations show that the initial inflaton amplitude has to be above the quantum gravity Planck scale, $\phi_i > m_p$. This is because in these models $H$ in the Hubble damping term, $3H\dot\phi$, increases with larger field amplitude, so in order to achieve an adequately long slow roll period to yield the desired 50 or so efolds of inflation, this large field amplitude is required. However, from the perspective of the ultimate goal of building a realistic particle physics inflation model, this condition poses a problem, which forces more complications into the model building. This will be discussed later in this review. On the other hand, in warm inflation both dissipation and radiation work to reduce the inflaton field amplitude. Since thermal fluctuations will always be larger than quantum ones, in order to constrain the scalar perturbation amplitude to the desired $\sim 10^{-5}$, it requires fixing the other parameters, such as the inflaton coupling and field amplitude, to be smaller, which in turn lowers the tensor-to-scalar ratio. Moreover, when $\Upsilon > 3H$ this larger dissipation implies the inflaton traverses a much smaller region of the field amplitude in the slow roll phase, so allowing its field amplitude to be smaller. The basic point that detailed calculations show is that for these simple monomial potentials, in warm inflation the inflaton field amplitude can be below the Planck scale, $\phi < m_p$, thus avoiding the quantum gravity scale. A sketch of why stronger dissipation shortens the needed field excursion is given below.
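The shortened excursion can be seen from a standard slow roll estimate (a schematic argument using the background equations above, not a result specific to any particular model). Neglecting $\ddot\phi$, Eq. (3) gives $\dot\phi \approx -V'/[3H(1+Q)]$ with $Q \equiv \Upsilon/(3H)$, so the number of e-folds accumulated over a field range is

$$N_e = \int H\,dt = \int \frac{H}{|\dot\phi|}\,|d\phi| \approx \int \frac{3H^2(1+Q)}{|V'|}\,|d\phi| = \frac{8\pi}{m_p^2}\int (1+Q)\,\frac{V}{|V'|}\,|d\phi|,$$

using $H^2 = 8\pi V/(3m_p^2)$. For fixed $N_e \sim 50$, a dissipative enhancement $1+Q \gg 1$ allows the integration range $\Delta\phi$ to shrink correspondingly, which is how $\phi < m_p$ becomes possible for monomial potentials.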
III. FIRST PRINCIPLES DYNAMICS
As already pointed out in the previous Section, the conversion of even a little vacuum energy into radiation can have a significant effect during inflation. Also, Eq. (3) has been presented as an effective evolution equation for the inflaton field once interactions with other fields are integrated out. The questions that remain to be answered from quantum field theory are whether both these effects actually occur and, if so, in what models. In order to address these questions, in this Section our task is to understand how to derive the effective equation of motion for the scalar inflaton field starting from a fundamental Lagrangian.
The basic Lagrangian quite generally for any inflaton model has the form $\mathcal{L} = \mathcal{L}_S + \mathcal{L}_R + \mathcal{L}_I$. Here $\mathcal{L}_S$ is the inflaton system Lagrangian, which has the general form
$$\mathcal{L}_S = \frac{1}{2}\dot\Phi^2 - \frac{1}{2}(\nabla\Phi)^2 - V(\Phi). \qquad (6)$$
The inflaton in any model must interact with other fields, since channels must exist through which the vacuum energy contained in the inflaton field can ultimately be released into radiation energy, so that inflation ends and the universe is put into a Hot Big Bang evolution. These interactions are contained in the $\mathcal{L}_I$ part of the above Lagrangian. The question is whether this conversion process occurs exclusively at the end of inflation, as pictured in cold inflation, or concurrent with inflation, as pictured in warm inflation. Some common types of interactions are the inflaton coupled to bosonic fields such as $\frac{1}{2}g^2\Phi^2\chi^2$ or to fermion fields as $h\Phi\bar\psi\psi$, i.e. $-\mathcal{L}_I = \frac{1}{2}g^2\Phi^2\chi^2 + h\Phi\bar\psi\psi$.
Finally, $\mathcal{L}_R$ contains all other terms associated with all fields aside from the inflaton that form the radiation bath or reservoir, like the $\chi$ and $\psi$ fields in this example. For this Lagrangian, the quantum operator equations of motion can be immediately written down, one for each field. These equations are generally coupled to each other due to nonlinear interactions. We are interested in the evolution equation of the fields, and in particular the expectation value evolution of the inflaton field, given the state of the system at some initial time $t_i$. Thus we wish to obtain the effective equation of motion for the scalar inflaton field configuration $\phi \equiv \langle\Phi\rangle$, after integrating out the quantum fluctuations in $\Phi$ and the effects of all other fields with which $\phi$ interacts, like the $\chi$ and $\psi$ fields in $\mathcal{L}_I$. This is a typical "system-reservoir" decomposition of the problem, as familiar in statistical mechanics [1]. In our case the system is $\phi$ and the reservoir is all the other dynamical degrees of freedom.
The system-reservoir approach has applications to many problems in physics. It is instructive to state a few examples here. One of the most common examples is Brownian motion, where the evolution of one singled out particle is of interest, when it is immersed in a fluid and interacts with the particles in that fluid. One seeks the evolution equation for this Brownian particle, once the effect of all the other particles is integrated out and represented in this equation through effective terms. In our problem the background field $\phi$ is the analog of the Brownian particle, and the reservoir bath in this case contains the $\Phi$ quantum modes, the scalar $\chi$, and the spinor $\psi$. In condensed matter physics, the system-reservoir, or open quantum system, approach is widely used. Some examples are the tunnelling of a trapped flux in a SQUID, the interaction of electrons with polarons in a metal, and Josephson junction arrays [1,12].
In order to obtain the $\phi$ effective equation of motion, the procedure is first to replace the field $\Phi$ in the Lagrangian by $\Phi = \phi + \kappa$, where $\langle\Phi\rangle \equiv \phi$ and $\kappa$ are the quantum fluctuations of the $\Phi$ field. Taking for example the potential $V = m_\phi^2\Phi^2/2$, the equation of motion for $\phi$ then becomes
$$\ddot\phi + 3H\dot\phi + m_\phi^2\phi - \frac{1}{a^2(t)}\nabla^2\phi + g^2\phi\langle\chi^2\rangle + g^2\langle\kappa\chi^2\rangle + h\langle\bar\psi\psi\rangle = 0. \qquad (7)$$
One now wishes to solve the quantum operator equations of motion for all the other fields, i.e. $\kappa$, $\chi$ and $\psi$, as functions of $\phi$, substitute these into Eq. (7), and then take the specified expectation values. What would emerge from this is the sought after effective evolution equation for $\phi$. This is in principle; in practice it cannot be done exactly, so various perturbative and resummation methods are used. In this review we will not explore these approximation methods, but the interested reader can examine the following [3,[13][14][15][16][17]]. Here only a few general features of the effective $\phi$ equation of motion are highlighted. First, since $\phi$ is singled out, it becomes an open system, so it is expected that the $\phi$ effective equation of motion will be nonconservative. Second, the fields $\chi$, $\psi$ etc... at a given time $t_0$ in general will be functions of $\phi$ at all earlier times $t < t_0$. Thus the expectation values $\langle\chi^2\rangle$ etc... in Eq. (7) will be nonlocal in time with respect to $\phi$, which is consistent with the first general fact, since this will lead to a nonconservative equation. These time nonlocal terms are then expressed in a derivative expansion of $\phi$ with respect to time, and when the evolution is adequately slow, only the leading term is retained to give the dissipative $\Upsilon\dot\phi$ term in Eq. (3). For the scalar field background mode, the emerging evolution equation is simply $\ddot\phi + [3H + \Upsilon]\dot\phi + dV/d\phi = 0$, which after multiplying through by a factor of $\dot\phi$ can also be written in terms of the scalar field Hamiltonian as $dH_S/dt = -[3H + \Upsilon]\dot\phi^2$ [18]. This equation conveys that the loss of energy in the scalar inflaton field sector is from the two terms on the right hand side, one due to cosmological expansion and the other due to dissipation, with the dissipative term then sourcing that energy to radiation, as shown in Eq. (5). The derivative expansion mentioned above means the leading time nonlocal term associated with dissipation goes as $\dot\phi$, with $\Upsilon$, which has dimensions of rate (energy dimension one), controlling how fast the kinetic motion of the scalar field decays its energy into particles due to its coupling to other fields. The QFT formalism for computing $\Upsilon$ from the microphysical dynamics that folds into producing this macroscopic dissipation term can be found in [3,16,17,19].
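Schematically (this is only a sketch of the structure of the derivative expansion, not a substitute for the full computations in [3,16,17,19]), the integrated-out fields contribute a term of the generic form $\int_{t_i}^{t} dt'\, K(t-t')\,\dot\phi(t')$ to the $\phi$ equation of motion, with a memory kernel $K$ built from reservoir correlation functions. If $\dot\phi$ varies slowly over the memory time of $K$, one may set $\dot\phi(t') \approx \dot\phi(t)$, giving

$$\int_{t_i}^{t} dt'\, K(t-t')\,\dot\phi(t') \approx \dot\phi(t)\int_0^{\infty} d\tau\, K(\tau) \equiv \Upsilon\,\dot\phi(t),$$

which is the local dissipative term of Eq. (3); the adiabatic conditions quoted in Eqs. (8) and (9) below are precisely what justify this truncation.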
For readers familiar with the effective potential in Lagrangian quantum field theory, there is a heuristic way to understand the origin of the $\phi$ effective equation of motion. The effective potential calculation applies when $\phi$ is a static background. The interaction of $\phi$ with the other quantum fields leads to the creation of quantum fluctuations, which are emitted off $\phi$, propagate in space and time, and then are reabsorbed by $\phi$. These processes typically are known as loop corrections, which modify the classical potential and lead to the effective potential. Now suppose $\phi$ is not in a static situation but is changing in time. In this case the same loop corrections mentioned above would occur. However, at the times of emission and absorption the state of $\phi$ has changed. Thus these loops no longer simply modify the potential of $\phi$, but also introduce terms which mix products of $\phi$ at different times, therefore introducing temporally nonlocal terms into the $\phi$ evolution equation.
Thus the key question is, given a particle interaction structure in the Lagrangian, what types of dissipative effects this leads to during inflation. In Sect. VI we will review various models. Just as one example, here is a two stage mechanism, involving the inflaton field coupled to a heavy scalar field $\chi$ which in turn is coupled to light fermion fields $\psi_\chi$ as $g^2\Phi^2\chi^2 + h\chi\bar\psi_\chi\psi_\chi$ [20,21]. In this case the background inflaton field $\phi$ acts as a time dependent mass to the $\chi$ field. As $\phi$ changes over time, the $\chi$ mass changes, thus altering the $\chi$ vacuum. This leads to virtual $\chi$ production, which then decay into real $\psi_\chi$ particles. There are also direct interactions of $\phi$ and $\chi$ particles. This type of interaction structure is very common in particle physics models, thus conducive to warm inflationary dynamics.
IV. BACKGROUND
My own background in physics was not in cosmology. My PhD was on string theory and during that time I also did considerable work in statistical physics and condensed matter physics. After finishing my PhD and during my first postdoc in Tucson, Arizona I started working on perturbative QCD. During that time I was the seminar organizer. One academic who was in our department there was Fang. I frequently had conversations with him and found he had broad interests, with thought given to many areas. So I asked him to give us a seminar and he said he would on inflationary cosmology. This was not a subject I had studied before. During his talk as he discussed the scalar inflaton field and how it can drive inflation, I raised my hand and asked him where the dissipative term was in the inflaton evolution equation. Coming from a background that included condensed matter physics, I found it unusual that an interacting system did not have a term accounting for dissipation. Fang paused his talk and said I should come by afterwards for a discussion. So I did and I explained to him how I'd expect the inflaton evolution equation to have a standard dissipative term and possibly even be governed by a Langevin type evolution. This seemed to be a direction Fang had given thought to before. He showed me a paper he had written in 1980 which had basically suggested inflation [22]. It also had radiation production during inflation, but neither of us felt the dynamics of that paper was the direction to develop further.
The basic idea of an exponential expansion phase in the early Universe was first suggested in the highly insightful work by Gliner starting in the mid-1960s [23,24]. Then during the 70s, Kirzhnits and Linde developed the foundations for application of particle physics to phase transitions in cosmology, including explaining how they could be important in explaining the cosmological puzzles [25][26][27][28][29]. By the late 70s and early 80s there were several papers using these seminal ideas including Fang's paper. His was one of the set of papers prior to Guth's [5] that had suggested the inflation idea as a solution to the horizon problem [30][31][32][33][34][35][36], but without the catchy name that Guth finally gave the scenario (Guth's paper also pointed out that inflation could solve the flatness problem). All this work aside from Fang's was developing cold inflation dynamics.
Fang and I started developing our ideas and wrote the paper [2]. This addressed dissipation and noise and came up with an expression for inflaton fluctuations that were thermal. At that point my postdoc there was coming to an end, and I was moving to Penn State. During that period I started thinking about formulating a full scenario and showing one could realize inflation and have dissipation concurrently with these thermal fluctuations. I did eventually arrive at a model and I sent my results to Fang. I had naturally considered him a collaborator on this work. Fang found the results very interesting and encouraged me to write it up and publish it [37]. However he said he had not thought about our idea to this extent and so I should write this paper on my own. He and I even had a discussion about what to name this new scenario, which led to calling it warm inflation.
At the time I considered this work as just a side project to my main interests developing in perturbative QCD. However I kept thinking about it and wrote a couple more papers in the next couple of years. In one paper I studied warm inflation trajectories computed from the Friedmann equations for a system with both a decaying vacuum energy and radiation, showing how such evolution could smoothly go from a warm inflationary phase to the radiation dominated regime, thus offering a graceful exit [38]. I also started getting interested in how an inflaton Langevin type equation could be derived from first principles. After all, my initial comment in Fang's talk had been that there should be a dissipation and noise term in such equations. In my first attempt to understand such dynamics [18], resting on my condensed matter background, I studied the Caldeira-Leggett model [12]. The original such model was a quantum mechanical model of a single coordinate, the system, coupled linearly to many other coordinates, all of which had harmonic oscillator Hamiltonians. I made a quantum field theory extension of this model to study inflationary expansion concurrent with dissipation. This paper set some foundations for deriving warm inflation dynamics from a quantum field theory.
Around this point I was looking for a new postdoc position. By now my research interests had turned heavily toward developing warm inflation and not many people were interested in the idea, so I had difficulty getting hired. However Robert Brandenberger responded to my efforts by reaching out in an email to me saying he found my ideas interesting. He was one of the first researchers in cosmology to support my work. I am not sure that Robert believed warm inflation was necessarily THE idea of cosmology. I think his attitude, like mine, is that any reasonable idea in cosmology needs to be fully examined. There is no way that any idea in cosmology can become the single adopted picture until all reasonable ideas have been fully vetted. In that context I think he felt warm inflation deserves its time to be developed and considered. Through his help I was able to secure a postdoc position at Vanderbilt in the group of Tom Kephart and Tom Weiler. It was here that my work in warm inflation developed considerably. They had a very open minded attitude toward developing new ideas in theoretical physics and were not too bothered about just following the mainstream. This provided a conducive environment.
After writing these initial papers on warm inflation, I was contacted by Gleiser and Ramos. They had written one of the pivotal papers on deriving dissipation in a scalar quantum field theory [39] (other papers on scalar field dissipation around or previous to this are [40][41][42][43][44][45]). They saw the connection between their work and what I was trying to achieve. We started working together and wrote a paper on a scalar field $\phi$, meant to be the inflaton, coupled to $N$ other scalar fields which are integrated out to arrive at an effective equation of motion for $\phi$ [3]. This effective equation of motion would then contain a dissipative term. This model contained all the basic features for realizing warm inflation dynamics. Because all masses were much bigger than the Hubble scale and all microphysical dynamics was fast compared to the expansion scale, this model could still be calculated within a flat spacetime framework. This is a noteworthy feature of warm inflation: on a relative scale for a nonequilibrium open QFT problem, the underlying dynamics is fairly simple to calculate. These conditions on the field theory were imposed as consistency conditions. In particular we introduced adiabatic conditions that the dynamic time scale of evolution of the scalar field be much larger than the typical collision and decay time scales
$\Gamma^{-1}$:

$$\phi/\dot\phi \gg \Gamma^{-1}. \qquad (8)$$
We also imposed that these microphysical collision and decay time scales be much shorter than the Hubble time,
$$\Gamma \gg H. \qquad (9)$$
As I mentioned earlier, a key initial barrier to why no one before me had suggested the warm inflation scenario seems to have been a lack of understanding of the dynamical time scales relevant to inflation; in particular, most researchers in this field held to a belief that inflation happens too quickly for particle production to occur. In this paper with Gleiser and Ramos, my initial thoughts about such time scales were examined within quantum field theory, and we recognized that timescales can indeed allow for particle production, although it would not be easy to realize. This was one major accomplishment of this paper. Several months after we put this paper on the arXiv, Junichi Yokoyama and Andrei Linde (YL) wrote a paper titled 'Is warm inflation possible?' and seemed to have answered their question within a one sentence abstract that said it was 'extremely difficult and perhaps even impossible' [46]. The analysis in their paper was not that different from ours. They did compute the fermionic channel of dissipation whereas our paper had done the bosonic, so that added a useful new result. However their basic analysis of warm inflation followed ours, as did the consistency conditions. Only the conclusions differed. Where they saw impossible, we simply saw a set of constraints that would help guide us towards building a first principles model of warm inflation.
After our paper but before Yokoyama and Linde's, Ramos and I had presented our work at PASCOS98 in Boston. Linde was in the audience for our talks and told us that he did not think our warm inflation work was correct, although subsequently, still during the conference, he told us there was some merit to our dissipation calculations. Nevertheless some months later he wrote the above mentioned paper. Yokoyama and Linde did share with us the results of their upcoming paper, with its claim about the impossibility of warm inflation. Thus, we got to work on building a quantum field theory model that demonstrated warm inflation is possible. Within a few weeks after they arXived their paper, we put out the first quantum field theory model, which we called the distributed mass model (DMM, explained below in Sect. VI), demonstrating that the 'impossible' was really not quite so [4]. The following year Tom Kephart and I built a string theory motivated realization of this model [47], further solidifying that not only is warm inflation possible but it has attractive model building prospects.
Around this time my interest turned to taking a deeper examination of warm inflation dynamics. The regime of warm inflation that to me seemed most interesting, and still does, is the strong dissipative regime of warm inflation
$$\Upsilon > 3H, \qquad (10)$$
so that the damping of the inflaton motion was dominated by the thermal damping and Hubble damping did not play a major role. In the first paper with Fang, we had computed the density perturbations only in the weak dissipative regime of warm inflation
$$\Upsilon \leq 3H. \qquad (11)$$
In [48] I determined the expression for the density perturbations in the strong dissipative regime. In this paper I also looked more closely at the quantum field theory dynamics using the distributed mass model we had introduced. One of the key observations I made in that paper was that in the strong dissipative regime $\Upsilon > 3H$, the mass of the inflaton could be larger than the Hubble scale. This allowed for warm inflation models unlike anything that could be made for cold inflation, where slow roll under Hubble friction required that $m_\phi < 3H$. A mass less than $H$ implies a Compton wavelength bigger than the horizon. For such a case, in the rest frame the particle is not localizable, so the field associated with it has no particle interpretation in the conventional sense. Thus a matter field with $m_\phi < H$ was unlike any kind of quantum field we had any empirical knowledge about from collider experiments. (Note that photons, being massless, have no rest frame and so are not localizable, but this is due to their Abelian gauge symmetry, which makes them very different from material particles such as those of a scalar field.) It opened up the possibility for infra-red problems. So one of the conditions I imposed for what I considered the ideal inflation model was that $m_\phi > H$, and I called it the infra-red condition [48]. I had also understood that dissipation could lower the background inflaton field amplitude, and in particular for monomial potentials could allow $\phi < m_p$ [49]. Both these conditions I recognized by the early 2000s from simple reasoning as being important for an ideal inflation model, well before they emerged in the swampland conditions [50,51]. These conditions set the goal for what to look for in deriving a warm inflation model from first principles quantum field theory. There has been some success in this direction, which I will discuss in Sect. VI. However it has proven very difficult to find models in this regime. Nevertheless in principle it is possible, which is very different from cold inflation, where such a regime is very difficult to achieve and in particular avoiding a sub-Hubble mass is out of the question.
It remains an open model building question for warm inflation to find such models. At this point Rudnei and I set out to build more first principles warm inflation models. Initially we explored supersymmetry (SUSY), which was one of the most utilized symmetries in inflation model building to realize the ultraflat inflaton potential that was needed. In warm inflation there were the dual requirements of having this very flat inflaton potential and at the same time creating a large enough dissipative term. This effort resulted in the SUSY two-stage dissipation model [21] (further explained in Sect. VI). We computed the dissipative coefficient, the radiative corrections, and consistency conditions for this model and obtained a warm inflation regime, although like the DMM this model once again required a large number of fields, going upward of thousands. At around this time I also started working with Mar Bastero-Gil, and we further examined this and other SUSY models [52]. Also around this time Ian Moss got interested in warm inflation, with Hall, Moss, and Berera [53] doing a more detailed examination of density perturbations. He continued to study this even further, with Graham and Moss [54] finding a certain growing mode for the fluctuations if the dissipative coefficient was temperature dependent. This result added yet more numerical difficulties in computing warm inflation from a model. Bastero-Gil and Ramos developed a set of codes that could do the needed calculations. The warm inflation power spectrum would have contributions from both quantum and thermal noise. Ramos and da Silva [55] did a careful analysis of these contributions, starting with my basic expressions for the primordial density perturbations, and came up with a total power spectrum.
Alongside these developments of the theory, we did model building and made predictions for the CMB from the two-stage mechanism model. In 2009 the paper by Bastero-Gil and me [56] demonstrated that in warm inflation the tensor-to-scalar ratio, $r$, would be suppressed compared to the comparable cold inflation model for the monomial potentials $\Phi^2$ and $\Phi^4$. At the time this was a result that went against the growing tide of expectation of finding a high scale tensor mode at the Grand Unified Theory scale. In 2014 our paper [57] did a more detailed study to show the suppression of the tensor mode for the warm $\Phi^4$ model and also consistency of $n_s$ with the Planck 2013 results. This paper also showed that as the dissipative coefficient increased, $r$ would decrease, thus demonstrating again the parametric suppression of the tensor mode with increasing dissipation. This was a significant finding. Although by this time the trend in CMB data was a decreasing upper bound on $r$, edging toward ruling out the cold inflation $\Phi^4$ model, there was anticipation that a tensor mode would be found. Our results went contrary to such expectations and indicated that the tensor mode would be suppressed.
Although the two-stage model was very successful as a working warm inflation model, the fact that it required a huge number of fields was an issue we were not too happy about. We wanted to find a simple warm inflation model that contained only a small number of fields. Our efforts in doing that with SUSY we felt had been exhausted, so we started exploring other symmetries. One that we had been thinking about for some time was the pseudoscalar symmetry. This led us to construct the warm little inflaton model [58]. This model will be discussed in more detail below in Sec. VI, but in short it obtained warm inflation with just a few fields, and was thus a major step forward in constructing successful particle physics models of warm inflation.
There had been work previous to my warm inflation paper in 1995, and that by Fang and me earlier that year, which had discussed dissipation during inflation. To start with was Fang's 1980 paper [22] that I already mentioned. He examined a source of dissipation associated with bulk viscosity specific to a phase transition, and this was not a direction that seemed possible to develop in any detail. Subsequently in the mid-1980s, Moss [59] and Yokoyama and Maeda [60] suggested the idea of dissipation in the inflaton evolution equation similar to warm inflation, though we were not aware of these two papers when initially developing warm inflation. The success of warm inflation gave these interesting early works a new lease on life. However, none of this early work appreciated the importance of time scales, namely that the dissipation could only be present if the microphysical dynamics producing it operated faster than the macroscopic time scale of expansion. This is really the key question to answer as to whether warm inflation is a viable idea. Nor did these early works pick up on the underlying fluctuation-dissipation dynamics of warm inflation, a property more general than just the thermal limit. Finally, an expression for the density perturbation was obtained in Moss's paper, but it was only for the weak dissipative regime, with his paper not understanding the distinction between dissipative regimes. In fact Fang and I, unaware of the Moss paper [59], made a similar oversight in deriving the same expression and thinking it was generally valid, when actually it held only in the weak regime [2]. Yokoyama and Linde [46] had briefly commented that the presence of the dissipative coefficient in the inflaton evolution equation may affect the expression for the density perturbation, but gave no details and just used the expression of Fang and me. It was only after a few years of studying warm inflation that I realized there are two very distinct regimes of warm inflation, strong $\Upsilon > 3H$ and weak $\Upsilon \leq 3H$. I then understood that Fang's and my original expression for the density perturbation was only valid in the weak regime, and I then obtained the expression for the density perturbation in the strong regime [48]. It is the strong regime that has the most interesting features of warm inflation, since with $\Upsilon > 3H$ it allows $m_\phi > H$, thus cleanly solving the η-problem [49]. Also the strong regime can allow $\phi < m_p$, thus allowing all scales in the model to be below the quantum gravity scale. In more recent terms this is the best regime to overcome [61][62][63][64] all the swampland difficulties [50,51,65].
V. DENSITY PERTURBATIONS
In the most common realization of warm inflation, density perturbations are induced from a thermal bath. They are classical on creation, and thus the scenario has no quantum-to-classical transition problem as is the case for cold inflation. In cold inflation the inflaton density perturbations are dictated by the Hubble scale where modes freeze out, giving $\delta\phi \sim H$. In contrast, in warm inflation there are three scales: the Hubble scale, the dissipation scale, and the temperature during inflation. The dissipative term, $\Upsilon$, in warm inflation can be much larger than the Hubble damping term $H$ during inflation. Due to the $\Upsilon$ term, the freeze-out momentum scale can be much larger than that in cold inflation, which is $\sim H$. At the freeze-out time $t_F$, when the physical wavenumber is $k_F = k/a(t_F)$, the mode amplitude $\delta\phi$ can be estimated using a purely thermal spectrum,
$$\delta\phi^2(k_F) \approx \int_{k<k_F} \frac{d^3k}{(2\pi)^3}\,\frac{1}{\omega_k}\left(e^{\beta\omega_k} - 1\right)^{-1} \;\stackrel{T\to\infty}{\approx}\; \frac{k_F T}{2\pi^2}. \qquad (12)$$
To estimate $k_F$, one must determine when the damping rate of Eq. (3) falls below the expansion rate $H$, which occurs at $k_F^2 \approx (3H + \Upsilon)H$. Thus, in the strong dissipative regime $Q \equiv \Upsilon/(3H) \gg 1$, this implies $k_F \sim \sqrt{H\Upsilon}$. Substituting for $k_F$ in Eq. (12), one finds the expression for the inflaton fluctuation amplitude at freeze-out
$$\delta\phi^2 \sim \frac{\sqrt{H\Upsilon}\,T}{2\pi^2}. \qquad (13)$$
This expression was first derived by me in [48]. In the weak dissipative regime $Q \ll 1$, the freeze-out wavenumber is $k_F \sim H$, which is consistent with cold inflation, thus giving the inflaton fluctuation amplitude at freeze-out
$$\delta\phi^2 \sim \frac{HT}{2\pi^2}. \qquad (14)$$
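A simple consequence worth noting (this ratio follows directly from Eqs. (13) and (14) and is stated here only as an illustration): dividing the two amplitudes gives

$$\frac{\delta\phi^2\big|_{\rm strong}}{\delta\phi^2\big|_{\rm weak}} \sim \sqrt{\frac{\Upsilon}{H}} = \sqrt{3Q},$$

so at a given temperature, strong dissipation parametrically enhances the inflaton fluctuation amplitude, which is why normalizing the perturbations forces the couplings and field amplitude down, as discussed in Sect. II.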
The weak regime expression, Eq. (14), was found by Moss [59] and then independently rediscovered by Berera and Fang [2]. In both cases the regime of its validity was wrongly understood, and in [48] the appropriate regime in which it is valid, the weak dissipative regime, was clarified. The fact that the density perturbations in warm inflation are classical and of thermal origin is regarded by some as an unappealing picture for the early universe, but there is no concrete argument behind this attitude. The idea that the initial primordial perturbations are quantum in origin has over the years become encased in the lore of early universe cosmology, but the factual basis for having such a beginning is lacking. There were initially some ideas about the universe being created as a quantum fluctuation. Moreover, the chaotic inflation model [8] did provide some kind of dynamical picture motivating the origin of quantum fluctuations. However, this model required inflation at a very high energy, at the GUT scale, which has been ruled out by CMB Planck data [66] and further constrained by more recent BICEP data [67].
There has been furious effort over the decades to develop ever more cold inflation models with their own unique signatures from the density perturbations. However the data itself has been far from revealing, and the fact of the matter is that what is seen there can equally be explained by the quantum fluctuations of cold inflation or the classical ones of warm inflation. So far observational data shows absolutely no preference. Moreover, as for the myriad of possible interesting effects that might emerge from density perturbations, it is a model building game, and one can concoct various such features from both warm and cold inflation models. More recently there has been work to look for intrinsic features that signal a quantum origin or, in its absence, a classical origin [68][69][70][71]. However such tests appear to be extremely difficult and, if the CMB data remains without significant features, ultimately impossible to decisively measure. The real lesson that can be taken away from these papers is just how hard it is to discriminate between quantum versus classical primordial perturbations during inflation. This makes it all the more perplexing how some adhere to the belief that the primordial fluctuations must have been quantum. In actuality there is equal reason to believe both classical and quantum processes play roles in phenomena in the early universe. This is the correct unbiased initial assumption that should be taken for a robust examination of early universe cosmology.
This adherence to the primordial fluctuations being quantum is more a statement about present attitudes in theoretical physics. This has historical parallels. Somewhat more than a century back, the established thinking was that the world was governed by deterministic classical physics. By now attitudes in theoretical physics tend almost to the other extreme, and identifying quantum phenomena where possible is all the rage. We are more fortunate compared to the state of physics a century back in that we have an extensive understanding of both classical and quantum physics. The rational attitude is to accept both possibilities for the origin of density perturbations and let the science decide. The early universe was large enough to allow for classical behavior. In fact for anything bigger than the quantum gravity scale, there is no argument to favor quantum fluctuations over classical ones. From what we know, inflation had to occur below the Planck scale, below the string scale, and even below the lower end of the GUT scale. As the bound on the tensor mode decreases, if interpreted in terms of inflationary dynamics it means the energy scale of inflation decreases, moving even further below the quantum gravity scale, and the arguments for a quantum origin of perturbations become less compelling. The universe may still have initially emerged from some type of quantum gravity scale fluctuation, but after that the ensuing dynamics need not all then be quantum. It is well possible that particle production occurred and density fluctuations then had a classical characteristic to them, whether thermal or any other statistical state. There is also an intermediate possibility that these primordial fluctuations have mixed quantum and classical properties.
The idea of quantum fluctuations seeding the initial density perturbations was suggested early on in the 1950s by Wheeler [72] and later considered by Harrison [73]. In these early works there was no mechanism suggested for producing such fluctuations; it was simply asserted that their presence could explain what was known at the time about large scale structure. A noteworthy point about these early papers is that they assumed these fluctuations would have been created at the quantum gravity scale $m_p$, with their simple argument being that at that scale classical physics fails. Linde's chaotic inflation scenario offered a mechanism that linked quantum gravity scale physics down to the GUT scale, where he postulated quantum fluctuations. His scenario did not at that stage require quantum fluctuations. There were known dynamical QFT models at that scale, so some dynamical process such as thermalization etc... could still be conceivable. At the moment the most stringent bound on the tensor-to-scalar ratio is from BICEP, placing the current upper bound at $r \sim 0.03$ [67]. This would correspond to an energy scale during inflation less than $\sim 10^{15}$ GeV. This implies a Hubble scale during inflation $H \sim \sqrt{V}/m_p \approx 10^{11}$ GeV, which is seven orders of magnitude above the Large Hadron Collider energy scale. From our developed theoretical knowledge about the early universe, this does not come across as a particularly fast timescale. Moreover at this conceivable energy scale $\sim 10^{15}$ GeV for inflation, there are plenty of particle physics models that have been constructed, GUT etc..., that could provide degrees of freedom operating fast enough to create a thermal or some other type of multiparticle statistical state leading to classical fluctuations. Thus there is no reason to expect at this conceivable energy scale of inflation or lower, which is much below the quantum gravity scale, that primordial fluctuations must be uniquely quantum. Should a tensor mode eventually be found, based on the present bounds, we know the corresponding energy scale will be below the GUT scale. In such a case, the arguments are very compelling that such an outcome favors warm, not cold, inflation.
The gamble taken by cold inflationary cosmology was that the tensor mode signatures for inflation would be found at the GUT scale, based on a chaotic inflation explanation involving the simple monomial potentials. Nevertheless, there were always ample grounds to be cautious about these models, since their predictions came from a questionable regime of QFT, with a sub-Hubble inflaton mass and super-Planckian field excursion, and eventually the data ruled them out. With the trends in the data not supporting the simple monomial cold inflation models, it is best now to be more open-minded. Meaningful progress on these theoretical questions about the early universe will only happen by taking a broad view. If data does eventually confirm a low tensor-to-scalar ratio, there are strong arguments that it is confirming warm rather than cold inflation.
In order to decide which is the more compelling origin of density perturbations, and thus the more compelling inflation picture, the data alone will not be sufficient, since there is only a limited amount of information we can measure about such an early time period of the Universe. Equally, it needs to be seen which scenario is most compelling from a theoretical perspective, a point that will be discussed in greater detail in Section IX. A minimal requirement has to be that the scenario can be cleanly derived from quantum field theory. The success of quantum field theory in collider physics implies this is the best and only tool we have to explore the high energy regime. And for energy regimes yet far beyond measurement, the best we can do is rely on the rules of quantum field theory that we have learned at these lower energy scales. And if some consistent picture based on those rules emerges for higher energies, then that is the best possible prediction we can make. This of course means building a model beyond the Standard Model (SM), however using types of fields, and if possible even symmetries, that are known in the Standard Model. In particular such a model should not rely on gravity, since we know nothing definitive about its quantum nature and even have limited knowledge about its classical nature. In this respect the cold inflation scenario, though simple in appearance, hides many problems. We have already mentioned the problems that emerge due to the inflaton mass being less than the Hubble scale. Whether they are infra-red, η, or swampland problems, this small mass scale seems to be something unwanted by quantum field theory. Likewise, a scalar field amplitude above the Planck scale introduces unknown quantum gravity concerns. Then there are quantum-to-classical transition issues.
One note here on terminology. Nowadays many researchers refer to the type of quantum field theory the Standard Model is built on as an effective field theory, suggesting it is subservient to some higher theory. However to date there is no such established higher theory. This is all still a matter of research and speculation. This type of nomenclature is fine as a matter of convenience for those working on higher theories. However when talking about predictions and comparing to experiment, it can be misleading. It can suggest the quantum field theory we know and understand is somehow less predictive than the higher theory. However until there is an established higher theory, the quantum field theory we know, and the rules it embodies, is the most predictive tool we have. As such in this review I will refer to the quantum field theory of the Standard Model as simply quantum field theory, first principles quantum field theory, conventional quantum field theory, or the quantum field theory we understand etc... I will include in this terminology effective field theories that have cutoff scales below the Planck scale, such as sigma models and such models involving pseudo Nambu-Goldstone bosons or other models built on symmetries found in the Standard Model.
VI. MODEL BUILDING
The ultimate goal of warm inflation model building is to find models computed from first principles quantum field theory. This requires that the model produces dissipation and the microphysical dynamics operates faster than the macroscopic dynamics, so faster than the evolution of the inflaton field and the expansion rate, H, of the universe. These requirements emerge as consistency conditions in a warm inflation calculation. Finally once a working model has been developed, one then needs to check whether its predictions are consistent with observation. Achieving all this is a very difficult task and so far only a few such warm inflation models have been developed.
Alongside this first principles QFT model building, there also has been phenomenological warm inflation model building. In this approach one simply puts in by hand a dissipative coefficient in the inflaton evolution and then computes the resulting warm inflation. This approach is useful for exploring types of dissipative behavior that can lead to observationally consistent warm inflation models. Given how hard the first principles approach is, this approach provides an intermediate step to studying the types of warm inflation models that could be relevant.
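For concreteness, this phenomenological approach amounts to solving the standard warm inflation background equations, with the dissipative coefficient Υ specified by hand and the strength of dissipation measured by the ratio Q:

\ddot{\phi} + (3H + \Upsilon)\dot{\phi} + V'(\phi) = 0\,, \qquad \dot{\rho}_r + 4H\rho_r = \Upsilon\dot{\phi}^2\,, \qquad Q \equiv \frac{\Upsilon}{3H}\,,

where ρ_r is the radiation energy density sourced by the dissipation; Q ≪ 1 and Q ≫ 1 are referred to as the weak and strong dissipative regimes respectively.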
Here I will discuss some of the first principles quantum field theory warm inflation models that have been developed. The first such model is what we called the distributed mass model (DMM) [4]. In these models there are a set of bosonic fields χ_i which interact with the inflaton field through shifted couplings. The interaction term in the Lagrangian which realizes such shifted couplings has the form,
\frac{g^2}{2}(\Phi - M_i)^2\chi_i^2\,, \qquad (15)
so that when ⟨Φ⟩ = φ ∼ M_i, the χ_i field mass becomes small. In particular when the mass of a χ_i field gets below the temperature scale in the Universe, it becomes thermally excited. Once thermally excited, as the background inflaton field evolves, it is able to dissipate energy into these fields. This creates a dissipative term in the inflaton evolution equation [3]. As an aside, the idea of these shifted couplings of the inflaton in our DMM model has subsequently been used to develop other types of warm inflation models including trapped inflation [74][75][76].
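To see this explicitly, ignore for illustration any bare χ_i masses; the coupling in Eq. (15) then gives each field the effective mass

m_{\chi_i}(\phi) = g\,|\phi - M_i|\,,

so a given χ_i is thermally excited, and thus available for dissipation, during the stretch of the evolution where m_{\chi_i}(\phi) ≲ T, i.e. while φ passes within ∼ T/g of the corresponding M_i.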
For the DMM, if these mass scales M_i are now distributed over some range that φ will traverse, then during the evolution of φ, some subset of these χ fields will be light and generate a dissipative term. In order to control the radiative corrections, this needs to be extended to a supersymmetric model. A simple superpotential that realises this model is [77],
W = 4m_\phi S^2 + \lambda S^3 + \sum_{i=1}^{N_M}\left(2gM_i X_i^2 + fX_i^3 - 2gSX_i^2\right)\,. \qquad (16)
Here the bosonic part of the chiral superfield S = Φ + θψ + θ^2 F, with θψ ≡ θ^α ψ_α and θ^2 ≡ θ^α θ_α, is the inflaton field Φ, with ⟨Φ⟩ = φ, and it interacts with both the Bose and Fermi fields of the chiral superfields X_i = χ_i + θψ_{χ_i} + θ^2 F_{χ_i}. The potential terms of the Lagrangian are obtained from Eq. (16) by standard procedures; the potential is
\mathcal{L}_V = \int d^2\theta\, W(S, \{X_i\}) + {\rm h.c.}\,,

and the auxiliary fields F and F_{χ_i} are eliminated through the "field equations", ∂W/∂F = ∂W/∂F_{χ_i} = 0, which results in the Lagrangian being only in terms of the Bose and Fermi fields. For the above superpotential Eq. (16), this leads to a Φ^4 inflaton potential with interactions to the χ_i fields similar to Eq. (15), and in addition corresponding interaction terms to the Fermi fields ψ_{χ_i}. The distribution of the mass scales M_i is along the interval which φ traverses during the inflationary period. The Φ^4 self-coupling must remain small for successful inflation. In this SUSY theory that occurs because the renormalization group equations for the quartic coupling are proportional to the coupling itself, which means even if there is another large coupling, this will not lead to a problem. This model can generate warm inflation with adequate e-foldings to solve the horizon and flatness problems [4], as well as produce observationally consistent primordial fluctuations [48]. It should be noted that the most general superpotential would include a term in Eq. (16) linear in the X_i fields, S^2 X_i. It has been eliminated by hand. This term induces a φ dependent mass term for all the X_i fields and so it must be very small for the success of this model. The stability of the SUSY theory under radiative corrections allows this term to be eliminated by hand. A more elegant way to prohibit the linear term in the superpotential would be by imposing a charge under some, for example GUT, symmetry, so that these X_i fields are not singlets. More recently this model was studied in [78] for various types of mass distributions. It was found the model can realize warm inflation for a wide parameter range and in good agreement with Planck legacy data. We also found parameter ranges for this model entering into the strong dissipative regime, with the inflaton mass m_φ just over the Hubble scale. This is not a clean solution to the swampland criteria but it comes very close.
In [47,77] it was shown that the DM model can arise from a fine structure splitting of a single highly degenerate mass level. Let M ≈ g|M_{i+1} − M_i| denote the characteristic splitting scale between adjacent levels. For typical cases studied in [4,48], it was shown in [47] that for significant expansion e-folding, N_e > 50, warm inflation occurred in the interval 10^3 M ≲ φ ≲ 3 × 10^3 M and, of note, at temperature T ≳ M, not at T of order the much higher scale of the mass levels, ∼ 10^3 M. The shifted mass couplings are precisely what make these massive states light. In the string picture, this arrangement corresponds to a fine structure splitting of a highly degenerate state of very large mass, around the string scale ∼ M_string, with the fine structure splitting scale several orders of magnitude less than the mass of the state, say M ≲ M_GUT ∼ 10^{-3} M_string. The following string scenario was suggested for this model in [47]. Initially in the high temperature region, some highly degenerate and very massive level assumes a shifted mass coupling to φ. All the states in this level are degenerate, so at this point they all couple identically as g_i^2(φ − M)^2 χ_i^2. The string then undergoes a series of symmetry breakings that split the degeneracy and arrange the states into a DM model, \sum_i g_i^2(φ − M_i)^2 χ_i^2, with 0 < (M_i − M_{i+1})/M_i ≪ 1.
In many models, thermal loop corrections are difficult to control adequately to maintain the required flatness of the potential and tiny inflaton mass. The DM model could adequately control loop corrections, but the interest was to find more models. This led to developing the two stage dissipative mechanism of warm inflation based on supersymmetry [21], in which the inflaton Φ is coupled to a set of heavy fields χ and ψ_χ, which in turn are coupled to light fields y and ψ_y. The key point is the heavy fields are not thermally excited, which means the loop corrections to the inflaton potential are only from vacuum fluctuations, and these SUSY can control. A generic superpotential that realises the two stage mechanism is
W_I = \sum_{i=1}^{N_\chi}\sum_{j=1}^{N_{\rm decay}}\left(gSX_i^2 + 4mX_i^2 + hX_iY_j^2\right)\,, \qquad (17)
where S = Φ + θψ + θ^2 F, X = χ + θψ_χ + θ^2 F_χ, and Y = y + θψ_y + θ^2 F_y are chiral superfields. The field Φ is identified as the inflaton in this model, with Φ = φ + κ and ⟨Φ⟩ = φ. In the context of the two stage mechanism, X are the heavy fields to which the inflaton is directly coupled, and these fields in turn are coupled to the light Y fields. A specific inflaton potential has to be chosen in order to assess the effect of this interaction structure on radiative corrections. Consider the case of a monomial inflaton potential with the additional superpotential term W_φ = √λ S^3/3, so that
W = W_\phi + W_I\,. \qquad (18)
At tree-level the inflaton potential from this is
V_0(\phi) = \frac{\lambda}{4}\phi^4\,. \qquad (19)
When ⟨Φ⟩ = φ ≠ 0, observe that the vacuum energy is nonzero, which means SUSY is broken. This manifests in the splitting of masses between the χ and ψ_χ SUSY partners as,
m_{\psi_\chi}^2 = 2g^2\phi^2 + 16\sqrt{2}\,mg\phi + 64m^2\,,
m_{\chi_1}^2 = 16\left[\frac{1}{8}\Big(g^2 + \frac{1}{2}\sqrt{\lambda}\,g\Big)\phi^2 + \sqrt{2}\,mg\phi + 4m^2\right] = m_{\psi_\chi}^2 + \sqrt{\lambda}\,g\phi^2\,,
m_{\chi_2}^2 = 16\left[\frac{1}{8}\Big(g^2 - \frac{1}{2}\sqrt{\lambda}\,g\Big)\phi^2 + \sqrt{2}\,mg\phi + 4m^2\right] = m_{\psi_\chi}^2 - \sqrt{\lambda}\,g\phi^2\,. \qquad (20)
This implies the one loop zero temperature effective potential correction
V_1(\phi) \approx \frac{9}{128\pi^2}\,\lambda g^2\phi^4\left[\ln\frac{m_{\psi_\chi}^2}{m^2} - 2\right] \;\ll\; V_0(\phi) = \frac{\lambda}{4}\phi^4\,. \qquad (21)
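The origin of this suppression can be seen schematically from the one-loop Coleman-Weinberg potential, summed over the bosons and fermions with the masses of Eq. (20): the large supersymmetric m^4 pieces cancel between partners, leaving, up to multiplicity factors and with μ the renormalization scale, only the contribution of the SUSY-breaking splitting ±√λ gφ^2,

V_1 \sim \frac{1}{64\pi^2}\sum_i (-1)^{F_i}\, m_i^4 \ln\frac{m_i^2}{\mu^2} \;\sim\; \frac{\lambda g^2\phi^4}{32\pi^2}\,\ln\frac{m_{\psi_\chi}^2}{\mu^2}\,.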
This is more suppressed than the tree level potential Eq. (19) and so will not alter the flatness of the inflaton potential. In [57] the two-stage mechanism was applied to the Φ^4 potential and we showed that dissipation suppresses the tensor-to-scalar ratio. The same behavior was observed a few years earlier in [56], well before the Planck data indicating the suppression of the tensor mode. At around this time there was anticipation, based on the cold inflation chaotic Φ^4 model, that a high tensor-to-scalar ratio would be found, placing inflation at the GUT energy scale. Warm inflation demonstrated that this need not be the case for the Φ^4 model and that the presence of radiation during inflation could suppress the tensor mode.

The above early models of warm inflation demonstrated that this type of dynamics can be realized within quantum field theory. However these models required a large number of fields and so were quite complicated. On the one hand such a large number of fields can be accommodated within string theory based models, as has also been demonstrated for the above models. Nevertheless we felt a significant step forward would be to find warm inflation dynamics within a much simpler model. One of the things we began to realize from developing the above models is that supersymmetry, though it can control the inflaton potential, is clumsy to work with. We started exploring other symmetries that might also maintain the ultraflat potential required for inflation. One idea we had been thinking about was the inflaton as a Nambu-Goldstone boson of a broken gauge symmetry. This eventually led in 2016 to the warm little inflaton model [58]. In this model the inflaton field corresponds to the relative phase between two complex Higgs scalars that collectively break a local U(1) symmetry. Fermions couple to these complex scalars through Yukawa interactions, and both sets of fields satisfy a discrete interchange symmetry, essentially leading to an effective theory below the symmetry breaking scale M involving the inflaton field and two Dirac fermions with a Lagrangian density
-\mathcal{L} = gM\cos(\Phi/M)\,\bar{\psi}_1\psi_1 + gM\sin(\Phi/M)\,\bar{\psi}_2\psi_2\,, \qquad (22)
where g is a dimensionless coupling and ⟨Φ⟩ = φ. The original Lagrangian is actually written in terms of two complex scalar fields Φ_1 and Φ_2, and then these fields are represented in terms of modulus and phase. Thus there are no nonrenormalizable operators in this Lagrangian. It is just a matter of field representation, which is convenient when the two complex scalars develop nonzero vacuum expectation values,
\langle\Phi_1\rangle = \langle\Phi_2\rangle = M/\sqrt{2}\,.
The particular form of this Lagrangian makes the fermion masses bounded from above, such that large inflaton field values do not lead to heavy fermions, and in addition leads to the cancellation of the leading thermal contributions of the fermion fields to the inflaton's mass.
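The cancellation can be made explicit: the leading high temperature correction to the inflaton potential from each fermion is proportional to its squared mass, and with the masses read off from Eq. (22) the two contributions sum to a field-independent constant (a schematic sketch),

\Delta V_T \propto T^2\left[m_1^2(\phi) + m_2^2(\phi)\right] = g^2M^2T^2\left[\cos^2(\phi/M) + \sin^2(\phi/M)\right] = g^2M^2T^2\,,

so the thermal bath does not induce a large thermal mass for the inflaton.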
We showed that this model can realize warm inflation, requiring in addition to the inflaton field just two fermionic fields and another scalar field, and for the Φ^2 inflaton potential it leads to predictions for n_s and r consistent with Planck observational data. Moreover, increasing the dissipation would decrease the tensor-to-scalar ratio. This model established that warm inflation can be realized in a simple model and showed the scenario has significant relevance to observational data. We also showed in [79] that the strong dissipative regime could be obtained with this model, thus allowing for an inflaton mass m_φ > H, so cleanly overcoming any infra-red or swampland problems that a light inflaton mass can lead to.
More recently a model was suggested by Berghaus et al. [80], where the inflaton φ has an axion-like coupling to a pure Yang-Mills gauge group,
\mathcal{L}_{\rm int} = \frac{\alpha}{16\pi}\,\frac{\Phi}{f}\,\tilde{G}_a^{\mu\nu} G^a_{\mu\nu}\,. \qquad (23)
Here G^a_{μν} is the field strength of an arbitrary Yang-Mills group, with α = g_{YM}^2/(4π), where g_{YM} is the gauge coupling. This model was named minimal warm inflation. They showed that for a modest coupling this led to thermal friction and a thermal bath during inflation. They also showed this model could achieve the strong dissipative regime.
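Parametrically, the friction arises from the Chern-Simons diffusion (sphaleron) rate of the thermalized pure Yang-Mills plasma; up to O(1) factors, the scaling found in [80] is

\Upsilon(T) \sim \frac{\Gamma_{\rm sph}}{f^2 T}\,, \qquad \Gamma_{\rm sph} \sim \alpha^5 T^4 \;\Rightarrow\; \Upsilon \sim \frac{\alpha^5 T^3}{f^2}\,,

so the dissipation grows rapidly with temperature, which is what allows this model to reach the strong dissipative regime.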
VII. ADVANTAGES
Before the inflation idea, Harrison [73] and Zeldovich [81] had already recognized that primordial fluctuations could be seeds for large scale structure, noted they needed to be scale invariant, and even came up with an approximate value for the amplitude of around 10^{-4}. Inflation then built on these ideas to provide a mechanism for producing the primordial fluctuations. The goal was to realize this from a consistent quantum field theory model and not just symbolic scalar field potentials, of which one can concoct many, as seen in the literature. This main goal of inflation so far has not been realized. Until it is, inflation remains only an interesting idea that still needs theoretical validation. One of the successes of inflation is that it can realize a Harrison-Zeldovich (HZ) spectrum and improve on it by providing a dynamical means to slightly alter the scale invariant spectrum through introducing a tilt. These features all emerge from an almost flat scalar field potential that is driving inflation.
Inflation would have occurred at a high energy scale above any scale for which quantum field theory has been empirically tested. From the success of nucleosynthesis we know our understanding of cosmological evolution is correct from the MeV scale to today. We also know from collider experiments how high energy physics behaves up to the 10 TeV scale in the context of the Standard Model. And we know there is no mechanism in the Standard Model for realising inflation. Thus it is safe to say that if inflation occurred, it must have been at a scale beyond where physics has been tested. Under these circumstances, if we are trying to build an inflation model, the first question one must ask is what ground rules should be followed to produce a plausible model. One argument is to build models that make interesting, often called smoking gun, predictions. Then if data shows such effects, one could claim indirect evidence for the model. A problem with that is cosmological data shows precious little evidence of such exciting effects. In fact one could say if inflation is correct, what empirical information we are able to gather about it is quite boring. Even by the time of the COBE data, it started becoming clear that inflation as seen from data was likely to be boring. Thus a second argument is that alongside the search for better empirical data as already mentioned, concurrently we have to ask whether we can build a realization of inflation that is consistent with everything we have learned theoretically and confirmed empirically about quantum fields. In the program of warm inflation this has been one of our main goals.
In order to pursue that, certain requirements have been imposed on the inflaton field for what is considered the ideal inflation model. One of these, which in my paper in 2000 [48] I called the infra-red condition, is that the mass of the inflaton field should be larger than the Hubble scale, m_φ > H. This means that the Compton wavelength of the inflaton field should be sub-Hubble. All the quantum matter fields that we have measured at colliders have masses above the Hubble scale. A quantum matter field in which the mass is sub-Hubble means it does not realize particles, certainly for field modes less than the Hubble scale. Our empirical understanding of quantum field theory so far has been in terms of a field and particle duality. We have no empirical knowledge of what a quantum matter field is with a mass that implies a super-Hubble scale Compton wavelength, or whether that even makes sense. The second requirement I imposed was that the scale of the inflaton amplitude ⟨φ⟩ (which we will just denote as φ) should be below the Planck scale, φ < m_p. This condition arises since we have no knowledge of dynamics at the quantum gravity scale (actually we only know for sure how QFT, as we understand it, behaves up to the LHC scale, so ∼ 10 TeV, but we think the next scale where our fundamental understanding breaks down is all the way up at the quantum gravity scale; it could be that our understanding of QFT breaks down at an even much lower scale than that). Thus we should not build an inflation model that breaches that scale. Some argue that the inflaton field amplitude is not the relevant scale, but rather the inflaton mass, whose square in the Φ^4 model, for example, would be ∼ λφ^2. Since λ is tiny in inflation models, even if φ > m_p, the mass itself is sub-Planckian. However the inflaton field could also directly couple to gravity or other fields, thus if it is of Planckian scale, it gets into uncertain dynamics. In particular when φ > m_p, from an effective field theory perspective higher dimensional nonrenormalizable operators, such as the dimension six V Φ^2/m_p^2, become important and thus can ruin the flatness of the inflaton potential [11]. I imposed these conditions more than two decades ago based entirely on empirical reasoning. Back then of course the η-problem was known, as were the effects of higher dimensional operators. There was belief that these issues could be overcome with adequate model building. However today these constraints have been realized from string theory in the context of the swampland conditions, which indicates a more fundamental problem in violating them [50,51]. The swampland conditions are an elaborate argument built for a very sophisticated model. Nevertheless their final conclusion is supported by arguments based on simple reasoning. This is to say, if a model deviates from the regime in which QFT has been empirically verified, thus minimally if the inflaton mass is smaller than the Hubble scale, m_φ < H, or the inflaton field amplitude is above the Planck scale, φ > m_p, then you are now entering the twilight zone of quantum field theory. The path of least resistance in building a theoretically consistent model of inflation is to use only the quantum field theory as we understand it, which minimally means to not breach these boundaries.
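To make the higher dimensional operator concern concrete, a generic dimension six correction of the type just mentioned shifts the second slow-roll parameter by an order one amount,

\Delta V = c\,V\,\frac{\Phi^2}{m_p^2} \;\Rightarrow\; \Delta\eta_\phi \simeq m_p^2\,\frac{(\Delta V)''}{V} \simeq 2c\,,

so unless the unknown coefficient c from Planck scale physics happens to be tuned small, the required flatness of the potential is destroyed; this is one face of the η-problem.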
Whether infra-red, η, higher dimensional operator, or swampland problems, etc..., the writing on the wall has been clear for decades: having an inflaton mass less than the Hubble scale or an inflaton field amplitude above the Planck scale is fraught with problems. Rather than fight against what quantum field theory clearly has difficulty with, it is prudent to explore an alternative approach that sits very comfortably in quantum field theory, by looking for inflation models where the inflaton mass is larger than the Hubble scale and where the inflaton field amplitude remains sub-Planckian. If the inflaton mass is bigger than the Hubble scale, Hubble damping will have little effect in slowing down the inflaton field. Thus a dissipation term (or some other backreaction effect on the inflaton arising from particle production) is required, with dissipative coefficient Υ > m_φ > H. However the presence of such a dissipative term will imply radiation production during inflation. This logic, guided by quantum field theory consistency, leads in a natural way to warm inflation. There are some first principles quantum field theory warm inflation models which have been shown to achieve the strong dissipative regime [78][79][80]. There is still much to be done in this direction, but the evidence is convincing that warm inflation avoids the major model building hurdles that hamper cold inflation.
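The quantitative statement behind this logic is that dissipation relaxes the slow-roll conditions. With the dissipation ratio Q ≡ Υ/(3H), the warm inflation slow-roll conditions take the standard form

\epsilon_\phi \equiv \frac{m_p^2}{2}\left(\frac{V'}{V}\right)^2 \ll 1 + Q\,, \qquad |\eta_\phi| \equiv m_p^2\left|\frac{V''}{V}\right| \ll 1 + Q\,,

so in the strong dissipative regime Q ≫ 1 the inflaton mass squared m_φ^2 ≃ V'' can be of order (1 + Q)H^2, well above H^2, while inflation proceeds.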
VIII. CRITIQUE
Within the field of cosmology, warm inflation seems to have developed into a rebel idea. It was never my intention for that to happen. As already mentioned, when I first proposed the basic warm inflation scenario, I had little background in cosmology and in particular inflation. In science, new ideas often are greeted with interest, which has been my experience in the different areas of physics I have worked in. The general attitude is no one idea can possibly be accepted until all reasonable ideas are given full consideration. In this respect my experience in cosmology has been somewhat unusual, as I unintentionally discovered. I found that this field had a certain large group of researchers who advocated for cold inflation to the extent that they seemed to have already decided it is the right answer and had minimal interest in considering any alternative picture of the early universe. Somehow they knew that this idea is exactly what happened in that brief minuscule fraction of a second 14 billion years ago, and there is little need for broader thinking. The only thing this attitude has done is slow the development of the field and waste resources in the process, which could have better served research.
From my own part, I have never tried to hype up the warm inflation idea. In fact most of my effort seems to have gone into looking for all the ways warm inflation will not work. However in the process, a few gems of ideas have emerged that do work, and there are a handful of quantum field theory based models that look promising for producing a complete first principles solution of inflation. I hope that the scrutinizing attitude I have taken with warm inflation has helped to keep other researchers working on it focused on the central problems, or to do meaningful comparison of models with data. Oftentimes the success of cold inflation is explained in terms of the number of citations and papers it has generated. However much of the work on cold inflation, though it contains compelling features, also perpetuates the same ignorances or builds more complicated theory or QFT machinery on top of the same core problems. Science is not a democracy nor a popularity contest. Ultimately the cold (or warm) hard truths catch up to you. And for cold inflation, despite its over four decades of existence and despite its simple picture in appearance, there remain some very difficult unanswered questions about the viability of this idea from first principles quantum field theory. Without there being clear and unambiguous answers to those questions, there is not much there. The same holds for warm inflation, but it seems to be a bit ahead in addressing the fundamental problems. Nevertheless warm inflation still needs to be better understood in terms of the underlying first principles QFT dynamics and separately in terms of the GR evolution of density perturbations in the presence of dissipation and radiation. Inflationary cosmology has been around long enough that the age of innocence for this field has long passed. Enthusiasm for the basic picture can no longer be sufficient to justify its prevalence.
In this respect the fundamental problems confronting inflation provide ample justification to those who have altogether given up the inflation habit and are looking for very different solutions for early universe cosmology. After the first minute or so of amazement one has at all the cosmological problems inflation can solve in one fell swoop, here the subject is a career lifetime later and yet there is no fully consistent, viable dynamical model of inflation. In theoretical physics, inflation may be one of the great ideas of our time, which upon second consideration is maybe, one dare say, not so great. Inflation certainly generates a large number of papers and citations. This apparent indicator of success of inflation is also its problem, in that the idea is so vague that for almost any feature one can imagine in the CMB data or in model building, one can invent some inflation model to explain it. However this does not feel quite like what we are typically used to calling success in theoretical physics. In theoretical physics, success has a more rigorous foundation, where the theory makes definitive predictions based on a derivation that is widely accepted. And herein lies the key point: if inflation model building was restricted to models that were theoretically consistent, there would be a vast reduction in possible models and possible predictions from inflation. Under such restrictions whether inflation really can produce a viable model remains to be seen. It may well be that theoretical cosmology comes full circle, and the ideas considered early on by Wheeler and Harrison, that the superhorizon primordial density perturbations were somehow fixed by yet unknown quantum gravity dynamics, may be the right answer. The great mystery of causality may reveal itself in solving the great mystery of gravity, and that may entirely change our perspective, especially about cosmology. If we are unsuccessful in finding a fully consistent dynamical model of inflation, then we have to keep open-minded that perhaps this physics was already fixed at the quantum gravity scale, and the whole inflation program is wrong.
I am being critical here just as much of warm inflation as cold inflation. The inflation ideas as a whole may yet prove to be the ether of our time. The ether idea was of a mysterious substance that drastically alters spacetime. That very much also describes inflation. The ether idea was motivated by the best concepts of its time, electromagnetism and the stress tensor, just as inflation today is motivated by General Relativity. The idea was simple but implementing it led to many complications, just like inflation. It is ironic that General Relativity marked the final end of the ether idea and yet today is used to provide the strongest argument for inflation. Some things never change.
The mysterious substance in the case of inflation is vacuum energy. Within the GR equations, such a substance leads to an exponential expansion, which is characteristic of inflation. However there has been no direct detection of such a substance exhibiting anti-gravity. Within conventional QFT as we so far understand it, the vacuum energy up to a constant is arbitrary and fundamentally unnecessary. Thus from this perspective, conventional QFT can have an arbitrary amount of inflation or none at all; it is completely unconstrained.
For these reasons one needs to be very careful in working with inflation. It should not be viewed as a goal that, whatever the evolving data and theory, at all costs a model of inflation must exist. Rather it should be a question, a matter of scientific enquiry, whether we can find a sensible and theoretically consistent inflation model in line with the data. And if we cannot, then there is no validation of it. If that becomes the case, then serious thought needs to be given to alternative ideas about the early universe beyond inflation.
At least if one has a QFT model of inflation which is otherwise consistent, thus minimally with m φ > H and φ < m p , then such a model has only one unknown fundamental quantity, this vacuum energy. And if a tensor mode is found in the CMB, indicative of a vacuum energy, it could then more uniquely be attributed to a QFT model. If on the other hand the QFT model requires other unknown fundamental assumptions such as sub-Hubble masses or super-Planckian field excursions, then there are too many unknowns for the model to be uniquely predictive and it leaves open a bigger range of interpretations about what has been found in the CMB data.
For those of us who work on warm inflation, the alternatives are acknowledged. We have approached warm inflation as just one relevant idea that needs to be fully vetted. Suppressing alternative ideas or not fully recognizing the success of alternatives is not helpful to the development of this field. In point of fact, everything found in the CMB data to date that is talked about in support of cold inflation is equally a success for warm inflation, yet this point is rarely acknowledged. Along with this, today it is often stated as a general fact that the Φ^2 and fairly closely the Φ^4 models of inflation have been ruled out, due in particular to the lowering of the upper bound on any possible tensor-to-scalar ratio, but in fact such models are still consistent within warm inflation [57,58,79,82,83]. These have been the models for three decades that the advocates of cold inflation had been pinning their hopes on. Yet when they were ruled out, rather than even a brief moment of reflection and reassessment, thus noting the continued success of these models within warm inflation, their interest immediately turned to other more exotic models. What's going on? I am not saying cold inflation is wrong. How do I know. Everything I know is based on what I learn from the data and theory, and so far both these are inconclusive. There is no one that knows more than that. We don't have a Gandalf here showing us the way. Our only guides are the theory and experiment. Although there are pockets of success for inflation, they should not be exaggerated, since the whole picture doesn't quite add up, neither from the side of theory nor observation. Everyone is capable of thinking for themselves in seeing that the science is inconclusive about inflation, and even more about warm versus cold. However warm inflation does score a bit higher than cold, both in having anticipated a lower tensor-to-scalar ratio, which could still be found, and in addressing the main model building problems. There is no scientific basis for focusing exclusively on cold inflation and ignoring warm inflation.
The introduction of the Planck 2018 data was a watershed moment [84], where one would have to conclude that there is no longer a benchmark paradigm of the early universe. Cold inflation can no longer claim that mantle. Even if any researcher still wishes to remain ignorant about warm inflation or the many other interesting alternative ideas about the early universe, the quantum gravity possibility still hangs over the whole subject. Thus inflation has a very tough requirement to come up with a fully consistent dynamical model. That is why for warm inflation, as I already mentioned, I have maintained very stringent requirements in building a model that is fully consistent with the quantum field theory we know to have worked at tested collider energy scales. As such this minimally means no scale in the model, such as the inflaton field amplitude, should be larger than m_p, since that enters the quantum gravity regime, for example in generating nonrenormalizable operator corrections. In going above the m_p scale, one must already make an assumption that quantum gravity effects somehow are subdominant to the low energy model. However as soon as any assumptions are required about quantum gravity, it isn't much of a leap to simply assume quantum gravity solves all the problems. Though I have not achieved the complete goal up to now of coming up with such a fully consistent warm inflation model, I have at least shown that warm inflation can avoid the most ambiguous regimes of quantum field theory with models remaining where
m_\phi > H\,, \qquad (24)
\phi < m_p\,. \qquad (25)
Thus I have shown warm inflation has the basic ingredients to solve the most fundamental and pressing problems of inflation model building. Moreover, with collaborators we have developed the first simple quantum field theory model that can realize these properties and give an observationally consistent result for inflation [58,79]. These properties arise ultimately as consequences of the particle production from interactions amongst quantum fields during inflation.
As an aside, note these effects differ from the coupling of matter fields to classical gravity that leads to particle production arising from an expanding background and/or curvature, which was examined in the seminal work of Parker [85] and subsequently by others [86,87]. Some papers also discussed this in the context of inflation [88][89][90][91], although these effects showed no appreciable change to the large scale predictions relative to cold inflation. There is a more general point here that my work has demonstrated. Just as radiative and thermal corrections affect the background scalar field physics by altering its effective potential, when time evolution is involved additional QFT effects will occur, such as particle production, and these effects will react back on the background scalar field and alter its dynamics, such as by possibly allowing a larger inflaton mass or affecting the extent of the background inflaton's field value. The broader lesson to take from our warm inflation work, which may be useful for other early universe models, is not to ignore particle production from QFT interactions and the associated effects that come with it. For inflation, accounting for this from the time evolving background scalar field could be the missing ingredient that has balanced inflation models, thus allowing for more sensible model building.
My above critical comments about the attitudes in the early universe cosmology field arise from direct experience in trying to develop warm inflation. Although there have been many researchers who have contributed their insights and skill to developing warm inflation, it has also faced an almost staunch lack of acceptance from that larger portion of the cosmology community that advocated for cold inflation. Mainly they have just minimized acknowledging the idea at all, but they have also argued that the idea is in some way or another inferior. At the start their arguments centered around the Yokoyama and Linde work, despite us having shown fairly soon after that warm inflation could be realized from QFT. Once such arguments clearly became dated, others followed, such as that warm inflation is not as simple as cold inflation since it requires an additional parameter, the dissipation coefficient. Cold inflation also needed an additional parameter for reheating. However in that scenario, it seemed to have been decided, without any empirical evidence supporting the picture, that inflation is synonymous with a supercooled phase and that reheating must be a distinct separate phase. By now, though criticism of warm inflation is much more muted, with some past critics even working on the idea, the main defence I have heard from some of the cold inflation advocates is that they believe in cold inflation or like it. Belief is a fine quality in a devotee but needs to be tempered by the facts in a scientist.
The simplicity argument, despite its obvious shortcomings, carried on being used until the Planck data showed there was no tensor mode at the high energy scale predicted by the single field Φ^2 and Φ^4 chaotic cold inflation models. It became further obsolete with the swampland work that showed that many cold inflation models may not be consistent with quantum gravity, at least within the context of string theory. On the other hand, both these developments worked in favor of the warm inflation scenario. We had seen from model calculations as early as 2009 [56], and further studied in [57], that warm inflation suppresses the tensor-to-scalar ratio, including in the simple monomial models such as Φ^2 and Φ^4. Moreover the swampland conditions showed once again that an inflaton field with mass m_φ > H and φ < m_p would be the regime consistent with string theory.
The Planck 2018 inflation paper [84] really demonstrated the harmful degree of advocacy by the cold inflation enthusiasts. I had informed the Planck inflation team about the successes of warm inflation in demonstrating, well before their data, that the tensor-to-scalar ratio should be suppressed, even in the simplest monomial models. Given that the outcome of no tensor mode detected down to the GUT scale was the big result from their analysis, thus breaking decades of expectation from cold inflation of a confirmation of the simple monomial models and possibly even its consistency condition, it is a rather big deal that warm inflation all along had what was now the picture consistent with the data. Despite this information being available to them, the Planck 2018 inflation paper reported amongst its "main results" summary that Planck 2018 "strongly disfavors monomial models", which in truth only applies to cold inflation models, but the wording with no other qualifiers suggests that no science exists beyond that picture. That is misleading. In fact one significant interpretation of the Planck 2018 results is that they have found indirect evidence for radiation and dissipation during inflation, which is suppressing the tensor-to-scalar ratio. This is a very simple explanation of the Planck 2018 data. It is the sort of bread-and-butter explanation, involving simple potentials, particle production, dissipation, etc..., that has a familiarity. It offers an explanation of this remote phenomenon in terms of relatable analogies, which is what one ideally wants. This is a far cry from cold inflation, where the explanation involves string theory, modifications to gravity and other gizmos that so far have shown no close contact with reality. Physics is an empirical science. An unknown idea used to understand another unknown idea gains you very little. Neither string theory nor modifications to gravity have been shown to be theoretically consistent, and there is no experimental evidence for them. Thus it is unclear how relevant any inflation models are that are based on these ideas. On the other hand, models based on the quantum field theory we are familiar with, no doubt extending beyond the Standard Model but following the basic rules that are understood to work, have a closer connection with reality. In this respect there are at least a few warm inflation models.
Nevertheless Planck 2018 gave only the vaguest mention of our successful prediction, and the associated papers going back to 2009 that had made this prediction were not referenced. As Planck is an observation group, their responsibility is to their data and protecting its integrity from any theoretical bias. A supporting pillar of science is the impartiality of experimentalists to theory. So if they insist on discussing theory, then they need to be equitable to all ideas that predicted the trends in their data. In this respect warm inflation has been spot on, and with a very simple explanation, yet was given only a vague mention in their paper, with the relevant references missing. It does not benefit any field of science when advocacy reaches the point of discarding the evidence on the ground and the facts are not thoroughly reported. And here the facts remain that warm inflation bucked the theoretical trends by arguing for a lower tensor-to-scalar ratio, below the detection limits of Planck, and we were correct.
If a tensor mode is eventually detected at some lower energy scale, then there are very good claims to be made that this is evidence for warm inflation [57,58,79,82]. Statistically, warm inflation models compare very well to Planck data. The Δχ^2 for warm inflation with dissipation of the type found in the warm little and two-stage models is much smaller than typically found for cold inflation models. For example for the Φ^4 potential, the Δχ^2 for warm inflation models is very small, order 1 [83], whereas for cold inflation it was found to be ∼ 40 in the Planck 2015 analysis [92]. Moreover, for the warm little inflaton model in [79], for the Φ^2 potential we found that a super-Hubble scalar inflaton mass and sub-Planckian scalar field excursion throughout inflation occur for a tensor-to-scalar ratio r ≈ 6.4 × 10^{-6} and a spectral index n_s ≈ 0.965, so within the 68% confidence level of the Planck 2018 legacy data [84].
Imagine for a moment that tomorrow a tensor mode is found in the CMB, which corresponds to an energy scale for inflation just below the present upper bound (which itself is just below that found by the Planck 2018 results). Then if we go by the Planck 2018 inflation paper [84] "main results" summary, a likely best fit to that hypothetical result is D-brane inflation. So does that mean that will mark the day when string theory will have been experimentally discovered? If we go by their main results, they say "...inflationary models such as R^2, T and E α-attractor models, D-brane inflation and those having a potential with exponential tails provide good fits..." to their data. The first three of these involve either modifications to gravity, supergravity, conformal or superconformal field theories, and/or string theory, so in all cases would be very major discoveries. Such models are important as intellectual exercises in the overall effort to develop more fundamental theories of physics. However they are far from established theories of the physical world. Such models have no urgency for accurate best fit comparison by experimentalists against their data, and for them to do that is misguided. The last possibility they gave, of potentials with an exponential tail, is rather disappointing to find in their list. This suggests that after four decades of inflation model building, they consider it a main result to give just the form of a potential, which moreover, without a higher theory that might justify it, is on its own nonrenormalizable and so without merit. It should only be added that this potential is much more complicated than the simple, renormalizable, monomial potentials that work for warm inflation. All their main result possibilities are more complicated than the much simpler conclusion that their CMB data implies inflation with a simple, conventionally renormalizable potential and just a bit of radiation and dissipation. It would be irresponsible and downright unscientific to favor the complicated explanation over the simpler one. This goes against all practice in science, in which, when given the option, one takes the simplest interpretation. That is the misdirection the Planck 2018 inflation paper is heading us toward if we simply ignore the warm inflation possibility. Out of the over fifty inflation models tested (this in any case seems excessive), they did not test any warm inflation models, which are amongst the only few that are conventionally renormalizable, and thus by default significant. There are only two dynamical pictures of inflation, warm and cold. If a thorough scientific analysis is the goal, then models from both pictures need to be tested against data. The particle physics dynamics implied by an interpretation of the CMB data as either warm or cold inflation is very different. Thus there needs to be extreme caution in how data is interpreted, as it will become the underlying basis for future particle physics model building.
Without a rigorous model of inflation that is fully consistent with the QFT that we know and understand, it is even dangerous to interpret any hypothetical tensor mode discovery in the CMB as inflation in the way we currently understand it. There is still a possibility that the true explanation may involve quantum gravity in some as yet subtle way. For example such a tensor mode may arise from an inert vacuum energy at the measured scale, but the primordial density perturbations may still have been fixed by quantum gravity. Moreover there may be some way the inert vacuum energy is correlated with quantum gravity. It may also be that quantum gravity is creating tensor modes of a sort we don't fully comprehend at the moment, and we may confuse those signals for a vacuum energy. All this may sound like contrived explanations, but we just don't know all the unknown unknowns about quantum gravity until it is fully solved. That is why it is imperative that if we want to interpret a hypothetical tensor mode discovery in terms of inflation as we understand it, then minimally we need a rigorous QFT model that tells us how exactly we understand inflation. And this model needs to be not half rigorous or sort of rigorous, but completely rigorous. This requirement is true for both warm and cold inflation.
In fact, in the fallout of Planck 2018 it is best not to bias our thinking just toward inflation, especially by CMB experimental groups. If any experimental group insists on discussing theory, then they need to be very disciplined in giving a fair and broad overview. Experimentalists should understand what they represent as experimentalists if they make any association between their results and string theory or any of these many speculative ideas from theoretical physics. Actions such as these shift the focus away from looking for results that might be accessible toward results which, for the time being, are impossibly unreachable. One expects experimentalists, especially large experimental groups, to lean on the side of restraint in associating any theoretical idea with consistency to their data until they have confidence the data is headed with high likelihood eventually to confirm that idea. They need to think very carefully about whether the fragmentary information gleaned about the early universe from the culmination of all their extensive work really is enough to start associating it with these fantastic ideas of theoretical physics. And if they insist on wanting to do this, they need to ask what kind of credibility they are giving their results if they then ignore simpler, more tame theoretical explanations, by not giving a broad assessment of theory. Very speculative directions can be left for individuals to consider, but experimental groups should act responsibly toward the whole field and also be thoughtful in the use of hard earned taxpayer money. Actually, there is no need for experimentalists to get into any comparison of theoretical models. Once they put their data out, within a short period some theorist will have analyzed it and reported that some string theory model or whatever is the best fit to the data, and so forth. But that's different. Nobody listens to us theorists when we talk like that. However for an experimental group to make such claims is a huge deal.
The discussion in this Section has shown that at the very least there are two quite different dynamics, warm and cold inflation, with models which agree very well with CMB data. Moreover this is likely to still hold if a tensor mode is discovered in the near future. So there already is a large degeneracy of possible early universe solutions, even before considering non-inflationary models. We are far off from any conclusive early universe model. If a tensor mode is discovered, more needs to be assessed as to whether, within our present understanding, its explanation reduces just to a comparison between warm and cold inflation. Thus, further research should be done in searching for non-inflationary early universe models that create tensor mode signals, as well as more scrutiny over secondary sources of such B-mode polarization signals. Alongside this, as discussed in the next Section, attention should be given to the level of assumptions entering a model, with those requiring the least number of assumptions then being the most important for comparison against data. At this point it is to our advantage to treat all viable ideas about the very early universe on an equal footing and give them all equal tests in comparison to the data. In the long run this approach is better also for cold inflation, so that it is fully vetted and scrutinized. We need to avoid the danger of talking ourselves into believing only one idea is correct. A broader perspective is needed until clear evidence favors one idea well above all others, including the possibility that none of our present ideas for now are adequately favored.
IX. MODEL SELECTION
Apparently Landau once said something to the effect that cosmologists are often in error but never in doubt. I don't know if he said this but it sounds relevant. Still today this statement holds meaning for the theoretical side of cosmology. The problem with theory in cosmology is that no matter what particle physics model one builds, only a small portion will be directly tested against what intrinsically is limited empirical information about the early universe. In particular when comparing to data, all inflation models boil down to the value of a scalar field potential at one particular point of the background field and a couple of derivatives at that point (and maybe a couple more). The lower multipole spectrum can also be fit, covering up to around 10 e-folds of inflation, so giving partial information about a small region of the potential about this point. There are a few more details. The potential in question must have another point at which inflation ends, and the point of interest for comparing to data is meant to be where the inflaton evolves long enough to create around 50 e-folds of inflation. Moreover the end temperature also gets fixed by the inflaton model, and that is involved in determining exactly how many e-folds of inflation, near about 50, will be needed. For warm inflation there is also the radiation density generated during inflation, which in the most common realizations is characterized by a temperature scale. A point on the scalar potential will be associated with a temperature, which itself will depend on the parameters of the potential and some set of interaction couplings in the model. So the parametric dependence on the underlying model is a bit different in warm versus cold inflation, but the basic idea is the same.
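For reference, in cold inflation this point plus a couple of derivatives maps onto the observables through the standard slow-roll expressions,

A_s \simeq \frac{V}{24\pi^2 m_p^4\,\epsilon_\phi}\,, \qquad n_s - 1 \simeq 2\eta_\phi - 6\epsilon_\phi\,, \qquad r \simeq 16\,\epsilon_\phi\,,

evaluated where the observed scales exit the horizon; in warm inflation the same potential parameters enter, but the scalar amplitude and spectral index acquire additional dependence on the dissipation ratio Q and on T/H, which is what changes the parametric dependence just described.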
Details aside, inflation model building boils down to finding an encasing theory that can produce a scalar field potential with a point on that potential, and a small region about that point, that compares well against the data. If that point on that potential of that encasing theory agrees well with the data, that does not mean you have confirmation of the encasing theory. It is the other way around. You need to show that the encasing theory which produces the potential that has that point has some claim to be a physical theory. For example, the presence of the photon does not mean string theory has been discovered. String models contain the photon, but the challenge is to show a string model also is theoretically consistent. If there was a unique encasing model that produced the point on the potential which agreed with the cosmological data, then that would at least single out that model. However, what compounds the problem is the proliferation of inflation models in the literature, which suggests not to expect such a unique association, and also that not all these models could ultimately be based on correct underlying theoretical ideas. Moreover for many encasing theories, the choice of the inflaton potential has some degree of arbitrariness, thus further dissociating the potential, which is the only part of the encasing theory that cosmological data is testing, from other aspects of the encasing theory.
The cosmological data provides an upper bound on the tensor-to-scalar ratio r, an amplitude of the scalar perturbation, and the spectral index n_s, and possibly its running. The parameters of a given inflation model need to fit as best they can to these observables, alongside some constraints such as producing a sufficient duration of inflation and the final temperature after inflation, and perhaps some theoretical constraints and consistency conditions within the model. The error in the spectral index is sufficiently wide that one should not anticipate uniquely separating out a single inflation model. Cosmic variance is an ultimate limiting factor in how accurately n_s can be measured. Narrowing these errors will certainly improve predictability, but more is still needed to help determine the best model.
The requirement that the inflation model be theoretically consistent and have a claim on being part of the physical world provides a useful guide. It suggests that another helpful measure for separating inflation models would be to characterize for each such model how speculative it is. The more speculative it is, the higher the chance that once quantum field theory and particle physics model building is better understood, the prediction from the model will very likely change and in the worst case the underlying ideas the inflation model is built on are wrong or inconsistent. Alternatively, the less speculative the encasing theory is the more predictive it is. The least speculative theory would be the Standard Model, but we know that is not adequate to explain cosmology.
This gives a guiding rule, that models of the early universe should be assessed on how much more beyond the SM is required to compare well against the cosmological data. Counting the number of speculative ideas inputted to a model would provide some guidance over model building. At the lowest end of speculation it would still require building an extension to the Standard Model, but remaining within the rules of QFT that have been understood through the Standard Model. At a bit higher level in speculation, effective field theory methods, though less predictive, can still be OK provided the cutoff scale is below the quantum gravity scale. It's fine to still make much more speculative models at a much higher end of the speculation count, but having such a count makes us more aware that such models are for the most part intellectual exercises. The models with the lowest speculation count are the serious contenders for comparing to data. In that respect models that involve any assumptions about quantum gravity, or rely in an essential way on some modification to gravity, are not at a stage where much is gained by comparing them to the very limited data that cosmological measurements can provide. That is because, first, our ideas about quantum gravity or even some aspects of classical gravity beyond GR could be and probably are wrong at some level at the moment, so any model based on them will most likely ultimately need adjustments or simply be wrong. Second, if one is ready to make such assumptions involving quantum gravity as part of their model, then it's not a step much further to simply assume that quantum gravity might completely solve the problems of the early universe, by directions already suggested in the literature or in some entirely different way, once it is much better understood. Thus for any such model, we simply must wait until quantum gravity is much better understood before we can assess its relevance. These are points I hope our research over the years in warm inflation has tried to express through our actions in attempting to find a warm inflation model that is fully consistent with the quantum field theory as we presently understand it. Admittedly, we have also crossed into the higher end of the speculation count, because it has fundamental interest and is fun to do, but at the same time we have put considerable effort into finding much more experimentally relevant models that were as close an extension as possible to the QFT we presently understand. Here I will try to provide a more systematic guide as to how to do a speculation count for any cosmological model. Quantum gravity, string theory, higher spacetime dimensions beyond four, modifications to spacetime aside from additional dimensions, loop quantum gravity, super-Planckian field excursion, higher dimensional operators, sub-Hubble masses, modified gravity, fields where any mode propagation differs from standard QFT, supergravity, supersymmetry, effective field theory with cutoff scale at m_p, using effective field theory methods, extra fields beyond those in the Standard Model which cannot be attributed to some symmetry or higher theory, symmetries not of types in the Standard Model, symmetries included in the model, model building beyond the Standard Model, etc..., each add more speculation to a model. To account for these properties, the speculation count would need two separate categories. One would be for fundamental (F) attributes that alter the quantum field theory as we presently understand it.
These are all attributes with unknown unknowns, for which at the moment there is no empirical evidence whatsoever, and alongside that there is no established theory for them. Models with any fundamental speculation counts are susceptible ultimately to being wrong or needing significant modifications once any of the fundamental attributes in the count are better understood in the future. The other category would be speculations of a technical (T) nature related to standard QFT model building. These are attributes with known unknowns or known knowns. The success of the Standard Model is strong evidence that standard QFT model building is immensely successful, with ample evidence of its empirical relevance. The rules for such model building were set decades ago and by now are prescriptive, leaving much of it as a technical exercise. It can still lead to novel results, but it works with symmetries and properties of the type familiar or similar to those in the Standard Model. The attributes belonging to both categories are given in the Table. The speculation count based on this Table applies to inflation models that have a mechanism to end inflation into the radiation dominated regime and are able to protect the needed flatness of the inflaton potential from radiative/thermal corrections.
The separation into two different categories is necessary because comparing speculation between them would be an apples and oranges comparison. Attributes in the Fundamental category involve a conceptual leap beyond our present theoretical and empirical knowledge. On the other hand, models with all attributes in the Technical category have a familiarity based on our extensive experience with the Standard Model. Models with only Technical attributes are built on types of quantum fields that have all been tested in collider experiments, and so there is a possibility that beyond cosmological data some particle physics based collider or astrophysical tests can also be conceived to test such models, even if indirectly. To appreciate the distinction between Fundamental and Technical attributes, suppose a new particle was discovered which could be explained through standard types of model building beyond the Standard Model. That would be exciting and extremely noteworthy, but mainly because any discovery in particle physics has basic significance, happens slowly, and after a lot of work. Yet it would be a level shift fundamentally higher if, for example, the discovery directly showed that our world had an extra dimension. The Fundamental/Technical speculation count is obtained by simply counting the attributes of a model in each category.
The fundamental count includes sub-Hubble mass scalars, based on my discussion earlier in the paper. For models with super-Planckian field excursions or higher dimensional operators, quantum gravity would be included in the fundamental count. Even if quantum gravity is not explicitly used in such models, once a model has these features it is implicitly affected by quantum gravity, and that is an uncontrolled approximation. In the technical category, any symmetry included in the inflation model is counted to assess the complexity of the model, but if that symmetry differs significantly from the types found in the SM, it is counted twice to account for the higher speculation associated with such symmetries. For symmetries much different from the SM, I specifically exclude any new spacetime symmetries, like supersymmetry or alterations to Lorentz invariance, CPT, etc., as those are already counted in the Fundamental category; but this would include, for example, technicolor, preon models, unparticles, etc., and for theories in the Fundamental category it includes symmetry details such as the choice of compactification, brane type, etc. The inflaton potential, which is often arbitrary in inflation models, I did not include as a separate attribute. This is partly accounted for in the model building beyond the SM attribute. Also, from the symmetries in an inflation model, usually a scalar emerges that is identified as the inflaton. If the potential is nonrenormalizable with a cutoff scale below m_p, it is accounted for with the effective field theory with cutoff below m_p attribute, and if the cutoff scale is at m_p, it is accounted for with the quantum gravity attribute. Counting the number of speculations in a model mitigates the need for relying on individual opinions on this matter. Two categories are necessary since Fundamental attributes are speculative to a different degree than Technical ones, and it would be impossible to give any metric to compare between the two. Then within each category, a priori with no further information, the count treats all attributes equally, and the degree of speculation in a model comes down simply to how many of the attributes in the Table it has.
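To make the bookkeeping concrete, here is a minimal sketch in Python of the tally just described. The attribute names, the (name, multiplicity) list representation, and the function name are illustrative assumptions of mine rather than anything prescribed in the text; the double-count rule for non-SM symmetries is realized by entering such a symmetry under two names.

# Minimal sketch (illustrative bookkeeping, not notation from the paper)
# of the Fundamental/Technical speculation count.

# Attribute names tagged as Fundamental, following the Table.
FUNDAMENTAL = {
    "quantum gravity",
    "extra spacetime dimensions",
    "modified gravity beyond GR",
    "sub-Hubble mass scalar",
    "new spacetime symmetries",  # supersymmetry/supergravity, etc.
}

def speculation_count(attributes):
    """Tally (fundamental, technical) counts for a model.

    `attributes` is a list of (name, multiplicity) pairs. A symmetry
    not of the type in the SM is entered under two names ("non-SM
    symmetry" and "symmetry in model"), which realizes the double
    counting described in the text.
    """
    f = sum(m for name, m in attributes if name in FUNDAMENTAL)
    t = sum(m for name, m in attributes if name not in FUNDAMENTAL)
    return f, t

# Toy example: a hypothetical model with one non-SM symmetry
# (double counted on the technical side per the rule above).
toy_model = [
    ("sub-Hubble mass scalar", 1),    # fundamental
    ("non-SM symmetry", 1),           # technical (first count)
    ("symmetry in model", 1),         # technical (second count)
    ("model building beyond SM", 1),  # technical
]
print(speculation_count(toy_model))  # -> (1, 3)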
One thing we can all agree on as theorists is that a particle physics model of cosmology will require some degree of speculation, but we will never agree amongst ourselves on the degree to which each of the possibilities in the Table is speculative. This speculation count provides a simple step of at least counting each speculative attribute. What we will still not agree on amongst ourselves is what level of speculation is acceptable, since speculation might also be described by another word: insightful. However, there are two limiting cases on which we can all agree. First, models involving quantum gravity are at the extreme end of speculation. We know so little about this thing we so conveniently call quantum gravity that we don't even know whether, at the Planck scale, physics behaves by the rules of quantum mechanics, or whether at some scale, possibly much below the Planck scale, some entirely new type of physics kicks in that supersedes quantum mechanics, just as quantum mechanics superseded classical mechanics at around the atomic scale. Gravity is the only force which interacts directly, as far as we know, with all forms of energy, and it defies being encaged as a renormalizable point particle quantum field, so it befuddles attempts at a unitary theory. These properties that separate gravity from all the other fundamental fields of Nature may be the earliest hints that at some high enough energy scale its behavior goes beyond not four spacetime dimensions but rather beyond the rules of quantum mechanics. At present no one can exclude that possibility, thus there is little point in trying to argue that models requiring quantum gravity have any unique or urgent phenomenological relevance. In fact, we don't even have any definite idea how physics behaves just a couple of orders of magnitude in energy above the LHC scale, and so the Planck scale is well beyond reach for meaningful application to phenomenology. Second, for any model at a low level of speculation, there is much less to be theoretically debated, so there is greater significance in testing how well it compares to data. In truth we don't even have a definite idea what is just around the corner at the next higher energy above the LHC scale, so even cosmological models with a low speculation count should be treated with a great deal of caution. Nevertheless, for any model with a low level of speculation, and especially with no Fundamental attributes, if it also fits well against data, then on a relative comparison it is amongst the best cosmological models.
The higher the speculation count for a model, especially in the fundamental category, the more it presses the question of whether the proposed model really is a measured solution for obtaining an adequate phase of quasi-exponential expansion. For example, is adding six spacetime dimensions really a measured solution just to obtain a phase of inflation? The count forces one to think about the purpose of a model. If one is developing string theory, it makes sense to see how well a string based model can realize phenomenology relevant to real world data. However, if one is interested in determining the most relevant models that agree with the data, too high a speculation count indicates those are not the primary models that should be tested. Cosmological observation basically fixes two data points, the scalar amplitude and the scalar index n_s, gives one bound, on the tensor-to-scalar ratio r, and maybe provides a few more data points such as non-gaussianity, running of n_s, isocurvature, etc. As the speculation count rises, the assumptions going into a model swamp the limited data and nothing is really being tested.
Here we examine the Fundamental/Technical speculation count for a few models. We are not concerned here with how well the models compare to data, which we have already discussed in previous parts of the paper. Here the count is just assessing the theoretical aspects of these models. The potentials are written in terms of only the background inflaton field φ = Φ.
D-Brane inflation model [93]: D-branes are solitonic solutions arising in string theories of type I, IIA and IIB. There is an interaction energy between parallel branes and anti-branes, and this is the potential energy utilized to drive inflation. The inflaton field is a mode corresponding to the relative motion between two parallel branes. The model relies on the locality of the higher dimensional theory to allow for a sub-Hubble mass, as necessary in cold inflation. For D-3 branes, the potential has the form,
V_{D-brane}(φ) = M^4 (1 − α/φ^4) .
The counting of speculations from the Table entering the D-brane inflation model gives a speculation count for Fundamental/Technical properties of 9/5. (In the counts listed for the models below, the square bracket indicates whether the attribute is Fundamental [F] or Technical [T], and the parenthesis gives the number of speculation counts for that attribute if it is larger than one.) Here the choice of compactification and D-p brane I include in the technical category, and not also as a fundamental attribute for new spacetime symmetries, since this model has already been penalized in the fundamental category for extra dimensions, which is sufficient.
α-attractor superconformal inflation model [94]: This is a supergravity model where the parameter α is inversely proportional to the curvature of the inflaton Kähler manifold. A common choice of potential is
V_{α-attractor}(φ) = tanh^{2n}(φ/√(6α)) ,
for n, α > 0. For large curvature, which corresponds to small α, the predictions agree well with CMB data. The counting of speculations entering this α-attractor model is:

- Quantum gravity [F]
- Supergravity [F]
- Sub-Hubble mass inflaton field [F]
- Symmetries not of type in SM - superconformal [T]
- Symmetries included in the model - three chiral multiplets and Kähler potential with superconformal and SU(1,1) symmetries (5) [T]
- Model building beyond the Standard Model [T]

This gives a speculation count for Fundamental/Technical properties of 3/7.

R^2 Starobinsky model [32,95]: This is a type of modified gravity model which has a curvature-squared R^2/(6M^2) term added to the Einstein-Hilbert action, where R is the Ricci scalar and M < m_p. This action is transformed into the Einstein frame, leading to an inflaton potential of the form,
V_{R^2}(φ) = Λ^4 [1 − exp(−√(2/3) φ/m_p)]^2 .
The counting of speculations entering the R^2 Starobinsky model is:

- Quantum gravity [F]
- Modifications to gravity beyond GR [F]
- Sub-Hubble mass inflaton field [F]
- Symmetry not of type in SM - transform from the Jordan to Einstein frame [T]
- Symmetry included in the model [T]
- Model building beyond the Standard Model [T]

This gives a speculation count for Fundamental/Technical properties of 3/3. In the original paper by Starobinsky, he had viewed the R^2 term as dynamically generated as a self-consistent solution of the vacuum Einstein equations by one-loop corrections due to quantized matter fields. The model can also have quantum gravity interactions treated semiclassically, but these become subdominant for a sufficient number of matter fields. Thus one could also count the assumptions from such a more first-principles approach, but that would need the details about the matter fields and interactions. Nevertheless, in such a case the fundamental assumption added in our above list of modifications to gravity beyond GR would not be included, although assumptions about the underlying matter fields would need to be added.
Higgs Inflation model [96]: This model assumes there are no other fields in the universe aside from those in the Standard Model, and that the Higgs field has a non-minimal coupling to gravity. In the initial Jordan frame the Higgs field, h, has a standard type of quartic symmetry breaking potential of the form ∼ λ(h^2 − v^2)^2. To get rid of the non-minimal coupling to gravity, a conformal transformation is done to the Einstein frame. The Higgs field is then treated as the inflaton, which at high field value has the potential in the Einstein frame,
V_{Higgs inflation}(φ) = (λ m_p^4 / 4ξ^2) [1 + exp(−2φ/(√6 m_p))]^{−2} ,
where ξ is a coupling constant between the Higgs field and the scalar curvature. As the authors' paper makes clear, it is not possible to have a rigorous discussion of quantum corrections, due to the nonrenormalizable nature of gravity. The counting of speculations entering the Higgs inflation model is:

- Quantum gravity [F]
- Modification to gravity beyond GR [F]
- Sub-Hubble mass inflaton field [F]
- Symmetry not of type in SM - transform from the Jordan to Einstein frame [T]
- Symmetry included in the model [T]
- Model building beyond the Standard Model [T]

This gives a speculation count for Fundamental/Technical properties of 3/3.
Warm little inflaton model [58,79]: This model has two complex Higgs fields with identical U(1) charges. The fields have nonzero vacuum expectation values, and the phases of the two fields then yield two Nambu-Goldstone bosons. The relative phase of the two fields yields a singlet, which is the inflaton. The Higgs fields are coupled to left-handed fermions with U(1) charge and right-handed counterparts that are gauge singlets. There is an interchange symmetry between the two bosons and two fermions, and they have identical couplings. There is an additional chiral fermion and a singlet bosonic field to couple with the fermions for the particle creation decay width. The interaction Lagrangian for this model is given in Eq. (22). The inflaton potential for this model is simply a monomial,
V_{warm little}(φ) = (λ/4!) φ^4 or (1/2) m_φ^2 φ^2 .
The counting of speculations entering the warm little inflaton model in the strong dissipative regime is:

- Effective field theory methods with cutoff below m_p [T]
- Symmetries included in the model - two Nambu-Goldstone bosons, two U(1), two interchange (6) [T]
- Extra fields beyond the SM not attributed to any symmetries (2) [T]
- Model building beyond the Standard Model [T]

This gives a speculation count for Fundamental/Technical properties of 0/10. This is the only model studied here with no speculation counts in the fundamental category. Note that in the weak dissipative regime the model would have at least one and up to two fundamental counts, for sub-Hubble mass and quantum gravity, the latter because in some cases there can be a super-Planckian field excursion of the inflaton field. The speculation count highlights the importance of the strong dissipative regime of warm inflation. Nevertheless, even in the weak dissipative regime there are fewer speculation counts in the fundamental category than in all the other models examined here.
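Applying the sketch given earlier to the counts quoted above reproduces the tallies for these models; the attribute groupings below are my own illustrative reading of the lists in the text, not definitive classifications.

# Reproducing the tallies quoted in the text with the earlier sketch.
warm_little = [
    ("EFT with cutoff below m_p", 1),
    ("symmetry in model", 6),  # two NG bosons, two U(1), two interchange
    ("extra fields beyond SM", 2),
    ("model building beyond SM", 1),
]
r2_starobinsky = [
    ("quantum gravity", 1),
    ("modified gravity beyond GR", 1),
    ("sub-Hubble mass scalar", 1),
    ("non-SM symmetry", 1),  # Jordan -> Einstein frame transform
    ("symmetry in model", 1),
    ("model building beyond SM", 1),
]
print(speculation_count(warm_little))     # -> (0, 10)
print(speculation_count(r2_starobinsky))  # -> (3, 3)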
For the technical speculation points in each model, one can now look closer at the given model and see how well it can be parametrically constrained. The speculation count is only assessing the physical attributes of the model. One would need to look into the details of the inflation calculation in the given model to see how constrained it is for inflation. Alongside that, one can also explore whether other related cosmological phenomena such as baryogenesis, dark matter, dark energy, and cosmic magnetic fields can be explained by the model or extensions of it. From this one can assess the full parametric constraints on the model. Overall, of the models tested here, only the warm little inflaton model has no fundamental speculation points, and so it is in the best position for comparison to cosmological data. Of course this model should still be further scrutinized, but further model building can also be explored to explain other cosmological phenomena. Building inflation models involving ideas from higher theories like quantum gravity can be interesting, but it is extremely challenging to build an inflation model based on just the standard QFT that we presently understand. The warm little inflaton has achieved this, so it is a model of considerable interest.
The speculation count provides a measure, beyond comparing to observational data, to assess not just inflation models but cosmological models at large. Amongst the models that agree well with the data, the speculation count provides a semi-quantitative measure to disentangle the level of assumption going into each of them. If the fundamental count is low and so is the technical count, then there is a better chance that comparing that model to data can provide some insight into the unknown fundamental attributes. However, if the technical count is also high, then such a model can provide little such insight. If the fundamental count is high, then irrespective of the technical count, comparison to data will provide very little insight into the unknown fundamental attributes. If there is no fundamental count, then the model is in a position to consider how well the data constrains its parameters and whether it can be used to provide other cosmological predictions.
With the proliferation of inflation models, with so many agreeing well enough with the limited cosmological early universe data that it may as well be an infinite number, more is needed to help separate out models in terms of their qualities. Theoretical cosmology is different from most other fields in science in that it requires a greater level of speculation to make any progress. A speculation count is necessary in theoretical cosmology to keep us cognizant of the levels of assumption entering into model building, thus helping to keep the field on an even scientific keel. This is the needed 'doubt' which, I believe, Landau's statement in jest was trying to convey that cosmologists lack.
This count should not be misunderstood as trying to make some simple-minded separation of models into good versus bad. From a different way of thinking, purely in a mathematical physics context, D-brane inflation is one of the most elegant models. However, is it a physically relevant model, or even close to that, based on what we currently know about Nature? In theoretical physics there is a notion that fundamental physical theories should be mathematically elegant, but this notion must be constrained first and foremost by the theory showing adequate connection with the physical world and being mathematically consistent. As detailed earlier in this Section, cosmology experiments intrinsically can provide very limited information about the early universe. For any theories with Fundamental attributes from the Table, that cosmological information on its own is not adequate to confirm the theory. It requires other independent empirical information, alongside of course the theory being consistent. Overstating the empirical relevance of string based models, or those based on other higher theories, also does no justice to the core theoretical work being done in those fields. It just furthers the image of these subject areas as out of touch with physics, since they are unable to distinguish between pertinent empirical models and those mainly of theoretical interest. Counting the number of physical attributes in a model that are speculative helps to assess which models are most relevant for testing against experimental cosmological data. The count provides a semi-quantitative measure of the degree of belief in a model given the present theoretical understanding. The higher a model's speculation count, especially in the fundamental category and to a lesser extent in the technical category, the less useful the limited data is in selecting it out. This speculation count is time dependent. In the past 50 years, our confidence in the Standard Model has been fully vindicated and there is greater cosmological data, but amongst the fundamental attributes in the Table, not much has changed. Thus the timescale of significant change for the speculation count is around or larger than a career lifetime.
The speculation count helps to avoid a lot of arguments and debates on issues leading nowhere, by quantifying the degree of theoretical uncertainty about a given model, and it can be obtained by a simple counting of the assumptions entering the model. This little input can help us view cosmological models a bit more objectively. The swampland development from a couple of years ago is a good example where some objectivity is needed. Within theoretical cosmology that work caused considerable controversy, perhaps even a division in the field. However, it has to be appreciated that our understanding of physics around the Planck scale is still evolving, to the extent that at present there isn't even any experimental information, which typically is essential for any development in theoretical physics, at these extremely high energies. One should expect, not be surprised at, further such developments in the future, since that is an obvious consequence of string theory, and more generally quantum gravity, being unsolved. Likewise the validity of models based on these ideas may also change with time.
In any event, super-Planckian field excursions, which are not allowed by the swampland conditions, still imply, irrespective of the swampland conditions, that the model will be susceptible to higher dimensional operator corrections arising from quantum gravity. These corrections are unknown, and in such models one is forced to make an assumption that they don't affect the model. This is an uncontrolled assumption, as it relates to quantum gravity, and once such an assumption enters model building, one may as well assume quantum gravity can sort everything out, never mind the model. Similar problems arise for models with an inflaton mass less than the Hubble scale. Thus, there is little point in venting one's ire on the swampland conditions, which have only magnified already known problems. String theorists are just doing their job in understanding a very difficult problem, and their views probably will also further evolve over time. It is not a simple-minded question of right or wrong regarding issues at and above the Planck scale. It is about how much clarity there is in the matter, and for the time being there is very little, which thus also applies to any models at this scale. This makes a speculation count useful in forcing one to accept from the onset that models with high counts, no matter how insightful, have a greater chance of eventually being found theoretically incomplete or wrong. At the other end, models with a low speculation count have less theoretical baggage, so they are more relevant to be tested against experiment. The ideal case is a speculation count of zero, as below the 10 TeV regime, where the Standard Model explains everything. We know that for cosmology we have to go to a higher speculation count, but if inflation really is a viable theoretical idea, then in order to feel certain it is connected to the physical world, we need to find a model with a low speculation count. We cannot call some possible future tensor mode discovery in the CMB 'inflation' if we don't even have a working, consistent, physically relatable model of what this inflation is.
X. DISCUSSION
The many researchers who have studied and continue to study warm inflation work with confidence that data and theory indicate it is headed in the right direction. Today warm inflation is a strong contender in developing into a theory of the early universe. The strength of the underlying theoretical foundations of warm inflation provide at least some confidence that a tensor mode will eventually be detected. In this review I discussed primarily the early work done by me and the work I did with collaborators in developing warm inflation. Many researchers have made valuable contributions in developing warm inflation, including building interesting models, first principles model building [76,, as well as models that follow the basic ideas of warm inflation of particle production and associated dissipative effects during inflation [74,75,[152][153][154][155]. Work on warm inflation has been done in understanding the underlying quantum field theory dynamics [3, 13-15, 19, 156-159], examining density perturbations [53,54,[160][161][162][163][164][165][166][167], studying non-gaussianity [168][169][170][171][172][173][174][175], and testing models to data [83,[176][177][178][179][180][181][182][183][184][185][186][187][188][189][190][191]. There is work showing how warm inflation can realize cosmic magnetic fields [192][193][194], baryogenesis [195][196][197], primordial blackholes [198][199][200][201][202], and address the gravitino problem [203][204][205]. Studies have examined various aspects of the warm inflation scenario and dynamics [65,[206][207][208][209][210][211][212][213][214][215][216][217][218][219][220][221][222][223][224], including for other cosmological problems [225,226], with application also of the warm inflation ideas of dissipation during vacuum energy driven expansion to dark energy [227][228][229][230][231] with a possible resolution to the Hubble tension [232]. There are various reviews of warm inflation [17,56,[233][234][235].
Whether warm inflation, or inflation more generally, is the correct idea about the early universe will be decided by empirical data. Irrespective, warm inflation has introduced and substantiated two concepts about early universe cosmology that are here to stay. One, that particle production from quantum field interactions is possible at this early stage. Aside from filling the universe with particles, our work showed the backreaction effects of this process on the source are equally important in having dynamical consequences for this early phase of the universe. Two, that the initial primordial fluctuations could be of classical rather than quantum origin, thus changing a way of thinking that had gone unquestioned for decades, even before the inflation idea. The warm inflation story is very much a part of the early universe cosmology story, and those who have ignored it, or the many other facets of this problem, have lost the plot.
U. Weiss, 1993, Series in Modern Condensed Matter Physics Vol. 2: Quantum Dissipative Systems (World Scientific).
. A Berera, L Z Fang, 10.1103/PhysRevLett.74.1912arXiv:astro-ph/9501024Phys. Rev. Lett. 74astro-phA. Berera and L. Z. Fang, Phys. Rev. Lett. 74 (1995), 1912-1915 doi:10.1103/PhysRevLett.74.1912 [arXiv:astro-ph/9501024 [astro-ph]].
. A Berera, M Gleiser, R O Ramos, 10.1103/PhysRevD.58.123508arXiv:hep-ph/9803394Phys. Rev. D. 58123508hep-phA. Berera, M. Gleiser and R. O. Ramos, Phys. Rev. D 58 (1998), 123508 doi:10.1103/PhysRevD.58.123508 [arXiv:hep-ph/9803394 [hep-ph]].
. A Berera, M Gleiser, R O Ramos, 10.1103/PhysRevLett.83.264arXiv:hep-ph/9809583Phys. Rev. Lett. 83hep-phA. Berera, M. Gleiser and R. O. Ramos, Phys. Rev. Lett. 83 (1999), 264-267 doi:10.1103/PhysRevLett.83.264 [arXiv:hep-ph/9809583 [hep-ph]].
. A H Guth, 10.1103/PhysRevD.23.347Phys. Rev. D. 23A. H. Guth, Phys. Rev. D 23 (1981), 347-356 doi:10.1103/PhysRevD.23.347 .
. A D Linde, 10.1016/0370-2693(82)90293-3Phys. Lett. B. 116A. D. Linde, Phys. Lett. B 116 (1982), 335-339 doi:10.1016/0370-2693(82)90293-3 .
. A Albrecht, P J Steinhardt, 10.1103/PhysRevLett.48.1220Phys. Rev. Lett. 48A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48 (1982), 1220-1223 doi:10.1103/PhysRevLett.48.1220 .
. A D Linde, 10.1016/0370-2693(83)90837-7Phys. Lett. B. 129A. D. Linde, Phys. Lett. B 129 (1983), 177-181 doi:10.1016/0370-2693(83)90837-7 .
. R H Brandenberger, 10.1103/RevModPhys.57.1Rev. Mod. Phys. 571R. H. Brandenberger, Rev. Mod. Phys. 57 (1985), 1 doi:10.1103/RevModPhys.57.1 .
. E J Copeland, A R Liddle, D H Lyth, E D Stewart, D Wands, 10.1103/PhysRevD.49.6410arXiv:astro-ph/9401011Phys. Rev. D. 49astro-phaE. J. Copeland, A. R. Liddle, D. H. Lyth, E. D. Stewart and D. Wands, Phys. Rev. D 49 (1994), 6410-6433 doi:10.1103/PhysRevD.49.6410 arXiv:astro-ph/9401011 [astro-pha]].
. N Arkani-Hamed, H C Cheng, P Creminelli, L Randall, arXiv:hep-th/0302034JCAP. 03073N. Arkani-Hamed, H. C. Cheng, P. Creminelli, and L. Randall, 2003, JCAP 0307, 003 [arXiv:hep-th/0302034].
. A O Caldeira, A J Leggett, Ann. Phys. 149374Caldeira, A. O. and Leggett, A. J., 1983, Ann. Phys. 149, 374.
. A Berera, R O Ramos, 10.1103/PhysRevD.63.103509arXiv:hep-ph/0101049Phys. Rev. D. 63103509hep-phA. Berera and R. O. Ramos, Phys. Rev. D 63 (2001), 103509 doi:10.1103/PhysRevD.63.103509 [arXiv:hep-ph/0101049 [hep-ph]].
. A Berera, R O Ramos, 10.1103/PhysRevD.71.023513arXiv:hep-ph/0406339Phys. Rev. D. 7123513hep-phA. Berera and R. O. Ramos, Phys. Rev. D 71 (2005), 023513 doi:10.1103/PhysRevD.71.023513 [arXiv:hep-ph/0406339 [hep-ph]].
. I D Lawrie, 10.1103/PhysRevD.66.041702arXiv:hep-ph/0204184Phys. Rev. D. 6641702hep-phI. D. Lawrie, Phys. Rev. D 66 (2002), 041702 doi:10.1103/PhysRevD.66.041702 [arXiv:hep-ph/0204184 [hep-ph]].
. A Berera, I G Moss, R O Ramos, 10.1103/PhysRevD.76.083520arXiv:0706.2793Phys. Rev. D. 7683520hep-phA. Berera, I. G. Moss and R. O. Ramos, Phys. Rev. D 76 (2007), 083520 doi:10.1103/PhysRevD.76.083520 [arXiv:0706.2793 [hep-ph]].
. A Berera, I G Moss, R O Ramos, 10.1088/0034-4885/72/2/026901arXiv:0808.1855Rept. Prog. Phys. 7226901hep-phA. Berera, I. G. Moss and R. O. Ramos, Rept. Prog. Phys. 72 (2009), 026901 doi:10.1088/0034-4885/72/2/026901 [arXiv:0808.1855 [hep-ph]].
. A Berera, 10.1103/PhysRevD.54.2519arXiv:hep-th/9601134Phys. Rev. D. 54hep-thA. Berera, Phys. Rev. D 54 (1996), 2519-2534 doi:10.1103/PhysRevD.54.2519 [arXiv:hep-th/9601134 [hep-th]].
. M Bastero-Gil, A Berera, R O Ramos, 10.1088/1475-7516/2011/09/033arXiv:1008.1929JCAP. 0933hep-phM. Bastero-Gil, A. Berera and R. O. Ramos, JCAP 09 (2011), 033 doi:10.1088/1475-7516/2011/09/033 [arXiv:1008.1929 [hep-ph]].
. A Berera, R O Ramos, 10.1016/j.physletb.2003.06.028arXiv:hep-ph/0210301Phys. Lett. B. 567hep-phA. Berera and R. O. Ramos, Phys. Lett. B 567 (2003), 294-304 doi:10.1016/j.physletb.2003.06.028 [arXiv:hep-ph/0210301 [hep-ph]].
. A Berera, R O Ramos, 10.1016/j.physletb.2004.12.028arXiv:hep-ph/0308211Phys. Lett. B. 607hep-phA. Berera and R. O. Ramos, Phys. Lett. B 607 (2005), 1-7 doi:10.1016/j.physletb.2004.12.028 [arXiv:hep-ph/0308211 [hep-ph]].
. L Z Fang, 10.1016/0370-2693(80)90421-9Phys. Lett. B. 95L. Z. Fang, Phys. Lett. B 95 (1980), 154-156 doi:10.1016/0370-2693(80)90421-9 .
. E Gliner, Sov. Phys. JETP. 22E. Gliner, Sov. Phys. JETP 22 (1966) 378-382.
. E Gliner, Sov. Phys. Dokl. 15E. Gliner, Sov. Phys. Dokl. 15 (1970) 559-561.
. D A Kirzhnits, JETP Lett. 15D. A. Kirzhnits, JETP Lett. 15 (1972), 529-531
. D A Kirzhnits, A D Linde, 10.1016/0370-2693(72)90109-8Phys. Lett. B. 42D. A. Kirzhnits and A. D. Linde, Phys. Lett. B 42 (1972), 471-474 doi:10.1016/0370-2693(72)90109-8
. D A Kirzhnits, A D Linde, Zh. Eksp. Teor. Fiz. 67D. A. Kirzhnits and A. D. Linde, Zh. Eksp. Teor. Fiz. 67 (1974), 1263-1275
. D A Kirzhnits, A D Linde, 10.1016/0003-4916(76)90279-7Annals Phys. 101D. A. Kirzhnits and A. D. Linde, Annals Phys. 101 (1976), 195-238 doi:10.1016/0003-4916(76)90279-7
. A D Linde, 10.1088/0034-4885/42/3/001Rept. Prog. Phys. 42389A. D. Linde, Rept. Prog. Phys. 42 (1979), 389 doi:10.1088/0034-4885/42/3/001
. Y B Zel'dovich, A Krasinski, Y B Zeldovich, 10.1007/s10714-008-0624-6Sov. Phys. Usp. 11Y. B. Zel'dovich, A. Krasinski and Y. B. Zeldovich, Sov. Phys. Usp. 11 (1968), 381-393 doi:10.1007/s10714-008-0624-6
. R Brout, F Englert, E Gunzig, 10.1016/0003-4916(78)90176-8Annals Phys. 11578R. Brout, F. Englert and E. Gunzig, Annals Phys. 115 (1978), 78 doi:10.1016/0003-4916(78)90176-8
. A A Starobinsky, 10.1016/0370-2693(80)90670-XPhys. Lett. B. 91A. A. Starobinsky, Phys. Lett. B 91 (1980), 99-102 doi:10.1016/0370-2693(80)90670-X
. D Kazanas, 10.1086/183361Astrophys. J. Lett. 241D. Kazanas, Astrophys. J. Lett. 241 (1980), L59-L63 doi:10.1086/183361
. E W Kolb, S Wolfram, 10.1086/158126Astrophys. J. 239428E. W. Kolb and S. Wolfram, Astrophys. J. 239 (1980), 428 doi:10.1086/158126
. M A Sher, 10.1103/PhysRevD.22.2989Phys. Rev. D. 222989M. A. Sher, Phys. Rev. D 22 (1980), 2989 doi:10.1103/PhysRevD.22.2989
. K Sato, Mon. Not. Roy. Astron. Soc. 195K. Sato, Mon. Not. Roy. Astron. Soc. 195 (1981), 467-479 NORDITA-80-29.
. A Berera, 10.1103/PhysRevLett.75.3218arXiv:astro-ph/9509049Phys. Rev. Lett. 75astro-phA. Berera, Phys. Rev. Lett. 75 (1995), 3218-3221 doi:10.1103/PhysRevLett.75.3218 [arXiv:astro-ph/9509049 [astro-ph]].
. A Berera, 10.1103/PhysRevD.55.3346arXiv:hep-ph/9612239Phys. Rev. D. 55hep-phA. Berera, Phys. Rev. D 55 (1997), 3346-3357 doi:10.1103/PhysRevD.55.3346 [arXiv:hep-ph/9612239 [hep-ph]].
. M Gleiser, R O Ramos, 10.1103/PhysRevD.50.2441arXiv:hep-ph/9311278Phys. Rev. D. 50hep-phM. Gleiser and R. O. Ramos, Phys. Rev. D 50 (1994), 2441-2455 doi:10.1103/PhysRevD.50.2441 [arXiv:hep-ph/9311278 [hep-ph]].
. A Hosoya, M A Sakagami, 10.1103/PhysRevD.29.2228Phys. Rev. D. 292228A. Hosoya and M. a. Sakagami, Phys. Rev. D 29 (1984), 2228 doi:10.1103/PhysRevD.29.2228.
. M Morikawa, M Sasaki, 10.1143/PTP.72.782Prog. Theor. Phys. 72782M. Morikawa and M. Sasaki, Prog. Theor. Phys. 72 (1984), 782 doi:10.1143/PTP.72.782.
. M Morikawa, 10.1143/PTP.77.1163Prog. Theor. Phys. 77M. Morikawa, Prog. Theor. Phys. 77 (1987), 1163-1177 doi:10.1143/PTP.77.1163.
. A Ringwald, 10.1016/S0003-4916Annals Phys. 177A. Ringwald, Annals Phys. 177 (1987), 129 doi:10.1016/S0003-4916(87)80027-1.
. E Calzetta, B L Hu, 10.1103/PhysRevD.37.2878Phys. Rev. D. 372878E. Calzetta and B. L. Hu, Phys. Rev. D 37 (1988), 2878 doi:10.1103/PhysRevD.37.2878 .
. D Boyanovsky, H J Vega, R Holman, D S Lee, A Singh, 10.1103/PhysRevD.51.4419arXiv:hep-ph/9408214Phys. Rev. D. 51hep-phD. Boyanovsky, H. J. de Vega, R. Holman, D. S. Lee and A. Singh, Phys. Rev. D 51 (1995), 4419-4444 doi:10.1103/PhysRevD.51.4419 [arXiv:hep-ph/9408214 [hep-ph]].
. J Yokoyama, A D Linde, arXiv:hep-ph/9809409Phys. Rev. D. 6083509J. Yokoyama and A. D. Linde, Phys. Rev. D 60, 083509 (1999). [arXiv:hep-ph/9809409].
. A Berera, T W Kephart, 10.1103/PhysRevLett.83.1084arXiv:hep-ph/9904410Phys. Rev. Lett. 83hep-phA. Berera and T. W. Kephart, Phys. Rev. Lett. 83 (1999), 1084-1087 doi:10.1103/PhysRevLett.83.1084 [arXiv:hep-ph/9904410 [hep-ph]].
. A Berera, 10.1016/S0550-3213(00)00411-9arXiv:hep-ph/9904409Nucl. Phys. B. 585hep-phA. Berera, Nucl. Phys. B 585 (2000), 666-714 doi:10.1016/S0550-3213(00)00411-9 [arXiv:hep-ph/9904409 [hep-ph]].
. A Berera, 10.22323/1.010.0069arXiv:hep-ph/0401139PoS. 200369hep-phA. Berera, PoS AHEP2003 (2003), 069 doi:10.22323/1.010.0069 [arXiv:hep-ph/0401139 [hep-ph]].
. G Obied, H Ooguri, L Spodyneiko, C Vafa, arXiv:1806.08362hep-thG. Obied, H. Ooguri, L. Spodyneiko and C. Vafa, [arXiv:1806.08362 [hep-th]].
. H Ooguri, E Palti, G Shiu, C Vafa, 10.1016/j.physletb.2018.11.018arXiv:1810.05506Phys. Lett. B. 788hep-thH. Ooguri, E. Palti, G. Shiu and C. Vafa, Phys. Lett. B 788 (2019), 180-184 doi:10.1016/j.physletb.2018.11.018 [arXiv:1810.05506 [hep-th]].
. M Bastero-Gil, A Berera, 10.1103/PhysRevD.76.043515arXiv:hep-ph/0610343Phys. Rev. D. 7643515hep-phM. Bastero-Gil and A. Berera, Phys. Rev. D 76 (2007), 043515 doi:10.1103/PhysRevD.76.043515 [arXiv:hep-ph/0610343 [hep-ph]].
. L M H Hall, I G Moss, A Berera, 10.1103/PhysRevD.69.083525arXiv:astro-ph/0305015Phys. Rev. D. 6983525astro-phL. M. H. Hall, I. G. Moss and A. Berera, Phys. Rev. D 69 (2004), 083525 doi:10.1103/PhysRevD.69.083525 [arXiv:astro-ph/0305015 [astro-ph]].
. C Graham, I G Moss, 10.1088/1475-7516/2009/07/013arXiv:0905.3500JCAP. 0713astro-ph.COC. Graham and I. G. Moss, JCAP 07 (2009), 013 doi:10.1088/1475-7516/2009/07/013 [arXiv:0905.3500 [astro-ph.CO]].
. R O Ramos, L A Silva, 10.1088/1475-7516/2013/03/032arXiv:1302.3544JCAP. 0332astroph.COR. O. Ramos and L. A. da Silva, JCAP 03 (2013), 032 doi:10.1088/1475-7516/2013/03/032 [arXiv:1302.3544 [astro- ph.CO]].
. M Bastero-Gil, A Berera, 10.1142/S0217751X09044206arXiv:0902.0521Int. J. Mod. Phys. A. 24hep-phM. Bastero-Gil and A. Berera, Int. J. Mod. Phys. A 24 (2009), 2207-2240 doi:10.1142/S0217751X09044206 [arXiv:0902.0521 [hep-ph]].
. S Bartrum, M Bastero-Gil, A Berera, R Cerezo, R O Ramos, J G Rosa, 10.1016/j.physletb.2014.03.029arXiv:1307.5868Phys. Lett. B. 732hep-phS. Bartrum, M. Bastero-Gil, A. Berera, R. Cerezo, R. O. Ramos and J. G. Rosa, Phys. Lett. B 732 (2014), 116-121 doi:10.1016/j.physletb.2014.03.029 [arXiv:1307.5868 [hep-ph]].
. M Bastero-Gil, A Berera, R O Ramos, J G Rosa, 10.1103/PhysRevLett.117.151301arXiv:1604.08838Phys. Rev. Lett. 11715151301hep-phM. Bastero-Gil, A. Berera, R. O. Ramos and J. G. Rosa, Phys. Rev. Lett. 117 (2016) no.15, 151301 doi:10.1103/PhysRevLett.117.151301 [arXiv:1604.08838 [hep-ph]].
. I G Moss, 10.1016/0370-2693(85)90570-2Phys. Lett. B. 154I. G. Moss, Phys. Lett. B 154 (1985), 120-124 doi:10.1016/0370-2693(85)90570-2
. J Yokoyama, K I Maeda, 10.1016/0370-2693(88)90880-5Phys. Lett. B. 207J. Yokoyama and K. i. Maeda, Phys. Lett. B 207, 31-35 (1988) doi:10.1016/0370-2693(88)90880-5
. S Das, 10.1103/PhysRevD.99.063514arXiv:1810.05038Phys. Rev. D. 99663514hep-thS. Das, Phys. Rev. D 99 (2019) no.6, 063514 doi:10.1103/PhysRevD.99.063514 [arXiv:1810.05038 [hep-th]].
. M Motaharfar, V Kamali, R O Ramos, 10.1103/PhysRevD.99.063513arXiv:1810.02816Phys. Rev. D. 99663513astro-ph.COM. Motaharfar, V. Kamali and R. O. Ramos, Phys. Rev. D 99 (2019) no.6, 063513 doi:10.1103/PhysRevD.99.063513 [arXiv:1810.02816 [astro-ph.CO]].
. A Berera, J R Calderón, 10.1103/PhysRevD.100.123530arXiv:1910.10516Phys. Rev. D. 10012hep-phA. Berera and J. R. Calderón, Phys. Rev. D 100 (2019) no.12, 123530 doi:10.1103/PhysRevD.100.123530 [arXiv:1910.10516 [hep-ph]].
. A Berera, S Brahma, J R Calderón, 10.1007/JHEP08(2020)071arXiv:2003.07184JHEP. 0871hepthA. Berera, S. Brahma and J. R. Calderón, JHEP 08 (2020), 071 doi:10.1007/JHEP08(2020)071 [arXiv:2003.07184 [hep- th]].
. A Bedroya, R Brandenberger, M Loverde, C Vafa, 10.1103/PhysRevD.101.103502arXiv:1909.11106Phys. Rev. D. 10110103502hep-thA. Bedroya, R. Brandenberger, M. Loverde and C. Vafa, Phys. Rev. D 101 (2020) no.10, 103502 doi:10.1103/PhysRevD.101.103502 [arXiv:1909.11106 [hep-th]].
. P A R Ade, Planck10.1051/0004-6361/201525830arXiv:1502.01589Astron. Astrophys. 59413astro-ph.COP. A. R. Ade et al. [Planck], Astron. Astrophys. 594 (2016), A13 doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]].
. P A R Ade, BICEP and Keck10.1103/PhysRevLett.127.151301arXiv:2110.00483Phys. Rev. Lett. 12715astro-ph.COP. A. R. Ade et al. [BICEP and Keck], Phys. Rev. Lett. 127 (2021) no.15, 151301 doi:10.1103/PhysRevLett.127.151301 [arXiv:2110.00483 [astro-ph.CO]].
. J Maldacena, 10.1002/prop.201500097arXiv:1508.01082Fortsch. Phys. 64hep-thJ. Maldacena, Fortsch. Phys. 64 (2016), 10-23 doi:10.1002/prop.201500097 [arXiv:1508.01082 [hep-th]].
. D Green, R A Porto, 10.1103/PhysRevLett.124.251302arXiv:2001.09149Phys. Rev. Lett. 12425251302hep-thD. Green and R. A. Porto, Phys. Rev. Lett. 124 (2020) no.25, 251302 doi:10.1103/PhysRevLett.124.251302 [arXiv:2001.09149 [hep-th]].
. S Brahma, A Berera, J Calderón-Figueroa, arXiv:2107.06910hep-thS. Brahma, A. Berera and J. Calderón-Figueroa, [arXiv:2107.06910 [hep-th]].
. R Dale, R Lapiedra, J A Morales-Lladosa, 10.1103/PhysRevD.107.023506Phys. Rev. D. 1072R. Dale, R. Lapiedra and J. A. Morales-Lladosa, Phys. Rev. D 107 (2023) no.2, 023506 doi:10.1103/PhysRevD.107.023506
. J A Wheeler, 10.1016/0003-4916(57)90050-7Annals Phys. 2J. A. Wheeler, Annals Phys. 2 (1957), 604-614 doi:10.1016/0003-4916(57)90050-7.
. E R Harrison, 10.1103/PhysRevD.1.2726Phys. Rev. D. 1E. R. Harrison, Phys. Rev. D 1 (1970), 2726-2730 doi:10.1103/PhysRevD.1.2726
. L Kofman, A D Linde, X Liu, A Maloney, L Mcallister, E Silverstein, 10.1088/1126-6708/2004/05/030arXiv:hep-th/0403001JHEP. 0530hep-thL. Kofman, A. D. Linde, X. Liu, A. Maloney, L. McAllister and E. Silverstein, JHEP 05 (2004), 030 doi:10.1088/1126- 6708/2004/05/030 [arXiv:hep-th/0403001 [hep-th]].
. D Green, B Horn, L Senatore, E Silverstein, 10.1103/PhysRevD.80.063533arXiv:0902.1006Phys. Rev. D. 8063533hep-thD. Green, B. Horn, L. Senatore and E. Silverstein, Phys. Rev. D 80 (2009), 063533 doi:10.1103/PhysRevD.80.063533 [arXiv:0902.1006 [hep-th]].
. D Lopez Nacir, R A Porto, L Senatore, M Zaldarriaga, 10.1007/JHEP01(2012)075arXiv:1109.4192JHEP. 0175hep-thD. Lopez Nacir, R. A. Porto, L. Senatore and M. Zaldarriaga, JHEP 01 (2012), 075 doi:10.1007/JHEP01(2012)075 [arXiv:1109.4192 [hep-th]].
. A Berera, T W Kephart, 10.1016/S0370-2693(99)00515-8arXiv:hep-ph/9811295Phys. Lett. B. 456hep-phA. Berera and T. W. Kephart, Phys. Lett. B 456 (1999), 135-140 doi:10.1016/S0370-2693(99)00515-8 [arXiv:hep-ph/9811295 [hep-ph]].
. M Bastero-Gil, A Berera, R Hernández-Jiménez, J G Rosa, 10.1103/PhysRevD.99.103520arXiv:1812.07296Phys. Rev. D. 9910hep-phM. Bastero-Gil, A. Berera, R. Hernández-Jiménez and J. G. Rosa, Phys. Rev. D 99 (2019) no.10, 103520 doi:10.1103/PhysRevD.99.103520 [arXiv:1812.07296 [hep-ph]].
. M Bastero-Gil, A Berera, R O Ramos, J G Rosa, 10.1016/j.physletb.2020.136055arXiv:1907.13410Phys. Lett. B. 813136055hep-phM. Bastero-Gil, A. Berera, R. O. Ramos and J. G. Rosa, Phys. Lett. B 813 (2021), 136055 doi:10.1016/j.physletb.2020.136055 [arXiv:1907.13410 [hep-ph]].
. K V Berghaus, P W Graham, D E Kaplan, 10.1088/1475-7516/2020/03/034arXiv:1910.07525JCAP. 0334hep-phK. V. Berghaus, P. W. Graham and D. E. Kaplan, JCAP 03 (2020), 034 doi:10.1088/1475-7516/2020/03/034 [arXiv:1910.07525 [hep-ph]].
. Y B Zeldovich, 10.1093/mnras/160.1.1PMon. Not. Roy. Astron. Soc. 160Y. B. Zeldovich, Mon. Not. Roy. Astron. Soc. 160 (1972), 1P-3P doi:10.1093/mnras/160.1.1P
. M Bastero-Gil, A Berera, R Hernández-Jiménez, J G Rosa, 10.1103/PhysRevD.98.083502arXiv:1805.07186Phys. Rev. D. 98883502astro-ph.COM. Bastero-Gil, A. Berera, R. Hernández-Jiménez and J. G. Rosa, Phys. Rev. D 98 (2018) no.8, 083502 doi:10.1103/PhysRevD.98.083502 [arXiv:1805.07186 [astro-ph.CO]].
. M Benetti, R O Ramos, 10.1103/PhysRevD.95.023517arXiv:1610.08758Phys. Rev. D. 952astro-ph.COM. Benetti and R. O. Ramos, Phys. Rev. D 95 (2017) no.2, 023517 doi:10.1103/PhysRevD.95.023517 [arXiv:1610.08758 [astro-ph.CO]].
. Y Akrami, Planck10.1051/0004-6361/201833887arXiv:1807.06211Astron. Astrophys. 641astroph.COY. Akrami et al. [Planck], Astron. Astrophys. 641 (2020), A10 doi:10.1051/0004-6361/201833887 [arXiv:1807.06211 [astro- ph.CO]].
. L Parker, 10.1103/PhysRevLett.21.562Phys. Rev. Lett. 21L. Parker, Phys. Rev. Lett. 21 (1968), 562-564 doi:10.1103/PhysRevLett.21.562 .
. Y B Zeldovich, A A Starobinsky, Zh. Eksp. Teor. Fiz. 61Y. B. Zeldovich and A. A. Starobinsky, Zh. Eksp. Teor. Fiz. 61 (1971), 2161-2175.
. B L Hu, S A Fulling, L Parker, 10.1103/PhysRevD.8.2377Phys. Rev. D. 8B. L. Hu, S. A. Fulling and L. Parker, Phys. Rev. D 8 (1973), 2377-2385 doi:10.1103/PhysRevD.8.2377 .
. L H Ford, 10.1103/PhysRevD.35.2955Phys. Rev. D. 352955L. H. Ford, Phys. Rev. D 35 (1987), 2955 doi:10.1103/PhysRevD.35.2955.
. J A Frieman, 10.1103/PhysRevD.39.389Phys. Rev. D. 39389J. A. Frieman, Phys. Rev. D 39 (1989), 389 doi:10.1103/PhysRevD.39.389 .
. J Cespedes, E Verdaguer, 10.1103/PhysRevD.41.1022Phys. Rev. D. 411022J. Cespedes and E. Verdaguer, Phys. Rev. D 41 (1990), 1022 doi:10.1103/PhysRevD.41.1022 .
. D H Lyth, D Roberts, 10.1103/PhysRevD.57.7120arXiv:hep-ph/9609441Phys. Rev. D. 57hep-phD. H. Lyth and D. Roberts, Phys. Rev. D 57 (1998), 7120-7129 doi:10.1103/PhysRevD.57.7120 [arXiv:hep-ph/9609441 [hep-ph]].
. P A R Ade, Planck10.1051/0004-6361/201525898arXiv:1502.02114Astron. Astrophys. 594astro-ph.COP. A. R. Ade et al. [Planck], Astron. Astrophys. 594 (2016), A20 doi:10.1051/0004-6361/201525898 [arXiv:1502.02114 [astro-ph.CO]].
. G R Dvali, Q Shafi, S Solganik, arXiv:hep-th/0105203hep-thG. R. Dvali, Q. Shafi and S. Solganik, [arXiv:hep-th/0105203 [hep-th]].
. R Kallosh, A Linde, D Roest, 10.1007/JHEP11(2013)198arXiv:1311.0472JHEP. 11hep-thR. Kallosh, A. Linde and D. Roest, JHEP 11 (2013), 198 doi:10.1007/JHEP11(2013)198 [arXiv:1311.0472 [hep-th]].
. A Vilenkin, 10.1103/PhysRevD.32.2511Phys. Rev. D. 322511A. Vilenkin, Phys. Rev. D 32 (1985), 2511 doi:10.1103/PhysRevD.32.2511
. F L Bezrukov, M Shaposhnikov, 10.1016/j.physletb.2007.11.072arXiv:0710.3755Phys. Lett. B. 659hep-thF. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B 659 (2008), 703-706 doi:10.1016/j.physletb.2007.11.072 [arXiv:0710.3755 [hep-th]].
. J M F Maia, J A S Lima, 10.1103/PhysRevD.60.101301arXiv:astro-ph/9910568Phys. Rev. D. 60101301astro-phJ. M. F. Maia and J. A. S. Lima, Phys. Rev. D 60 (1999), 101301 doi:10.1103/PhysRevD.60.101301 [arXiv:astro-ph/9910568 [astro-ph]].
. A N Taylor, A Berera, 10.1103/PhysRevD.62.083517arXiv:astro-ph/0006077Phys. Rev. D. 6283517astro-phA. N. Taylor and A. Berera, Phys. Rev. D 62 (2000), 083517 doi:10.1103/PhysRevD.62.083517 [arXiv:astro-ph/0006077 [astro-ph]].
. L P Chimento, A S Jakubi, N A Zuccala, D Pavon, 10.1103/PhysRevD.65.083510arXiv:astro-ph/0201002Phys. Rev. D. 6583510astro-phL. P. Chimento, A. S. Jakubi, N. A. Zuccala and D. Pavon, Phys. Rev. D 65 (2002), 083510 doi:10.1103/PhysRevD.65.083510 [arXiv:astro-ph/0201002 [astro-ph]].
. R Herrera, S Campo, C Campuzano, 10.1088/1475-7516/2006/10/009arXiv:astro-ph/0610339JCAP. 109astro-phR. Herrera, S. del Campo and C. Campuzano, JCAP 10 (2006), 009 doi:10.1088/1475-7516/2006/10/009 [arXiv:astro-ph/0610339 [astro-ph]].
. J P Mimoso, A Nunes, D Pavon, 10.1103/PhysRevD.73.023502arXiv:gr-qc/0512057Phys. Rev. D. 7323502gr-qcJ. P. Mimoso, A. Nunes and D. Pavon, Phys. Rev. D 73 (2006), 023502 doi:10.1103/PhysRevD.73.023502 [arXiv:gr-qc/0512057 [gr-qc]].
. S Campo, R Herrera, 10.1016/j.physletb.2008.05.063arXiv:0806.0575Phys. Lett. B. 665astro-phS. del Campo and R. Herrera, Phys. Lett. B 665 (2008), 100-105 doi:10.1016/j.physletb.2008.05.063 [arXiv:0806.0575 [astro-ph]].
. M A Cid, S Campo, R Herrera, 10.1088/1475-7516/2007/10/005arXiv:0710.3148JCAP. 105astro-phM. A. Cid, S. del Campo and R. Herrera, JCAP 10 (2007), 005 doi:10.1088/1475-7516/2007/10/005 [arXiv:0710.3148 [astro-ph]].
. S Campo, R Herrera, 10.1016/j.physletb.2007.08.007arXiv:0708.1460Phys. Lett. B. 653gr-qcS. del Campo and R. Herrera, Phys. Lett. B 653 (2007), 122-128 doi:10.1016/j.physletb.2007.08.007 [arXiv:0708.1460 [gr-qc]].
. R Herrera, 10.1103/PhysRevD.81.123511arXiv:1006.1299Phys. Rev. D. 81123511astro-ph.COR. Herrera, Phys. Rev. D 81 (2010), 123511 doi:10.1103/PhysRevD.81.123511 [arXiv:1006.1299 [astro-ph.CO]].
. X M Zhang, J Y Zhu, 10.1103/PhysRevD.87.043522arXiv:1302.0168Phys. Rev. D. 87443522gr-qcX. M. Zhang and J. Y. Zhu, Phys. Rev. D 87 (2013) no.4, 043522 doi:10.1103/PhysRevD.87.043522 [arXiv:1302.0168 [gr-qc]].
. M R Setare, V Kamali, 10.1088/1475-7516/2012/08/034arXiv:1210.0742JCAP. 0834hep-thM. R. Setare and V. Kamali, JCAP 08 (2012), 034 doi:10.1088/1475-7516/2012/08/034 [arXiv:1210.0742 [hep-th]].
. M Sharif, R Saleem, 10.1140/epjc/s10052-014-2738-1arXiv:1311.7680Eur. Phys. J. C. 742738gr-qcM. Sharif and R. Saleem, Eur. Phys. J. C 74 (2014), 2738 doi:10.1140/epjc/s10052-014-2738-1 [arXiv:1311.7680 [gr-qc]].
. M R Setare, A Sepehri, V Kamali, 10.1016/j.physletb.2014.05.081arXiv:1405.7949Phys. Lett. B. 735gr-qcM. R. Setare, A. Sepehri and V. Kamali, Phys. Lett. B 735 (2014), 84-89 doi:10.1016/j.physletb.2014.05.081 [arXiv:1405.7949 [gr-qc]].
. M Sharif, A Ikram, 10.1134/S1063776116070232arXiv:1507.00905J. Exp. Theor. Phys. 1231gr-qcM. Sharif and A. Ikram, J. Exp. Theor. Phys. 123 (2016) no.1, 40-50 doi:10.1134/S1063776116070232 [arXiv:1507.00905 [gr-qc]].
. P Goodarzi, H Mohseni Sadjadi, 10.1140/epjc/s10052-017-5028-xarXiv:1609.06185Eur. Phys. J. C. 777463gr-qcP. Goodarzi and H. Mohseni Sadjadi, Eur. Phys. J. C 77 (2017) no.7, 463 doi:10.1140/epjc/s10052-017-5028-x [arXiv:1609.06185 [gr-qc]].
. M Motaharfar, H R Sepangi, 10.1140/epjc/s10052-016-4474-1arXiv:1604.00453Eur. Phys. J. C. 7611646gr-qcM. Motaharfar and H. R. Sepangi, Eur. Phys. J. C 76 (2016) no.11, 646 doi:10.1140/epjc/s10052-016-4474-1 [arXiv:1604.00453 [gr-qc]].
. A M Levy, G J Turiaci, 10.1103/PhysRevD.94.083514arXiv:1603.06608Phys. Rev. D. 94883514gr-qcA. M. Levy and G. J. Turiaci, Phys. Rev. D 94 (2016) no.8, 083514 doi:10.1103/PhysRevD.94.083514 [arXiv:1603.06608 [gr-qc]].
. Z P Peng, J N Yu, X M Zhang, J Y Zhu, 10.1103/PhysRevD.94.103531arXiv:1611.02789Phys. Rev. D. 9410gr-qcZ. P. Peng, J. N. Yu, X. M. Zhang and J. Y. Zhu, Phys. Rev. D 94 (2016) no.10, 103531 doi:10.1103/PhysRevD.94.103531 [arXiv:1611.02789 [gr-qc]].
. R Herrera, 10.1088/1475-7516/2017/05/029arXiv:1701.07934JCAP. 0529gr-qcR. Herrera, JCAP 05 (2017), 029 doi:10.1088/1475-7516/2017/05/029 [arXiv:1701.07934 [gr-qc]].
. A Mohammadi, T Golanbari, H Sheikhahmadi, K Sayar, L Akhtari, M A Rasheed, K Saaidi, 10.1088/1674-1137/44/9/095101arXiv:2001.10042Chin. Phys. C. 44995101gr-qcA. Mohammadi, T. Golanbari, H. Sheikhahmadi, K. Sayar, L. Akhtari, M. A. Rasheed and K. Saaidi, Chin. Phys. C 44 (2020) no.9, 095101 doi:10.1088/1674-1137/44/9/095101 [arXiv:2001.10042 [gr-qc]].
. X M Zhang, A Fu, K Li, Q Liu, P C Chu, H Y Ma, J Y Zhu, 10.1103/PhysRevD.103.023511arXiv:2011.14623Phys. Rev. D. 1032gr-qcX. M. Zhang, A. Fu, K. Li, Q. Liu, P. C. Chu, H. Y. Ma and J. Y. Zhu, Phys. Rev. D 103 (2021) no.2, 023511 doi:10.1103/PhysRevD.103.023511 [arXiv:2011.14623 [gr-qc]].
. Y Reyimuaji, X Zhang, 10.1088/1475-7516/2021/04/077arXiv:2012.07329JCAP. 0477astro-ph.COY. Reyimuaji and X. Zhang, JCAP 04 (2021), 077 doi:10.1088/1475-7516/2021/04/077 [arXiv:2012.07329 [astro-ph.CO]].
. M Motaharfar, R O Ramos, 10.1103/PhysRevD.104.043522arXiv:2105.01131Phys. Rev. D. 104443522hep-thM. Motaharfar and R. O. Ramos, Phys. Rev. D 104 (2021) no.4, 043522 doi:10.1103/PhysRevD.104.043522 [arXiv:2105.01131 [hep-th]].
. D Samart, P Ma-Adlerd, P Channuie, 10.1140/epjc/s10052-022-10073-xarXiv:2105.14552Eur. Phys. J. C. 822122gr-qcD. Samart, P. Ma-adlerd and P. Channuie, Eur. Phys. J. C 82 (2022) no.2, 122 doi:10.1140/epjc/s10052-022-10073-x [arXiv:2105.14552 [gr-qc]].
. A Mohammadi, 10.1103/PhysRevD.104.123538arXiv:2109.00247Phys. Rev. D. 10412gr-qcA. Mohammadi, Phys. Rev. D 104 (2021) no.12, 123538 doi:10.1103/PhysRevD.104.123538 [arXiv:2109.00247 [gr-qc]].
. M Alhallak, A Alrakik, N Chamoun, M S El-Daher, 10.3390/universe8020126arXiv:2111.050758126Universeastro-ph.COM. AlHallak, A. AlRakik, N. Chamoun and M. S. El-Daher, Universe 8 (2022) no.2, 126 doi:10.3390/universe8020126 [arXiv:2111.05075 [astro-ph.CO]].
. X M Zhang, K Li, Y F Guo, P C Chu, H Liu, J Y Zhu, 10.1103/PhysRevD.104.103513arXiv:2111.14138Phys. Rev. D. 10410gr-qcX. M. Zhang, K. Li, Y. F. Guo, P. C. Chu, H. Liu and J. Y. Zhu, Phys. Rev. D 104 (2021) no.10, 103513 doi:10.1103/PhysRevD.104.103513 [arXiv:2111.14138 [gr-qc]].
. R Agostino, O Luongo, 10.1016/j.physletb.2022.137070arXiv:2112.12816Phys. Lett. B. 829137070astro-ph.COR. D'Agostino and O. Luongo, Phys. Lett. B 829 (2022), 137070 doi:10.1016/j.physletb.2022.137070 [arXiv:2112.12816 [astro-ph.CO]].
. A Bose, S Chakraborty, 10.1016/j.nuclphysb.2022.115767arXiv:2204.05712Nucl. Phys. B. 978115767gr-qcA. Bose and S. Chakraborty, Nucl. Phys. B 978 (2022), 115767 doi:10.1016/j.nuclphysb.2022.115767 [arXiv:2204.05712 [gr-qc]].
. A Payaka, W Amaek, P Channuie, 10.1016/j.nuclphysb.2022.116052arXiv:2203.11041Nucl. Phys. B. 986116052gr-qcA. Payaka, W. Amaek and P. Channuie, Nucl. Phys. B 986 (2023), 116052 doi:10.1016/j.nuclphysb.2022.116052 [arXiv:2203.11041 [gr-qc]].
. S Kanno, A Nakato, J Soda, K Ueda, arXiv:2209.05776hep-thS. Kanno, A. Nakato, J. Soda and K. Ueda, [arXiv:2209.05776 [hep-th]].
. B Deb, S Yeasmin, A Deshamukhya, arXiv:2211.05059gr-qcB. Deb, S. Yeasmin and A. Deshamukhya, [arXiv:2211.05059 [gr-qc]].
. L M H Hall, I G Moss, 10.1103/PhysRevD.71.023514arXiv:hep-ph/0408323Phys. Rev. D. 7123514hep-phL. M. H. Hall and I. G. Moss, Phys. Rev. D 71 (2005), 023514 doi:10.1103/PhysRevD.71.023514 [arXiv:hep-ph/0408323 [hep-ph]].
. R Jeannerot, M Postma, 10.1088/1475-7516/2006/07/012arXiv:hep-th/0604216JCAP. 0712hep-thR. Jeannerot and M. Postma, JCAP 07 (2006), 012 doi:10.1088/1475-7516/2006/07/012 [arXiv:hep-th/0604216 [hep-th]].
. S Mohanty, A , 10.1103/PhysRevD.78.123515arXiv:0807.0317Phys. Rev. D. 78123515hepphS. Mohanty and A. Nautiyal, Phys. Rev. D 78 (2008), 123515 doi:10.1103/PhysRevD.78.123515 [arXiv:0807.0317 [hep- ph]].
. J C Bueno Sanchez, M Bastero-Gil, A Berera, K Dimopoulos, 10.1103/PhysRevD.77.123527arXiv:0802.4354Phys. Rev. D. 77123527hep-phJ. C. Bueno Sanchez, M. Bastero-Gil, A. Berera and K. Dimopoulos, Phys. Rev. D 77 (2008), 123527 doi:10.1103/PhysRevD.77.123527 [arXiv:0802.4354 [hep-ph]].
. Y F Cai, J B Dent, D A Easson, 10.1103/PhysRevD.83.101301arXiv:1011.4074Phys. Rev. D. 83101301hep-thY. F. Cai, J. B. Dent and D. A. Easson, Phys. Rev. D 83 (2011), 101301 doi:10.1103/PhysRevD.83.101301 [arXiv:1011.4074 [hep-th]].
. T Matsuda, 10.1103/PhysRevD.87.026001arXiv:1212.3030Phys. Rev. D. 872hep-thT. Matsuda, Phys. Rev. D 87 (2013) no.2, 026001 doi:10.1103/PhysRevD.87.026001 [arXiv:1212.3030 [hep-th]].
. H Mishra, S Mohanty, A , 10.1016/j.physletb.2012.02.005arXiv:1106.3039Phys. Lett. B. 710hep-phH. Mishra, S. Mohanty and A. Nautiyal, Phys. Lett. B 710 (2012), 245-250 doi:10.1016/j.physletb.2012.02.005 [arXiv:1106.3039 [hep-ph]].
. L Visinelli, 10.1088/1475-7516/2011/09/013arXiv:1107.3523JCAP. 0913astro-ph.COL. Visinelli, JCAP 09 (2011), 013 doi:10.1088/1475-7516/2011/09/013 [arXiv:1107.3523 [astro-ph.CO]].
. M Bastero-Gil, A Berera, T P Metcalf, J G Rosa, 10.1088/1475-7516/2014/03/023arXiv:1312.2961JCAP. 0323hep-phM. Bastero-Gil, A. Berera, T. P. Metcalf and J. G. Rosa, JCAP 03 (2014), 023 doi:10.1088/1475-7516/2014/03/023 [arXiv:1312.2961 [hep-ph]].
. X M Zhang, J Y Zhu, 10.1103/PhysRevD.90.123519arXiv:1402.0205Phys. Rev. D. 9012gr-qcX. M. Zhang and j. Y. Zhu, Phys. Rev. D 90 (2014) no.12, 123519 doi:10.1103/PhysRevD.90.123519 [arXiv:1402.0205 [gr-qc]].
. M Bastero-Gil, A Berera, R O Ramos, J G Rosa, 10.1088/1475-7516/2014/10/053arXiv:1404.4976JCAP. 1053astro-ph.COM. Bastero-Gil, A. Berera, R. O. Ramos and J. G. Rosa, JCAP 10 (2014), 053 doi:10.1088/1475-7516/2014/10/053 [arXiv:1404.4976 [astro-ph.CO]].
. M Bastero-Gil, A Berera, N Kronberg, 10.1088/1475-7516/2015/12/046arXiv:1509.07604JCAP. 1246hep-phM. Bastero-Gil, A. Berera and N. Kronberg, JCAP 12 (2015), 046 doi:10.1088/1475-7516/2015/12/046 [arXiv:1509.07604 [hep-ph]].
. B Batell, G F Giudice, M Mccullough, 10.1007/JHEP12(2015)162arXiv:1509.00834JHEP. 12162hep-phB. Batell, G. F. Giudice and M. McCullough, JHEP 12 (2015), 162 doi:10.1007/JHEP12(2015)162 [arXiv:1509.00834 [hep-ph]].
| [] |
[
"Refined parameters of the HD 22946 planetary system and the true orbital period of planet d ⋆",
"Refined parameters of the HD 22946 planetary system and the true orbital period of planet d ⋆"
] | [
"Z Garai ",
"H P Osborn ",
"D Gandolfi ",
"A Brandeker ",
"S G Sousa ",
"M Lendl ",
"A Bekkelien ",
"C Broeg ",
"A Collier Cameron ",
"J A Egger ",
"M J Hooton ",
"Y Alibert ",
"L ",
"D Ségransan ",
"A E Simon ",
"A M S Smith ",
"M Steller ",
"Gy M Szabó ",
"N Thomas ",
"S Udry ",
"J Venturini ",
"N Walton "
] | [] | [
"P. E. Cubillos"
] | Context. Multi-planet systems are important sources of information regarding the evolution of planets. However, the long-period planets in these systems often escape detection. These objects in particular may retain more of their primordial characteristics compared to close-in counterparts because of their increased distance from the host star. HD 22946 is a bright (G = 8.13 mag) late F-type star around which three transiting planets were identified via Transiting Exoplanet Survey Satellite (TESS) photometry, but the true orbital period of the outermost planet d was unknown until now. Aims. We aim to use the Characterising Exoplanet Satellite (CHEOPS) space telescope to uncover the true orbital period of HD 22946d and to refine the orbital and planetary properties of the system, especially the radii of the planets. Methods. We used the available TESS photometry of HD 22946 and observed several transits of the planets b, c, and d using CHEOPS. We identified two transits of planet d in the TESS photometry, calculated the most probable period aliases based on these data, and then scheduled CHEOPS observations. The photometric data were supplemented with ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) radial velocity data. Finally, a combined model was fitted to the entire dataset in order to obtain final planetary and system parameters. Results. Based on the combined TESS and CHEOPS observations, we successfully determined the true orbital period of the planet d to be 47.42489 ± 0.00011 d, and derived precise radii of the planets in the system, namely 1.362 ± 0.040 R ⊕ , 2.328 ± 0.039 R ⊕ , and 2.607 ± 0.060 R ⊕ for planets b, c, and d, respectively. Due to the low number of radial velocities, we were only able to determine 3σ upper limits for these respective planet masses, which are 13.71 M ⊕ , 9.72 M ⊕ , and 26.57 M ⊕ . We estimated that another 48 ESPRESSO radial velocities are needed to measure the predicted masses of all planets in HD 22946. We also derived stellar parameters for the host star. Conclusions. Planet c around HD 22946 appears to be a promising target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting because there are only a few similar confirmed exoplanets to date. Such objects are worth investigating in the near future, for example in terms of their composition and internal structure. ⋆ This article uses data from CHEOPS programmes CH_PR110048 and CH_PR100031. Photometry and radial velocity data of HD 22946 are available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/. | 10.1051/0004-6361/202345943 | [
"https://export.arxiv.org/pdf/2306.04468v1.pdf"
] | 259095617 | 2306.04468 | 4a5085dcbb7d2e48417de7fa7b0fa388efff050e
Refined parameters of the HD 22946 planetary system and the true orbital period of planet d ⋆
June 8, 2023
Z Garai
H P Osborn
D Gandolfi
A Brandeker
S G Sousa
M Lendl
A Bekkelien
C Broeg
A Collier Cameron
J A Egger
M J Hooton
Y Alibert
L
D Ségransan
A E Simon
A M S Smith
M Steller
Gy M Szabó
N Thomas
S Udry
J Venturini
N Walton
P. E. Cubillos
Astronomy & Astrophysics manuscript no. toi411 (Affiliations can be found after the references)
Key words: Methods: observational - Techniques: photometric - Planets and satellites: fundamental parameters
Context. Multi-planet systems are important sources of information regarding the evolution of planets. However, the long-period planets in these systems often escape detection. These objects in particular may retain more of their primordial characteristics compared to close-in counterparts because of their increased distance from the host star. HD 22946 is a bright (G = 8.13 mag) late F-type star around which three transiting planets were identified via Transiting Exoplanet Survey Satellite (TESS) photometry, but the true orbital period of the outermost planet d was unknown until now. Aims. We aim to use the Characterising Exoplanet Satellite (CHEOPS) space telescope to uncover the true orbital period of HD 22946d and to refine the orbital and planetary properties of the system, especially the radii of the planets. Methods. We used the available TESS photometry of HD 22946 and observed several transits of the planets b, c, and d using CHEOPS. We identified two transits of planet d in the TESS photometry, calculated the most probable period aliases based on these data, and then scheduled CHEOPS observations. The photometric data were supplemented with ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) radial velocity data. Finally, a combined model was fitted to the entire dataset in order to obtain final planetary and system parameters. Results. Based on the combined TESS and CHEOPS observations, we successfully determined the true orbital period of the planet d to be 47.42489 ± 0.00011 d, and derived precise radii of the planets in the system, namely 1.362 ± 0.040 R ⊕ , 2.328 ± 0.039 R ⊕ , and 2.607 ± 0.060 R ⊕ for planets b, c, and d, respectively. Due to the low number of radial velocities, we were only able to determine 3σ upper limits for these respective planet masses, which are 13.71 M ⊕ , 9.72 M ⊕ , and 26.57 M ⊕ . We estimated that another 48 ESPRESSO radial velocities are needed to measure the predicted masses of all planets in HD 22946. We also derived stellar parameters for the host star. Conclusions. Planet c around HD 22946 appears to be a promising target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting because there are only a few similar confirmed exoplanets to date. Such objects are worth investigating in the near future, for example in terms of their composition and internal structure. ⋆ This article uses data from CHEOPS programmes CH_PR110048 and CH_PR100031. Photometry and radial velocity data of HD 22946 are available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/.
Introduction
Multi-planet systems are important from many viewpoints. Not only are they susceptible of relatively straightforward confirmation as bona fide planets (Lissauer et al. 2012), they also allow intra-planetary comparisons to be made for planets formed under the same conditions; see for example Weiss et al. (2018). The majority of the known multi-planet systems were found by space-based exoplanet transit surveys. This is because, while giant hot-Jupiters are relatively easy to observe with ground-based photometry, the detection of smaller planets, for example, Earths, super-Earths, and sub-Neptunes, which are typically found in multi-planet systems, requires the precise photometry of space-based observatories such as TESS (Ricker 2014). Mutual gravitational interactions in some multi-planet systems can provide constraints on the planet masses through transit-timing variations (TTVs). Using the TESS photometry obtained during observations in sector numbers 3, 4, 30, and 31, the discoverers (C22) easily derived the orbital periods of the two inner planets, b and c, which are about 4.040 d and 9.573 d, respectively. The orbital period of planet d was not found by C22. The authors determined its presence through a single transit found in sector number 4 and obtained its parameters from this single transit event. Its depth and the host brightness make planet d easily detectable with CHEOPS, and therefore HD 22946 was observed several times with this instrument within the Guaranteed Time Observations (GTO) programmes CH_PR110048 and CH_PR100031, with the main scientific goals being to uncover the true orbital period of planet d and to refine the parameters of the HD 22946 system based on CHEOPS and TESS observations via joint analysis of the photometric data, supplemented with ESPRESSO spectroscopic observations of HD 22946. The present paper is organised as follows. In Sect. 2, we provide a brief description of observations and data reduction. In Sect. 3, we present the details of our data analysis and our first results, including stellar parameters, period aliases of HD 22946d from the TESS data, and a search for TTVs. Our final results based on the combined TESS, CHEOPS, and RV model are described and discussed in Sect. 4. We summarise our findings in Sect. 5.
Observations and data reduction
TESS data
HD 22946 was observed during four TESS sectors: numbers 3, 4, 30, and 31 (see Table 1). The time gap between the two observing seasons is almost two years. These TESS data were downloaded from the Mikulski Archive for Space Telescopes 1 in the form of Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) flux. These data, containing 61 987 data points, were obtained from two-minute integrations and were initially smoothed by the PDCSAP pipeline. This light curve is subjected to more treatment than the simple aperture photometry (SAP) light curve, and is specifically intended for detecting planets. The pipeline attempts to remove systematic artifacts while keeping planetary transits intact. The average uncertainty of the PDCSAP data points is 310 ppm.
During these TESS observing runs, 23 transits of planet b were recorded, and the transit of planet c was observed eight times in total (see more details in Table 1). As in C22, we also initially recognised a transit-like feature in the sector number 4 data at t tr,1 = 2 458 425.1657 BJD TDB through visual inspection of the light curve. Given that 65%-80% of single transits from the TESS primary mission will re-transit in the extended mission sectors (see Cooke et al. 2019, 2021), we subsequently visually inspected the light curve once the TESS year 3 data were available and found a second dip at t tr,2 = 2 459 136.5357 BJD TDB in the sector number 30 data with near-identical depth and duration. Given the high prior probability of finding a second transit, the close match in transit shape between events, and the high quality of the data (i.e. minimal systematic noise elsewhere in the light curve), we concluded that this signal is a bona fide transit event and that the transits in sector numbers 4 and 30 are very likely caused by the same object, that is, by planet d.
Outliers were cleaned using a 3σ clipping, where σ is the standard deviation of the light curve. With this clipping procedure, we discarded 300 data points out of 61 987, which is ∼ 0.5% of the TESS data. Subsequently, we visually inspected the dataset in order to check the effect of the outlier removal, which we found to be reasonable. Since TESS time stamps are given in Barycentric TESS Julian Date (i.e. BJD TDB − 2 457 000.0), in the next step we converted all TESS time stamps to BJD TDB.
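As an illustration of the two preprocessing steps just described, the following minimal Python sketch (our own illustration, not the actual pipeline; the light curve here is synthetic) applies a 3σ clip and converts Barycentric TESS Julian Date to BJD TDB:

import numpy as np

def clip_3sigma(time, flux, n_sigma=3.0):
    # Keep points within n_sigma standard deviations of the mean flux.
    sigma = np.std(flux)
    keep = np.abs(flux - np.mean(flux)) < n_sigma * sigma
    return time[keep], flux[keep]

def btjd_to_bjd_tdb(btjd):
    # TESS time stamps are BTJD = BJD_TDB - 2457000.0.
    return np.asarray(btjd) + 2457000.0

# Synthetic example light curve with a few injected outliers:
time = np.linspace(1385.0, 1410.0, 61987)    # BTJD
flux = 1.0 + 3.1e-4 * np.random.randn(time.size)
flux[::5000] += 5e-3
t_clean, f_clean = clip_3sigma(time, flux)
bjd_tdb = btjd_to_bjd_tdb(t_clean)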
CHEOPS data
HD 22946 was observed five times with the CHEOPS space telescope. This is the first European space mission dedicated primarily to the study of known exoplanets. It consists of a telescope with a mirror of 32 cm in diameter based on a Ritchey-Chrétien design. The photometric detector is a single-CCD camera covering the wavelength range from 330 to 1100 nm with a field of view of 0.32 deg 2. The payload design and operation have been optimised to achieve ultra-high photometric stability, achieving a photometric precision of 20 ppm on observations of a G5-type star in 6 hours, and of 85 ppm on observations of a K5-type star in 3 hours (Benz et al. 2021). The CHEOPS observations were scheduled based on the existing TESS observations of planets b and c, and mainly based on the observed transit times of planet d (see Sect. 2.1). The marginal probability for each period alias of planet d was calculated using the MonoTools package (see Sect. 3.2). We were not able to observe all the highest-probability aliases, because some were not visible during the two-week period of visibility. Within the program number CH_PR110048, we therefore planned to observe the three highest-probability aliases of planet d with CHEOPS, but due to observability constraints and conflicts with other observations, only two visits 2 of planet d aliases were scheduled. Its true orbital period was confirmed during the second observation. The remaining three visits were scheduled in the framework of the program number CH_PR100031. Based on these CHEOPS observations, three transits of planet b were recorded during visits 1, 3, and 5, the transit of planet c was observed twice during visits 2 and 4, and a single transit of planet d (in a multiple transit feature with planet c) was detected during the CHEOPS visit 4. Further details about these observations can be found in Table 2.
From the CHEOPS detector, which has 1024 × 1024 pixels, a 200 × 200 pixel subarray is extracted around the target point-spread function (PSF), which is used to compute the photometry. This type of photometry product was processed by the CHEOPS Data Reduction Pipeline (DRP) version 13.1.0 (Hoyer et al. 2020). It performs several image corrections, including bias, dark, and flat corrections, contamination estimation, and background-star correction. The DRP pipeline produces four different light-curve types for each visit, but we initially analysed only the decontaminated 'OPTIMAL' type, where the aperture radius is automatically set based on the signal-to-noise ratio (S/N). In addition to the subarrays, there are imagettes available for each exposure. The imagettes are frames of 30 pixels in radius centred on the target, which do not need to be co-added before download owing to their smaller size. We used a tool specifically developed for photometric extraction from imagettes using point-spread function photometry, called PIPE 3; see for example Szabó et al. (2021, 2022). The PIPE photometry has a S/N comparable to that of the DRP photometry, but has the advantage of a shorter cadence, and therefore we decided to use this CHEOPS product in this work. The average uncertainty of the PIPE data points is 160 ppm.
The PIPE CHEOPS observations were processed using the dedicated data decorrelation and transit analysis software called pycheops 4 (Maxted et al. 2022). This package includes downloading, visualising, and decorrelating CHEOPS data, fitting transits and eclipses of exoplanets, and calculating light-curve noise. We first cleaned the light curves of outlier data points using the pycheops built-in function clip_outliers, which removes outliers from a dataset by calculating the mean absolute deviation (MAD) from the light curve following median smoothing, and rejects data greater than the smoothed dataset plus the MAD multiplied by a clipping factor. A clipping factor equal to five was reasonable in our cases, which we checked visually. With this clipping procedure, we discarded 30 data points out of 3195, which is ∼ 0.9% of the CHEOPS data. The next step was the extraction of the detrending parameters. During this procedure, the software provides a list of the parameters necessary for the detrending. The most important decorrelation is the subtraction of the roll-angle effect. In order to keep the cold-plate radiators facing away from the Earth, the spacecraft rolls during its orbit. This means that the field of view rotates around the pointing direction. The target star remains stationary within typically 1 pixel, but the rotation of the field of view produces a variation of its flux from the nearby sources in phase with the roll angle of the spacecraft (Bonfanti et al. 2021). The extracted detrending parameters were co-fitted with the transit model (see Sect. 3.3).
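The MAD-based rejection performed by clip_outliers can be sketched as follows (a hedged re-implementation of the behaviour described above, not the pycheops source; the window length and data are illustrative):

import numpy as np
from scipy.ndimage import median_filter

def clip_mad(flux, window=11, factor=5.0):
    # Median-smooth the light curve, compute the mean absolute
    # deviation (MAD) of the residuals, and flag points deviating
    # from the smoothed curve by more than factor * MAD.
    smooth = median_filter(flux, size=window, mode='nearest')
    resid = flux - smooth
    mad = np.mean(np.abs(resid))
    return np.abs(resid) < factor * mad   # True = keep

flux = 1.0 + 1.6e-4 * np.random.randn(3195)
flux[100] += 2e-3                         # one artificial outlier
keep = clip_mad(flux, factor=5.0)
print(np.sum(~keep), 'points rejected')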
ESPRESSO/VLT data
We acquired 14 high-resolution spectra of the host star HD 22946 using the ESPRESSO spectrograph (Pepe et al. 2014) mounted at the 8.2 m Very Large Telescope (VLT) at Paranal Observatory (Chile). The observations were carried out between 10 February 2019 and 17 March 2019 under the observing program number 0102.C-0456 (PI: V. Van Eylen) and within the KESPRINT 5 project. We used the high-resolution (HR) mode of the spectrograph, which provides a resolving power of R ≈ 134 000. We set the exposure time to 600 s, leading to a S/N per pixel at 650 nm ranging between 120 and 243. Daytime ThAr spectra and simultaneous Fabry-Perot exposures were taken to determine the wavelength solution and correct for possible nightly instrumental drifts, respectively. We reduced the ESPRESSO spectra using the dedicated data-reduction software and extracted the RVs by cross-correlating the échelle spectra with a numerical mask; the resulting RV measurements are listed in Table 3. The average uncertainty of the RV data points is ∼ 0.00015 km s −1. We co-added the individual ESPRESSO spectra prior to carrying out the spectroscopic analysis presented in Sect. 3.1. To this aim, we Doppler-shifted the data to a common reference wavelength by cross-correlating the ESPRESSO spectra with the spectrum with the highest S/N. We finally performed a S/N-weighted co-addition of the Doppler-shifted spectra, while applying a sigma-clipping algorithm to remove possible cosmic-ray hits and outliers. The co-added spectrum has a S/N of ∼ 900 per pixel at 650 nm.
Data analysis and first results
Stellar parameters
The spectroscopic stellar parameters (the effective temperature T eff, the surface gravity log g, the microturbulent velocity v mic, and the metallicity [Fe/H]; see Table 4) were derived using the ARES and MOOG codes, following the same methodology as described in Sousa et al. (2021), Sousa (2014), and Santos et al. (2013). We used the latest version of the ARES code 6 (Sousa et al. 2007, 2015) to measure the equivalent widths of iron lines in the combined ESPRESSO spectrum. We used a minimisation procedure to find ionisation and excitation equilibrium and converge to the best set of spectroscopic parameters. This procedure makes use of a grid of Kurucz model atmospheres (Kurucz 1993a) and the radiative transfer code MOOG (Sneden 1973).
To derive the radius of the host star HD 22946, we used a Markov chain Monte Carlo (MCMC) modified infrared flux method. This enables us to calculate the bolometric flux using stellar atmospheric models defined by our spectral analysis to build spectral energy distributions (SEDs) that are compared with broadband fluxes and uncertainties from the most recent data releases for the following bandpasses: Gaia G, G BP, and G RP, 2MASS J, H, and K, and WISE W1 and W2 (Skrutskie et al. 2006; Wright et al. 2010; Gaia Collaboration et al. 2021). From the bolometric flux, we then determine the stellar effective temperature and angular diameter; this latter is converted to a radius using the offset-corrected Gaia parallax (Lindegren et al. 2021). We used Bayesian model averaging of the atlas (Kurucz 1993b; Castelli & Kurucz 2003) and phoenix (Allard 2014) catalogues to produce a weighted averaged posterior distribution of the stellar radius in order to account for uncertainties in stellar atmospheric modelling. We find a value of R s = 1.117 ± 0.009 R ⊙, which is in 3σ agreement with the value of 1.157 ± 0.025 R ⊙ presented by the discoverers.
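The final step of this method, converting an angular diameter and a parallax into a linear radius, reduces to R = (θ/2) d; the following sketch uses illustrative input values chosen to reproduce R s ≈ 1.117 R ⊙ (they are not the fitted quantities):

import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)   # milliarcsec -> rad
PC_TO_M = 3.0857e16
R_SUN_M = 6.957e8

def radius_from_theta(theta_mas, parallax_mas):
    # Distance from parallax, then R = (theta / 2) * d, in solar radii.
    d_m = (1000.0 / parallax_mas) * PC_TO_M
    return 0.5 * theta_mas * MAS_TO_RAD * d_m / R_SUN_M

print(radius_from_theta(theta_mas=0.1655, parallax_mas=15.93))  # ~1.117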
We finally determined the stellar mass M s and stellar age t s using two different sets of stellar evolutionary models, namely PARSEC 7 v1.2S (Marigo et al. 2017) and CLES (Code Liègeois d'Évolution Stellaire); see Scuflaire et al. (2008). More specifically, we employed the isochrone-placement algorithm developed by Bonfanti et al. (2015, 2016) to interpolate the input parameters (T eff, [Fe/H], R s) within pre-computed grids of PARSEC v1.2S isochrones and tracks to derive a first pair of mass and age. A second pair of mass and age values, instead, was retrieved by inputting T eff, [Fe/H], and R s directly into the CLES code, which generates the best-fit stellar evolutionary track following the Levenberg-Marquardt minimisation scheme, as described in Salmon et al. (2021). After carefully checking the mutual consistency of the two respective pairs of outcomes through the χ 2 -based methodology presented in Bonfanti et al. (2021), we finally merged (i.e. summed) the two M s and t s results and obtained M s = 1.098 ± 0.040 M ⊙ and t s = 2.5 ± 1.0 Gyr. The mass parameter value of the host star agrees within the uncertainty with the value provided in the discovery paper, which is 1.104 ± 0.012 M ⊙. However, the planet host seems to be younger than previously presented by C22. The discoverers obtained a value of 5.0 ± 1.0 Gyr. More parameter values, including from this work, are compared with the discovery-paper parameter values in Table 4.

Table 4. Fundamental parameters of the exoplanet host HD 22946.

Parameter [unit]       Value                  Source
Name                   HD 22946               -
TOI ID                 411                    G2021
TIC ID                 100990000              S2018
Gaia DR3 ID            4848767461548943104    G2022
RA (J2016) [deg]       54.819528              G2022
Dec (J2016) [deg]      −42.76304              G2022
T (TESS) [mag]         7.757 ± 0.006          S2018
G (Gaia) [mag]         8.13 ± 0.69            G2022
J [mag]                7.250 ± 0.027          C2003
H [mag]                7.040 ± 0.044          C2003
K [mag]                6.981 ± 0.029          C2003
T eff [K]              6040 ± 48              C2022
T eff [K]              6169 ± 64              This work
R s [R ⊙]              1.157 ± 0.025          C2022
R s [R ⊙]              1.117 ± 0.009          This work
M s [M ⊙]              1.104 ± 0.012          C2022
M s [M ⊙]              1.098 ± 0.040          This work
Period aliases of HD 22946d from the TESS data
In order to determine each possible period alias and to schedule CHEOPS observations of planet d, we first performed a period analysis of the available TESS data. For this purpose, we used the MonoTools package 8 (Osborn et al. 2022), which is able to model transit light curves in the case of multiple transits, duotransits, and monotransits, as well as multiple systems with combinations of such candidates, with both radial velocities and transit photometry. The package calculates a marginalised probability distribution across all allowed aliases for a given transit model by combining priors for each alias. The probabilities are estimated based on two major assumptions, namely that short-period orbits are highly favoured over long-period ones due to a combination of geometric probability and window function, and that planets in multi-planet systems have low eccentricities (Kipping et al. 2013; Kipping 2018; Van Eylen & Albrecht 2015). More details about this software can be found in Osborn et al. (2022). The TESS data described in Sect. 2.1 were used during the fitting procedure with MonoTools. In the case of planet b, we set as input parameters the reference mid-transit time of T c = 2 458 385.7318 BJD TDB, the orbital period of P orb = 4.040330 ± 0.000010 d, the transit duration (transit width) of W = 3.4 hr, and the transit depth of D = 134 ppm. In the case of planet c, the inputs were T c = 2 458 386.1878 BJD TDB, P orb = 9.573117 ± 0.000020 d, W = 3.8 hr, and D = 389 ppm. For planet d, we set as input parameters the two mid-transit times detected by TESS, namely t tr,1 = 2 458 425.1657 BJD TDB and t tr,2 = 2 459 136.5357 BJD TDB, the transit duration of W = 6.5 hr, and the transit depth of D = 478 ppm. These parameters were calculated from the TESS data alone. The orbital period aliases of planet d with a probability of p > 1% are listed in Table 5. The software MonoTools forecasted that a transit of planet d with the orbital period alias number 2 would take place on 25 October 2021, with a mid-transit time of T c = 2 459 513.1441 BJD TDB. This forecasted event was observed during the third CHEOPS visit (see Table 2), but the expected transit of planet d did not happen; only the transit of planet b was recorded at that time. After this observation, we were able to exclude the period alias of P = 41.8454 d from the list of possible aliases. The next forecast predicted a transit of planet d on 28 October 2021, with a mid-transit time of T c = 2 459 515.9338 BJD TDB, which means that, in this case, alias number 4 (see Table 5) was preferred as its true orbital period. This forecasted event was observed with CHEOPS during its fourth visit. This time, the transit of planet d was successfully detected together with a transit of planet c, confirming that the period alias of P = 47.4248 d is the true orbital period of planet d. This result also confirms that the second transit-like feature of planet d, observed by TESS in sector number 30, was a real transit event and not an instrumental artifact as considered by C22. Alternatively, the dip observed at 2 459 136.5357 BJD TDB was a mixture of instrumental effects and the transit of planet d.

8 See https://github.com/hposborn/MonoTools.
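The underlying alias arithmetic is simple: two transits separated by Δt admit periods P n = Δt/n for integer n, which MonoTools then weights by the geometric and eccentricity priors mentioned above. A minimal sketch (the 20 d lower bound is an illustrative cut, not a MonoTools setting):

t1, t2 = 2458425.1657, 2459136.5357   # TESS mid-transit times (BJD_TDB)
dt = t2 - t1                          # about 711.37 d

for n in range(1, 36):
    period = dt / n
    if period > 20.0:
        print(f'n = {n:2d}  P = {period:9.4f} d')
# n = 17 gives P = 41.8453 d (the alias excluded by the third CHEOPS
# visit) and n = 15 gives P = 47.4247 d (the confirmed orbital period).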
With this knowledge of the true orbital period of planet d, we were able to combine the CHEOPS and TESS photometric observations and the RV measurements in order to improve the orbital and planetary parameters of the HD 22946 system, which were previously obtained only from the TESS and RV data by the discoverers.
CHEOPS, TESS, and RV combined model
In order to produce accurate planetary parameters for all three planets, we built a combined model using all available data, that is, TESS photometry (described in Sect. 2.1), CHEOPS photometry (described in Sect. 2.2), and ESPRESSO RVs (described in Sect. 2.3). To model the stellar variability in the photometry, we used a Gaussian process (GP) with a simple harmonic oscillator (SHO) kernel from celerite and a quality factor Q = 1/√2, as is common for quasi-periodic stellar variability. In order to speed up sampling, we binned the TESS data to 30 minute bins far from transits, keeping 2 minute data near transit. As we have reasonable prior knowledge from theoretical analyses for the expected stellar limb-darkening (LD) parameters for HD 22946, we used these as priors in the analysis. We used the quadratic LD law and interpolated tables of coefficients calculated for the TESS (Claret 2018) and CHEOPS (Claret 2021) passbands using the derived stellar parameters of T eff = 6169 K and log g = 4.47 (cgs). In order to guard against systematic errors, we inflated the σ for each parameter prior to 0.1.
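A stand-alone version of such a kernel can be written with the celerite2 package (a sketch under the assumption that celerite2 is available; in the actual analysis the kernel was part of the PyMC3/exoplanet model, and the hyperparameter values below are arbitrary):

import numpy as np
import celerite2
from celerite2 import terms

t = np.sort(np.random.uniform(0.0, 27.0, 500))   # days
yerr = 3.1e-4 * np.ones_like(t)
y = 1.0 + yerr * np.random.randn(t.size)         # toy flux

# Stochastically driven simple harmonic oscillator with Q = 1/sqrt(2).
kernel = terms.SHOTerm(sigma=2e-4, rho=2.0, Q=1.0 / np.sqrt(2.0))
gp = celerite2.GaussianProcess(kernel, mean=1.0)
gp.compute(t, yerr=yerr)
print('GP log likelihood:', gp.log_likelihood(y))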
Even though the PIPE light curves for HD 22946 have fewer systematic features than the DRP light curves, they can still include flux variations due to the influence of various external factors. Therefore, we can improve the light curve by decorrelating the flux data against metadata generated for the instrument and target. To decipher which decorrelation vectors provide improvement, we ran an initial PyMC3 model for each CHEOPS visit using all available ancillary data: sin and cos of the roll angle, background flux, x and y centroid positions, onboard temperature, and time (which also fits short-timescale stellar variability). These parameters are normalised to have µ = 0.0 and σ = 1.0, and decorrelation parameters are given normal priors with µ = 0.0 and σ set by the root-mean-square (RMS) noise for each CHEOPS visit. For each visit model, we also included parameters for any planetary transits present in order to ensure the transits would not bias the model. After HMC sampling, we assessed each decorrelation parameter using the average and standard deviations, keeping only those parameters with a Bayes factor of BF > 1. Despite this detrending, shorter-timescale variation can also be present as a function of roll angle (φ). Pure detrending against sin and cos of the roll angle removes the largest amplitude systematic trends at low frequencies. These are those closest in timescale to the transit feature, and so a simpler detrending technique for such timescales guards against overfitting of the transit. However, the CHEOPS light curve typically also contains systematic noise correlated with roll angle that is at a lower amplitude and higher frequency. This is therefore not adequately removed by simple sin and cos decorrelation. It is this noise that a more flexible GP is better able to model. We therefore also included a GP to model the variation of flux with roll-angle effects. To do this, we first found any potential large jumps in φ and made sure the time series was continuous between these jumps (i.e. by moving the zero point and 'wrapping around'). We then transformed the input data such that they are continuous in x, by sorting by φ rather than time. Once again, we used a SHO kernel from celerite with the quality factor Q set at 1/√2. As we expected the morphology of the variations to be preserved for all CHEOPS visits, we used a single shared kernel. We found that the linear decorrelation is the most important, improving the log likelihood by 1400, but the GP is responsible for a further improvement of 450, which means that the use of a GP to model roll-angle flux behaviour is well justified.
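The linear part of this decorrelation amounts to a least-squares fit of standardised regressors; the sketch below illustrates the idea with synthetic vectors (the variable names are placeholders, not CHEOPS metadata keys):

import numpy as np

def standardise(x):
    return (x - np.mean(x)) / np.std(x)

n = 1000
phi = np.linspace(0.0, 12.0 * np.pi, n)         # ~6 spacecraft rolls
bg = standardise(np.random.randn(n).cumsum())   # toy background vector
flux = (1.0 + 2e-4 * np.sin(phi) + 1e-4 * bg
        + 1.6e-4 * np.random.randn(n))

# Design matrix: offset + sin/cos of roll angle + background flux.
X = np.column_stack([np.ones(n), standardise(np.sin(phi)),
                     standardise(np.cos(phi)), bg])
coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
detrended = flux - X[:, 1:] @ coef[1:]          # keep the mean level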
As multi-planet systems typically have low eccentricities e (Van Eylen et al. 2019), and we lack the high number of RVs capable of resolving any differences in e, we chose to fit only circular orbits. In order to guard against unphysical negative values, we used broad log-normal priors for the key transit and RV amplitude parameters, that is, for R p /R s (planet-to-star radius ratio) and K (RV semi-amplitude). The quantities derived in Sect. 3.1 are used as priors on the stellar parameters in the model. For all datasets (CHEOPS, TESS, and ESPRESSO), we included a jitter term using a wide log-normal prior. We then sampled the combined model using the PyMC_ext 'sample' function, which is specifically written for astrophysical applications and allows us to group independent dataset parameters (e.g. the CHEOPS visit-specific decorrelation parameters) together, thereby speeding up sampling greatly. We used ten chains, tuning each for 1300 steps before sampling for a further 1800, resulting in 18 000 unique samples. The samples have effective sample sizes in the thousands, and the Gelman-Rubin statistics are below 1.01 for all parameters, suggesting they are sufficiently uncorrelated and unbiased. The full list of fitted GP hyperparameters and detrending parameters with the corresponding best-fitting values can be found in Appendix A.1. The best-fitting and derived parameters of the system are described and discussed in Sect. 4.
Search for transit-timing variations
In order to look for potential TTVs, we also ran a combined model using unconstrained timing for each planetary transit thanks to the TTVorbit function of exoplanet, and an independent analysis using the Allesfitter software 11 (Günther & Daylan 2019, 2021), applying a nested sampling fit. Although C22 already performed such an analysis and found no obvious sign of TTVs in the system, we repeated this procedure, but in this case using the CHEOPS data as well. This means mainly that we included three transits of planet d in the analysis and used a longer time baseline. We used the same dataset as in Sect. 3.3, which was co-fitted with a GP using the celerite SHO kernel in both cases. All planetary and system parameters were fixed as derived previously; only the GP hyperparameters, the detrending parameters, and the observed-minus-calculated (O-C) parameters for individual mid-transit times were fitted. Both solutions are consistent with a linear ephemeris, which means we did not find any indication of a quadratic trend in the data, in agreement with the conclusion made by the discoverers. As an illustration, the obtained O-C diagram of the mid-transit times for planets b, c, and d from the Allesfitter package is depicted in Fig. 1. We can see that the O-C values are scattered around O-C = 0.0 d, which means that no significant TTVs are present in the system.

11 See https://www.allesfitter.com/home.
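The O-C values themselves follow from a linear ephemeris; a minimal sketch (T c and P orb for planet b are taken from this work, while the 'observed' times below are synthetic):

import numpy as np

def o_minus_c(t_obs, t0, period):
    # Observed-minus-calculated mid-transit times in days.
    t_obs = np.asarray(t_obs)
    epoch = np.round((t_obs - t0) / period)
    return t_obs - (t0 + epoch * period)

t0, p = 2458385.7318, 4.040295
t_obs = t0 + p * np.array([0, 1, 5, 176]) + 1e-3 * np.random.randn(4)
print(o_minus_c(t_obs, t0, p))   # scatter around zero => no TTVs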
Final results and discussion
The best-fitting and derived parameters from the combined model are listed in Table 6, and the model posteriors of the host star are summarised in Appendix A.2. The fitted TESS light curves from sector numbers 3, 4, 30, and 31 are depicted in the panels of Figs. 2 and 3. The CHEOPS individual observations overplotted with the best-fitting models are shown in the panels of Fig. 4. The RV observations fitted with a spectroscopic orbit are depicted in Fig. 5.

Notes to Table 6. Based on the joint fit of the TESS and CHEOPS photometric data and RV observations. The table shows the best-fitting value of the given parameter and its ±1σ uncertainty. The fitted values correspond to quantile 0.50 (median) and the uncertainties to quantiles ±0.341 in the parameter distributions obtained from the samples. ⋆ Only the 3σ upper limits are listed here due to the low number of the RV observations. ⋄ Assuming an albedo value of 0.2. ♣ Based on the relations presented by Chen & Kipping (2017), assuming that the planet radii are in the interval 1.23 < R p < 14.26 R ⊕. ♡ Based on the relations presented by Otegi et al. (2020). † Assuming that ρ p > 3.3 g cm −3. ‡ Assuming that ρ p < 3.3 g cm −3. ♯ Based on the criteria set by Kempton et al. (2018), using the scale factor of 1.26, the stellar radius of R s = 1.117 ± 0.009 R ⊙, and J = 7.250 ± 0.027 mag (see Table 4).
Here, we present new ephemerides of the planetary orbits, which we calculated based on the combined model. Thanks to the combined TESS and CHEOPS observations, we were able to improve the reference mid-transit times and the orbital periods of the planets compared to the discovery values. C22 derived the orbital period parameter values of P orb,b = 4.040301 (+0.000023/−0.000042) d and P orb,c = 9.573096 (+0.000026/−0.000023) d, and expected an orbital period of P orb = 46 ± 4 d for planet d, which was estimated based on the transit duration and depth along with stellar mass and radius through Kepler's third law, assuming circular orbits. We confirmed this prediction, finding an orbital period for planet d of P orb = 47.42489 ± 0.00011 d. The improved ratios of the orbital periods are P orb,c /P orb,b = 2.37 and P orb,d /P orb,c = 4.95. Based on the Kepler database, the adjacent planet pairs in multiple systems show a broad overall peak between period ratios of 1.5 and 2, followed by a declining tail to larger period ratios. In addition, there appears to be a sizeable peak just interior to the period ratio 5 (Steffen & Hwang 2015); therefore, we can say that the period ratios in HD 22946 fall within these statistics, and the seemingly large orbital gap between planets c and d is not anomalous. In the combined model, we determined the impact parameter b, which is the projected relative distance of the planet from the stellar disk centre during the transit midpoint in units of R s. Based on the combined TESS and CHEOPS photometry observations, we redetermined the radii of the planets, which are 1.362 ± 0.040 R ⊕, 2.328 ± 0.039 R ⊕, and 2.607 ± 0.060 R ⊕ for planets b, c, and d, respectively. The CHEOPS observations are an added value, because compared to the corresponding parameter values presented in C22 (R p,b = 1.72 ± 0.10 R ⊕, R p,c = 2.74 ± 0.14 R ⊕, and R p,d = 3.23 ± 0.19 R ⊕), there is a noticeable improvement in radius precision. Using TESS and CHEOPS photometry observations, the uncertainties on the planet radius parameter values were decreased by ∼ 50%, 68%, and 61% for planets b, c, and d, respectively. We also note that the parameter values from this work are in stark contrast to those derived by C22; these authors found significantly larger radii, that is, larger by ∼ 21%, 15%, and 19% for planets b, c, and d, respectively. We believe this may be due to a misunderstanding of the LimbDarkLightCurve function in exoplanet. The function requires the planetary radius R p in solar radii rather than the planet-to-star radius ratio R p /R s. When misused, the result is an inflation of the R p /R s and R p values by a factor of R s /R ⊙, which in this case is a factor of about 15%-21%. This mistake can be seen most clearly in C22, when comparing the models shown in Figure 5 with the implied depths in Table 4 (likely derived from the radius ratio), which are inflated by this factor. Such a mistake was also evident during the reanalysis of the BD+40 2790 (TOI-2076) system (Osborn et al. 2022).

Fig. 2. TESS observations of the transiting planets HD 22946b, HD 22946c, and HD 22946d from sector numbers 3, 4, and 30, overplotted with the best-fitting model. This model was derived based on the entire CHEOPS and TESS photometric dataset and the RV observations from ESPRESSO via joint analysis of the data. The left-hand panels show the non-detrended data overplotted with the full model, while the right-hand panels show the detrended data overplotted with the transit model. We averaged the TESS data for better visualisation of the transit events using a running average with steps and width of 0.009 and 0.09 d, respectively. We note that an interruption in communications between the instrument and spacecraft occurred at 2 458 418.54 BJD TDB, resulting in an instrument turn-off until 2 458 421.21 BJD TDB. No data were collected during this period.
Fig. 3. As in Fig. 2, but for the TESS sector number 31.

According to the radius valley at ∼ 1.5 − 2.0 R ⊕, which separates super-Earths and sub-Neptunes (Fulton et al. 2017; Van Eylen et al. 2018; Martinez et al. 2019; Ho & Van Eylen 2023), and based on the refined planet radii, we find that planet b is a super-Earth, and planets c and d are similar in size and are sub-Neptunes, in agreement with C22. It is well known that
small exoplanets have a bimodal radius distribution separated by the radius valley. Potential explanations focus on atmospheric-escape-driven mechanisms, such as photo-evaporation; see for example Owen (2019). Such models suggest that planets with radii below 1.5 R⊕ initially had hydrogen/helium atmospheres but ultimately lost them through atmospheric escape, while those just above 2.0 R⊕ retained hydrogen/helium envelopes of ∼ 1% of the core mass. With the HD 22946 planets lying on either side of the valley, planet b could be a photo-evaporated analogue of planets c and d. Recently, Luque & Pallé (2022) presented a different approach, arguing that planet densities may be more informative than planet radii alone and proposing that a density gap separates rocky from water-rich planets. For M-dwarf systems, these authors found that rocky planets form within the ice line, while water worlds form beyond the ice line and migrate inwards. Given that theoretical models predict similar results for stars of other types, this scenario could also apply to the planets orbiting HD 22946.
Because of the low number of RVs, we present here only the 3σ upper limits for the planet masses, as did the discoverers. C22 obtained 3σ upper mass limits of about 11 M⊕, 14.5 M⊕, and 24.5 M⊕ for planets b, c, and d, respectively, from the same spectroscopic observations. The 3σ upper limits for the planet masses from this work are M p,b = 13.71 M⊕, M p,c = 9.72 M⊕, and M p,d = 26.57 M⊕. Similarly to the discoverers, we obtained very different upper mass limits for planets c and d, although they have similar radii, which could point to a somewhat different internal structure of these planets. Applying the relations of Chen & Kipping (2017) and Otegi et al. (2020), we also re-estimated the planet masses, which were previously forecasted by the discoverers as 6.29 ± 1.30 M⊕, 7.96 ± 0.69 M⊕, and 10.53 ± 1.05 M⊕ for planets b, c, and d, respectively. The improved parameter values are presented in Table 6. Furthermore, taking into account the planet masses estimated with the relations of Otegi et al. (2020), we predicted the number of additional RV measurements required to achieve a 3σ detection of each mass using the Radial Velocity Follow-up Calculator 12 (RVFC; see Cloutier et al. 2018) and the RV simulator (Wilson et al., in preparation). Based on these simulations, another 27, 24, and 48 ESPRESSO RVs are needed to measure the predicted masses of planets b, c, and d, respectively. The expected RV semi-amplitudes assuming the estimated planet masses are K b = 1.10 ± 0.12 m s−1, K c = 2.08 ± 0.10 m s−1, and K d = 1.46 ± 0.08 m s−1.
12 See http://maestria.astro.umontreal.ca/rvfc/.
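For reference, such semi-amplitudes follow from the standard Keplerian relation K = (2πG/P)^(1/3) M_p sin i (M_s + M_p)^(−2/3) (1 − e²)^(−1/2). The following minimal sketch evaluates it for the estimated masses; the stellar mass used is an assumed placeholder (it is not quoted in this section), so the printed values only approximately reproduce the K values above.

```python
import math

G = 6.674e-11                          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN, M_EARTH = 1.989e30, 5.972e24    # [kg]
DAY = 86400.0                          # [s]

def rv_semi_amplitude(m_p_earth, period_days, m_star_sun, inc_deg=90.0, ecc=0.0):
    """Keplerian RV semi-amplitude K in m/s."""
    m_p = m_p_earth * M_EARTH
    m_s = m_star_sun * M_SUN
    p = period_days * DAY
    return ((2.0 * math.pi * G / p) ** (1.0 / 3.0)
            * m_p * math.sin(math.radians(inc_deg))
            / ((m_s + m_p) ** (2.0 / 3.0) * math.sqrt(1.0 - ecc ** 2)))

# Estimated masses and fitted periods from this work; M_star = 1.1 M_sun is an
# assumed placeholder, so these values are approximate (of order 1-2 m/s).
for name, m_p, p in [("b", 2.61, 4.040295), ("c", 6.61, 9.573083), ("d", 7.90, 47.42489)]:
    print(name, round(rv_semi_amplitude(m_p, p, 1.1), 2), "m/s")
```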
C22 also probed the planets from the viewpoint of future atmospheric characterisation using the transmission spectroscopy metric (TSM); see Eq. 1 in Kempton et al. (2018). The authors obtained TSM values of 65 ± 10, 89 ± 16, and 67 ± 14 for planets b, c, and d, respectively. We revised these values based on the results of the present work. The improved TSM values (see Table 6) do not reach the recommended threshold of TSM > 90 for planets with radii of 1.5 < R p < 10 R⊕. On the other hand, given that this threshold is set very strictly, we note, in agreement with the discoverers, that planet c could be a feasible target for transmission spectroscopy observations with future atmospheric characterisation missions, such as the planned Ariel space observatory (Tinetti et al. 2021).
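For reference, Eq. 1 of Kempton et al. (2018) can be evaluated as sketched below; the stellar radius and J-band magnitude are assumed placeholders (not values quoted in this section), and T surf from Table 6 is used as a proxy for the equilibrium temperature, so only approximate agreement with the tabulated TSM is expected.

```python
def tsm(r_p_earth, t_eq_K, m_p_earth, r_star_sun, j_mag):
    """Transmission spectroscopy metric, Eq. 1 of Kempton et al. (2018)."""
    # Radius-dependent scale factor from Table 1 of Kempton et al. (2018).
    if r_p_earth < 1.5:
        s = 0.190
    elif r_p_earth < 2.75:
        s = 1.26
    elif r_p_earth < 4.0:
        s = 1.28
    else:
        s = 1.15  # valid up to 10 R_earth
    return s * r_p_earth**3 * t_eq_K / (m_p_earth * r_star_sun**2) * 10 ** (-j_mag / 5.0)

# Planet c with the refined radius and estimated mass from this work;
# R_star = 1.1 R_sun and J = 7.25 mag are assumed placeholders.
print(round(tsm(2.328, 931.0, 6.61, 1.1, 7.25), 1))
```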
Finally, we discuss the relevance of planet d within the known population of similar exoplanets. HD 22946d is a warm sub-Neptune. Based on the NASA Exoplanet Archive 13 (Akeson et al. 2013), there were 5272 confirmed exoplanets as of 22 February 2023, but only 63 of these are sub-Neptune sized (1.75 < R p < 3.5 R⊕) and transit bright stars (G ≤ 10 mag). Only 7 of these 63 planets have orbital periods longer than 30 days, and only 4 of those 7 have an equilibrium temperature below 550 K. Three planets have a lower insolation flux than planet d, namely TOI-2076d (Osborn et al. 2022), HD 28109d (Dransfield et al. 2022), and HD 191939 (Badenas-Agusti et al. 2020). HD 22946d is therefore an interesting target for future follow-up observations. One of the questions to be answered in the near future concerns the composition and internal structure of sub-Neptune-type planets. Using CHEOPS observations, we determined the radius of planet d with high accuracy. Its true mass could be determined with another 48 ESPRESSO RV measurements, according to the estimate presented above. The combination of mass and radius yields the bulk density, which will be an important step forward towards understanding sub-Neptunes.
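A selection like the one above can be reproduced with a simple filter on an export of the archive's Planetary Systems table; the file name below is a placeholder and the column names follow the archive's conventions (a sketch, not the exact query used in this work).

```python
import pandas as pd

# Placeholder file: a CSV export of the NASA Exoplanet Archive Planetary Systems table.
df = pd.read_csv("planetary_systems.csv")

sub_neptunes = df[
    (df["pl_rade"] > 1.75) & (df["pl_rade"] < 3.5)   # sub-Neptune size
    & (df["sy_gaiamag"] <= 10.0)                     # bright host (G <= 10 mag)
]
long_period = sub_neptunes[sub_neptunes["pl_orbper"] > 30.0]   # P > 30 d
cool = long_period[long_period["pl_eqt"] < 550.0]              # T_eq < 550 K
print(len(sub_neptunes), len(long_period), len(cool))
```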
Conclusions
Based on the combined TESS and CHEOPS observations, we refined several parameters of the HD 22946 planetary system. First of all, we improved the ephemerides of the planetary orbits in comparison with the discovery values. We can confirm that planets b and c have short orbital periods below 10 days, namely 4.040295 ± 0.000015 d and 9.573083 ± 0.000014 d, respectively. The third planet, HD 22946d, has an orbital period of 47.42489 ± 0.00011 d, which we were able to derive based on additional CHEOPS observations. Furthermore, based on the combined TESS and CHEOPS observations, we derived precise radii for the planets, namely 1.362 ± 0.040 R⊕, 2.328 ± 0.039 R⊕, and 2.607 ± 0.060 R⊕ for planets b, c, and d, respectively. On the one hand, we can confirm the conclusion of the discoverers that planet b is a super-Earth, while planets c and d are sub-Neptunes. On the other hand, we find the planet radii to be in tension with the values presented in the discovery paper, which is very probably due to misuse of the software by the discoverers. The low number of ESPRESSO RV measurements allowed us to derive only 3σ upper limits for the planet masses, which are 13.71 M⊕, 9.72 M⊕, and 26.57 M⊕ for planets b, c, and d, respectively.
We also investigated the planets from the viewpoint of possible future follow-up observations. First of all, we can conclude that more RV observations are needed to improve the planet masses in this system. The applied spectroscopic observations allowed us to derive precise stellar parameters of the host star and to fit an initial spectroscopic orbit to the RV data, but there is ample room for improvement in this regard. We estimated that another 48 ESPRESSO RVs are needed to measure the predicted masses of all planets in HD 22946. Planet c could be a suitable target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting, because there are only a few similar confirmed exoplanets to date. Thanks to the synergy of the TESS and CHEOPS missions, there is a growing sample of planets such as HD 22946d. Such objects are worth investigating in the near future, for example in order to probe their composition and internal structure. Finally, future photometric and/or spectroscopic observations could also be aimed at searching for further possible planets in this system.
Notes. Abbreviations refer to the following sources: G2021 = Guerrero et al. (2021), S2018 = Stassun et al. (2018), G2022 = Gaia Collaboration et al. (2022), C2003 = Cutri et al. (2003), C2022 = Cacciapuoti et al. (2022).
in Sect. 2.3). The combined model was built using the PyMC3 package 9 (Salvatier et al. 2016), which performs Hamiltonian Monte Carlo (HMC) sampling, with Keplerian orbits modelled with the exoplanet package 10 (Foreman-Mackey et al. 2021). We used Gaussian processes (GPs) to model the stellar variability present in the TESS light curve, opting for a simple harmonic oscillator (SHO) kernel implemented in the celerite package (Foreman-Mackey et al. 2017).
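For concreteness, a minimal sketch of such an SHO-kernel GP likelihood evaluation with the celerite API of Foreman-Mackey et al. (2017) is shown below; the input file name and all hyperparameter values are illustrative placeholders, not the values used in the actual fit.

```python
import numpy as np
import celerite
from celerite import terms

# SHO kernel; log(S0), log(Q) and log(omega0) are placeholder starting values.
kernel = terms.SHOTerm(log_S0=np.log(1e-6),
                       log_Q=np.log(1.0 / np.sqrt(2.0)),
                       log_omega0=np.log(2.0 * np.pi / 5.0))
gp = celerite.GP(kernel, mean=0.0)

# Placeholder light curve: time [d], normalised flux, flux uncertainty.
t, flux, flux_err = np.loadtxt("tess_lc.txt", unpack=True)
gp.compute(t, flux_err)           # factorise the covariance matrix
print(gp.log_likelihood(flux))    # GP log-likelihood of the light curve
```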
Fig. 1. Observed-minus-calculated (O-C) diagram of mid-transit times of the planets HD 22946b, HD 22946c, and HD 22946d obtained using the Allesfitter package. The O-C values are consistent with a linear ephemeris, which means no significant TTVs are present in the system.
Converting these parameter values to orbital inclination angles, we obtain i = 88.90 (+0.16/−0.05) deg, i = 88.52 (+0.08/−0.07) deg, and i = 89.54 (+0.02/−0.03) deg for planets b, c, and d, respectively. For comparison, we note that the corresponding discovery values are i b = 88.3 (+1.1/−1.2) deg and i c = 88.57 (+0.86/−0.53) deg; the inclination angle of planet d was not determined by C22. According to the improved parameter values, it seems that only the orbits of planets b and c are well aligned; planet d is probably not in the same plane as planets b and c.
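For a circular orbit, these inclinations follow directly from the impact parameter and the scaled semi-major axis via cos i = b / (a/R s). A minimal sketch using the fitted median values from Table 6:

```python
import math

def inclination_deg(b, a_over_rs):
    """Orbital inclination from impact parameter, assuming a circular orbit."""
    return math.degrees(math.acos(b / a_over_rs))

for name, b, a_rs in [("b", 0.21, 11.03), ("c", 0.504, 19.61), ("d", 0.456, 57.00)]:
    print(name, round(inclination_deg(b, a_rs), 2))  # ~88.91, 88.53, 89.54 deg
```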
Fig. 4. Individual CHEOPS observations of the transiting planets HD 22946b, HD 22946c, and HD 22946d. The observed light curves are overplotted with the best-fitting model. This model was derived based on the entire CHEOPS and TESS photometric dataset and the RV observations from ESPRESSO via a joint analysis of the data. The left-hand panels show the non-detrended data overplotted with the full model, while the right-hand panels show the detrended data overplotted with the transit model. In the case of the fourth CHEOPS visit, given the multiple-transit feature, the individual transit models of planets c and d are shown in addition to the summed model.
Table 1. Log of TESS photometric observations of HD 22946.
Table 2. Log of CHEOPS photometric observations of HD 22946.

Visit No. | Start date [UTC] | End date [UTC] | File key | CHEOPS product | Integration time [s] | No. of frames
1 | 2021-10-17 03:22 | 2021-10-17 14:40 | CH_PR100031_TG021201 | Subarray | 2 × 20.0 | 629
1 | 2021-10-17 03:22 | 2021-10-17 14:40 | CH_PR100031_TG021201 | Imagettes | 20.0 | 1258
2 | 2021-10-18 08:14 | 2021-10-18 19:04 | CH_PR100031_TG021101 | Subarray | 2 × 20.0 | 637
2 | 2021-10-18 08:14 | 2021-10-18 19:04 | CH_PR100031_TG021101 | Imagettes | 20.0 | 1274
3 | 2021-10-25 07:08 | 2021-10-25 19:49 | CH_PR110048_TG010001 | Subarray | 2 × 20.4 | 708
3 | 2021-10-25 07:08 | 2021-10-25 19:49 | CH_PR110048_TG010001 | Imagettes | 20.4 | 1416
4 | 2021-10-28 02:12 | 2021-10-28 13:50 | CH_PR110048_TG010101 | Subarray | 2 × 20.4 | 666
4 | 2021-10-28 02:12 | 2021-10-28 13:50 | CH_PR110048_TG010101 | Imagettes | 20.4 | 1332
5 | 2021-10-29 08:48 | 2021-10-29 18:14 | CH_PR100031_TG021202 | Subarray | 2 × 20.0 | 555
5 | 2021-10-29 08:48 | 2021-10-29 18:14 | CH_PR100031_TG021202 | Imagettes | 20.0 | 1110

Notes. The table shows the time interval of the individual observations, the file key, which supports fast identification of the observations in the CHEOPS archive, the type of photometric product (for more details see Sect. 2.2), the applied integration time, the co-added exposures for the subarray-type CHEOPS product, and the number of obtained frames.
Table 3. Log of ESPRESSO/VLT RV observations of HD 22946.

Time [BJD_TDB] | RV value [km s−1] | ±1σ [km s−1]
2458524.56069831 | 16.85125 | 0.00011
2458525.55490396 | 16.85217 | 0.00013
2458526.59541816 | 16.85512 | 0.00011
2458527.63233315 | 16.85284 | 0.00022
2458535.62345024 | 16.84839 | 0.00036
2458540.53620531 | 16.85020 | 0.00010
2458550.57174504 | 16.85549 | 0.00020
2458552.56783808 | 16.85330 | 0.00016
2458553.51738686 | 16.86251 | 0.00011
2458556.50131285 | 16.85536 | 0.00014
2458557.50492574 | 16.85738 | 0.00009
2458557.56483059 | 16.85716 | 0.00010
2458558.52709593 | 16.85741 | 0.00010
2458559.54006749 | 16.85690 | 0.00016

with a G2 numerical mask. We list the ESPRESSO RV measurements in Table 3.
Table 5. Orbital period aliases of the planet HD 22946d.

Alias No. | Period alias (P) [d] | Probability (p) [%]
1 | 39.5206 | 17.420
2 | 41.8454 | 20.078
3 | 44.4607 | 20.341
4 | 47.4248 | 18.113
5 | 50.8122 | 13.445
6 | 54.7209 | 7.061
7 | 59.2809 | 2.756
8 | 64.6701 | ∼ 1.0

Notes. Only the period aliases with a probability of p > 1% are listed here, as calculated by the MonoTools package from TESS data alone, i.e. before the CHEOPS observations.
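For context, the aliases in Table 5 follow the characteristic duotransit pattern: two transits separated by a gap Δt admit the periods P n = Δt/n for integer n. A minimal sketch reproducing the tabulated values (the value of Δt below is inferred from the aliases themselves and is an assumption, not a number quoted in this work):

```python
# Duotransit period aliases: two observed transits separated by dt admit P = dt / n.
dt = 711.37                 # days; inferred from the tabulated aliases (assumption)
p_min, p_max = 35.0, 70.0   # plausible period window (assumption)

aliases = [dt / n for n in range(1, 200) if p_min <= dt / n <= p_max]
print([round(p, 4) for p in aliases])
# -> [64.67, 59.28, 54.72, 50.81, 47.42, 44.46, 41.85, 39.52] (cf. Table 5)
```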
Table 6. Best-fitting and derived system and planetary parameters of the HD 22946 planetary system.

Parameter [unit] | Description | HD 22946b | HD 22946c | HD 22946d
T_c [BJD_TDB] | reference mid-transit time | 2458385.7321 (+0.0022/−0.0031) | 2459161.60861 (+0.00069/−0.00072) | 2459136.53720 (+0.00087/−0.00083)
P_orb [d] | orbital period | 4.040295 (+0.000015/−0.000014) | 9.573083 ± 0.000014 | 47.42489 (+0.00010/−0.00011)
b | impact parameter | 0.21 (+0.11/−0.13) | 0.504 (+0.024/−0.026) | 0.456 (+0.026/−0.028)
a/R_s | scaled semi-major axis | 11.03 ± 0.12 | 19.61 (+0.22/−0.23) | 57.00 ± 0.66
a [au] | semi-major axis | 0.05727 (+0.00085/−0.00082) | 0.1017 (+0.0015/−0.0014) | 0.2958 (+0.0044/−0.0042)
R_p/R_s | planet-to-star radius ratio | 0.01119 (+0.00031/−0.00032) | 0.01912 (+0.00026/−0.00027) | 0.02141 (+0.00046/−0.00045)
t_dur [d] | transit duration | 0.1281 (+0.0026/−0.0037) | 0.1535 (+0.0015/−0.0014) | 0.2701 (+0.0019/−0.0020)
R_p [R⊕] | planet radius | 1.362 ± 0.040 | 2.328 (+0.038/−0.039) | 2.607 ± 0.060
M_p [M⊕] | planet mass ⋆ | 13.71 | 9.72 | 26.57
M_p,est [M⊕] | estimated planet mass ♣ | 2.42 ± 0.12 | 6.04 ± 0.17 | 7.32 ± 0.28
M_p,est [M⊕] | estimated planet mass ♡ | 2.61 ± 0.27 † | 6.61 ± 0.17 ‡ | 7.90 ± 0.28 ‡
ρ_p [g cm−3] | planet density ⋆ | 18.96 | 3.15 | 10.80
K [m s−1] | RV semi-amplitude ⋆ | 5.05 | 2.70 | 4.31
I_p [W m−2] | insolation flux | 673884 (+32444/−31606) | 213337 (+10270/−10006) | 25261 (+1216/−1184)
T_surf [K] | surface temperature ⋄ | 1241 ± 14 | 931 ± 11 | 546 ± 6
TSM | transmission spectroscopy metric ♯ | 43 ± 4 | 63 ± 2 | 43 ± 2
x_mean,1st,CHEOPS [ppt] | average offset of the median-normalised flux, 1st CHEOPS visit | 0.007 ± 0.011
x_mean,2nd,CHEOPS [ppt] | average offset of the median-normalised flux, 2nd CHEOPS visit | 0.066 ± 0.011
x_mean,3rd,CHEOPS [ppt] | average offset of the median-normalised flux, 3rd CHEOPS visit | 0.039 ± 0.010
x_mean,4th,CHEOPS [ppt] | average offset of the median-normalised flux, 4th CHEOPS visit | 0.314 ± 0.014
x_mean,5th,CHEOPS [ppt] | average offset of the median-normalised flux, 5th CHEOPS visit | 0.061 ± 0.011
x_mean,TESS [ppt] | average offset of the median-normalised flux in TESS data | 0.0179 (+0.0085/−0.0084)

Notes. Based on the joint fit of the TESS and CHEOPS photometric data, and RV observations. The table shows the best-fitting value of the given parameter and its ±1σ uncertainty. The fitted values correspond to quantile 0.50 (median) and the uncertainties to quantiles ±0.341 in the parameter distributions obtained from the samples. ⋆ 3σ upper limit, or derived from it. ♣ Estimated using the relations of Chen & Kipping (2017). ♡ Estimated using the relations of Otegi et al. (2020).
Fig. 5. RV observations taken at ESPRESSO fitted with a spectroscopic orbit (red line). The 1σ and 2σ uncertainties of the model are plotted as coloured areas. The uncertainties of the individual RV data points correspond to a 3σ interval. [Axes: RV [m/s] versus BJD_TDB − 2457000; legend: ±1σ, ±2σ, Obs. with ±3σ, RV model.]
See https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html.
A visit is a sequence of successive CHEOPS orbits devoted to observing a given target.
See https://github.com/alphapsa/PIPE.
4 See https://github.com/pmaxted/pycheops.
5 See https://kesprint.science/.
The last version, ARES v2, can be downloaded at https://github.com/sousasag/ARES.
PAdova and TRieste Stellar Evolutionary Code: http://stev.oapd.inaf.it/cgi-bin/cmd
9 See https://pypi.org/project/pymc3/.
10 See https://pypi.org/project/exoplanet/.
See https://exoplanetarchive.ipac.caltech.edu/index.html.
Acknowledgements. We thank the anonymous reviewer for the helpful comments and suggestions. CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. ZG acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, the PRODEX Experiment Agreement No. 4000137122 between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the VEGA grant of the Slovak Academy of Sciences No. 2/0031/22, the Slovak Research and Development Agency contract No. APVV-20-0148, and the support of the city of Szombathely. GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Institute Agreement between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the Lendület LP2018-7/2021 grant of the Hungarian Academy of Science and the support of the city of Szombathely. ABr was supported by the SNSA. ACC acknowledges support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. B.-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726. DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 'Gaseous or rocky? Unveiling the nature of small worlds'. This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127). This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B-C33, as well as the support of the Generalitat de Catalunya/CERCA programme. This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. KGI and MNG are the ESA CHEOPS Project Scientists and are responsible for the ESA CHEOPS Guest Observers Programme. They do not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme. The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation; L.D. is an F.R.S.-FNRS Postdoctoral Researcher. LMS gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 'Gaseous or rocky? Unveiling the nature of small worlds'. This project was supported by the CNES. MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18). M.G. is an F.R.S.-FNRS Senior Research Associate. ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2_194576. NAW acknowledges UKSA grant ST/R004838/1. This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER-032113, PTDC/FIS-AST/28953/2017 & POCI-01-0145-FEDER-028953, PTDC/FIS-AST/28987/2017 & POCI-01-0145-FEDER-028987. O.D.S.D. is supported in the form of a work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT. PM acknowledges support from STFC research grant number ST/M001040/1. We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-C2-1-R, ESP2016-80435-C2-2-R, PGC2018-098153-B-C33, PGC2018-098153-B-C31, ESP2017-87676-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiología (INTA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The MOC activities have been supported by the ESA contract No. 4000124370. SH gratefully acknowledges CNES funding through the grant 837319. S.C.C.B. acknowledges support from FCT through FCT contract nr. IF/01312/2014/CP1215/CT0004. S.G.S. acknowledges support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). ACC and TW acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. V.V.G. is an F.R.S-FNRS Research Associate. XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members. YA and MJH acknowledge the support of the Swiss National Fund under grant 200020_172746. LBo, VNa, IPa, GPi, RRa and GSc acknowledge support from CHEOPS ASI-INAF agreement n. 2019-29-HH.0. NCS acknowledges support from the European Research Council through the grant agreement 101052347 (FIERCE). This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UIDB/04434/2020; UIDP/04434/2020. AT thanks the Science and Technology Facilities Council (STFC) for a PhD studentship. P.E.C. is funded by the Austrian Science Fund (FWF) Erwin Schroedinger Fellowship, program J4595-N.
Akeson, R. L., Chen, X., Ciardi, D., et al. 2013, PASP, 125, 989
Allard, F. 2014, in Exploring the Formation and Evolution of Planetary Systems, ed. M. Booth, B. C. Matthews, & J. R. Graham, Vol. 299, 271-272
Badenas-Agusti, M., Günther, M. N., Daylan, T., et al. 2020, AJ, 160, 113
Benz, W., Broeg, C., Fortier, A., et al. 2021, Experimental Astronomy, 51, 109
Bonfanti, A., Delrez, L., Hooton, M. J., et al. 2021, A&A, 646, A157
Bonfanti, A., Ortolani, S., & Nascimbeni, V. 2016, A&A, 585, A5
Bonfanti, A., Ortolani, S., Piotto, G., & Nascimbeni, V. 2015, A&A, 575, A18
Cacciapuoti, L., Inno, L., Covone, G., et al. 2022, A&A, 668, A85
Castelli, F. & Kurucz, R. L. 2003, in IAU Symposium, Vol. 210, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, A20
Chen, J. & Kipping, D. 2017, ApJ, 834, 17
Claret, A. 2018, A&A, 618, A20
Claret, A. 2021, Research Notes of the American Astronomical Society, 5, 13
Cloutier, R., Doyon, R., Bouchy, F., & Hébrard, G. 2018, AJ, 156, 82
Cooke, B. F., Pollacco, D., Anderson, D. R., et al. 2021, MNRAS, 500, 5088
Cooke, B. F., Pollacco, D., & Bayliss, D. 2019, A&A, 631, A83
Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al. 2003, VizieR Online Data Catalog, II/246
Delrez, L., Ehrenreich, D., Alibert, Y., et al. 2021, Nature Astronomy, 5, 775
Dobos, V., Charnoz, S., Pál, A., Roque-Bernard, A., & Szabó, G. M. 2021, PASP, 133, 094401
Dransfield, G., Triaud, A. H. M. J., Guillot, T., et al. 2022, MNRAS, 515, 1328
Foreman-Mackey, D., Agol, E., Ambikasaran, S., & Angus, R. 2017, AJ, 154, 220
Foreman-Mackey, D., Luger, R., Agol, E., et al. 2021, The Journal of Open Source Software, 6, 3285
Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1
Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2022, arXiv e-prints, arXiv:2208.00211
Guerrero, N. M., Seager, S., Huang, C. X., et al. 2021, ApJS, 254, 39
Günther, M. N. & Daylan, T. 2019, Allesfitter: Flexible Star and Exoplanet Inference From Photometry and Radial Velocity, Astrophysics Source Code Library
Günther, M. N. & Daylan, T. 2021, ApJS, 254, 13
Ho, C. S. K. & Van Eylen, V. 2023, MNRAS, 519, 4056
Hord, B. J., Colón, K. D., Berger, T. A., et al. 2022, AJ, 164, 13
Hoyer, S., Guterman, P., Demangeon, O., et al. 2020, A&A, 635, A24
Kempton, E. M. R., Bean, J. L., Louie, D. R., et al. 2018, PASP, 130, 114401
Kipping, D. 2018, Research Notes of the American Astronomical Society, 2, 223
Kipping, D. M., Spiegel, D. S., & Sasselov, D. D. 2013, MNRAS, 434, 1883
Kurucz, R. L. 1993a, SYNTHE spectrum synthesis programs and line data
Kurucz, R. L. 1993b, SYNTHE spectrum synthesis programs and line data (Astrophysics Source Code Library)
Lacedelli, G., Malavolta, L., Borsato, L., et al. 2021, MNRAS, 501, 4148
Lacedelli, G., Wilson, T. G., Malavolta, L., et al. 2022, MNRAS, 511, 4551
Lindegren, L., Bastian, U., Biermann, M., et al. 2021, A&A, 649, A4
Lissauer, J. J., Marcy, G. W., Rowe, J. F., et al. 2012, ApJ, 750, 112
Luque, R. & Pallé, E. 2022, Science, 377, 1211
Marigo, P., Girardi, L., Bressan, A., et al. 2017, ApJ, 835, 77
Martinez, C. F., Cunha, K., Ghezzi, L., & Smith, V. V. 2019, ApJ, 875, 29
Maxted, P. F. L., Ehrenreich, D., Wilson, T. G., et al. 2022, MNRAS, 514, 77
Mayor, M. & Queloz, D. 1995, Nature, 378, 355
Nesvorný, D. & Morbidelli, A. 2008, ApJ, 688, 636
Osborn, H. P., Bonfanti, A., Gandolfi, D., et al. 2022, A&A, 664, A156
Osborn, H. P., Nowak, G., Hébrard, G., et al. 2023, MNRAS, submitted
Otegi, J. F., Bouchy, F., & Helled, R. 2020, A&A, 634, A43
Owen, J. E. 2019, Annual Review of Earth and Planetary Sciences, 47, 67
Pepe, F., Molaro, P., Cristiani, S., et al. 2014, Astronomische Nachrichten, 335, 8
Ricker, G. R. 2014, Journal of the American Association of Variable Star Observers (JAAVSO), 42, 234
Salmon, S. J. A. J., Van Grootel, V., Buldgen, G., Dupret, M. A., & Eggenberger, P. 2021, A&A, 646, A7
Salvatier, J., Wiecki, T. V., & Fonnesbeck, C. 2016, PyMC3: Python probabilistic programming framework, Astrophysics Source Code Library, record ascl:1610.016
Santos, N. C., Sousa, S. G., Mortier, A., et al. 2013, A&A, 556, A150
Scuflaire, R., Théado, S., Montalbán, J., et al. 2008, Ap&SS, 316, 83
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
Sneden, C. A. 1973, PhD thesis, The University of Texas at Austin
Sousa, S. G. 2014, in Determination of Atmospheric Parameters of B-, A-, F- and G-Type Stars, 297-310
Sousa, S. G., Adibekyan, V., Delgado-Mena, E., et al. 2021, A&A, 656, A53
Sousa, S. G., Santos, N. C., Adibekyan, V., Delgado-Mena, E., & Israelian, G. 2015, A&A, 577, A67
Sousa, S. G., Santos, N. C., Israelian, G., Mayor, M., & Monteiro, M. J. P. F. G. 2007, A&A, 469, 783
Stassun, K. G., Oelkers, R. J., Pepper, J., et al. 2018, AJ, 156, 102
Stefánsson, G., Kopparapu, R., Lin, A., et al. 2020, AJ, 160, 259
Steffen, J. H. & Hwang, J. A. 2015, MNRAS, 448, 1956
Szabó, G. M., Gandolfi, D., Brandeker, A., et al. 2021, A&A, 654, A159
Szabó, G. M., Garai, Z., Brandeker, A., et al. 2022, A&A, 659, L7
Tinetti, G., Eccleston, P., Haswell, C., et al. 2021, arXiv e-prints, arXiv:2104.04824
Tuson, A., Queloz, D., Osborn, H. P., et al. 2023, MNRAS, submitted
Ulmer-Moll, S., Osborn, H. P., Tuson, A., et al. 2023, A&A, submitted
Van Eylen, V., Agentoft, C., Lundkvist, M. S., et al. 2018, MNRAS, 479, 4786
Van Eylen, V. & Albrecht, S. 2015, ApJ, 808, 126
Van Eylen, V., Albrecht, S., Huang, X., et al. 2019, AJ, 157, 61
Weiss, L. M., Marcy, G. W., Petigura, E. A., et al. 2018, AJ, 155, 48
Wilson, T. G., Goffo, E., Alibert, Y., et al. 2022, MNRAS, 511, 1043
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868

1 MTA-ELTE Exoplanet Research Group, 9700 Szombathely, Szent Imre h. u. 112, Hungary; e-mail: [email protected]
2 ELTE Gothard Astrophysical Observatory, 9700 Szombathely, Szent Imre h. u. 112, Hungary
3 Astronomical Institute, Slovak Academy of Sciences, 05960 Tatranská Lomnica, Slovakia
4 Physikalisches Institut, University of Bern, Gesellsschaftstrasse 6, 3012 Bern, Switzerland
5 Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
6 Dipartimento di Fisica, Universita degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy
7 Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden
8 Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal
9 Astrophysics Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, UK
10 Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium
11 Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria
12 Observatoire Astronomique de l'Université de Genève, Chemin Pegasi 51, Versoix, Switzerland
13 Centre for Exoplanet Science, SUPA School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK
14 Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, 8042 Graz, Austria
15 INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy
| [
"https://github.com/hposborn/MonoTools.",
"https://github.com/alphapsa/PIPE.",
"https://github.com/pmaxted/pycheops."
] |
[
"Melvin Space-times in Supergravity",
"Melvin Space-times in Supergravity"
] | [
"W A Sabra \nPhysics Department\nAmerican University of Beirut\nLebanon\n"
] | [
"Physics Department\nAmerican University of Beirut\nLebanon"
] | [] | We consider Melvin-like cosmological and static solutions for the theories of N = 2, D = 4 supergravity coupled to vector multiplets. We analyze the equations of motion and give some explicit solutions with one scalar and two gauge fields. Generalized Melvin solutions with four charges are also constructed for an embedding of a truncated N = 8 supergravity theory. Our results are then extended to supergravity theories with the scalar manifolds SL(N, R)/SO(N, R). It is shown that solutions with N charges only exist for N = 8, 6 and 5 corresponding to theories with space-time dimensions D = 4, 5 and 7. | null | [
"https://export.arxiv.org/pdf/2306.04427v1.pdf"
] | 259,095,752 | 2306.04427 | c0e6fdab383e4cee53b8f33f2ab6105d7ccd6f7b |
Melvin Space-times in Supergravity
7 Jun 2023
W A Sabra
Physics Department
American University of Beirut
Lebanon
We consider Melvin-like cosmological and static solutions for the theories of N = 2, D = 4 supergravity coupled to vector multiplets. We analyze the equations of motion and give some explicit solutions with one scalar and two gauge fields. Generalized Melvin solutions with four charges are also constructed for an embedding of a truncated N = 8 supergravity theory. Our results are then extended to supergravity theories with the scalar manifolds SL(N, R)/SO(N, R). It is shown that solutions with N charges only exist for N = 8, 6 and 5 corresponding to theories with space-time dimensions D = 4, 5 and 7.
Introduction
Recently an active area of research has been the study of supersymmetric gravitational backgrounds in supergravity theories with various space-time dimensions and signatures [1]. Our present work will focus only on non-supersymmetric time-dependent and static solutions in some supergravity theories. Time-dependent solutions in string theory and their relevance to questions in cosmology have been considered by many authors (see for example [2][3][4][5][6][7] and references therein).
Many years ago, an interesting class of vacuum solutions for Einstein gravity depending on one variable was constructed by Kasner [8]. Related solutions were also found by several authors [9][10][11][12]. The Kasner metric can be generalized to all space-time signatures and dimensions [13]. The D-dimensional Kasner vacuum solution can take the form
$$ds^2 = \epsilon_0\, d\tau^2 + \sum_{i=1}^{D-1} \epsilon_i\, \tau^{2a_i}\, dx_i^2 \tag{1.1}$$
where $\epsilon_0, \epsilon_i = \pm 1$. The so-called Kasner exponents satisfy
$$\sum_{i=1}^{D-1} a_i = \sum_{i=1}^{D-1} a_i^2 = 1\,. \tag{1.2}$$
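As a quick numerical illustration of (1.2), the following sketch (not part of the original analysis; the sample exponent sets are illustrative) verifies the two constraints:

```python
def is_kasner(exponents, tol=1e-12):
    """Check sum(a_i) = sum(a_i^2) = 1 for a set of Kasner exponents."""
    s1 = sum(exponents)
    s2 = sum(a * a for a in exponents)
    return abs(s1 - 1.0) < tol and abs(s2 - 1.0) < tol

print(is_kasner([-1.0/3.0, 2.0/3.0, 2.0/3.0]))  # classic D = 4 vacuum example -> True
print(is_kasner([1.0, 0.0, 0.0]))               # flat (Milne-like) exponents -> True
```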
Four-dimensional charged Kasner universes of the form (1.1) with fixed exponents, as solutions admitting Killing spinors, were considered in [14]. Generalized Melvin fluxtubes, domain walls and cosmologies for Einstein-Maxwell-dilaton theories, and the correspondence among them, were explored in [15]. These generalized solutions were obtained by applying a solution-generating technique to seed Levi-Civita and Kasner space-times. The results of [15] were later extended to gravitational theories with a dilaton and an arbitrary-rank antisymmetric tensor in [16]. There, explicit solutions were constructed for the various D = 11 supergravity and type II supergravity theories constructed by Hull [17]. Moreover, Melvin space-times were studied in [18] for the ungauged five-dimensional N = 2 supergravity theory. The equations of the very special geometry underlying the structure of the five-dimensional theories turned out to be extremely useful in the analysis of the equations of motion and in the construction of the solutions.
Our present work shall mainly extend the results of [15,16,18] and deal with non-supersymmetric Melvin-like cosmological and static solutions of the theories of ungauged four-dimensional N = 2 supergravity coupled to vector multiplets in arbitrary space-time signature [19]. In these theories, the geometry of the scalar fields is a direct product of the geometry of the vector multiplet scalars and that of the hypermultiplet scalars. In the present work, the hypermultiplets are ignored and only the scalars of the vector multiplets are kept. For a detailed study of the extension of special geometry to theories with Euclidean and neutral signatures, we refer the reader to [20][21][22][23][24]. The vector multiplet sector of four-dimensional supergravity in arbitrary space-time signature has been considered in [25] via the reduction of Hull's eleven-dimensional supergravity theories on Calabi-Yau threefolds, followed by a reduction on spacelike and timelike circles. Moreover, four-dimensional N = 2 supergravity coupled to vector and hypermultiplets in signatures (0,4), (1,3) and (2,2) was obtained via the compactification of type-II string theories with signatures (0,10), (1,9) and (2,8) on Calabi-Yau threefolds [26].
Our work is planned as follows. In the next section we briefly present some of the basic properties of the ungauged four-dimensional N = 2 supergravity theories and their equations of motion for the metric, gauge and scalar fields. In section three, we perform an analysis of the equations of motion and derive our solutions. Section four contains some explicit solutions for two inequivalent Lorentzian supergravity models where the scalar manifold is given by SL(2, R)/SO(2) and for an embedding of a truncated N = 8 supergravity. In section five, we present two classes of solutions for supergravity theories where the scalars lie in the coset SL(N, R)/SO(N, R). It is demonstrated that solutions with N charges exist only for N = 8, 6 and 5 corresponding to space-time dimensions D = 4, 5 and 7. Our results are summarized in section six.
4D Supergravity
The Lagrangian of the general theory of ungauged N = 2, D = 4 supergravity theories can be given by
$$\mathcal{L}_4 = \sqrt{|g|}\left[ R - 2 Q_{IJ}\, \partial_\mu X^I \partial^\mu \bar X^J - \frac{\alpha}{2}\left( \operatorname{Im}\mathcal{N}_{IJ}\, F^I \cdot F^J + \operatorname{Re}\mathcal{N}_{IJ}\, F^I \cdot \tilde F^J \right)\right]. \tag{2.1}$$
The theory has n + 1 gauge fields A I , (F I = dA I ) and X I are functions of n complex scalar fields z a . The details of the Lorentzian four-dimensional supergravity theories and their formulation in terms of special geometry can be found in [19]. Our analysis is not restricted to Lorentzian theories and is valid for all space-time signatures. The parameter α takes the values ±1. Roughly speaking, the theories of Euclidean and (2, 2) signature can be obtained by replacing the complex structure with a paracomplex structure [20][21][22][23][24].
To use a unified description, we define $i_\epsilon$, which satisfies $i_\epsilon^2 = \epsilon$ and $\bar i_\epsilon = -i_\epsilon$. Here $\epsilon = 1$ for the theories with Euclidean and neutral signature and $\epsilon = -1$ for the Lorentzian theory. We note the relation
$$Q_{IJ}\, \partial_\mu X^I \partial^\mu \bar X^J = g_{a\bar b}\, \partial_\mu z^a \partial^\mu \bar z^{\bar b} \tag{2.2}$$
where $g_{a\bar b} = \partial_a \partial_{\bar b} K$ is the Kähler metric and $K$ is the Kähler potential of the supergravity theory.
In a formulation of special geometry, one relates the coordinates $X^I$ to the covariantly holomorphic sections
$$V = \begin{pmatrix} L^I \\ M_I \end{pmatrix}, \quad I = 0, \ldots, n, \qquad D_{\bar a} V = \left(\partial_{\bar a} - \tfrac{1}{2}\,\partial_{\bar a} K\right) V = 0, \tag{2.3}$$
obeying the constraint
$$i_\epsilon \langle V, \bar V \rangle = i_\epsilon\left( L^I \bar M_I - \bar L^I M_I \right) = 1, \tag{2.4}$$
by
$$\Omega = e^{-K/2}\, V = \begin{pmatrix} X^I \\ F_I \end{pmatrix}, \qquad \partial_{\bar a}\Omega = 0\,. \tag{2.5}$$
The Kähler potential is given by
$$e^{-K} = i_\epsilon\left( \bar X^I F_I - X^I \bar F_I \right). \tag{2.6}$$
We also have the relations
$$M_I = \mathcal{N}_{IJ} L^J, \qquad D_a M_I = \bar{\mathcal{N}}_{IJ}\, D_a L^J, \qquad D_a = \partial_a + \tfrac{1}{2}\,\partial_a K\,. \tag{2.7}$$
In cases where the N = 2 supergravity models can be described in terms of a holomorphic homogeneous prepotential $F = F(X^I)$ of degree two, we have
$$F_I = \frac{\partial F}{\partial X^I}, \qquad F_{IJ} = \frac{\partial^2 F}{\partial X^I \partial X^J}, \quad \text{etc.}$$
Here we list the following useful relations:
$$F = \tfrac{1}{2}\, F_I X^I, \qquad F_I = F_{IJ} X^J, \qquad X^I F_{IJK} = 0, \qquad F_I\, \partial_\mu X^I = X^I \partial_\mu F_I\,. \tag{2.8}$$
The scalar and gauge couplings appearing in the Lagrangian are given by
$$\mathcal{N}_{IJ} = \bar F_{IJ} - \epsilon\, i_\epsilon\, \frac{(NX)_I (NX)_J}{(XNX)}, \qquad Q_{IJ} = e^{K} N_{IJ} + e^{2K}\, (N\bar X)_I (NX)_J, \tag{2.9}$$
where
$$N_{IJ} = i_\epsilon\left( F_{IJ} - \bar F_{IJ} \right), \tag{2.10}$$
and with the notation $(NX)_I = N_{IJ} X^J$, $(N\bar X)_I = N_{IJ} \bar X^J$, $(XNX) = X^I X^J N_{IJ}$.
The gauge field equations of motion derived from (2.1) are given by
$$\partial_\mu\left[ \sqrt{|\det g|}\left( \operatorname{Im}\mathcal{N}_{IJ}\, F^{J\,\mu\nu} - \tfrac{1}{2}\, \operatorname{Re}\mathcal{N}_{IJ}\, \epsilon^{\mu\nu\rho\sigma} F^J_{\rho\sigma} \right)\right] = 0\,. \tag{2.11}$$
The field equations of the scalars $z^a$ and $\bar z^{\bar a}$ are
$$\left[ \frac{1}{\sqrt{|\det g|}}\, \partial_\mu\!\left( \sqrt{|\det g|}\, Q_{IJ}\, \partial^\mu \bar X^J \right) - \left(\partial_L Q_{IJ}\right) \partial_\mu X^I \partial^\mu \bar X^J \right] \partial_a \bar X^L - \alpha\left[ \tfrac{1}{4}\, \partial_L\!\left(\operatorname{Im}\mathcal{N}_{IJ}\right) F^I_{\mu\nu} F^{J\,\mu\nu} + \frac{1}{8\sqrt{|\det g|}}\, \partial_L\!\left(\operatorname{Re}\mathcal{N}_{IJ}\right) \epsilon^{\mu\nu\rho\sigma} F^I_{\mu\nu} F^J_{\rho\sigma} \right] \partial_a X^L = 0 \tag{2.12}$$
and
$$\left[ \frac{1}{\sqrt{|\det g|}}\, \partial_\mu\!\left( \sqrt{|\det g|}\, Q_{LJ}\, \partial^\mu X^J \right) - \left(\partial_{\bar L} Q_{IJ}\right) \partial_\mu X^I \partial^\mu \bar X^J \right] \partial_a X^L - \alpha\left[ \tfrac{1}{4}\, \partial_{\bar L}\!\left(\operatorname{Im}\mathcal{N}_{IJ}\right) F^I_{\mu\nu} F^{J\,\mu\nu} + \frac{1}{8\sqrt{|\det g|}}\, \partial_{\bar L}\!\left(\operatorname{Re}\mathcal{N}_{IJ}\right) \epsilon^{\mu\nu\rho\sigma} F^I_{\mu\nu} F^J_{\rho\sigma} \right] \partial_a \bar X^L = 0\,. \tag{2.13}$$
The Einstein equations of motion are given by
$$R_{\mu\nu} = 2 g_{a\bar b}\, \partial_\mu z^a \partial_\nu \bar z^{\bar b} + \alpha \operatorname{Im}\mathcal{N}_{IJ}\left( F^I_{\mu\lambda} F^{J\ \lambda}_{\nu} - \tfrac{1}{4}\, g_{\mu\nu}\, F^I_{\alpha\beta} F^{J\,\alpha\beta} \right). \tag{2.14}$$
Solutions
We consider solutions of the form
$$ds^2 = e^{2U}\left( \epsilon_0\, d\tau^2 + \epsilon_1 \tau^{2a}\, dx^2 + \epsilon_2 \tau^{2b}\, dy^2 \right) + e^{-2U} \epsilon_3\, \tau^{2c}\, dz^2 \tag{3.1}$$
where $U$ is a function of $\tau$. For a non-vanishing $F^I_{\tau z}$, we obtain from (2.11) the solutions
$$\operatorname{Im}\mathcal{N}_{IJ}\, F^J_{\tau z} = q_I\, \tau^{2c-1}\, e^{-2U} \tag{3.2}$$
where $q_I$ are constants. The non-vanishing components of the Ricci tensor for the metric (3.1) are given by
$$R_{\tau\tau} = -\ddot U - (1 - 4c)\frac{\dot U}{\tau} - 2\dot U^2, \qquad R_{xx} = -\epsilon_0 \epsilon_1\, \tau^{2a}\left( \ddot U + \frac{\dot U}{\tau} \right),$$
$$R_{yy} = -\epsilon_0 \epsilon_2\, \tau^{2b}\left( \ddot U + \frac{\dot U}{\tau} \right), \qquad R_{zz} = \epsilon_0 \epsilon_3\, e^{-4U} \tau^{2c}\left( \ddot U + \frac{\dot U}{\tau} \right). \tag{3.3}$$
Using (3.2) and (3.3) and the Einstein equations of motion (2.14), we obtain, for real scalars, the two equations
$$\ddot U + (1 - 2c)\frac{\dot U}{\tau} + \dot U^2 = -Q_{IJ}\, \dot X^I \dot X^J, \tag{3.4}$$
$$\ddot U + \frac{\dot U}{\tau} = \frac{\alpha}{2}\, \epsilon_3\, e^{-2U}\, \left(\operatorname{Im}\mathcal{N}\right)^{IJ} q_I q_J\, \tau^{2c-2}\,. \tag{3.5}$$
We now employ the relations of special geometry in the analysis of the equations (3.4) and (3.5). For real X I , the prepotential F and all its derivatives are purely imaginary. In this case, we obtain from (2.9) the following relations
$$Q_{IJ} = \frac{1}{2F}\left( \frac{F_I F_J}{2F} - F_{IJ} \right), \qquad \mathcal{N}_{IJ} = \frac{F_I F_J}{F} - F_{IJ}\,. \tag{3.6}$$
Using (2.8) and (3.6), we obtain the relation
$$Q_{IJ}\, \dot X^I \dot X^J = \frac{1}{2F}\left( \frac{\dot F^2}{2F} - \ddot F + X^I \ddot F_I \right). \tag{3.7}$$
As an ansatz for our solution we take
$$e^{2U} = 4 i_\epsilon F, \tag{3.8}$$
then we obtain from (3.7) and special geometry the relations
$$Q_{IJ}\, \dot X^I \dot X^J = -\ddot U - \dot U^2 + 2 e^{-2U}\left(\operatorname{Im}\mathcal{N}\right)^{IJ} \dot F_I \dot F_J, \qquad \dot U = 2 e^{-2U}\left(\operatorname{Im}\mathcal{N}\right)^{IJ} \dot F_I F_J,$$
$$\ddot U = 2 e^{-2U}\left(\operatorname{Im}\mathcal{N}\right)^{IJ}\left( F_J \ddot F_I - \dot F_J \dot F_I \right). \tag{3.9}$$
Using these relations in (3.4) and (3.5), we finally obtain
$$(1 - 2c)\, \frac{X^I \dot F_I}{\tau} + X^I \ddot F_I = 0, \qquad 2\left(\operatorname{Im}\mathcal{N}\right)^{IJ}\left( \dot F_J \dot F_I - F_J \ddot F_I - \frac{F_J \dot F_I}{\tau} \right) + \frac{\alpha}{2}\, \epsilon_3\left(\operatorname{Im}\mathcal{N}\right)^{IJ} q_I q_J\, \tau^{2c-2} = 0\,. \tag{3.10}$$
The first equation in (3.10) can be solved by
$$F_I = \tfrac{1}{2}\, \epsilon\, i_\epsilon\left( A_I + B_I\, \tau^{2c} \right) \tag{3.11}$$
with constants A I and B I. The second equation then reduces to the algebraic condition
$$\left(\operatorname{Im}\mathcal{N}\right)^{IJ}\left( 2c^2 \epsilon\, A_J B_I - \frac{\alpha}{2}\, \epsilon_3\, q_I q_J \right) = 0\,. \tag{3.12}$$
It remains to analyze the scalar equations of motion. After some calculation employing the equations of special geometry, the scalar equations of motion reduce to the algebraic condition
$$\partial_a\left[\left(\operatorname{Im}\mathcal{N}\right)^{IJ}\left( 2c^2 \epsilon\, A_J B_I - \frac{\alpha}{2}\, \epsilon_3\, q_I q_J \right)\right] = 0\,. \tag{3.13}$$
A dual solution can be obtained where we have a non-vanishing F I xy = p I . In this case, X I are imaginary. The analysis of Einstein equations of motion then gives
$$\ddot U + (1 - 2c)\frac{\dot U}{\tau} + \dot U^2 = Q_{IJ}\, \dot X^I \dot X^J, \qquad \ddot U + \frac{\dot U}{\tau} = -\frac{1}{2}\, \epsilon_0 \epsilon_1 \epsilon_2\, \alpha\, e^{-2U}\, \tau^{-2(a+b)}\, \operatorname{Im}\mathcal{N}_{IJ}\, p^I p^J\,. \tag{3.14}$$
Again we take the solution
$$e^{2U} = -4 i_\epsilon F\,. \tag{3.15}$$
Special geometry relations give the following equations:
$$\dot U = -2 i_\epsilon\, \dot X^I F_I\, e^{-2U}, \qquad \ddot U = -2 i_\epsilon\left( \ddot X^I F_I - N_{IJ}\, \dot X^I \dot X^J \right) e^{-2U}\,. \tag{3.16}$$
Note that in this case the relations (2.9) for imaginary X I imply
$$Q_{IJ}\, \dot X^I \dot X^J = -\frac{1}{2F}\, N_{IJ}\, \dot X^I \dot X^J + \dot U^2\,. \tag{3.17}$$
Consequently, using (3.16) and (3.17), the equations (3.14) reduce to
$$\frac{F_I}{F}\left( \ddot X^I + (1 - 2c)\frac{\dot X^I}{\tau} \right) = 0, \qquad \operatorname{Im}\mathcal{N}_{IJ}\left( \ddot X^I X^J - \dot X^I \dot X^J + \frac{X^J \dot X^I}{\tau} \right) = \frac{1}{4}\, \epsilon_0 \epsilon_1 \epsilon_2\, \epsilon\, \alpha\, \operatorname{Im}\mathcal{N}_{IJ}\, \tau^{-2(a+b)}\, p^I p^J\,. \tag{3.18}$$
The first equation in (3.18) admits the solution
$$X^I = \frac{i_\epsilon}{2}\left( A^I + B^I\, \tau^{2c} \right) \tag{3.19}$$
which upon plugging in the second equation of (3.18) gives the algebraic condition
$$\operatorname{Im}\mathcal{N}_{IJ}\left( c^2 B^I A^J - \frac{1}{4}\, \epsilon_0 \epsilon_1 \epsilon_2\, \alpha\, p^I p^J \right) = 0\,. \tag{3.20}$$
Again the analysis of the scalar equation gives, after some calculation involving special geometry relations, the algebraic condition
$$\partial_a\left[\operatorname{Im}\mathcal{N}_{IJ}\left( c^2 B^I A^J - \frac{1}{4}\, \epsilon_0 \epsilon_1 \epsilon_2\, \alpha\, p^I p^J \right)\right] = 0\,. \tag{3.21}$$
Examples
Consider the N = 2 supergravity model with a Lorentzian signature and with the prepotential $F = -i X^0 X^1$. This corresponds to a model where the scalar manifold is given by SL(2, R)/SO(2). For this model we obtain from (3.11) the solution
$$X^1 = \tfrac{1}{2}\left( 1 + B_0\, t^{2c} \right), \qquad X^0 = \tfrac{1}{2}\left( 1 + B_1\, t^{2c} \right). \tag{4.1}$$
Using (3.8) and the algebraic conditions (3.12) and (3.13) (with $\epsilon = \alpha = -\epsilon_3 = -1$), we finally arrive at the metric
$$ds^2 = \left(1 + \frac{q_0^2}{4c^2}\, t^{2c}\right)\left(1 + \frac{q_1^2}{4c^2}\, t^{2c}\right)\left( -dt^2 + t^{2a} dx^2 + t^{2b} dy^2 \right) + \frac{t^{2c}}{\left(1 + \frac{q_0^2}{4c^2}\, t^{2c}\right)\left(1 + \frac{q_1^2}{4c^2}\, t^{2c}\right)}\, dz^2\,. \tag{4.2}$$
Using (3.2) and the second equation in (3.6), we obtain for the gauge fields
$$F^0_{tz} = -\frac{q_0\, t^{2c-1}}{\left(1 + \frac{q_0^2}{4c^2}\, t^{2c}\right)^2}, \qquad F^1_{tz} = -\frac{q_1\, t^{2c-1}}{\left(1 + \frac{q_1^2}{4c^2}\, t^{2c}\right)^2}\,. \tag{4.3}$$
The dual solution with $F^0_{xy} = p^0$ and $F^1_{xy} = p^1$ has the scalar fields given by
$$X^0 = \frac{i}{2}\left( 1 + \frac{(p^0)^2}{4c^2}\, t^{2c} \right), \qquad X^1 = \frac{i}{2}\left( 1 + \frac{(p^1)^2}{4c^2}\, t^{2c} \right), \tag{4.4}$$
and the metric as in (4.2) with $q_0$ and $q_1$ replaced by $p^0$ and $p^1$. These solutions can be referred to as generalized Melvin cosmologies [15]. Similarly, we can also construct Melvin domain wall solutions
$$ds^2 = \left(1 + \frac{q_0^2}{4c^2}\, r^{2c}\right)\left(1 + \frac{q_1^2}{4c^2}\, r^{2c}\right)\left( dr^2 - r^{2a} dt^2 + r^{2b} dy^2 \right) + \frac{r^{2c}}{\left(1 + \frac{q_0^2}{4c^2}\, r^{2c}\right)\left(1 + \frac{q_1^2}{4c^2}\, r^{2c}\right)}\, dz^2\,. \tag{4.5}$$
Taking $z$ as an angle coordinate gives Melvin fluxtubes with two charges. The original Melvin fluxtube [27] can be obtained using our formalism with $F = -i\,(X^0)^2$ and the exponents $a = b = 0$ and $c = 1$.
One can also consider solutions of the so-called minimal coupling model described by the prepotential $F = -i \sqrt{X^0 (X^1)^3}$ and scalar manifold SL(2, R)/SO(2). For this model we have
$$F_0 = \frac{F}{2X^0}, \qquad F_1 = \frac{3F}{2X^1} \tag{4.6}$$
and
$$\mathcal{N}_{00} = \frac{F}{2(X^0)^2}, \qquad \mathcal{N}_{11} = \frac{3F}{2(X^1)^2}, \qquad \mathcal{N}_{01} = 0 \tag{4.7}$$
and explicit solutions can be obtained using (3.2), (3.8), (3.11), (3.12) and (3.13).
As another example we consider solutions of N = 8, SO(8) supergravity [28] by focusing on the U(1)^4 Cartan subgroup. Anti-de Sitter black hole solutions of the gauged version of this theory were considered in [29,30]. The resulting model can be embedded in an N = 2 supergravity model with the following prepotential
$$F = -i\, \sqrt{X^1 X^2 X^3 X^4}\,. \tag{4.8}$$
Note that the previous two models considered are also consistent truncations of N = 8, SO(8) supergravity. Using (3.6) and (4.8) we obtain
$$F_I = \frac{F}{2X^I}, \qquad \mathcal{N}_{IJ} = \frac{F}{4(X^I)^2}\, \delta_{IJ}\,. \tag{4.9}$$
Using our analysis we obtain the generalized Melvin cosmological solutions
$$ds^2 = e^{2U}\left( -dt^2 + t^{2a} dx^2 + t^{2b} dy^2 \right) + e^{-2U}\, t^{2c}\, dz^2\,. \tag{4.10}$$
The scalars and gauge fields are given by
$$X^I = \frac{\sqrt{H_1 H_2 H_3 H_4}}{H_I}, \qquad F^I_{tz} = -\frac{q_I\, t^{2c-1}}{(H_I)^2}\,, \tag{4.12}$$
with $H_I = 1 + \frac{q_I^2}{4c^2}\, t^{2c}$.
Similarly one can also obtain generalized Melvin domain wall and fluxtube solutions.
SL(N, R)/SO(N, R) coset models
In this section we consider solutions to D-dimensional supergravity theories with scalar fields parameterizing the space SL(N, R)/SO(N, R). These theories can be described by the following Lagrangian
$$e^{-1}\mathcal{L}_D = R - \frac{1}{2}\left(\partial\varphi\right)^2 - \frac{1}{4}\, G_{IJ}\, F^I \cdot F^J\,. \tag{5.1}$$
The gauge kinetic term metric is given by
$$G_{IJ} = \frac{1}{(X^I)^2}\, \delta_{IJ}\,. \tag{5.2}$$
The scalars are described by $X^I$ subject to the condition
$$\prod_{I=1}^{N} X^I = 1 \tag{5.3}$$
and are related to the $(N-1)$ independent dilatonic scalars $\varphi$ by
$$X^I = e^{-\frac{1}{2}\, b_I \cdot \varphi} \tag{5.4}$$
where $b_I$ are the weight vectors of the fundamental representation of SL(N, R), satisfying
$$b_I \cdot b_J = 8\delta_{IJ} - \frac{8}{N}, \qquad \sum_I b_I = 0\,. \tag{5.5}$$
The scalars $\varphi$ can be expressed in terms of $X^I$ as
$$\varphi = -\frac{1}{4} \sum_I b_I \log X^I\,. \tag{5.6}$$
Domain wall and charged time-dependent solutions for the gauged versions of these theories were considered in [31] and [32]. We start by considering solutions of the form
$$ds^2 = e^{2U(\tau)}\left( \epsilon_0\, d\tau^2 + \epsilon_1 \tau^{2a_1} dx_1^2 + \epsilon_2 \tau^{2a_2} dx_2^2 \right) + e^{2V(\tau)} \sum_{k=3}^{D-1} \epsilon_k\, \tau^{2a_k}\, dx_k^2 \tag{5.7}$$
with
$$V = -\frac{1}{D-3}\, U\,. \tag{5.8}$$
For these solutions, the gauge field two-form is given by
$$F^I = P^I\, dx_1 \wedge dx_2 \tag{5.9}$$
where $P^I$ are constants. The analysis of the equations of motion derived from (5.1) gives the solution
$$X^I = e^{-U}\left( 1 + B_I\, \tau^{2l} \right), \qquad e^{NU} = \prod_{I=1}^{N}\left( 1 + B_I\, \tau^{2l} \right),$$
provided that the conditions
$$\frac{N(D-3)}{4(D-2)} = 1, \qquad B_I = -\frac{1}{2l^2}\, \epsilon_0 \epsilon_1 \epsilon_2\left( P^I \right)^2 \tag{5.12}$$
are satisfied. The analysis of the scalar equations reveals no further conditions. The condition (5.12) was also obtained in the study of domain wall solutions and S-branes [31, 32]. As both the space-time dimension and N must be integers, it is evident that our solutions are only valid for the space-time dimensions D = 4, 5 and 7, corresponding to the cases N = 8, 6 and 5. A second class of solutions, with the condition (5.12), can be obtained with the metric
$$ds^2 = e^{2U}\left( \epsilon_0\, d\tau^2 + \sum_{i=1}^{D-2} \epsilon_i\, \tau^{2a_i}\, dx_i^2 \right) + e^{2(3-D)U}\, \epsilon_{D-1}\, \tau^{2a_{D-1}}\, dw^2\,, \tag{5.13}$$
where
$$e^{N(D-3)U} = \prod_{I=1}^{N}\left( 1 + B_I\, \tau^{2a_{D-1}} \right), \qquad B_I = \frac{1}{2\left(a_{D-1}\right)^2}\, \epsilon_{D-1}\left( q_I \right)^2 \tag{5.14}$$
and
$$X^I = \frac{e^{(D-3)U}}{1 + B_I\, \tau^{2a_{D-1}}}\,, \qquad G_{IJ}\, F^J_{\tau w} = q_I\, e^{2(3-D)U}\, \tau^{2a_{D-1} - 1}\,. \tag{5.15}$$
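The integer constraint in (5.12) can be checked by direct enumeration; the following sketch (illustrative only, not part of the original analysis) confirms that D = 4, 5, 7 with N = 8, 6, 5 are the only low-dimensional solutions:

```python
# Solve N (D - 3) = 4 (D - 2) for integer N and space-time dimension D >= 4.
for D in range(4, 12):
    num = 4 * (D - 2)
    if num % (D - 3) == 0:
        print(D, num // (D - 3))   # -> (4, 8), (5, 6), (7, 5): D = 4, 5, 7
```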
Summary
We have considered solutions depending on one variable for the theories of four-dimensional N = 2 supergravity with vector multiplets. Depending on the signature of the theory, our charged solutions can describe time-dependent cosmological or static configurations. These solutions, which we label Melvin space-times, can be thought of as charged generalizations of Kasner spaces. We found explicit solutions with two charges for specific models of N = 2 supergravity. Solutions with four charges for a truncation of the N = 8 supergravity theory that can be embedded in N = 2 supergravity were also presented. Moreover, solutions with N charges for N = 8, 6 and 5, corresponding to supergravity theories with space-time dimensions D = 4, 5 and 7 and SL(N, R)/SO(N, R) scalar manifolds, were also found. It is well known that Melvin fluxtubes can be generated from Minkowski space-time as a seed solution [33]. Generalised Melvin solutions in Einstein-Maxwell theory were constructed using these techniques in [15], with Kasner space being the seed solution. The generating techniques were generalized to dilaton gravity in [34] and to gravity with a cosmological constant in [35]. In our present analysis, we started with the vacuum Kasner solution as a seed and found solutions with non-trivial scalar and gauge fields through an explicit analysis of the equations of motion. It would be of interest to study generalized Melvin solutions in gauged supergravity theories in various dimensions. We hope to report on this in a future publication.
Acknowledgements: The work is supported in part by the National Science Foundation under grant number PHY-1620505.
[1] U. Gran, J. Gutowski and G. Papadopoulos, Classification, geometry and applications of supersymmetric backgrounds, Physics Reports 794 (2019).
[2] C.-M. Chen, D. V. Gal'tsov and M. Gutperle, S-brane solutions in supergravity theories, Phys. Rev. D 66 (2002) 024043.
[3] A. Lukas, B. A. Ovrut and D. Waldram, Cosmological solutions of type II string theory, Phys. Lett. B393 (1997) 65.
[4] A. Lukas, B. A. Ovrut and D. Waldram, String and M-theory cosmological solutions with Ramond forms, Nucl. Phys. B495 (1997) 365.
[5] R. Poppe and S. Schwager, String Kaluza-Klein cosmologies with RR-fields, Phys. Lett. B393 (1997) 51.
[6] F. Larsen and F. Wilczek, Resolution of cosmological singularities, Phys. Rev. D 55 (1997) 4591.
[7] K. Behrndt and S. Förste, String-Kaluza-Klein cosmology, Nucl. Phys. B430 (1994) 441.
[8] E. Kasner, Geometrical theorems on Einstein's cosmological equations, Am. J. Math. 43 (1921) 217.
[9] H. Weyl, Zur Gravitationstheorie, Annalen der Physik 54 (1917) 117.
[10] T. Levi-Civita, Rend. Acc. Lincei 26 (1917) 307.
[11] W. Wilson, Phil. Mag. 40, 703.
[12] A. Harvey, Will the Real Kasner Metric Please Stand Up, General Relativity and Gravitation 22 (1990) 1433.
[13] A. Harvey, Complex Transformation of the Kasner Metric, General Relativity and Gravitation 21 (1989) 1021.
[14] W. A. Sabra, Phantom Metrics With Killing Spinors, Phys. Lett. B750 (2015) 237; M. Bu Taam and W. A. Sabra, Phantom space-times in fake supergravity, Phys. Lett. B751 (2015) 297.
[15] D. Kastor and J. Traschen, Melvin magnetic fluxtube/cosmology correspondence, Class. Quantum Grav. 32 (2015) 235027.
[16] W. A. Sabra, Kasner Branes with Arbitrary Signature, Phys. Lett. B809 (2020) 135694.
[17] C. M. Hull, Duality and the signature of space-time, JHEP 11 (1998) 017.
[18] W. A. Sabra, Kasner metrics and very special geometry, Phys. Lett. B833 (2022) 137380.
[19] E. Lauria and A. Van Proeyen, N=2 Supergravity in D = 4, 5, 6 Dimensions, Lecture Notes in Physics 966.
[20] V. Cortés, C. Mayer, T. Mohaupt and F. Saueressig, Special geometry of Euclidean supersymmetry. I: Vector multiplets, JHEP 03 (2004) 028.
[21] V. Cortés, C. Mayer, T. Mohaupt and F. Saueressig, Special geometry of Euclidean supersymmetry. II: Hypermultiplets and the c-map, JHEP 06 (2005) 025.
[22] V. Cortés and T. Mohaupt, Special geometry of Euclidean supersymmetry III: The local r-map, instantons and black holes, JHEP 07 (2009) 066.
[23] V. Cortés, P. Dempster, T. Mohaupt and O. Vaughan, Special geometry of Euclidean supersymmetry IV: the local c-map, JHEP 10 (2015) 066.
[24] V. Cortés, L. Gall and T. Mohaupt, Four-dimensional vector multiplets in arbitrary signature, International Journal of Geometric Methods in Modern Physics 17 (2019) 2050150 and 2050151.
[25] W. A. Sabra, Special geometry and space-time signature, Phys. Lett. B773 (2017) 191.
[26] M. Medevielle, T. Mohaupt and G. Pope, Type-II Calabi-Yau compactifications, T-duality and special geometry in general spacetime signature, JHEP 02 (2022) 048.
[27] M. A. Melvin, Pure magnetic and electric geons, Phys. Lett. 8 (1964) 65.
[28] E. Cremmer and B. Julia, The N=8 Supergravity Theory. 1. The Lagrangian, Phys. Lett. B80 (1978) 48; B. de Wit and H. Nicolai, N = 8 supergravity, Nucl. Phys. B208 (1982) 323.
[29] M. J. Duff and J. T. Liu, Anti-de Sitter Black Holes in Gauged N=8 Supergravity, Nucl. Phys. B554 (1999).
[30] W. A. Sabra, Anti-de Sitter BPS black holes in N=2 gauged supergravity, Phys. Lett. B458 (1999) 36.
[31] M. Cvetic, S. S. Gubser, H. Lu and C. N. Pope, Symmetric potentials of gauged supergravities in diverse dimensions and Coulomb branch of gauge theories, Phys. Rev. D 62 (2000) 086003.
[32] M. Gutperle and W. A. Sabra, S-brane solutions in gauged and ungauged supergravities, Phys. Lett. B601 (2004) 73.
[33] B. K. Harrison, New solutions of the Einstein-Maxwell equations from old, J. Math. Phys. 9 (1968) 1744; F. J. Ernst, Black holes in a magnetic Universe, J. Math. Phys. 17 (1976) 54.
[34] F. Dowker, J. P. Gauntlett, D. A. Kastor and J. H. Traschen, Pair creation of dilaton black holes, Phys. Rev. D 49 (1994) 2909.
[35] M. Astorino, Charging axisymmetric space-times with cosmological constant, JHEP 06 (2012) 086.
3D Model-based Zero-Shot Pose Estimation Pipeline

Jianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, Zhenyu He
SenseTime Research; Harbin Institute of Technology, Shenzhen
arXiv:2305.17934, DOI: 10.48550/arxiv.2305.17934
https://export.arxiv.org/pdf/2305.17934v1.pdf
Most existing learning-based pose estimation methods are typically developed for non-zero-shot scenarios, where they can only estimate the poses of objects present in the training dataset. This setting restricts their applicability to unseen objects in the training phase. In this paper, we introduce a fully zero-shot pose estimation pipeline that leverages the 3D models of objects as clues. Specifically, we design a two-step pipeline consisting of 3D model-based zero-shot instance segmentation and a zero-shot pose estimator. For the first step, there is a novel way to perform zero-shot instance segmentation based on the 3D models instead of text descriptions, which can handle complex properties of unseen objects. For the second step, we utilize a hierarchical geometric structure matching mechanism to perform zero-shot pose estimation which is 10 times faster than the current render-based method. Extensive experimental results on the seven core datasets on the BOP challenge show that the proposed method outperforms the zero-shot state-of-the-art method with higher speed and lower computation cost.
Introduction
Pose estimation plays a crucial role in various robotic and augmented reality applications. Existing deep-learning methods [1][2][3] achieve remarkable performance for seen objects after absorbing a large amount of training data for each target object. However, when there is a novel (unseen) object introduced, it requires significant time and effort to synthesize or annotate data and re-train the model from scratch. This greatly restricts the universality and reusability of models, and the high time and training costs further hinder the practical application of pose estimation with novel objects. Hence, there is a pressing need for a zero-shot pose estimation method that enables training once and generalizing to any unseen object during inference, addressing these challenges effectively.
Upon revisiting pose estimation methods, it is evident that they typically involve two steps. First, an instance segmentation or object detection method is employed to locate and classify the object. Subsequently, a pose estimator is utilized to estimate the pose transformation parameters, which consist of three degrees of freedom for rotation and three for translation, from the target object's coordinate system to the camera coordinate system. However, in the zero-shot setting, where the specific 3D object is unseen and lacks any prior information during the training phase, segmenting and classifying it becomes extremely challenging. Consequently, recent zero-shot pose estimators resort to utilizing supervised, non-zero-shot segmentation results to estimate the 6D pose and focus solely on the second step. In the first step, most existing zero-shot instance segmentation methods, such as SAM [4], are primarily designed to predict foreground instances or instances associated with specific textual prompts. However, these methods lack the capability to determine the class label (i.e., the associated 3D model) of the candidate instances. In the case of foreground instance segmentation, the method cannot attribute a foreground instance to a specific object without human interaction. In the case of text-prompted instance segmentation, challenges arise in accurately describing complex properties of objects, such as their shape and materials, using natural language. This task requires domain experts with relevant knowledge, especially in specialized fields like industrial manufacturing or robotics applications. Additionally, foundation models may struggle to comprehend these professional descriptions, resulting in difficulties or errors. In summary, the field lacks a dedicated zero-shot instance segmentation method that effectively leverages 3D models.
In the second step, current zero-shot pose estimators, such as MegaPose [5] and Zephyr [6], typically employ an online rendering approach to compare the scene image with rendered images corresponding to candidate pose hypotheses. Although this approach is suitable for the zero-shot scenario, it has a limitation in terms of runtime: the online rendering module requires approximately 2.5 seconds per object, so rendering can take more than ten seconds per image when multiple objects are present.

This paper presents the first work on a fully zero-shot pose estimation pipeline, as illustrated in Fig 1. Specifically, we design a two-step pipeline that incorporates 3D model-based zero-shot instance segmentation and a zero-shot pose estimator. In the instance segmentation step, we use the 3D models to render multi-view templates and extract visual clues, enabling the search for correspondences between instances in the scene image and the candidate target objects. We refer to this approach as "3D model-based zero-shot instance segmentation". In the 6D pose estimation step, the proposed zero-shot pose estimator leverages a hierarchical geometric feature matching module that estimates the best pose parameters by minimizing the distance between matched point pairs from the target object coordinates to the camera coordinates. Compared with the online rendering mechanism of current SOTA zero-shot methods, the hierarchical geometric feature matching module shows a significant advantage in efficiency: it requires only 0.2 seconds per object, making it 10 times faster than the current state-of-the-art method [5].
In summary, our paper makes the following key contributions:
• Introduction of a fully zero-shot pose estimation pipeline.
• Proposal of a novel 3D model-based zero-shot instance segmentation method.
• Development of a zero-shot pose estimator based on hierarchical geometric feature matching.
• Extensive experimental results demonstrate that our proposed method outperforms the zero-shot state-of-the-art method while offering higher speed and lower computation cost.
Related Work
In this section, we begin by providing an overview of the existing research on the zero-shot instance segmentation problem. Subsequently, we delve into the research conducted on zero-shot 6D pose estimation.
Zero-Shot Instance Segmentation
Instance segmentation involves predicting a mask for each object and assigning corresponding class labels. It is typically performed as a prerequisite task for 6D pose estimation. However, previous research [7-9, 4, 10] has primarily concentrated on zero-shot semantic segmentation and zero-shot category-agnostic instance segmentation. UOIS-Net [7] separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Subsequently, Xiang et al. [8] utilized a metric learning loss function to produce pixel-wise feature embeddings such that pixels from the same object are close to each other and pixels from different objects are separated in the embedding space.
With the learned feature embeddings, a mean shift clustering algorithm can be applied to discover and segment unseen objects. These two methods assign different labels to different objects, but they fail to identify multiple objects of the same category as belonging to the same class. SupeRGB-D [9] explores zero-shot instance segmentation from RGB-D data to identify unseen objects in a semantic category-agnostic manner. Inspired by the development of prompt-based universal interfaces for large language models, SAM [4] and SEEM [10] both propose a model that is purposefully designed and trained to be promptable. Such a promptable model exhibits the ability to effectively transfer its knowledge and skills in a zero-shot manner to novel image distributions and tasks. However, the aforementioned methods do not assign each segmented instance to its own category. To enhance the accuracy of labeling the different segmented instances, we leverage images rendered from the 3D models to identify the most similar instance. By utilizing the more detailed appearance features provided by the rendered images, our method improves the precision of instance selection compared to relying solely on textual descriptions.
Zero-Shot 6D Pose Estimation
Zero-shot 6D pose estimation is the task of determining the 6D pose of novel objects that have not been included in the training data. In other words, this involves estimating the pose of objects that were not seen or encountered during the model training phase. The first attempts to tackle this problem involved establishing correspondences using locally invariant features [11][12][13][14][15]. By using oriented point pair features, PPF [15] generates a global model description and employs a fast voting scheme to perform local matching of the model. Convolutional neural networks (CNNs) [6,5] have replaced the previously used methods, such as those based on hand-crafted features, for 6D pose estimation. The current trend is to use learning-based approaches that utilize CNNs to achieve higher accuracy and robustness. Zephyr [6] uses a hypothesis generation and scoring framework that focuses on training a scoring function capable of generalizing to novel objects. To achieve zero-shot generalization, it rates hypotheses based on differences between unordered points. MegaPose [5] introduces a 6D pose refinement method that employs a render-and-compare strategy, capable of handling novel objects. To enable the network to process unknown objects, MegaPose provides shape and coordinate system information as inputs, achieved by online rendering multiple synthetic views of the object's 3D model. Deep learning methods for 6D pose estimation rely on 3D model rendering from multiple views, which can result in incomplete capture of the object's 3D structure. Moreover, multiple rendered images are required as input for each instance in a scene, leading to increased computational costs. To preserve the local 3D structural information, our method avoids the use of online rendered images and instead inputs corresponding 3D models and scene point clouds directly.
They are then locally matched by a hierarchical geometric feature-matching mechanism. The 6D pose parameters for each instance in the scene can be obtained from a single input by minimizing the transformation distance.
Method
Fig 2 illustrates our detailed implementation of the proposed zero-shot pose estimation pipeline, which consists of two steps. We provide a thorough explanation of each step in the following subsections.
3D Model-based Zero-Shot Instance Segmentation
As indicated by the yellow box in Fig 2, the goal of the instance segmentation step is to search for all potential instances related to the provided 3D models. To leverage the 3D model clues, we render each 3D model offline from different camera views, obtaining a set of 2D template images. A visual foundation model, ImageBind [16], is utilized to extract visual features from the template images as the visual clues of the 3D model; these are represented as F_t with shape (N, R, C), where N is the number of target objects, R is the number of rendered template images per object, and C is the feature dimension of the visual foundation model.
For the scene image, we adopt the interactive segmentation method SAM [4], taking a uniform point grid as the prompt, to generate all foreground masks without labels. Then, as with the template images, we crop all foreground instances and extract their visual features through the same visual foundation model. These features are denoted F_s, with shape (M, C), where M is the number of foreground masks. To filter the potential instances from all foreground instances, a feature similarity filter module is introduced, which calculates the cosine similarity between the scene instance features F_s and the template object features F_t:
C = \frac{F_s}{\|F_s\|} \left( \frac{F_t}{\|F_t\|} \right)^{T}, (1)
where the matrix C has shape (M, N, R) and each of its elements is a cosine similarity in the range (-1, 1); higher values indicate higher feature similarity.
We take the maximum feature similarity over all rendered template images as the predicted score for the corresponding target object. A scene instance whose prediction score exceeds the threshold is treated as a candidate instance; if more than one object scores above the threshold, the object with the highest score is selected.
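As a concrete illustration, the filtering of Eq. 1 can be written in a few lines of PyTorch. This is a minimal sketch rather than the authors' released code: `scene_feats`, `template_feats`, and `thres` are assumed names for the M scene-crop features, the N x R template features, and the acceptance threshold.

import torch
import torch.nn.functional as F

def filter_candidates(scene_feats, template_feats, thres=0.5):
    # scene_feats: (M, C) instance features; template_feats: (N, R, C) template features
    s = F.normalize(scene_feats, dim=-1)
    t = F.normalize(template_feats, dim=-1)
    sim = torch.einsum('mc,nrc->mnr', s, t)      # cosine similarities, (M, N, R)
    scores = sim.max(dim=-1).values              # best template per object, (M, N)
    best_scores, best_objs = scores.max(dim=-1)  # best object per instance, (M,)
    keep = best_scores > thres                   # candidate instances
    return keep, best_objs[keep], best_scores[keep]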
3D Model-based Zero-Shot Pose Estimator
As indicated by the green box in Fig 2, the goal of the pose estimator is to estimate the target object's rigid transformation (R, t) from the object coordinate system to the camera coordinate system, where R ∈ SO(3) is a 3D rotation and t ∈ R^3 is a 3D translation. To estimate the pose transformation effectively and efficiently, we take point clouds as input and leverage a hierarchical geometric feature matching module to estimate the pose parameters that transform the object point cloud from the object coordinate system to the camera coordinate system. However, it is challenging to establish point correspondences between two point clouds acquired through different means (a 3D scan vs. a consumer-grade depth sensor) due to variations in density and visibility. For the 3D model, a set of points O = {o_i ∈ R^3 | i = 1, ..., n} is uniformly sampled from the surface of the mesh. For the scene, the predicted mask is applied to index the object region of the depth image, which is converted into a scene point cloud S = {s_i ∈ R^3 | i = 1, ..., m}. The scene point cloud corresponds to a subset of the 3D model point cloud containing only the visible region that can be captured; different camera viewpoints correspond to specific visible regions of the object's 3D model. Besides, the scene point cloud is captured by a depth sensor and cropped with the detection mask, which unavoidably includes some surrounding noisy points. Unfortunately, most point cloud down-sampling methods, such as Farthest Point Sampling (FPS) or grid sampling, are prone to keep these noisy points because of their sampling mechanism. Therefore, due to the mismatched receptive fields, the noise interferes with the calculation of the point cloud correspondences, thereby decreasing the accuracy of the pose parameters fitted by minimizing the following equation:
\min_{R,t} \sum_{i} \left\| R \cdot o_{x_i} + t - s_{y_i} \right\|_2^2, (2)
where s_{y_i} and o_{x_i} are matched corresponding points.
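For reference, the scene point cloud S is obtained by back-projecting the masked depth pixels with the camera intrinsics. The sketch below is a standard pinhole back-projection; all names (`depth`, `K`, `mask`) are illustrative assumptions, not the paper's code.

import numpy as np

def depth_to_points(depth, K, mask):
    """depth: (H, W) in meters; K: (3, 3) intrinsics; mask: (H, W) boolean."""
    v, u = np.nonzero(mask)            # pixel coordinates inside the mask
    z = depth[v, u]
    valid = z > 0                      # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1)   # (m, 3) scene points S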
To alleviate this problem, we introduce a simple but effective 3D-model-prior-based sampling method that maintains comparable feature receptive fields. We adopt the object circumradius as a prior constraint and cluster the scene points with a Mean-Shift algorithm whose bandwidth equals the object circumradius; the clustering bandwidth thus controls the proximity range between the scene and object point clouds. The cluster containing the largest number of points is taken as the foreground. Once comparable feature receptive fields are ensured, hierarchical features of both the scene and the object point clouds can be matched against each other.
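A minimal sketch of this prior-based sampling step, assuming scikit-learn's MeanShift and an illustrative `circumradius` value taken from the object's 3D model:

import numpy as np
from sklearn.cluster import MeanShift

def filter_by_radius(scene_points, circumradius):
    # Cluster with bandwidth equal to the object circumradius, then keep
    # the most populated cluster as the foreground points.
    labels = MeanShift(bandwidth=circumradius).fit_predict(scene_points)
    counts = np.bincount(labels)
    return scene_points[labels == counts.argmax()]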
We follow GeoTransformer [17] to extract hierarchical point cloud features. As shown in Fig 3, the high-level features with a larger receptive field (the red boxes) can be seen as a set of viewpoints on the 3D model used to locate the visible region. After that, low-level features are applied to find the point-to-point correspondences (green dotted lines) between the predicted visible region of the target object and the scene point cloud. This hierarchical design limits the matching scope to the visible area and alleviates mismatches between visible and invisible regions. Based on the correspondences, we minimize the point cloud distance in Eq 2 and compute the target (R, t) through Singular Value Decomposition (SVD).
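The closed-form solution of Eq. 2 is the classical Kabsch/Umeyama procedure; the sketch below (NumPy, illustrative names) shows how R and t are recovered by SVD once the correspondences are fixed:

import numpy as np

def fit_pose(src, ref):
    """src: (k, 3) object points o_x; ref: (k, 3) matched scene points s_y."""
    src_c, ref_c = src.mean(0), ref.mean(0)
    H = (src - src_c).T @ (ref - ref_c)   # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ref_c - R @ src_c
    return R, t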
We train the model on the large-scale 3D model dataset GSO [18] to learn viewpoint matching and local geometric structure matching. First, we take the features from the visible region of the 3D model and its corresponding scene point cloud area as positive samples; the others, whose overlap is below a threshold, are treated as negative samples. Since the visible region may contain more than one feature point with comparable overlaps, we adopt the overlap-aware circle loss [17] to train the model to focus on matching the feature points with high overlap. For the local geometric structures, the low-level scene points are matched with 3D model points transformed by the ground-truth pose within a matching radius. The predicted correspondence matrix is computed from the cosine similarity of the features, with an extra learnable mismatch dustbin feature to indicate noisy and invisible points. A negative log-likelihood loss is applied to fit the matching correspondences.
Experiments
Benchmark Datasets
For the training dataset, we use GSO [19], which contains 1,000 3D objects in household scenes together with 1 million synthetic images provided by MegaPose [5], to train our pose estimation module.
For the test datasets, we evaluate our method on the seven core datasets of the BOP challenge [20]: LineMod Occlusion (LM-O), T-LESS, TUD-L, IC-BIN, ITODD, HomebrewedDB (HB) and YCB-Video (YCB-V). These datasets cover diverse factors of variation, including texture, symmetry, and household or industrial scenes, and thus accurately represent the different types of objects typically encountered in daily and robotic scenarios.
Instance Segmentation Metric
We evaluate the performance of instance segmentation with Average Precision (AP) and Average Recall (AR), following the COCO metrics and the BOP Challenge. AP is the mean of the AP values at different Intersection over Union thresholds (IoU = .50:.05:.95). AR is the maximum recall achieved when a fixed number of detections per image (100 here) is considered.
Pose Estimation Metrics
To measure the accuracy of an estimated pose \hat{P} in relation to the ground-truth pose \bar{P} of an object model M, we utilize the mean Average Recall of three pose-error functions, calculated as AR = (AR_{VSD} + AR_{MSSD} + AR_{MSPD})/3; more information about the evaluation metrics and the thresholds used to compute AR can be found in the BOP challenge [20]. A brief description of each component of the metric follows.
VSD (Visible Surface Discrepancy)
VSD deems poses that yield the same visible shape to be identical, as it only assesses the error over the object's visible part.
e_{VSD}(\hat{D}, \bar{D}, \hat{V}, \bar{V}, \tau) = \underset{p \in \hat{V} \cup \bar{V}}{\mathrm{avg}} \begin{cases} 0 & \text{if } p \in \hat{V} \cap \bar{V} \,\wedge\, |\hat{D}(p) - \bar{D}(p)| < \tau \\ 1 & \text{otherwise} \end{cases} (3)
The distance maps \hat{D} and \bar{D} are obtained by rendering the 3D model M in the estimated pose \hat{P} and the ground-truth pose \bar{P}, respectively. \hat{V} and \bar{V} are visibility masks, obtained by comparing these distance maps with the distance map of the test image; \tau is the misalignment tolerance.
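For clarity, Eq. 3 can be computed directly from the rendered distance maps and visibility masks; the following NumPy sketch assumes boolean masks and is only an illustration of the definition, not the official BOP toolkit code:

import numpy as np

def e_vsd(D_est, D_gt, V_est, V_gt, tau):
    union = V_est | V_gt
    inter = V_est & V_gt
    ok = inter & (np.abs(D_est - D_gt) < tau)   # pixels with matching visible surface
    return 1.0 - ok.sum() / union.sum()         # average of the 0/1 costs over the union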
MSSD (Maximum Symmetry-Aware Surface Distance)
The set S_M contains the global symmetry transformations of the 3D model, and V_M represents a set of vertices of the model. The maximum distance among the model vertices plays a crucial role in robotic manipulation: the maximum surface deviation is a strong indicator of the chance of a successful grasp.
e_{MSSD}(\hat{P}, \bar{P}, S_M, V_M) = \min_{S \in S_M} \max_{x \in V_M} \left\| \hat{P} x - \bar{P} S x \right\|_2 (4)
MSPD (Maximum Symmetry-Aware Projection Distance)
The function proj represents the pixel-level 2D projection, while the other variables have the same meaning as in MSSD. MSPD takes into account the global object symmetries and replaces the mean with the maximum distance to be robust to the object model's geometry and sampling:

e_{MSPD}(\hat{P}, \bar{P}, S_M, V_M) = \min_{S \in S_M} \max_{x \in V_M} \left\| \mathrm{proj}(\hat{P} x) - \mathrm{proj}(\bar{P} S x) \right\|_2 (5)
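Both MSSD (Eq. 4) and MSPD (Eq. 5) reduce to a min-over-symmetries of a max-over-vertices distance. The sketch below illustrates the two definitions, assuming 4x4 pose matrices and pinhole intrinsics K; it is not the official BOP evaluation code:

import numpy as np

def transform(P, verts):                  # P: 4x4 pose, verts: (n, 3) model vertices
    return verts @ P[:3, :3].T + P[:3, 3]

def proj(K, pts):                         # pixel-level 2D projection
    uvw = pts @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def e_mssd(P_est, P_gt, sym_poses, verts):
    # min over symmetries S of the max vertex-to-vertex 3D distance
    return min(np.linalg.norm(transform(P_est, verts)
                              - transform(P_gt @ S, verts), axis=1).max()
               for S in sym_poses)

def e_mspd(P_est, P_gt, sym_poses, verts, K):
    # same structure, but measured in the image plane after projection
    return min(np.linalg.norm(proj(K, transform(P_est, verts))
                              - proj(K, transform(P_gt @ S, verts)), axis=1).max()
               for S in sym_poses)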
Evaluation of Zero-Shot Instance Segmentation
We conduct the experiments on the seven BOP Challenge datasets and use the Average Precision (AP) as the metric to evaluate the proposed method. Since there are no other methods following the 3D model-based zero-shot instance segmentation setting, we select two supervised, non-zero-shot methods and implement a text-based zero-shot instance segmentation method for comparison. The Mask R-CNN method [21] is the one from CosyPose [22], trained on real or synthetic training data for each test object. ZebraPoseSAT [23] is the current SOTA method for the instance segmentation task; it uses a two-stage setting with detection for coarse localization and a pose estimation network for refinement. For the text-based zero-shot instance segmentation, we keep the same pipeline as our method but adopt the class name as the text feature instead of the template image features. For the T-LESS, TUD-L, and HB datasets, only object IDs are provided, without class names. As depicted in Tab 1, our 3D model-based zero-shot instance segmentation method shows a clear improvement over the text-based method, from 6.7% to 21.4% mean AP. However, a performance gap remains compared with the supervised methods.
Evaluation of Zero-Shot Pose Estimation
To evaluate our pose estimation performance, we compare against supervised (non-zero-shot) methods and the latest zero-shot pose estimation methods. The zero-shot task cannot exploit any object prior, which makes it quite challenging and entails a performance gap with respect to supervised methods. Compared with the zero-shot method MegaPose [5], our method achieves a 1.3% performance gain at ten times the speed; moreover, our method does not require any online image rendering, which is the crux when porting to real applications due to its heavy computation cost. Besides, MegaPose [5] is limited in that it requires a supervised Mask R-CNN to locate the candidate objects, whereas our method estimates the pose in a fully zero-shot pipeline with comparable performance.
Time Efficiency
Compared to current zero-shot 6D pose estimation methods, the proposed pose estimation method based on point cloud matching eliminates the time-consuming online image rendering operation. As presented in Table 3, our method achieves an impressive pose estimation time of 0.2 seconds per object, which is ten times faster than the current state-of-the-art zero-shot method [5].
In terms of the instance segmentation step, since the rendering of templates is conducted offline, the zero-shot instance segmentation requires only 0.8 seconds per image. Consequently, the overall pipeline demonstrates higher efficiency compared to current methods, enabling its application in various downstream tasks without the limitations imposed by time constraints.
Ablation Study
Comparison of the Number of Template Images
To investigate the effect of the number of rendered template images for each target object's 3D model, we evaluate four settings: 6, 72, 512, and 576 images. The 6-image setting renders the object from the front, back, left, right, up, and down directions; the other settings are sampled from a uniform sequence over the SO(3) sphere with different densities, following [26]. As shown in Tab 4, we report the GPU memory consumption, inference time, and accuracy for the different numbers of rendered template images per 3D model. Sparse template sets are the least robust to visual ambiguity when objects have a similar appearance, although they offer a minor advantage in speed and memory consumption. For rendering template images from
Comparison of The 3D Model-based Sampling Mechanism
To validate the effectiveness of the proposed sampling mechanism in the local structure matching module, we compare the pose estimation performance with and without it. As demonstrated in Tab 5, the sampling mechanism improves the matching accuracy on both datasets (+0.8% AR on LM-O and +4.5% on YCB-V).
Comparison of Different Visual Foundation Models
The accuracy and robustness of the label selection in the zero-shot instance segmentation step depend on the visual feature extraction foundation model. We run an experiment with two different foundation models: SLIP [27] (an advanced version of CLIP [28]) and ImageBind [16] (a recent multi-modal pretraining method). The results in Tab 6 show that a larger and stronger foundation model increases the average precision and recall significantly.
Implementation Details
The zero-shot pose estimation pipeline is trained on a cluster with 8 NVIDIA V100 GPUs; inference runs on a PC with an NVIDIA RTX 3090.
Foreground Instance Segmentation. We adopt the recent SOTA interactive segmentation method SAM [4], with a uniform set of 16 points per image as the prompt of its decoder, to generate the masks of the foreground instances; noisy regions whose mask area is smaller than 200 pixels are filtered out.
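A hedged sketch of this step with the public segment-anything API; the checkpoint path and model type are illustrative, while the grid size and area threshold follow the values stated above:

from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM checkpoint (file name assumed for illustration).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=16,          # uniform point-grid prompt
    min_mask_region_area=200,    # drop regions below 200 pixels
)
masks = mask_generator.generate(image)   # image: HxWx3 uint8 RGB array;
                                         # returns a list of dicts with "segmentation"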
Visual Feature Extraction. The large-scale visual foundation model ImageBind [16] takes images resized to shape (224, 224, 3) as input and extracts the visual features of the scene and template images as (N, C) arrays, where N is the number of images and C is the feature dimension (1024 in this experiment).
Feature Similarity Filter
We adopt the cosine similarity as the metric to filter out instances that are not related to our 3D models. For each candidate object, the maximum feature similarity over all template views of a scene instance is chosen. A scene instance whose similarity to some template image exceeds the threshold becomes a candidate, and the related object is selected as the one with the highest feature similarity between the candidate image and the template images.
Hierarchical Matching Mechanism. We keep a network architecture similar to [17], which leverages KPConv [29] as the backbone to extract hierarchical point cloud features; potentially visible regions are matched using the level-4 features, and point-to-point correspondences are established using the level-2 features.
Conclusion
In this paper, we propose a fully zero-shot pose estimation pipeline based on 3D models. Our pipeline consists of a 3D model-based zero-shot instance segmentation module that segments the candidate objects from the scene image, and a zero-shot pose estimator that estimates the pose transformation from the object coordinate system to the camera coordinate system based on a hierarchical point cloud structure matching mechanism. The proposed method outperforms the zero-shot state-of-the-art method while offering higher speed and lower computation cost. One limitation is that the 3D models are not used as prompts inside the zero-shot instance segmentation module, because SAM [4] only supports text, point, box, and mask prompts; we will explore a multi-modality model supporting 3D model prompts in future work. The proposed pipeline operates fully in a zero-shot setting and demonstrates faster performance compared to previous approaches. It holds promise for inspiring innovative solutions in various fields, including industrial manufacturing, robotics, and beyond.
A Comparison with only zero-shot pose estimation
Since the current zero-shot pose estimation method, MegaPose [5], operates with a supervised instance segmentation setting, a fair comparison with our proposed zero-shot pose estimator requires the same setting. To achieve this, we pair our zero-shot pose estimator with the segmentation results obtained through supervised training of Mask R-CNN [21], which aligns with MegaPose's methodology.
Compared to MegaPose, our proposed zero-shot pose estimator exhibits a significant performance gain in terms of the mean Average Recall (AR) score across the seven BOP core datasets, increasing from 17.7% to 25.0%. Overall, our method outperforms MegaPose by 8.6% in mean AR. For the LM-O and T-LESS datasets, the performance is merely comparable, owing to a limitation stemming from the low signal-to-noise ratio of the scene point clouds: the number of points in the scene object region is small, so the proportion of noise from the depth sensor or from the segmentation result increases. Consequently, the accuracy of the scene's geometric structure is compromised, making it challenging to estimate the correct pose accurately.
B Pseudo Code
We present the PyTorch-style pseudocode of the proposed zero-shot pose estimation pipeline in Algorithm 1.
Figure 1: The proposed pipeline for zero-shot pose estimation based on 3D models.
Figure 2: Overview architecture of the 3D model-based zero-shot instance segmentation and pose estimator. The first step segments all candidate instances and classifies their corresponding 3D models using the RGB image as input. The second step leverages the depth image and the predicted mask to generate the scene point cloud and estimates the pose parameters based on hierarchical geometric feature matching.
Figure 3: Hierarchical matching mechanism in the zero-shot pose estimator. A higher-level feature locates the potentially visible region (red boxes), and point correspondences (green dotted lines) are predicted within this region by local structure feature similarity.
Table 1: Evaluation of zero-shot instance segmentation results on the seven core datasets in the BOP challenge. The metric is the Average Precision (AP) at different Intersection over Union values (IoU = .50:.05:.95). * denotes that no object name is provided (only an object ID). "Real" denotes the use of real annotation data for training.

Method            | Zero-Shot | Real | LM-O | T-LESS | TUD-L | IC-BIN | ITODD | HB   | YCB-V | Mean
Mask RCNN [21]    | no        | no   | 37.5 | 51.7   | 30.6  | 31.6   | 12.2  | 47.1 | 42.9  | 36.2
Mask RCNN [21]    | no        | yes  | 37.5 | 54.4   | 48.9  | 31.6   | 12.2  | 47.1 | 42.9  | 39.2
ZebraPoseSAT [23] | no        | no   | 50.6 | 62.9   | 51.4  | 37.9   | 36.1  | 64.6 | 62.2  | 52.2
ZebraPoseSAT [23] | no        | yes  | 50.6 | 70.9   | 70.7  | 37.9   | 36.1  | 64.6 | 74.0  | 57.8
Text-based        | yes       | no   | 6.5  | 0.0*   | 0.0*  | 9.8    | 3.6   | 0.0* | 27.3  | 6.7
Ours              | yes       | no   | 17.6 | 9.6    | 24.1  | 18.7   | 6.3   | 31.4 | 41.9  | 21.4
Table 2: Evaluation of zero-shot pose estimation results on the BOP dataset. The metric is the Average Recall (AR) in the BOP Challenge [20].
Table 3: Time efficiency for zero-shot pose estimation on the YCB-V dataset.

Method       | Runtime per object (s)
Megapose [5] | 2.5
Ours         | 0.2
Table 4: Comparison of the number of template images for each object on the YCB-V dataset.

Number | Memory (MB) | Time (s) | AP (%) | AR (%)
6      | 0.023       | 0.33     | 39.4   | 48.8
72     | 0.281       | 0.33     | 41.9   | 53.1
512    | 2.000       | 0.34     | 41.3   | 56.4
576    | 2.250       | 0.34     | 40.5   | 55.9
Table 5: Comparison of the 3D model-based sampling mechanism for the pose estimation AR metric.

Sampling | LM-O (AR %) | YCB-V (AR %)
Without  | 13.4        | 22.8
With     | 14.2        | 27.3
Table 6: Comparison of different visual foundation models on the YCB-V dataset, with the evaluation metrics of AP and AR for instance segmentation.

Visual Model   | AP (%) | AR (%)
SLIP [27]      | 13.2   | 28.7
ImageBind [16] | 41.6   | 53.1
Table 7: Evaluation of zero-shot pose estimation results on the BOP dataset. The metric is the Average Recall (AR) in the BOP Challenge [20]. ZSIS: zero-shot instance segmentation; ZSPE: zero-shot pose estimation.

Method       | ZSIS | ZSPE | LM-O | T-LESS | TUD-L | IC-BIN | ITODD | HB   | YCB-V | Mean
MegaPose [5] | no   | yes  | 18.7 | 19.7   | 20.5  | 15.3   | 8.00  | 18.6 | 13.9  | 16.4
Ours         | no   | yes  | 15.2 | 19.2   | 30.6  | 25.0   | 24.5  | 31.6 | 28.8  | 25.0
Ours         | yes  | yes  | 14.2 | 6.3    | 24.9  | 18.7   | 13.6  | 23.6 | 22.9  | 17.7
Algorithm 1: PyTorch-style pseudo code for the proposed zero-shot pose estimation.

# Input:  I_RGB, I_D: the scene image pair from the RGB-D sensor
#         Objs: target objects' 3D models
# Output: M_seg: predicted instance segmentation results
#         Rs, ts: the estimated pose parameters

# Segment anything by SAM
SAM.set(I_RGB)                                 # encode the (H, W, 3) scene image
points_prompt = uniform_points(64)             # 64 points per image side
Masks = SAM(points_prompt)                     # foreground masks without labels, (M, H, W)

# Label the candidate masks
candidate_images = crop_resize(I_RGB, Masks, 224)              # (M, 224, 224, 3)
candidate_features = ViT(candidate_images)                     # pretrained ViT features, (M, C)
candidate_features = F.normalize(candidate_features, dim=-1)   # L2 normalize
# N target objects with R rendered template images per object
template_features = ViT(template_images)                       # (N, R, C)
template_features = F.normalize(template_features, dim=-1)     # L2 normalize
# cosine similarity
logits = torch.einsum('mc,nrc->mnr', [candidate_features, template_features])
logits = logits.max(-1).values                 # max similarity over templates, (M, N)
valid_mask, valid_cls = torch.where(logits > thres)
# select the object with the highest similarity as the label of each candidate mask
masks, object_ids = index_mask_with_logits(valid_mask, valid_cls, logits)
M_seg = zip(masks, object_ids)

# Zero-shot pose estimation
xyz = project(I_D, K)                          # project depth to XYZ with camera intrinsics K
obj_pcs = uniform_sample(Objs)                 # sample point clouds from the 3D models
obj_radius = radius(obj_pcs)                   # circumradius of each object
Rs, ts = [], []
for mask, oid in zip(masks, object_ids):
    src = obj_pcs[oid]                         # object point cloud as src points, (P, 3)
    ref = xyz[mask]                            # scene object point cloud as ref points, (Q, 3)
    o_r = obj_radius[oid]
    ref_clean = filter(ref, o_r)               # filter the noisy points by object radius
    R, t = point_matching(src, ref_clean)      # pose transformation from src to ref
    Rs.append(R)
    ts.append(t)
return M_seg, Rs, ts
References

[1] Yisheng He, Wei Sun, Haibin Huang, Jianran Liu, Haoqiang Fan, and Jian Sun. PVN3D: A deep point-wise 3D keypoints voting network for 6DoF pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11632-11641, 2020.
[2] Yisheng He, Haibin Huang, Haoqiang Fan, Qifeng Chen, and Jian Sun. FFB6D: A full flow bidirectional fusion network for 6D pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3003-3013, 2021.
[3] Mingshan Sun, Ye Zheng, Tianpeng Bao, Jianqiu Chen, Guoqiang Jin, Rui Zhao, Liwei Wu, and Xiaoke Jiang. Uni6Dv2: Noise elimination for 6D pose estimation. arXiv preprint arXiv:2208.06416, 2022.
[4] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, et al. Segment Anything. arXiv preprint arXiv:2304.02643, 2023.
[5] Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, and Josef Sivic. MegaPose: 6D pose estimation of novel objects via render & compare. In CoRL, 2022.
[6] Brian Okorn, Qiao Gu, Martial Hebert, and David Held. ZePHyR: Zero-shot pose hypothesis rating. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 14141-14148. IEEE, 2021.
[7] Christopher Xie, Yu Xiang, Arsalan Mousavian, and Dieter Fox. Unseen object instance segmentation for robotic environments. IEEE Transactions on Robotics, 37(5):1343-1359, 2021.
[8] Yu Xiang, Christopher Xie, Arsalan Mousavian, and Dieter Fox. Learning RGB-D feature embeddings for unseen object instance segmentation. In Conference on Robot Learning, pages 461-470. PMLR, 2021.
[9] Evin Pınar Örnek, Aravindhan K. Krishnan, Shreekant Gayaka, Cheng-Hao Kuo, Arnie Sen, Nassir Navab, and Federico Tombari. SupeRGB-D: Zero-shot instance segmentation in cluttered indoor environments. IEEE Robotics and Automation Letters, 2023.
[10] Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Gao, and Yong Jae Lee. Segment everything everywhere all at once. arXiv preprint arXiv:2304.06718, 2023.
[11] David G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150-1157. IEEE, 1999.
[12] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. Lecture Notes in Computer Science, 3951:404-417, 2006.
[13] Alvaro Collet and Siddhartha S. Srinivasa. Efficient multi-view object recognition and full pose estimation. In 2010 IEEE International Conference on Robotics and Automation, pages 2050-2055. IEEE, 2010.
[14] Alvaro Collet, Manuel Martinez, and Siddhartha S. Srinivasa. The MOPED framework: Object recognition and pose estimation for manipulation. The International Journal of Robotics Research, 30(10):1284-1306, 2011.
[15] Bertram Drost, Markus Ulrich, Nassir Navab, and Slobodan Ilic. Model globally, match locally: Efficient and robust 3D object recognition. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 998-1005. IEEE, 2010.
[16] Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. ImageBind: One embedding space to bind them all. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15180-15190, 2023.
[17] Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, Slobodan Ilic, Dewen Hu, and Kai Xu. GeoTransformer: Fast and robust point cloud registration with geometric transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
[18] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B. McHugh, and Vincent Vanhoucke. Google Scanned Objects: A high-quality dataset of 3D scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022.
[19] Sudharshan Suresh, Zilin Si, Stuart Anderson, Michael Kaess, and Mustafa Mukadam. MidasTouch: Monte-Carlo inference over distributions across sliding touch. In Conference on Robot Learning, pages 319-331. PMLR, 2023.
[20] Tomáš Hodaň, Martin Sundermeyer, Bertram Drost, Yann Labbé, Eric Brachmann, Frank Michel, Carsten Rother, and Jiří Matas. BOP Challenge 2020 on 6D object localization. In Computer Vision - ECCV 2020 Workshops, Glasgow, UK, August 23-28, 2020, Proceedings, Part II, pages 577-594. Springer, 2020.
[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969, 2017.
[22] Y. Labbé, J. Carpentier, M. Aubry, and J. Sivic. CosyPose: Consistent multi-view multi-object 6D pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), 2020.
[23] Yongzhi Su, Mahdi Saleh, Torben Fetzer, Jason Rambach, Nassir Navab, Benjamin Busam, Didier Stricker, and Federico Tombari. ZebraPose: Coarse to fine surface encoding for 6DoF object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6738-6748, 2022.
[24] Sergey Zakharov, Ivan Shugurov, and Slobodan Ilic. DPOD: 6D pose object detector and refiner. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1941-1950, 2019.
[25] Tomas Hodan, Daniel Barath, and Jiri Matas. EPOS: Estimating 6D pose of objects with symmetries. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11703-11712, 2020.
[26] Anna Yershova, Swati Jain, Steven M. LaValle, and Julie C. Mitchell. Generating uniform incremental grids on SO(3) using the Hopf fibration. The International Journal of Robotics Research, 29(7):801-812, 2009.
[27] Norman Mu, Alexander Kirillov, David Wagner, and Saining Xie. SLIP: Self-supervision meets language-image pre-training. In Computer Vision - ECCV 2022, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXVI, pages 529-544. Springer, 2022.
[28] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
[29] Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J. Guibas. KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6411-6420, 2019.
From 'Snippet-lects' to Doculects and Dialects: Leveraging Neural Representations of Speech for Placing Audio Signals in a Language Landscape

Séverine Guillaume (Langues et Civilisations à Tradition Orale, LACITO: CNRS, Université Sorbonne Nouvelle, Institut National des Langues et Civilisations Orientales, INALCO) [email protected]
Guillaume Wisniewski (Laboratoire de Linguistique Formelle, LLF: Université Paris Cité, CNRS, F-75013 Paris, France) [email protected]
Alexis Michaud (Langues et Civilisations à Tradition Orale, LACITO: CNRS, Université Sorbonne Nouvelle, Institut National des Langues et Civilisations Orientales, INALCO) [email protected]

29 May 2023. arXiv:2305.18602, DOI: 10.48550/arxiv.2305.18602
https://export.arxiv.org/pdf/2305.18602v1.pdf

Index Terms: pre-trained acoustic models, language documentation, under-resourced languages, similarity estimation
XLSR-53, a multilingual model of speech, builds a vector representation from audio, which allows for a range of computational treatments. The experiments reported here use this neural representation to estimate the degree of closeness between audio files, ultimately aiming to extract relevant linguistic properties. We use max-pooling to aggregate the neural representations from a 'snippet-lect' (the speech in a 5-second audio snippet) to a 'doculect' (the speech in a given resource), then to dialects and languages. We use data from corpora of 11 dialects belonging to 5 less-studied languages. Similarity measurements between the 11 corpora bring out greatest closeness between those that are known to be dialects of the same language. The findings suggest that (i) dialect/language can emerge among the various parameters characterizing audio files and (ii) estimates of overall phonetic/phonological closeness can be obtained for a little-resourced or fully unknown language. The findings help shed light on the type of information captured by neural representations of speech and how it can be extracted from these representations.
Introduction
The present research aims to contribute to a recent strand of research: exploring how pre-trained multilingual speech representation models like XLSR-53 [1] or HuBERT [2] can be used to assist in the linguistic analysis of a language [3]. XLSR-53, a multilingual model of speech, builds a vector representation from an audio signal. The neural representation is different in structure from that of the audio recording. Whereas wav (PCM) audio consists of a vector of values in the range [-1:+1], at a bit-depth from 8 to 32 and a sampling rate on the order of 16,000 Hz, the XLSR-53 neural representation contains 1,024 components, at a rate of 47 frames per second. The size of the vector representation is on the same order of magnitude as that of the audio snippet, and the amount of information can be hypothesized to be roughly comparable. But the neural representation, unlike the audio format, comes in a vector form that is tractable to a range of automatic treatments building on the vast body of work in data mining and machine learning. The neural representation of speech holds potential for an epistemological turning-point comparable to the introduction of the spectrogram 8 decades ago [4,5,6].
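To make this concrete, a "snippet-lect" vector can be obtained from the public XLSR-53 checkpoint on HuggingFace and pooled across frames, as in the illustrative sketch below; the function and variable names are ours, not part of XLSR-53.

import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53").eval()

def snippet_vector(waveform_16k):
    # waveform_16k: 1-D float array of a 5-second snippet sampled at 16 kHz
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        frames = model(**inputs).last_hidden_state[0]   # (n_frames, 1024), about 47 frames/s
    return frames.mean(dim=0)   # pool across frames into a single 1024-d vector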
The experiments reported here use the neural representation yielded by XLSR-53 (used off-the-shelf, without fine-tuning, unlike [7,8]) as a means to characterize audio: estimating the degree of closeness between audio signals, and (ultimately) extracting relevant linguistic properties, teasing them apart from other types of information, e.g. technical characteristics of the recordings. We start out from 5-second audio snippets, and we pool neural representations (carrying out mean pooling, i.e. averaging across frames) to progress towards the level of the entire audio file, then the entire corpus (containing several audio files). We thereby gradually broaden the scope of the neural representation from a 'snippet-lect' (the speech present in an audio snippet 1 ) to a 'doculect' (a linguistic variety as it is documented in a given resource [9]), then towards 'dialects' (other groupings could also be used: by sociolect, by speaking style/genre, etc.) and, beyond, entire languages.
In a set of exploratory experiments, we build neural representations of corpora of 11 dialects that belong to 5 under-resourced languages. We then use linguistic probes [10] (i.e. a multiclass classifier taking as input the frozen neural representation of an utterance and assigning it to a language, similarly to a language identification system) to assess the capacity of XLSR-53 to capture language information. Building on these first results, we propose to use our probe on languages not present in the training set and to use its decisions as a measure of similarity between two languages, following the intuition that if an audio segment of an unknown language is identified as being of language A, then the language in the audio segment is "close" to A.
Representations like XLSR-53 have already been used to develop language identification systems (e.g. [11,12]), but their use in the context of under-resourced languages and linguistic fieldwork datasets raises many challenges. First, there is much less data available for training and testing these systems, both in terms of the number of hours of audio and the number of speakers. For instance, VoxLingua [13], a dataset collected to train language identification models, contains 6,628 hours of recordings in 107 languages, i.e. at least an order of magnitude more data per language than typical linguistic fieldwork corpora. Second, the languages considered in a language documentation context have not been used for (pre-)training speech representations and have linguistic characteristics that are potentially very different from those of the languages used for pre-training (on the consequences of narrow typological scope for Natural Language Processing research, see [14]). The ability of models such as XLSR-53 to correctly represent these languages remains an open question. We also aim to assess to what extent pre-trained models of speech can address these two challenges.
Similarity measurements between the 11 corpora bring out greatest closeness between those that are known to be dialects of the same language. Our findings suggest that dialect/language can emerge among the many parameters characterizing audio files as captured in XLSR-53 representations (which also include acoustic properties of the environment, technical characteristics of the recording equipment, speaker ID, speaker gender, age, social group, as well as style of speech: speaking rate, etc.), and that there is potential for arriving at useful estimates of phonetic/phonological closeness. The encouraging conclusion is that, even in the case of a little-resourced or fully unknown language, 'snippet-lects' and 'doculects' can be placed relative to other speech varieties in terms of their closeness.
An estimation of closeness between speech signals can have various applications. For computational language documentation [15,16,17,8], there could be benefit in a tool for finding the closest neighbours of a newly documented language (with a view to fine-tuning extant models for the newly documented variety, for instance), bypassing the need for explicit phoneme inventories, unlike in [18]. For dialectology, a discipline that traditionally relies on spatial models based on isogloss lines [19], neural representations of audio signals for cognate words allow for calculating a phonetic-phonological distance along a dialect continuum [3]; our work explores whether cross-dialect comparison of audio snippets containing different utterances also allows for significant generalizations. Last but not least, for the community of speech researchers, the task helps shed light on the type of information captured by neural representations of speech and how it can be extracted from these representations. This work is intended as a stepping-stone towards the mid-term goal of extracting typological features from neural representations of speech signals: probing linguistic information in neural representations, to arrive at data-driven induction of typological knowledge [20]. Note that our work is speech-based, like [21,22], and unlike text-based research predicting typological features (e.g. [23]).
This article is organized as follows. In Section 2 we introduce our system. In Section 3 we briefly review the languages used in our experiments. Finally, we report our main experimental findings in Section 4.
Probing Language Information in Neural Representations
Predicting the language of a spoken utterance can, formally, be seen as a multi-class classification task that aims at mapping an audio snippet represented by a feature vector to one of the language labels present in the training set. Our implementation of this principle is very simple: we use 5-second audio snippets and use, as feature vector, the representation of the audio signal built by XLSR-53, a cross-lingual speech representation that results from pre-training a single Transformer model from the raw waveform of speech in multiple languages [1]. XLSR-53 is a sequence-to-sequence model that transforms an audio file (a sequence of real numbers along the time dimension) into a sequence of vectors of dimension 1,024 sampled at 47 Hz (i.e. it outputs 47 vectors for each second of audio). We use max-pooling to aggregate these vectors and map each audio snippet to a single vector. In all our experiments, we use a logistic regression (as implemented in the sklearn library [24]) as the multi-class classifier with ℓ2 regularization. Importantly, our language identification system uses the representations built by XLSR-53 without ever modifying them and is therefore akin to a linguistic probe [10]. We do not carry out fine-tuning of a pre-trained model. Language identification is a well-established task in the speech community and has been the focus of much research; our work does not aim at developing a state-of-the-art language identification model, but at showing that neural representations encode language information, and that this information can be useful for language documentation and analysis. Said differently, we do not aim to leverage "emergent abilities" of large language models [25], but to explore one of their latent abilities.
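A minimal sketch of the probe follows, with placeholder features and labels; the XLSR-53 encoder stays frozen, and only the logistic regression is fit.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder data: one pooled 1,024-dim XLSR-53 vector per 5-second snippet.
    X_train = np.random.randn(1000, 1024)
    y_train = np.random.randint(0, 13, size=1000)   # 13 dialect labels

    # The linguistic probe: l2-regularized multi-class logistic regression.
    probe = LogisticRegression(penalty="l2", max_iter=1000)
    probe.fit(X_train, y_train)

    X_test = np.random.randn(200, 1024)
    predicted_dialects = probe.predict(X_test)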
Our experimental framework allows us to consider several questions of interest to linguists. We can use various sets of labels, e.g. language names, or any level of phylogenetic (diachronic) grouping, or again typological (synchronic) groupings. We can also vary the examples the classifier is trained on. Among the many possibilities, we consider three settings:
• a dialect identification setting, in which the classifier is trained on recordings of N language varieties (dialects) and is then used (and evaluated) to recognize one of these;
• a language identification setting, which differs from the previous setting only by the definition of the label to predict: the goal is now to identify languages, which constitute groups of dialects. Importantly, this classifier can be used to predict the language affiliation of a dialect that is not present in the train set, so that it can be used to predict, for instance, the language to which a hitherto unknown dialect belongs;
• a similarity identification setting, which differs from the first setting only by the definition of the train set: in this setting, we use our model on utterances of a dialect that is not present in the train set. Since the classifier cannot predict the exact dialect (as its label is not available from within the train set), it seems intuitively likely to choose the label of a dialect with similar characteristics. Crucially, we believe that this setting will therefore allow us to identify similarities between language varieties.
Information on Languages and Dialects
In all our experiments, we use datasets from the Pangloss Collection [26],² an open archive of (mostly) endangered languages. Our experiments focus on 11 dialects that belong to five languages:
• two dialects of Nepali: Achhami (Glottocode [27]: doty1234) and Dotyal (doty1234);
• two dialects of Lyngam (lyng1241): Langkma and Nongtrei;
• three varieties of Na-našu, a dialect of Shtokavian Serbo-Croatian (shto1241) spoken by Italian Croats;
• two dialects of War (khas1268): Amwi (warj1242) and Nongtalang (nong1246);
• two dialects of Na (yong1270): Lataddi Na (lata1234) and Yongning Na (yong1288).
We also consider two additional languages, Naxi (naxi1245) and Laze (laze1238), because of their closeness to Na [28].
For the sake of consistency in the experiments reported here, we use "dialect" as the lowest-level label, and "language" for the first higher level, as a convention. We are aware that the distance between "dialects" (and between "languages") varies significantly from one case to another. We do not assume that the distance between Achhami and Dotyal (dialects of Nepali) is (even approximately) the same as that between Langkma and Nongtrei (dialects of Lyngam), or between Lataddi Na and Yongning Na. The key assumption behind our use of terms is that language varieties referred to as "dialects" of the same language are close enough that it makes sense to assume that the degree of phonetic similarity between them can serve as a rule-of-thumb estimate for the distance that separates them, without requiring higher-level linguistic information (of the type used to train a language model).
In this preliminary study we have decided to focus on a small number of languages and to focus on qualitative analysis of our results, rather than running a large-scale experiment on dozens of languages. The languages are chosen according to the size of the available corpora and specific properties. We favored continuous speech (we left aside corpora consisting solely of word lists or materials elicited sentence by sentence).
For each of these languages we extracted 2 to 50 files of variable length (from 33 seconds to 30 minutes).
Experiments
In all our experiments, we evaluate the capacity of our classifier to predict the correct language information (either the label of a specific dialect or the name of a language) using the usual metrics for multi-class classification, namely, precision, recall and their combination in the F1 score.
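As a sketch, these metrics can be computed with scikit-learn; the labels below are toy values for illustration.

    from sklearn.metrics import precision_recall_fscore_support

    y_true = ["Amwi", "Dotyal", "Amwi", "Langkma", "Dotyal"]   # gold labels (toy)
    y_pred = ["Amwi", "Achhami", "Amwi", "Langkma", "Dotyal"]  # probe output (toy)

    # Macro averaging gives each class equal weight.
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    print(precision, recall, f1)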
Dialect Identification To test the ability of a classifier to recognize a dialect from the representations built by XLSR-53, we consider a classifier using the names of the 13 dialects or languages described in Section 3 as its label set. We try out two configurations. In the first one, all the utterances of a dialect are randomly divided into a test set (20% of utterances) and a training set (80%). In the second configuration, the training corpus is made up of 80% of the files of a dialect and the test corpus contains the remaining 20%. While the latter configuration is closer to the real conditions of use of our system (guaranteeing that the utterances of the test corpus come only from files that have not been seen at training), it is more difficult to control the size of the train and test sets, which makes the analysis less straightforward.
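A sketch of the second (file-level) configuration, assuming a per-snippet array of file identifiers; a grouped split guarantees that no file contributes snippets to both sides.

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    X = np.random.randn(1000, 1024)                 # pooled snippet vectors (toy)
    y = np.random.randint(0, 13, size=1000)         # dialect labels (toy)
    file_ids = np.random.randint(0, 60, size=1000)  # source file of each snippet

    # 80/20 split over files: test snippets come only from held-out files.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=file_ids))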
The results are reported in Table 1. They show that, in both configurations, a simple classifier is able to identify the correct dialect label for an utterance with high accuracy, indicating that XLSR-53 representations encode language information. Similar observations have already been reported (see, e.g., [29]), but to the best of our knowledge, our work is the first evaluation of the capacity of XLSR-53 representations to identify under-documented language varieties whose characteristics are potentially very different from those of the languages seen at (pre-)training [8]. Interestingly, the quality of predictions does not seem to be influenced by the amount of training data (a similar paradox is reported in the evaluation of another large language model in multilingual learning: ChatGPT [30]).
The recordings considered in the experiment we have just described were all collected in the context of linguistic fieldwork, and thus have some peculiarities that may distort the conclusions we have just drawn. In particular, most of the dialects we considered have recordings of a single speaker. Moreover, different dialects of the same language were often recorded by the same linguist, using the same recording setup (in particular, the same microphone). We therefore need to check whether our classifiers just learn to distinguish speakers (in many cases: one per dialect) or recording conditions. In order to rule out this possibility, we carried out a control experiment in which we tried to predict the file name (serving as proxy information for the speaker and the recording conditions). A logistic regression trained in the 80-20 condition described above achieved a macro F1 score of 0.45, showing that the decision of the classifier is largely based on linguistic information, not solely on information about the recording conditions.
Language Identification In a second experiment, we test the ability of our classifier to identify languages (that is, groups of dialects). We consider, again, two conditions to train and evaluate our classifier. In the first condition, the train and test sets are randomly sampled from all the recordings we consider (with the usual 80%-20% split) without any condition being imposed on the files or languages. All dialects are therefore present in both the test and train sets. In the second condition, the test set is put together by selecting, for each language (group of dialects), all the recordings of a randomly chosen dialect. The test set is thus made up of 5 dialects that have not been seen at training.

Table 2 reports results in the first condition. The classifier succeeds in identifying the correct language in the vast majority of cases, a logical result since the same languages are present in the train and test sets, and the experiments reported in the previous paragraph proved that it is possible to identify dialects with good accuracy. To verify that the classifier was able to extract linguistic information rather than merely memorizing arbitrary associations between dialects, we performed a control experiment in which we divided the 13 dialects into 5 arbitrary groups having the same sizes as the languages (dialect groups) considered in the previous experiment. A classifier considering these groups as labels achieves a macro F1 score of 0.85. While this score is high, it is notably lower than the score obtained by predicting linguistic families, showing that the classifier decisions are, to a significant extent, based on linguistic criteria.

Table 3: Performance of a classifier trained to predict the language (group of dialects) of dialects not seen during training. Naxi and Laze have been left out as there is a single variety of these languages in our dataset.

Table 4: Distribution of the labels predicted by a classifier trained on 12 dialects (in columns) and used on a 13th dialect (unseen at training). Thus a classifier trained on all except Yongning Na identifies 72.4% of Yongning Na utterances as Lataddi Na and 12.5% as Naxi.

Table 3 shows the results for the second condition, in which we evaluate the capacity of a classifier to predict the language (dialect group) of a dialect that was not part of the train set. Scores vary greatly by language (group of dialects), and several factors make it difficult to interpret these results. First, removing a dialect completely from the train set can result in large variation in its size, and the results of Table 3 are not necessarily comparable with those reported so far. Second, some confounders seem to cause particularly poor performance for certain groups of dialects. For example, recordings of Dotyal are mainly sung epic poetry, so it is not surprising that any generalization across the two dialects of Nepali is difficult. Gender seems to be another confounder: several corpora only contain recordings by speakers of the same gender, and a quick qualitative study suggests that a model trained on a female speaker does not perform well on data from a male speaker (and conversely). Note, however, that our evaluation puts the classifier at a disadvantage, since it is evaluated at the level of a 5-second snippet and not of an entire recording. Performance would likely be better if we predicted a single label for a whole recording (for example by taking the most frequent label among those of all its snippets).
Similarity Identification Setting In our last experiment, we trained 12 classifiers, considering all dialects but one for training, and looked at the distribution of predicted labels when each classifier had to identify snippets of the held-out language. As explained in Section 2, the classifier cannot predict the correct label (since the target language is not present in the training corpus) but might, we believe, pitch on a language with similar characteristics. Results of this experiment are reported in Table 4. They allow us to draw several interesting conclusions.
First, these results show that the classifier's choices are consistent: in almost every case, the distribution of predicted labels is concentrated on a few labels, and the classifier typically identifies almost all snippets from an audio file as being in the same language. Second, in several cases (e.g. for dialects of Na, War or Na-našu), the classifier recognizes the unknown language as a dialect of the same group: for instance, Yongning Na utterances are mainly labelled as Lataddi Na (the dialect of a nearby village). In addition to its interest for the automatic identification of dialect groups, this observation shows that XLSR-53 yields representations that generalize over small dialectal variation.
Further experiments are needed to understand the two cases where the output of the classifier disagrees with the gold-standard clustering: the San Felice del Molise dialect of Na-našu, and the two dialects of Lyngam (Langkma and Nongtrei). (For Nepali, a plausible confounder was mentioned above: data type/genre, as the Dotyal corpus consists of sung epics.)
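A sketch of the similarity protocol described above: hold out one dialect, fit the probe on the rest, and inspect the distribution of predicted labels on the held-out snippets (all data below is placeholder).

    import numpy as np
    from collections import Counter
    from sklearn.linear_model import LogisticRegression

    X = np.random.randn(1300, 1024)                # pooled snippet vectors (toy)
    dialect = np.random.randint(0, 13, size=1300)  # dialect index per snippet (toy)

    held_out = 12                                  # e.g. the index of Yongning Na
    train_mask = dialect != held_out

    probe = LogisticRegression(penalty="l2", max_iter=1000)
    probe.fit(X[train_mask], dialect[train_mask])

    # Concentration of predictions on one label suggests a close variety.
    predictions = probe.predict(X[~train_mask])
    print(Counter(predictions).most_common(3))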
Conclusions
Our exploratory experiments on the capacity of XLSR-53 to place audio signals in a language and dialect landscape confirm the interest of neural representations of speech as an exciting avenue of research. Further work is required to ensure that a dialect identification system bases its decisions on linguistic phenomena (detecting relevant phonetic-phonological structures), not on parameters such as recording conditions, speaker characteristics (gender, age, ...) and speech genre/style, which constitute confounders in a language identification task.
Table 2: Performance of a classifier trained to predict languages (group of dialects). Languages consisting of a single dialect are indicated with a †.

                 precision   recall   F1
Lyngam             0.59       0.81   0.68
Na                 0.86       0.83   0.84
Na-našu            0.48       0.75   0.59
Nepali             0.09       0.09   0.09
War                0.74       0.60   0.66
macro average      0.55       0.62   0.57
¹ 'Snippet-lect' is coined on the analogy of 'doculect' [9], to refer to the characteristics of a 5-second audio snippet.
² Website: pangloss.cnrs.fr. A tool for bulk downloads and for tailoring reference corpora is available: OutilsPangloss.
In future work, we plan to reproduce the experiments on corpora of better-resourced languages, such as LibriVox or CommonVoice, for which it is easier to control recording conditions, speaker gender, and the amount of training data.

References
[1] A. Conneau, A. Baevski, R. Collobert, A. Mohamed, and M. Auli, "Unsupervised cross-lingual representation learning for speech recognition," in Interspeech 2021, Brno, Czechia, 2021, pp. 2426–2430. doi: 10.21437/Interspeech.2021-329.
[2] M. Moradshahi, H. Palangi, M. S. Lam, P. Smolensky, and J. Gao, "HUBERT untangles BERT to improve transfer across NLP tasks," CoRR, vol. abs/1910.12647, 2019.
[3] M. Bartelds, W. de Vries, F. Sanal, C. Richter, M. Liberman, and M. Wieling, "Neural representations for modeling variation in speech," Journal of Phonetics, vol. 92, pp. 101–137, 2022.
[4] R. K. Potter, G. A. Kopp, and H. C. Green, Visible Speech. New York: D. Van Nostrand, 1947.
[5] S. A. Fulop, "The beginning of time-frequency analysis," The Journal of the Acoustical Society of America, vol. 152, no. 5, pp. R9–R10, 2022. doi: 10.1121/10.0014987.
[6] G. Fant, Acoustic Theory of Speech Production, with Calculations Based on X-ray Studies of Russian Articulations. The Hague & Paris: Mouton, 1960.
[7] N. San, M. Bartelds, M. Browne, L. Clifford, F. Gibson, J. Mansfield, D. Nash, J. Simpson, M. Turpin, M. Vollmer, S. Wilmoth, and D. Jurafsky, "Leveraging pre-trained representations to improve access to untranscribed speech from endangered languages," in 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2021, pp. 1094–1101.
[8] S. Guillaume, G. Wisniewski, C. Macaire, G. Jacques, A. Michaud, B. Galliot, M. Coavoux, S. Rossato, M.-C. Nguyên, and M. Fily, "Fine-tuning pre-trained models for Automatic Speech Recognition: experiments on a fieldwork corpus of Japhug (Trans-Himalayan family)," in Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages, Dublin, Ireland: Association for Computational Linguistics, 2022, pp. 170–178.
[9] J. Good and M. Cysouw, "Languoid, doculect, and glossonym: formalizing the notion 'language'," Language Documentation & Conservation, vol. 7, pp. 331–359, 2013.
[10] G. Alain and Y. Bengio, "Understanding intermediate layers using linear classifier probes," in 5th International Conference on Learning Representations (ICLR 2017), Workshop Track Proceedings, Toulon, France, 2017.
[11] A. Tjandra, D. G. Choudhury, F. Zhang, K. Singh, A. Conneau, A. Baevski, A. Sela, Y. Saraf, and M. Auli, "Improved language identification through cross-lingual self-supervised learning," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), 2022, pp. 6877–6881.
[12] Z. Fan, M. Li, S. Zhou, and B. Xu, "Exploring wav2vec 2.0 on speaker verification and language identification," in Interspeech 2021, Brno, Czechia, 2021, pp. 1509–1513.
[13] J. Valk and T. Alumäe, "VoxLingua107: a dataset for spoken language recognition," in Proc. IEEE SLT Workshop, 2021.
[14] E. M. Bender, "Linguistically naïve != language independent: Why NLP needs linguistic typology," in Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?, 2009, pp. 26–32.
[15] A. Michaud, O. Adams, T. Cohn, G. Neubig, and S. Guillaume, "Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit," Language Documentation & Conservation, vol. 12, pp. 393–429, 2018.
[16] D. van Esch, B. Foley, and N. San, "Future directions in technological support for language documentation," in Proceedings of the Workshop on Computational Methods for Endangered Languages, vol. 1, 2019.
[17] E. Prud'hommeaux, R. Jimerson, R. Hatcher, and K. Michelson, "Automatic speech recognition for supporting endangered language documentation," Language Documentation and Conservation, vol. 15, 2021.
[18] R. Cotterell and J. Eisner, "Probabilistic typology: Deep generative models of vowel inventories," arXiv preprint arXiv:1705.01684, 2017.
[19] C. Chagnaud, P. Garat, P.-A. Davoine, E. Carpitelli, and A. Vincent, "ShinyDialect: a cartographic tool for spatial interpolation of geolinguistic data," in Proceedings of the 1st ACM SIGSPATIAL Workshop on Geospatial Humanities, 2017, pp. 23–30.
[20] E. M. Ponti, H. O'Horan, Y. Berzak, I. Vulić, R. Reichart, T. Poibeau, E. Shutova, and A. Korhonen, "Modeling language variation and universals: A survey on typological linguistics for natural language processing," Computational Linguistics, vol. 45, no. 3, pp. 559–601, 2019.
[21] A. Suni, M. Wlodarczak, M. Vainio, and J. Simko, "Comparative analysis of prosodic characteristics using WaveNet embeddings," in 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019), ISCA, 2019.
[22] M. de Seyssel, G. Wisniewski, E. Dupoux, and B. Ludusan, "Investigating the usefulness of i-vectors for automatic language characterization," in Speech Prosody 2022 – 11th International Conference on Speech Prosody, 2022.
[23] J. Bjerva and I. Augenstein, "Tracking typological traits of Uralic languages in distributed language representations," arXiv preprint arXiv:1711.05468, 2017.
[24] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[25] R. Schaeffer, B. Miranda, and S. Koyejo, "Are emergent abilities of Large Language Models a mirage?" arXiv preprint arXiv:2304.15004, 2023.
[26] B. Michailovsky, M. Mazaudon, A. Michaud, S. Guillaume, A. François, and E. Adamou, "Documenting and researching endangered languages: the Pangloss Collection," Language Documentation & Conservation, vol. 8, pp. 119–135, 2014.
[27] H. Hammarström, "Glottolog: A free, online, comprehensive bibliography of the world's languages," in 3rd International Conference on Linguistic and Cultural Diversity in Cyberspace, UNESCO, 2015, pp. 183–188.
[28] G. Jacques and A. Michaud, "Approaching the historical phonology of three highly eroded Sino-Tibetan languages: Naxi, Na and Laze," Diachronica, vol. 28, no. 4, pp. 468–498, 2011.
[29] M. de Seyssel, M. Lavechin, Y. Adi, E. Dupoux, and G. Wisniewski, "Probing phoneme, language and speaker information in unsupervised speech representations," in Interspeech 2022 – 23rd INTERSPEECH Conference, Incheon, South Korea, 2022.
[30] V. D. Lai, N. T. Ngo, A. P. B. Veyseh, H. Man, F. Dernoncourt, T. Bui, and T. H. Nguyen, "ChatGPT beyond English: Towards a comprehensive evaluation of Large Language Models in multilingual learning," arXiv preprint arXiv:2304.05613, 2023.
| [] |
[
"AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation",
"AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation"
] | [
"Ganesh Jawahar \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Subhabrata Mukherjee \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"♠ Xiaodong Liu \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"♠ \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Young Jin Kim \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Muhammad Abdul-Mageed \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"♣ ♢ \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Laks V S Lakshmanan \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"♣ Ahmed \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Hassan Awadallah \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Sebastien Bubeck \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n",
"Jianfeng Gao \nUniversity of British Columbia, ♠ Microsoft Research\n♢ MBZUAI\n"
] | [
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI",
"University of British Columbia, ♠ Microsoft Research\n♢ MBZUAI"
] | [] | Mixture-of-Expert (MoE) models have obtained state-of-the-art performance in Neural Machine Translation (NMT) tasks. Existing works in MoE mostly consider a homogeneous design where the same number of experts of the same size are placed uniformly throughout the network. Furthermore, existing MoE works do not consider computational constraints (e.g., FLOPs, latency) to guide their design. To this end, we develop AutoMoE -a framework for designing heterogeneous MoE's under computational constraints. AutoMoE leverages Neural Architecture Search (NAS) to obtain efficient sparse MoE sub-transformers with 4× inference speedup (CPU) and FLOPs reduction over manually designed Transformers, with parity in BLEU score over dense Transformer and within 1 BLEU point of MoE SwitchTransformer, on aggregate over benchmark datasets for NMT. Heterogeneous search space with dense and sparsely activated Transformer modules (e.g., how many experts? where to place them? what should be their sizes?) allows for adaptive compute -where different amounts of computations are used for different tokens in the input. Adaptivity comes naturally from routing decisions which send tokens to experts of different sizes. AutoMoE code, data, and trained models are available at https://aka.ms/AutoMoE. | null | [
"https://export.arxiv.org/pdf/2210.07535v2.pdf"
] | 259,108,418 | 2210.07535 | 5165de3cd4f8dc9d88e82d55f4798013d57cc0f1 |
AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation
Ganesh Jawahar, Subhabrata Mukherjee, Xiaodong Liu, Young Jin Kim, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Ahmed Hassan Awadallah, Sebastien Bubeck, Jianfeng Gao
♣ University of British Columbia, ♠ Microsoft Research, ♢ MBZUAI
AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation
Mixture-of-Expert (MoE) models have obtained state-of-the-art performance in Neural Machine Translation (NMT) tasks. Existing works in MoE mostly consider a homogeneous design where the same number of experts of the same size are placed uniformly throughout the network. Furthermore, existing MoE works do not consider computational constraints (e.g., FLOPs, latency) to guide their design. To this end, we develop AutoMoE -a framework for designing heterogeneous MoE's under computational constraints. AutoMoE leverages Neural Architecture Search (NAS) to obtain efficient sparse MoE sub-transformers with 4× inference speedup (CPU) and FLOPs reduction over manually designed Transformers, with parity in BLEU score over dense Transformer and within 1 BLEU point of MoE SwitchTransformer, on aggregate over benchmark datasets for NMT. Heterogeneous search space with dense and sparsely activated Transformer modules (e.g., how many experts? where to place them? what should be their sizes?) allows for adaptive compute -where different amounts of computations are used for different tokens in the input. Adaptivity comes naturally from routing decisions which send tokens to experts of different sizes. AutoMoE code, data, and trained models are available at https://aka.ms/AutoMoE.
Introduction
Sparsely activated models like the Mixture-of-Experts (MoE) (Fedus et al., 2022b) perform conditional computation in which only a subset of the weights of the network are activated per input. Selective compute allows us to design neural networks with a large number of model parameters without a significant increase in computational cost. With increased capacity, these sparse models have demonstrated state-of-the-art performance in natural language tasks such as neural machine translation (NMT) (Kim et al., 2021; Kudugunta et al., 2021; Zuo et al., 2022).

* Correspondence to {[email protected], [email protected]}.
MoE architectures require several design choices: (a) Expert placement: Identifying Transformer layers for introducing expert sub-networks. (b) Number of experts: How many experts to place in different layers? (c) Expert FFN size: What should be the feedforward network (FFN) size for each expert? Given the large search space of potential architectures and the exorbitant computational cost of training and evaluating them, existing approaches manually design MoE architectures from a highly-restricted homogeneous space. For instance, they use the same number of experts of the same capacity in different layers and make ad-hoc decisions like introducing experts in every other layer (Fedus et al., 2022b;Kim et al., 2021;Zuo et al., 2022;Du et al., 2022;Artetxe et al., 2021) or every four layers (Zoph et al., 2022).
While these MoE's support conditional computation, homogeneity (specifically, fixed-size experts) results in the same amount (albeit different subsets) of weights being applied to each input. We hypothesize that this is not an optimal solution, and that we can reduce the number of experts (in some layers) to reduce communication cost, and the size (of some experts) to reduce computation cost, resulting in reductions in model size, FLOPs, and latency without much quality degradation.
This naturally extends MoEs to be adaptive compute models (similar to work on early exit (Schuster et al., 2022)) where different amounts of computations are used for different inputs. The adaptivity comes naturally from the routing decisions which would send tokens to experts of different sizes.
Figure 1: (2) Supernet training by sampling subnetworks from the search space and training them by sharing common weights with the Supernet. (3) Evolutionary search to find efficient architectures by (a) sampling MoE subnetworks from the search space; (b) using latency measured on the target device; and (c) performance estimation from the Supernet as feedback for iterative optimization via crossover and mutation. (4) The efficient MoE subnetwork(s) from evolutionary search is trained on the downstream task.

The above observations are depicted in Table 1, which shows demonstrative examples of manually designed MoE's vs. those designed by our AutoMoE framework. We compare these architectures against various computational metrics (e.g., latency, FLOPs, active MoE parameters), architectural configurations and task performance. For the most efficient configuration (last row in the table), AutoMoE reduces the number of decoder layers, compensating for the capacity with increased experts in the bottom layer, and places most of the experts in the encoder. Overall AutoMoE introduces the following components and contributions:
• Heterogeneous design with adaptive computation for MoEs with variable number, size and placement of experts in both encoders and decoders.
• Extends Supernet training and evolutionary search from prior work on dense Transformers to the new search space of sparse MoE's. This combines all possible MoE sub-architectures in a single graph, jointly trains them via weight-sharing, and searches for the optimal one with the best possible performance on a downstream task satisfying a user-specified computational constraint.
• Experiments on NMT benchmarks demonstrate that AutoMoE-designed MoE's obtain 4× inference speedup on CPU and an equal FLOPs reduction over manually designed Transformers, with parity in BLEU with the dense Transformer and within 1 BLEU point of the MoE SwitchTransformer. Further, AutoMoE outperforms NAS methods in the dense search space (e.g., 1.3× and 2.4× FLOPs reduction and inference speedup over HAT and Evolved Transformer (So et al., 2019)).
Background
Mixture-of-Experts: MoE's have a rich literature in machine learning dating back to the early 90s (Yuksel et al., 2012). They have received significant attention with works such as Switch Transformers (Fedus et al., 2022b), GShard (Lepikhin et al., 2020), BASE (Lewis et al., 2021), Hash (Roller et al., 2021), GLaM (Du et al., 2022), Stochastic Experts (Zuo et al., 2022), Gating Dropout and ST-MoE (Zoph et al., 2022). Some crucial differences between these works include the choice of expert routing function, expert placement technique, stability/performance enhancement techniques, and the nature of the task (pre-training vs. fine-tuning). Some challenges in building sparse expert models include: (i) lack of diversity in expert design (expert layer selection, number of experts, expert size, etc.), (ii) training instability, (iii) poor out-of-distribution generalization, (iv) cross-task adaptation of pre-trained models, (v) communication bottlenecks, (vi) high memory, and (vii) expert load balancing, to name a few.
Neural Architecture Search (NAS): Given a search space of architectures and efficiency constraints (e.g., model size, latency), NAS typically aims to identify the optimal architecture that maximizes the task performance, while satisfying the efficiency constraints. NAS has been recently used for natural language understanding tasks to build efficient BERT (Devlin et al., 2019) and GPT (Brown et al., 2020) based pre-trained language models Yin et al., 2021;Xu et al., 2022a,b;Dong et al., 2021;Javaheripi et al., 2022) as well as for machine translation tasks (So et al., 2019;. Hardware aware transformers (HAT) is a state-of-the-art NAS framework with dense Transformers for MT that uses hardware latency as feedback for optimization.
However, all of the above NAS works consider a search space with densely activated Transformers and non-MoE architectures, They primarily search over typical Transformer architectural hyper-parameters like number of layers, attention heads and hidden size. In contrast, we propose the first NAS framework that searches for efficient sparsely activated Mixture-of-Expert modules in Transformers. Our heterogeneous AutoMoE framework addresses some longstanding design choices for MoE's like how many experts? which layers to place them? what should be their sizes? and so on.
Designing Heterogeneous Mixture-of-Experts
We now present the components of the AutoMoE framework (illustrated in Figure 1) for designing efficient MoE's under computational constraints.
Heterogeneous MoE Search Space
Existing MoE approaches restrict their design space by considering a uniform distribution of the size and number of experts placed in different Transformer layers. For instance, the standard MoE design (Fedus et al., 2022b) for an L-layer Transformer with M experts placed in alternate layers has only two possible configurations, viz. {1-M-1-···} and {M-1-M-···}. (a) Our design space allows a variable number of experts in each layer, resulting in M^L possible configurations. (b) Furthermore, our design space also allows variable expert size, e.g., by modulating the width of the feedforward (FFN) subnetworks for different experts. Considering N possible FFN dimensions for each expert results in N^(M·L) possible configurations for designing the expert space. (c) Finally, given the autoregressive nature of tasks like neural machine translation, the inference cost is dominated by the decoder (Kasai et al., 2021). For instance, for token-based MoE, decoders take 200× the time per step compared to encoders at peak throughput (Kudugunta et al., 2021). Therefore, we further consider a variable number of decoder layers along with the above choices for expert placement and expert capacity.
To the best of our knowledge, our work is the first to study such a flexible and exhaustive design space for MoE architectures.
In addition to heterogeneous experts, we allow flexible design for non-expert Transformer modules, such as the number of attention heads, hidden size, and intermediate feedforward dimensions. This heterogeneous design of non-MoE, i.e., dense Transformer modules, has been explored in prior works such as HAT (Wang et al., 2020) for tasks like NMT, and AutoDistil (Xu et al., 2022a) for understanding tasks like those in the GLUE benchmark (Wang et al., 2018). Table 2 shows our search space. We demonstrate that our heterogeneous MoE search performs better than both manual and NAS-searched architectures in the dense space.
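As a sketch, such a heterogeneous search space can be represented as data, with per-layer expert counts and per-expert FFN sizes sampled independently; the value grids below are illustrative, not the exact grids of Table 2.

    import random

    # Illustrative value grids (assumptions, not the paper's exact grids).
    SEARCH_SPACE = {
        "decoder_layers": [3, 4, 5, 6],
        "experts_per_layer": [1, 2, 3, 4, 5, 6],  # chosen independently per layer
        "expert_ffn_dims": [1024, 2048, 3072],    # chosen independently per expert
        "embed_dim": [512, 640],
        "attention_heads": [4, 8],
    }

    def sample_subnet(num_encoder_layers=6):
        """Randomly sample one heterogeneous MoE sub-architecture."""
        def sample_layer():
            n_experts = random.choice(SEARCH_SPACE["experts_per_layer"])
            return [random.choice(SEARCH_SPACE["expert_ffn_dims"])
                    for _ in range(n_experts)]    # one FFN size per expert
        return {
            "embed_dim": random.choice(SEARCH_SPACE["embed_dim"]),
            "attention_heads": random.choice(SEARCH_SPACE["attention_heads"]),
            "encoder": [sample_layer() for _ in range(num_encoder_layers)],
            "decoder": [sample_layer() for _ in range(
                random.choice(SEARCH_SPACE["decoder_layers"]))],
        }

    print(sample_subnet())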
Supernet Training for MoE
AutoMoE leverages the idea of Supernet training from prior works (Wang et al., 2020; Xu et al., 2022a). The Supernet is trained with the following steps: (i) sample a candidate architecture randomly from the search space (Guo et al., 2020); (ii) train the sampled architecture by extracting the common portion of weights from different layers in the Supernet (i.e., by weight sharing) for one training step on the task; (iii) repeat steps (i) and (ii) until the training budget is exhausted. Once Supernet training converges, we can obtain a quick accuracy estimate for a candidate architecture (i.e., subnetwork) by extracting its shared weights from the Supernet and evaluating it on the validation set.
The key challenge here is to build weight sharing techniques for the MoE components, which include: (i) the router: a neural network that is trained to route each token (of 'embedding size') in an incoming example to exactly one expert (out of M experts) for top-1 routing; (ii) the FFN experts: standard Transformer FFN blocks that have unique weights and are learned independently. AutoMoE's expert layers follow the Switch Transformer (Fedus et al., 2022b) specification. For subnetwork extraction from the Supernet, AutoMoE extracts the front rows and front columns of the Supernet's router weight matrix, corresponding to the subnet design. For example, consider the Supernet's router to be designed for 4 experts and an embedding size of 640, so that the router weight matrix has shape 4 × 640. Consider a sampled subnet during Supernet training with 3 < 4 experts and an embedding size of 512 < 640, giving a router matrix of shape 3 × 512. To populate this matrix, we extract the first 3 rows and first 512 columns from the Supernet's weight matrix (as illustrated in Figure 2 (a)). Expert FFN weights are extracted analogously, by taking the front rows and front columns of the corresponding Supernet weight matrices. For the second expert of the running example, the weight matrices of shape 1024 × 512 (Input) and 512 × 1024 (Output) are extracted from the first 1024 rows, 512 columns (Input) and the first 512 rows, 1024 columns (Output) of the corresponding Supernet weights. This example is illustrated in Figure 2 (b). The subnet extraction technique does not extract weights from the third and fourth experts of the Supernet, as the subnet is designed to have only two experts (not shown in the figure). Such a weight sharing technique allows us to design architectures with a varying intermediate FFN size for each expert. Additional techniques for improving expert capacity, such as stacking FFNs, and techniques for improving Supernet performance, such as sandwich sampling (Yu et al., 2019), inplace knowledge distillation (Yu et al., 2019), and gradient conflict reduction (Gong et al., 2022), are left for future work.
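The front-rows/front-columns extraction can be sketched in PyTorch as follows, using the running example (Supernet: 4 experts, embedding size 640; subnet: 2 experts, embedding size 512). The Supernet FFN dimension of 3072 and the first expert's 2048 are assumed values, since the text only specifies the second expert's 1024.

    import torch

    # Supernet weights for the largest design: 4 experts, embed 640.
    supernet_router = torch.randn(4, 640)                        # (experts, embed)
    supernet_w_in = [torch.randn(3072, 640) for _ in range(4)]   # per-expert Input
    supernet_w_out = [torch.randn(640, 3072) for _ in range(4)]  # per-expert Output

    def extract_subnet(n_experts, embed, ffn_dims):
        """Slice front rows/columns of the Supernet weights for a sampled subnet."""
        router = supernet_router[:n_experts, :embed]
        experts = []
        for e, d in enumerate(ffn_dims):
            w_in = supernet_w_in[e][:d, :embed]    # first d rows, embed columns
            w_out = supernet_w_out[e][:embed, :d]  # first embed rows, d columns
            experts.append((w_in, w_out))
        return router, experts

    # Two-expert subnet: FFN dims [2048 (assumed), 1024 (from the text)].
    router, experts = extract_subnet(n_experts=2, embed=512, ffn_dims=[2048, 1024])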
Searching for Efficient MoE Subnetwork with Computational Constraint
AutoMoE search is based on an evolutionary algorithm that takes the hardware computational constraint (e.g., CPU latency ≤ 600ms) as input and aims to identify the MoE subnetwork from the Supernet that achieves maximum accuracy for the task while satisfying the constraint. The algorithm works by sampling an initial set of MoE candidate architectures randomly from the Supernet, then evolving the top architectures iteratively by mutation, followed by crossover, until the search iterations are exhausted. Candidate MoE architectures are easily ranked by the Supernet performance estimator based on the validation score for the task. The latency estimate for each architecture is obtained by measuring the latency directly on the target device. The standard approach measures gold latency by forward propagation of a batch of examples for a large number (e.g., 300) of passes, and then computes the truncated mean (after removing the bottom and top 10% outlier latencies). This latency estimation can be costly given the large space of candidate architectures. To overcome this challenge, AutoMoE uses partially gold latency, which is obtained by forward propagation of a batch of examples for a small number (e.g., 100) of passes, followed by computing the truncated mean. After the search is completed, the MoE architecture with the highest performance is selected as the optimal one.
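A self-contained toy sketch of the search loop and the truncated-mean latency estimate follows. The helpers run_forward, fitness (standing in for the Supernet validation-loss estimator), mutate, crossover, and sample_subnet are stubs introduced here for illustration, not AutoMoE's actual implementation.

    import random, statistics, time

    # Stubs standing in for the real system (assumptions, illustration only).
    def run_forward(subnet): time.sleep(0.001)       # one translation pass
    def fitness(subnet): return random.random()      # Supernet validation loss
    def mutate(subnet): return dict(subnet)          # perturb one search dimension
    def crossover(a, b): return dict(a)              # mix two parent subnets
    def sample_subnet(): return {"decoder_layers": random.choice([3, 4, 5, 6])}

    def measure_latency(subnet, n_passes=100, trim=0.1):
        """Partially gold latency: truncated mean over n_passes forward passes."""
        times = []
        for _ in range(n_passes):
            start = time.perf_counter()
            run_forward(subnet)
            times.append(time.perf_counter() - start)
        times.sort()
        k = int(len(times) * trim)                   # drop bottom/top 10% outliers
        return statistics.mean(times[k:len(times) - k])

    def evolutionary_search(constraint_s=0.6, iters=15, pop=125, n_parents=25):
        population = [sample_subnet() for _ in range(pop)]
        for _ in range(iters):
            valid = [s for s in population if measure_latency(s) <= constraint_s]
            parents = sorted(valid, key=fitness)[:n_parents]  # lower loss is better
            population = (parents
                          + [mutate(random.choice(parents)) for _ in range(50)]
                          + [crossover(*random.sample(parents, 2)) for _ in range(50)])
        return min(population, key=fitness)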
Training Efficient MoE Sub-Transformer
Once the optimal MoE architecture is identified, we train the model weights for the final architecture to convergence for the same number of training steps as our baseline models for a fair comparison.

Datasets. We use the pre-processed datasets and evaluation setup from HAT (Wang et al., 2020); the datasets are presented in Table 3. We report BLEU score (Papineni et al., 2002) as the performance metric, with a beam of size 5 and a length penalty of 0.6 (for WMT).

Baselines. We compare AutoMoE against both manually designed and NAS-searched architectures.
For manual baselines, we consider: (a) densely activated Transformers (Vaswani et al., 2017) with no experts; (b) sparsely activated MoE with homogeneous experts (i.e., same number and FFN size) placed in every other layer (Fedus et al., 2022b; Kim et al., 2021; Zuo et al., 2022; Du et al., 2022; Artetxe et al., 2021).
For NAS baselines, we consider (c) HAT , which is a Supernet-based state-of-theart NAS framework for identifying efficient dense sub-Transformers for neural machine translation (same task setting as ours); and (d) Evolved Transformer (So et al., 2019) which is one of the earlier works on finding efficient dense sub-Transformers with evolution-based architecture search. Note that both the NAS baselines apply only to dense non-MoE transformers, and AutoMoE is the first work to leverage NAS to identify efficient sparse MoE subtransformers. Finally, we consider (e) AutoMoE with Random Search (typically treated as a strong baseline for NAS) that samples an MoE subnetwork (given latency constraints) randomly from AutoMoE search space and trains it till convergence. Training configurations and search space. All the baselines and AutoMoE including the Supernet and final model are trained with the same setting for fair comparison. All the models are trained for 40K steps, with a warmup of 10K steps from 10 −7 to 10 −3 and use cosine annealing to 10 −7 for the rest of the steps. All models are trained using fairseq toolkit (Ott et al., 2019) with an effective batch size of 524K tokens on 16 V100 GPUs. All the NAS baselines have the same search space for dense Transformer modules (e.g., number of decoder layers, q-k-v dimension, attention heads, etc.) with AutoMoE further incorporating MoE relevant aspects (e.g., experts, gating, routing, etc.) in the search space. The number of encoder layers is kept fixed for all the NAS baselines including AutoMoE since the latency is primarily determined by the decoders for autoregressive generation (as we discuss in Section 5.2). Evolutionary search setup. For performance estimation, we monitor the validation loss of subnets on the NMT task. We compute latency by measuring the time taken to perform translation from a source sentence to a target sentence with same desired input/output length (30 for WMT) and original beam settings (see Section 4) on target device (Intel Xeon CPU). We measure latency 300 times for gold (to report final metrics) and 100 times for partially gold (during evolutionary search) respectively; discard top and bottom 10% (outlier latency) and compute mean of the rest. Hyper-parameter settings for evolutionary search include: 15 as iterations, 125 as population size, 25 as parents' size, 50 as mutation population size with mutation probability of 0.3 and 50 as crossover population size. Unless otherwise stated, latency constraint for all experiments is set to 600ms. Table 4 presents a comparison of AutoMoE with baselines on several computational metrics and task performance. We report the number of parameters without embedding weights, and FLOPs without the last decoding layer for all the models, consistent with evaluation. AutoMoE-generated sparse MoE sub-Transformers obtain 4× reduction in FLOPs over both manually designed (densely activated) Transformer-Big, and (sparsely activated) MoE SwitchTransformer-Big with experts in every layer, and equivalent inference speedups on CPU. Compared to NAS baselines like Evolved Transformer (So et al., 2019) and HAT that generate densely activated sub-Transformers, AutoMoE improves on FLOPs and latency by 2.4× and 1.3× respectively with parity in BLEU score on aggregate. Notably, Supernet-based AutoMoE and HAT have massively reduced amortized training cost (GPU hours) com-pared to Evolved Transformer with progressive evolutionary search. 
AutoMoE with Random Search, a strong NAS baseline, obtains the best speedup but with significant performance regression.
Results
AutoMoE vs. Baseline Performance
Compared to all other models (both dense and sparse), we observe AutoMoE to generate networks with high sparsity resulting in massively reduced active parameters and FLOPs. For the NAS models, we train the top-2 sub-Transformers in the Pareto and report the one with the best trade-off in BLEU vs. FLOPs on the validation set. Maximum experts for the best performance vary for different tasks, with 6 experts for WMT'14 En-De, 16 experts for WMT'14 En-Fr and WMT'19 En-De -given the latter two datasets are 10× larger than the former.
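The trimmed-mean latency protocol described in the evolutionary search setup above can be sketched in a few lines. This is a minimal illustration under our own naming (generate_fn stands in for one fairseq translation call), not the released implementation:

import time

def measure_latency(generate_fn, n_runs=300, trim_frac=0.1):
    # Time generate_fn n_runs times (300 for final metrics, 100 during search),
    # discard the fastest and slowest trim_frac fraction as outliers,
    # and return the mean of the remaining measurements.
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate_fn()  # e.g., translate one fixed-length WMT source sentence
        times.append(time.perf_counter() - start)
    times.sort()
    k = int(len(times) * trim_frac)
    kept = times[k:len(times) - k]
    return sum(kept) / len(kept)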
5.2 Analysis
Decoder layers vs. FLOPs. Figure 3 (a) shows the average FLOPs of several AutoMoE architectures with different numbers of decoder layers (varying from 3 to 6), as obtained from the Pareto during our search, together with the baseline models. Notice that FLOPs increase with the number of decoder layers, given the auto-regressive nature of NMT tasks, which require generating tokens sequentially. In contrast to manually designed Transformers with 6 decoder layers (both dense and sparsely activated MoE variants), AutoMoE- and HAT-searched architectures reduce the number of decoder layers, with a resulting decrease in both FLOPs and latency. This is also evident in Figure 3 (e), which shows that decoder latency dominates the total inference latency for all models, by more than 90%.

Expert distribution in encoder vs. decoder. Figure 3 (b) plots the number of encoder experts as a ratio of total experts for AutoMoE-generated sub-Transformers. We observe that AutoMoE assigns a significantly larger number of experts to the encoder as compared to the decoder. As a result, encoders have much higher capacity (i.e., encoder parameters as a proportion of overall parameters) than decoders. This correlates with the earlier observation that models with more encoder layers than decoder layers enjoy a better latency-performance trade-off (Kasai et al., 2021). Our findings from AutoMoE-designed architectures indicate that the number of layers and the number of experts are two knobs that jointly help modulate encoder capacity and decoder latency to design efficient MoE.

Expert distribution in different layers. Figures 3 (c) and (d) show the percentage of experts allocated to different layers for encoders and decoders, averaged over several sampled architectures from the AutoMoE Supernet. Notice that the middle encoder layers (3rd, 5th) are allocated the maximum number of experts, while the first layer receives the least. The trend reverses for the decoder, with the first layer receiving the most experts and a gradual reduction in expert allocation thereafter. This is also consistent with keeping decoders light by dropping layers to reduce latency, while compensating for the reduced capacity with increased experts in the first few layers.

Latency vs. FLOPs as constraint for search. Table 6 presents the impact of latency and FLOPs as computational constraints on the performance-efficiency trade-off. Constraining FLOPs results in models that fully exhaust the FLOPs budget, while leading to higher latency. On the other hand, constraining latency tends to under-utilize the budget, leading to relatively superior FLOPs and latency and providing stricter control.

Pareto-optimal AutoMoE-generated MoE architectures. Table 5 shows sparsely activated MoE architectures designed by two variants of AutoMoE ('std-expert': expert FFN size the same within each layer and variable across layers; 'fract-expert': fully heterogeneous expert sizes) for different datasets with the best trade-off in BLEU vs. latency. On aggregate, 71% of the experts are allocated to the encoder as compared to the decoder. Meanwhile, 70% of the expert layers in 'fract-expert' architectures have 2 or more experts, out of which more than 75% have experts of varying capacities (i.e., experts with different FFN intermediate sizes). Figures 4, 5 and 6 in the Appendix show the full architectures (embedding size, layers, heads, experts, placement, sizes, etc.) of AutoMoE subnets on WMT'14 En-De, WMT'14 En-Fr and WMT'19 En-De, respectively.

MoE search space variations. Table 7 presents the impact of search space choices on the MoE efficiency and performance trade-off. The first variation makes '#Encoder Layers' an elastic search dimension. Note that both HAT and AutoMoE consider the number of encoder layers to be fixed (refer to Table 2). We observe that varying the number of encoder layers has a relatively poor trade-off between model performance and efficiency compared to varying decoder layers, reinforcing our prior observations on the importance of encoder capacity and depth.
In the second variation (see third major row), we fix the expert architecture (with 2 experts manually placed uniformly) in the search space and only search for standard Transformer hyper-parameters. Observe that AutoMoE-designed models have better FLOPs than such manually designed ones.
The last variation introduces identity or dummy experts (i.e., experts with an intermediate FFN size of 0, equivalent to an identity operation). This explores the idea that we can skip the computation for some of the tokens based on context, rather than always forcing them through an FFN. We observe that identity experts marginally hurt performance but significantly reduce FLOPs (see the last major row); a minimal sketch of such a layer follows.
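To make variable-size and identity experts concrete, the following sketch (our PyTorch illustration, not the AutoMoE implementation) builds one MoE layer where each expert may have a different intermediate FFN size, a size of 0 degenerates to an identity expert, and each token is routed to its top-1 expert:

import torch
import torch.nn as nn

class HeterogeneousMoELayer(nn.Module):
    def __init__(self, d_model, expert_ffn_sizes):
        super().__init__()
        # expert_ffn_sizes, e.g. [2048, 1024, 0]: a size of 0 means identity expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, h), nn.ReLU(), nn.Linear(h, d_model))
            if h > 0 else nn.Identity()
            for h in expert_ffn_sizes
        ])
        self.router = nn.Linear(d_model, len(expert_ffn_sizes))

    def forward(self, x):                        # x: (n_tokens, d_model)
        gate = torch.softmax(self.router(x), dim=-1)
        top1 = gate.argmax(dim=-1)               # top-1 routing per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                # scale by the gate value so the router stays differentiable
                out[mask] = expert(x[mask]) * gate[mask, i].unsqueeze(-1)
        return out

Scaling the expert output by the gate probability mirrors standard Switch-style top-1 routing, which is the routing protocol assumed throughout this paper.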
Conclusion
AutoMoE is the first framework to design heterogeneous MoE's under computational constraints. It supports adaptive computation, i.e., variable compute for different inputs, via variable-size experts. It leverages NAS to explore a heterogeneous search space with a variable number of experts, expert sizes, and placement choices, alongside other standard Transformer architectural hyper-parameters. AutoMoE-generated MoE subnetworks reduce FLOPs and latency over both manually designed and NAS-searched architectures on benchmark MT tasks.
Limitations
Given our focus on finding efficient MoE models under computational constraints, the AutoMoE search space and evaluation have been restricted in scale to big-sized Transformer models on benchmark MT tasks. A natural extension of this work is to explore the limits of MoE models like SwitchTransformers (Fedus et al., 2022b) and GShard (Lepikhin et al., 2020), which are significantly larger, containing billions to trillions of parameters, as well as designing sparse and transferable efficient expert models (Zoph et al., 2022) for diverse types of tasks like reasoning, summarization and understanding.
The limitations of this work are as follows:
1. Sandwich sampling (Yu et al., 2019), in-place knowledge distillation (Yu et al., 2019), and gradient conflict reduction (Gong et al., 2022) are popular techniques to improve the training procedure of a Supernet. It would be interesting to study the impact of these techniques on AutoMoE's Supernet.
2. AutoMoE uses the hidden dimension of intermediate feedforward network (FFN) to modulate the capacity of each expert. It would be interesting to study other techniques to modulate expert capacity such as stacking variable number of hidden layers in FFN.
3. The backbone of AutoMoE's supernet uses Switch Transformer, which adds FFN based expert layers and routes each token to exactly one expert (top-1 routing). It would be interesting to: (i) search for the number of tokens to route, and (ii) search for the Transformer component (e.g., FFN, self-attention projection layers, LayerNorm) to add expert layers.
4. AutoMoE's search space contains classical Transformer components such as multi-head attention and FFN layers. It would be interesting to add components that are efficient by design, such as convolutional layers, FLASH (Hua et al., 2022), and gMLP (Liu et al., 2021).
Figure 1: AutoMoE Framework. (1) Heterogeneous MoE with variable dimensions for dense Transformer blocks and sparsely activated expert modules.

Figure 2: Weight sharing in the MoE Supernet for sparsely activated expert modules.

Figure 3: Architecture analysis for AutoMoE-generated MoEs. We sample several architectures from the Pareto for AutoMoE subnets, and report aggregate statistics in terms of the impact on different computational metrics.
Attributes | AutoMoE | Transformer Base / Big
Encoder-Embedding-Size | {512, 640} | 512 / 1024
Decoder-Embedding-Size | {512, 640} | 512 / 1024
#Encoder-Layers | {6} | 6
#Decoder-Layers | {1, 2, 3, 4, 5, 6} | 6
Encoder-QKV-Dim | {512} | 512 / 1024
Decoder-QKV-Dim | {512} | 512 / 1024
#Encoder-Self-Att-Heads (PL) | {4, 8} | 8 / 16
#Decoder-Self-Att-Heads (PL) | {4, 8} | 8 / 16
#Decoder-Cross-Att-Heads (PL) | {4, 8} | 8 / 16
#Decoder-Arbitrary-Att (PL) | {-1, 1, 2} | -1
Encoder-FFN-Intermediate-Size (PL, PE) | {1024, 2048, 3072} | 2048 / 4096
Decoder-FFN-Intermediate-Size (PL, PE) | {1024, 2048, 3072} | 2048 / 4096
#Encoder-Experts (PL) | {1, 2, ..., M} | -
#Decoder-Experts (PL) | {1, 2, ..., M} | -

Table 2: Search space of AutoMoE compared to manually configured Transformer Base / Big. 'PL' and 'PE' refer to per-layer and per-expert search dimensions. Decoder arbitrary attention searches over the last k encoder layers to attend to for each decoder layer. FFN size varies across layers and experts. M denotes the maximum number of experts per layer.
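For illustration, the Table 2 search space can be written down as a plain choice dictionary from which architecture "genes" are sampled; this encoding is our own assumption of a plausible representation, not AutoMoE's actual data structure.

import random

M = 6  # maximum experts per layer (dataset dependent: 6 for WMT'14 En-De, 16 otherwise)

SEARCH_SPACE = {
    "encoder_embed_dim": [512, 640],
    "decoder_embed_dim": [512, 640],
    "encoder_layers": [6],                    # kept fixed
    "decoder_layers": [1, 2, 3, 4, 5, 6],
    "encoder_qkv_dim": [512],
    "decoder_qkv_dim": [512],
    # per-layer (PL) / per-expert (PE) dimensions; for brevity this sketch
    # samples one value per key, while the real search samples per layer/expert
    "encoder_self_attn_heads": [4, 8],
    "decoder_self_attn_heads": [4, 8],
    "decoder_cross_attn_heads": [4, 8],
    "decoder_arbitrary_attn": [-1, 1, 2],     # attend to the last-k encoder layers
    "encoder_ffn_dim": [1024, 2048, 3072],
    "decoder_ffn_dim": [1024, 2048, 3072],
    "encoder_experts": list(range(1, M + 1)),
    "decoder_experts": list(range(1, M + 1)),
}

def sample_gene(space=SEARCH_SPACE):
    # Draw one random architecture ("gene") from the search space.
    return {name: random.choice(choices) for name, choices in space.items()}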
We build on Supernet training techniques in Neural Architecture Search that were developed for standard non-MoE architectures. We extend Supernet training to the search space for MoE's by incorporating experts, gating and routing protocols. Typically, a Supernet consists of thousands of subnetworks that are all jointly trained via weight-sharing.

The Supernet for AutoMoE is the largest sparsely activated MoE in the search space. It consists of the maximum number of experts (M) placed in every layer of the Transformer, in both encoder and decoder. Each expert FFN has the maximum intermediate hidden size in the search space. Similar principles apply to the non-expert dense modules, which are initialized with the corresponding full dimensions.

Such a weight-sharing technique allows us to design heterogeneous MoE architectures with a varying number of experts in each Transformer layer. AutoMoE also extracts front rows and front columns from the weight matrices of each FFN expert in the Supernet, corresponding to the subnet design (sketched in code below). For the previous example, assume the intermediate FFN size of each expert in the Supernet to be 3072 (the weight matrix of the first FFN layer has shape 3072 × 640 and that of the second FFN layer 640 × 3072). Assume the sampled subnet is designed for 2 experts, with the intermediate FFN size of one expert being 2048 and that of the other 1024. For the first expert, the subnet weight matrices of shape 2048 × 512 (input) and 512 × 2048 (output) are extracted from the first 2048 rows and 512 columns (input) and the first 512 rows and 2048 columns (output) of the corresponding Supernet weights.
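A minimal sketch of this front-rows/front-columns extraction, using the numbers from the worked example (PyTorch-style tensors; illustrative only, not the actual implementation):

import torch

# Supernet expert weights at maximum dimensions (3072 x 640 and 640 x 3072)
W_in_super = torch.randn(3072, 640)    # first FFN layer of one expert
W_out_super = torch.randn(640, 3072)   # second FFN layer of the same expert

def extract_expert(W_in, W_out, embed_dim, ffn_dim):
    # Slice front rows/columns to obtain a subnet expert with the
    # sampled embedding size and intermediate FFN size.
    W_in_sub = W_in[:ffn_dim, :embed_dim]     # e.g., 2048 x 512 (input)
    W_out_sub = W_out[:embed_dim, :ffn_dim]   # e.g., 512 x 2048 (output)
    return W_in_sub, W_out_sub

# Two sampled experts of different capacities share the same Supernet weights:
expert1 = extract_expert(W_in_super, W_out_super, embed_dim=512, ffn_dim=2048)
expert2 = extract_expert(W_in_super, W_out_super, embed_dim=512, ffn_dim=1024)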
(e.g., 640)
Sampled Embedding Size
(e.g., 512)
Max. Experts (e.g., 4)
Sampled Experts (e.g., 3)
Input
Output
SubTransformer
Weight for
Router
(a) Router
Max. Embedding Size (e.g., 640)
Max. Expert FFN
Intermediate Size (e.g.,
3072)
Sampled Embedding Size
(e.g., 512)
Sampled Expert FFN
Intermediate Size
(e.g., 2048)
SubTransformer
Weight for
Input FFN-1
Input
Output
Max. Expert FFN
Intermediate Size (e.g., 3072)
Max. Embedding Size
(e.g., 640)
Expert FFN Intermediate Size
(e.g., 2048)
Sampled
Embedding Size
(e.g., 512)
SubTransformer
Weight for
Output FFN-1
Input
Output
Max. Embedding Size (e.g., 640)
Max. Expert FFN
Intermediate Size (e.g.,
3072)
Sampled Embedding Size
(e.g., 512)
Sampled Expert
FFN Intermediate
Size (e.g., 1024)
SubTransformer
Weight for Input
FFN-2
Input
Output
Max. Expert FFN
Intermediate Size (e.g., 3072)
Max. Embedding Size
(e.g., 640)
Expert FFN
Intermediate Size
(e.g., 1024)
Sampled
Embedding Size
(e.g., 512)
SubTransformer
Weight for
Output FFN-2
Input
Output
(b) Experts (e.g., 2 FFN experts)
Table 3: Machine translation benchmark data.

4 Experiments
Datasets and evaluation metrics. We evaluate AutoMoE on standard machine translation benchmarks: WMT'14 En-De, WMT'14 En-Fr and WMT'19 En-De, with dataset statistics in Table 3.
Dataset | Network | Type | #Active Params (M) | Sparsity (%) | FLOPs (G) | BLEU | GPU hours | Latency (ms)

WMT'14 En-De
Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 28.4 | 184 | 2199 (1×)
SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 28.8 | 236 | -
Evolved Transformer | NAS over Dense | 47 | 0 | 2.9 (3.7×) | 28.2 | 2,192,000 | -
HAT | NAS over Dense | 56 | 0 | 3.5 (3×) | 28.2 | 264 | 669 (3.3×)
Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 27.3 | 126 | 416 (5.3×)
AutoMoE (6 Experts) | NAS over Sparse | 45 | 62 | 2.9 (3.7×) | 28.2 | 224 | 504 (4.4×)

WMT'14 En-Fr
Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 41.2 | 240 | 2199 (1×)
SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 42.3 | 234 | -
Evolved Transformer | NAS over Dense | 175 | 0 | 10.8 (1×) | 41.3 | 2,192,000 | -
HAT | NAS over Dense | 57 | 0 | 3.6 (2.9×) | 41.5 | 248 | 723 (3×)
Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 40.3 | 130 | 416 (5.3×)
AutoMoE (6 Experts) | NAS over Sparse | 46 | 72 | 2.9 (3.7×) | 41.6 | 236 | 547 (4×)
AutoMoE (16 Experts) | NAS over Sparse | 135 | 65 | 3.0 (3.5×) | 41.9 | - | 672 (3.3×)

WMT'19 En-De
Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 46.1 | 184 | 2199 (1×)
SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 47.0 | 223 | -
HAT | NAS over Dense | 63 | 0 | 4.1 (2.6×) | 45.8 | 264 | 758 (2.9×)
Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 43.7 | 126 | 416 (5.3×)
AutoMoE (2 Experts) | NAS over Sparse | 45 | 41 | 2.8 (3.8×) | 45.5 | 248 | 558 (3.9×)
AutoMoE (16 Experts) | NAS over Sparse | 69 | 81 | 3.2 (3.3×) | 45.9 | - | 656 (3.3×)

Table 4: Comparison of AutoMoE vs. baselines, with Pareto-optimal architectures highlighted in blue color. We report active model parameters, and sparsity measured as non-active parameters as a percentage of total parameters. We train all baselines and AutoMoE for the same 40K training steps for a fair comparison when reporting BLEU [1]. Training time (with search, if applicable) is reported in hours on one Nvidia V100 GPU. Inference latency is measured on an Intel Xeon CPU. AutoMoE significantly reduces FLOPs and latency with parity in BLEU, on aggregate, over NAS methods in the dense search space (e.g., 1.3× and 2.4× FLOPs reduction and speedup over HAT and Evolved Transformer). AutoMoE with Random Search obtains the best speedup but results in a significant regression in BLEU.

[1] We use the same hyper-parameters for all models with no tuning (provided in code). Given 40K training steps for each model and no tuning, MoE numbers may not be comparable to SOTA numbers, which typically train for more steps. HAT and Evolved Transformer numbers are reported from HAT (Wang et al., 2020); we follow their evaluation and reporting protocol.
Table 5: AutoMoE-generated Pareto-optimal architectures for different datasets. FFN intermediate sizes for fractional experts (i.e., varying expert sizes within each layer) are enclosed within square brackets.

Table 6: Impact of latency and FLOPs constraints on the WMT'14 En-Fr dataset. Latency is computed on one NVIDIA V100 GPU.
Search Space Variation | BLEU | FLOPs
HAT | 28.2 | 3.5G
AutoMoE (2 Experts) w/ fixed encoder layers | 28.2 | 2.9G
Varying number of encoder layers:
HAT w/ #Encoder-Layers ∈ {1-6} | 28.1 | 3.4G
AutoMoE (2 Experts) w/ #Encoder-Layers ∈ {1-6} | 28.3 | 3.7G
AutoMoE (2 Experts) w/ manually designed homogeneous experts:
1-2-1-2-1-2 | 28.3 | 3.5G
1-1-1-2-2-2 | 28.3 | 3.8G
2-2-2-1-1-1 | 28.3 | 3.1G
AutoMoE w/ Identity Expert, FFN size ∈ {0, 3072} | 28.1 | 2.7G

Table 7: Variations in AutoMoE's search space on the WMT'14 En-De dataset.
A Appendix

A.1 Full Architecture Design

Figures 4, 5 and 6 present the full architecture designs of the Pareto-efficient architectures generated by AutoMoE.

A.2 Evolutionary Search - Stability

We study the effect of initialization on the stability of the Pareto front outputted by the evolutionary search for HAT. Table 8 displays sampled (direct) BLEU and latency of the models in the Pareto front for different seeds on the WMT'14 En-Fr task. The differences in latency and BLEU across seeds are mostly marginal. This result highlights that the Pareto front outputted by the evolutionary search is largely stable for HAT.

A.3 Evolutionary Search - Algorithm

We present the pseudo-code of the evolutionary search algorithm proposed by HAT in Algorithm 1. This algorithm is also adopted by AutoMoE.

Algorithm 1 Evolutionary search algorithm for neural architecture search
Input: supernet, latency-predictor, num-iterations, num-population, num-parents, num-mutations, num-crossover, mutate-prob, latency-constraint
Output: best-architecture
...
for mi ← 1 to num-mutations do
6:  cur-mutate-gene ← mutate a random example from popu with mutation probability mutate-prob
7:  if cur-mutate-gene satisfies latency-constraint via latency-predictor then
...
if cur-crossover-gene satisfies latency-constraint via latency-predictor then
13:  cur-crossover-popu = cur-crossover-popu ∪ cur-crossover-gene
...

Figure 6: AutoMoE-generated architecture for WMT'19 En-De.
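Since only fragments of Algorithm 1 survive above (steps 6-7 and 13), the following sketch reconstructs the surrounding HAT-style loop around those surviving lines. The initialization, ranking, and crossover-sampling details, and the helpers sample_valid_genes, mutate, crossover and validation_loss, are our assumptions rather than the published pseudo-code:

import random

def evolutionary_search(supernet, latency_predictor, num_iterations,
                        num_population, num_parents, num_mutations,
                        num_crossover, mutate_prob, latency_constraint):
    # Assumed elided step: seed the population with random genes that
    # satisfy the latency constraint.
    popu = sample_valid_genes(supernet, latency_predictor,
                              num_population, latency_constraint)
    best = None
    for _ in range(num_iterations):
        # Assumed elided step: rank by validation loss, keep top parents.
        popu.sort(key=lambda g: supernet.validation_loss(g))
        parents = popu[:num_parents]
        best = parents[0]
        mutate_popu = []
        while len(mutate_popu) < num_mutations:               # surviving step 6
            gene = mutate(random.choice(popu), mutate_prob)
            if latency_predictor(gene) <= latency_constraint:  # surviving step 7
                mutate_popu.append(gene)
        crossover_popu = []
        while len(crossover_popu) < num_crossover:
            gene = crossover(random.sample(parents, 2))
            if latency_predictor(gene) <= latency_constraint:  # surviving step 13
                crossover_popu.append(gene)
        popu = parents + mutate_popu + crossover_popu
    return best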
References

Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, and Ves Stoyanov. 2021. Efficient large scale language modeling with mixtures of experts. CoRR, abs/2112.10684.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2020. Once for all: Train one network and specialize it for efficient deployment. In International Conference on Learning Representations.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, and Xiaodan Liang. 2021. EfficientBERT: Progressively searching multilayer perceptron via warm-up knowledge distillation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1424-1437, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P Bosma, Zongwei Zhou, Tao Wang, Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. GLaM: Efficient scaling of language models with mixture-of-experts. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5547-5569. PMLR.

William Fedus, Jeff Dean, and Barret Zoph. 2022a. A review of sparse expert models in deep learning. arXiv:2209.01667.

William Fedus, Barret Zoph, and Noam Shazeer. 2022b. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1-39.

Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, Philip L. H. Yu, Xiaodan Liang, Xin Jiang, and Zhenguo Li. 2022. AutoBERT-Zero: Evolving BERT backbone from scratch. In Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022), pages 10663-10671. AAAI Press.

Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, and Vikas Chandra. 2022. NASViT: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In International Conference on Learning Representations.

Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. 2020. Single path one-shot neural architecture search with uniform sampling. In Computer Vision - ECCV 2020, Proceedings, Part XVI, volume 12361 of Lecture Notes in Computer Science, pages 544-560. Springer.

Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le. 2022. Transformer quality in linear time. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 9099-9117. PMLR.

Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz L. Religa, Caio C. T. Mendes, Gustavo H. de Rosa, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey. 2022. LiteTransformerSearch: Training-free on-device search for efficient autoregressive language models. arXiv:2203.02094.

Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In International Conference on Learning Representations.

Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andrés Felipe Cruz-Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, and Hany Hassan Awadalla. 2021. Scalable and efficient MoE training for multitask multilingual models. CoRR, abs/2109.10465.

Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. 2021. Beyond distillation: Task-level mixture-of-experts for efficient inference. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3577-3599, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. GShard: Scaling giant models with conditional computation and automatic sharding. CoRR, abs/2006.16668.

Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. BASE layers: Simplifying training of large, sparse models. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6265-6274. PMLR.

Hanxiao Liu, Zihang Dai, David So, and Quoc V Le. 2021. Pay attention to MLPs. In Advances in Neural Information Processing Systems, volume 34, pages 9204-9215. Curran Associates, Inc.

Rui Liu, Young Jin Kim, Alexandre Muzio, and Hany Hassan. 2022. Gating dropout: Communication-efficient regularization for sparsely activated transformers. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 13782-13792. PMLR.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. 2022. DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI scale. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 18332-18346. PMLR.

Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason E Weston. 2021. Hash layers for large sparse models. In Advances in Neural Information Processing Systems.

Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q. Tran, Yi Tay, and Donald Metzler. 2022. Confident adaptive language modeling. arXiv:2207.07061.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations.

David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5877-5886. PMLR.

David So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V Le. 2021. Searching for efficient transformers for language modeling. In Advances in Neural Information Processing Systems, volume 34, pages 6010-6022. Curran Associates, Inc.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, and Song Han. 2020. HAT: Hardware-aware transformers for efficient natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7675-7688, Online. Association for Computational Linguistics.

Dongkuan Xu, Subhabrata (Subho) Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed H. Awadallah, and Jianfeng Gao. 2022a. AutoDistil: Few-shot task-agnostic neural architecture search for distilling large language models. ArXiv.

Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. 2021. NAS-BERT: Task-agnostic and adaptive-size BERT compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, pages 1933-1943, New York, NY, USA. Association for Computing Machinery.

Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian Li. 2022b. Analyzing and mitigating interference in neural architecture search. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 24646-24662. PMLR.

Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2021. AutoTinyBERT: Automatic hyper-parameter optimization for efficient pre-trained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5146-5157, Online. Association for Computational Linguistics.

Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. 2019. Slimmable neural networks. In International Conference on Learning Representations.

Seniha Esen Yuksel, Joseph N. Wilson, and Paul D. Gader. 2012. Twenty years of mixture of experts. IEEE Transactions on Neural Networks and Learning Systems, 23(8):1177-1193.

Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. ST-MoE: Designing stable and transferable sparse expert models. arXiv:2202.08906.

Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Jianfeng Gao, and Tuo Zhao. 2022. Taming sparsely activated transformer with stochastic experts. In International Conference on Learning Representations.
| [] |
[
"ON WEAK SOLUTIONS TO THE GEODESIC EQUATION IN THE PRESENCE OF CURVATURE BOUNDS",
"ON WEAK SOLUTIONS TO THE GEODESIC EQUATION IN THE PRESENCE OF CURVATURE BOUNDS"
] | [
"Moritz Reintjes ",
"Blake Temple "
] | [] | [] | We show that taking account of bounded curvature reduces the threshold regularity of connection coefficients required for existence and uniqueness of solutions to the geodesic equation, to L p loc , one derivative below the regularity W 1,p loc required if one does not take account of curvature, (L p loc for existence, W 1,p loc for uniqueness). Our argument is based on authors' theory of the RT-equations for regularizing connections to optimal regularity by coordinate transformation. The incoming regularity is too low to formulate a weak version of the geodesic equation based on the standard method of multiplying by smooth test functions and integrating by parts, so alternatively, we define weak solutions by coordinate transformation and we give an explicit procedure for mollifying the original connection such that the correct weak solution is indeed a limit of smooth solutions of the mollified equations in the original coordinates. This is an example where limits under suitable mollification are more fundamental than a weak formulation, indicative of more complicated PDE's in which the standard weak formulation of the equations does not adequately rule out unphysical solutions. Our results apply to general second order ODE's in which the lack of regularity can be isolated in the connection coefficients. The results apply to General Relativity. | null | [
"https://export.arxiv.org/pdf/2306.04868v1.pdf"
] | 259,108,433 | 2306.04868 | 2edc4afc431232348c55647f2270665aba58ecf8 |
ON WEAK SOLUTIONS TO THE GEODESIC EQUATION IN THE PRESENCE OF CURVATURE BOUNDS

Moritz Reintjes
Blake Temple

June 8, 2023
We show that taking account of bounded curvature reduces the threshold regularity of connection coefficients required for existence and uniqueness of solutions to the geodesic equation, to L^p_{loc}, one derivative below the regularity W^{1,p}_{loc} required if one does not take account of curvature (L^p_{loc} for existence, W^{1,p}_{loc} for uniqueness). Our argument is based on authors' theory of the RT-equations for regularizing connections to optimal regularity by coordinate transformation. The incoming regularity is too low to formulate a weak version of the geodesic equation based on the standard method of multiplying by smooth test functions and integrating by parts, so alternatively, we define weak solutions by coordinate transformation, and we give an explicit procedure for mollifying the original connection such that the correct weak solution is indeed a limit of smooth solutions of the mollified equations in the original coordinates. This is an example where limits under suitable mollification are more fundamental than a weak formulation, indicative of more complicated PDE's in which the standard weak formulation of the equations does not adequately rule out unphysical solutions. Our results apply to general second order ODE's in which the lack of regularity can be isolated in the connection coefficients. The results apply to General Relativity.
INTRODUCTION
We introduce a solvability condition sufficient to imply existence and uniqueness of solutions x = γ(t) to the initial value problem for the geodesic equation¹

$$\ddot{\gamma}^\mu + \Gamma^\mu_{\rho\nu}(\gamma)\,\dot{\gamma}^\rho\dot{\gamma}^\nu = 0, \qquad \gamma^\mu(t_0) = x^\mu_0, \quad \dot{\gamma}^\mu(t_0) = v^\mu_0, \tag{1.1}$$

when connection components are only in L^p, one derivative less regular than the standard ODE theory requires, a regularity too low to restrict connections to curves and make sense of weak solutions of (1.1) in a standard way. Our improvement is obtained by writing the equations in coordinates where the coefficients are more regular. For this, based on authors' prior work [21,23], it suffices to take account of the regularity of components of the Riemann curvature, Riem(Γ) ∈ L^p for existence and Riem(Γ) ∈ W^{1,p} for uniqueness of solutions to (1.1), and no other a priori information about the geometry associated with Γ need be assumed.²

To start, assume that the connection components Γ ≡ Γ_x ≡ Γ^μ_{ρν}(x) ∈ L^p(Ω) are arbitrary given real valued functions of x = (x^1, ..., x^n) ∈ Ω ⊂ R^n, Ω open, n ≥ 2. At this level of generality, the component functions Γ^μ_{νρ}(x), together with an atlas of coordinate transformations and the transformation law for connections, are sufficient to define a unique affine connection Γ on the tangent bundle of an n-dimensional manifold M, and even though the associated geometry could be non-metric and highly degenerate, this alone is sufficient to define the Riemann curvature tensor Riem(Γ) associated with Γ, to which our theory here applies. Thus, from the point of view of ODE theory, we can interpret the geometry as a device for formulating a solvability condition for general systems of nonlinear equations of form (1.1), whether or not the underlying geometry is of interest in its own right.

E-mail addresses: [email protected], [email protected].
¹ We use standard tensor notation; indices μ, ν, ρ, ... run from 1 to n, repeated up-down indices are summed over, etc. (see for example [9]).
Note that equation (1.1) does not admit a weak formulation based on multiplying by smooth test functions and integrating by parts, because the connection Γ ∈ L^p is of too low a regularity to restrict to curves γ(t). We here introduce an alternative formulation of the equations based on coordinate transformation, which is equivalent for smooth Γ, and hence suffices as a weak formulation when Γ is of low regularity. The idea is that for smooth Γ, whenever we transform a solution γ_x(t) of (1.1) as a curve under a smooth coordinate transformation x → y, the theory of the covariant derivative implies that to recover the equivalent equation for the solution γ_y(t) in the new y-coordinates, it is sufficient for the connection coefficients Γ^μ_{ρν}(x) to transform by the connection transformation law. By this principle of equivalent equations, the transformation law for connections is invoked by the requirement that the transformed equations be equivalent for smooth enough Γ, independent of any geometry in the background. It follows that if we can find a coordinate transformation that sufficiently regularizes Γ, then the transformed equations can naturally be taken as providing the correct weak formulation when the untransformed Γ has too low a regularity to make sense of the equations in the untransformed coordinates. That is, if a low regularity Γ ∈ L^p admits a regularization under coordinate transformation x → y sufficient to define classical solutions γ_y(t) in the transformed coordinates (our purpose is to establish this here using the theory of the RT-equations [23,25]), then transforming the solution γ_y(t) back to the original x-coordinates as a curve, γ_x = y^{-1} ∘ γ_y, provides the correct notion of weak solution in the original coordinates. Thus, analogous to defining distributions, we use two formulations equivalent for smooth solutions to define weak solutions in a low regularity setting in which one formulation, but not the other, has sufficient regularity to define classical solutions. This principle works to define weak solutions for general second order systems of ODE's which are quadratic in first derivatives in the terms with coefficients of low regularity, i.e., equations of the form

$$\ddot{\gamma}^\mu + \Gamma^\mu_{\rho\nu}(\gamma)\,\dot{\gamma}^\rho\dot{\gamma}^\nu = K^\mu(t,\gamma,\dot{\gamma}), \tag{1.2}$$

where K^μ encodes the sufficiently regular terms in the equation.³ In this case, for smooth Γ and K, a smooth transformation of coordinates will transform Γ by the connection transformation law, and K^μ must transform as a vector, K^i = K^μ ∂y^i/∂x^μ, in order to get an equivalent equation of form (1.2) in y-coordinates. To see this, note that the first two terms in (1.2) comprise the components of the covariant derivative $\nabla_{\dot{\gamma}}\dot{\gamma}$, which transforms as a vector, so transforming K as a vector suffices to make (1.2) a covariant geometric equation. Thus when Γ ∈ L^p, if the map x → y and K^μ have sufficient regularity for the equations (Lipschitz continuity suffices) to admit strong solutions in y-coordinates, then this will provide the correct weak formulation in x-coordinates. To keep things simple here, we restrict to the case K^μ = 0.
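To record the chain-rule step behind this equivalence (our addition, using the notation above): for a smooth transformation y = y(x) and γ_y = y ∘ γ_x, differentiation gives

$$\dot{\gamma}_y^i = \frac{\partial y^i}{\partial x^\mu}\,\dot{\gamma}_x^\mu, \qquad \ddot{\gamma}_y^i = \frac{\partial y^i}{\partial x^\mu}\,\ddot{\gamma}_x^\mu + \frac{\partial^2 y^i}{\partial x^\rho \partial x^\nu}\,\dot{\gamma}_x^\rho\dot{\gamma}_x^\nu,$$

and substituting the connection transformation law (1.4) below yields

$$\ddot{\gamma}_y^i + (\Gamma_y)^i_{jk}\,\dot{\gamma}_y^j\dot{\gamma}_y^k = \frac{\partial y^i}{\partial x^\mu}\Big(\ddot{\gamma}_x^\mu + (\Gamma_x)^\mu_{\rho\nu}\,\dot{\gamma}_x^\rho\dot{\gamma}_x^\nu\Big),$$

so (1.2) holds in y-coordinates with K^i = K^μ ∂y^i/∂x^μ precisely when it holds in x-coordinates.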
It is well known that Peano's Theorem establishes Hölder continuity of the components Γ ≡ Γ^μ_{νρ}(x) ∈ C^{0,α}, 0 ≤ α < 1, as the threshold regularity for existence of classical solutions of (1.1) in x-coordinates.⁴ Uniqueness requires more regularity. Namely, the Picard-Lindelöf Theorem requires Lipschitz continuity of the connection components, Γ ∈ C^{0,1}. By Morrey's inequality, functions in W^{1,p} (the Sobolev space of functions with weak derivatives integrable in L^p) are Hölder continuous when p > n, W^{1,p} ⊂ C^{0,α} with α = 1 − n/p, and by Rademacher's Theorem C^{0,1} ≃ W^{1,∞}, c.f. [4]. So Γ ∈ W^{1,p} is a regularity sufficient for uniqueness of solutions to (1.1), and Γ ∈ W^{1,∞} is the threshold for uniqueness. Below these threshold regularities, there is no guarantee that the Picard iteration for solving (1.1) will converge to a solution, and prior approaches [26,28] relied on the Filippov theory for establishing existence of (non-unique) solutions in the sense of differential inclusions [16]. In this paper, for regularities below these thresholds, we establish convergence of the Picard iteration in modified coordinates, under natural conditions on the regularity of dΓ, or equivalently, Riem(Γ).
As authors pointed out in [22], a direct consequence of Riemann's idea to construct a tensorial measure of curvature is that the regularity of connections can range by coordinate transformation from one derivative above to one derivative below the regularity of the Riemann curvature, while keeping the regularity of the Riemann curvature fixed. This is based on the difference between the transformation law for components of the Riemann curvature tensor R^τ_{μνρ}(x) ≡ Riem(Γ_x),

$$R^\tau_{\ \mu\nu\rho}(x) = \frac{\partial x^\tau}{\partial y^\delta}\,\frac{\partial y^\alpha}{\partial x^\mu}\,\frac{\partial y^\beta}{\partial x^\nu}\,\frac{\partial y^\gamma}{\partial x^\rho}\, R^\delta_{\ \alpha\beta\gamma}(y), \tag{1.3}$$

and the transformation law for connections under a coordinate transformation x^μ → y^α with Jacobian J ≡ ∂y^α/∂x^μ and inverse J^{-1} ≡ ∂x^μ/∂y^α,

$$(\Gamma_x)^\mu_{\ \rho\nu} = \frac{\partial x^\mu}{\partial y^\alpha}\left(\frac{\partial y^\beta}{\partial x^\rho}\,\frac{\partial y^\gamma}{\partial x^\nu}\,(\Gamma_y)^\alpha_{\ \beta\gamma} + \frac{\partial}{\partial x^\rho}\frac{\partial y^\alpha}{\partial x^\nu}\right), \tag{1.4}$$

where Γ_x and Γ_y denote connection components represented in x- and y-coordinates, respectively (e.g., Γ_x ≡ (Γ_x)^μ_{ρν} ≡ (Γ_x)^μ_{ρν}(x), etc.). It follows from (1.3) and (1.4) that the regularity of connections and tensors, as measured in coordinates, is invariant under smooth coordinate transformations, but it is not invariant under low regularity transformations. In particular, the regularity of the components of the Riemann curvature tensor is invariant under the atlas of coordinate transformations with Jacobian of (at least) the regularity of the curvature, because the curvature components transform as a tensor by contraction with undifferentiated Jacobians. But, within the same atlas, the regularity of connection components is not invariant, due to the fact that (1.4) involves derivatives of the Jacobian. By this, in a given coordinate system, the regularity of connection components could be up to one derivative below the regularity of the components of its Riemann curvature. On the other hand, the authors' theory of the RT-equations asserts that one can always transform to coordinates in which connection components are one derivative more regular than curvature components (optimal regularity, c.f. [21,23,24]). Thus, in a given coordinate system, the regularity of connection components could range from one derivative above, to one derivative below, the regularity of the components of its Riemann curvature. It follows that, a priori, there is no geometric reason why the connection regularity in (1.1) could not lie anywhere within this range. We show in this paper that by taking account of the regularity of the curvature, the authors' theory of optimal regularity based on the Regularity Transformation (RT)-equations [21,23,24] can be applied to lower the threshold required for existence and uniqueness of (1.1) by one derivative, over and above that given by the standard theory of Peano and Picard-Lindelöf.

³ ... the Minkowski-force, [30].
⁴ Regularity of connections and tensors always refers to the regularity of their components in a given coordinate system.
Interestingly, our theory in [21,23,24], couched in the language of geometry, implies a new existence theory for ordinary differential equations of type (1.1), based on coordinate transformation to optimal connection regularity. Building on this, we show in Section 2.2 that, even though the regularity in the original x-coordinates is too low for the geodesic equation to admit a weak formulation (based on multiplying by smooth test functions and integrating by parts), solutions of (1.1) in x-coordinates exist as limits of smooth solutions of mollified equations obtained by mollifying the connection components Γ in a manner which maintains L^p regularity of the curvature in the zero mollification limit. This shows that for the geodesic equation the notion of weak solutions in the sense of approximations is more fundamental than the notion of distributional solutions of a weak formulation of the equations, and that, taking account of the geometry in the form of curvature, such weak solutions can be realized in a unique way. This raises the interesting question as to whether limits under some suitably constrained mollification might be more fundamental than weak formulations based on integration by parts, and we wonder whether this might hold some lessons for the problem of non-uniqueness for more complicated systems, like the 3-D Euler equations, c.f. [3].
Our main theorems are stated in Section 2, and their proofs are recorded in Section 5. In Section 4 we review authors' theory of the RT-equations, on which these proofs are based. In Section 3 we characterize explicitly the loss of connection regularity relative to the curvature under coordinate transformation in terms of the exterior derivative dΓ and its co-derivative δΓ.
STATEMENT OF RESULTS
2.1. Weak solutions by geometry. To state our basic existence and uniqueness results, we first need to introduce a proper notion of solutions to the geodesic initial value problem (1.1) in x-coordinates, given that Γ ∈ L^p has a regularity too low to restrict to curves, and hence too low for a standard weak formulation of the equations.
Definition 2.1. We call a curve γ ≡ γ_x a weak solution of (1.1) in x-coordinates if there exists a coordinate transformation x → y such that Γ_x → Γ_y transforms by the connection transformation law (1.4), Γ_y ∈ W^{1,p}, and the transformed curve γ_y ≡ y ∘ x^{-1}(γ_x) is a classical solution of (1.1) in y-coordinates.
The theory of connections in differential geometry implies that weak solutions in the sense of Definition 2.1 are well-defined, because the transformation law (1.4) for connections in W^{1,p} preserves geometric properties under coordinate transformation, including the invariance of geodesic curves.
Our first theorem addresses existence of solutions to the geodesic initial value problem (1.1) subject to bounded curvature.
Theorem 2.2. Assume Γ_x ∈ L^{2p}(Ω) and Riem(Γ_x) ∈ L^p(Ω), for p > n (or, equivalently, assume Γ_x ∈ L^{2p}(Ω) and dΓ_x ∈ L^p(Ω)). Then there exists a solution to the initial value problem (1.1) in the sense of Definition 2.1. More generally, there exists a solution to the initial value problem for (1.2) provided K is Hölder continuous.
Note that the threshold regularity for existence established by Theorem 2.2 lies one derivative below the Hölder continuity, Γ ∈ W^{1,p} ⊂ C^{0,α}, p > n, α = 1 − n/p, required by Peano's Theorem if one does not take account of the curvature. Uniqueness is more delicate, and requires more regularity on Riem(Γ), which might differ from the regularity of dΓ (c.f. Section 3 below).
Theorem 2.3. Assume Γ_x ∈ L^{2p}(Ω) and Riem(Γ_x) ∈ W^{1,p}(Ω), for p > n. Then there exists a unique solution to the initial value problem (1.1) in the sense of Definition 2.1. Moreover, if Γ_x, Riem(Γ_x) ∈ W^{1,p}(Ω), p > n, then there exists a unique solution to the initial value problem for (1.2) provided K is Lipschitz continuous.
The threshold regularity for uniqueness established by Theorem 2.3 lies below the Lipschitz continuity (Γ ∈ W^{1,∞} = C^{0,1}) required by the Picard-Lindelöf Theorem if one does not take account of the curvature. In particular, Theorem 2.3 implies as a corollary the uniqueness of solutions for Hölder continuous connections, provided their curvature is in W^{1,p}.
To put this into the context of General Relativity (GR), note that the connection is always precisely one derivative less regular than the metric by Christoffel's formula, so the above classical thresholds for existence and uniqueness can be equivalently expressed in terms of the metric tensor, requiring C^{1,α} metric regularity for existence of geodesics and C^{1,1} regularity for uniqueness. Lorentzian metrics of low regularity are central to various recent research programs in GR, ranging from relativistic shock waves [10,7,27], to Penrose's Strong Cosmic Censorship Conjecture [17,2,12,18], to the Hawking-Penrose Singularity Theorems [5,6,13,14,9], to Lorentzian Length Spaces [15]. Glimm scheme based shock wave solutions in General Relativity exhibit only Lipschitz continuous gravitational metrics [10,7,27], i.e., one derivative below the classical threshold for existence and uniqueness of geodesics, but Theorem 2.2 applies since the curvature is bounded in L^∞. It was the authors' attempt to understand the apparent singularities in GR solutions generated by Glimm's method that originally motivated the authors to develop the theory of the RT-equations in [20,21,22,23,24,25], the basis for the methods in this paper.
2.2. Weak solutions as a mollified limit. Theorems 2.2 and 2.3 use the theory of optimal regularity in geometry to define a weak solution of (1.1) in a setting where the equation is too weak to admit either strong solutions or solutions of a weak formulation of the equation based on integration by parts. Having established a notion of weak solution using a higher order theory, in this case geometry, it makes sense to ask in what sense the equations are satisfied in the original x-coordinates, where the problem is originally posed. In this subsection we show that our notion of weak solution provides an explicit description for a mollification Γ^ε_x of the original connection components Γ_x, so that Γ^ε_x → Γ_x, and the weak solutions defined by Definition 2.1 are obtained as limits of smooth solutions of the mollified equation (1.1) in x-coordinates as ε → 0. (Extending the construction in this section to equation (1.2) is straightforward.)
To begin, note that assuming only that the components of Γ_x are functions in L^p, we cannot expect the weak solutions identified in Theorems 2.2 and 2.3 to always be faithful limits of solutions to more regular equations obtained by arbitrarily mollifying Γ_x. Indeed, L^p functions are too weak to even restrict to curves, so in general, to realize solutions as regular limits, one would expect that mollifications need to be constrained in a manner which faithfully represents the missing physical information, in this case the transformation law for connections and our assumptions on the curvature of Γ. Here we prove that the RT-equations provide an explicit procedure for mollifying Γ_x component-wise to Γ^ε_x, so that (Γ^ε_x)^μ_{ρν} → (Γ_x)^μ_{ρν} in L^{2p}, and such that the weak solutions identified in Theorems 2.2 and 2.3 are the correct limits, in C^1, of solutions of the regularized equation (1.1) obtained by substituting Γ^ε_x for Γ_x and solving the regularized equations in x-coordinates; namely, C^1 limits of solutions of
\ddot{\gamma}^{\mu}_{\varepsilon} + (\Gamma^{\varepsilon}_x)^{\mu}_{\rho\nu}(\gamma_{\varepsilon})\,\dot{\gamma}^{\rho}_{\varepsilon}\dot{\gamma}^{\nu}_{\varepsilon} = 0, \qquad \gamma^{\mu}_{\varepsilon}(t_0) = x^{\mu}_0, \quad \dot{\gamma}^{\mu}_{\varepsilon}(t_0) = v^{\mu}_0.    (2.1)
To this end, recall that the authors' existence theory for the RT-equations provides the Jacobian J and transformed connection Γ_y such that J integrates to a coordinate transformation x → y, and transforming Γ_x to y-coordinates by the connection transformation law (1.4) yields Γ_y, a connection regular enough for the transformed geodesic equation (1.1) to admit existence and uniqueness of solutions, according to Theorems 2.2 and 2.3. The point is now that this in turn provides a procedure for smoothly mollifying Γ_x. Namely, let (Γ^ε_y)^α_{βγ} be a standard smooth mollification of (Γ_y)^α_{βγ} in y-coordinates, let x_ε ≡ x_ε ∘ y^{-1} be a smooth mollification of the coordinate transformation x ∘ y^{-1}, and let y_ε ≡ y ∘ x^{-1}_ε be its inverse, where we write x = x_ε(y) and y = y_ε(x) to distinguish the coordinates from the mollified mappings. We then introduce the mollification of Γ_x in x-coordinates as
(\Gamma^{\varepsilon}_x)^{\mu}_{\rho\nu} = \frac{\partial x^{\mu}_{\varepsilon}}{\partial y^{\alpha}} \frac{\partial y^{\beta}_{\varepsilon}}{\partial x^{\rho}} \frac{\partial y^{\gamma}_{\varepsilon}}{\partial x^{\nu}}\,(\Gamma^{\varepsilon}_y)^{\alpha}_{\beta\gamma} + \frac{\partial x^{\mu}_{\varepsilon}}{\partial y^{\alpha}}\,\frac{\partial}{\partial x^{\rho}} \frac{\partial y^{\alpha}_{\varepsilon}}{\partial x^{\nu}},    (2.2)
with all components in (2.2) expressed as functions in the original x-coordinates. This construction underlies the following theorem.
Theorem 2.4. Assume Γ_x ∈ L^{2p}(Ω) and Riem(Γ_x) ∈ W^{1,p}(Ω) for p > n.
Then the sequence of C ∞ curves γ ǫ which solve (2.1) converge to the correct weak solution in the following sense:
(i) The C ∞ curves γ ǫ which solve (2.1) are defined on some common open inter- val I ⊂ R, for each ǫ > 0, where t 0 ∈ I. (ii) The sequence of connections Γ ǫ x converges to Γ x strongly in L 2p and Riem(Γ ǫ x ) converges strongly to Riem(Γ x ) in L p as ǫ → 0. (iii) The curves γ ǫ converge to γ strongly in C 1 as ǫ → 0, where γ is the unique weak solution identified in Theorem 2.3. Moreover, for (1.2), if Γ x , Riem(Γ x ) ∈ W 1,p (Ω)
, p > n, and K is Lipschitz continuous, then (i) -(iii) hold for C ∞ solutions γ ǫ of (2.1) with a standard mollification of K on the right hand side.
To summarize, Theorems 2.2 and 2.3 use the higher order theory of geometry to identify the correct weak solutions of (1.1) in a setting in which there does not exist a standard weak formulation of the equations; and Theorems 2.4 and 2.5 characterize these weak solutions as limits of solutions of mollified equations. Thus, in a physical example, limits under mollification can be more fundamental than weak formulations of the equations based on integration by parts. The authors find this interesting in light of the non-uniqueness of weak solutions of the compressible Euler equations based on integration by parts, demonstrated in [3], considering that the physically correct weak solutions of compressible Euler should be zero-viscosity limits of the regularizing Navier-Stokes equations. That is, Theorems 2.4 and 2.5 provide the "physically correct" weak solutions as mollified limits via an explicit mollification procedure (like Navier-Stokes to Euler), in a setting where a weak formulation of the equations does not even exist.
The next theorem shows that the above mollification procedure gives rise to sequences of curves converging in C^1 to weak solutions in the sense of Definition 2.1, under the weaker assumption of Theorem 2.2.
Theorem 2.5. Assume Γ ∈ L^{2p}(Ω) and Riem(Γ) ∈ L^p(Ω) for p > n (or, equivalently, assume Γ ∈ L^{2p}(Ω) and dΓ ∈ L^p(Ω)). Then (i) and (ii) of Theorem 2.4 hold, and limits of subsequences of solutions γ_ε(t) of (2.1), as ε → 0, are weak solutions of (1.1) in the sense of Definition 2.1. More generally, this extends to (1.2), provided K is Hölder continuous.
Theorems 2.4 and 2.5 raise a larger mathematical question, namely, whether any mollification which appropriately takes account of the curvature bounds required for Theorems 2.2 and 2.3 would yield, in the limit ε → 0, the correct weak solution in the sense of Definition 2.1.
3. NON-INVARIANCE OF CONNECTION REGULARITY RELATIVE TO CURVATURE
We now clarify the mechanism, identified in Section 1, by which the regularity of connections can range under coordinate transformation from one derivative above to one derivative below the regularity of the Riemann curvature, due to the difference between the transformation laws for tensors versus connections, in terms of the exterior derivative d and its co-derivative δ. To start, assume a connection of optimal regularity with components Γ_y ∈ W^{2,p} and curvature Riem(Γ_y) ∈ W^{1,p} in y-coordinates. Consider now the effect of a coordinate transformation y → x with Jacobian J^{-1} = ∂x/∂y having regularity identical to that of the curvature, i.e., J^{-1} ∈ W^{1,p}. (We use here J^{-1} for y → x to be consistent with the notation in Sections 4-5, where we consider transformations from x → y with Jacobian J.) For ease, assume p > n, so that W^{1,p} is closed under multiplication by Morrey's inequality. Now write the connection transformation law (1.4) as
\Gamma_x = \tilde{\Gamma} + J^{-1}dJ, \qquad \text{where} \quad \tilde{\Gamma}^{\mu}_{\rho\nu} \equiv (J^{-1})^{\mu}_{\alpha} J^{\beta}_{\rho} J^{\gamma}_{\nu}\,(\Gamma_y)^{\alpha}_{\beta\gamma},    (3.1)
where (dJ)^α_{ρν} ≡ ∂/∂x^ρ (∂y^α/∂x^ν), and we view Γ_x and Γ̃ as matrix valued 1-forms, e.g., Γ̃ ≡ Γ̃^μ_{νj} dx^j with matrix indices μ, ν. Then Lemma 3.3 in [20] implies
\delta\Gamma_x = \delta\tilde{\Gamma} + \langle dJ^{-1}; dJ\rangle + J^{-1}\Delta J,    (3.2)
where δ is the co-derivative based on the Euclidean metric in x-coordinates, Δ ≡ δd + dδ is the standard Laplacian in x-coordinates, and ⟨· ; ·⟩ is a matrix valued inner product (see [20] for precise definitions). The point we would like to make now is that (3.2) implies that the co-derivative δΓ_x has the regularity of ΔJ, and thus lies in general only in W^{-1,p}. In contrast, even though Riem(Γ) involves derivatives of Γ, by (1.3) the Riemann tensor transforms by contraction with undifferentiated Jacobians J, J^{-1} ∈ W^{1,p} (as proven in Appendix B for distributional curvature), and this preserves its W^{1,p} regularity. This establishes that connections are mapped, in general, from one derivative of regularity above to one derivative below the curvature, under coordinate transformations with Jacobians at the regularity of the curvature. Interestingly, the exterior derivative dΓ, the leading order part of Riem(Γ), behaves differently from δΓ due to a cancellation of second order Jacobian derivatives. This is the reason why dΓ can be taken in place of Riem(Γ) in Theorem 2.2, but cannot be taken in place of Riem(Γ) in Theorem 2.3. To see this, note first that
d\Gamma_x = \mathrm{Curl}(\Gamma_x) \equiv \frac{\partial}{\partial x^{\tau}}\Gamma^{\mu}_{\nu\rho} - \frac{\partial}{\partial x^{\rho}}\Gamma^{\mu}_{\nu\tau}    (3.3)
is the leading order part of the Riemann curvature tensor
\mathrm{Riem}(\Gamma) = d\Gamma + \Gamma\wedge\Gamma    (3.4)
both in x- and y-coordinates, where (Γ ∧ Γ)^μ_{νρτ} ≡ Γ^μ_{σρ}Γ^σ_{ντ} − Γ^μ_{στ}Γ^σ_{νρ} is the wedge product. By Lemma 6.1 in [20], we find from (3.1) that
d\Gamma_x = d\tilde{\Gamma} + dJ^{-1}\wedge dJ,    (3.5)
since d²J = 0. That is, the exterior derivative dΓ contains only first order derivatives of J and J^{-1}, and thus maintains regularity when J, J^{-1} ∈ W^{2,p}, but loses one derivative when J, J^{-1} ∈ W^{1,p}.^5 We conclude that under singular coordinate transformations the regularity of δΓ can be one derivative below the regularity of dΓ, which in turn can be one derivative below Riem(Γ). But by the authors' theory of the RT-equations this can always be reversed, and Γ can always be lifted to one derivative above Riem(Γ) by coordinate transformation.
4. THE RT-EQUATIONS AND OPTIMAL REGULARITY
The authors proved in [23] that solutions of the RT-equations furnish coordinate transformations which regularize connections to one derivative of regularity above their Riemann curvature (optimal connection regularity), c.f. Theorem 2.1 in [23]. By this, the RT-equations extend optimal regularity and Uhlenbeck compactness to arbitrary affine connections, when before it was only known for positive definite metric geometries [11,29]. We now briefly review how the RT-equations establish optimal regularity, referring to [20,23] for detailed definitions and proofs. The RT-equations are derived in [20] from the connection transformation law, guided by the Riemann-flat condition in [19]. A simplified version of the original RT-equations is obtained by making a serendipitous gauge-type transformation, which uncouples the equations for the regularizing Jacobian J ≡ ∂y/∂x from the equations for the connection of optimal regularity, leading to what we call in [23] the reduced RT-equations,
\Delta J = \delta(J\cdot\Gamma) - B,    (4.1)
dB = \overrightarrow{\mathrm{div}}\,\big(dJ\wedge\Gamma\big) + \overrightarrow{\mathrm{div}}\,\big(J\,d\Gamma\big),    (4.2)
\delta B = w,    (4.3)
where we view J, B and Γ ≡ Γ_x as matrix valued differential forms with components expressed in x-coordinates. In [23], assuming Γ_x ∈ L^{2p} and dΓ_x ∈ L^p in x-coordinates, we proved existence of solutions (J, B) of the reduced RT-equations, J ∈ W^{1,2p}, B ∈ L^{2p}, with J invertible and integrable to coordinates, by an explicit iteration scheme. To show that J transforms Γ_x to optimal regularity, we introduce the associated "connection field" Γ̃ by

\tilde{\Gamma} \equiv \Gamma_x - J^{-1}dJ.    (4.4)

^5 In the latter case, Jacobian derivatives in the transformed wedge product Γ ∧ Γ cancel precisely the Jacobian derivative terms in dΓ, by which the Riemann curvature transforms as a tensor and maintains its regularity.
It is then proven, by exact cancellation of the uncontrolled derivative terms δΓ_x, that Γ̃ solves the "gauge transformed" first RT-equation^6

\Delta\tilde{\Gamma} = \delta d\Gamma_x - \delta\big(dJ^{-1}\wedge dJ\big) + d\big(J^{-1}A\big),    (4.5)
where A ≡ B − ⟨dJ; Γ̃⟩, from which we infer Γ̃ ∈ W^{1,p} by elliptic regularity theory [23]. Moreover, integration of the Jacobian J ≡ ∂y/∂x yields a coordinate transformation x → y. In light of (4.4), the connection in y-coordinates given by

(\Gamma_y)^{\gamma}_{\alpha\beta} = J^{\gamma}_{k}\,(J^{-1})^{i}_{\alpha}(J^{-1})^{j}_{\beta}\,\tilde{\Gamma}^{k}_{ij}    (4.6)
is of optimal regularity, Γ_y ∈ W^{1,p}. All this is proven in full detail, and at the adequate level of weak solutions, in [23], starting from the assumption that Γ ∈ L^{2p} and dΓ ∈ L^p in x-coordinates, an assumption equivalent to Γ ∈ L^{2p} and Riem(Γ) ∈ L^p by (3.3)-(3.4); see [25] for a non-technical summary. The generality of the setting addressed in this paper is possible because the RT-equations themselves apply in such generality: they require nothing other than the connection components given locally in a coordinate system, making no symmetry assumptions, no requirement of a metric, nor any other technical assumptions about the background geometry, other than the regularity of the components of Γ and dΓ in a coordinate system, assuming no more than what is required to formulate the problem of optimal regularity.
5. PROOFS OF THE THEOREMS
5.1. Existence - Proof of Theorem 2.2. The idea of the proof is to use the RT-equations to construct a coordinate transformation x → y which regularizes the connection Γ to Hölder continuity, the threshold regularity required for existence of solutions to (1.1) in y-coordinates by the Peano Theorem. So let Γ_x denote the coefficients of a connection in x-coordinates on some open and bounded set Ω_x ⊂ R^n. For convenience, we view Γ_x as a connection represented in x-coordinates in some coordinate chart (x, Ω) on some n-dimensional manifold M with Ω_x ≡ x(Ω) ⊂ R^n. (To reiterate, the global structure of M is not relevant here.) Assume Γ_x ∈ L^{2p}(Ω_x) and Riem(Γ_x) ∈ L^p(Ω_x) for p > n. By (3.3)-(3.4), this is equivalent to Γ_x ∈ L^{2p}(Ω_x) and dΓ_x ∈ L^p(Ω_x), the incoming assumption of the authors' optimal regularity result [23, Thm 2.1]. Let Q ∈ Ω be the point where the initial data is assigned in (1.1), i.e., γ(t_0) = Q.
By Theorem 2.1 in [23], there exists a neighborhood Ω′ ⊂ Ω of Q on which a W^{2,2p} coordinate transformation x → y is defined such that the connection components Γ_y in y-coordinates have optimal regularity, Γ_y ∈ W^{1,p}(Ω′_y). By Morrey's inequality [4], W^{1,p}(Ω′_y) ⊂ C^{0,α}(Ω′_y) for p > n and α = 1 − n/p (after the usual potential change on a set of measure zero). This implies that Γ_y is Hölder continuous. By Peano's Theorem [8, Thm 2.1], existence of at least one C^2 solution γ_y to the initial value problem of the geodesic equation (1.1) in y-coordinates now follows. Transforming the resulting geodesic curve γ_y(t) back to x-coordinates with the inverse W^{2,2p} coordinate transformation y → x, transforming γ̇^μ_y(t) as a vector and Γ_y as a connection, the transformed curve γ_x ≡ x ∘ y^{-1}(γ_y) is a weak solution of (1.1) in x-coordinates by Definition 2.1. This completes the proof of existence.
To extend the proof to the general equation (1.2), note that tensor transformation of the vector field K^μ by J, J^{-1} ∈ W^{1,2p} ⊂ C^{0,α} preserves the Hölder continuity of K^μ. Thus the above argument proving existence of solutions to (1.1) applies to (1.2) unchanged, completing the proof.
5.2. Uniqueness - Proof of Theorem 2.3. The idea of the proof is to construct a coordinate transformation which regularizes the connection Γ to Lipschitz continuity, the threshold regularity required by the Picard-Lindelöff Theorem for uniqueness, by using the RT-equations twice. Let γ(t_0) = Q ∈ Ω, and assume Γ_x ∈ L^{2p}(Ω_x) and Riem(Γ_x) ∈ W^{1,p}(Ω_x) for p > n. By (3.4), this implies Γ_x ∈ L^{2p}(Ω_x) and dΓ_x ∈ L^p(Ω_x), the incoming assumption of the optimal regularity result [23, Thm 2.1]. By [23, Thm 2.1], there now exists a neighborhood Ω′ ⊂ Ω of Q on which a coordinate transformation x → y′ is defined such that, in y′-coordinates, Γ_{y′} ∈ W^{1,p}(Ω′_{y′}), and the Jacobian J′ of the regularizing transformation has regularity J′ ∈ W^{1,2p}(Ω′_{y′}). It follows from the transformation law (1.3) for the curvature that Riem(Γ_{y′}) maintains its W^{1,p} regularity, because contraction by Jacobians in W^{1,2p} does not lower its regularity, since W^{1,p} is closed under multiplication by Morrey's inequality for p > n. Thus, in y′-coordinates we have Γ_{y′} ∈ W^{1,p}(Ω_{y′}) and Riem(Γ_{y′}) ∈ W^{1,p}(Ω_{y′}) for p > n, which is the starting assumption of the authors' prior optimal regularity result [21, Thm 1.1]. By Theorem 1.1 in [21], there exists another neighborhood Ω″ ⊂ Ω′ of Q on which another coordinate transformation y′ → y″ is defined with Jacobian J″ ∈ W^{2,p}, such that in y″-coordinates Γ_{y″} ∈ W^{2,p}(Ω″_{y″}). Now, since p > n, Morrey's inequality implies that
W^{2,p}(Ω″_{y″}) ⊂ C^{1,α}(Ω″_{y″}), and C^{1,α}(Ω″_{y″}) ⊂ W^{1,∞} ≃ C^{0,1}. Thus Γ_{y″} ∈ W^{2,p}(Ω″_{y″})
is Lipschitz continuous, and the Picard-Lindelöff Theorem implies the existence of a unique C^2 solution γ_{y″} to the initial value problem of (1.1) in y″-coordinates [8, Thm 1.1].
Transformation of γ_{y″} back to x-coordinates gives a weak solution in the sense of Definition 2.1. Moreover, this is the only such weak solution of (1.1) in x-coordinates. Namely, given a curve γ_{y‴} which solves the transformed initial value problem (1.1) in another coordinate system y‴ with Γ_{y‴} in W^{1,p}, then transforming from y‴- to y″-coordinates would regularize Γ_{y‴} from W^{1,p} to Γ_{y″} ∈ W^{2,p}(Ω″_{y″}), and hence take γ_{y‴} to γ_{y″} by uniqueness of solutions to the initial value problem in y″-coordinates. This proves uniqueness of solutions to (1.1).
To prove uniqueness of solutions to (1.2), taking into account that we assume Γ in W^{1,p}, we only need to apply the second step in the above argument for proving uniqueness of solutions to (1.1). That is, the regularization by the coordinate transformation y′ → y″ suffices to prove uniqueness, because tensor transformation of the vector field K^μ by J″, (J″)^{-1} ∈ W^{2,p} ⊂ C^{0,1} preserves the Lipschitz continuity of K^μ.^7 This completes the proof.
5.3. Existence under mollification - Proof of Theorem 2.5. To begin, consider the regularized connection Γ_y ∈ W^{1,p}(Ω_y) constructed in the proof of Theorem 2.2. Introduce a standard mollifier Γ^ε_y of Γ_y which, by construction, converges strongly to Γ_y in W^{1,p}, c.f. [4]. By Morrey's inequality, Γ^ε_y also converges to Γ_y in C^{0,α}, α = 1 − n/p. Moreover, Riem(Γ^ε_y) converges to Riem(Γ_y) in L^p. Namely, using Hölder's inequality,
\|\mathrm{Riem}(\Gamma^{\varepsilon}_y) - \mathrm{Riem}(\Gamma_y)\|_{L^p}
  \le \|d\Gamma^{\varepsilon}_y - d\Gamma_y\|_{L^p} + \|\Gamma^{\varepsilon}_y\wedge\Gamma^{\varepsilon}_y - \Gamma_y\wedge\Gamma_y\|_{L^p}
  \le \|d\Gamma^{\varepsilon}_y - d\Gamma_y\|_{L^p} + \big(\|\Gamma^{\varepsilon}_y\|_{L^{2p}} + \|\Gamma_y\|_{L^{2p}}\big)\,\|\Gamma^{\varepsilon}_y - \Gamma_y\|_{L^{2p}}
  \le \big(1 + \|\Gamma^{\varepsilon}_y\|_{L^{2p}} + \|\Gamma_y\|_{L^{2p}}\big)\,\|\Gamma^{\varepsilon}_y - \Gamma_y\|_{W^{1,p}} \;\longrightarrow\; 0 \quad (\varepsilon\to 0).    (5.1)
Now, for each Γ^ε_y there exists a unique solution γ^ε_y ∈ C^∞(I_ε, Ω_y) to the geodesic initial value problem (1.1), on open intervals I_ε ⊂ R containing the initial time t_0. The convergence Γ^ε_y → Γ_y in C^{0,α} implies that Γ^ε_y is uniformly bounded in C^{0,α} in terms of ‖Γ_y‖_{W^{1,p}}, by the Morrey inequality, i.e.,
\|\Gamma^{\varepsilon}_y\|_{C^{0,\alpha}} \le C\,\|\Gamma^{\varepsilon}_y\|_{W^{1,p}} \le C\,\|\Gamma_y\|_{W^{1,p}}.    (5.2)
The uniform bound (5.2) in turn implies that there exists a common open subinterval I ⊂ I_ε for all ε > 0 which contains the initial time t_0 and on which the solutions γ^ε_y are defined (c.f. Appendix A). Moreover, expressing the geodesic equation in (1.1) as a first order system, standard ODE theory implies the uniform bound

\|\gamma^{\varepsilon}_y\|_{C^{0,\alpha}} + \|\dot{\gamma}^{\varepsilon}_y\|_{C^{0,\alpha}} \le C    (5.3)
where C > 0 is a constant depending only on the uniform bound on ‖Γ^ε_y‖_{C^{0,α}} in (5.2), the initial data, and the domain I, c.f. (A.6) in the appendix. By (5.3), the Arzelà-Ascoli Theorem implies C^0-convergence of a subsequence of (γ^ε_y, γ̇^ε_y). That is, a subsequence of γ^ε_y converges in C^1 to some limit curve γ_y. Since Γ^ε_y converges to Γ_y in C^{0,α}, it follows further that γ_y is a weak solution of the initial value problem (1.1) in y-coordinates in the standard sense. That is, for every test function φ ∈ C^∞_0(I, R) the following limit holds:
\int_I \big(-\dot{\gamma}_y\,\dot{\varphi} + \varphi\,\Gamma_y(\gamma_y)\,\dot{\gamma}_y\dot{\gamma}_y\big)\,dt = \lim_{\varepsilon\to 0} \int_I \big(-\dot{\gamma}^{\varepsilon}_y\,\dot{\varphi} + \varphi\,\Gamma^{\varepsilon}_y(\gamma^{\varepsilon}_y)\,\dot{\gamma}^{\varepsilon}_y\dot{\gamma}^{\varepsilon}_y\big)\,dt = 0,
^7 Lipschitz continuity of K might not be preserved under the W^{1,p} Jacobian of the coordinate transformation from x → y′, which is the reason why we need the stronger assumption Γ_x ∈ W^{1,p}.
where we omit indices and write Γ_y(γ_y)γ̇_y γ̇_y in place of (Γ_y)^μ_{ρν}(γ_y) γ̇^ρ_y γ̇^ν_y. To see this, estimate the closeness of each term separately; in particular, note that

|\Gamma^{\varepsilon}_y(\gamma^{\varepsilon}_y) - \Gamma_y(\gamma_y)| \le |\Gamma^{\varepsilon}_y(\gamma^{\varepsilon}_y) - \Gamma_y(\gamma^{\varepsilon}_y)| + |\Gamma_y(\gamma^{\varepsilon}_y) - \Gamma_y(\gamma_y)| \le \|\Gamma^{\varepsilon}_y - \Gamma_y\|_{C^0} + C\,\|\gamma^{\varepsilon}_y - \gamma_y\|^{\alpha}_{C^0}

by Hölder continuity of Γ_y. Moreover, since Γ_y(γ_y) and γ̇_y are both continuous, the standard weak form of the geodesic equation in y-coordinates implies that the weak derivative of the C^0 curve γ̇_y is continuous, which implies by the theory of distributions that γ_y ∈ C^2(I). Thus γ_y is in fact a classical strong solution of (1.1) in y-coordinates.
We now show that the transformed curve γ_x = x ∘ y^{-1}(γ_y) is indeed the zero mollification limit of solutions to (2.1), and thus a weak solution of (1.1) in the sense of Definition 2.1, as asserted by Theorem 2.5. For this, we map the sequence of connections Γ^ε_y and geodesics γ^ε_y to x-coordinates. However, care must be taken, since the coordinate transformation y → x is only in W^{2,2p} and would hence not maintain the smoothness of Γ^ε_y as required in Theorems 2.4 and 2.5. To circumvent this problem, we mollify the coordinate transformation x ∘ y^{-1}, producing mappings x_ε ∘ y^{-1} with Jacobians J^{-1}_ε ≡ ∂x_ε/∂y ∈ C^∞ and J_ε ≡ ∂y/∂x_ε ∈ C^∞ converging in W^{1,2p} to J^{-1} and J, respectively. Now, x_ε ∘ y^{-1} maps each Γ^ε_y ∈ W^{1,p} to some Γ^ε_x ∈ C^∞, while maintaining L^{2p} closeness of connections under the connection transformation law,
\|\Gamma^{\varepsilon}_x - \Gamma_x\|_{L^{2p}} \le C^3\,\big(\|\Gamma^{\varepsilon}_y - \Gamma_y\|_{L^{2p}} + \|dJ_{\varepsilon} - dJ\|_{L^{2p}}\big) \;\longrightarrow\; 0 \quad (\varepsilon\to 0).    (5.4)
Here we view each Γ^ε_x as expressed in x-coordinates and base norms in x-coordinates, we use Morrey's inequality to bound the L^∞ norms of the Jacobians, and C > 1 denotes a constant bounding the W^{1,2p}-norms of J^{-1}_ε and J_ε uniformly. Likewise, the closeness of the curvature established in (5.1) is maintained by the tensor transformation law,
\|\mathrm{Riem}(\Gamma^{\varepsilon}_x) - \mathrm{Riem}(\Gamma_x)\|_{L^p} \le C^4\,\|\mathrm{Riem}(\Gamma^{\varepsilon}_y) - \mathrm{Riem}(\Gamma_y)\|_{L^p} \;\longrightarrow\; 0 \quad (\varepsilon\to 0).    (5.5)
Moreover, the C^1 convergence of the geodesics γ^ε_y is preserved under tensor transformation of γ̇^ε_y, that is,
\|\gamma^{\varepsilon}_x - \gamma_x\|_{C^1} = \|\gamma^{\varepsilon}_x - \gamma_x\|_{C^0} + \|\dot{\gamma}^{\varepsilon}_x - \dot{\gamma}_x\|_{C^0} \le \|\gamma^{\varepsilon}_y - \gamma_y\|_{C^0} + C\,\|\dot{\gamma}^{\varepsilon}_y - \dot{\gamma}_y\|_{C^0} \;\longrightarrow\; 0 \quad (\varepsilon\to 0).    (5.6)
In summary, keeping in mind the W^{1,2p} convergence of J_ε and J^{-1}_ε to J and J^{-1}, respectively, we have proven that γ_x = x ∘ y^{-1}(γ_y) is a weak solution in x-coordinates in the sense of Definition 2.1, and that γ_x is the zero mollification limit satisfying (i)-(ii) of Theorem 2.4. This completes the proof of Theorem 2.5 for the geodesic equation.
To extend the proof to the generalized geodesic equation (1.2), recall from the proof of Theorem 2.2 that the Hölder continuity of K is preserved under the W^{2,2p} coordinate transformation from x → y and vice versa. By the same principle, closeness of a standard mollification K_ε of K with respect to the Hölder norm is maintained under coordinate transformation. By this, it is straightforward to adapt the above analysis to C^∞ solutions γ_ε of (2.1) with a standard mollification of K on the right hand side and prove the assertion of Theorem 2.5. This completes the proof of Theorem 2.5.
5.4. Uniqueness under mollification - Proof of Theorem 2.4. Consider the unique geodesic curve γ_y ∈ C^2(I, Ω_y) in y-coordinates constructed in the proof of Theorem 2.3 above, where we denote here y″-coordinates simply by y. The mollification procedure in the proof of Theorem 2.5 yields again a C^2 geodesic γ_y in the zero mollification limit, which is now identical to the unique geodesic constructed in the proof of Theorem 2.3. Transformation back to x-coordinates, following again the procedure in the proof of Theorem 2.5, shows that γ_x = x ∘ y^{-1}(γ_y) is the unique weak solution in the sense of Definition 2.1, which satisfies (i)-(iii) of Theorem 2.4. This completes the proof for the geodesic equation.
To prove the assertion of Theorem 2.4 for the generalized geodesic equation (1.2), under the stronger assumption Γ_x ∈ W^{1,p}(Ω), p > n, recall from the proof of Theorem 2.3 that the Lipschitz continuity of K is preserved under the W^{3,p} coordinate transformation from y′ → y″ and vice versa. Adapting now the analysis to C^∞ solutions γ_ε of (2.1) with a standard mollification of K on the right hand side, the assertion of Theorem 2.4 follows again.
Note that, despite the stronger regularity assumption Riem(Γ) ∈ W^{1,p} in Theorem 2.4 versus Theorem 2.5, convergence of Riem(Γ^ε) to Riem(Γ) does not hold in W^{1,p} in general, since closeness of the wedge product of Γ's in (5.1) is only controlled in L^p when the connection is in L^{2p}, as assumed in Theorem 2.4. Starting with the stronger assumption Γ ∈ W^{1,p}, convergence of the curvature in W^{1,p} could also be established by adapting the above analysis in the proof of Theorem 2.5.
6. DISCUSSION OF UNIQUENESS IN THE SINGULAR CASE Γ, dΓ ∈ L^∞
Assuming Γ ∈ L^∞ and dΓ ∈ L^∞, we currently do not have a proof that solutions of the RT-equations furnish a regularizing coordinate transformation x → y to optimal regularity Γ_y ∈ W^{1,∞} = C^{0,1}, the threshold regularity for uniqueness of solutions to (1.1) by the Picard-Lindelöff Theorem. This is because p = ∞ is a singular case of elliptic PDE theory, so the Laplacian may fail to lift solutions two derivatives above sources in L^∞, and hence may fail to lift solutions two derivatives above sources in W^{-1,∞} as well. Thus, even though the RT-equations may establish the regularity Γ_y ∈ W^{1,p} for any p < ∞, they may fail to regularize to the threshold regularity Γ_y ∈ W^{1,∞} = C^{0,1} required for uniqueness of geodesics. Likewise, the authors' investigation of the space of functions of bounded mean oscillation (BMO), a space larger than L^∞ which is contained in all L^p spaces for p < ∞, indicates that even though the Laplacian lifts solutions of the Poisson equation two derivatives above source functions in BMO, the (non-linear) RT-equations do not appear to do so, because BMO is not closed under multiplication. For comparison, in Theorems 2.3 and 2.4 we assume Γ ∈ L^{2p}, below L^∞, but this requires assuming dΓ ∈ W^{1,p}, a regularity above L^∞. At this stage the authors do not know whether there exist L^∞ connections with Riemann curvature bounded in L^∞ which cannot be lifted to W^{1,∞} by coordinate transformation, and for which solutions of the geodesic equation are non-unique.
APPENDIX A. RELEVANT ODE THEORY
For completeness we now review some standard estimates for systems of ODEs

\dot{u} = F(u), \qquad u(t_0) = u_0,    (A.1)
where F : Ω → R^m is assumed to be Hölder continuous on an open and bounded domain Ω ⊂ R^n, with bounded Hölder norm (for some 0 < α ≤ 1)
\|F\|_{C^{0,\alpha}(\Omega)} \equiv \|F\|_{C^0(\Omega)} + \sup_{u_1,u_2\in\Omega} \frac{|F(u_1)-F(u_2)|}{|u_1-u_2|^{\alpha}},    (A.2)
\|F\|_{C^0(\Omega)} \equiv \sup_{u\in\Omega} |F(u)|.    (A.3)
We now derive estimate (5.3), used in the proof of Theorem 2.5, assuming continuity of F with bounded sup-norm (A.3). For this, assume solutions exist on some bounded interval I, which we assume for simplicity to have length |I| ≤ 1. We find from (A.1) and (A.3) (for t > t′) that
|u(t) - u(t')| \le \int_{t'}^{t} |\dot{u}|\,dt \le \int_{t'}^{t} |F(u)|\,dt \le \|F\|_{C^0}\,|t - t'|,    (A.4)
which by |I| ≤ 1 implies ‖u‖_{C^1(I)} ≡ ‖u‖_{C^0(I)} + ‖u̇‖_{C^0(I)} ≤ ‖F‖_{C^0} + |u_0| and

\|u\|_{C^{0,\alpha}(I)} \equiv \|u\|_{C^0(I)} + \sup_{t_1,t_2\in I} \frac{|u(t_1)-u(t_2)|}{|t_1-t_2|^{\alpha}} \le \|F\|_{C^0(\Omega)} + |u_0|.    (A.5)
Analogously, given a sequence of functions F_ε with uniform bound ‖F_ε‖_{C^0} < C, (A.5) turns into a uniform bound on ‖u_ε‖_{C^{0,α}(I_ε)} for solutions u_ε of (A.1) with F_ε in place of F and with fixed initial data u_ε(t_0) = u_0, defined on intervals I_ε. The intervals I_ε depend only on the uniform bound ‖F_ε‖_{C^0(Ω)} < C and the volume of Ω (c.f. [8, Thm 2.1]), which implies that there exists an open subinterval I ⊂ I_ε for all ε > 0; we assume again without loss of generality that |I| ≤ 1.
To derive (5.3) from (A.5), first use that the geodesic equation can be written in the form (A.1) with u = (γ(t), v(t)) and F(u) = (v, Γ^μ_{σρ}(γ) v^σ v^ρ) for v ≡ γ̇. Now, assuming Γ is defined on some set Ω ⊂ R^n, and restricting the domain I of solutions such that |v − γ̇(t_0)| ≤ 1 with respect to the Euclidean norm |·| on R^n, (A.5) gives
\|\gamma\|_{C^{0,\alpha}(I)} + \|\dot{\gamma}\|_{C^{0,\alpha}(I)} \le \|F\|_{C^0} + |u_0| \le b_n\,\|\Gamma\|_{C^0(\Omega)} + b_n^2 + |\gamma(t_0)| + |\dot{\gamma}(t_0)|,    (A.6)
where b_n denotes the volume of the ball of radius 1 in R^n (to take account of the sup-norm of v over the ball {|v − γ̇(t_0)| ≤ 1}). Replacing now Γ and γ by Γ_ε and γ_ε, (A.6) implies the sought after uniform C^{1,α} bound (5.3).
We end this section with a comparison of standard Hölder versus Lipschitz estimates and their relation to the uniqueness of solutions of ODEs. Combining (A.1) and (A.2), we obtain the standard ODE estimate for two solutions u_1 and u_2 of (A.1),

\frac{d}{dt}\,|u_1 - u_2| \le C\,|u_1 - u_2|^{\alpha},    (A.7)
where C ≡ ‖F‖_{C^{0,α}(Ω)}. For α < 1, division by |u_1 − u_2|^α and subsequent integration gives the basic ODE estimate

|u_1 - u_2|(t)^{1-\alpha} \le |u_1 - u_2|(t_0)^{1-\alpha} + C\,|t - t_0|.    (A.8)
This estimate is insufficient to yield uniqueness because of the growth in t on the right hand side. On the other hand, in the case of Lipschitz continuity, when α = 1, the same operation as above leads to control of (d/dt) ln|u_1 − u_2|, and integration yields

|u_1 - u_2|(t) \le |u_1 - u_2|(t_0)\,e^{C|t-t_0|},    (A.9)
which implies uniqueness for solutions with u_1(t_0) = u_2(t_0).
APPENDIX B. WEAK FORMULATIONS OF CURVATURE AND ITS INVARIANCE
In this section we introduce the weak form of the Riemann curvature based on the Koszul formula, Riem(Γ) = dΓ + Γ ∧ Γ, and we prove that, when the curvature is in L^p, a weak formulation based in each coordinate system is consistent with both the transformation law for the connection and the tensor transformation of the L^p components of the curvature. This establishes a notion of invariance of the weak curvature sufficient for our purposes in this paper, in particular the control of the regularity of the curvature under coordinate transformation addressed in Section 3.
We define the weak form of the Riemann curvature tensor of a connection Γ ≡ Γ_x in L^{2p}(Ω) in x-coordinates component-wise, as a functional over the space of test functions in the sense of distributions,
\mathrm{Riem}(\Gamma_x)[\psi]^{\mu}_{\nu\rho\tau} \equiv -\int_{\Omega} \Big(\Gamma^{\mu}_{\nu\rho}\,\frac{\partial\psi}{\partial x^{\tau}} - \Gamma^{\mu}_{\nu\tau}\,\frac{\partial\psi}{\partial x^{\rho}}\Big)\,dx + \int_{\Omega} (\Gamma\wedge\Gamma)^{\mu}_{\nu\rho\tau}\,\psi\,dx,    (B.1)
by shifting the derivatives in Curl(Γ) ≡ ∂/∂x^τ Γ^μ_{νρ} − ∂/∂x^ρ Γ^μ_{ντ} onto test functions ψ ∈ C^∞_0(Ω). We use here scalar valued test functions and the non-invariant volume element dx, following the setting in [27]. We say that the weak form of the curvature (B.1) is in L^p if there exist functions R^μ_{νρτ} ∈ L^p(Ω) such that
\mathrm{Riem}(\Gamma)[\psi]^{\mu}_{\nu\rho\tau} = \int_{\Omega} R^{\mu}_{\nu\rho\tau}\,\psi\,dx \equiv R[\psi]^{\mu}_{\nu\rho\tau}    (B.2)
for all ψ ∈ C^∞_0(Ω), in which case we write Riem(Γ) = R ∈ L^p(Ω). We now prove that the weak curvature in L^p transforms as a tensor under coordinate transformations, following the ideas in [27].
In the general setting of affine connections in this paper, where no metric is assumed, there is no invariant volume element. So one cannot expect (B.1) to define the exact same functional in every coordinate system. However, for our purposes it suffices to show that transforming the L^p component functions (R_x)^μ_{νρτ} of the curvature in x-coordinates as a tensor from x → y by (1.3) is consistent with the weak form (B.1). That is, it suffices to prove that, when transforming Γ_x by the connection transformation law (1.4) to Γ_y, the weak form of the curvature Riem(Γ_y) in y-coordinates, defined by replacing x by y everywhere in (B.1), agrees with the tensor transformed L^p functions (R_y)^δ_{αβγ} in y-coordinates in the sense of (B.2). This is accomplished in the following lemma.
Lemma B.1. Consider a coordinate transformation x → y with Jacobian J ≡ ∂y/∂x ∈ W^{1,p}(Ω_x), p > n. Let Γ_x ∈ L^{2p}(Ω_x) and Γ_y ∈ L^{2p}(Ω_y) be the connection components in x- and y-coordinates, respectively, related by the connection transformation law (1.4). Assume that Riem(Γ_x) = R_x for a collection of functions R_x ∈ L^p(Ω_x) in x-coordinates in the sense of (B.2). Define R_y by the tensor transformation law (1.3), i.e., (R_x)^τ_{μνρ} = (∂x^τ/∂y^δ)(∂y^α/∂x^μ)(∂y^β/∂x^ν)(∂y^γ/∂x^ρ)(R_y)^δ_{αβγ}. Then R_y ∈ L^p(Ω_y) and Riem(Γ_y) = R_y in the sense of (B.2).
Proof. Assume for the moment that the coordinate transformation x → y is smooth. By assumption, R_x and R_y are related by the tensor transformation law (1.3),

(R_x)^{\tau}_{\mu\nu\rho} = \frac{\partial x^{\tau}}{\partial y^{\delta}}\frac{\partial y^{\alpha}}{\partial x^{\mu}}\frac{\partial y^{\beta}}{\partial x^{\nu}}\frac{\partial y^{\gamma}}{\partial x^{\rho}}\,(R_y)^{\delta}_{\alpha\beta\gamma},    (B.3)

which we write for brevity as R_x = (∂y/∂x) R_y. For our incoming assumption Riem(Γ_x) = R_x ∈ L^p(Ω_x), we obtain from (B.2), transforming the integral to y-coordinates,

R_x[\psi]^{\tau}_{\mu\nu\rho} = \int_{\Omega_x} (R_x)^{\tau}_{\mu\nu\rho}\,\psi\,dx = \int_{\Omega_y} \frac{\partial x^{\tau}}{\partial y^{\delta}}\frac{\partial y^{\alpha}}{\partial x^{\mu}}\frac{\partial y^{\beta}}{\partial x^{\nu}}\frac{\partial y^{\gamma}}{\partial x^{\rho}}\,(R_y)^{\delta}_{\alpha\beta\gamma}\,\psi\,\Big|\frac{\partial x}{\partial y}\Big|\,dy,

where |∂x/∂y| denotes the determinant of the inverse Jacobian ∂x/∂y resulting from the transformation of the volume element from x → y, and we transform ψ as a scalar function, ψ(y) = ψ(x(y)).
On the other hand, for a standard mollification Γ^ε_x of the connection components Γ_x, since (B.1) only involves the undifferentiated connection components in L^{2p}(Ω_x) ⊂ L^1(Ω_x), we obtain convergence of the weak form (B.1) for the mollified connections Γ^ε_x to the weak form for Γ_x, where Riem(Γ^ε_x) denotes the smooth components of the curvature, equivalent to the weak form (B.1) for the mollified connections Γ^ε_x. Moreover, transforming Γ^ε_x to Γ^ε_y by the connection transformation law (1.4) under the smooth transformation x → y, the components R^ε_y of Riem(Γ^ε_y) transform by the tensor transformation law (1.3). Integration by parts on the right hand side then gives the weak form (B.1) in y-coordinates, and the L^{2p} convergence of Γ^ε_y to Γ_y (a result of the L^{2p} convergence of Γ^ε_x and the connection transformation law) yields convergence of the weak form of the curvature. This proves that Riem(Γ_y) = R_y in the sense of (B.2) under smooth coordinate transformations, while R_y ∈ L^p(Ω_y) follows directly from the tensor transformation (B.3). The proof directly extends to coordinate transformations with Jacobians in W^{1,p} by mollification, using that W^{1,p} is closed under multiplication for p > n. This completes the proof.
FUNDING
M. Reintjes was partially supported by CityU Start-up Grant for New Faculty (7200748) and by CityU Strategic Research Grant (7005839).
Such as positive definiteness, c.f. [11,29], or the need for Γ to be a metric connection.
^6 The reduced RT-equations (4.1)-(4.3) together with the first RT-equation (4.5) are equivalent to the original RT-equations, c.f. [25].
ACKNOWLEDGEMENT
We thank Craig Evans for helpful comments on elliptic regularity theory in the singular case of L^∞ based Sobolev spaces.

(*) DEPARTMENT OF MATHEMATICS, CITY UNIVERSITY OF HONG KONG, SAR HONG KONG
(**) DEPARTMENT OF MATHEMATICS, UNIVERSITY OF CALIFORNIA, DAVIS, CA 95616, USA
REFERENCES

[1] P. T. Chruciel and J. D. E. Grant, "On Lorentzian causality with continuous metrics", Class. Quantum Grav., Vol. 29, (2012), 145001.
[2] M. Dafermos and J. Luk, "The interior of dynamical vacuum black holes I: The C^0-stability of the Kerr Cauchy horizon", (2017). arXiv:1710.01722
[3] C. De Lellis and L. Székelyhidi, "The Euler equation as a differential inclusion", Ann. Math., 170.3, (2009), pp. 1417-1436.
[4] L. C. Evans, Partial Differential Equations, Berkeley Mathematics Lecture Notes, 3A, (1994).
[5] M. Graf, "Singularity theorems for C1-Lorentzian metrics", Comm. Math. Phys. 378, (2020), no. 2, 1417-1450.
[6] M. Graf, J. D. E. Grant, M. Kunzinger, and R. Steinbauer, "The Hawking-Penrose singularity theorem for C^{1,1}-Lorentzian metrics", Commun. Math. Phys. 360, 3, (2018), 1009-1042.
[7] J. Groah and B. Temple, Shock-Wave Solutions of the Einstein Equations with Perfect Fluid Sources: Existence and Consistency by a Locally Inertial Glimm Scheme, Memoirs AMS, Vol. 172, Number 813, (2004), ISSN 0065-9266.
[8] P. Hartman, Ordinary Differential Equations, John Wiley and Sons, (1964).
[9] S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Spacetime, Cambridge University Press, (1973).
[10] W. Israel, "Singular hypersurfaces and thin shells in general relativity", Il Nuovo Cimento, Vol. XLIV B, N. 1, 1-14, (1966).
[11] J. L. Kazdan and D. M. DeTurck, "Some Regularity Theorems in Riemannian Geometry", Ann. scient. Éc. Norm. Sup., 4e série, t. 14, (1981), pp. 249-260.
[12] C. Kehle, "Diophantine approximation as Cosmic Censor for Kerr-AdS black holes", Invent. Math., 227, (2022), 1169-1321.
[13] M. Kunzinger, A. Ohanyan, B. Schinnerl, and R. Steinbauer, "The Hawking-Penrose singularity theorem for C1-Lorentzian metrics", pre-print (2022). arXiv:2110.09176
[14] M. Kunzinger, R. Steinbauer, and J. A. Vickers, "The Penrose singularity theorem in regularity C^{1,1}", Class. Quantum Gravity, 32(15):155010, 12, (2015).
[15] M. Kunzinger and C. Sämann, "Lorentzian length spaces", Annals of Global Analysis and Geometry, vol. 54, 399-447, (2017).
[16] A. F. Filippov, "Differential equations with discontinuous right hand sides", Mathematics and its Applications (Soviet Series), vol. 18, Kluwer Academic Publishers Group, Dordrecht, (1988).
[17] R. Penrose, "Gravitational Collapse", in Gravitational Radiation and Gravitational Collapse, Ed. by C. DeWitt-Morette, Vol. 64 of IAU Symposium, Springer, (1974), pp. 82-91.
[18] M. Reintjes, "Strong Cosmic Censorship with Bounded Curvature", pre-print (2023). arXiv:2304.04444
[19] M. Reintjes and B. Temple, "Shock Wave Interactions and the Riemann-flat Condition: The Geometry behind Metric Smoothing and the Existence of Locally Inertial Frames in General Relativity", Arch. Rat. Mech. Anal. 235 (2020), 1873-1904. arXiv:1610.02390
[20] M. Reintjes and B. Temple, "The regularity transformation equations: An elliptic mechanism for smoothing gravitational metrics in General Relativity", Adv. Theor. Math. Phys. 24.5, (2020), 1203-1245. arXiv:1805.01004
[21] M. Reintjes and B. Temple, "Optimal metric regularity in General Relativity follows from the RT-equations by elliptic regularity theory in L^p-spaces", Meth. Appl. Anal. 27.3 (2020), pp. 199-242. arXiv:1808.06455
[22] M. Reintjes and B. Temple, "How to smooth a crinkled map of spacetime: Uhlenbeck compactness for L^∞ connections and optimal regularity for general relativistic shock waves by the Reintjes-Temple-equations", Proc. R. Soc. A 476: 20200177. arXiv:1812.06795
[23] M. Reintjes and B. Temple, "On the optimal regularity implied by the assumptions of geometry I: Connections on tangent bundles", Meth. Appl. Anal., Vol. 29, No. 4, 303-396, (2023). arXiv:1912.12997
[24] M. Reintjes and B. Temple, "On the optimal regularity implied by the assumptions of geometry II: Connections on vector bundles", preprint (2021), 40 pages. arXiv:2105.10765
[25] M. Reintjes and B. Temple, "Optimal regularity and Uhlenbeck compactness for General Relativity and Yang-Mills Theory", Proc. R. Soc. A 479: 20220444, (2022). arXiv:2202.09535
[26] C. Sämann and R. Steinbauer, "On geodesics in low regularity", J. Phys.: Conf. Ser. 968, 012010, (2018).
[27] J. Smoller and B. Temple, "Shock wave solutions of the Einstein equations: The Oppenheimer-Snyder model of gravitational collapse extended to the case of non-zero pressure", Arch. Rat. Mech. Anal. 128 (1994), 249-297.
[28] R. Steinbauer, "Every Lipschitz metric has C1-geodesics", Class. Quantum Gravity, 31(5):057001, 3, (2014).
[29] K. Uhlenbeck, "Connections with L^p Bounds on Curvature", Commun. Math. Phys. 83, 31-42, (1982).
[30] S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley and Sons, 1st edition, (1972).
| [] |
[
"SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts Gernot Salzer",
"SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts Gernot Salzer"
] | [
"Monika Di Angelo ",
"Thomas Durieux ",
"João F Ferreira ",
"T U Wien ",
"\nTU Wien Vienna\nAustria\n",
"\nINESC-ID and IST\nTU\nDelft DelftNetherlands\n",
"\nUniversity of Lisbon Lisbon\nViennaPortugal, Austria\n"
] | [
"TU Wien Vienna\nAustria",
"INESC-ID and IST\nTU\nDelft DelftNetherlands",
"University of Lisbon Lisbon\nViennaPortugal, Austria"
] | [] | Smart contracts are blockchain programs that often handle valuable assets. Writing secure smart contracts is far from trivial, and any vulnerability may lead to significant financial losses. To support developers in identifying and eliminating vulnerabilities, methods and tools for the automated analysis have been proposed. However, the lack of commonly accepted benchmark suites and performance metrics makes it difficult to compare and evaluate such tools. Moreover, the tools are heterogeneous in their interfaces and reports as well as their runtime requirements, and installing several tools is time-consuming.In this paper, we present SmartBugs 2.0, a modular execution framework. It provides a uniform interface to 19 tools aimed at smart contract analysis and accepts both Solidity source code and EVM bytecode as input. After describing its architecture, we highlight the features of the framework. We evaluate the framework via its reception by the community and illustrate its scalability by describing its role in a study involving 3.25 million analyses. | null | [
"https://export.arxiv.org/pdf/2306.05057v1.pdf"
] | 259,108,575 | 2306.05057 | 27997aaf800eb5930fdd2f1ab751b868d1057a4d |
SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts

Monika Di Angelo (TU Wien, Vienna, Austria)
Thomas Durieux (TU Delft, Delft, Netherlands)
João F. Ferreira (INESC-ID and IST, University of Lisbon, Lisbon, Portugal)
Gernot Salzer (TU Wien, Vienna, Austria)
Index Terms-Bytecode, EVM, Solidity, Security, Vulnerability
Smart contracts are blockchain programs that often handle valuable assets. Writing secure smart contracts is far from trivial, and any vulnerability may lead to significant financial losses. To support developers in identifying and eliminating vulnerabilities, methods and tools for the automated analysis have been proposed. However, the lack of commonly accepted benchmark suites and performance metrics makes it difficult to compare and evaluate such tools. Moreover, the tools are heterogeneous in their interfaces and reports as well as their runtime requirements, and installing several tools is time-consuming.In this paper, we present SmartBugs 2.0, a modular execution framework. It provides a uniform interface to 19 tools aimed at smart contract analysis and accepts both Solidity source code and EVM bytecode as input. After describing its architecture, we highlight the features of the framework. We evaluate the framework via its reception by the community and illustrate its scalability by describing its role in a study involving 3.25 million analyses.
I. INTRODUCTION
Smart contracts are a fundamental part of blockchain technology, particularly on platforms like Ethereum, where they enable the development of decentralized applications. Benefits like transparency, trust, and security are paired with potential risks, as malicious actors can exploit vulnerable smart contracts and cause substantial financial losses. Therefore, there is a pressing need for automated tools that help identify such vulnerabilities.
The goal of this paper is to present SmartBugs 2.0, a modular execution framework that simplifies the execution of analysis tools for smart contracts, facilitates reproducibility, and supports large-scale experimental setups. It is open-source and publicly available at https://github.com/smartbugs/smartbugs.
Methodology. SmartBugs supports three modes for analyzing smart contracts: Solidity source code, creation bytecode, and runtime code. It currently includes 19 tools encapsulated in Docker images. With its standardized output format (via scripts that parse and normalize the output of the tools), it facilitates an automated comparison of the findings across tools. In the context of a bulk analysis, it allows for the parallel, randomized execution of tasks for the optimal use of resources. (This project was partially supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) under project UIDB/50021/2020. The project was also partially supported by the CASTOR Software Research Centre.)

Envisioned users. SmartBugs is intended for
• developers auditing smart contracts before deployment,
• analysts evaluating already deployed smart contracts,
• tool developers comparing selected tools,
• researchers performing large-scale analyses,
and thereby advances the state-of-the-art in the automated analysis of smart contracts.

Engineering challenges and new features. Compared to the original version, SmartBugs 2.0 offers the following improvements that overcome several engineering challenges:
• support for bytecode as input
• 8 additional tools
• modular integration of new tools
• support for multiple versions of the same tool
• generic architecture
• increased robustness and reliability
• detection and reporting of tool errors and failures
• SARIF as output format
• mapping of tool findings to the SWC taxonomy^1

By adding bytecode as an accepted input format, the range of smart contracts that can be analyzed by SmartBugs has been extended to programs without source code, including all smart contracts already deployed. Due to its modular structure, SmartBugs 2.0 can easily be extended with further tools. The standardized output format and the mapping to a tool-independent taxonomy both facilitate the integration of a comprehensive vulnerability analysis into the development cycle.

Validation studies. To showcase the capabilities of SmartBugs 2.0, we present a typical use case that demonstrates how SmartBugs 2.0 has supported the largest experimental setup to date, both in terms of the number of tools and the number of analyzed smart contracts.

II. ARCHITECTURE

The user provides a specification of the smart contracts to process and a list of tools to execute. For a mass analysis, it is also important to specify the number of parallel processes as well as resource bounds per process.

Task builder. For each smart contract matching the specification, the task builder selects those tools that fit the format of the smart contract (source code, creation bytecode, or runtime code) and pulls their Docker images. Moreover, it determines a unique folder for the output of each run. Sometimes the naming scheme specified by the user leads to collisions, meaning that the output of different smart contracts or tools would end up in the same folder. The task builder resolves conflicts in a deterministic way, such that any restart of SmartBugs with the same arguments after an interrupt leads to the same output folders.
Most tools analyzing Solidity source code either contain a compiler for a fixed Solidity version or download an appropriate compiler on the fly. Both approaches are problematic in the context of a bulk analysis. In the first case, the integrated compiler is not able to handle smart contracts written for a different version, whereas in the second case an adequate compiler will be downloaded, but used only once and then discarded together with the container of the tool, which leads to redundant downloads during the analysis. Therefore, the task builder inspects the smart contracts and downloads the corresponding compilers beforehand. Later on, during analysis, a compiler matching the smart contract is injected into the container, such that the tool is able to compile the contract without attempting to download the compiler itself. Overall, the task builder downloads all resources and detects problems before actually starting the analysis. This prevents racing conditions and errors popping up only during the analysis phase, and minimizes network traffic.

Runner. The runner receives a list of tasks, where each task contains the information for applying a single tool to a single smart contract. The length of the list is roughly the product of the number of smart contracts and the number of tools. To improve the utilization of server resources, the runner randomly permutes the task list. Then it starts the requested number of parallel analyzers, which process the tasks from the list one after the other (see the sketch below).

Analyzers. Each analyzer picks a task from the queue of the runner, copies the smart contract, the Solidity compiler (if necessary) and auxiliary scripts to a temporary volume, and runs the Docker image of the tool with this volume mounted. Once the Docker container has terminated, the analyzer extracts the result files and writes them to the designated output folder. It adds a file with meta information like the execution time, the arguments of the Docker run, and the version of the tool.
Parsing. The output of the tools is heterogeneous: some provide their results in structured form, others produce textual output. Parsers are small scripts accompanying each tool. They scan the results for the weaknesses detected, but also watch out for errors (irregular conditions reported by the tool) and failures (exceptions not caught by the tool). The information is written to JSON files and, to facilitate the integration of SmartBugs into CI workflows, to SARIF files.
III. FEATURES
Output format SARIF. SmartBugs 2.0 can provide the results in SARIF (Static Analysis Results Interchange Format), an OASIS standard that defines a common reporting format for static analysis tools [1]. SARIF is JSON-based and allows IDEs to access the analysis reports in a uniform way. By adopting a common format that can be parsed by readily available tools, the cost and complexity of aggregating the results of analysis tools into common workflows diminishes. For example, it becomes trivial to integrate SmartBugs into GitHub workflows, since GitHub automatically creates code scanning alerts in a repository using information from SARIF files.^2 For an example of the integration of SARIF produced by SmartBugs and GitHub, we refer the reader to the repository smartbugs/sarif-tests.^3 (A minimal SARIF example is sketched below, after the feature descriptions.)

Bytecode input. On Ethereum, smart contracts are deployed by sending a transaction containing the creation bytecode. When executed by the Ethereum Virtual Machine, this code initializes the environment of the new contract and returns the runtime code that is actually stored on the chain. In most cases, the creation bytecode is the result of compiling Solidity source code. A significant enhancement of SmartBugs 2.0 is its ability to integrate tools that analyze the creation bytecode and runtime code directly, obviating the need to procure Solidity sources first. In fact, for many smart contracts deployed on the chain, their source code is not available. Of the 19 tools currently included in SmartBugs, 13 are able to process creation bytecode and/or runtime code.

Provision of proper compiler versions. Another important addition to SmartBugs 2.0 is its ability to select an appropriate compiler for each smart contract. Solidity has seen rapid development over the past years, with numerous breaking changes. Therefore, programmers are strongly advised to include a pragma that specifies the language version that a smart contract was developed for. Analysis tools have three strategies to cope with this situation. Experimental tools (proofs-of-concept) may come with just a specific compiler version, restricting their applicability. Other tools implicitly assume that the compiler on the command search path matches the smart contract to be analyzed. The most versatile tools inspect the smart contract and download an appropriate compiler before starting the analysis. As none of these approaches fits the needs of an unsupervised bulk analysis, the task builder (see its description above) inspects the smart contracts, downloads each required compiler version once before the actual analysis, and then injects the correct one into every container. This allows the tool to run the correct compiler version without the need for on-the-fly downloads, which would cost time and increase the network traffic. As another benefit, this improvement enhances the reproducibility and uniformity of the analyses, as the same compiler version is used consistently across all runs.

Tool integration. With the new version of SmartBugs, it is now possible to incorporate new tools without touching the code of SmartBugs. The details of adding a new tool are described in the wiki of SmartBugs's repository.^4 In essence, a few lines in a configuration file are needed to specify the Docker image of the tool and its interface. Moreover, for extracting the findings and errors from the result files, a Python script has to be added.
This new flexibility in adding tools also allows researchers to compare the behavior of different versions of the same tool, which is particularly useful for evaluating performance over time, or for ensuring that performance does not degrade with an update.

Mapping to a weakness taxonomy. To compare and unify findings across tools, the idiosyncratic labels assigned by each tool need to be mapped to a common frame of reference. SmartBugs 1.0 maps the findings to the vulnerability taxonomy DASP TOP 10.^5 The new version adds a mapping of all findings (including those of the new tools) to the weakness taxonomy of the SWC registry.^6 The SWC registry is a community-driven catalog of software weaknesses in smart contracts, whose granularity is finer than that of DASP TOP 10, which allows us to provide more detailed information about the weaknesses found by the tools. This classification is added to the SARIF output, in order to be displayed in the context of the source or bytecode.

Supported tools. The tools currently in SmartBugs 2.0 are listed in Table I. Check marks in black ( ) indicate new additions, while the gray check marks in column 'Solidity' identify the capabilities of the old version. We added 8 new tools as well as bytecode support for seven of the old tools.
In most cases, bytecode support refers to runtime code. Only two tools are able to handle the creation bytecode as well.
IV. EVALUATION
Reception. The appreciation of SmartBugs by the community on GitHub is reflected in the following metrics. With 13 contributors, it has received over 400 stars, 81 issues have been filed, and 110 users/organizations have forked the repository, with 50 unique cloners in the weeks from May 09 to 22, 2023. SmartBugs is not only used by developers and security companies, but also in academic studies [2], [3], [4] and master's theses [5], [6], [7], [8]. Moreover, components of it have been used to build an ML-based tool [9].

Use case. In the largest experimental study to date [10], we used SmartBugs 2.0 to execute 13 tools on almost 250 000 runtime bytecodes. The tools reported over 1.3 million weaknesses in total. With a resource limit of 30 min and 32 GB, the execution took a total of 31 years. More than half of the tools could run on just 4 GB for the vast majority of the bytecodes and needed less than 3 min on average per bytecode, while three tools ran into the limits for more than 1 000 bytecodes.
The new feature in SmartBugs 2.0 of reporting errors and failures gives the user an indication for which bytecodes a tool may be operating outside of its specification. This way, potential findings or non-findings are put in relation to the tool's ability to properly analyze the bytecode. Figure 2 depicts the error rate of each tool on a time line of blocks on the Ethereum main chain, where each data point represents the percentage of reported errors in bins of 100 000 blocks. As Mythril, Oyente and Vandal report no errors, they are not depicted. Apparently, HoneyBadger, Maian, and Osiris experience an increasing error rate after 7.5 million blocks. This information can be used to enhance the tools or to make informed decisions about whether to use them for more recent smart contracts.
Moreover, tool failures may serve as a measure of robustness. For eight tools, the failure rate was below 1 % of the bytecodes, whereas for one tool, the failure rate reached 25 %, meaning that the tool ran into an exception for one out of four bytecodes.
V. RELATED WORK
As documented in the previous sections, SmartBugs 2.0 is a major improvement over the original version of SmartBugs [11], which was released in 2019. To the best of our knowledge, the only other execution framework that implements similar ideas is USCV [12]. It comprises eight tools for the analysis of Solidity source code, with seven of them also covered by SmartBugs. USCV seems to be neither widely used nor maintained, as the latest of its 10 commits is from mid-2021 and no issues have been filed so far.
VI. CONCLUSION

SmartBugs 2.0 has proven to be a useful tool for our own work as well as for fellow researchers and developers. Its extensive use has shown some limitations, partly resulting in enhancement requests by users. In future work, we will consider the following extensions.

Support for historic compiler versions. SmartBugs supports Solidity 0.4.11 and above. By accessing another repository, we can include versions down to 0.4.0. Compiler versions older than that may be harder to come by.
Support for more complex formats of source code. At the moment, each smart contract has to be contained in a single file. However, complex projects are split into several files, with additional includes of system-wide libraries. SmartBugs could try to determine the dependencies and transfer them also into the container.
Use of source code mappings. Tools for bytecode analysis can be made to analyze source code by compiling the source code before feeding the result to the tool. The difficult part is to map the bytecode addresses of weaknesses back to source code lines.
Addition of new tools. The automated analysis of smart contracts is an active area, with new tools emerging every year. We hope that we will be able to keep up, not least with the help of the community contributing further tool configurations.
Figure 1 depicts the architecture of SmartBugs. It can be started from the command line or called from Python programs. The main arguments to provide are a specification

Fig. 1: The Architecture of SmartBugs.

Fig. 2: Tool errors over time: percentage of errors encountered by the tools, in bins of 100 000 blocks.
TABLE I: Supported tools.

Tool          Version
ConFuzzius    #4315fb7
Conkas        #4e0f256
Ethainter
eThor         2021 (CCS'20)
HoneyBadger   #ff30c9a
MadMax        #6e9a6e9
Maian         #4bab09a
Manticore     0.3.7
Mythril       0.23.15
Osiris        #d1ecc37
Oyente        #480e725
Pakala        #c84ef38
Securify
sFuzz         #48934c0
Slither
Smartcheck
Solhint       3.3.8
teEther       #04adf56
Vandal        #d2b0043

In total: 19 tools, 8 of them new; 13 accept Solidity source, 2 accept creation bytecode, and 13 accept runtime code.
² https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/uploading-a-sarif-file-to-github
³ https://github.com/smartbugs/sarif-tests/security/code-scanning
⁴ https://github.com/smartbugs/smartbugs/wiki/Adding-new-analysis-tools
⁵ https://dasp.co/
⁶ https://swcregistry.io/
[1] OASIS Static Analysis Results Interchange Format (SARIF) Technical Committee, "Static Analysis Results Interchange Format (SARIF) Version 2.1.0, OASIS Standard," 2020, https://docs.oasis-open.org/sarif/sarif/v2.1.0/os/sarif-v2.1.0-os.html.
[2] T. Durieux, J. F. Ferreira, R. Abreu, and P. Cruz, "Empirical review of automated analysis tools on 47,587 Ethereum smart contracts," in Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. New York, NY, USA: ACM, Jun 2020, pp. 530-541.
[3] S. Chaliasos, M. A. Charalambous, L. Zhou, R. Galanopoulou, A. Gervais, D. Mitropoulos, and B. Livshits, "Smart contract and DeFi security: Insights from tool evaluations and practitioner surveys," arXiv preprint arXiv:2304.02981, 2023.
[4] I. Qasse, M. Hamdaqa, and B. Þ. Jónsson, "Smart contract upgradeability on the Ethereum blockchain platform: An exploratory study," arXiv preprint arXiv:2304.06568, 2023.
[5] B. Aryal, "Comparison of Ethereum smart contract vulnerability detection tools," Master's thesis, University of Turku, 2021.
[6] N. M. O. Veloso, "Análise Estática de Smart Contracts" (Static Analysis of Smart Contracts), Master's thesis, Instituto Superior Técnico, Universidade de Lisboa (ULisboa), 2021.
[7] D. A. P. de Araújo, "A Static Analysis-based Platform-as-Service to Improve the Quality of Smart Contracts," Master's thesis, Instituto Superior Técnico, Universidade de Lisboa (ULisboa), 2021.
[8] J. T. S. Dinis, "Automatic Bug Prioritization of SmartBugs Reports using Machine Learning," Master's thesis, Instituto Superior Técnico, Universidade de Lisboa (ULisboa), 2022.
[9] J. Mandloi and P. Bansal, "A machine learning-based dynamic method for detecting vulnerabilities in smart contracts," International Journal of Applied Engineering & Technology, vol. 4, pp. 110-118, 2022.
[10] M. di Angelo, T. Durieux, J. F. Ferreira, and G. Salzer, "Evolution of automated weakness detection in Ethereum bytecode: a comprehensive study," arXiv preprint arXiv:2303.10517, 2023.
[11] J. F. Ferreira, P. Cruz, T. Durieux, and R. Abreu, "SmartBugs: A framework to analyze Solidity smart contracts," in Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering. New York, NY, USA: ACM, Dec 2020, pp. 1349-1352.
[12] S. Ji, D. Kim, and H. Im, "Evaluating countermeasures for verifying the integrity of Ethereum smart contract applications," IEEE Access, vol. 9, pp. 90029-90042, 2021.
| [
"https://github.com/smartbugs/smartbugs.",
"https://github.com/smartbugs/sarif-tests/security/code-scanning",
"https://github.com/smartbugs/smartbugs/wiki/Adding-new-analysis-tools"
] |
[
"Stationary transport above the critical velocity in a one- dimensional superflow past an obstacle",
"Stationary transport above the critical velocity in a one- dimensional superflow past an obstacle"
] | [
"J Huynh \nUniversité Côte d'Azur\nCNRS\nINPHYNI\nFrance\n",
"F Hébert \nUniversité Côte d'Azur\nCNRS\nINPHYNI\nFrance\n",
"P.-É Larré \nUniversité Côte d'Azur\nCNRS\nINPHYNI\nFrance\n",
"M Albert \nUniversité Côte d'Azur\nCNRS\nINPHYNI\nFrance\n"
] | [
"Université Côte d'Azur\nCNRS\nINPHYNI\nFrance",
"Université Côte d'Azur\nCNRS\nINPHYNI\nFrance",
"Université Côte d'Azur\nCNRS\nINPHYNI\nFrance",
"Université Côte d'Azur\nCNRS\nINPHYNI\nFrance"
] | [] | We consider in this work the different possible stationary flows of a one dimensional quantum fluid in the mean-field regime. We focus on the supersonic regime where a transition from a time dependent flow to a stationary diffractive flow occurs at a given critical velocity. We give nonperturbative results for this critical velocity in the presence of a localised obstacle of arbitrary size and strength. In addition, we discuss the existence of superfluid-like solution in the supersonic regime due to resonant transport and provide a complete map of the different regimes of stationary transport of a quantum fluid. | null | [
"https://export.arxiv.org/pdf/2306.05048v1.pdf"
] | 259,108,612 | 2306.05048 | f90d89ef24c625144ffd9a9ad86beb1e7d50f5ac |
Stationary transport above the critical velocity in a one-dimensional superflow past an obstacle
J Huynh
Université Côte d'Azur
CNRS
INPHYNI
France
F Hébert
Université Côte d'Azur
CNRS
INPHYNI
France
P.-É Larré
Université Côte d'Azur
CNRS
INPHYNI
France
M Albert
Université Côte d'Azur
CNRS
INPHYNI
France
Stationary transport above the critical velocity in a one-dimensional superflow past an obstacle
We consider in this work the different possible stationary flows of a one-dimensional quantum fluid in the mean-field regime. We focus on the supersonic regime where a transition from a time-dependent flow to a stationary diffractive flow occurs at a given critical velocity. We give nonperturbative results for this critical velocity in the presence of a localised obstacle of arbitrary size and strength. In addition, we discuss the existence of superfluid-like solutions in the supersonic regime due to resonant transport and provide a complete map of the different regimes of stationary transport of a quantum fluid.
Abstract - We consider in this work the different possible stationary flows of a one-dimensional quantum fluid in the mean-field regime. We focus on the supersonic regime where a transition from a time-dependent flow to a stationary diffractive flow occurs at a given critical velocity. We give nonperturbative results for this critical velocity in the presence of a localised obstacle of arbitrary size and strength. In addition, we discuss the existence of superfluid-like solutions in the supersonic regime due to resonant transport and provide a complete map of the different regimes of stationary transport of a quantum fluid.
Introduction. - One important property of superfluids is their ability to move without dissipation below a certain critical velocity [1]. First observed in liquid Helium [2,3], superfluidity was later shown to be more generic and was observed in various quantum fluids [4][5][6][7]. Soon after its discovery, the critical velocity was theorised by Landau [8,9], who proposed a very elegant and general criterion which states that v_c = min_p ε(p)/p, where ε(p) is the spectrum of elementary excitations with momentum p. However, this prediction usually overestimates the actual critical velocity and was verified experimentally only in very specific configurations, for instance by moving a single ion in liquid Helium [10]. The reason is that Landau's argument is perturbative and therefore does not properly take into account the nonlinear nature of the problem of interaction between quantum fluids and external potentials. Important progress arose with the introduction of a simpler model to describe the flow of a quantum fluid: the nonlinear Schrödinger (NLS) equation or Gross-Pitaevskii (GP) equation [11][12][13]. In particular, nonperturbative results were derived for the first time by Frisch and collaborators in two dimensions in the presence of an impenetrable cylinder [14], followed by a series of works for various obstacles (see ref. [15] for a review). Although this model is not satisfactory for the description of dense systems such as liquid Helium, it is very accurate for weakly interacting superfluids such as Bose-Einstein condensates [16] or quantum fluids of light [17].
However, the transport properties of quantum fluids described by a NLS equation are not restricted to superfluidity and display a rich phenomenology [18][19][20][21], which is summarised in fig. 1. In the presence of a localised obstacle at rest, an important control parameter is the Mach number, which is the ratio of the incoming velocity v∞ to the sound velocity far from the obstacle, c∞. In the Landau approach the sound velocity c∞ coincides with the critical velocity v_c. Below the actual superfluid critical velocity, the flow is stationary and only locally perturbed in the vicinity of the obstacle, as illustrated in subfigure (a) of fig. 1. Above this threshold, which strongly depends on the obstacle, the flow can no longer be stationary due to the continuous emission of linear and nonlinear excitations (fig. 1 (b)), which leads to a slowdown of the superfluid motion and possibly to wave and quantum turbulence [22,23].
At larger velocities, a second critical velocity, often referred to as the supersonic separatrix [18,24,25], separates the latter regime from another regime of stationary transport. This regime is reminiscent of the linear Schrödinger equation, since the kinetic energy becomes much larger than the interaction energy in the fluid. In that case, the flow is partly backscattered by the obstacle and generally experiences friction. The incoming and reflected flows interfere and create a standing wave with a density modulation ahead of the obstacle (fig. 1 (c)). Yet some configurations exist in which dissipation does not occur even in the nonsuperfluid phase, due to resonant transport [26][27][28]. For specific obstacle parameters there may exist lines in the supersonic stationary phase where backscattering is suppressed and the fluid experiences no drag at all, hence mimicking a superfluid solution (fig. 1 (d)), a behaviour that is normally present below the superfluid critical velocity, in the subsonic regime [29].
The aim of this paper is to determine, in a nonperturbative way, the supersonic separatrix - i.e. the border between the nonstationary and the stationary nonsuperfluid regimes - for a generic quantum fluid flowing past a simplified localised obstacle in the one-dimensional mean-field regime. In addition, we study in detail the conditions to obtain superfluid-like solutions in the supersonic regime. Combined with previous results for the superfluid critical velocity [15], this work provides a complete map of the different possible regimes of stationary transport for a one-dimensional quantum fluid, above and below the sound velocity, for repulsive or attractive obstacles, and for different types of nonlinearities. This paper is organised as follows: the model of the quantum fluid, based on a generalisation of the 1D nonlinear Schrödinger equation to any local self-interaction potential increasing with the fluid density, is first detailed. This general approach makes it possible to describe many superfluid systems ranging from ultracold atomic Bose and Fermi gases [16] to exciton-polariton condensates in semiconductor optical microcavities [6,17] and fluids of light [7,21,[30][31][32][33][34]. A thorough analytical study is then performed in the following sections in the limits of narrow or wide obstacles. Finally, we bridge the gap between these two limiting cases with a numerical study for a model obstacle and characterise analytically the perfect transmission lines.
Theoretical model. -We consider a one dimensional quantum fluid flowing in the negative-x direction in the framework of the NLS equation. For the sake of clarity, we employ here the language of weakly interacting bosonic particles of mass m although the results derived in this paper are of wider interest. A quantum fluid dictionary is provided in Supp. Mat. for readers interested in other physical realisations of this model. The dynamics of the considered system is governed by a generalised nonlinear Schrödinger equation for the order parameter ψ
$$i\hbar\,\partial_t\psi = \Big[-\frac{\hbar^2}{2m}\,\partial_{xx} + U(x) + g(|\psi|^2)\Big]\psi. \tag{1}$$
The flow is here constrained by an obstacle described in eq. (1) by a potential U(x) = U₀ f(|x|/σ), which attains its single positive maximum (negative minimum) U₀ at x = 0 and which is localized, i.e., which vanishes for |x| ≫ σ, with σ being its typical range. Throughout this work, we will exemplify our results with a repulsive (attractive) square potential U(x) = U₀ Θ(σ/2 − |x|), but results for a Gaussian potential are given in the Supp. Mat. The reason why we employ such a toy model is that it allows us to obtain analytical results without loss of generality. In addition to the external potential, the fluid is also subjected to a self-interaction described by the local nonlinear term g(|ψ|² = n)ψ, where the potential g(n) is an increasing function of the density n. In the standard version of the NLS equation, describing dilute ultracold bosonic atoms, g(n) is simply proportional to n, with the coupling constant obtained in the Born approximation. Here, however, it is written as a functional of the density, so that the interaction potential can take many forms describing a wide variety of systems, ranging from Bose or Fermi gases to quantum fluids of light [15]. For instance, in the case of a fluid of light this potential assumes the form g(n) = (1 + n_s)²n/[n_s(n + n_s)] after proper rescaling, where n is related to the light intensity and n_s to the saturation intensity in the nonlinear medium [15] (see Supp. Mat. for details). We now look for the existence of out-of-equilibrium stationary solutions of eq. (1) of the form
$$\psi(x,t) = e^{-i\mu t/\hbar}\,A(x)\,e^{i\phi(x)}, \tag{2}$$
from which the density and velocity fields are obtained as n(x) = A(x)² and v(x) = ħφ′(x)/m, and where µ is the chemical potential. This yields the following equations of motion for these fields:
$$n(x)\,v(x) = \Phi, \qquad -\frac{\hbar^2}{2m}A''(x) + \Big[U(x) + g(n) + \frac{m\Phi^2}{2A(x)^4}\Big]A(x) = \mu A(x). \tag{3}$$
The first of eqs. (3) is simply the current conservation, while the second one expresses the space dependence of the density, and therefore of the velocity through current conservation. These equations have to be complemented with boundary conditions. As explained in ref. [18], a regime of stationary flow exists for supersonic velocities, but in this case the radiation condition [35] requires that the wake is always located ahead of the obstacle, i.e. upstream, with no long-range perturbation of the fluid in the downstream region, where the flow remains unperturbed. The solution therefore has to tend to a constant solution with density n∞ and velocity v∞ far away from the obstacle in the downstream region (in our case x → −∞), with Φ = n∞v∞ and µ = ½mv∞² + g(n∞). For comparison, a stationary superfluid solution satisfies the same condition in both the upstream and downstream regions, which is far more restrictive. Finally, two important scales emerge due to the nonlinearity g(n), namely the sound velocity c∞ and the healing length ξ∞. They are defined in the downstream region by mc∞² = n∞g′(n∞) = µ∞ and ξ∞ = ħ/mc∞. In the rest of the manuscript we rescale all quantities in terms of n∞ for densities, c∞ for velocities, ξ∞ for distances, and mc∞² for energies. This corresponds to the substitution ħ = m = 1 and µ = g(1) + v∞²/2 in eq. (3). The main objective is now to search for the condition of existence of the solutions to eq. (3), which depends on the value of the injection velocity v∞. The last value under which there is no longer a solution to eq. (3) defines the equation of the supersonic separatrix. In the spirit of ref. [18], eq. (3) can be rephrased as a Hamilton equation describing the dynamics of a fictitious classical particle of position A(x) and momentum p = A′(x) at time x. The corresponding Hamilton function reads [36]

$$H(A,p) = \frac{p^2}{2} + W(A^2) - U(x)\,A^2 \tag{4}$$

with

$$W(A^2 = n) = \frac{v_\infty^2}{2}\Big(n + \frac{1}{n}\Big) + n\,g(1) - G(n),$$

and the antiderivative G(n) = ∫ dn g(n). Equations (3) are then derived from the canonical Hamilton equations ṗ = −∂_A H and Ȧ = ∂_p H, where the dot stands for the total derivative with respect to the effective time x. In particular, in the absence of the external potential U(x), this Hamiltonian is time-independent and the energy E_cl of the classical particle is conserved. The free solutions of the NLS equation can then be readily obtained from the possible trajectories of the classical particle in the potential W(n = A²). The typical shape of this potential is depicted in fig. 2. For example, the equilibrium point referred to as n_min in fig. 2 corresponds to a constant-density supersonic solution (v∞ > c∞), while small oscillations around this classical fixed point correspond to the superposition of an incoming plane wave and a small-amplitude reflected wave describing weak backscattering. Note that, in general, the nonlinearity of the NLS equation forbids such a description based on the superposition principle. However, if the backscattering is weak, the interaction between the incoming wave and the reflected wave is negligible. In general, this separation is not possible and the free solutions are described by cnoidal waves [18]. In the presence of the scattering potential U(x) the energy of the classical particle is no longer conserved and its dynamics may be nontrivial. The boundary condition in the downstream region (where U(x) = 0) imposes that the classical particle starts with A(−∞) = 1 (n_min in fig. 2), and the forward integration has to satisfy that the final energy and position A of the classical particle remain in the well of W(n). This is the strategy we use to obtain the equation of the supersonic separatrix.
In the following we provide explicit analytical results for the supersonic separatrix in the limiting cases of narrow and wide obstacles. We then focus on the case of the attractive obstacle of arbitrary width. Using numerical simulations, we identify resonant transport and solutions with perfect transmission similar to the ones of the superfluid regime. Our study reveals the existence of resonances for very specific sets of injection velocity and obstacle parameters as can be seen in fig. 1, which we characterise in the context of our simplified model, providing a better comprehension of the phenomenon.
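To make the fictitious-particle picture concrete, the following sketch (in Python, using the dimensionless units introduced above) evaluates the potential W(n) for the cubic nonlinearity g(n) = n, for which G(n) = n²/2, and locates the local maximum n_max beyond the fixed point n_min = 1; the saturable nonlinearity is included only to check that both are normalised to c = 1 at n = 1. Parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def g_cubic(n):
    return n

def g_sat(n, ns=0.1):
    # Saturable nonlinearity g(n) = (1 + ns)^2 n / [ns (ns + n)].
    return (1 + ns) ** 2 * n / (ns * (ns + n))

def sound_speed(g, n=1.0, dn=1e-6):
    # Local sound velocity from m c^2 = n g'(n), with hbar = m = 1.
    return np.sqrt(n * (g(n + dn) - g(n - dn)) / (2 * dn))

def W(n, v):
    # Fictitious potential for g(n) = n, where G(n) = n^2 / 2.
    return 0.5 * v**2 * (n + 1.0 / n) + n - 0.5 * n**2

def dW(n, v):
    # W'(n) = mu - g(n) - v^2 / (2 n^2), with mu = g(1) + v^2 / 2.
    return 1.0 + 0.5 * v**2 - n - 0.5 * v**2 / n**2

v = 1.5                                          # supersonic injection velocity
n_max = brentq(dW, 1.0 + 1e-6, 50.0, args=(v,))  # local maximum of W beyond n_min = 1
print(sound_speed(g_cubic), sound_speed(g_sat))  # both equal 1 by construction
print(W(1.0, v), W(n_max, v))                    # W_min < W_max, as in fig. 2
```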
Narrow obstacle. - When the typical range of the obstacle potential is much smaller than the healing length, i.e. when σ ≪ 1, it is possible to approximate U(x) by U(x) = U₀F(σ)δ(x), where F(σ) is the integral of f(|x|/σ) over the whole real axis, simply given by F(σ) = σ in the case of a square obstacle, which can be a well or a barrier depending on the sign of U₀. One can then obtain an analytical expression for the supersonic separatrix by searching for the solutions of the Hamilton equations with the energy

$$E_{\mathrm{cl},\delta} = \varepsilon(v_\infty) = 2U_0^2F^2(\sigma) + v_\infty^2 + g(1) - G(1)$$

associated with the δ-shaped obstacle. The fictitious potential must typically be of the shape shown in fig. 2, with lim_{n→0} W(n) = +∞ and lim_{n→+∞} W(n) = −∞. W(n) has a local minimum W_min attained at n_min = 1, and a local maximum W_max at n_max > 1.
From a classical point of view, the fictitious particle starts at x = −∞ with density n_min = 1. It experiences a kick of energy when meeting the obstacle, going from W_min to ε, and oscillates between the two solutions of W(n) = ε after this encounter. If W_min < ε < W_max, the particle is trapped and the density oscillates between the two solutions of W(n) = ε: this is the supersonic stationary regime. This type of solution is depicted in subfigure (c) of fig. 1. However, if ε < W_min or ε > W_max, the dynamics is no longer stationary and excitations are continuously generated, as depicted in subfigure (b) of fig. 1.
The boundary between the nonstationary and the stationary regimes is by definition the supersonic separatrix, and corresponds to the last stationary solution. It is given by ε(v_c) = W(n_max(v_c)) with n_max such that W′(n_max) = 0. This yields

$$\frac{1}{\sqrt{2}}\left[\frac{v_c^2}{2}\Big(\sqrt{n_{\max}} - \frac{1}{\sqrt{n_{\max}}}\Big)^2 + g(1)(n_{\max} - 1) + G(1) - G(n_{\max})\right]^{1/2} = |U_0 F(\sigma)|. \tag{5}$$
An explicit solution of this equation can be derived for a cubic nonlinearity of the form g(n) = n [18], while for a saturable nonlinearity of the form g(n) = (1 + n_s)²n/[n_s(n + n_s)], characteristic of superfluids of light in saturable media [21], it has to be solved numerically (see Supp. Mat.).
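For the saturable case, the numerical inversion of eq. (5) amounts to two one-dimensional root findings, since the right-hand-side bracket equals W(n_max) − W(1). A minimal sketch, assuming v_c below the plateau value √(2 + 2n_s) discussed next (bracket choices are pragmatic):

```python
import numpy as np
from scipy.optimize import brentq

NS = 0.1  # saturation intensity n_s

def g(n):
    return (1 + NS) ** 2 * n / (NS * (NS + n))

def G(n):
    # Antiderivative of g; the integration constant cancels in differences of W.
    return (1 + NS) ** 2 / NS * (n - NS * np.log(NS + n))

def W(n, v):
    return 0.5 * v**2 * (n + 1.0 / n) + n * g(1.0) - G(n)

def dW(n, v):
    return g(1.0) + 0.5 * v**2 - g(n) - 0.5 * v**2 / n**2

def barrier(v):
    # |U0 F(sigma)| on the separatrix: sqrt([W(n_max) - W(1)] / 2), cf. eq. (5).
    n_max = brentq(dW, 1.0 + 1e-4, 1e4, args=(v,))
    return np.sqrt(0.5 * (W(n_max, v) - W(1.0, v)))

def critical_velocity(u0f):
    # Invert eq. (5); stay safely below the plateau v = sqrt(2 + 2 n_s).
    v_hi = np.sqrt(2 + 2 * NS) - 1e-3
    return brentq(lambda v: barrier(v) - u0f, 1.01, v_hi)

print(critical_velocity(0.5))
```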
The upper panel of fig. 3 represents the supersonic separatrix as a function of the effective amplitude U₀F(σ) of the narrow obstacle. The green curve is obtained for a cubic nonlinearity, whereas the blue solid (dotted) lines are for a saturable nonlinearity with saturation intensity n_s = 0.1 (n_s = 10). Although the cubic nonlinearity is a limiting case of the saturable nonlinearity when n_s ≫ n, large deviations are observable even for n_s = 10, which is of great importance for experiments with fluids of light. Moreover, it is important to emphasise that in the above-mentioned saturable systems, the fictitious potential W(n) may have no local maximum, depending on the value of n_s. A thorough study showed that the supersonic separatrix then saturates to the value v∞ = √(2 + 2n_s), which must be an artefact stemming from the δ-shaped potential, as such a potential does not describe a physical system. This explains the plateau reached by the blue curves in the top part of fig. 3.
It is also interesting to note that eq. (5) predicts a supersonic separatrix symmetric in U₀. This symmetry between repulsive and attractive obstacles - not present in the case of the superfluid separatrix - is also a mathematical artefact stemming from the δ-peak model, and is broken as σ increases, or in other words when the velocity of the flow is large enough that the associated de Broglie wavelength is small enough to resolve the details of the potential. This can be seen in fig. 1, where the symmetry is clearly broken and resonances appear in the attractive case.
Wide repulsive obstacle. - We now consider the obstacle dependence of the separatrix in the case of a wide obstacle, σ ≫ 1. In that case, the local density approximation can be applied and the obstacle is locally approximated by a constant. As a consequence, the fluid is locally uniform and its dependence on U₀ is implicitly given by eq. (3) with A″ = 0 and E_cl,wide = U₀n₀,c + C, where n₀,c > 1, the density of the fluid at x = 0 when v∞ = v_c, is the solution of

$$g'(n_{0,c})\,n_{0,c}^3 = v_c^2. \tag{6}$$
The separatrix is then obtained for values of U₀ and C such that E_cl,wide is tangent to W(n) at its inflexion point n₀,c, as shown in fig. 2. As for the narrow barrier, this supersonic separatrix is represented in the bottom part of fig. 3 for an obstacle of typically large σ, and for two different types of nonlinearities: the green curve is for a cubic nonlinearity, whereas the blue solid (dotted) line is for a saturable nonlinearity with saturation intensity n_s = 0.1 (n_s = 10). As a matter of fact, the effect of saturation of the nonlinearity is less pronounced for large obstacles than for narrow ones. Note that we do not display the attractive part, since at this level of approximation eq. (6) predicts that the critical velocity is always the sound velocity.
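In this hydraulic limit the separatrix can be evaluated for any nonlinearity by combining eq. (6) with the flat-obstacle flow relation (eq. (6) of the Supp. Mat.); a minimal sketch, with the cubic case used as a check against the closed form given in the Supp. Mat.:

```python
import numpy as np
from scipy.optimize import brentq

def vc_wide(u0, g, dg):
    """Hydraulic separatrix: solve g'(n) n^3 = v^2 [eq. (6)] together with
    the flat-obstacle relation v^2/2 (1 - 1/n^2) + g(1) - g(n) = U0
    for the density n_0,c at the top of the obstacle, then return v_c."""
    def f(n):
        v2 = dg(n) * n**3
        return 0.5 * v2 * (1.0 - 1.0 / n**2) + g(1.0) - g(n) - u0
    n0c = brentq(f, 1.0 + 1e-9, 1e3)
    return np.sqrt(dg(n0c) * n0c**3)

# Cubic nonlinearity: n_0,c solves n^3 - 3n + 2 - 2 U0 = 0 and v_c = n_0,c^(3/2).
print(vc_wide(2.0, g=lambda n: n, dg=lambda n: 1.0))  # ~2.828
```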
Attractive obstacle of arbitrary width. - In the general case, the precise shape of the obstacle has an important influence, as we will discuss in this section. In general, eq. (3) has to be solved numerically to obtain the equation of the separatrix, except for specific models such as piece-wise constant obstacles [18]. However, as far as localised obstacles of the form discussed in this work are considered, the generic picture displayed in fig. 1 is valid. In particular, nonlinear resonances may exist and lead to a nontrivial structure of the stability diagram. In ref. [29] such resonances were considered in the case of a repulsive square well obstacle due to the Ramsauer-Townsend effect in arbitrary dimension. These solutions were put forward to be of great interest since they share an important property with superfluid solutions: they do not experience friction with the obstacle although they are supersonic (see Supp. Mat. for a detailed analysis). However, they exist only on specific lines in the stability diagram and do not form a continuous family of solutions like the subsonic superfluid solutions. One therefore cannot find a true superfluid regime above the supersonic separatrix, as these lines form a set of measure zero. In the following, we discuss in detail the case of an attractive potential and give explicit results for a square well potential. Results for a Gaussian potential are available in the Supp. Mat. In particular, we demonstrate that the lobe structure in the stability diagram of fig. 1 is indeed related to these resonances, which continuously connect the superfluid solutions to superfluid-like solutions above the critical velocity along one-dimensional lines in the parameter space (U₀, v∞).
From now on, we focus on the attractive case and exemplify our findings with a square well potential of amplitude U₀ and width σ, and complement the stability diagram with the knowledge of the transmission coefficient in the (U₀, v∞) plane. While in the linear case [i.e. g(n) = 0 in eq. (1)] the reflection and transmission coefficients, as well as the positions of the resonances, are well known [37], they cannot be defined easily in the nonlinear case, as previously discussed. Nevertheless, it is possible to give a proper definition of scattering amplitudes using the theory of adiabatic invariants [36,38] or in the weak backscattering limit [36,39]. Since we are mostly interested in the position of the resonances, we will employ the latter definition, which for the transmission coefficient reads
$$T = \left[1 + \frac{\Delta E}{2(v_\infty^2 - 1)}\right]^{-1}, \tag{7}$$
with ∆E the energy difference of the fictitious particle far away from the obstacle on either side of it:
ΔE = H[A(x), ∂_x A(x)] − H[A = 1, p = 0] for x → +∞.
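In practice, ΔE can be obtained by integrating eq. (3) across the obstacle, starting from the unperturbed downstream solution; the following sketch does so for an attractive square well and g(n) = n, with illustrative parameters chosen inside the stationary supersonic region:

```python
import numpy as np
from scipy.integrate import solve_ivp

V, U0, SIGMA = 2.5, -1.0, 1.0  # injection velocity, well depth, well width

def U(x):
    return U0 if abs(x) <= SIGMA / 2 else 0.0  # attractive square well

def rhs(x, y):
    # A'' = 2 [U(x) + g(A^2) - mu] A + v^2 / A^3, with g(n) = n, hbar = m = 1.
    A, p = y
    mu = 1.0 + 0.5 * V**2
    return [p, 2.0 * (U(x) + A**2 - mu) * A + V**2 / A**3]

def W(n):
    return 0.5 * V**2 * (n + 1.0 / n) + n - 0.5 * n**2

# Start from the flat downstream solution A = 1, A' = 0 and integrate across
# the obstacle; the energy gained by the fictitious particle gives T via eq. (7).
sol = solve_ivp(rhs, [-10 * SIGMA, 10 * SIGMA], [1.0, 0.0],
                max_step=SIGMA / 50, rtol=1e-10, atol=1e-12)
A, p = sol.y[:, -1]
dE = 0.5 * p**2 + W(A**2) - W(1.0)
print("Delta E =", dE, " T =", 1.0 / (1.0 + dE / (2.0 * (V**2 - 1.0))))
```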
Numerical results are summarised in fig. 4. The colour scale shows the transmission of the fluid across the obstacle as a function of the injection velocity of the fluid and of the amplitude of the square well obstacle, for a given value of σ. The coloured zone is separated from the white zones of undefined transmission (corresponding to the nonstationary regime) by the supersonic separatrix, which clearly exhibits resonances. In particular, the perfect-transmission lines are shown to follow exactly the nontrivial structure of the stability diagram and are drawn as orange dotted lines, while the white dashed line represents the envelope of the resonances. Both curves can be calculated analytically for a square well potential, as suggested in [18]. In the following, we provide explicit results for g(n) = n. Again, thinking in terms of a fictitious particle moving in a classical potential provides a simple picture of the underlying physics, and the mechanism behind the existence of the resonances is illustrated in fig. 5.
We start by discussing the stability diagram. Before the excitation caused by the rectangular obstacle, the fictitious particle is at rest from x = −∞ to x = −σ/2, with density n₊ and energy E₋ = W(n₊) in the potential W. As it reaches the obstacle, it undergoes a kick of energy ΔE = E₀ − E₋, going from (n₊, E₋) to (n₊, E₀) in the new potential W₀ = W + U₀n. The particle then oscillates in W₀ between n₊ and n₋ as it progresses in the obstacle, and returns to the potential W with density ñ at x = +σ/2. Several cases leading to different dynamics for the fluid are then possible depending on the values of U₀, v∞ and σ. In that context, we define L̃ as the distance between n₊ and ñ travelled in W₀, and L₀ as the length of the round trip between n₊ and n₋, i.e. the period of the cnoidal wave in W₀:

$$\tilde{L} = \frac{1}{\sqrt{2}}\int_{\sqrt{\tilde{n}}}^{\sqrt{n_+}} \frac{dA}{\sqrt{E_0 - W_0(A^2)}}, \tag{8a}$$

$$L_0 = \sqrt{2}\int_{\sqrt{n_-}}^{\sqrt{n_+}} \frac{dA}{\sqrt{E_0 - W_0(A^2)}}. \tag{8b}$$

Fig. 5: Classical potentials seen by the fictitious particle in the case of an attractive square well obstacle. The lower curve is the potential W(n) for x < −σ/2 and x > σ/2, while W₀(n) is the one for x ∈ [−σ/2, σ/2].
For stationary solutions to exist, it is necessary that the energy of the fictitious particle when it exits the obstacle be lower than the maximum of W(n) (the configuration of fig. 5). That way, the particle is always confined. The envelope of the resonances (white dashed line in fig. 4), above which stationary solutions exist no matter the value of σ, is obtained when the energy of the particle at the end of the obstacle corresponds exactly to the maximum of the fictitious potential. An analytical expression for that envelope, delimiting the case where solutions always exist from the one where the existence of said solutions depends on the value of σ, can be found in eqns. (34) and (35) of [18]. It is interesting to note that, for a square well obstacle, U₀ and σ are uncorrelated quantities, and the amplitude of the resonances does not depend on σ, as shown by the white dashed line in fig. 4. For such an obstacle, the resonances never disappear and their envelope is the same for any value of σ. Interestingly, numerical simulations showed that this is not the case for a Gaussian potential: the envelope of the resonances does depend on σ, and decreases as the width increases. These results are presented in the Supp. Mat. One can also see in fig. 4 that the resonances multiply as σ increases. At some point, for an arbitrarily large value of σ, the resonances become so thin and numerous that they are no longer distinguishable from one another, to the extent that the supersonic separatrix would be given by v_c = 1 in the limit σ ≫ 1. Since the subsonic separatrix is also given by v_c = 1, the gap opened by the nonstationary regime slowly closes as the width of the obstacle increases.
Concerning the position of the resonances, the connection of the subsonic superfluid solution to the lines of perfect transmission occurs at v∞ = c∞. From our classical analysis, the two extrema of the potential W(n) then merge into a unique saddle point located at n = n₊. Oscillations are no longer possible, and the only way for a stationary state to exist is when the excited fictitious particle exits W₀ with the same density it had when entering it, meaning L̃ = 0. More generally, when the fictitious particle performs an arbitrary number of round trips in the excited potential W₀, so that its energy after exiting the obstacle is exactly that of before the excitation, a resonance forms between the width of the obstacle and the wavelength of the cnoidal wave of the oscillating particle, causing a perfect transmission and linking the superfluid regime to the stationary nonsuperfluid one. The equation of these lines of perfect transmission (see the orange dotted lines in fig. 4) is then given by αL₀ = σ, with α an integer. Along these lines, the superfluid/stationary-nonsuperfluid transition is continuous and the system is always stationary. Note that we have numerically checked the stability of this solution by performing time-dependent simulations of the NLS eq. (1).
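The resonance condition αL₀ = σ can be evaluated directly by quadrature. A minimal sketch for g(n) = n, writing the effective potential inside the well as W₀(n) = W(n) − U₀n with U₀ < 0 (equivalent to the convention used above if U₀ there denotes the depth of the well):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

V, U0 = 1.2, -2.0  # injection velocity and (negative) well amplitude; g(n) = n

def W(n):
    return 0.5 * V**2 * (n + 1.0 / n) + n - 0.5 * n**2

def W0(n):
    return W(n) - U0 * n  # effective potential inside the attractive well

E0 = W0(1.0)  # the particle enters the well with n_+ = 1 and p = 0
n_minus = brentq(lambda n: W0(n) - E0, 1e-6, 1.0 - 1e-9)  # inner turning point

# Round trip of the cnoidal oscillation, eq. (8b); quad copes with the
# integrable inverse-square-root singularities at the two turning points.
L0 = np.sqrt(2.0) * quad(lambda A: 1.0 / np.sqrt(E0 - W0(A**2)),
                         np.sqrt(n_minus), 1.0)[0]

# Widths sigma at which the first perfect-transmission lines cross this U0:
print([alpha * L0 for alpha in (1, 2, 3)])
```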
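As for the time-dependent check mentioned above, a first-order split-step Fourier integration of eq. (1) is enough to propagate a flowing state past the obstacle. The sketch below starts from a uniform flowing state rather than from the exact stationary profile, so it only illustrates the machinery; all parameters are illustrative.

```python
import numpy as np

Nx, L, dt, steps = 1024, 200.0, 0.01, 2000
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)
U0, SIGMA = -1.0, 1.0
U = np.where(np.abs(x) <= SIGMA / 2, U0, 0.0)  # square well obstacle

# Snap the flow velocity to the periodic box so the plane wave is continuous.
v = 2 * np.pi * round(1.2 * L / (2 * np.pi)) / L
psi = np.exp(1j * v * x)  # uniform flowing state with n = 1

kin = np.exp(-0.5j * k**2 * dt)  # exact kinetic propagator over one step
for _ in range(steps):
    psi = np.fft.ifft(kin * np.fft.fft(psi))          # kinetic step
    psi *= np.exp(-1j * (U + np.abs(psi) ** 2) * dt)  # potential + g(n) = n step

print("max density after evolution:", np.abs(psi).max() ** 2)
```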
Conclusion. - In this paper we have studied the supersonic separatrix between the nonstationary and the normal stationary regimes of a generic quantum fluid flowing past a localised obstacle of arbitrary amplitude and width, in the 1D mean-field regime. We have computed this critical velocity by deriving nonperturbative exact expressions in the narrow- and wide-obstacle limits, and by studying more thoroughly the peculiar case of the attractive obstacle of arbitrary width, which exhibits a nontrivial behaviour. For most parameters, a standing wave forms ahead of the obstacle, with a constant friction force and a partially transmitted wave. However, we have shown that, along certain lines, due to resonant transport, supersonic solutions may share fundamental properties with their subsonic superfluid counterparts: they are symmetric solutions and, therefore, the quantum fluid is totally transmitted through the obstacle and does not experience friction.
Altogether, these results are important for experimental studies of the transport properties of quantum fluids described by a generalised NLS equation. They provide a clear map in parameter space of the different possible stationary regimes of flow for a quantum fluid, in order to guide experimental studies towards the desired regime of nonlinear transport.
Finally, an extension of this work to higher dimensions would be desirable, notably in 2D, as experimental data are available for a saturable nonlinearity [21]. * * * We acknowledge P. Vignolo, M. Bellec and C. Michel for inspiring discussions. This work has benefited from the financial support of Agence Nationale de la Recherche under Grants Nos. ANR-21-CE30-0008 STLight (Superfluid and Turbulent Light in Complex Media) and ANR-21-CE47-0009 Quantum-SOPHA (Quantum Simulators for One-Dimensional Systems with Photons and Atoms).
Supplementary material for "Stationary transport above the critical velocity in a one-dimensional superflow past an obstacle"

Nonlinear Schrödinger equation dictionary. - The mean-field dynamics of various 1D systems can be described by generalised NLS equations. Two instances of such systems, exemplified in the main paper, will be given in this section.
This equation is mainly known to describe the dynamics of the one-dimensional reduction ψ(x, t) of the condensate wave function of a dilute ultracold atomic Bose gas. In that case, the system consists of weakly repulsive identical atoms in a highly asymmetric harmonic trap, which makes the evolution of the condensate quasi one-dimensional along a given axis of the trap [1]. In the present paper, we mostly work with a toy model consisting of a square potential for simplicity but, from an experimental point of view, the obstacle potential U(x) can be realised by crossing the atomic cloud with a detuned laser beam larger than the transverse size of the condensate [2]. The flow of the Bose fluid in a given direction can then be simulated by displacing the laser beam creating the obstacle in the opposite direction [2], which is equivalent to looking at the system in a reference frame where the fluid is at rest. In that context, the self-interaction potential g(n) of the condensate is proportional to n and given by g(n) = 2ħω⊥ n a_s, where a_s is the s-wave scattering length of the two-body interaction potential, ω⊥ is the transverse frequency of the harmonic trap, and the system is dilute, n a_s ≪ 1. This configuration of a dilute gas is the one exemplified in the main text, but in its dimensionless form g(n) = n, obtained after a relevant rescaling of the main quantities.
The NLS equation is also used in the optics domain, for example to describe the propagation of a scalar laser field in a local nonlinear medium. Such interactions between light and matter can be encountered in various domains pertaining to nonlinear optics or atomic physics [3][4][5][6][7][8][9][10][11]. A particular realisation, relevant to the study of the transport of a fluid of light around an obstacle, is the paraxial propagation of a monochromatic optical field in a nonlinear medium [6,7,12]. Such systems can be mapped onto a one-dimensional Gross-Pitaevskii-type evolution of a quantum fluid of interacting photons in the plane transverse to the propagation [1], the propagation coordinate playing the role of time. The transverse direction represents the space in which the fluid of light evolves, which is generally two-dimensional. An obstacle U(x) can be introduced through a spatial modulation of the linear refractive index of the medium [6,7]. The effective mass m is related to the propagation constant of the fluid-of-light beam propagating in the medium, the density of the fluid is given by the light intensity, and its velocity corresponds to the gradient of the phase of the optical field. The photon-photon interactions, mediated by the nonlinear response of the material in which the fluid of light propagates, lead to different kinds of nonlinearities, depending on the medium that is considered. For example, a defocusing Kerr medium gives a nonlinearity g(n) that increases linearly with the light intensity n [3,4]. In a defocusing saturable medium, like the nonlinear photorefractive crystal used in the experiments of refs. [6,7], the nonlinearity takes the saturable form g(n) = πN³r₃₃E₀ n/[λ₀(n_s + n)], where N and r₃₃ are respectively the mean refractive index and the electro-optic coefficient of the crystal along the extraordinary axis, E₀ is the amplitude of an electric field applied to the crystal, λ₀ is the wavelength of the laser carrier in free space, and n_s is a saturation intensity adjusted by illuminating the crystal with white light. In the present paper, the dimensionless version of the saturable nonlinearity reads g(n) = (1 + n_s)²n/[n_s(n_s + n)], and is once again obtained with properly rescaled units.
Explicit expressions for the critical velocity. - Based on previous studies [13][14][15], we first detail the calculations leading to the expression of the upper separatrix in the case of an obstacle potential of the form U(x) = U₀F(σ)δ(x), in the limit σ ≪ 1. The equation for the critical velocity, derived from eq. (3) of the main paper, reads

$$\frac{1}{2}\frac{\partial_{xx}\sqrt{n}}{\sqrt{n}} + \frac{v_\infty^2}{2}\Big(1 - \frac{1}{n^2}\Big) + g(1) - g(n) = U_0F(\sigma)\,\delta(x). \tag{1}$$
The process of solving this equation for the density can be separated into two parts: first, at x = 0, integrating across the δ-peak yields a condition on the first derivative of the density; this condition then acts as a link between the two solutions for x < 0 and x > 0. The radiation condition [16] imposes that n(x) = n∞ in the region upstream of the obstacle, which forces ∂ₓn(0⁻) = 0. In the downstream region, an infinite cnoidal wave is generated, for which the expression of the density is given by the solution of
$$\frac{1}{2}\frac{\partial_{xx}\sqrt{n}}{\sqrt{n}} + \frac{v_\infty^2}{2}\Big(1 - \frac{1}{n^2}\Big) + g(1) - g(n) = 0. \tag{2}$$
A graphic representation of the upstream and downstream solutions for the density, obtained after a numerical integration of the Gross-Pitaevskii equation, can be found in fig. 4. It is possible to obtain exact analytical results for the upper separatrix. The procedure is straightforward: obtain the expression of n_max through W′(n_max) = 0, and then inject it into ε(v_c) = W(n_max(v_c)), which corresponds to eq. (5) of the main paper. For a cubic nonlinearity, we find

$$n_{\max}(v_c) = \frac{v_c^2 + v_c\sqrt{v_c^2 + 8}}{4}, \tag{3}$$

which then leads to

$$|U_0F(\sigma)| = \frac{\sqrt{v_c(v_c^2 + 8)^{3/2} + v_c^4 - 20v_c^2 - 8}}{4\sqrt{2}}. \tag{4}$$
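Eqs. (3) and (4) can be cross-checked in a few lines against eq. (5) of the main paper:

```python
import numpy as np

def barrier_closed(vc):
    # Eq. (4): |U0 F(sigma)| on the separatrix for g(n) = n.
    return np.sqrt(vc * (vc**2 + 8) ** 1.5 + vc**4 - 20 * vc**2 - 8) / (4 * np.sqrt(2))

def barrier_from_eq5(vc):
    # Same quantity evaluated directly from eqs. (3) and (5) of the main paper.
    n = (vc**2 + vc * np.sqrt(vc**2 + 8)) / 4
    s = (0.5 * vc**2 * (np.sqrt(n) - 1 / np.sqrt(n)) ** 2
         + (n - 1) + 0.5 - 0.5 * n**2)  # g(1) = 1, G(n) = n^2 / 2
    return np.sqrt(s / 2)

for vc in (1.5, 2.0, 3.0):
    print(vc, barrier_closed(vc), barrier_from_eq5(vc))  # the two columns agree
```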
On the other hand, things become quite cumbersome when considering a saturable nonlinearity, and even if it is possible to obtain n_max analytically,

$$n_{\max}(v_c) = \frac{v_c^2(1 + n_s) + v_c\sqrt{8n_s(1 + n_s) + v_c^2(1 - n_s)^2}}{2(2 + 2n_s - v_c^2)}, \tag{5}$$

the equation leading to the critical velocity has to be solved numerically.
In the wide-obstacle limit σ ≫ 1, the gradients of U(x) are small enough that the fluid behaves as if it were uniform. Based on a rigorous multiple-scale treatment of the obstacle potential when σ ≫ 1, we consider an obstacle of the form U(x) = U₀[1 + f″(0)x²/(2σ²)], where the terms in the square brackets correspond to the series expansion of f(|x/σ|) to second order in 1/σ ≪ 1. The method to obtain the equation of the supersonic separatrix is the following: the last existing solution to

$$\frac{v_\infty^2}{2}\Big(1 - \frac{1}{n^2}\Big) + g(1) - g(n) = U_0 \tag{6}$$

(eq. (3) of the main paper with a flat obstacle of amplitude U₀, i.e. the zeroth order in the 1/σ expansion of the obstacle potential) is obtained for v∞ = v_c and n = n₀,c, defined through the condition g′(n₀,c)n₀,c³ = v_c². Exact analytical results were obtained for the critical velocity as a function of the amplitude of the obstacle (its width being fixed and supposed large), for a cubic nonlinearity:

$$v_{c,0} = \Big[2(U_0 - 1) + 3\big(U_0 - 1 + \sqrt{U_0(U_0 - 2)}\big)^{-1/3} + 3\big(U_0 - 1 + \sqrt{U_0(U_0 - 2)}\big)^{1/3}\Big]^{1/2}. \tag{7}$$
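A quick numerical check of eq. (7) (real-valued for U₀ ≥ 2; below that the Cardano root pair becomes complex and a trigonometric form would be needed):

```python
import numpy as np

def vc0_closed(u0):
    a = u0 - 1 + np.sqrt(u0 * (u0 - 2))
    return np.sqrt(2 * (u0 - 1) + 3 * a ** (-1 / 3) + 3 * a ** (1 / 3))

print(vc0_closed(3.0))  # ~3.2538, matching the hydraulic root finding shown earlier
```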
Similar results were obtained numerically for a saturable nonlinearity. Contrary to the critical velocity for superfluidity (i.e. the lower separatrix), for which superfluidity is broken for a given U₀,max which depends on the chosen nonlinearity [17], eq. (7) is valid for any amplitude of the obstacle. However, the larger U₀, the harder it is to reach a stationary regime, which is intuitive, as a strong obstacle induces more nonlinear emission inside the fluid than a relatively weak one, making the regime of quantum turbulence harder to depart from.
The approach used in [14,17] to perform an analytical treatment of the higher orders in 1/σ ≪ 1 is no longer valid for the supersonic separatrix: it assumes that n(x) = n₀,c + δn(x) with δn(x)/n₀,c ≪ 1, meaning small density fluctuations around the obstacle, which is clearly not the case here, as v∞ has no upper bound and can lead to density fluctuations of any amplitude. However, a numerical simulation showed that, if it exists, this correction to the zeroth order is extremely small compared to v_c,0. Given these results, we consider the hydraulic approximation sufficient to describe the supersonic separatrix in the wide-obstacle limit.
Results for a Gaussian obstacle. - After obtaining results in the two limits of the wide and narrow obstacle, it is natural to move to the generic situation of a localised obstacle of arbitrary range, for which the supersonic separatrix is not trivial. We performed a numerical simulation yielding the results illustrated in fig. 1 for a repulsive Gaussian obstacle of the form U(x) = U₀ exp(−x²/σ²), with U₀ = 0.5 and for the two nonlinearities considered before: g(n) = n and g(n) = (1 + n_s)²n/[n_s(n_s + n)]. The upper part of the figure (v∞ > 1) depicts the supersonic separatrix as a function of the width of the obstacle (the focus of the main article), whereas the lower part (v∞ < 1) encompasses the previous results obtained in [17] for the critical velocity for superfluidity.
In addition, we addressed the case of the attractive obstacle, for which our numerical simulations yield nontrivial results. They indeed exhibit many resonances (except in the δ-peak limit) that multiply as σ increases, and that are delimited by an envelope which can be analytically determined in the context of our toy model (see the next section). These results, applied to a Gaussian obstacle, exhibit another interesting feature: the envelope of said resonances acquires a negative dependence on σ, as can be seen in fig. 2 (which is not the case for a square well obstacle). The wider the obstacle, the more numerous the resonances and the lower their envelope.
Characterisation of the resonances. - When looking at the phase diagram (U₀, v∞), one can see that three different regimes coexist for an attractive obstacle: a nonstationary regime under the lobes, a regime that is always stationary located above the envelope of the resonances, and a regime that can be stationary depending on the value of σ, which lies between the envelope and the lobes. Going back to a square obstacle and following the calculations of [13], we obtained an analytical expression for the envelope of the resonances, which can also be found in eqns. (34) and (35) of [13]:

$$1 - \frac{F(v_\infty)}{2U_0} = G(v_\infty, U_0), \tag{8}$$

$$F(v_\infty) = \left[\frac{v_\infty^2}{4}\Big(1 + \sqrt{1 + \frac{8}{v_\infty^2}}\Big) - 1\right]\times\left[\frac{5v_\infty^2}{4} + 1 - \frac{3v_\infty^2}{4}\sqrt{1 + \frac{8}{v_\infty^2}}\right], \tag{9}$$

$$G(v_\infty, U_0) = \left[\frac{v_\infty^2 + 1}{2} + U_0 - \sqrt{\Big(\frac{v_\infty^2 + 1}{2} + U_0\Big)^2 - v_\infty^2}\,\right]^{1/2}. \tag{10}$$
Concerning the transmission coefficient, as g(n) becomes nonzero, the usual approach in terms of incident and reflected waves is no longer possible because the problem is nonlinear, and the resonances, defined by the condition T = 1, are slightly shifted, as discussed in [18,19]. We used a numerical simulation based on eq. (7) of the main paper to compute it, yielding fig. 4 of the main article, with T represented by the associated colour bar.
The aim is to characterise these resonances along the line v∞ = 1, as it is there that the unbroken lines of perfect transmission linking the superfluid regime to the stationary supersonic one originate. To get rid of the energy offset, we look at the distance between the resonances and compare it to the linear case, for which the transmission coefficient is easily recovered [20] and in which two consecutive resonances are separated by (2k + 1)π²/(2σ²). We introduce the rescaled separation between consecutive resonances

$$\Delta U_{\mathrm{resc.}}(k) = \frac{2\sigma^2}{(2k + 1)\pi^2}\,|U_{0,k+1} - U_{0,k}|, \tag{11}$$

and plot it in fig. 3. This function is constant (and equal to 1) in the linear case g(n) = 0 and is plotted as a black dashed line, whereas the dots, triangles and squares are numerically obtained for different values of σ in the nonlinear case. As the left-hand side of eq. (11) gets rid of any σ-dependence, the different coloured curves should collapse onto the black dashed one if the linear and the nonlinear cases followed the same distribution. One can see in fig. 3 that this is the case for large values of k. However, there is a clear deviation from the linear case for small values of k, which is more important for wider obstacles. The nonlinear effects thus have a bigger impact on the separation between the resonances for small k and large σ. It is also interesting to note that we have fewer data for σ = 2 than for the other cases, as this obstacle configuration leads to fewer resonances. Concerning the superfluid-like solutions in the supersonic regime, we plot in fig. 4 the density profile for several points in the (U₀, v∞) plane. The dotted and dashed curves are respectively located close to the first and second line of perfect transmission (k = 1 and k = 2). The density is qualitatively the same as in the superfluid regime: a localised dip where the obstacle is, and a flat profile otherwise. The only difference is that as k increases, so does the number of oscillations in the density profile. When the chosen parameters do not coincide with a resonance, the density profile is similar to the one in the supersonic regime for U₀ > 0, as represented by the dash-dotted line. This case can be better understood through an analogy with the linear case: a wave coming from the right hits the obstacle and is partially transmitted through it as T < 1, and part of it is also reflected, creating a standing wave.
Finally, we were able to characterise the friction exerted on the obstacle by evaluating the force

$$F_{\mathrm{fric.}} = -\int dx\, n(x)\,\partial_x U(x).$$
We recover, as shown in fig. 5, that for the specific values of (U₀, v∞) leading to T = 1, the force drops to zero: the fluid experiences no friction along the lines of perfect transmission.
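The friction integral above is straightforward to evaluate on a grid. The density profile used in the sketch below is a made-up symmetric dip, intended only to illustrate the quadrature and the fact that a profile symmetric about the obstacle yields zero drag, as on the T = 1 lines:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
U0, sigma = 0.5, 1.0
U = U0 * np.exp(-x**2 / sigma**2)           # Gaussian obstacle
n = 1.0 - 0.3 * np.exp(-x**2 / sigma**2)    # hypothetical symmetric density dip

F_fric = -np.trapz(n * np.gradient(U, x), x)
print(F_fric)  # ~0: a symmetric profile exerts no drag
```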
Fig. 1 :
1Typical phase diagram of the possible stationary flows of a quantum fluid in the presence of a localised obstacle as a function of the fluid velocity v∞ and the strength of the obstacle U0. Here the obstacle is a square potential of width σ ∼ ξ∞. The different regimes range from a superfluid (light blue) to a normal stationary regime (dark blue), and the white phase in between corresponds to the nonstationary nonsuperfluid regime. The dotted line corresponds to a resonant state where the supersonic solution mimics the superfluid ones. Generic space dependent density profiles n(x) are given in subfigures (a), (b), (c) and (d), for the corresponding points in the phase diagram. p-1 arXiv:2306.05048v1 [cond-mat.quant-gas] 8 Jun 2023 summarised in
Fig. 2 :
2Schematic behaviour of the fictitious potential W (n) for a cubic nonlinearity of the form g(n) = n.
Fig. 3 :
3The top (bottom) figure represents the supersonic separatrix for a repulsive narrow (wide) obstacle of amplitude U0F (σ) (U0). The two types of nonlinearity g(n) considered here are indicated in the legend which applies to both figures.
Fig. 4 :
4Phase diagram (U0, v∞) (in the natural units of the superfluid) of a quantum fluid flowing across attractive square well potentials of respective width of σ = 1 (top) and σ = 4 (bottom), and for a cubic nonlinearity of the form g(n) = n. The transmission across the barrier is associated with the colour bar and is maximum along the orange dotted lines which determine the position of the resonances, whereas the white dashed line represents the envelope of said resonances.
J. Huynh 1 , F. Hébert 1 , P.-É. Larré 1 and M. Albert 1 1 Université Côte d'Azur, CNRS, INPHYNI, France Abstract -In this supplementary material, we provide a discussion about different experimental systems that are described by nonlinear Schrödinger equations. We then give further details about explicit analytical expressions for the critical velocity and about the case of a Gaussian obstacle. Finally we characterise the resonances observed above the supersonic separatrix.
Fig. 1 :
1Critical velocities in the subsonic and supersonic regimes as a function of the typical range σ of the obstacle potential U (x) = U0 exp (−x 2 /σ 2 ) supposed to be repulsive. The velocities are plotted at a fixed U0 = 0.5 for two differents g(n)'s. The asymptotic results for the δ-limit and the wide obstacle limit are represented in black dashed lines and are in relatively good agreement with numerics.
Fig. 2 :
2Supersonic separatrix for a Gaussian potential U (x) = U0 exp (−x 2 /σ 2 ) as a function of its negative amplitude. The results presented here are obtained through a numerical simulation for two different width σ = 1 and σ = 2, and clearly exhibit resonances for particular values of U0.
Fig. 3 :
3Rescaled value of the separation between the resonances for a square well obstacle as a function of k the index of the resonance. The coloured dots, triangles and square stand respectively for σ = 2, σ = 4 and σ = 8 in the nonlinear regime g(n) = n and are obtained after a numerical simulation, whereas the black dashed curve is the theoretical value for g(n) = 0.
Fig. 4: Density profiles for several chosen parameters in the (U0, v∞) plane. The flow goes from right to left with constant velocity v∞ = 2.5, and encounters a square well obstacle of width σ = 1.

Fig. 5: Friction force exerted by a fluid of velocity v∞ = 3.5 on a square obstacle as a function of its amplitude U0, for two different widths σ = 1 and σ = 4. The values of U0 for which F_fric. = 0 are located on the lines of total transmission.
References

[1] Leggett A. J., Rev. Mod. Phys., 71 (1999) S318.
[2] Kapitza P., Nature, 141 (1938) 74.
[3] Allen J. F. and Misener A. D., Nature, 141 (1938) 75.
[4] Osheroff D. D., Richardson R. C. and Lee D. M., Phys. Rev. Lett., 28 (1972) 885.
[5] Raman C. et al., Phys. Rev. Lett., 83 (1999) 2502.
[6] Amo A. et al., Nature Physics, 5 (2009) 805.
[7] Michel C., Boughdad O., Albert M., Larré P.-É. and Bellec M., Nat. Commun., 9 (2018) 2108.
[8] Landau L. D., Phys. Rev., 60 (1941) 356.
[9] Landau L. D., J. Phys. USSR, 5 (1941) 71.
[10] Phillips A. and McClintock P. V. E., Phys. Rev. Lett., 33 (1974) 1468.
[11] Ginzburg V. L. and Pitaevskii L. P., Zh. Eksp. Teor. Fiz., 34 (1958) 1240.
[12] Gross E. P., Il Nuovo Cimento, 20 (1961) 454.
[13] Pitaevskii L. P., Sov. Phys. JETP, 13 (1961) 451.
[14] Frisch T., Pomeau Y. and Rica S., Phys. Rev. Lett., 69 (1992) 1644.
[15] Huynh J., Albert M. and Larré P.-É., Phys. Rev. A, 105 (2022) 023305.
[16] Pitaevskii L. P. and Stringari S., Bose-Einstein Condensation and Superfluidity (Oxford University Press, Oxford) 2016.
[17] Carusotto I. and Ciuti C., Rev. Mod. Phys., 85 (2013) 299.
[18] Leboeuf P. and Pavloff N., Phys. Rev. A, 64 (2001) 033602.
[19] Engels P. and Atherton C., Phys. Rev. Lett., 99 (2007) 160405.
[20] Dries D., Pollack S. E., Hitchcock J. M. and Hulet R. G., Phys. Rev. A, 82 (2010) 033603.
[21] Eloy A., Boughdad O., Albert M., Larré P.-É., Mortessagne F., Bellec M. and Michel C., EPL, 134 (2021) 26001.
[22] Nazarenko S., Wave Turbulence, Vol. 825 (Springer Science & Business Media) 2011.
[23] Barenghi C. F., Skrbek L. and Sreenivasan K. R., Proc. Natl. Acad. Sci. USA, 111 (2014) 4647.
[24] Leszczyszyn A. M., El G. A., Gladush Yu. G. and Kamchatnov A. M., Phys. Rev. A, 79 (2009) 063608.
[25] Kamchatnov A. M. and Pavloff N., Phys. Rev. A, 85 (2012) 033603.
[26] Paul T., Richter K. and Schlagheck P., Phys. Rev. Lett., 94 (2005) 020404.
[27] Rapedius K., Witthaut D. and Korsch H. J., Phys. Rev. A, 73 (2006) 033608.
[28] Rapedius K. and Korsch H. J., Phys. Rev. A, 77 (2008) 063610.
[29] Paris Mandoki A., Shearring J., Mancarella F., Fromhold T. M., Trombettoni A. and Krüger P., Scientific Reports, 7 (2017) 9070.
[30] Fontaine Q., Bienaimé T., Pigeon S., Giacobino E., Bramati A. and Glorieux Q., Phys. Rev. Lett., 121 (2018) 183604.
[31] Vocke D., Wilson K., Marino F., Carusotto I., Wright E. M., Roger T., Anderson B. P., Öhberg P. and Faccio D., Phys. Rev. A, 94 (2016) 013849.
[32] Leboeuf P. and Moulieras S., Phys. Rev. Lett., 105 (2010) 163904.
[33] Larré P.-É. and Carusotto I., Phys. Rev. A, 92 (2015) 043802.
[34] Šantić N., Fusaro A., Salem S., Garnier J., Picozzi A. and Kaiser R., Phys. Rev. Lett., 120 (2018) 055301.
[35] Lamb H., Hydrodynamics (Cambridge University Press, Cambridge) 1997.
[36] Paul T., Hartung M., Richter K. and Schlagheck P., Phys. Rev. A, 76 (2007) 063605.
[37] Griffiths D. J. and Schroeter D. F., Introduction to Quantum Mechanics (Cambridge University Press, Cambridge) 2018.
[38] Landau L. D. and Lifshitz E., Mechanics, Third Edition: Volume 1 (Course of Theoretical Physics) (Butterworth-Heinemann) 1976.
[39] Paul T., Albert M., Schlagheck P., Leboeuf P. and Pavloff N., Phys. Rev. A, 80 (2009) 033615.
References of the supplementary material

[1] Pitaevskii L. P. and Stringari S., Bose-Einstein Condensation and Superfluidity (Oxford University Press, Oxford) 2016.
[2] Engels P. and Atherton C., Phys. Rev. Lett., 99 (2007) 160405.
[3] Agrawal G. P., Nonlinear Fiber Optics (Academic Press, Cambridge) 2019.
[4] Boyd R. W., Nonlinear Optics (Academic Press, Cambridge) 2019.
[5] Vocke D., Wilson K., Marino F., Carusotto I., Wright E. M., Roger T., Anderson B. P., Öhberg P. and Faccio D., Phys. Rev. A, 94 (2016) 013849.
[6] Michel C., Boughdad O., Albert M., Larré P.-É. and Bellec M., Nat. Commun., 9 (2018) 2108.
[7] Eloy A., Boughdad O., Albert M., Larré P.-É., Mortessagne F., Bellec M. and Michel C., EPL, 134 (2021) 26001.
[8] Landau L. D., Phys. Rev., 60 (1941) 356.
[9] Leboeuf P. and Moulieras S., Phys. Rev. Lett., 105 (2010) 163904.
[10] Larré P.-É. and Carusotto I., Phys. Rev. A, 92 (2015) 043802.
[11] Šantić N., Fusaro A., Salem S., Garnier J., Picozzi A. and Kaiser R., Phys. Rev. Lett., 120 (2018) 055301.
[12] Carusotto I. and Ciuti C., Rev. Mod. Phys., 85 (2013) 299.
[13] Leboeuf P. and Pavloff N., Phys. Rev. A, 64 (2001) 033602.
[14] Hakim V., Phys. Rev. E, 55 (1997) 2835.
[15] Pavloff N., Phys. Rev. A, 66 (2002) 013610.
[16] Lamb H., Hydrodynamics (Cambridge University Press, Cambridge) 1997.
[17] Huynh J., Albert M. and Larré P.-É., Phys. Rev. A, 105 (2022) 023305.
[18] Paul T., Richter K. and Schlagheck P., Phys. Rev. Lett., 94 (2005) 020404.
[19] Rapedius K., Witthaut D. and Korsch H. J., Phys. Rev. A, 73 (2006) 033608.
[20] Cohen-Tannoudji C., Diu B. and Laloë F., Quantum Mechanics, Vol. I (Blackwell Verlag GmbH) 2019.
| [] |
[
"Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic",
"Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic"
] | [
"Alexander V Gheorghiu [email protected] \nUniversity College London\nWC1E 6BTLondonUK\n\nInstitute of Philosophy\nUniversity of London\nWC1H 0ARLondonUK\n"
] | [
"University College London\nWC1E 6BTLondonUK",
"Institute of Philosophy\nUniversity of London\nWC1H 0ARLondonUK"
] | [] | This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established. | null | [
"https://export.arxiv.org/pdf/2306.05106v1.pdf"
] | 259,108,639 | 2306.05106 | 2eacc0a3cf7c1598e1dd8e15482e148cb4f62c5c |
Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic
8 Jun 2023
Alexander V Gheorghiu [email protected]
University College London
WC1E 6BTLondonUK
Institute of Philosophy
University of London
WC1H 0ARLondonUK
Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic
8 Jun 2023
Keywords: Logic · Semantics · Proof Theory · Proof-theoretic Semantics · Substructural Logic · Multiplicative Connectives
Alexander V. Gheorghiu 1 [0000−0002−7144−6910], Tao Gu 1 [0000−0001−5749−0758], and David J. Pym 1,2 [0000−0002−6504−5838]

Abstract. This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
Introduction
In model-theoretic semantics (M-tS), logical consequence is defined in terms of models; that is, abstract mathematical structures in which propositions are interpreted and their truth is judged. As Schroeder-Heister [31] explains, in the standard reading given by Tarski [36,37], a propositional formula ϕ follows model-theoretically from a context Γ iff every model of Γ is a model of ϕ; that is,

Γ |= ϕ iff, for all models M, if M |= ψ for all ψ ∈ Γ, then M |= ϕ.

Therefore, consequence is understood as the transmission of truth.
Proof-theoretic semantics (P-tS) is an alternative approach to meaning that is based on proof -understood as valid argument -as opposed to truth. It sits within the semantic paradigm of inferentialism -the view that meaning (or validity) arises from rules of inference (see Brandom [5]). To illustrate the paradigmatic shift from M-tS to P-tS, consider the proposition 'Tammy is a vixen'. What does it mean? Intuitively, it means, somehow, 'Tammy is female' and 'Tammy is a fox'. On inferentialism, its meaning is given by the rules,
from 'Tammy is female' and 'Tammy is a fox', infer 'Tammy is a vixen'; from 'Tammy is a vixen', infer 'Tammy is female'; and from 'Tammy is a vixen', infer 'Tammy is a fox'.

These merit comparison with the laws governing ∧ in IPL, which justify the sense in which the above proposition is a conjunction: from ϕ and ψ, infer ϕ ∧ ψ; from ϕ ∧ ψ, infer ϕ; and from ϕ ∧ ψ, infer ψ.
There are two major branches of P-tS: proof-theoretic validity (P-tV) in the Dummett-Prawitz tradition (see, for example, Schroeder-Heister [30]) and base-extension semantics (B-eS) in the sense of, for example, Sandqvist [28,26,27]. The former is a semantics of arguments, and the latter is a semantics of a logic. Tennant [38] provides a general motivation for P-tV: reading a consequence judgement Γ ⊢ ϕ proof-theoretically -that is, that ϕ follows by some reasoning from Γ -demands a notion of valid argument that encapsulates what the forms of valid reasoning are. That is, we are required to explicate the semantic conditions for an argument witnessing ψ1, . . . , ψn; therefore, ϕ to be valid. A particular motivation comes from the following programmatic remarks by Gentzen [35]:
The introductions represent, as it were, the 'definitions' of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions. This fact may be expressed as follows: In eliminating a symbol, we may use the formula with whose terminal symbol we are dealing only 'in the sense afforded it by the introduction of that symbol'.
Dummett [8] developed a philosophical understanding of the normalization results of Prawitz [23], which give a kind of priority to the introduction rules, that yields a notion of valid arguments. The result is P-tV -see Schroeder-Heister [30] for a succinct explanation.
Meanwhile, B-eS proceeds via a judgement called support, defined inductively according to the structure of formulas, with the base case (i.e., the support of atoms) given by proof in a base. A base is a set of inference rules over atomic propositions, thought of as defining those atoms -an example is the set of rules above defining 'Tammy is a vixen'. Though this approach is closely related to possible world semantics in the sense of Beth [2] and Kripke [16] -see, for example, Goldfarb [12] and Makinson [17] -it remains subtle. For example, there are several incompleteness results for intuitionistic logics -see, for example, Piecha et al. [20,19,22], Goldfarb [12], Sandqvist [25,26,28,27], and Stafford [34]. Significantly, a sound and complete B-eS for IPL has been given by Sandqvist [27]. Gheorghiu and Pym [9] have shown that this B-eS captures the declarative content of P-tV.
Sandqvist's B-eS for IPL is the point of departure for this paper. Given a base B, we write ⊢_B p to denote that p can be derived in B. Support in a base B -denoted ⊩_B -is defined by the clauses of Figure 1, in which Γ ≠ ∅. We desire to give an analogous semantics for intuitionistic multiplicative linear logic (IMLL).
(At) ⊩_B p iff ⊢_B p
(→) ⊩_B ϕ → ψ iff ϕ ⊩_B ψ
(∧) ⊩_B ϕ ∧ ψ iff ⊩_B ϕ and ⊩_B ψ
(∨) ⊩_B ϕ ∨ ψ iff, for any C such that B ⊆ C and any p ∈ A, if ϕ ⊩_C p and ψ ⊩_C p, then ⊩_C p
(⊥) ⊩_B ⊥ iff ⊩_B p for any p ∈ A
(Inf) Γ ⊩_B ϕ iff, for any C such that B ⊆ C, if ⊩_C γ for any γ ∈ Γ, then ⊩_C ϕ

Fig. 1: Sandqvist's Support in a Base
A compelling reading of IMLL is its resource interpretation, which is inherently proof-theoretic -see Girard [10]. Accordingly, looking at (Inf), we expect that ϕ being supported in a base B relative to some multiset of formulas Γ means that the resources garnered by Γ suffice to produce ϕ. We may express this by enriching the notion of support with multisets of resources P and U combined with multiset union -denoted , . Then, that the resources garnered by Γ are given to ϕ is captured by the following behaviour:
Γ ⊩^P_B ϕ iff, for any X ⊇ B and any U, if ⊩^U_X Γ, then ⊩^{P,U}_X ϕ
Naively, we may define ⊗ as a resource-sensitive version of (∧); that is,

⊩^P_B ϕ ⊗ ψ iff there are P1, P2 such that P = P1 , P2, ⊩^{P1}_B ϕ, and ⊩^{P2}_B ψ.

However, the resulting semantics does not capture IMLL. A counterexample is witnessed by p ⊗ (p ⊸ (q ⊗ r)) ⊢ q ⊗ r, as the resources used for p and p ⊸ (q ⊗ r) do not correspond to the resources used for q and r -a detailed explanation is given in Section 4.2. Thus, to understand how to set up the B-eS for IMLL correctly, we first review the semantics for IPL. There is an obvious difference between the B-eS for IPL and the logic's possible world semantics: the treatment of disjunction (∨) and absurdity (⊥). The distinctive clauses can be explained by the principle of definitional reflection (DR) (see Hallnäs [13,14] and Schroeder-Heister [29]): whatever follows from all the premisses of an assertion also follows from the assertion itself. Taking the perspective that the introduction rules are definitions, DR provides an answer for the way in which the elimination rules follow. Similarly, it justifies that the clauses for the logical constants take the form of their elimination rules.
Of course, arriving at the elimination rules for conjunction (∧) from DR requires some work. What DR gives is the generalized elimination rule: from ϕ ∧ ψ, together with a derivation of χ from the assumptions ϕ and ψ, infer χ (discharging those assumptions).
Accordingly, we may modify the B-eS for IPL by replacing (∧) with the following:
(∧′) ⊩_B ϕ ∧ ψ iff, for any C ⊇ B and any p ∈ A, if ϕ, ψ ⊩_C p, then ⊩_C p
We show in Section 2.3 that the result does indeed characterize IPL.
Taking this analysis into consideration, we postulate the following less naive definition of the multiplicative conjunction that corresponds to the definitional reflection of its introduction rule:
⊩^P_B ϕ ⊗ ψ iff, for every X ⊇ B, resources U, and atom p, if ϕ , ψ ⊩^U_X p, then ⊩^{P,U}_X p
We show in Section 4 that the result does indeed characterize IMLL. The paper is structured as follows: in Section 2, we review the B-eS for IPL given by Sandqvist [27]; in Section 3, we define IMLL and provide intuitions about its B-eS; in Section 4, we formally define the B-eS for IMLL and explain its soundness and completeness proofs. The paper ends in Section 5 with a conclusion and summary of results.
Base-extension Semantics for IPL
In this section, we review the B-eS for IPL given by Sandqvist [27]. In Section 2.1, we give a terse but complete definition of the B-eS for IPL. In Section 2.2, we summarize the completeness proof. Finally, in Section 2.3, we discuss a modification of the treatment of conjunction. While IPL is not the focus of this paper, this review provides intuition and motivates the B-eS for IMLL in Section 3. Specifically, the analysis of the treatment of conjunction in IPL motivates the handling of the multiplicative conjunction in IMLL.
Throughout this section, we fix a denumerable set of atomic propositions A, and the following conventions: p, q, . . . denote atoms; P, Q, . . . denote finite sets of atoms; ϕ, ψ, θ, . . . denote formulas; Γ, ∆, . . . denote finite sets of formulas.
Support in a Base
The B-eS for IPL begins by defining derivability in a base. A (properly) second-level atomic rule -see Piecha and Schroeder-Heister [32,21] -is a natural deduction rule of one of the following forms, in which q, q1, ..., qn are atoms and Q1, ..., Qn are (possibly empty) sets of atoms: either an axiom with no premisses and conclusion q, or a rule with conclusion q and premisses q1, . . . , qn, where each premiss qi may be derived under the dischargeable hypotheses Qi.
Importantly, atomic rules are taken per se and not closed under substitution. They may be expressed inline by ⇒ q and (Q 1 ⊲ q 1 , . . . , Q n ⊲ q n ) ⇒ q, respectively. They are read as natural deduction rules in the sense of Gentzen [35]; thus, ⇒ q means that the atom q may be concluded whenever, while (Q 1 ⊲ q 1 , . . . , Q n ⊲ q n ) ⇒ q means that one may derive q from a set of atoms S if one has derived q i from S assuming Q i for i = 1, ..., n.
A base is a set of atomic rules. We write B, C , . . . to denote bases. We say C is an extension of B if C is a superset of B, denoted C ⊇ B.
Definition 1 (Derivability in a Base). Derivability in a base B is the least relation ⊢ B satisfying the following:
(Ref-IPL) S, q ⊢ B q. (App-IPL) If atomic rule (Q 1 ⊲ q 1 , . . . , Q n ⊲ q n ) ⇒ q is in B, and S, Q i ⊢ B q i for all i = 1, . . . , n, then S ⊢ B q.
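As a concrete reading of Definition 1, the following minimal sketch implements the proof search it induces, under an assumed encoding of atomic rules as pairs (premisses, conclusion), each premiss being a pair (Q_i, q_i) with Q_i a frozenset of dischargeable atoms; the demonstration base re-expresses the 'Tammy' rules from the introduction:

def derivable(base, S, q, seen=frozenset()):
    """Decide S |-_B q for a set of atoms S, per (Ref-IPL) and (App-IPL)."""
    if q in S:                                    # (Ref-IPL): S, q |- q
        return True
    key = (frozenset(S), q)
    if key in seen:                               # guard against cyclic rule applications
        return False
    seen = seen | {key}
    for premisses, conclusion in base:            # (App-IPL)
        if conclusion == q and all(
            derivable(base, S | Q_i, q_i, seen) for Q_i, q_i in premisses
        ):
            return True
    return False

B = [(((frozenset(), 'female'), (frozenset(), 'fox')), 'vixen'),
     (((frozenset(), 'vixen'),), 'female'),
     (((frozenset(), 'vixen'),), 'fox')]
print(derivable(B, {'vixen'}, 'fox'), derivable(B, set(), 'vixen'))   # True False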
This forms the base case of the B-eS for IPL:
Definition 2 (Sandqvist's Support in a Base). Sandqvist's support in a base B is the least relation B defined by the clauses of Figure 1.
A sequent Γ ⊲ ϕ is valid -denoted Γ ⊩ ϕ -iff it is supported in every base:

Γ ⊩ ϕ iff Γ ⊩_B ϕ holds for any B

Every base is an extension of the empty base (∅); therefore, Γ ⊩ ϕ iff Γ ⊩_∅ ϕ. Sandqvist [27] showed that this semantics characterizes IPL:

Theorem 1 (Sandqvist [27]). Γ ⊢ ϕ iff Γ ⊩ ϕ.

Soundness -that is, Γ ⊢ ϕ implies Γ ⊩ ϕ -follows from showing that ⊩ respects the rules of Gentzen's [35] NJ; for example, Γ ⊩ ϕ and ∆ ⊩ ψ implies Γ, ∆ ⊩ ϕ ∧ ψ. Completeness -that is, Γ ⊩ ϕ implies Γ ⊢ ϕ -is more subtle. We present the argument in Section 2.2 as it motivates the work in Section 4.3.
Completeness of IPL
We require to show that Γ ⊩ ϕ implies that there is an NJ-proof witnessing Γ ⊢ ϕ. To this end, we associate to each sub-formula ρ of Γ or ϕ a unique atom r, and construct a base N such that r behaves in N as ρ behaves in NJ. Moreover, formulas and their atomizations are semantically equivalent in any extension of N, so that support in N characterizes both validity and provability. When ρ ∈ A, we take r := ρ, but for complex ρ we choose r to be alien to Γ and ϕ.

Example 1. Suppose ρ := p ∧ q is a sub-formula of Γ or ϕ. Associate to it a fresh atom r. Since the principal connective of ρ is ∧, we require N to contain the following rules: (⊲p, ⊲q) ⇒ r, (⊲r) ⇒ p, and (⊲r) ⇒ q.
We may write (p ∧ q)♭ for r, so that these rules may be expressed as follows: (⊲p, ⊲q) ⇒ (p ∧ q)♭, (⊲(p ∧ q)♭) ⇒ p, and (⊲(p ∧ q)♭) ⇒ q. In general, for each sub-formula ρ of Γ or ϕ:
-if ρ ∉ A, then ρ♭ is an atom that does not occur in Γ or ϕ;
-if ρ ∈ A, then ρ♭ = ρ.
By unique we mean that (·)♭ is injective -that is, if ρ ≠ σ, then ρ♭ ≠ σ♭. The left-inverse of (·)♭ is (·)♮, and the domain may be extended to the entirety of A by identity on atoms not in the codomain of (·)♭. Both functions act on sets point-wise -that is, Σ♭ := {ϕ♭ | ϕ ∈ Σ} and P♮ := {p♮ | p ∈ P}. Relative to (·)♭, let N be the base containing the rules of Figure 2 for any sub-formulas ρ and σ of Γ and ϕ, and any p ∈ A:

∧♭I: (⊲ρ♭, ⊲σ♭) ⇒ (ρ ∧ σ)♭
∧♭E: (⊲(ρ ∧ σ)♭) ⇒ ρ♭ and (⊲(ρ ∧ σ)♭) ⇒ σ♭
→♭I: (ρ♭ ⊲ σ♭) ⇒ (ρ → σ)♭
→♭E: (⊲(ρ → σ)♭, ⊲ρ♭) ⇒ σ♭
∨♭I: (⊲ρ♭) ⇒ (ρ ∨ σ)♭ and (⊲σ♭) ⇒ (ρ ∨ σ)♭
∨♭E: (⊲(ρ ∨ σ)♭, ρ♭ ⊲ p, σ♭ ⊲ p) ⇒ p
EFQ♭: (⊲⊥♭) ⇒ p

Fig. 2: The rules of the base N
Sandqvist [27] establishes three claims that deliver completeness:
(IPL-AtComp) Let S ⊆ A and p ∈ A, and let B be a base: S ⊩_B p iff S ⊢_B p.
(IPL-Flat) For any sub-formula ξ of Γ or ϕ and any N′ ⊇ N: ⊩_{N′} ξ♭ iff ⊩_{N′} ξ.
(IPL-Nat) Let S ⊆ A and p ∈ A: if S ⊢_N p, then S♮ ⊢ p♮.
The first claim is completeness in the atomic case. The second claim is that ξ♭ and ξ are equivalent in N -that is, ξ♭ ⊩_N ξ and ξ ⊩_N ξ♭. Consequently,

Γ♭ ⊩_{N′} ϕ♭ iff Γ ⊩_{N′} ϕ
The third claim is the simulation statement which allows us to make the final move from derivability in N to derivability in NJ.
Proof (Theorem 1 -Completeness). Assume Γ ⊩ ϕ and let N be its bespoke base. By (IPL-Flat), Γ♭ ⊩_N ϕ♭. Hence, by (IPL-AtComp), Γ♭ ⊢_N ϕ♭. Whence, by (IPL-Nat), (Γ♭)♮ ⊢ (ϕ♭)♮, i.e. Γ ⊢ ϕ, as required.
Base-extension Semantics for IPL, revisited
Goldfarb [12] has also given a proof-theoretic semantics for IPL, but it mimics Kripke's [16] semantics. What is interesting about the B-eS in Sandqvist [27] is the way in which it is not a representation of the possible world semantics. This is most clearly seen in (∨), which takes the form of the 'second-order' definition of disjunction -that is, U + V = ∀X((U → X) → ((V → X) → X)) (see Girard [11] and Negri [39]). This adumbrates the categorical perspective on B-eS given by Pym et al. [24]. Proof-theoretically, the clause recalls the elimination rule for the connective restricted to atomic conclusions: from ϕ ∨ ψ, together with derivations of p from ϕ and of p from ψ, infer p.
Dummett [8] has shown that such restriction in NJ is without loss of expressive power. Indeed, all of the clauses in Figure 1 may be regarded as taking the form of the corresponding elimination rules.
The principle of definitional reflection, as described in Section 1, justifies this phenomenon. According to this principle, an alternative candidate clause for conjunction is as follows:

(∧*) ⊩*_B ϕ ∧ ψ iff, for any C ⊇ B and any p ∈ A, if ϕ, ψ ⊩*_C p, then ⊩*_C p

Definition 3. The relation ⊩*_B is defined by the clauses of Figure 1 with (∧*) in place of (∧). The judgement Γ ⊩* ϕ obtains iff Γ ⊩*_B ϕ for every B.
The resulting semantics is sound and complete for IPL:
Theorem 2. Γ ⊩* ϕ iff Γ ⊢ ϕ.

Proof. We assume the following (proofs in Appendix A): for an arbitrary base B and formulas ϕ, ψ, χ,

(IPL*-Monotone) If ⊩*_B ϕ, then ⊩*_C ϕ for any C ⊇ B.
(IPL*-AndCut) If ⊩*_B ϕ ∧ ψ and ϕ, ψ ⊩*_B χ, then ⊩*_B χ.
The first claim follows easily from (Inf). The second is a generalization of (∧*); it follows by induction on the structure of χ -an analogous treatment of disjunction was given by Sandqvist [27]. By Theorem 1, it suffices to show that Γ ⊩* ϕ iff Γ ⊩ ϕ. For this, it suffices to show ⊩*_B θ iff ⊩_B θ for arbitrary B and θ. We proceed by induction on the structure of θ. Since the two relations are defined identically except when θ is a conjunction, we restrict attention to this case.

First, we show ⊩_B θ1 ∧ θ2 implies ⊩*_B θ1 ∧ θ2. By (∧*), the conclusion is equivalent to the following: for any C ⊇ B and p ∈ A, if θ1, θ2 ⊩*_C p, then ⊩*_C p. Therefore, fix C ⊇ B and p ∈ A such that θ1, θ2 ⊩*_C p. By (Inf), this entails the following: if ⊩*_C θ1 and ⊩*_C θ2, then ⊩*_C p. By (∧) on the assumption (i.e., ⊩_B θ1 ∧ θ2), we obtain ⊩_B θ1 and ⊩_B θ2. Hence, by the induction hypothesis (IH), ⊩*_B θ1 and ⊩*_B θ2. Whence, by (IPL*-Monotone), ⊩*_C θ1 and ⊩*_C θ2. Therefore, ⊩*_C p. We have thus shown ⊩*_B θ1 ∧ θ2, as required.

Second, we show ⊩*_B θ1 ∧ θ2 implies ⊩_B θ1 ∧ θ2. It is easy to see that θ1, θ2 ⊩*_B θi obtains for i = 1, 2. Applying (IPL*-AndCut) (setting ϕ = θ1, ψ = θ2) once with χ = θ1 and once with χ = θ2 yields ⊩*_B θ1 and ⊩*_B θ2. By the IH, ⊩_B θ1 and ⊩_B θ2. Hence, ⊩_B θ1 ∧ θ2, as required.
A curious feature of the new semantics is that the meaning of the context-former (i.e., the comma) is no longer interpreted as ∧; that is, we define the context-former as follows:

⊩*_B Γ, ∆ iff ⊩*_B Γ and ⊩*_B ∆

This differs from the definition of ∧ in the new semantics. Nonetheless, as shown in the proof of Theorem 2, they are equivalent at every base -that is, ⊩*_B ϕ, ψ iff ⊩*_B ϕ ∧ ψ for any B.
Having defined the context-former, we may express (Inf) as follows:
Γ ⊩*_B ϕ iff, for any C ⊇ B, if ⊩*_C Γ, then ⊩*_C ϕ
This illustrates that support in a base of a sequent is the transmission of the support of the context in a base to support of the formula in a bigger base. This equivalence of the two semantics yields the following:
Corollary 1. For an arbitrary base B and formula ϕ: ⊩_B ϕ iff, for every X ⊇ B and every atom p, if ϕ ⊩_X p, then ⊩_X p.
The significance of this result is that we see that formulas in the B-eS are precisely characterized by their support of atoms.
Intuitionistic Multiplicative Linear Logic
Having reviewed the B-eS for IPL, we turn now to intuitionistic multiplicative linear logic (IMLL). We first define the logic and then consider the challenges of giving a B-eS for it. This motivates the technical work in Section 4. Henceforth, we abandon the notation of the previous section as we do not need it and may recycle symbols and conventions.
Fix a countably infinite set A of atoms.
Definition 4 (Formula). The set of formulas (Form IMLL ) is defined by the following grammar:
ϕ, ψ ::= p ∈ A | ϕ ⊗ ψ | I | ϕ ⊸ ψ
We use p, q, . . . for atoms and ϕ, ψ, χ, . . . for formulas. In contrast to the work on IPL, collections of formulas in IMLL are more typically multisets. We use P, Q, . . . for finite multisets of atoms, abbreviated atomic multisets, and Γ, ∆, . . . to denote finite multisets of formulas.
We use [ · ] to specify a multiset; for example, [ϕ, ϕ, ψ] denotes the multiset consisting of two occurrence of ϕ and one occurrences of ψ. The empty multiset (i.e., the multiset with no members) is denoted ∅. The union of two multisets Γ and ∆ is denoted Γ , ∆. We may identify a multiset containing one element with the element itself; thus, we may write ψ , ∆ instead of [ψ] , ∆ to denote the union of multiset ∆ and the singleton multiset [ψ].
ax: ϕ ⊲ ϕ
⊸I: from Γ , ϕ ⊲ ψ, infer Γ ⊲ ϕ ⊸ ψ
⊸E: from Γ ⊲ ϕ ⊸ ψ and ∆ ⊲ ϕ, infer Γ , ∆ ⊲ ψ
II: ⊲ I
IE: from Γ ⊲ ϕ and ∆ ⊲ I, infer Γ , ∆ ⊲ ϕ
⊗I: from Γ ⊲ ϕ and ∆ ⊲ ψ, infer Γ , ∆ ⊲ ϕ ⊗ ψ
⊗E: from Γ ⊲ ϕ ⊗ ψ and ∆ , ϕ , ψ ⊲ χ, infer Γ , ∆ ⊲ χ

Fig. 3: The Sequential Natural Deduction System NIMLL for IMLL

Definition 5 (Sequent). A sequent is a pair Γ ⊲ ϕ in which Γ is a multiset of formulas and ϕ is a formula.
We characterize IMLL by proof in a natural deduction system. Since it is a substructural logic, we write the system in the format of a sequent calculus as this represents the context management explicitly. We assume general familiarity with sequent calculi -see, for example, Troelstra and Schwichtenberg [39].
Definition 6 (System NIMLL). The sequential natural deduction system for IMLL, denoted NIMLL, is given by the rules in Figure 3.
A sequent Γ ⊲ ϕ is a consequence of IMLL -denoted Γ ⊢ ϕ -iff there is a NIMLL-proof of it.
One may regard IMLL as IPL without the structural rules of weakening and contraction -see Došen [7]. In other words, adding the following rules to NIMLL recovers a sequent calculus for IPL:
w: from Γ ⊲ ϕ, infer ∆ , Γ ⊲ ϕ        c: from ∆ , ∆ , Γ ⊲ ϕ, infer ∆ , Γ ⊲ ϕ
To stay close to the work in Section 2 it is instructive to consider the natural deduction presentation, too. The rule figures may be the same, but their application is not; for example,
the figure for ⊗-introduction (from ϕ and ψ, infer ϕ ⊗ ψ) means: if Γ ⊢ ϕ and ∆ ⊢ ψ, then Γ , ∆ ⊢ ϕ ⊗ ψ.

Here, it is important that the contexts are multisets, not sets.
The strict context management in IMLL yields the celebrated 'resource interpretations' of linear logic -see Girard [10]. The leading example is, perhaps, the number-of-uses reading, in which a proof of a formula ϕ ⊸ ψ determines a function that uses its argument exactly once. This reading is, however, entirely proof-theoretic and is not expressed in the truth-functional semantics of IMLL -see Girard [10], Allwein and Dunn [1], and Coumans et al. [6]. Though these semantics do have a sense of 'resource', it is not via the number-of-uses reading, but is instead denotational in the sense of the treatment of resources in the truth-functional semantics of the Logic of Bunched Implications [18]. The number-of-uses reading is, however, reflected in the categorical semantics -see Seely [33] and Biermann [4,3].
How do we render support sensitive to the resource reading? The subtlety is that for Θ ⊩ ϕ (where Θ ≠ ∅), we must somehow transmit the resources captured by Θ to ϕ. From Corollary 1, we see that in B-eS the content of a formula is captured by the atoms it supports. Therefore, we enrich the support relation with an atomic multiset of atoms P,

Θ ⊩^P_B ϕ iff, for any X ⊇ B and any U, if ⊩^U_X Θ, then ⊩^{P,U}_X ϕ

where ⊩^U_X Θ1 , Θ2 iff there are U1 and U2 such that U = U1 , U2, ⊩^{U1}_X Θ1, and ⊩^{U2}_X Θ2.
This completes the introduction to IMLL.
Base-extension Semantics for IMLL
In this section, we give a B-eS for IMLL. It is structured as follows: first, we define support in a base in Section 4.1; second, we prove soundness in Section 4.2; finally, we prove completeness in Section 4.3.
Support in a Base
The definition of the B-eS proceeds in line with that for IPL (Section 2) while taking substructurality into consideration. Omitted proofs are in Appendix B.
Definition 7 (Atomic Sequent). An atomic sequent is a pair P ⊲ p in which P is an atomic multiset and p is an atom.

Definition 8 (Atomic Rule). An atomic rule is a pair P ⇒ p in which P is a (possibly empty) finite set of atomic sequents and p is an atom.
Definition 9 (Base). A base B is a (possibly infinite) set of atomic rules.
Definition 10 (Derivability in a Base). The relation ⊢ B of derivability in B is the least relation satisfying the following:
(Ref) p ⊢ B p (App) If S i , P i ⊢ B p i for i = 1, . . . , n and (P 1 ⊲ p 1 , . . . , P n ⊲ p n ) ⇒ p ∈ B, then S 1 , . . . , S n ⊢ B p.
Note the differences between Definition 1 and Definition 10: first, in (Ref), no redundant atoms are allowed to appear, while in (Ref-IPL) they may; second, in (App), the multisets S1, ..., Sn are collected together as a multiset, while in (App-IPL) there is one set. These differences reflect the fact that, in the multiplicative setting, 'resources' can neither be discharged nor shared.

(At) ⊩^P_B p iff P ⊢_B p
(⊗) ⊩^P_B ϕ ⊗ ψ iff, for every X ⊇ B, atomic multiset U, and atom p, if ϕ , ψ ⊩^U_X p, then ⊩^{P,U}_X p
(I) ⊩^P_B I iff, for every base X ⊇ B, atomic multiset U, and atom p, if ⊩^U_X p, then ⊩^{P,U}_X p
(⊸) ⊩^P_B ϕ ⊸ ψ iff ϕ ⊩^P_B ψ
( , ) ⊩^P_B Γ , ∆ iff there are U and V such that P = U , V, ⊩^U_B Γ, and ⊩^V_B ∆
(Inf) Θ ⊩^P_B ϕ iff, for any X ⊇ B and any U, if ⊩^U_X Θ, then ⊩^{P,U}_X ϕ

Fig. 4: Base-extension Semantics for IMLL

Definition 11 (Support). That a sequent Γ ⊲ ϕ is supported in the base B using resources S -denoted Γ ⊩^S_B ϕ -is defined by the clauses of Figure 4, in which Γ, ∆, and Θ are non-empty finite multisets of formulas. The sequent Γ ⊲ ϕ is supported using resources S -denoted Γ ⊩^S ϕ -iff Γ ⊩^S_B ϕ for any base B. The sequent Γ ⊲ ϕ is valid -denoted Γ ⊩ ϕ -iff Γ ⊲ ϕ is supported using the empty multiset of resources (i.e., Γ ⊩^∅ ϕ).
It is easy to see that Figure 4 is an inductive definition on a structure of formulas that prioritizes conjunction (⊗) over implication (⊸) -an analogous treatment in IPL, with disjunction (∨) prioritized over implication (→), has been given by Sandqvist [27]. As explained in Section 3, the purpose of the multisets of atoms S in the support relation ⊩^S_B is to express the substructurality of the logical constants. There is no obvious way to use multisets of formulas rather than multisets of atoms -for example, Γ ⊩^∆_B ϕ iff Γ , ∆ ⊩_B ϕ -that does not yield an impredicative definition.

We read (Inf) as saying that Θ ⊩^S_B ϕ (for Θ ≠ ∅) means: for any extension X of B, if Θ is supported in X with some resources U (i.e., ⊩^U_X Θ), then ϕ is also supported by combining the resources U with the resources S (i.e., ⊩^{S,U}_X ϕ). The following observation on the monotonicity of the semantics with regard to base extensions follows immediately by unfolding definitions:
Proposition 1. If Γ ⊩^S_B ϕ and C ⊇ B, then Γ ⊩^S_C ϕ.

From this proposition we see the following: Γ ⊩^S ϕ iff Γ ⊩^S_∅ ϕ, and Γ ⊩ ϕ iff Γ ⊩^∅_∅ ϕ. As expected, we do not have monotonicity on resources -that is, Γ ⊩^S ϕ does not, in general, imply Γ ⊩^{S,T} ϕ for arbitrary T.
A distinguishing aspect of support is the structure of (Inf). In one direction, it is merely cut, but in the other it says something stronger. The completeness argument will go through the atomic case (analogous to the treatment of IPL in Section 2.2), and the following proposition suggests that the setup is correct:
Proposition 2.
The following two propositions are equivalent for arbitrary base B, atomic multisets P, S, and atom q, where we assume P = [p 1 , . . . , p n ]:
1. P , S ⊢ B q. 2. For every X ⊇ B and atomic multisets T 1 , . . . , T n , if T i ⊢ X p i holds for all i = 1, . . . , n, then T 1 , . . . , T n , S ⊢ X q.
It remains to prove soundness and completeness.
Soundness
Theorem 3 (Soundness). If Γ ⊢ ϕ, then Γ ⊩ ϕ.
The full proof is in Appendix C. The argument follows a typical strategy of showing that the semantics respects the rules of NIMLL -that is, for any Γ, ∆, ϕ, ψ, and χ:

(Ax) ϕ ⊩ ϕ
(⊸I) If Γ , ϕ ⊩ ψ, then Γ ⊩ ϕ ⊸ ψ
(⊸E) If Γ ⊩ ϕ ⊸ ψ and ∆ ⊩ ϕ, then Γ , ∆ ⊩ ψ
(⊗I) If Γ ⊩ ϕ and ∆ ⊩ ψ, then Γ , ∆ ⊩ ϕ ⊗ ψ
(⊗E) If Γ ⊩ ϕ ⊗ ψ and ∆ , ϕ , ψ ⊩ χ, then Γ , ∆ ⊩ χ
(II) ⊩ I
(IE) If Γ ⊩ χ and ∆ ⊩ I, then Γ , ∆ ⊩ χ

These follow quickly from the fact that the clause for each connective in Figure 4 takes the form of its elimination rule. The only subtle cases are (⊗E) and (IE).
To show (IE), suppose Γ ⊩ χ and ∆ ⊩ I. We require to show Γ , ∆ ⊩ χ. By (Inf), we fix some base B and atomic multisets P and Q such that ⊩^P_B Γ and ⊩^Q_B ∆; it then suffices to show ⊩^{P,Q}_B χ, for which we use the following:

Lemma 1. For an arbitrary base B, atomic multisets S, T, and formula χ, if ⊩^S_B I and ⊩^T_B χ, then ⊩^{S,T}_B χ.

Similarly, we require the following to prove (⊗E):

Lemma 2. For an arbitrary base B, atomic multisets S, T, and formulas ϕ, ψ, χ, if (1) ⊩^S_B ϕ ⊗ ψ and (2) ϕ , ψ ⊩^T_B χ, then (3) ⊩^{S,T}_B χ.

With these results, we may prove soundness:
Proof (Theorem 3 -sketch). We demonstrate (⊗I) and (⊗E).

(⊗I). Assume Γ ⊩ ϕ and ∆ ⊩ ψ. We require to show Γ , ∆ ⊩ ϕ ⊗ ψ. By (Inf), the conclusion is equivalent to the following: for any base B and any multisets of atoms T and S, if ⊩^T_B Γ and ⊩^S_B ∆, then ⊩^{T,S}_B ϕ ⊗ ψ. So we fix some B and T, S such that ⊩^T_B Γ and ⊩^S_B ∆, and show ⊩^{T,S}_B ϕ ⊗ ψ. By (⊗), it suffices to show, for an arbitrary C ⊇ B, multiset of atoms U, and atom p: if ϕ , ψ ⊩^U_C p, then ⊩^{T,S,U}_C p. So we fix some C ⊇ B, multiset of atoms U, and atom p such that ϕ , ψ ⊩^U_C p; the goal is to show ⊩^{T,S,U}_C p. From the assumptions Γ ⊩ ϕ and ∆ ⊩ ψ, we see that ⊩^{S,T}_B ϕ , ψ obtains. Therefore, by monotonicity, ⊩^{S,T}_C ϕ , ψ obtains. By (Inf), this suffices for ϕ , ψ ⊩^U_C p to yield ⊩^{T,S,U}_C p, as required.

(⊗E). Assume Γ ⊩ ϕ ⊗ ψ and ∆ , ϕ , ψ ⊩ χ. We require to show Γ , ∆ ⊩ χ. By (Inf), it suffices to assume ⊩^S_B Γ and ⊩^T_B ∆ and show ⊩^{S,T}_B χ. First, Γ ⊩ ϕ ⊗ ψ together with ⊩^S_B Γ entails ⊩^S_B ϕ ⊗ ψ. Second, by (Inf), ∆ , ϕ , ψ ⊩ χ is equivalent to the following:

for every X and P, Q: if ⊩^P_X ∆ and ⊩^Q_X ϕ , ψ, then ⊩^{P,Q}_X χ

Since ⊩^T_B ∆, setting P := T and Q := S yields: for every base X ⊇ B, if ⊩^S_X ϕ , ψ, then ⊩^{T,S}_X χ. Now, given ⊩^S_B ϕ ⊗ ψ and this consequence, we can apply Lemma 2 and conclude ⊩^{S,T}_B χ.

If either of the clauses for ⊗ or I takes the form of the corresponding introduction rule, then soundness fails:

Example 2. Let ⊩*^P_B be defined by the clauses of Figure 4 with (⊗) replaced by the following:
(⊗′) ⊩*^P_B ϕ ⊗ ψ iff there are P1 and P2 such that P = P1 , P2, ⊩*^{P1}_B ϕ, and ⊩*^{P2}_B ψ

We show that the resulting semantics fails to be sound by a counterexample. It is easy to see that p , p ⊸ (q ⊗ r) ⊢ q ⊗ r obtains; we argue by contradiction that p , p ⊸ (q ⊗ r) ⊩* q ⊗ r fails. Assume p , p ⊸ (q ⊗ r) ⊩* q ⊗ r. By (Inf), for any X and any U, if ⊩*^U_X p , p ⊸ (q ⊗ r), then ⊩*^U_X q ⊗ r. Let C = {(⊲p) ⇒ q, (⊲p) ⇒ r}. By ( , ) and (App), we have ⊩*^{[p]}_C p , p ⊸ (q ⊗ r). However, ⊩*^{[p]}_C q ⊗ r fails because, using (⊗′), we would have to split [p] in a way that the rules in C can simultaneously yield q and r, and there is no such partition.
Completeness
Theorem 4 (Completeness). If Γ ⊩ ϕ, then Γ ⊢ ϕ.
The argument follows the strategy used by Sanqvist [27] for IPL -see Section 2.2. We explain the main steps with the full proof given in Appendix D.
Let Ξ be the set of all sub-formulas of Γ and ϕ. Let (·) ♭ : Ξ → A be an injection that is fixed on A -that is, p ♭ = p for p ∈ Ξ ∩ A. Let (·) ♮ be the left-inverse of (·) ♭ -that is p ♮ = χ if p = χ ♭ , and p ♮ = p if p is not in the image of (·) ♭ . Both act on multisets of formulas pointwise; that is,
∆ ♭ := [δ ♭ | δ ∈ ∆] and P ♮ := [p ♮ | p ∈ P ].
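A minimal sketch of (·)♭ and (·)♮, assuming formulas are encoded as strings (atoms) or tagged tuples such as ('tensor', ϕ, ψ), ('limp', ϕ, ψ) and ('I',), and assuming the fresh names u0, u1, ... do not occur in Ξ:

from itertools import count

def subformulas(phi):
    yield phi
    if isinstance(phi, tuple) and phi[0] in ('tensor', 'limp'):
        for sub in phi[1:]:
            yield from subformulas(sub)

def make_flattener(formulas):
    fresh = (f'u{i}' for i in count())                  # assumed alien to Ξ
    flat, deflat = {}, {}
    for xi in {s for phi in formulas for s in subformulas(phi)}:
        b = xi if isinstance(xi, str) else next(fresh)  # p♭ = p on atoms
        flat[xi], deflat[b] = b, xi
    return flat, deflat

flat, deflat = make_flattener([('tensor', 'p', 'q')])
b = flat[('tensor', 'p', 'q')]
assert deflat[b] == ('tensor', 'p', 'q') and flat['p'] == 'p'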
We construct a base M such that ϕ ♭ behaves in M as ϕ behaves in NIMLL. The base M contains all instances of the rules of Figure 5 when σ and τ range over Ξ, and p ranges over A. We illustrate how M works with an example. Example 3. Consider the sequent Γ ⊲ ϕ where Γ = [p 1 , p 2 , p 1 ⊗ p 2 ⊸ q, p 1 ] and ϕ = q ⊗ p 1 . By definition, Ξ := {p 1 , p 2 , p 1 ⊗ p 2 ⊸ q, p 1 ⊗ p 2 , q, q ⊗ p 1 }, and, therefore, the image of (·) ♭ is {p 1 , p 2 , q, (p 1 ⊗ p 2 ⊸ q) ♭ , (p 1 ⊗ p 2 ) ♭ , (q ⊗ p 1 ) ♭ }. That Γ ⊢ ϕ obtains is witnessed by the following NIMLL-proof:
⊸I♭: (σ♭ ⊲ τ♭) ⇒ (σ ⊸ τ)♭
⊸E♭: (⊲(σ ⊸ τ)♭), (⊲σ♭) ⇒ τ♭
⊗I♭: (⊲σ♭), (⊲τ♭) ⇒ (σ ⊗ τ)♭
⊗E♭: (⊲(σ ⊗ τ)♭), (σ♭ , τ♭ ⊲ p) ⇒ p
II♭: ⇒ I♭
IE♭: (⊲I♭), (⊲p) ⇒ p

Fig. 5: The rules of the base M

The NIMLL-proof: by ax, p1 ⊲ p1 and p2 ⊲ p2; by ⊗I, p1 , p2 ⊲ p1 ⊗ p2; with ax, p1 ⊗ p2 ⊸ q ⊲ p1 ⊗ p2 ⊸ q, by ⊸E, p1 , p2 , p1 ⊗ p2 ⊸ q ⊲ q; finally, with ax, p1 ⊲ p1, by ⊗I, p1 , p2 , p1 ⊗ p2 ⊸ q , p1 ⊲ q ⊗ p1.
The base M is designed so that we may simulate the rules of NIMLL; for example, the ⊗ E is simulated by using (App) on ⊗ ♭ E ,
(∅ ⊲ (σ ⊗ τ ) ♭ , σ ♭ , τ ♭ ⊲ γ ♭ ) ⇒ γ ♭ means if ∆ ♭ ⊢ M (σ ⊗ τ ) ♭ and Σ ♭ , σ ♭ , τ ♭ ⊢ M γ ♭ then ∆ ♭ , Σ ♭ ⊢ M γ ♭ .
In this sense, the proof above is simulated by the following steps:
(i) By (Ref): (1) p1 ⊢_M p1; (2) p2 ⊢_M p2; (3) (p1 ⊗ p2 ⊸ q)♭ ⊢_M (p1 ⊗ p2 ⊸ q)♭.
(ii) By (App), using (⊗I)♭ on (1) and (2), we obtain (4) p1 , p2 ⊢_M (p1 ⊗ p2)♭.
(iii) By (App), using (⊸E)♭ on (3) and (4), we obtain (5) (p1 ⊗ p2 ⊸ q)♭ , p1 , p2 ⊢_M q.
(iv) By (App), using (⊗I)♭ on (1) and (5), we have (6) (p1 ⊗ p2 ⊸ q)♭ , p1 , p2 , p1 ⊢_M (q ⊗ p1)♭.
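Steps (i)-(iv) can also be checked with the derives sketch from Definition 10, instantiating only the three rules of M that the simulation uses (here 'm', 'w' and 't' are ad-hoc names for (p1 ⊗ p2 ⊸ q)♭, (p1 ⊗ p2)♭ and (q ⊗ p1)♭, respectively):

M_frag = [(((tuple(), 'p1'), (tuple(), 'p2')), 'w'),   # instance of (⊗I)♭
          (((tuple(), 'm'), (tuple(), 'w')), 'q'),     # instance of (⊸E)♭
          (((tuple(), 'q'), (tuple(), 'p1')), 't')]    # instance of (⊗I)♭

Gamma_flat = Counter(['p1', 'p2', 'm', 'p1'])          # Γ♭ as a multiset
print(derives(M_frag, Gamma_flat, 't'))                # True, matching step (iv)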
Significantly, steps (i)-(iv) are analogues of the steps in the proof tree above.
The completeness statement then follows from the following three observations, which are indeed counterparts to (IPL-AtComp), (IPL-Flat), and (IPL-Nat), respectively, from Section 2.2:
(IMLL-AtComp) For any B, P, S, and q: P , S ⊢_B q iff P ⊩^S_B q.
(IMLL-Flat) For any ξ ∈ Ξ, X ⊇ M, and U: ⊩^U_X ξ♭ iff ⊩^U_X ξ.
(IMLL-Nat) For any P and q: if P ⊢_M q, then P♮ ⊢ q♮.
(IMLL-AtComp) follows from Proposition 2 and is the base case of completeness. (IMLL-Flat) formalizes the idea that every formula ξ appearing in Γ ⊲ ϕ behaves the same as ξ♭ in any base extending M. Consequently, Γ♭ ⊩_M ϕ♭ iff Γ ⊩_M ϕ. (IMLL-Nat) intuitively says that M is a faithful atomic encoding of NIMLL, witnessed by (·)♮. This, together with (IMLL-Flat), guarantees that every ξ ∈ Ξ behaves in M as ξ♭ does in M, thus as ξ♭♮ = ξ in NIMLL.
Proof (Theorem 4). Assume Γ ⊩ ϕ and let M be the bespoke base for Γ ⊲ ϕ. By (IMLL-Flat), Γ♭ ⊩^∅_M ϕ♭. Therefore, by (IMLL-AtComp), we have Γ♭ ⊢_M ϕ♭. Finally, by (IMLL-Nat), Γ♭♮ ⊢ ϕ♭♮, namely Γ ⊢ ϕ.
Conclusion
Proof-theoretic semantics (P-tS) is the paradigm of meaning in logic based on proof, as opposed to truth. A particular form of P-tS is base-extension semantics (B-eS) in which one defines the logical constants by means of a support relation indexed by a base -a system of natural deduction for atomic propositionswhich grounds the meaning of atoms by proof in that base. This paper provides a sound and complete base-extension semantics for intuitionistic multiplicative linear logic (IMLL).
The B-eS for IPL given by Sandqvist [27] provides a strategy for the problem. The paper begins with a brief but instructive analysis of this work that reveals definitional reflection (DR) as an underlying principle delivering the semantics; accordingly, in Section 2.3, the paper modifies the B-eS for IPL to strictly adhere to DR and proves soundness and completeness of the result. Moreover, the analysis highlights that essential to B-eS is a transmission of proof-theoretic content: a formula ϕ is supported in a base B relative to a context Γ iff, for any extension C of B, the formula ϕ is supported in C whenever Γ is supported in C .
With this understanding of B-eS of IPL, the paper gives a 'resource-sensitive' adaptation by enriching the support relation to carry a multiset of atomic 'resources' that enable the transmission of proof-theoretic content. This captures the celebrated 'resource reading' of IMLL which is entirely proof-theoretic -see Girard [10]. The clauses of the logical constants are then delivered by DR. Having set up the B-eS for IMLL in this principled way, soundness and completeness follow symmetrically to the preceeding treatment of IPL.
Traditionally, P-tS has been restricted to classical and intuitionistic propositional logics, so this paper provides a first step toward a broader analysis. In particular, the analysis in this paper suggests a general methodology for delivering B-eS for other substructural logics such as, inter alia, Linear Logic [10] and the logic of Bunched Implications [18]. Developing the P-tS for this class of logics is valuable because of their deployment in modelling systems; significantly, P-tS has been shown to be useful in simulation modelling -see, for example, Kuorikoski and Reijula [15]. Of course, more generally, we may ask what conditions a logic must satisfy in order to provide a B-eS for it.

A Omitted proofs from Section 2.3
The following contains proofs for the claims IPL * -Monotone and IPL * -AndCut in the proof of Theorem 2.
Lemma 3 (IPL*-Monotone). If Γ ⊩*_B ϕ, then Γ ⊩*_C ϕ for any C ⊇ B.
Proof. By (Inf), the conclusion Γ ⊩*_C ϕ means: for every D ⊇ C, if ⊩*_D γ for every γ ∈ Γ, then ⊩*_D ϕ. Since D ⊇ C ⊇ B, this follows by (Inf) on the hypothesis Γ ⊩*_B ϕ.
Lemma 4 (IPL*-AndCut). If ⊩*_B ϕ ∧ ψ and ϕ, ψ ⊩*_B χ, then ⊩*_B χ.
Proof. We proceed by induction on the structure of χ:
-χ = p ∈ A. This follows immediately by expanding the hypotheses with (∧*) and (Inf), choosing the atom to be χ.
-χ = χ1 → χ2. By (→), the conclusion is equivalent to χ1 ⊩*_B χ2. By (Inf), this is equivalent to the following: for any C ⊇ B, if ⊩*_C χ1, then ⊩*_C χ2. Therefore, fix an arbitrary C ⊇ B such that ⊩*_C χ1. By the induction hypothesis (IH), it suffices to show: (1) ⊩*_C ϕ ∧ ψ and (2) for any D ⊇ C, if ⊩*_D ϕ and ⊩*_D ψ, then ⊩*_D χ2. By Lemma IPL*-Monotone on the first hypothesis, we immediately get (1). For (2), fix an arbitrary base D ⊇ C such that ⊩*_D ϕ and ⊩*_D ψ. By the second hypothesis, we obtain ⊩*_D χ1 → χ2 -that is, χ1 ⊩*_D χ2. Hence, by (Inf) and IPL*-Monotone (since D ⊇ B, and ⊩*_C χ1 gives ⊩*_D χ1), we have ⊩*_D χ2, as required.
-χ = χ1 ∧ χ2. By (∧*), the conclusion is equivalent to the following: for any C ⊇ B and atomic p, if χ1, χ2 ⊩*_C p, then ⊩*_C p. Therefore, fix arbitrary C ⊇ B and p such that χ1, χ2 ⊩*_C p. By (Inf), for any D ⊇ C, if ⊩*_D χ1 and ⊩*_D χ2, then ⊩*_D p. We require to show ⊩*_C p. By the IH, it suffices to show the following: (1) ⊩*_C ϕ ∧ ψ and (2) for any E ⊇ C, if ⊩*_E ϕ and ⊩*_E ψ, then ⊩*_E p. Since B ⊆ C, by Lemma IPL*-Monotone on the first hypothesis we immediately get (1). For (2), fix an arbitrary base E ⊇ C such that ⊩*_E ϕ and ⊩*_E ψ. By the second hypothesis, we obtain ⊩*_E χ1 ∧ χ2; hence, by (∧*) (since χ1, χ2 ⊩*_E p follows from χ1, χ2 ⊩*_C p and E ⊇ C), we obtain ⊩*_E p, as required.
-χ = χ1 ∨ χ2. By (∨), the conclusion is equivalent to the following: for any C ⊇ B and atomic p, if χ1 ⊩*_C p and χ2 ⊩*_C p, then ⊩*_C p. Therefore, fix an arbitrary base C ⊇ B and atomic p such that χ1 ⊩*_C p and χ2 ⊩*_C p. By the IH, it suffices to prove the following: (1) ⊩*_C ϕ ∧ ψ and (2) for any D ⊇ C, if ⊩*_D ϕ and ⊩*_D ψ, then ⊩*_D p. By Lemma IPL*-Monotone on the first hypothesis, we immediately get (1). For (2), fix an arbitrary D ⊇ C such that ⊩*_D ϕ and ⊩*_D ψ. Since D ⊇ B, we obtain ⊩*_D χ1 ∨ χ2 by the second hypothesis. By (∨), we obtain ⊩*_D p, as required.
-χ = ⊥. By (⊥), the conclusion is equivalent to the following: ⊩*_B r for all atomic r. By the IH, it suffices to prove the following: (1) ⊩*_B ϕ ∧ ψ and (2) for any C ⊇ B, if ⊩*_C ϕ and ⊩*_C ψ, then ⊩*_C r. By the first hypothesis, we have (1). For (2), fix an arbitrary C ⊇ B such that ⊩*_C ϕ and ⊩*_C ψ. By the second hypothesis, ⊩*_C ⊥ obtains. By (⊥), we obtain ⊩*_C r, as required.
This completes the induction.
Corollary 1. For an arbitrary base B and formula ϕ: ⊩_B ϕ iff, for every X ⊇ B and every atom p, if ϕ ⊩_X p, then ⊩_X p.

Proof. Let ⊤ be any formula such that ⊩ ⊤ -for example, ⊤ := (p ∧ (p → q)) → q.
We apply the two equivalent definitions of ∧ to the neutrality of ⊤:

⊩_B ϕ iff ⊩_B ϕ and ⊩_B ⊤ (def. of ⊤)
iff ⊩_B ϕ ∧ ⊤ (∧)
iff for any X ⊇ B and any p ∈ A, ϕ, ⊤ ⊩_X p implies ⊩_X p (∧*)
iff for any X ⊇ B and any p ∈ A, ϕ ⊩_X p implies ⊩_X p (def. of ⊤)
This establishes the desired equivalence.
B Omitted proofs from Section 4.1
Proposition 3. The support relation ⊩^S_B from Definition 11 is well-defined.
Proof. We show that this is a valid inductive definition by providing a metric. We follow the idea of Sandqvist, noting that the extra layer of complexity given by the resource S in ⊩^S_B does not affect the argument for well-definedness.
We define the degree of IMLL formulas as follows: Note that for each of (I), (⊗), and (⊸), the formulas appearing in the definitional clauses all have strictly smaller degrees than the formula itself, and the atomic case S B is defined by the derivability relation as S ⊢ B p. Therefore this is a valid inductive definition. Proposition 1. If Γ S B ϕ and C ⊇ B, then Γ S C ϕ. Proof. Formally we prove by induction on (see Definition 11).
-For the base case, Γ ⊩^S_B ϕ is of the form ⊩^S_B p where p is an atom. By definition, this means S ⊢_B p. For an arbitrary C that extends B, S ⊢_C p also holds, simply because the derivability relation ⊢_X is totally determined by the atomic rules in the base X, and C ⊇ B means that every atomic rule in B is also in C. Then S ⊢_C p says ⊩^S_C p.
-For the inductive cases (⊗), (I), (⊸) (expanded using (Inf) and ( , )), note that each uses a universal quantification over bases extending B, namely 'for every X ⊇ B, ...'. For an arbitrary base C that extends B, the universally quantified statement also holds with the quantification over all bases extending C, namely 'for every X ⊇ C, ...'. Therefore, the inductive steps also pass.
This completes the inductive proof.
Corollary 2. Γ ⊩^S ϕ iff Γ ⊩^S_∅ ϕ.
Proof. Recall that Γ ⊩^S ϕ means the following: for any base B, Γ ⊩^S_B ϕ holds. For the 'only if' direction, note that Γ ⊩^S ϕ implies, in particular, that Γ ⊩^S_∅ ϕ holds.
For the 'if' direction, suppose Γ ⊩^S_∅ ϕ holds. Then, for an arbitrary B, since B ⊇ ∅ holds, we can apply Proposition 1 and conclude that Γ ⊩^S_B ϕ also holds. Since this is true for an arbitrary base B, we have Γ ⊩^S ϕ.
1. P , S ⊢ B q. 2.
For every X ⊇ B and atomic multisets T 1 , . . . , T n , if T i ⊢ X p i holds for all i = 1, . . . , n, then T 1 , . . . , T n , S ⊢ X q.
Proof. It is straightforward to see that (2) entails (1): we take X to be B, and T i to be [p i ] for each i = 1, . . . , n. Since p 1 ⊢ B p 1 , . . . , p n ⊢ B p n all hold by (Ref), it follows from (2) that p 1 , . . . , p n , S ⊢ B q, namely P , S ⊢ B q.
As for (1) entails (2), we prove by induction on how P , S ⊢ B q is derived (see Definition 10).
-P , S ⊢ B q holds by (Ref). That is, P , S = [q], and q ⊢ B q follows by (Ref).
Here are two subcases, depending on which of P and S is [q].
-Suppose P = [q] and S = ∅. So (2) becomes: for every X ⊇ B and T , if T ⊢ X q, then T ⊢ X q. This holds a fortiori. -Suppose S = [q] and P = ∅. Since P = ∅, (2) becomes: for every X ⊇ B, S ⊢ X q. This holds by (Ref). -P, S ⊢ B q holds by (App). We assume that P = P 1 , . . . , P k , S = S 1 , . . . , S k , and the following hold for some Q 1 , . . . , Q k and r 1 , . . . , r k :
P1 , S1 , Q1 ⊢_B r1, . . . , Pk , Sk , Qk ⊢_B rk    (1)

(Q1 ⊲ r1, . . . , Qk ⊲ rk) ⇒ q is in B    (2)
In order to prove (2), we fix an arbitrary base C ⊇ B and atomic multisets T1, . . . , Tn such that T1 ⊢_C p1, . . . , Tn ⊢_C pn, and show T1 , . . . , Tn , S ⊢_C q. Let us assume Pi = pi1 , . . . , piℓi for each i = 1, . . . , k. We apply the IH to every Pi , Si , Qi ⊢_B ri from (1), and get Ti1 , . . . , Tiℓi , Si , Qi ⊢_C ri. Moreover, the atomic rule from (2) is also in C, since C ⊇ B. Therefore, we can apply (App) and get T11 , . . . , T1ℓ1 , S1 , . . . , Tk1 , . . . , Tkℓk , Sk ⊢_C q.
By the definition of S i and T ij , this is precisely T 1 , . . . , T n , S ⊢ C q.
This completes the inductive proof.
C Proof of Soundness
This appendix is devoted to the detailed proof of the soundness (Theorem 3) of the base-extension semantics for IMLL.
Theorem 3 (Soundness). If Γ ⊢ ϕ, then Γ ⊩ ϕ.

Proof. Recall that Γ ⊩ ϕ abbreviates Γ ⊩^∅_∅ ϕ. By the inductive definition of ⊢, it suffices to prove the following:
(Ax) ϕ ⊩ ϕ
(⊸I) If Γ , ϕ ⊩ ψ, then Γ ⊩ ϕ ⊸ ψ.
(⊸E) If Γ ⊩ ϕ ⊸ ψ and ∆ ⊩ ϕ, then Γ , ∆ ⊩ ψ.
(⊗I) If Γ ⊩ ϕ and ∆ ⊩ ψ, then Γ , ∆ ⊩ ϕ ⊗ ψ.
(⊗E) If Γ ⊩ ϕ ⊗ ψ and ∆ , ϕ , ψ ⊩ χ, then Γ , ∆ ⊩ χ.
(II) ⊩ I
(IE) If Γ ⊩ χ and ∆ ⊩ I, then Γ , ∆ ⊩ χ.

Now we prove them one by one. We assume that Γ = γ1 , . . . , γm and ∆ = δ1 , . . . , δn in all the items to be checked.

-(Ax) holds a fortiori by the definition of the validity relation ⊩: by (Inf), ϕ ⊩ ϕ means that, for every base X, if ⊩_X ϕ, then ⊩_X ϕ.
-(⊸I). Assume Γ , ϕ ⊩ ψ; we show Γ ⊩ ϕ ⊸ ψ. By (Inf), the assumption Γ , ϕ ⊩ ψ boils down to the following:
For every base X and multiset of atoms P: if there exist S1, . . . , Sm, T satisfying P = S1 , . . . , Sm , T such that ⊩^{S1}_X γ1, . . . , ⊩^{Sm}_X γm, and ⊩^T_X ϕ, then ⊩^P_X ψ.    (3)
In order to show Γ ⊩ ϕ ⊸ ψ, we fix an arbitrary base B and multiset of atoms P such that there exist P1, . . . , Pm with P = P1 , . . . , Pm and ⊩^{P1}_B γ1, . . . , ⊩^{Pm}_B γm. The goal is to show ⊩^P_B ϕ ⊸ ψ. By (⊸), ⊩^P_B ϕ ⊸ ψ means ϕ ⊩^P_B ψ. To show ϕ ⊩^P_B ψ, we fix an arbitrary C ⊇ B and multiset Q such that ⊩^Q_C ϕ, and prove ⊩^{P,Q}_C ψ. By monotonicity of ⊩ with respect to the base, ⊩^{Pi}_B γi implies ⊩^{Pi}_C γi for i = 1, . . . , m. Applying (3) to this together with ⊩^Q_C ϕ, it follows that ⊩^{P,Q}_C ψ. -(⊸E). Assume Γ ⊩ ϕ ⊸ ψ and ∆ ⊩ ϕ; we show Γ , ∆ ⊩ ψ. Spelling out the definitions of Γ ⊩ ϕ ⊸ ψ and ∆ ⊩ ϕ using (Inf), we have:
For every base X and atomic multisets P = P1 , . . . , Pm:
if ⊩^{P1}_X γ1, . . . , ⊩^{Pm}_X γm, then ⊩^P_X ϕ ⊸ ψ.    (4)

For every base Y and atomic multisets Q = Q1 , . . . , Qn:
if ⊩^{Q1}_Y δ1, . . . , ⊩^{Qn}_Y δn, then ⊩^Q_Y ϕ.    (5)
In order to show Γ , ∆ ⊩ ψ, we fix an arbitrary base B and atomic multisets S = S1 , . . . , Sm and T = T1 , . . . , Tn such that ⊩^{S1}_B γ1, . . . , ⊩^{Sm}_B γm and ⊩^{T1}_B δ1, . . . , ⊩^{Tn}_B δn, and go on to prove ⊩^{S,T}_B ψ. Using (4), ⊩^{S1}_B γ1, . . . , ⊩^{Sm}_B γm implies ⊩^S_B ϕ ⊸ ψ; using (5), ⊩^{T1}_B δ1, . . . , ⊩^{Tn}_B δn implies ⊩^T_B ϕ. Spelling out the definition of ⊩^S_B ϕ ⊸ ψ, we know that, for an arbitrary base X ⊇ B and atomic multiset U, if ⊩^U_X ϕ, then ⊩^{S,U}_X ψ. In particular, since ⊩^T_B ϕ, we have ⊩^{S,T}_B ψ.
-(⊗I). We assume Γ ⊩ ϕ and ∆ ⊩ ψ, and show that Γ , ∆ ⊩ ϕ ⊗ ψ holds. Spelling out the definition of Γ , ∆ ⊩ ϕ ⊗ ψ, it suffices to fix some base B and atomic multisets S1, . . . , Sm, T1, . . . , Tn (denote S = S1 , . . . , Sm and T = T1 , . . . , Tn) such that ⊩^{S1}_B γ1, . . . , ⊩^{Sm}_B γm and ⊩^{T1}_B δ1, . . . , ⊩^{Tn}_B δn, and to show ⊩^{S,T}_B ϕ ⊗ ψ. By (⊗), we fix an arbitrary C ⊇ B, atomic multiset U, and atom p such that ϕ , ψ ⊩^U_C p, and show ⊩^{S,T,U}_C p. From the assumptions, ⊩^{S,T}_B ϕ , ψ obtains, so by monotonicity ⊩^{S,T}_C ϕ , ψ; by (Inf), this together with ϕ , ψ ⊩^U_C p yields ⊩^{S,T,U}_C p.
-(⊗E). We assume Γ ⊩ ϕ ⊗ ψ and ∆ , ϕ , ψ ⊩ χ, and show Γ , ∆ ⊩ χ. Fix a base B and atomic multisets S, T such that ⊩^S_B Γ and ⊩^T_B ∆; then ⊩^S_B ϕ ⊗ ψ, and, spelling out ∆ , ϕ , ψ ⊩ χ:

for every base X and atomic multisets P, Q, if ⊩^P_X ∆ and ⊩^Q_X ϕ , ψ, then ⊩^{P,Q}_X χ.    (6)

Under the assumption ⊩^T_B ∆, by fixing P and Q to be T and S respectively, (6) implies the following:
For every base X ⊇ B: if ⊩^S_X ϕ , ψ, then ⊩^{T,S}_X χ.    (7)
Now, given ⊩^S_B ϕ ⊗ ψ and (7), we can apply Lemma 2 and conclude ⊩^{S,T}_B χ.
-(II). By (I), ⊩ I is equivalent to the following: for every base X, atomic multiset U, and atom q, if ⊩^U_X q, then ⊩^U_X q. This is true a fortiori.
-(IE). We assume Γ ⊩ χ and ∆ ⊩ I, and show Γ , ∆ ⊩ χ. Towards this, we fix some base B and atomic multisets S, T such that ⊩^S_B Γ and ⊩^T_B ∆, and show ⊩^{S,T}_B χ. By ∆ ⊩ I and ⊩^T_B ∆, we know ⊩^T_B I. By Γ ⊩ χ and ⊩^S_B Γ, we have ⊩^S_B χ. Now, applying Lemma 1 to ⊩^T_B I and ⊩^S_B χ, we conclude ⊩^{S,T}_B χ. This completes the verification of all items.
Lemma 2. For an arbitrary base B, atomic multisets S, T, and formulas ϕ, ψ, χ, if (1) ⊩^S_B ϕ ⊗ ψ and (2) ϕ , ψ ⊩^T_B χ, then (3) ⊩^{S,T}_B χ.
Proof. We proceed by induction on the structure of χ. Condition (2) can be spelled out as: for every X ⊇ B and U, if ⊩^U_X ϕ , ψ, then ⊩^{U,T}_X χ.
-When χ is an atom, the statement of the lemma follows immediately from (⊗).
χ = I. By (I), (3) amounts to that, for every X ⊇ B, atomic multiset U , atom p, if U X p, then S,T,U X p. So we fix some base C ⊇ B, atomic multiset Q, and atom q, such that Q C q. The goal is to show S,T,Q C q. According to the atomic case, this follows from the following two facts:
⊩^S_C ϕ ⊗ ψ    (8)

ϕ , ψ ⊩^{T,Q}_C q    (9)
Here (8) follows immediately from (1) and C ⊇ B, so it suffices to prove (9). For this, we fix some base D ⊇ C , atomic multiset R 1 , R 2 such that R1 D ϕ and R2 D ψ hold, and show that T,Q,R1,R2 D q. Note that (2) now becomes ϕ , ψ T B I. So together with R1 D ϕ and R2 D ψ, it follows that T,R1,R2 D I. This according to (I) says that for every X ⊇ D, atomic multiset U , and atom p, U X p implies T,R1,R2,U X p. In particular, since Q D q (which is immediately consequence of Q C q and D ⊇ C ), it follows that T,R1,R2,Q D q. χ = σ ⊸ τ . The goal is to prove that, given (1) and (2), S,T B σ ⊸ τ holds; spelling out the definition using (⊸) and (Inf), this amounts to showing that for arbitrary X ⊇ B and atomic multiset U , if U X σ, then S,T,U X τ . So we fix an arbitrary C ⊇ B and atomic multiset P such that P C σ holds, and the goal is to show S,T,P C τ . By IH, it suffices to show the following:
⊩^S_C ϕ ⊗ ψ    (10)

ϕ , ψ ⊩^{T,P}_C τ    (11)
Since (10) is exactly (1), we focus on (11). So we fix an arbitrary D ⊇ C and Q such that Q D [ϕ, ψ], and show Q,T,P D τ . Apply (2) to Q D [ϕ, ψ], we get Q,T D σ ⊸ τ , or equivalently σ Q,T D τ . That is, for every Y ⊇ D and atomic multiset U , U Y σ implies Q,T,U Y τ . Therefore, given P C σ, by monotonicity we have P D σ, thus Q,T,P D τ . χ = σ ⊗ τ . Given (1) and (2), we show S,T B σ ⊗ τ . Spelling out the definition using (⊗), we can simply fix an arbitrary C ⊇ B, atomic multiset P , and atom p such that σ , τ P C p; in other words, for every X ⊇ C and U, if U X σ , τ, then U,P X p
and then show ⊩^{S,T,P}_C p. By the IH, it suffices to prove the following:
⊩^S_C ϕ ⊗ ψ (13)
ϕ, ψ ⊩^{T,P}_C p (14)
Now (13) follows immediately from (1) by monotonicity. Towards (14), let us fix arbitrary D ⊇ C and Q such that ⊩^Q_D [ϕ, ψ], and prove ⊩^{Q,T,P}_D p. By (2), ⊩^Q_D [ϕ, ψ] entails that ⊩^{Q,T}_D σ ⊗ τ. By (⊗), this means that,
for every Y ⊇ D, V and q, if σ, τ ⊩^V_Y q, then ⊩^{Q,T,V}_Y q. (15)
In particular, since σ, τ ⊩^P_D p, we can conclude from (15) that ⊩^{Q,T,P}_D p.
This completes all the cases of the proof by induction.
Lemma 1. For an arbitrary base B, atomic multisets S, T, and formula χ: if 1. ⊩^S_B I, and 2. ⊩^T_B χ, then 3. ⊩^{S,T}_B χ. Proof. We proceed by induction on the structure of χ.
χ is some atom q. Spelling out the definition of (1) ⊩^S_B I, we have that for arbitrary X ⊇ B, atomic multiset U, and atom p, if ⊩^U_X p, then ⊩^{S,U}_X p. Applying this to (2) ⊩^T_B q, it follows that ⊩^{S,T}_B q.
χ = I. In order to prove ⊩^{S,T}_B I, it suffices to fix some base C ⊇ B, atomic multiset W, and atom q such that ⊩^W_C q, and prove that ⊩^{S,T,W}_C q. Since ⊩^S_B I, C ⊇ B, and ⊩^W_C q, we have ⊩^{S,W}_C q. This together with ⊩^T_B I and C ⊇ B implies that ⊩^{S,T,W}_C q.
χ = σ ⊗ τ. This case uses Lemma 2. The goal is to show that ⊩^{S,T}_B σ ⊗ τ; using (⊗), for every X ⊇ B, U, p, if σ, τ ⊩^U_X p, then ⊩^{S,T,U}_X p. So we fix some base C ⊇ B, atomic multiset W, and atom q such that σ, τ ⊩^W_C q, and the goal is now to show that ⊩^{S,T,W}_C q. Applying Lemma 2 to ⊩^T_C σ ⊗ τ (which follows immediately from ⊩^T_B σ ⊗ τ and C ⊇ B) and σ, τ ⊩^W_C q, we have ⊩^{T,W}_C q. This together with ⊩^S_B I implies that ⊩^{S,T,W}_C q.
χ = σ ⊸ τ. Spelling out the definition (⊸), the goal ⊩^{S,T}_B σ ⊸ τ is equivalent to σ ⊩^{S,T}_B τ. So we fix some base C ⊇ B and atomic multiset W such that ⊩^W_C σ, and then show that ⊩^{S,T,W}_C τ. By the IH, from ⊩^S_C I and ⊩^W_C σ, we have ⊩^{S,W}_C σ. This together with ⊩^T_B σ ⊸ τ implies that ⊩^{S,T,W}_C τ.
This completes all the inductive cases.
D Proof of Completeness
Proposition 4 (IMLL-AtComp). For an arbitrary base B, atomic multisets P, S, and atom q: P, S ⊢_B q iff P ⊩^S_B q.
Proof. The equivalence follows from Proposition 2. Let us assume that P = [p_1, . . . , p_n]. Starting from P ⊩^S_B q, by (Inf), this means that for every base X ⊇ B and atomic multisets T_1, . . . , T_n, ⊩^{T_1}_X p_1, . . . , ⊩^{T_n}_X p_n implies ⊩^{S,T}_X q. Spelling out the definition of ⊩ for atoms (At), P ⊩^S_B q is therefore equivalent to: for every base X ⊇ B and atomic multisets T_1, . . . , T_n, T_1 ⊢_X p_1, . . . , T_n ⊢_X p_n implies S, T ⊢_X q. This is precisely P, S ⊢_B q, given Proposition 2.
Theorem 4 (Completeness). If Γ ⊩ ϕ, then Γ ⊢ ϕ.
Proof. We assume Γ ⊩ ϕ and Γ = [γ_1, . . . , γ_n]. Let Ξ be SubF(Γ ∪ {ϕ}), namely the set of all subformulas of Γ and ϕ. Since Γ ∪ {ϕ} is finite, Ξ is also a finite set. We define a 'flattening' function (·)♭ : Ξ → A: it assigns to each non-atomic ξ ∈ Ξ a unique atom which does not appear in Ξ, denoted ξ♭ (uniqueness means that ξ♭ ≠ ζ♭ whenever ξ ≠ ζ); for each atomic p ∈ Ξ, we define p♭ to be p itself. Conversely, we define the 'deflattening' function (·)♮ : A → Ξ ∪ A as an extension of the inverse of (·)♭: for every atom in the image of (·)♭, say γ♭ (note that such a γ is unique if it exists), we define (γ♭)♮ as γ; on all other atoms, (·)♮ is simply the identity. We generalize both notations to multisets of formulas: ∆♭ := [δ♭ | δ ∈ ∆] and P♭ := [p♭ | p ∈ P]; likewise for (·)♮.
We again construct the base M that encodes the natural deduction rules for IMLL. Base M contains the following atomic rules, where σ and τ range over Γ ∪ {ϕ}, and p ranges over all atoms:
(1) (σ♭ ⊲ τ♭) ⇒ (σ ⊸ τ)♭
(2) (⊲(σ ⊸ τ)♭), (⊲σ♭) ⇒ τ♭
(3) (⊲σ♭), (⊲τ♭) ⇒ (σ ⊗ τ)♭
(4) (⊲(σ ⊗ τ)♭), (σ♭, τ♭ ⊲ p) ⇒ p
(5) ⇒ I♭
(6) (⊲I♭), (⊲τ♭) ⇒ τ♭
The following two statements are the key to completeness:
(†) For every ξ ∈ Ξ, every X ⊇ M, and every U: ⊩^U_X ξ♭ iff ⊩^U_X ξ.
(‡) For every atomic multiset P and atom q: if P ⊢_M q, then P♮ ⊢ q♮.
Starting from our assumption Γ ⊩ ϕ, we can conclude Γ♭ ⊩_M ϕ♭ as follows: starting from an arbitrary base B ⊇ M and atomic multisets U_1, . . . , U_n satisfying ⊩^{U_1}_B γ_1♭, . . . , ⊩^{U_n}_B γ_n♭, by (the 'only if' direction of) (†) we have ⊩^{U_1}_B γ_1, . . . , ⊩^{U_n}_B γ_n; by the assumption Γ ⊩ ϕ, it follows that ⊩^U_B ϕ, where U = U_1, . . . , U_n; applying (†) again (but this time using the 'if' direction) we know ⊩^U_B ϕ♭. Then, according to Proposition 4, Γ♭ ⊩_M ϕ♭ implies Γ♭ ⊢_M ϕ♭. So, by (‡), (Γ♭)♮ ⊢ (ϕ♭)♮, which according to the definition of (·)♭ and (·)♮ says Γ ⊢ ϕ.
So it only remains to prove ( †) and ( ‡). We first look at ( †). We fix an arbitrary base B ⊇ M and atomic multiset S, and prove by induction on the structure of ξ.
ξ is atomic. Then by definition, ξ ♭ = ξ, so ( †) is a tautology.
ξ is I. Then:
⊩^S_B I♭ iff S ⊢_B I♭ (At)
iff for every X ⊇ B, U, p: if U ⊢_X p, then S, U ⊢_X p (Lemma 5)
iff for every X ⊇ B, U, p: if ⊩^U_X p, then ⊩^{S,U}_X p (At)
iff ⊩^S_B I (I)
Fig. 2: Atomic System N

Formally, given a judgement Γ ⊩ ϕ, to every sub-formula ρ we associate a unique atomic proposition ρ♭ as follows:
⊩^Q_B ∆. It remains to verify ⊩^{P,Q}_B χ. When χ is atomic, this follows immediately from ⊩^P_B χ and ⊩^Q_B I by (I). To handle non-atomic χ, we require the following: Lemma 1. For an arbitrary base B, atomic multisets S, T, and formula χ: if 1. ⊩^S_B I, and 2. ⊩^T_B χ, then 3. ⊩^{S,T}_B χ. This lemma follows by induction on the structure of χ, with the base case given by (I). One cannot use this general form to define I, as doing so would result in an impredicative definition of support.
Fig. 5: Atomic System M
deg(ϕ • ψ) := deg(ϕ) + deg(ψ) + 1, where • ∈ {⊗, ⊸}.
ξ is of the form σ ⊸ τ. By the construction of Ξ, σ and τ are both in Ξ as well, so the IH applies. Therefore, ⊩^S_B (σ ⊸ τ)♭ iff σ♭ ⊩^S_B τ♭ (Lemma 5 and Proposition 4).

ξ is of the form σ ⊗ τ. Again we can use the IH on σ and τ, as both are in Ξ as well. Therefore, ⊩^S_B (σ ⊗ τ)♭ iff for every X ⊇ B, U, p: if σ♭, τ♭ ⊩^U_X p, then ⊩^{S,U}_X p (Lemma 5 and Proposition 4).

This completes the proof by induction on ξ for (†).

Next we turn to showing (‡). By the inductive definition of ⊢_M (see Definition 10), it suffices to show claims (16) and (17). Now (16) follows immediately from (ax). As for (17), we simply need to prove the statement for each atomic rule in base M, which according to the definition of M amounts to proving the following facts:
- …, which follows immediately from (⊸I). According to the definition of (·)♮, this is equivalent to that S_1 …
- According to the definition of (·)♮, this is equivalent to that S♮ ⊢ σ ⊗ τ and T♮, σ, τ ⊢ p♮ implies S♮, T♮ ⊢ p♮, which follows immediately from (⊗E).
- Suppose ⇒ I is in M; then we show ⊢ I♮, namely ⊢ I. And this is exactly …
- By the definition of (·)♮, this is equivalent to that S_1♮ ⊢ I and S_2♮ ⊢ τ implies S_1♮, S_2♮ ⊢ τ. This follows from (I_E).

This completes the case analysis for establishing (‡).

Lemma 5. The following holds for an arbitrary base B ⊇ M and atomic multiset S, when σ ⊸ τ, σ ⊗ τ, or I is in Ξ, respectively:

Proof. Let us fix an arbitrary base B ⊇ M and atomic multiset S.
1. We prove the two directions separately.
- Left to right: …
2. Again we show the two directions separately.
- Left to right: We assume S ⊢_B (σ ⊗ τ)♭. It suffices to fix some C ⊇ B, T and q satisfying T, σ♭, τ♭ ⊢_C q, and then show S, T ⊢_C q. Note that the atomic rule (⊲(σ ⊗ τ)♭), (σ♭, τ♭ ⊲ q) ⇒ q is in B and thus also in C; therefore, from the two assumptions we can derive S, T ⊢_C q.
- Right to left: We assume that for every Y ⊇ B, V, and … follows from the assumption. To show σ♭, τ♭ ⊢_B (σ ⊗ τ)♭, it suffices to apply (App) to the atomic rule (⊲σ♭), (⊲τ♭) ⇒ (σ ⊗ τ)♭, as well as the fact that both σ♭ ⊢_B σ♭ and τ♭ ⊢_B τ♭ hold (using (Ref)).
3. We prove the two directions separately.
- Left to right: We fix some C ⊇ B, T, and q such that T ⊢_C q, and the goal is to show that S, T ⊢_C q holds. Notice that the atomic rule (⊲I♭), (⊲τ♭) ⇒ τ♭ is in B and thus in C, so applying (App) to this rule together with S ⊢_C I♭ (an immediate consequence of S ⊢_B I♭ and C ⊇ B) and T ⊢_C q entails that S, T ⊢_C q.
- Right to left: This is the simpler direction. Since the atomic rule ⇒ I♭ is in B, using (App) we have ⊢_B I♭. Then the RHS of the statement entails that S ⊢_B I♭.

This completes the proof for all three statements.
[
"A Cover Time Study of a non-Markovian Algorithm",
"A Cover Time Study of a non-Markovian Algorithm"
] | [
"Guanhua Fang [email protected] ",
"Gennady Samorodnitsky ",
"Zhiqiang Xu [email protected] ",
"\nSchool of Management\nSchool of Operations Research and Information Engineering\nFudan University\nShanghaiChina\n",
"\nDepartment of Machine Learning Mohamed bin Zayed University of Artificial Intelligence\nCornell University\nAbu DhabiNew YorkUSA, UAE\n"
] | [
"School of Management\nSchool of Operations Research and Information Engineering\nFudan University\nShanghaiChina",
"Department of Machine Learning Mohamed bin Zayed University of Artificial Intelligence\nCornell University\nAbu DhabiNew YorkUSA, UAE"
] | [] | Given a traversal algorithm, cover time is the expected number of steps needed to visit all nodes in a given graph. A smaller cover time means a higher exploration efficiency of traversal algorithm. Although random walk algorithms have been studied extensively in the existing literature, there has been no cover time result for any non-Markovian method. In this work, we stand on a theoretical perspective and show that the negative feedback strategy (a count-based exploration method) is better than the naive random walk search. In particular, the former strategy can locally improve the search efficiency for an arbitrary graph. It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc. Moreover, we make connections between our results and reinforcement learning literature to give new insights on why classical UCB and MCTS algorithms are so useful. Various numerical results corroborate our theoretical findings. | null | [
"https://export.arxiv.org/pdf/2306.04902v1.pdf"
] | 259,108,691 | 2306.04902 | 0e03918ed0c35bc638b5af4e6200cde9e6992e95 |
A Cover Time Study of a non-Markovian Algorithm
Guanhua Fang [email protected]
Gennady Samorodnitsky
Zhiqiang Xu [email protected]
School of Management
School of Operations Research and Information Engineering
Fudan University
ShanghaiChina
Department of Machine Learning Mohamed bin Zayed University of Artificial Intelligence
Cornell University
Abu DhabiNew YorkUSA, UAE
A Cover Time Study of a non-Markovian Algorithm
Given a traversal algorithm, cover time is the expected number of steps needed to visit all nodes in a given graph. A smaller cover time means a higher exploration efficiency of traversal algorithm. Although random walk algorithms have been studied extensively in the existing literature, there has been no cover time result for any non-Markovian method. In this work, we stand on a theoretical perspective and show that the negative feedback strategy (a count-based exploration method) is better than the naive random walk search. In particular, the former strategy can locally improve the search efficiency for an arbitrary graph. It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc. Moreover, we make connections between our results and reinforcement learning literature to give new insights on why classical UCB and MCTS algorithms are so useful. Various numerical results corroborate our theoretical findings.
Introduction
The cover time of a walk/an algorithm on a graph is the expectation of the number of steps required to visit every node/vertex. Formally, given a finite graph, we say there is a (directed) edge between node i and node j if we can take some action such that the agent could transit from state i to state j. For time n = 0, 1, . . ., we use X_n to denote the sequence of nodes covered by the traversal algorithm. We define

T_C := the smallest n such that X_0, X_1, . . . , X_n visit all nodes of the graph, (1)
whose expectation E[T_C] is called the cover time (Broder and Karlin, 1989; Kahn et al., 1989). It is of particular interest to study the cover time since it quantifies how fast/effectively a walk/an algorithm can traverse the whole graph. One of the most common and important walks is the (simple) random walk Pearson (1905); Abdullah (2012); Spitzer (2013), which is a sequence of movements from one node to another, where at each step an edge is chosen uniformly at random from the set of edges incident on the current node and the walk then transitions to the next node. The cover time of the random walk has been studied extensively over the past several decades Aldous (1991); Lovász and Winkler (1993); Grassberger (2017); Dembo et al. (2021). Other extended types of random walk, including the lazy random walk Avin et al. (2008) and the weighted random walk Abdullah (2012), have also been considered in the literature. Unfortunately, all such theoretical results on cover time pertain to memory-less random walks. There has been no result on the cover time of any non-Markovian traversal algorithm. In other words, it is relatively hard to analyze the covering properties of history-dependent random walk algorithms when the Markovian property fails to hold.
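Since E[T_C] rarely admits a closed form, it is natural to estimate it by direct simulation. Below is a minimal sketch (ours, not from the cited works) of a Monte Carlo estimator of the cover time of a simple random walk; the function names and the example path graph are our own choices.

```python
import random

def random_walk_cover_time(adj, start, rng):
    """Run one simple random walk and return T_C, the first time all nodes are visited."""
    visited = {start}
    x, t = start, 0
    while len(visited) < len(adj):
        x = rng.choice(adj[x])  # next node uniform over the neighbours of x
        t += 1
        visited.add(x)
    return t

def estimate_cover_time(adj, start, n_runs=10_000, seed=0):
    rng = random.Random(seed)
    return sum(random_walk_cover_time(adj, start, rng) for _ in range(n_runs)) / n_runs

# Example: a path graph 0-1-2-3; the estimate approximates E[T_C] starting at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(estimate_cover_time(adj, 0))
```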
In this paper, we try to bridge the aforementioned gap. Specifically, we consider a simple but important non-Markovian traversal algorithm, which we call the negative feedback strategy. To be mathematically precise, the negative feedback algorithm is a count-based method. If X_n = i at time n, the next state is uniformly randomly selected from the subset Smin^(n)_i that contains those nodes which have been least visited from state i up to time n. This procedure is called the "favor least" mechanism. Heuristically, it tends to move to un- or less-visited nodes and hence can improve the cover time. It is undoubtedly history-dependent, since it requires counting the transitions between each pair of neighbouring nodes.
Why should we consider the negative feedback algorithm? The reasons are three-fold. First, it is one of the simplest non-Markovian random walk algorithms. There is little hope of making any theoretical claims on the cover time of a very complex traversal algorithm. Second, it only requires storing a count table where each entry represents the number of movements from one node to its neighbour; the table can be updated very efficiently. Third, it has strong connections with algorithms in the reinforcement learning (RL) field. To be more concrete, given a discrete-state environment, we can treat each state as a node. The agent can follow a certain policy to explore the whole environment. The negative feedback strategy is often used as an exploration tool McFarlane (2018); Hazan et al. (2019) in an unknown Markov Decision Process (MDP) setting.
Our main results in this work are summarized here.
I. We first show a local improvement result for a general graph. To be specific, we consider a local version of the negative feedback algorithm where the "favor least" mechanism is only applied at the starting state X_0. For an arbitrary graph, we show that E_{π_loc}[N_j | X_0] ≤ E_{π_rw}[N_j | X_0] for any other node j ≠ X_0, where N_j is defined to be the number of excursions outside the starting node X_0 before node j is visited for the first time, and π_loc and π_rw stand for the local negative feedback algorithm and the random walk policy, respectively. This local improvement result implies that the negative feedback mechanism improves the exploration efficiency, at least locally, in the sense that the agent has a stronger tendency to visit other nodes instead of returning to the starting node.
II . We then make a step forward and show cover time improvement under several special graph structures.
We are able to show that E_{π_neg}[T_C] < E_{π_rw}[T_C] (π_neg represents the negative feedback algorithm) for Star, Path, Clique and Tree graphs. In particular, in the case of a balanced b-ary tree, we establish that E_{π_neg}[T_C] ≤ 4H·((b + 1)/(b − 1))·b^H, where b is the number of children of each non-leaf node and H is the depth of the tree. By contrast, it was shown (Aldous, 1991) that E_{π_rw}[T_C] ≈ 2H^2 b^{H+1} (log b)/(b − 1). Therefore, the negative feedback algorithm improves the cover time by a factor of order H log b. In other words, in tree-like RL games, the naive random walk search becomes less efficient compared with the negative feedback strategy as the action and state spaces become more complex.
The rest of the paper is organized as follows. In Section 2, we introduce the cover time formulation and provide an illustrative example to show why the negative feedback algorithm can improve over the random walk policy. In Sections 3 and 4, we establish the local and cover time improvement results, respectively. In Section 5, we make connections to the maximum-entropy exploration, UCB, and Monte Carlo tree search methods and make attempts on non-discrete cases. A concluding remark is given in Section 6. Numerical experiments, additional discussions and technical proofs are provided in the appendices.
Cover Time Formulation
Let us imagine that an agent walks on a finite and connected graph. If the agent can take some action to transit from node i to node j, then we say there is a (directed) edge from node i to j. To help the reader understand the terminology better, we provide the following examples. In a two-dimensional Grid World environment, a node can be viewed as the position of the agent. Since the agent can choose to move up, down, right or left, two nodes have an edge between them if and only if the two positions are adjacent to each other. In a Go game, two players take turns putting stones on the board. A node here represents a 19 × 19 board with white and black stones on it. There is a directed edge from node i to another node j only if node j can be reached from node i after a player takes a single action.
Given a starting time n = 0 and an initial node X_0, we let X_n, n = 0, 1, . . . describe the sequence of states/nodes governed by some exploration strategy and denote

T_C = the first time n such that X_0, X_1, . . . , X_n visit all nodes of the graph. (2)

The quantity E[T_C] (the expectation of T_C) is called the cover time (Broder and Karlin, 1989; Kahn et al., 1989).
In this paper, we mainly focus on two exploration strategies, random walk algorithm (Aldous, 1991;Dembo et al., 2021) and negative feedback algorithm, whose formal mathematical formulations are described as below.
Random walk algorithm This is a Markovian mechanism; if X n = i at some time n = 0, 1, 2, . . . for some node i, the next state is chosen uniformly at random among the neighbours of i in the graph.
P(X_{n+1} = j | X_0 = i_0, . . . , X_{n−1} = i_{n−1}, X_n = i) = { 1/d_i if (i, j) is an edge; 0 if (i, j) is not an edge }, (3)
where d_i is the degree of node i.

Negative feedback algorithm This is a non-Markovian mechanism. For every node i of the graph and every neighbour j of i, let N^(n)_{ij} be the number of times the agent has moved from node i to node j prior to time n (so that N^(0)_{ij} = 0 for all nodes i, j), and denote

Nmin^(n)_i = min_{j : (i,j) an edge} N^(n)_{ij}, Smin^(n)_i = { j : (i, j) is an edge and N^(n)_{ij} = Nmin^(n)_i }, Kmin^(n)_i = cardinality(Smin^(n)_i). (4)

If X_n = i at some time n = 0, 1, 2, . . . for some vertex i, the next state is chosen uniformly at random among those neighbours of i in the graph with the smallest prior selection count. That is,

P(X_{n+1} = j | X_0 = i_0, . . . , X_{n−1} = i_{n−1}, X_n = i) = { 1/Kmin^(n)_i if j ∈ Smin^(n)_i; 0 otherwise }. (5)

In other words, a neighbour node j will never be chosen unless it is among the least visited ones from the current node i. It is not hard to see that the negative feedback algorithm requires storing the counts of transitions between each pair of nodes that share an edge. Therefore the algorithm is non-Markovian and does not enjoy the nice properties (e.g. the regeneration property) that the random walk algorithm does, which makes theoretical analysis on general graphs extremely hard.
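To make Eqs. (4)–(5) concrete, here is one possible Python implementation (ours, not the authors' code) of a single negative feedback trajectory; the dictionary `counts` plays the role of the transition counts N_ij and the graph is given as an adjacency list.

```python
import random
from collections import defaultdict

def negative_feedback_cover_time(adj, start, seed=0):
    """Walk by the 'favor least' rule of Eqs. (4)-(5); return the cover time T_C."""
    rng = random.Random(seed)
    counts = defaultdict(int)          # counts[(i, j)] = number of moves i -> j so far
    visited = {start}
    x, t = start, 0
    while len(visited) < len(adj):
        n_min = min(counts[(x, j)] for j in adj[x])              # Nmin_i in Eq. (4)
        s_min = [j for j in adj[x] if counts[(x, j)] == n_min]   # Smin_i in Eq. (4)
        j = rng.choice(s_min)                                    # uniform on Smin, Eq. (5)
        counts[(x, j)] += 1
        x, t = j, t + 1
        visited.add(x)
    return t

# Same example path graph as before; compare with the random walk estimate.
print(negative_feedback_cover_time({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 0))
```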
Remark 2.1. There are quite a few existing works on the cover time of random walk algorithm (see Kahn et al. (1989);Feige (1995); Abdullah (2012) and references therein) and its variants (e.g. lazy random walk (Avin et al., 2008), random walk with heterogeneous step lengths (Guinard and Korman, 2020)). However, to our knowledge, there is no literature considering the cover time problem of any count-based algorithm.
Let us first consider the following specific toy example, which shows the advantage of the negative feedback algorithm over the random walk algorithm. A toy grid world. It is a three-by-three two-dimensional maze as shown in Figure 1. Black grids are obstacles which are not accessible. At starting time n = 0, the agent is placed at the "Start" grid. The "End" grid is the target node. A positive reward is not given until the agent arrives at the "End" grid. One may wonder: under which of the two policies, random walk or negative feedback, does the agent take fewer steps in this simple task?

Figure 1: A 3 by 3 grid world. The black grid is inaccessible. The arrows represent the actions that can be taken in each grid.

We define T^π_task to be the first time of arriving at "End" under a policy π. Let π_rw be the random walk policy and π_neg be the negative feedback algorithm. Is E[T^{π_neg}_task] < E[T^{π_rw}_task]? The answer is affirmative, as indicated by the following proposition.
Proposition 2.1. In the toy grid world described above, we have E[T^{π_neg}_task] < E[T^{π_rw}_task] ≡ 23. Moreover, we consider the temporally-persistent/extended policy (Dabney et al., 2020), where one can randomly choose an action and perform it for several consecutive steps. To be specific, the agent first chooses the direction (up, down, right or left) uniformly at random and then chooses the repetition time z ∼ p(z) (z = 1, 2, . . .). We denote such a policy by π_per(p). When p(z = 1) = 1, the policy reduces to the random walk strategy. In this toy grid world, it can be shown (Proposition 2.2, proved in the appendix) that the temporally-persistent strategy is even worse than the random walk method.
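Proposition 2.1 can also be checked by simulation. The effective move graph below is our reconstruction from the recursions in the appendix proof of Proposition 2.2 (state 0 = "Start", state 6 = "End"); treat the adjacency as an assumption, not as part of the original figure.

```python
import random
from collections import defaultdict

# Effective move graph of the 3x3 maze, reconstructed from the proof of Prop. 2.2.
ADJ = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3, 5, 6], 5: [4], 6: []}

def hitting_time(policy, target=6, start=0, rng=None):
    """First time of reaching the target under 'rw' or 'neg' (negative feedback)."""
    rng = rng or random.Random()
    counts = defaultdict(int)
    x, t = start, 0
    while x != target:
        nbrs = ADJ[x]
        if policy == "rw":
            j = rng.choice(nbrs)
        else:  # negative feedback: uniform over the least-used outgoing edges
            m = min(counts[(x, k)] for k in nbrs)
            j = rng.choice([k for k in nbrs if counts[(x, k)] == m])
        counts[(x, j)] += 1
        x, t = j, t + 1
    return t

rng = random.Random(0)
for p in ("rw", "neg"):
    est = sum(hitting_time(p, rng=rng) for _ in range(100_000)) / 100_000
    print(p, round(est, 2))   # 'rw' should be close to 23; 'neg' should be smaller
```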
Local Improvement
Our goal is to provide a theoretical explanation why one can expect that, at least in some cases, the negative feedback algorithm has a smaller cover time than the random walk. Since a direct analytical analysis of the cover time on an arbitrary graph is extremely difficult, we will instead look at a related quantity in this section.
For a node j of the graph, we denote the first hitting time of j by T_j = inf{n ≥ 1 : X_n = j}. For an arbitrary node i, E[T_j | X_0 = i] represents the expected time to reach node j starting at node i. It is intuitively clear that the quantities E[T_j | X_0 = i], where i, j range over pairs of nodes, are strongly related to the cover time. More precisely, we define

µ_+ = max_{i,j} E[T_j | X_0 = i], µ_− = min_{i,j} E[T_j | X_0 = i].

Then for any starting node i of the graph, it holds that

µ_− H_{m−1} ≤ E[T_C] ≤ µ_+ H_{m−1}, (6)
where m is the total number of nodes in the graph and H_k := 1 + 1/2 + · · · + 1/k is the kth harmonic number; see Matthews (1988) for more detailed explanations. Therefore, as a substitute for comparing directly the cover times under the negative feedback algorithm and the random walk, it is desirable to compare the expected first hitting times under these two algorithms. For any two arbitrary nodes i and j in the graph, a direct comparison of E[T_j | X_0 = i] under the two algorithms also seems to be prohibitively difficult. We instead compare yet another related quantity. Let V_0 = 0 and, for any integer k ≥ 1, let

V_k = inf{n > V_{k−1} : X_n = i},

the time of the kth visit to state i. If V_{k−1} = ∞, then we set V_k = ∞ as well. We can think of the time interval {V_{k−1} + 1, . . . , V_k} as the kth excursion outside of the vertex i. Let

N_j = inf{k ≥ 1 : T_j ≤ V_k}

be the number of the excursion outside of i during which vertex j is visited for the first time. It is clear that E[T_j | X_0 = i] is also closely related to E[N_j | X_0 = i]. The intuition is that a smaller N_j indicates that the agent spends less time discovering node j.
In the rest of this section, we aim to compare the latter quantity between the two algorithms. To make the comparison possible and circumvent the non-Markovian issue, we consider the local version of the negative feedback algorithm. (Local negative feedback algorithm) The mechanism (5) is used only when the agent is in the starting state i. In every other state, the random walk dynamics is used! With this modification, we are able to show that the value of E[N_j | X_0 = i] under the local negative feedback algorithm is no larger than that under the random walk exploration strategy.
Remark 3.1. The technical reason for considering the local negative feedback algorithm is as follows. A nice property of the naive random walk strategy is the regeneration property: everything is reset once the agent returns to the original state. It makes the computation of recursive formulas possible. Unfortunately, the negative feedback algorithm relies on the past information and does not have such a regeneration property. The local version of negative feedback mitigates this issue: despite being non-Markovian, regeneration can still happen at any time n at which Kmin^(n)_i = d_i.
Theorem 3.1. For any given starting node i, it holds
E_{π_loc}[N_j | X_0 = i] ≤ E_{π_rw}[N_j | X_0 = i] (7)
for any node j ̸ = i in the graph.
Theorem 3.1 says that, on average, the local negative feedback algorithm takes a smaller expected number of excursions (outside the starting state) to visit any other state. Therefore, a local modification at state i indeed improves the exploration efficiency, i.e., it creates a stronger tendency to visit other states rather than to return to the initial state.

Figure 2: Bottom left: Clique graph with an arbitrary initial state. Bottom right: a tree graph with the root node as the initial state.
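Theorem 3.1 can be checked empirically. The following minimal sketch (ours) estimates E[N_j | X_0 = i] on a small test graph of our own choosing, under the plain random walk and under the local negative feedback variant.

```python
import random
from collections import defaultdict

def excursions_until_hit(adj, i, j, local_neg, rng):
    """N_j: the number of excursions outside i before node j is first visited.
    If local_neg, the 'favor least' rule is applied only at the starting node i."""
    counts = defaultdict(int)       # counts[k] = moves i -> k so far (only used at i)
    x, excursions = i, 1            # the first excursion starts immediately
    while True:
        if x == i and local_neg:
            m = min(counts[k] for k in adj[i])
            nxt = rng.choice([k for k in adj[i] if counts[k] == m])
            counts[nxt] += 1
        else:                        # everywhere else: plain random walk
            nxt = rng.choice(adj[x])
        x = nxt
        if x == j:
            return excursions
        if x == i:                   # returned to i: the next excursion begins
            excursions += 1

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}  # small test graph (ours)
rng = random.Random(0)
for local in (False, True):
    est = sum(excursions_until_hit(adj, 0, 3, local, rng) for _ in range(100_000)) / 100_000
    print("local" if local else "rw", round(est, 3))       # local estimate should be smaller
```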
Cover Time Improvement
As described in the previous sections, a direct analytical analysis of the cover time is not an easy job for the negative feedback strategy, whose non-Markovian mechanism makes the computation prohibitively hard. Fortunately, however, we are able to show that the negative feedback strategy is strictly better than the random walk strategy, E_{π_neg}[T_C] < E_{π_rw}[T_C], when the graph admits certain special structures. Star Graph. There is a central node (state) 0 and it connects to n leaf nodes. The starting position is node 0. See Figure 2 for a graphical illustration. It is easy to see that the degree of node 0 is n and the degree of each leaf node is 1.
Theorem 4.1. In the star graph with n ≥ 2, it holds that E_{π_neg}[T_C] = 2n − 1 and E_{π_rw}[T_C] = 2n(∑_{i=1}^{n} 1/i) − 1. Hence, the cover time of the negative feedback policy is strictly smaller than that of the random walk policy.
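The closed forms in Theorem 4.1 are easy to verify by simulation. The sketch below (ours) exploits that on the star graph every move from a leaf returns to the centre, so only the centre's choice matters.

```python
import random

def star_cover_time(n, neg, rng):
    """One walk on the star graph: node 0 (centre) plus leaves 1..n, starting at 0."""
    counts = [0] * (n + 1)           # counts[j] = number of moves 0 -> j so far
    seen = set()
    t = 0
    while len(seen) < n:
        if neg:                      # 'favor least' over the n leaves
            m = min(counts[1:])
            leaf = rng.choice([j for j in range(1, n + 1) if counts[j] == m])
        else:                        # uniform random leaf
            leaf = rng.randrange(1, n + 1)
        counts[leaf] += 1
        seen.add(leaf)
        t += 1                       # step 0 -> leaf
        if len(seen) < n:
            t += 1                   # forced step leaf -> 0 before the next trip
    return t

n, runs, rng = 8, 50_000, random.Random(0)
H_n = sum(1 / i for i in range(1, n + 1))
print("neg:", sum(star_cover_time(n, True, rng) for _ in range(runs)) / runs,
      "theory:", 2 * n - 1)
print("rw :", sum(star_cover_time(n, False, rng) for _ in range(runs)) / runs,
      "theory:", 2 * n * H_n - 1)
```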
Path. All (n + 1) nodes are aligned in a line. Node i (̸ = 0 or n) is connected to i − 1 and i + 1. The initial state is state 0. In this graph, nodes 0 and n have degree 1. All other nodes have degree 2.
Theorem 4.2. In the path graph with n ≥ 2, it holds that E_{π_neg}[T_C] < n^2 ≡ E_{π_rw}[T_C]. Hence the cover time of the negative feedback policy is strictly smaller than that of the random walk policy.
Clique Graph. It is a graph of n nodes such that every pair of nodes has an edge between them. It is not hard to see that every node has degree n − 1.
Theorem 4.3. In the clique graph with n ≥ 3, the strict inequality E_{π_neg}[T_C] < E_{π_rw}[T_C] holds.
Tree Graph. It is a graph with no cycles. The tree has depth H and each non-leaf node has at most b child nodes.
Remark 4.1. In fact, every RL environment can be reformulated as a tree graph if we treat every state-action trajectory as a single node. This idea is usually adopted in counter-factual regret minimization (CFR, Zinkevich et al. (2007)).
Balanced b-ary Tree Graph. It is a tree graph with depth H and each non-leaf node has exactly b children.
It can be seen that the balanced b-ary tree is a special case of a general tree. In the literature, the following result on the cover time under the random walk policy was established in the 1990s.

Proposition 4.4 (Aldous (1991)). For a balanced b-ary tree of depth H, the cover time of the random walk algorithm is asymptotically 2H^2 b^{H+1} (log b)/(b − 1) as H → ∞.
We first show that, under the negative feedback algorithm, there is an upper bound on the number of visits to each node. Therefore, the algorithm does not waste too much time on already visited nodes before all nodes have been visited at least once.
Theorem 4.5. In the tree graph, under negative feedback algorithm, each node is visited at most 2(b + 1)H times before all nodes have been visited at least once.
By counting the total number of nodes, a direct application of Theorem 4.5 leads to the following: in the balanced b-ary tree graph with depth H, it holds that E_{π_neg}[T_C] ≤ 2(b + 1)H·(b^{H+1} − 1)/(b − 1).
A more refined analysis will give us the following result.
Theorem 4.6. In the balanced b-ary tree graph with depth H, under negative feedback algorithm, it holds
E_{π_neg}[T_C] ≤ 4H·((b + 1)/(b − 1))·b^H.
Compared with Proposition 4.4, the negative feedback algorithm is asymptotically H times faster than the random walk algorithm for any fixed b. It is also asymptotically log b faster for any fixed H. Therefore, the negative feedback algorithm improves the search efficiency in terms of both tree width and tree depth. In practice, for many board games which can have large action spaces and form very deep trees, the random walk exploration is a less efficient strategy according to our theoretical explanation from the cover time perspective.
Moreover, 4H·((b + 1)/(b − 1))·b^H is actually also a worst-case bound on T_C (in addition to the bound on the expectation E[T_C]) under the proposed algorithm. By contrast, 2H^2 b^{H+1} (log b)/(b − 1) is only an upper bound on the expectation E[T_C]. In other words, with non-vanishing probability (in some extreme cases), T_C can be exponentially large under the random walk policy as the number of nodes grows.
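For a sense of scale, the two expressions can simply be tabulated. The short script below (ours) compares the Theorem 4.6 bound against the Aldous (1991) asymptotic; their ratio works out to H·b·log(b)/(2(b + 1)).

```python
from math import log

def neg_bound(b, H):
    """Theorem 4.6 upper bound on E[T_C] under negative feedback."""
    return 4 * H * (b + 1) / (b - 1) * b ** H

def rw_asymptotic(b, H):
    """Aldous (1991) asymptotic cover time of the random walk."""
    return 2 * H ** 2 * b ** (H + 1) * log(b) / (b - 1)

for b, H in [(2, 10), (3, 8), (5, 6)]:
    ratio = rw_asymptotic(b, H) / neg_bound(b, H)
    print(b, H, round(ratio, 2))   # equals H*b*log(b)/(2*(b+1)), growing with H and b
```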
Connections and Discussions
In the previous sections, we established properties of the negative feedback algorithm on finite-node graphs and showed that it indeed improves the cover times of several important graphs. Below, we make connections to the reinforcement learning field and give practical implications for why the negative feedback strategy is so interesting and important to many popular RL algorithms. Before going into detailed discussions, we would like to clarify that we are not trying to propose any new RL algorithm in this paper. The negative feedback algorithm considered here is just a counterpart/non-Markovian extension of the random walk. It does not rely on any Markov decision process setting or any Bellman equation-related assumptions Puterman (1990).
Connection with ϵ-Greedy Methods
The negative feedback exploration strategy can easily be incorporated into any existing reinforcement learning algorithm. For example, in ϵ-greedy types of methods (Sutton, 1995; Wunder et al., 2010), we can replace the random action selection by the negative feedback strategy.

With probability 1 − ϵ, the agent adopts a learned policy to exploit the current possible maximum reward. In the literature, such a learner can be either a model-based method (UCRL2 (Thomas et al., 2010), UCRL2B (Fruit et al., 2020), etc.) or a model-free method (Q-learning (Watkins and Dayan, 1992), SARSA (Sutton and Barto, 2018), etc.). With probability ϵ, an exploration policy is used for exploring the entire state space, which helps escape local optima. As discussed in the previous sections, the negative feedback algorithm is indeed a better strategy than the random walk from a theoretical perspective. As a result, the negative feedback strategy can theoretically improve the efficiency of any ϵ-greedy-type RL algorithm in tree-like environments.
Connection with RL Exploration Methods
A fundamental problem in reinforcement learning is how to explore the state space faster, especially when the environment provides only sparse rewards or even no reward before reaching the final state. This question has received a lot of attention, with approaches such as intrinsic-reward learning and count-based exploration built on N(s, a), the cumulative count of choosing a at state s up to the current round. In other words, the negative feedback algorithm considered in our paper plays an important role in learning unknown transition probabilities. Our theory explains why popular RL exploration methods prefer the "favor least" mechanism to the simple random walk strategy.
Connection with UCB method
The upper confidence bound (UCB) algorithm (Auer, 2002; Auer et al., 2002) is probably one of the most famous methods for balancing exploration and exploitation. At time n and state s, the agent chooses the best action according to the following criterion,

arg max_a { r̄(s, a) + c·sqrt(log n / N(s, a)) }, (8)

where r̄(s, a) is the sample average of the rewards obtained by choosing action a at state s, and N(s, a) is again the cumulative count of choosing a at state s up to time n. In many scenarios like Grid World or chess board games, the reward is very sparse. Therefore, the reward estimate r̄(s, a) ≡ 0 most of the time, and (8) reduces to arg max_a sqrt(log n / N(s, a)), which is equivalent to arg min_a N(s, a). The latter criterion is exactly the negative feedback algorithm. Hence, by the previous theorems, we know the UCB algorithm is indeed theoretically better than naive random action selection in terms of exploration efficiency in very sparse reward environments.
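A minimal sketch (ours) of a UCB1-style rule in the spirit of Eq. (8); the function, the constant c and the example counts are hypothetical illustrations, not the paper's code. With all-zero reward estimates the score is monotone decreasing in the count, so the rule coincides with arg min_a N(s, a), i.e. negative feedback.

```python
import math
import random

def ucb_action(counts, rewards, n, c=2.0, rng=random.Random(0)):
    """Pick arg max of the UCB score; ties are broken uniformly at random."""
    def score(a):
        if counts[a] == 0:
            return float("inf")      # unvisited actions are tried first
        return rewards[a] / counts[a] + c * math.sqrt(math.log(n) / counts[a])
    best = max(score(a) for a in range(len(counts)))
    return rng.choice([a for a in range(len(counts)) if score(a) == best])

# Sparse-reward case: all reward sums are zero, so the choice is a least-count action.
print(ucb_action(counts=[3, 1, 1, 2], rewards=[0.0, 0.0, 0.0, 0.0], n=7))
```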
Connection with Monte Carlo Tree Search
In computer science, Monte Carlo tree search (MCTS, Browne et al. (2012); Silver et al. (2016)) is a heuristic search algorithm for some kinds of decision processes, most notably those adopted in software that plays board games. In that context, MCTS is usually used to solve the game tree. MCTS consists of four main steps: selection, expansion, simulation and backpropagation.

Recall that, in the expansion step, MCTS always randomly chooses an unvisited node rather than a visited one. This shares exactly the same spirit as the negative feedback algorithm. Moreover, in the selection step, the agent usually uses the Upper Confidence Trees criterion (UCT, Browne et al. (2012); Couëtoux et al. (2011)),

arg max_a { w_a/n_a + c·sqrt(log N / n_a) }

(n_a is the number of simulations after choosing action a; w_a is the number of wins after choosing action a; N is the number of simulations after the current node) to select the successive child nodes. If the tuning constant c → +∞, then UCT also reduces to arg min_a n_a, which is exactly how the negative feedback algorithm chooses the next action. Therefore, our new theory (partially) explains why the combination of the selection and expansion steps in MCTS is more efficient and effective than naive random tree search.
Non-tabular Cases
Up to now, we have only focused on graphs with finitely many nodes (i.e. discrete-state RL environments). One may wonder whether the negative feedback algorithm can be extended to non-tabular cases (i.e. where the state space is continuous instead of discrete). Below, we provide an approximate version of the negative feedback algorithm, without theoretical justification. For an arbitrary state s and action a, we define the cumulative approximate visiting number at time n as

N^(n)_approx(s, a) = Σ_{t′=1}^{n−1} κ(s_{t′}, s)·1{a_{t′} = a}. (9)

Here the kernel κ(s_1, s_2) quantifies the similarity between two states. If states s_1 and s_2 are close, the value of κ(s_1, s_2) is close to 1; otherwise, κ(s_1, s_2) is close to 0. For example, in the Euclidean space R^2, a state s can be represented by a two-dimensional coordinate (s_x, s_y). The kernel function can simply be chosen as the indicator function κ(s_1, s_2) := 1{|s_{1x} − s_{2x}| ≤ δ and |s_{1y} − s_{2y}| ≤ δ}, where δ is a tuning parameter which adjusts the affinity level. The agent then chooses the action with the smallest approximate visiting number. That is,

π^(n)_approx(s) = a, if a = arg min_{a′} N^(n)_approx(s, a′), (10)

where ties are broken randomly.
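A minimal sketch (ours) of Eqs. (9)–(10) with the box kernel from the text; the `history` format, the action names and the value of δ are illustrative assumptions.

```python
import random
import numpy as np

def kappa(s1, s2, delta=0.5):
    """Box kernel: 1 if the two states are within delta in every coordinate, else 0."""
    return float(np.all(np.abs(np.asarray(s1) - np.asarray(s2)) <= delta))

def approx_counts(history, s, actions, delta=0.5):
    """N_approx(s, a) of Eq. (9): kernel-weighted visit counts of (state, action) pairs."""
    return {a: sum(kappa(s_t, s, delta) for s_t, a_t in history if a_t == a)
            for a in actions}

def neg_feedback_action(history, s, actions, rng, delta=0.5):
    """Eq. (10): pick an action with the smallest approximate count, ties at random."""
    n = approx_counts(history, s, actions, delta)
    m = min(n.values())
    return rng.choice([a for a in actions if n[a] == m])

history = [((0.1, 0.2), "up"), ((0.15, 0.25), "up"), ((2.0, 2.0), "down")]
print(neg_feedback_action(history, (0.12, 0.22), ["up", "down"], random.Random(0)))
```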
Conclusion
In this work, we study the cover time problem for a non-Markovian algorithm, the negative feedback strategy, which is based on the "favor least" principle. To our knowledge, this is the first theoretical study of its kind, rather than an empirical/synthetic one. We show why the negative feedback algorithm is better than the naive random walk policy. Specifically, we establish that the local version of the negative feedback algorithm leads to a smaller expected number of excursions to visit any node in an arbitrary graph. We also establish that the negative feedback algorithm has a smaller cover time on many special graphs, including the star graph, the clique graph, tree graphs, etc. Connections are made with several important RL algorithms, including maximum-entropy exploration, UCB and MCTS methods. Various experimental results support our new theories and findings. The results presented in this work may help practitioners understand different exploration strategies from a mathematical angle. Theoretical analyses of cover time comparisons on more complex graph structures or in continuous-state environments are possible directions for future work.
Appendices of "A Cover Time Study of a non-Markovian Algorithm"
Experimental results, additional explanations and proofs of technical results are all collected in this appendix.
A Experimental Studies
In this section, we numerically compare the negative feedback algorithm and the random walk strategy in various RL environments. The detailed settings are given below.
A.1 Settings
Grid1Dim: an environment with n states lying on a line. An agent in state i (2 ≤ i ≤ n − 1) can move to either state i − 1 or i + 1. It can only move to 2 (or n − 1) if it is in state 0 (or n). The starting state is randomly chosen from 0 to n.

GridCircle: an environment with n states forming a circle. An agent in state i (1 ≤ i ≤ n) can move to either state i − 1 or i + 1. Here state −1 (or n + 1) is treated as n (or 0). The starting state is again uniformly randomly chosen.

Grid2dim: a two-dimensional grid world with n_1 rows and n_2 columns. At each state (i, j) (1 ≤ i ≤ n_1, 1 ≤ j ≤ n_2), the agent can go up (down, left, right), i.e., it transfers to the next state (i − 1, j) ((i + 1, j), (i, j − 1), (i, j + 1)). If the next state is out of the boundary (i.e., i + 1 > n_1, i − 1 < 1, j + 1 > n_2 or j − 1 < 1), then it stays in the previous state. Again the starting state is uniformly randomly generated. For simplicity, we set n_1 = n_2 = grid ∈ {5, 10, 20, 30, 40}. (A simulation sketch for this environment is given after this list.)

Grid3dim: a three-dimensional grid world with n_1 rows, n_2 columns and n_3 heights. At each state (i, j, k) (1 ≤ i ≤ n_1, 1 ≤ j ≤ n_2, 1 ≤ k ≤ n_3), the agent can go up (down, left, right, forward, back), i.e., it transfers to the next state (i − 1, j, k) ((i + 1, j, k), (i, j − 1, k), (i, j + 1, k), (i, j, k − 1), (i, j, k + 1)). If the next state is out of the boundary (i.e., i + 1 > n_1, i − 1 < 1, j + 1 > n_2, j − 1 < 1, k + 1 > n_3 or k − 1 < 1), then it stays in the previous state. The starting state is still uniformly randomly generated. For illustration purposes, we set n_1 = n_2 = n_3 = grid ∈ {4, 6, 8, 10, 12}.

Tree: a tree-like graph. The depth is n and each node has m = 3 children. The root node is numbered 0. Each non-leaf node i connects to the children i·m + 1, i·m + 2, . . . , (i + 1)·m. The root node is the starting state.

BarBell: the graph consists of two cliques of size n. The nodes of the first clique are numbered from 1 to n; the nodes of the second clique are numbered from n + 1 to 2n. There is an extra link connecting nodes n and n + 1. The starting state is randomly chosen from 1 to 2n.

Taxi: a 5 by 5 two-dimensional grid space, shown on the right of Figure 3. The yellow icon represents a taxi, which can move up (down, left, right). If it meets a borderline in boldface, it remains in the previous state. There are four target locations labeled "R", "G", "Y", "B". A passenger randomly appears at one of these four positions, and his/her destination is also a random one of these four locations. The taxi needs to first pick up the passenger and then take him/her to the destination. At the start, the car appears at a random position in the 5 × 5 grid space and the passenger is waiting at "R", "G", "Y" or "B".

Tower of Hanoi: a classical game. There are four discs and three pegs. Initially, the discs are placed on the leftmost peg, as shown in the middle of Figure 3. The game requires the player to move all discs to the rightmost peg so that a smaller disc always sits on a larger disc. In each move, it is only allowed to put a disc onto an empty peg or onto a larger disc.

MultiRoom: a two-dimensional grid world in which n identical rooms are connected. The bottom-right grid of one room transits to the top-left grid of the next room. The black grids are not accessible. In each state, the agent can go in one of four directions; it remains in the previous state if it meets the borderline or a black grid. At the start, the agent is placed at the upper-left grid.

Tic-tac-toe: two players take turns marking the spaces of a three-by-three grid with crosses (X) and noughts (O). The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row is the winner. Otherwise, the game ends in a draw.
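For concreteness, here is a minimal sketch (ours) of the Grid2dim cover-time comparison. As a simplification, the negative feedback counts are indexed by the resulting state, so the (up to two) wall-bumping actions that keep the agent in place share one count.

```python
import random
from collections import defaultdict

def grid2dim_neighbors(i, j, n):
    """Grid2dim moves: up/down/left/right; stepping off the board keeps the state."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) if 1 <= a <= n and 1 <= b <= n else (i, j) for (a, b) in cand]

def cover_time(n, neg, rng):
    start = (rng.randrange(1, n + 1), rng.randrange(1, n + 1))
    counts = defaultdict(int)
    visited, x, t = {start}, start, 0
    while len(visited) < n * n:
        nbrs = grid2dim_neighbors(*x, n)
        if neg:   # negative feedback over the outcomes of the four actions
            m = min(counts[(x, k)] for k in nbrs)
            nxt = rng.choice([k for k in nbrs if counts[(x, k)] == m])
        else:     # random walk: uniform action
            nxt = rng.choice(nbrs)
        counts[(x, nxt)] += 1
        x, t = nxt, t + 1
        visited.add(x)
    return t

rng = random.Random(0)
for neg in (False, True):
    print("neg" if neg else "rw ",
          sum(cover_time(5, neg, rng) for _ in range(200)) / 200)
```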
A.2 Results
For Grid World-type environments, we compare the negative feedback algorithm, the random walk algorithm and the temporally extended/persistent algorithm (see the definition in Section 2) with p(z) ∝ 1/z. From Figure 4, we can see that negative feedback is universally better than the random walk strategy. The temporally persistent method has a smaller cover time in the 1-dimensional environment but becomes worse in the 2- and 3-dimensional cases.
For non-Grid World-type environments, we only compare the negative feedback and random walk algorithms, without considering the temporally persistent strategy. This is because the latter method makes the agent easily get stuck at certain states and thus delays the whole exploration procedure. In the Tree graph, we can see that the negative feedback strategy improves the cover time significantly over the random walk strategy as the tree gets deeper. In the Taxi and Tower of Hanoi environments, the negative feedback algorithm also needs much less time to reach the target state than the random walk policy (see Figure 4). In the Tic-tac-toe game, we plot the curves of "number of games played" versus "number of visited end states". (An "end state" is a leaf node at which the game ends.) We can see that the negative feedback algorithm takes far fewer games to explore the same number of end states. In other words, compared to the random walk strategy, the negative feedback method can learn all possible final situations of the Tic-tac-toe game much faster. All these phenomena corroborate the theory developed in Section 4.
Lastly, in two-dimensional continuous-space search problems, Figure 4 shows that the negative feedback algorithm also improves the cover time over the random walk policy. This sheds light on possible future work on theoretical comparisons between different policies in non-discrete RL problems.
B More Detailed Connections to RL Problems
In the reinforcement learning (RL) problem, an agent must interact with the environment by taking actions and observing their consequences. When the agent starts acting in an environment, it usually does not have any prior knowledge regarding the task it needs to tackle (Amin et al., 2021). The ability of the agent to explore the environment fast is a key factor in achieving higher cumulative rewards. Efficient exploration has been acknowledged as an important area in adaptive control for decades, starting with multi-armed bandit problems (Thompson, 1933; Lattimore and Szepesvári, 2020). It has also been a hot topic in the RL literature (Sutton, 1995) since the 1990s.
The categorization of exploration methods in RL can be described as in Table 1. The two main categories are "Reward-free" approaches and "Reward-based" approaches. In the former category, the methods do not take into account reward in their action-selection criteria. By contrast, in the latter one, the methods select the action based on the rewards returned by the environment.
Table 1: The categorization of exploration strategies in RL. The main focus of this work is on reward-free strategies.

Category | Sub-categories
Reward-free | Blind; Intrinsically Motivated
Reward-based | Randomized action selection; Probability matching; Optimism Based
Moreover, there are two sub-categories of "Reward-free" strategies: "Blind" exploration and "Intrinsically-motivated" exploration. Blind methods explore the environment solely based on random action selection; the agent is not guided by the past information history. This technique is the most basic, simple and effective type of method in the literature. Its special cases include the random walk (Thrun, 1992), the ϵ-greedy method (Caironi and Dorigo, 1994; Sutton, 1995), the temporally extended ϵz-greedy method (Dabney et al., 2020), etc. In contrast to blind exploration, intrinsically-motivated exploration methods utilize some intrinsic structure of the environment to encourage exploring the entire space. Many such methods aim at minimizing the agent's prediction uncertainty or error (Schmidhuber, 1991a,b; Pathak et al., 2017); others pursue the maximization of space coverage Machado et al.

On the other hand, there are three sub-categories of "Reward-based" strategies: "Randomized action selection", "Probability matching" and "Optimism (bonus) based" exploration. Specifically, randomized action selection assigns action selection probabilities to the admissible actions based on value functions/rewards (e.g. the Boltzmann distribution (Watkins, 1989; Lin, 1992), value-difference based exploration (Tokic, 2010; Tokic and Palm, 2011)) or the learned policies (e.g. policy gradient (Kakade, 2001; Silver et al., 2014)). Probability matching is also known as Thompson sampling (Thompson, 1933); instead of directly selecting the action with the highest expected return, it maintains a posterior distribution over its beliefs regarding the optimal action. Posterior Sampling for Reinforcement Learning (PSRL, Strens (2000)) and its variants (Osband et al., 2013, 2019) have been shown to be successful in many RL tasks. One of the most popular sub-categories of exploration methods is the optimism (bonus)-based method (Kaelbling et al., 1996), where the algorithm employs different bonus calculation techniques to encourage the choice of actions that lead to a higher level of uncertainty and, consequently, novel or informative states, e.g., Upper Confidence Bounds (UCB, Strehl et al. (2006); Auer and Ortner (2006)).

In reinforcement learning problems, a common phenomenon is that the environment provides very sparse rewards, that is, the agent can observe (positive) rewards only if it reaches a very few specific target states. For example, in the Grid World game (Stolle and Precup, 2002; Tizhoosh, 2005), the robot moves up, down, left or right on a grid board, and positive feedback is only given once it reaches the target grid. In a board game like Tic-tac-toe (Beck, 2008; Abu Dalffa et al., 2019), two players take turns marking the spaces of a three-by-three grid with crosses (X) and noughts (O), and the reward is not given until one player wins or a draw happens. Therefore, fast exploration of the entire environment space without knowledge of extrinsic rewards plays a key role in the success of an RL agent/strategy. An interesting research question naturally arises: with reasonable theoretical guarantees, could we improve on "blind" methods by using a reward-free strategy that chooses actions a bit more smartly, rather than taking actions completely at random (i.e. random walk)? To be more formal, we want to find an exploration strategy whose cover time (i.e. the expectation of the first time of covering all possible states in the environment) is smaller than that of pure random search under certain environment assumptions or structures.
To our knowledge, there is no literature directly addressing the cover time comparison between a non-trivial exploration strategy and the pure random walk strategy. Despite this, some strategies in the blind exploration category touch upon cover time minimization. Dabney et al. (2020) introduce a temporally-extended ϵ-greedy method, which utilizes the option (a strict generalization of action) strategy, allowing actions to be repeated for consecutive time steps. They empirically show its advantage over random search in the Deep Sea and Mountain Car environments. Jinnai et al. (2019a,b) compute the Laplacian matrix and try to maximize its second smallest eigenvalue (known as the algebraic connectivity (Fiedler, 1973)). Their methodology is heuristic, and the construction of the Laplacian requires sufficiently many action-state samples.
As a result, the problem considered in the current work is fundamental. Our results (partially) address why the "favor least" strategy can perform much more efficiently than the random walk from a cover-time viewpoint.
C Proof of Results in Section 2.
Proof of Proposition 2.2. We first code the 7 states of this toy grid world as follows: "Start": 0, upper-left grid: 1, bottom-left grid: 2, middle grid: 3, middle-right grid: 4, upper-right grid: 5, and "End": 6. We next define $T_i$ ($i \in \{0, 1, 2, 3, 4, 5\}$) as the expected first time to reach state 6 starting from state i.
Next, we let a be the probability of performing an action for 1 consecutive time (i.e. p(z = 1) = a) and b be the probability of performing an action for 2 consecutive times (i.e. p(z = 2) = b = 1 − a). There is no need to consider performing an action for more than 2 consecutive times, since the maze is only of size three by three. (If the agent repeats the same action ≥ 3 times, it will hit the wall and waste time.)
By directly examining the specific structure of this grid world, we can get the following recursive formula.
$$T_0 = \frac{1}{3}T_1 + \frac{1}{3}T_2 + \frac{a}{3}T_3 + \frac{b}{3}T_4 + a + 2b, \tag{11}$$
$$T_1 = aT_0 + bT_2 + a + 2b, \qquad T_2 = aT_0 + bT_1 + a + 2b, \qquad T_3 = \frac{1}{2}T_0 + \frac{1}{2}T_4 + a + 2b,$$
$$T_4 = \frac{1}{3}T_5 + \frac{a}{3}T_3 + \frac{b}{3}T_0 + \frac{2(a+2b)}{3} + \frac{1}{3}, \qquad T_5 = aT_4 + a + 2b.$$
By simplification, we know
$$T_1 = T_2 = \frac{a}{1-b}T_0 + \frac{a+2b}{1-b}; \qquad \Big(1 - \frac{a}{3}\Big)T_4 = \frac{a}{3}T_3 + \frac{b}{3}T_0 + (a+2b) + \frac{1}{3}; \qquad T_3 = \frac{1}{2}(T_0 + T_4) + (a+2b).$$
Furthermore, we have
$$(1 - a/2)\,T_4 = (a/6 + b/3)\,T_0 + (a/3 + 1)(a+2b) + \frac{1}{3}, \qquad (2-a)\,T_3 = (1 - a/3 + b/3)\,T_0 + (3 - 2a/3)(a+2b) + \frac{1}{3}. \tag{12}$$
Solving the above equations for $T_3$, $T_4$ and plugging back into (11), we get
$$T_0 = \frac{\dfrac{2(a+2b)}{3(1-b)} + (a+2b)\,\dfrac{3a - 2a^2/3 + 2ab/3 + 2b + 1/3}{3(2-a)} + a + 2b}{1 - \dfrac{2a}{3(1-b)} - \dfrac{a - a^2/3 + 2ab/3 + 2b^2/3}{3(2-a)}}. \tag{14}$$
Noticing that (14) is a decreasing function of the variable a, the smallest $T_0$ is 23, attained at a = 1 and b = 0. This implies that the random walk policy is always better than the temporally persistent policy in this environment.
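As a numerical sanity check, the recursion (11) can be solved directly as a 6-by-6 linear system. The following Python sketch (an illustration added for verification, assuming the state coding above; it is not part of the original derivation) confirms that $T_0$ is decreasing in a, with minimum 23 at a = 1:

import numpy as np

def expected_task_time(a):
    # Solve the recursion (11) for the hitting times T_0, ..., T_5 under the
    # temporally persistent policy with p(z = 1) = a and p(z = 2) = b = 1 - a.
    b = 1.0 - a
    r = a + 2 * b  # expected duration contributed by one (possibly repeated) action
    P = np.array([
        [0,   1/3, 1/3, a/3, b/3, 0  ],   # equation for T_0
        [a,   0,   b,   0,   0,   0  ],   # T_1
        [a,   b,   0,   0,   0,   0  ],   # T_2
        [1/2, 0,   0,   0,   1/2, 0  ],   # T_3
        [b/3, 0,   0,   a/3, 0,   1/3],   # T_4
        [0,   0,   0,   0,   a,   0  ],   # T_5
    ])
    c = np.array([r, r, r, r, 2 * r / 3 + 1 / 3, r])
    return np.linalg.solve(np.eye(6) - P, c)[0]

for a in [0.2, 0.5, 0.8, 1.0]:
    print(a, expected_task_time(a))        # decreasing in a
assert abs(expected_task_time(1.0) - 23.0) < 1e-9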
Proof of Proposition 2.1. For the negative-feedback algorithm, we first consider a restricted environment in which states 1 and 2 are not accessible. There are then only five reachable states, namely 0, 3, 4, 5, 6. Under this restricted environment, we can compute $T'_0$ by listing all possible paths in the following table. Based on Table 2, we can easily compute that $T'_0 = 95/12$ in the restricted environment. Moreover, we can see that the policy returns to state 0 at most 2 times (excluding the initial position); in other words, in the original environment, states 1 and 2 can be visited at most three times. Therefore, $\mathbb{E}[T_{start,right}] = 95/12$, which gives $T_0 \le T'_0 + (2+2) \times 3 < 23$. This proves that the negative feedback algorithm is better than the random walk algorithm in this task.
#   Path                          Probability
1   (0,3,5,6)                     1/6
2   (0,3,4,5,4,6)                 1/12
3   (0,3,4,5,4,3,0,3,4,6)         1/12
4   (0,3,0,3,4,6)                 1/6
5   (0,3,0,3,4,5,4,6)             1/12
6   (0,3,0,3,4,5,4,3,4,6)         1/24
7   (0,3,0,3,4,5,4,3,0,3,4,6)     1/24
8   (0,3,4,3,0,3,4,6)             1/24
9   (0,3,4,3,0,3,4,5,4,6)         1/24
10  (0,3,4,3,0,3,0,3,4,6)         1/24
11  (0,3,4,3,0,3,0,3,4,5,4,6)     1/24
12  (0,3,0,3,4,3,4,6)             1/24
13  (0,3,0,3,4,3,4,5,4,6)         1/24
14  (0,3,0,3,4,3,0,3,4,6)         1/24
15  (0,3,0,3,4,3,0,3,4,5,4,6)     1/24
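The value $T'_0 = 95/12$ can be reproduced from the table with a short Python computation (an illustrative check added here; note that the expectation weights each path by its length, i.e. the number of states listed, which is the convention matching 95/12):

from fractions import Fraction

paths = [  # (path, probability), as in Table 2
    ((0,3,5,6), Fraction(1,6)),              ((0,3,4,5,4,6), Fraction(1,12)),
    ((0,3,4,5,4,3,0,3,4,6), Fraction(1,12)), ((0,3,0,3,4,6), Fraction(1,6)),
    ((0,3,0,3,4,5,4,6), Fraction(1,12)),     ((0,3,0,3,4,5,4,3,4,6), Fraction(1,24)),
    ((0,3,0,3,4,5,4,3,0,3,4,6), Fraction(1,24)), ((0,3,4,3,0,3,4,6), Fraction(1,24)),
    ((0,3,4,3,0,3,4,5,4,6), Fraction(1,24)), ((0,3,4,3,0,3,0,3,4,6), Fraction(1,24)),
    ((0,3,4,3,0,3,0,3,4,5,4,6), Fraction(1,24)), ((0,3,0,3,4,3,4,6), Fraction(1,24)),
    ((0,3,0,3,4,3,4,5,4,6), Fraction(1,24)), ((0,3,0,3,4,3,0,3,4,6), Fraction(1,24)),
    ((0,3,0,3,4,3,0,3,4,5,4,6), Fraction(1,24)),
]
assert sum(p for _, p in paths) == 1
print(sum(p * len(path) for path, p in paths))  # 95/12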
D Proof of Result in Section 3
Proof of Theorem 3.1. Suppose that node i has K neighbours in the graph, denoted as a 1 , . . . , a K , so that d i = K. For k = 1, . . . , K, we define
$$p_{a_k} = \mathbb{P}\left(T_j \le V_1 \mid X_0 = i,\, X_1 = a_k\right), \tag{15}$$
the probability of visiting node j before returning to node i. Then a simple recursive argument shows that under the random walk dynamics,
$$\mathbb{E}_{\pi_{rw}}[N_j \mid X_0 = i] = \frac{1}{K}\sum_{k=1}^{K}\Big[p_{a_k}\cdot 1 + (1-p_{a_k})\big(1 + \mathbb{E}_{\pi_{rw}}[N_j \mid X_0 = i]\big)\Big] \tag{16}$$
$$= 1 + \mathbb{E}_{\pi_{rw}}[N_j \mid X_0 = i]\cdot\frac{1}{K}\sum_{k=1}^{K}(1-p_{a_k}) = 1 + \mathbb{E}_{\pi_{rw}}[N_j \mid X_0 = i]\Big(1 - \frac{1}{K}\sum_{k=1}^{K}p_{a_k}\Big).$$
This means that, under the random walk dynamics,
$$\mathbb{E}_{\pi_{rw}}[N_j \mid X_0 = i] = \frac{K}{\sum_{k=1}^{K}p_{a_k}}. \tag{17}$$
The modified local negative feedback algorithm is not Markovian, but there exist regeneration points as well. In fact, these are the time points when the agent returns to node i and by that time the process has moved from state i to each one of its K neighbours an equal number of times (i.e. those time points n such that $\mathrm{Kmin}^{(n)}_i = d_i$).
Beginning at each such regeneration point, the sequence of the next K actions (steps out of the node i) will be a random permutation $\pi = (\pi_1, \dots, \pi_K)$ of the action set $\{a_1, \dots, a_K\}$, after which a regeneration point is reached once again. Therefore, a recursive argument shows that
$$\mathbb{E}_{\pi_{loc}}[N_j \mid X_0 = i] = \sum_{\pi=(\pi_1,\dots,\pi_K)}\sum_{k=1}^{K}\frac{1}{K}\frac{1}{K-1}\cdots\frac{1}{K-k+1}\prod_{m=1}^{k-1}(1-p_{a_{\pi_m}})\,p_{a_{\pi_k}}\cdot k \tag{18}$$
$$\qquad + \prod_{k=1}^{K}(1-p_{a_k})\Big(K + \mathbb{E}_{\pi_{loc}}[N_j \mid X_0 = i]\Big).$$
We can reorganize the first term in the right hand side of (18) by collecting together all terms in the sum which correspond to the same k = 1, . . . , K. This puts the first term in the right hand side of (18) in the form
$$\sum_{k=1}^{K} k\,\frac{(k-1)!}{K(K-1)\cdots(K-k+1)}\sum_{|J|=k-1}\sum_{l\notin J} p_{a_l}\prod_{j\in J}(1-p_{a_j}) = \sum_{k=1}^{K}\frac{1}{\binom{K}{k}}\sum_{|J|=k-1}\sum_{l\notin J}\big(1-(1-p_{a_l})\big)\prod_{j\in J}(1-p_{a_j})$$
$$= \sum_{k=1}^{K}\frac{1}{\binom{K}{k}}(K-k+1)\sum_{|J|=k-1}\prod_{j\in J}(1-p_{a_j}) - \sum_{k=1}^{K}\frac{1}{\binom{K}{k}}\,k\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})$$
$$= \sum_{k=0}^{K-1}\frac{1}{\binom{K}{k+1}}(K-k)\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j}) - \sum_{k=1}^{K}\frac{1}{\binom{K}{k}}\,k\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})$$
$$= 1 - K\prod_{j=1}^{K}(1-p_{a_j}) + \sum_{k=1}^{K-1}\Big(\frac{(K-k)(k+1)!(K-k-1)!}{K!} - \frac{k\,k!(K-k)!}{K!}\Big)\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})$$
$$= 1 - K\prod_{j=1}^{K}(1-p_{a_j}) + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j}).$$
Solving (18) now shows that, under the modified local negative feedback algorithm,
$$\mathbb{E}_{\pi_{loc}}[N_j \mid X_0 = i] = \frac{1 + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})}{1 - \prod_{j=1}^{K}(1-p_{a_j})}. \tag{19}$$
Our goal is to show that the expected value (19) under the modified negative feedback dynamics is never larger than the expected value (17) under the random walk dynamics. That is, we need to show that
$$\frac{K}{\sum_{j=1}^{K}p_{a_j}} \ge \frac{1 + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})}{1 - \prod_{j=1}^{K}(1-p_{a_j})}. \tag{20}$$
This is, of course, the same as
$$1 - \frac{\sum_{j=1}^{K}(1-p_{a_j})}{K} \le \frac{1 - \prod_{j=1}^{K}(1-p_{a_j})}{1 + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}(1-p_{a_j})}. \tag{21}$$
Denoting x j = 1 − p aj , j = 1, . . . , K, we need to prove that
$$1 - \frac{\sum_{j=1}^{K}x_j}{K} \le \frac{1 - \prod_{j=1}^{K}x_j}{1 + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j}. \tag{22}$$
Alternatively, we need to prove that
$$\frac{\sum_{j=1}^{K}x_j}{K} \ge \frac{\sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j + \prod_{j=1}^{K}x_j}{1 + \sum_{k=1}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j} = \frac{\sum_{k=1}^{K}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j}{\sum_{k=0}^{K-1}\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j}. \tag{23}$$
It is, of course, sufficient to prove the comparison for each individual k. That is, it is enough to prove that for each k = 1, . . . , K we have
$$\frac{\sum_{j=1}^{K}x_j}{K} \ge \frac{\frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j}{\frac{1}{\binom{K}{k-1}}\sum_{|J|=k-1}\prod_{j\in J}x_j}. \tag{24}$$
Note that for k = 1, the inequality (24) becomes an identity. To prove it for k = 2, . . . , K, we denote
$$S_k = \frac{1}{\binom{K}{k}}\sum_{|J|=k}\prod_{j\in J}x_j, \quad k = 1, \dots, K,$$
so that (24) states that
$$S_1 \ge \frac{S_k}{S_{k-1}}, \quad k = 2, \dots, K. \tag{25}$$
The Maclaurin inequalities state that
$$S_1 \ge S_2^{1/2} \ge \cdots \ge S_K^{1/K}; \tag{26}$$
see e.g. Cvetkovski (2012). It follows from (26) that for any k = 2, . . . , K we have
$$S_1 S_{k-1} \ge S_{k-1}^{1/(k-1)}\,S_{k-1} = S_{k-1}^{k/(k-1)} = \Big(S_{k-1}^{1/(k-1)}\Big)^{k} \ge \Big(S_{k}^{1/k}\Big)^{k} = S_k,$$
hence proving (25).
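The inequality (20) between the closed forms (17) and (19) can also be verified numerically. The following Python sketch (an added illustration; it simply evaluates both expressions for random success probabilities $p_{a_k}$) confirms that the negative feedback value never exceeds the random walk value:

import itertools, math, random

def prod(xs):
    out = 1.0
    for v in xs:
        out *= v
    return out

def E_rw(p):
    # Equation (17): K / sum_k p_{a_k}.
    return len(p) / sum(p)

def E_loc(p):
    # Equation (19) for the modified local negative feedback algorithm.
    K, x = len(p), [1.0 - q for q in p]
    num = 1.0 + sum(
        sum(prod(x[j] for j in J) for J in itertools.combinations(range(K), k))
        / math.comb(K, k) for k in range(1, K))
    return num / (1.0 - prod(x))

random.seed(0)
for _ in range(1000):
    K = random.randint(2, 6)
    p = [random.uniform(0.01, 1.0) for _ in range(K)]
    assert E_loc(p) <= E_rw(p) + 1e-12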
E Proof of Result in Section 4
Proof of Theorem 4.1. First of all, it can be seen that the agent must move to node 0 whenever it is at a node $i \in \{1, \dots, n\}$. For the random walk policy, the question is then reduced to computing the expected number $N_0$ of moves out of state 0 needed to visit all child nodes 1 to n starting from state 0. This becomes a standard coupon collector's problem, and it is known that $\mathbb{E}[N_0] = n\big(1 + \frac{1}{2} + \cdots + \frac{1}{n}\big)$. Then $\mathbb{E}_{\pi_{rw}}[T_C] = 2\,\mathbb{E}[N_0] - 1 = 2n\big(\sum_{i=1}^{n}\frac{1}{i}\big) - 1$, since all edges except the last one must be traversed in both directions.
For the negative-feedback policy, it can be observed that (i) a node $i \in \{1, \dots, n\}$ can only be visited if the previous state is 0; (ii) node i will be visited twice only if all states $j \ne i$ have been visited at least once. Therefore, we conclude that $\mathbb{E}_{\pi_{neg}}[T_C] = 2n - 1$.
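A short simulation illustrates Theorem 4.1. The Python sketch below (an added illustration, not the authors' code) estimates the cover time of the star graph under both policies and compares it with the closed-form values $2n\sum_{i=1}^{n} 1/i - 1$ and $2n - 1$:

import random

def cover_time_star(n, policy, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [0] * (n + 1)          # moves from the center to each leaf
        visited, steps = {0}, 0
        while len(visited) < n + 1:
            if policy == "rw":
                leaf = rng.randint(1, n)
            else:                        # negative feedback: favor least-chosen edges
                m = min(counts[1:])
                leaf = rng.choice([i for i in range(1, n + 1) if counts[i] == m])
            counts[leaf] += 1
            visited.add(leaf)
            steps += 1                   # center -> leaf
            if len(visited) < n + 1:
                steps += 1               # leaf -> center
        total += steps
    return total / trials

n = 8
H = sum(1 / i for i in range(1, n + 1))
print(cover_time_star(n, "rw"), 2 * n * H - 1)   # simulation vs 2n H_n - 1
print(cover_time_star(n, "neg"), 2 * n - 1)      # simulation vs 2n - 1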
Proof of Theorem 4.2. For any integer $n \ge 1$, we write $\mathbb{E}_\pi[T_C(n)]$ for the cover time of the path of length n under policy π, and we prove the statement by induction. If n = 2, we can easily list all possible paths for the negative-feedback policy: the path (0,1,2) occurs with probability 1/2 and the path (0,1,0,1,2) with probability 1/2 (see Table 3). Then $\mathbb{E}_{\pi_{neg}}[T_C(2)] = 2 \times \frac{1}{2} + 4 \times \frac{1}{2} = 3 < 4$, so the statement is true when n = 2. Suppose that $\mathbb{E}_{\pi_{neg}}[T_C(n)] < n^2$ for all $n \le n_0$; we need to prove that $\mathbb{E}_{\pi_{neg}}[T_C(n)] < n^2$ holds for $n = n_0 + 1$ as well.
Note that the initial state is 0. After the first move, the agent is in state 1. We then consider the graph restricted to the states $\{1, \dots, n\}$ (i.e. remove state 0 and the edge connecting states 0 and 1). By induction, we know that $\mathbb{E}_{\pi_{neg}}[T'_C(n-1)] < (n-1)^2$, where $\mathbb{E}_{\pi_{neg}}[T'_C(n-1)]$ is the cover time in this restricted graph. Next, we use the key observation in Lemma E.1 that state 0 can be visited at most n − 1 times before all states are visited. This leads to $\mathbb{E}_{\pi_{neg}}[T_C(n)] \le 1 + \mathbb{E}_{\pi_{neg}}[T'_C(n-1)] + 2(n-1) < 1 + (n-1)^2 + 2(n-1) = n^2$. Here, 1 corresponds to the first move, and 2(n − 1) corresponds to the fact that the edge between nodes 0 and 1 can be traversed in each direction at most n − 1 times.
Finally, it is easy to see that $\mathbb{E}_{\pi_{rw}}[T_C(n)] = n^2$ for the random walk policy. This concludes the proof.
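The path-graph comparison can likewise be checked by simulation. The Python sketch below (an added illustration; the negative feedback rule is implemented as "take the less-traversed outgoing edge, breaking ties uniformly") estimates both cover times:

import random

def cover_time_path(n, policy, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [[0, 0] for _ in range(n + 1)]  # per node: [left moves, right moves]
        pos, visited, steps = 0, {0}, 0
        while len(visited) < n + 1:
            if pos == 0:
                d = 1
            elif pos == n:
                d = 0
            elif policy == "rw":
                d = rng.randint(0, 1)
            else:                                 # negative feedback
                c = counts[pos]
                d = rng.randint(0, 1) if c[0] == c[1] else (0 if c[0] < c[1] else 1)
            counts[pos][d] += 1
            pos += 1 if d == 1 else -1
            visited.add(pos)
            steps += 1
        total += steps
    return total / trials

n = 10
print(cover_time_path(n, "rw"))    # close to n^2 = 100
print(cover_time_path(n, "neg"))   # strictly below n^2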
Lemma E.1. In a path graph with n ≥ 1, state 0 can be visited at most n − 1 times (excluding time 0) before the agent reaches state n.
Proof of Lemma E.1. For node $i \in \{1, \dots, n-1\}$, we define
$$x_i(t) = N_{i,i-1}(t) - N_{i,i+1}(t),$$
where t indexes the visits to state 0 and $N_{i,j}(t)$ denotes the number of moves from node i to node j (j = i − 1 or i + 1) up to the t-th visit to the initial state 0. When t = 0, $x_i(0) \equiv 0$ for every node i. Under the negative feedback policy, $x_i(t)$ can only take values in $\{-1, 0, 1\}$. Moreover, considering any two consecutive visits to state 0 and examining the changes in the $x_i(t)$'s, it is not hard to see that
$$\sum_{i\in\{1,\dots,n-1\}} x_i(t+1) - \sum_{i\in\{1,\dots,n-1\}} x_i(t) = 1.$$
See Figure 5 for more intuition.
Note that $\sum_{i\in\{1,\dots,n-1\}} x_i(t)$ is upper bounded by n − 1. At the start, $\sum_{i\in\{1,\dots,n-1\}} x_i(0) = 0$. Therefore, $T_{\max} \le (n-1)/1 = n-1$, where $T_{\max}$ is defined to be the number of visits to state 0 before reaching state n. This completes the proof.
Proof of Theorem 4.3. It is straightforward to see that $\mathbb{E}_\pi[T_C] = 1 + \sum_{i=1}^{n-1} T^\pi_{i,i+1}$ for any policy π, where $T^\pi_{i,i+1}$ is the expectation of $N^\pi_{i,i+1}$, the first time to reach the (i + 1)-th unvisited node starting from the i-th unvisited node.
Under the random walk policy, $N^{\pi_{rw}}_{i,i+1}$ is a random variable following a geometric distribution with parameter $\frac{n-i}{n-1}$. Then $T^{\pi_{rw}}_{i,i+1} = \frac{n-1}{n-i}$.

Figure 5: A graphical illustration of two consecutive visits to state 0 in the path graph. The agent starts from state 0 and moves right until it reaches state 3; it then moves left to state 1 and turns right again. The agent keeps taking left or right moves until it gets back to state 0. This illustrates the relation $\sum_{i\in\{1,\dots,n-1\}} x_i(t+1) - \sum_{i\in\{1,\dots,n-1\}} x_i(t) = 1$.
Under the negative-feedback policy, during each move from a visited node j, the success probability of visiting an unvisited node is at least $\frac{n-i}{n-1}$. This is because any edge between node j and the unvisited nodes has not been traversed yet, so the (n − i) unvisited nodes must belong to the set $\mathrm{Smin}^{(n)}_j$; in addition, the cardinality $\mathrm{Kmin}^{(n)}_j$ is trivially upper bounded by n − 1. Therefore, the success probability is at least (n − i)/(n − 1). Thus $N^{\pi_{neg}}_{i,i+1}$ is stochastically smaller than $N^{\pi_{rw}}_{i,i+1}$, which gives $T^{\pi_{neg}}_{i,i+1} = \mathbb{E}[N^{\pi_{neg}}_{i,i+1}] \le \frac{n-1}{n-i}$. (Here we say a random variable X is stochastically smaller than a random variable Y if their cumulative distribution functions satisfy $F_X(t) \ge F_Y(t)$ for any $t \in \mathbb{R}$.)
Next, we need to show that there is at least one i such that $T^{\pi_{neg}}_{i,i+1} < \frac{n-1}{n-i}$ holds. We prove this by contradiction. If not, then $T_{i,i+1} = \frac{n-1}{n-i}$ for all $i \in \{1, \dots, n-1\}$, which implies that each node j has been visited at most once. This is due to the fact that the success probability would be at least $(n - i_{j,2})/(n-2)$ when j is visited for the second time, where $i_{j,2}$ represents the number of visited nodes before j is visited twice; then $T^{\pi_{neg}}_{i_{j,2},\,i_{j,2}+1} < \frac{n-1}{n-i_{j,2}}$, which violates our assumption that $T_{i,i+1} = \frac{n-1}{n-i}$ for all i. Hence, all states must be visited at most once. On the other hand, $\mathbb{E}_{\pi_{neg}}[T_C] = \sum_i \frac{n-1}{n-i} > n$, which indicates that, with non-zero probability, at least one node has been visited multiple times. This leads to the contradiction.
Therefore, $\mathbb{E}_{\pi_{neg}}[T_C] < \mathbb{E}_{\pi_{rw}}[T_C]$ holds.
Proof of Theorem 4.5. We first show that the root node is visited at most bH times before all nodes have been visited at least once. To prove this, we rely on the following observation (Lemma E.2).
Lemma E.2. The root node is visited at most bh times before all nodes at depth h have been visited at least once.
Proof of Lemma E.2. We take an arbitrary node $i_h$ at depth h. Thanks to the tree structure, there exists a unique path connecting the root node $i_0$ and the node $i_h$, and the length of this path is exactly h. For notational simplicity, we write this path as $\{i_0, i_1, i_2, \dots, i_h\}$. Similar to the proof of Lemma E.1, we can define $x_j(t) = N_{i_j,i_{j-1}}(t) - N_{i_j,i_{j+1}}(t)$, where t indexes the visits to state 0 and $N_{i_j,i_{j-1}}(t)$ (resp. $N_{i_j,i_{j+1}}(t)$) is the number of moves from node $i_j$ to node $i_{j-1}$ (resp. $i_{j+1}$). By repeating the rest of the proof of Lemma E.1, we arrive at the conclusion that there are at most h moves from $i_1$ to $i_0$ before node $i_h$ is reached. Since the root node $i_0$ has at most b children, the root node can be visited at most bh times before all nodes at depth h have been visited once. This concludes the lemma.
We take h = H in Lemma E.2 in particular and conclude that the root node is visited at most bH times. Next, we prove that a node at depth h can be visited at most (b + 1)(H + h) times before all nodes are visited at least once. By the previous argument, there are at most H moves from the root node $i_0$ to node $i_1$ ($i_1$ being an arbitrary node at depth 1) and at most H moves from node $i_1$ to the root node $i_0$. By the property of the negative feedback algorithm, there are at most H + 1 moves from node $i_1$ to each of its children $i_2$; hence, the number of moves from an arbitrary child $i_2$ to node $i_1$ is at most H + 1. (This is due to the additional observation that the number of moves from a child node to its parent node is no larger than the number of moves from the parent node to the child node.) To sum up, node $i_1$ at depth 1 can be visited at most (b + 1)(H + 1) times. By repeating this procedure, we find that the number of moves from node $i_{h+1}$ at depth h + 1 to its parent node $i_h$ is at most H + h, and the number of moves from node $i_{h-1}$ to its child node $i_h$ is at most H + h − 1. Therefore, node $i_h$ at depth h can be visited at most (b + 1)(H + h) times. Taking h = H, we finally conclude that each node is visited at most 2(b + 1)H times before all nodes are covered.
Proof of Theorem 4.6. In the proof of Theorem 4.5, we have already shown that the number of moves from a node $i_{h-1}$ (at depth h − 1) to its child node is at most H + h − 1. Any leaf node is at depth H, so a single leaf node is visited at most H + H − 1 ≤ 2H times. Therefore, the cover time is at most
$$T_C \le \#\{\text{non-leaf nodes}\}\cdot 2(b+1)H + \#\{\text{leaf nodes}\}\cdot 2H \le 2(b+1)H\cdot\frac{b^H - 1}{b-1} + 2Hb^H \le 4H\,\frac{b+1}{b-1}\,b^H. \tag{27}$$
This completes the proof.
Figure 2: Upper left: Star graph with initial state at 0. Upper right: Path graph with initial state at 0.
(Chentanez et al., 2004; Bellemare et al., 2016), curiosity-driven algorithms (Pathak et al., 2017; Burda et al., 2018), etc. Among those, the maximum-entropy exploration policy (Hazan et al., 2019) has aroused special interest in recent years. The policy needs to iteratively learn the unknown MDP: for an un-explored / less-explored node s (see the definition of an m-known state in Hazan et al. (2019)), it selects the action $\arg\min_a N(s, a)$,
Figure 3: First fig: taxi environment. Second fig: Hanoi environment. Third fig: multi-room environment. Fourth fig: tic-tac-toe game.
Figure 4: Cover times under different policies in various discrete-state environments described in Section A.1. The last two figures show the cover time comparison in 2D continuous-state environments, with D = 5 and M ∈ {5, 10, 20, 30, 40}.
Continuous 2D Space: This is a [0, D] × [0, D] Euclidean space. The agent randomly walks along either the x-axis or the y-axis direction. Two walk types are considered: one is Brownian motion (the step size follows the standard normal distribution); the other is a Lévy flight (Viswanathan et al., 2000) (the step size follows a Pareto distribution with degree 2). The whole space is equally divided into M × M sub-regions (each sub-region is a square with side length δ = D/M). The cover time here is defined to be the expected first time of visiting all sub-regions.
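A minimal Python simulation of this continuous setting is sketched below (an added illustration; boundary handling by clamping to [0, D] is an assumption made here, and the Lévy steps use Pareto-distributed magnitudes of degree 2 with random signs):

import random

def cover_time_2d(D=5.0, M=10, walk="brownian", trials=50, seed=2):
    rng = random.Random(seed)
    delta = D / M
    total = 0
    for _ in range(trials):
        x, y = D / 2, D / 2
        seen, steps = set(), 0
        while len(seen) < M * M:
            step = rng.gauss(0, 1) if walk == "brownian" else \
                   rng.choice([-1, 1]) * rng.paretovariate(2)
            if rng.random() < 0.5:               # move along a random axis
                x = min(max(x + step, 0.0), D)
            else:
                y = min(max(y + step, 0.0), D)
            seen.add((min(int(x / delta), M - 1), min(int(y / delta), M - 1)))
            steps += 1
        total += steps
    return total / trials

print(cover_time_2d(walk="brownian"), cover_time_2d(walk="levy"))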
That is, $\mathbb{E}[T^{\pi_{per}(p)}_{task}] \ge \mathbb{E}[T^{\pi_{rw}}_{task}]$.

Proposition 2.2. In the same toy grid world as in Proposition 2.1, we have $\mathbb{E}[T^{\pi_{rw}}_{task}] \le \mathbb{E}[T^{\pi_{per}(p)}_{task}]$ for any distribution p.
References

Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235-256, 2002.
Chen Avin, Michal Kouckỳ, and Zvi Lotker. How to explore a fast-changing world (cover time of a simple random walk on evolving graphs). In International Colloquium on Automata, Languages, and Programming, pages 121-132. Springer, 2008.
József Beck. Combinatorial Games: Tic-Tac-Toe Theory, volume 114. Cambridge University Press, 2008.
Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. Advances in Neural Information Processing Systems, 29, 2016.
Andrei Z Broder and Anna R Karlin. Bounds on the cover time. Journal of Theoretical Probability, 2(1):101-120, 1989.
Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-43, 2012.
Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018.
Pierguido VC Caironi and Marco Dorigo. Training Q-agents, 1994.
Nuttapong Chentanez, Andrew Barto, and Satinder Singh. Intrinsically motivated reinforcement learning. Advances in Neural Information Processing Systems, 17, 2004.
Adrien Couëtoux, Jean-Baptiste Hoock, Nataliya Sokolovska, Olivier Teytaud, and Nicolas Bonnard. Continuous upper confidence trees. In International Conference on Learning and Intelligent Optimization, pages 433-445. Springer, 2011.
Zdravko Cvetkovski. Newton's inequality, Maclaurin's inequality. In Inequalities, pages 117-119. Springer, 2012.
Will Dabney, Georg Ostrovski, and André Barreto. Temporally-extended ϵ-greedy exploration. arXiv preprint arXiv:2006.01782, 2020.
Amir Dembo, Jay Rosen, and Ofer Zeitouni. Limit law for the cover time of a random walk on a binary tree. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 57, pages 830-855. Institut Henri Poincaré, 2021.
Uriel Feige. A tight lower bound on the cover time for random walks on graphs. Random Structures and Algorithms, 6(4):433-438, 1995.
Miroslav Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(2):298-305, 1973.
Ronan Fruit, Matteo Pirotta, and Alessandro Lazaric. Improved analysis of UCRL2 with empirical Bernstein inequality. arXiv preprint arXiv:2007.05456, 2020.
Peter Grassberger. How fast does a random walk cover a torus? Physical Review E, 96(1):012115, 2017.
Brieuc Guinard and Amos Korman. Tight bounds for the cover times of random walks with heterogeneous step lengths. arXiv preprint arXiv:2002.05443, 2020.
Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In International Conference on Machine Learning, pages 2681-2691. PMLR, 2019.
Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Tsu-Jui Fu, and Chun-Yi Lee. Diversity-driven exploration strategy for deep reinforcement learning. Advances in Neural Information Processing Systems, 31, 2018.
Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.
Yuu Jinnai, Jee Won Park, David Abel, and George Konidaris. Discovering options for exploration by minimizing cover time. In International Conference on Machine Learning, pages 3130-3139. PMLR, 2019a.
Yuu Jinnai, Jee Won Park, Marlos C Machado, and George Konidaris. Exploration in reinforcement learning with deep covering options. In International Conference on Learning Representations, 2019b.
Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
Jeff D Kahn, Nathan Linial, Noam Nisan, and Michael E Saks. On the cover time of random walks on graphs. Journal of Theoretical Probability, 2(1):121-128, 1989.
Sham M Kakade. A natural policy gradient. Advances in Neural Information Processing Systems, 14, 2001.
Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2):209-232, 2002.
Raksha Kumaraswamy, Matthew Schlegel, Adam White, and Martha White. Context-dependent upper-confidence bounds for directed exploration. Advances in Neural Information Processing Systems, 31, 2018.
Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293-321, 1992.
László Lovász and Peter Winkler. A note on the last new vertex visited by a random walk. Journal of Graph Theory, 17(5):593-596, 1993.
Marlos C Machado, Marc G Bellemare, and Michael Bowling. A Laplacian framework for option discovery in reinforcement learning. In International Conference on Machine Learning, pages 2295-2304. PMLR, 2017a.
Marlos C Machado, Clemens Rosenbaum, Xiaoxiao Guo, Miao Liu, Gerald Tesauro, and Murray Campbell. Eigenoption discovery through the deep successor representation. arXiv preprint arXiv:1710.11089, 2017b.
Peter Matthews. Covering problems for Brownian motion on spheres. The Annals of Probability, pages 189-199, 1988.
Roger McFarlane. A survey of exploration strategies in reinforcement learning. McGill University, 2018.
Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. Advances in Neural Information Processing Systems, 26, 2013.
Ian Osband, Benjamin Van Roy, Daniel J Russo, Zheng Wen, et al. Deep exploration via randomized value functions. Journal of Machine Learning Research, 20(124):1-62, 2019.
Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning, pages 2778-2787. PMLR, 2017.
Karl Pearson. The problem of the random walk. Nature, 72(1865):294-294, 1905.
Martin L Puterman. Markov decision processes. Handbooks in Operations Research and Management Science, 2:331-434, 1990.
Jürgen Schmidhuber. Curious model-building control systems. In Proc. International Joint Conference on Neural Networks, pages 1458-1463, 1991a.
Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pages 222-227, 1991b.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, pages 387-395. PMLR, 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Frank Spitzer. Principles of Random Walk, volume 34. Springer Science & Business Media, 2013.
Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International Symposium on Abstraction, Reformulation, and Approximation, pages 212-223. Springer, 2002.
Alexander L Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881-888, 2006.
Malcolm Strens. A Bayesian framework for reinforcement learning. In ICML, volume 2000, pages 943-950, 2000.
Richard S Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. Advances in Neural Information Processing Systems, 8, 1995.
Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285-294, 1933.
Sebastian B Thrun. Efficient exploration in reinforcement learning. 1992.
Hamid R Tizhoosh. Reinforcement learning based on actions and opposite actions. In International Conference on Artificial Intelligence and Machine Learning, volume 414, 2005.
Michel Tokic. Adaptive ε-greedy exploration in reinforcement learning based on value differences. In Annual Conference on Artificial Intelligence, pages 203-210. Springer, 2010.
Michel Tokic and Günther Palm. Value-difference based exploration: adaptive control between epsilon-greedy and softmax. In Annual Conference on Artificial Intelligence, pages 335-346. Springer, 2011.
GM Viswanathan, V Afanasyev, Sergey V Buldyrev, Shlomo Havlin, MGE Da Luz, EP Raposo, and H Eugene Stanley. Lévy flights in random searches. Physica A: Statistical Mechanics and its Applications, 282(1-2):1-12, 2000.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. 1989.
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279-292, 1992.
Michael Wunder, Michael L Littman, and Monica Babes. Classes of multiagent Q-learning dynamics with epsilon-greedy exploration. In ICML, 2010.
Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. Advances in Neural Information Processing Systems, 20, 2007.
Table 2: In the restricted 5-state grid world, all possible paths along with their corresponding probabilities under the negative feedback policy.
Table 3: In the path graph with n = 2, all possible paths along with their corresponding probabilities under the negative feedback policy: the paths (0,1,2) and (0,1,0,1,2), each with probability 1/2.
| [] |
[
"COMBINING PARTICLE AND TENSOR-NETWORK METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS VIA SKETCHING",
"COMBINING PARTICLE AND TENSOR-NETWORK METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS VIA SKETCHING"
] | [
"Yian Chen ",
"Yuehaw Khoo "
] | [] | [] | In this paper, we propose a general framework for solving high-dimensional partial differential equations with tensor networks. Our approach offers a comprehensive solution methodology, wherein we employ a combination of particle simulations to update the solution and re-estimations of the new solution as a tensor-network using a recently proposed tensor train sketching technique. Our method can also be interpreted as an alternative approach for performing particle number control by assuming the particles originate from an underlying tensor network. We demonstrate the versatility and flexibility of our approach by applying it to two specific scenarios: simulating the Fokker-Planck equation through Langevin dynamics and quantum imaginary time evolution via auxiliary-field quantum Monte Carlo. | 10.48550/arxiv.2305.17884 | [
"https://export.arxiv.org/pdf/2305.17884v2.pdf"
] | 258,959,060 | 2305.17884 | 0fa668e405057eee614343c462265a2876fc06fa |
COMBINING PARTICLE AND TENSOR-NETWORK METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS VIA SKETCHING
Yian Chen
Yuehaw Khoo
COMBINING PARTICLE AND TENSOR-NETWORK METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS VIA SKETCHING
In this paper, we propose a general framework for solving high-dimensional partial differential equations with tensor networks. Our approach offers a comprehensive solution methodology, wherein we employ a combination of particle simulations to update the solution and re-estimations of the new solution as a tensor-network using a recently proposed tensor train sketching technique. Our method can also be interpreted as an alternative approach for performing particle number control by assuming the particles originate from an underlying tensor network. We demonstrate the versatility and flexibility of our approach by applying it to two specific scenarios: simulating the Fokker-Planck equation through Langevin dynamics and quantum imaginary time evolution via auxiliary-field quantum Monte Carlo.
Introduction
Partial differential equations (PDEs) are mathematical equations that describe a wide range of phenomena in various fields, including physics, engineering, biology, and finance. They are fundamental tools for modeling and understanding complex systems and their behavior over time. The importance of solving high-dimensional PDEs arises from the fact that many real-world problems involve intricate systems with numerous variables and intricate dependencies. However, traditional finite difference and finite element methods scale exponentially with the number of dimensions. To circumvent the curse of dimensionality, researchers propose to impose various low-complexity ansätze on the PDE solution to control the growth of parameters. For example, [17,38,39] propose to parametrize the unknown PDE solution with deep neural networks and optimize the corresponding variational problems instead. [2,9,10,19] approximate the differential operators with data-sparse low-rank and hierarchical matrices. [6] considers a low-rank matrix approximation to the solution of a time-evolving PDE.
The matrix product state (MPS), also known as the tensor train (TT), has emerged as a popular ansatz for representing solutions to many-body Schrödinger equations [8,37]. Recently, it has also been applied to study statistical mechanics systems where one needs to characterize the evolution of a many-particle system via Fokker-Planck type PDEs [11,12,15]. Despite the inherent high-dimensionality of these PDEs, the MPS/TT representation mitigates the curse-of-dimensionality challenge by representing a d-dimensional solution through the contraction of d tensor components. Consequently, it achieves a storage complexity of O(d).
To fully harness the potential of MPS/TT in solving high-dimensional PDEs, it is crucial to efficiently perform the following operations:
(1) Fast applications of the time-evolution operator G to an MPS/TT represented solution ϕ.
(2) Compression of the MPS/TT rank after applying G to ϕ, as the rank of Gϕ can be larger than that of ϕ.
While these operations can be executed with high numerical precision and O(d) time complexity when the PDE problem exhibits a specialized structure (particularly in 1D-like interacting many-body systems), general problems may necessitate exponential running time in the dimension d to perform these tasks.
On the other hand, particle-based methods employ a representation of ϕ as a collection of random walkers. The application of a time-evolution operator G to such a particle representation of ϕ can be achieved inexpensively through short-time Monte Carlo simulations. As time progresses, the variance of the particles, or random walkers, may increase, accompanied by a growth in the number of walkers. To manage both the variance and computational cost of the particle system, it is common to use importance sampling strategies.
Our contribution is to combine the best of both worlds. Specifically, we adopt the MPS/TT representation as an ansatz to represent the solution, and we conduct its time evolution by incorporating short-time Monte Carlo simulations. This integration of methodologies allows us to capitalize on the advantages offered by both approaches, leading to improved performance and broader applicability in solving high-dimensional PDEs. The improvement is two-fold. (1) From the viewpoint of improving tensor network methods, we simplify the application of a semigroup G to a MPS/TT, Gϕ, via Monte Carlo simulations. While the application of G using Monte Carlo is efficient, one needs a fast and accurate method to estimate an underlying MPS/TT from the random walkers. To address this requirement, we employ a recently developed parallel MPS/TT sketching technique, proposed by one of the authors, which enables estimation of the MPS/TT from the random walkers without the need for any optimization procedures. (2) From the viewpoint of improving Monte Carlo methods, we propose a novel approach to perform walker population control, by reducing the sum of random walkers to a low-rank MPS/TT. Furthermore, the MPS/TT structure possesses the capability to function as a generative model, enabling the generation of fresh samples from the solution. We demonstrate the success of our algorithm in both statistical and quantum mechanical scenarios for determining the transient solution of parabolic-type PDEs. In particular, we use our method to perform Langevin dynamics and auxiliary-field quantum Monte Carlo for systems that do not exhibit 1D orderings of the variables.
The rest of the paper is organized as follows. First, we introduce some preliminaries on tensor networks in Section 2. We discuss the proposed framework that combines Monte Carlo and MPS/TT in Section 3. In Section 4, we demonstrate applications of the proposed framework to two specific evolution systems: the ground state energy problem for quantum many-body systems and the density evolution problem obtained by solving the Fokker-Planck equation. The corresponding numerical experiments for the two applications are provided in Section 5. We conclude the paper with discussions in Section 6.
Background and Preliminaries
In order to combine particle-based simulations and tensor-network based approaches, we need to describe a few basic tools regarding tensor networks.
2.1. Tensor Networks and Notations. Our primary objective in this paper is to obtain a MPS/TT representation of the solution of the initial value problem (3.1) at any time t ≥ 0. Since the technique works for u(x, t) at any given t ≥ 0, we omit t from the expression and use u(x) to denote an arbitrary d-dimensional function:
$$u : X_1 \times X_2 \times \cdots \times X_d \to \mathbb{R}, \quad \text{where } x_1 \in X_1,\, x_2 \in X_2,\, \cdots,\, x_d \in X_d, \tag{2.1}$$
and the state spaces $X_1, \cdots, X_d \subseteq \mathbb{R}$.

Definition 1. We say a function u admits an MPS/TT representation with ranks or bond dimensions $(r_1, \dots, r_{d-1})$ if one can write
$$u(x_1, x_2, \dots, x_d) = \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} G_1(x_1,\alpha_1)\,G_2(\alpha_1,x_2,\alpha_2)\cdots G_d(\alpha_{d-1},x_d) \tag{2.2}$$
for all (x 1 , x 2 , . . . , x d ) ∈ X 1 × · · · × X d . We call the 3-tensor G k the k-th tensor core for the MPS/TT.
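For concreteness, evaluating u at a point from its cores amounts to a sequence of small matrix-vector products. The Python sketch below is an added illustration (cores are stored as 3-tensors with boundary bond dimension 1, an arbitrary storage convention assumed here) of evaluating (2.2):

import numpy as np

def mps_eval(cores, x):
    # Contract G_1[x_1] G_2[:, x_2, :] ... G_d[:, x_d] along the bond indices.
    v = np.ones(1)
    for k, G in enumerate(cores):
        v = v @ G[:, x[k], :]
    return v.item()

rng = np.random.default_rng(0)
d, n, r = 4, 2, 3    # dimensions, states per variable, bond dimension
shapes = [(1, n, r)] + [(r, n, r)] * (d - 2) + [(r, n, 1)]
cores = [rng.standard_normal(s) for s in shapes]
print(mps_eval(cores, [0, 1, 1, 0]))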
We present the tensor diagram depicting the MPS/TT representation in Fig. 2.1a. In this diagram, each tensor core is represented by a node, and it possesses one exposed leg that represents the degree of freedom associated with the corresponding dimension.

Definition 2. O admits a matrix product operator (MPO) representation with ranks or bond dimensions $(r_1, \dots, r_{d-1})$ if one can write
$$O(x_1, \dots, x_d; x'_1, \dots, x'_d) = \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} G_1(x_1,x'_1,\alpha_1)\,G_2(\alpha_1,x_2,x'_2,\alpha_2)\cdots G_d(\alpha_{d-1},x_d,x'_d) \tag{2.3}$$
for all $(x_1, x_2, \dots, x_d) \in X_1 \times \cdots \times X_d$ and $(x'_1, x'_2, \dots, x'_d) \in X'_1 \times \cdots \times X'_d$.
We call the 4-tensor G k the k-th tensor core for the MPO.
The tensor network diagram corresponding to the MPO representation is depicted in Fig. 2.1b. For more comprehensive discussions on tensor networks and tensor diagrams, we refer interested readers to [11]. Each tensor core has two exposed legs pointing upwards and downwards, respectively, indicating two free dimensions.
Finally, for two integers m, n ∈ N where n > m, we use the MATLAB notation m : n to denote the set {m, m + 1, · · · , n}. When working with high-dimensional functions, it is often convenient to group the variables into two subsets and think of the resulting object as a matrix. We call these matrices unfolding matrices. In particular, for k = 1, · · · , d − 1, we define the k-th unfolding matrix by u(x 1 , · · · , x k ; x k+1 , · · · , x d ) or u(x 1:k ; x k+1:d ), which is obtained by grouping the first k and last d − k variables to form rows and columns, respectively.
Tensor Network Operations.
In this subsection, we introduce several tensor network operations of high importance in our applications, using the example of MPS/TT. Similar operations can be extended to more general tensor networks.
2.2.1. Marginalization. To marginalize the MPS/TT representation of u defined in Definition 1, one can perform direct operations on each node G k . For instance, if the goal is to integrate out a specific variable x k , the operation can be achieved by taking the summation:
$$\sum_{x_k} G_k(\alpha_{k-1}, x_k, \alpha_k). \tag{2.4}$$
The overall computational cost of the marginalization process is at most O(d), depending on the number of variables that need to be integrated out.
2.2.2. Norm computation. In many applications, one needs to compute the norm
$$\|u\|_2^2 = \sum_{x_1,\dots,x_d} u(x_1, \dots, x_d)^2$$
for a MPS/TT. One can again accomplish this via operations on each node. In particular, one can first form the Hadamard product u ⊙ u in terms of two MPS/TTs, and then integrate out all variables $x_1, \dots, x_d$ of u ⊙ u to get $\|u\|_2^2$ using the marginalization of MPS/TT as described in Section 2.2.1. The complexity of forming the Hadamard product of two MPS/TTs is O(d), as mentioned in [27]. Therefore, the overall complexity of computing $\|u\|_2^2$ using the described approach is also O(d).
2.2.3. Sampling From MPS/TT Parametrized Probability Density. Given a density function $u(x_1, x_2, \dots, x_d)$ in MPS/TT format, it is possible to exploit the linear algebra structure to draw independent and identically distributed (i.i.d.) samples in O(d) time [14], thereby obtaining a sample $(y_1, y_2, \dots, y_d) \sim u$. This approach is derived from the following identity:
$$u(x_1, x_2, \dots, x_d) = u(x_1)\,u(x_2|x_1)\,u(x_3|x_2,x_1)\cdots u(x_d|x_{d-1},\dots,x_1), \tag{2.5}$$
where each
$$u(x_i|x_{i-1},\dots,x_1) = \frac{u(x_{1:i})}{u(x_{1:i-1})}$$
is a conditional distribution of u. We note that it is easy to obtain the marginals $u(x_1), \dots, u(x_{1:d-1})$ (hence the conditionals) in O(d) time. If we have such a decomposition (2.5), we can draw a sample y with O(d) complexity as follows. We first draw the component $y_1 \sim u(x_1)$. Then we move on and draw $y_2 \sim u(x_2|x_1 = y_1)$. We continue this procedure until we draw $y_d \sim u(x_d|x_{1:d-1} = y_{1:d-1})$.
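This procedure can be implemented in O(d) per sample by precomputing suffix contractions. The Python sketch below is an added illustration (assuming, for simplicity, componentwise nonnegative cores so that the contracted weights form valid unnormalized probabilities):

import numpy as np

def sample_mps(cores, rng):
    d = len(cores)
    s = [None] * (d + 1)                 # suffix contractions over x_{k+1:d}
    s[d] = np.ones(1)
    for k in range(d - 1, -1, -1):
        s[k] = cores[k].sum(axis=1) @ s[k + 1]
    v, sample = np.ones(1), []           # running prefix at the drawn states
    for k in range(d):
        w = np.einsum('a,anb,b->n', v, cores[k], s[k + 1])  # conditional weights
        xk = rng.choice(len(w), p=w / w.sum())
        sample.append(int(xk))
        v = v @ cores[k][:, xk, :]
    return sample

rng = np.random.default_rng(1)
d, n, r = 5, 2, 3
shapes = [(1, n, r)] + [(r, n, r)] * (d - 2) + [(r, n, 1)]
cores = [np.abs(rng.standard_normal(s)) for s in shapes]   # nonnegative density
print(sample_mps(cores, rng))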
2.3. Tensor-network sketching. In this subsection, we present a parallel method for obtaining the tensor cores for representing a d-dimensional function u(x) that is discretely-valued, i.e. each $x_j$ takes on finite values in a set $X_j$, as a MPS/TT. This is done via an MPS/TT sketching technique proposed in [24], where the key idea is to solve a sequence of core determining equations. Let $u_k : X_1 \times \cdots \times X_k \times \Gamma_k \to \mathbb{R}$, $k = 1, \dots, d-1$, be a set of functions such that $\mathrm{Range}(u_k(x_{1:k}; \gamma_k)) = \mathrm{Range}(u(x_{1:k}; x_{k+1:d}))$. In this case, a representation of u as in Definition 1 can be obtained by solving for $G_k$ from the following set of equations,
$$u_1(x_1,\gamma_1) = G_1(x_1,\gamma_1), \qquad u_k(x_{1:k},\gamma_k) = \sum_{\gamma_{k-1}\in\Gamma_{k-1}} u_{k-1}(x_{1:k-1},\gamma_{k-1})\,G_k(\gamma_{k-1},x_k,\gamma_k), \tag{2.6}$$
$$u(x) = \sum_{\gamma_{d-1}\in\Gamma_{d-1}} u_{d-1}(x_{1:d-1},\gamma_{d-1})\,G_d(\gamma_{d-1},x_d),$$
based on knowledge of the $u_k$'s.
However, (2.6) is still inefficient to solve since each $u_k$ is exponentially sized; moreover, such a size prevents it from being obtained or estimated in practice. Noticing that (2.6) is over-determined, we further reduce the row dimensions by applying a left sketching function to (2.6),
$$\sum_{x_1,\dots,x_{k-1}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,u_k(x_{1:k},\gamma_k) = \sum_{\gamma_{k-1}\in\Gamma_{k-1}} \Big[\sum_{x_{1:k-1}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,u_{k-1}(x_{1:k-1},\gamma_{k-1})\Big]\,G_k(\gamma_{k-1},x_k,\gamma_k), \tag{2.7}$$
where S k−1 : X 1 × · · · × X k−1 × Ξ k−1 → R is the left sketching function which compresses over variables x 1 , · · · , x k−1 .
Now, to obtain $u_k$ with $\mathrm{Range}(u_k(x_{1:k}; \gamma_k)) = \mathrm{Range}(u(x_{1:k}; x_{k+1:d}))$, we apply a right sketching over the dimensions $x_{k+1:d}$, i.e.
$$u_k(x_{1:k},\gamma_k) = \sum_{x_{k+1:d}} u(x_{1:k},x_{k+1:d})\,T_{k+1}(x_{k+1:d},\gamma_k), \tag{2.8}$$
where T k+1 : X k+1 × · · · × X d × Γ k → R is the right sketching function which compresses u by contracting out variables x k+1 , · · · , x d . Plugging such a u k into (2.6), we get
$$B_k[u](\xi_{k-1},x_k,\gamma_k) = \sum_{\gamma_{k-1}\in\Gamma_{k-1}} A_k[u](\xi_{k-1},\gamma_{k-1})\,G_k(\gamma_{k-1},x_k,\gamma_k), \tag{2.9}$$
where
$$A_k[u](\xi_{k-1},\gamma_{k-1}) = \sum_{x_{1:k-1}}\sum_{x_{k:d}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,u(x_{1:k-1},x_{k:d})\,T_k(x_{k:d},\gamma_{k-1}), \tag{2.10}$$
$$B_k[u](\xi_{k-1},x_k,\gamma_k) = \sum_{x_{1:k-1}}\sum_{x_{k+1:d}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,u(x_{1:k-1},x_k,x_{k+1:d})\,T_{k+1}(x_{k+1:d},\gamma_k), \tag{2.11}$$
and we can readily solve for G k .
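Concretely, once $A_k$ and $B_k$ have been assembled, each core is recovered by an ordinary (least-squares) linear solve. The following Python sketch is an added toy verification of this step, not the authors' implementation:

import numpy as np

def solve_core(A_k, B_k):
    # Solve A_k G_k = B_k (equation (2.9)) in the least-squares sense.
    # A_k: (|Xi|, |Gamma_{k-1}|); B_k: (|Xi|, |X_k|, |Gamma_k|).
    Xi, n_k, g_k = B_k.shape
    G, *_ = np.linalg.lstsq(A_k, B_k.reshape(Xi, n_k * g_k), rcond=None)
    return G.reshape(A_k.shape[1], n_k, g_k)

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 4))
G_true = rng.standard_normal((4, 2, 5))
B = np.einsum('xg,gnh->xnh', A, G_true)   # build B_k from a known core
assert np.allclose(solve_core(A, B), G_true, atol=1e-8)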
Many different types of sketch functions can be used, e.g. random tensor sketches or cluster basis sketches [1,36]. Taking a single $S_k(x_{1:k}, \xi_k)$ as an example, we choose a separable form $S_k(x_{1:k}, \xi_k) = h_1(x_1)\cdots h_k(x_k)$ for some $h_1, \dots, h_k$ (and similarly for the $T_k$'s) in order to perform fast tensor operations. When the state space is discrete, a random tensor sketch amounts to taking $h_1, \cdots, h_k$ to be random vectors of size $|X_1|, \dots, |X_k|$. We give special focus to the cluster basis sketch. Let $\{b_l\}_{l=1}^{n}$ be a set of single-variable basis functions; a cluster basis sketch with c clusters consists of choosing $S_k(x_{1:k}, \xi_k) \in \{b_{l_1}(x_{i_1})\cdots b_{l_c}(x_{i_c}) \mid (l_1,\dots,l_c) \in [n]^c,\ \{x_{i_1},\dots,x_{i_c}\} \subseteq \{x_1,\dots,x_k\}\}$, and also $T_k(x_{k:d}, \gamma_{k-1}) \in \{b_{l_1}(x_{i_1})\cdots b_{l_c}(x_{i_c}) \mid (l_1,\dots,l_c) \in [n]^c,\ \{x_{i_1},\dots,x_{i_c}\} \subseteq \{x_k,\dots,x_d\}\}$.
A similar construct is used in [29]. In this case, $A_k$ is of size $\binom{k-1}{c}n^c \times \binom{d-k+1}{c}n^c$, and $B_k$ is of size $\binom{k-1}{c}n^c \times |X_k| \times \binom{d-k+2}{c}n^c$. The most important property we look for is that the variances of $A_k[\hat u]$ and $B_k[\hat u]$ are small if $\hat u$ is an unbiased estimator of u.
Remark 1 (Rank of the MPS/TT representation). The rank of the MPS/TT may be too large due to oversketching. For instance, when using a cluster basis sketch with a fixed cluster size, the core determining matrices can grow polynomially with the total number of dimensions. However, the intrinsic rank of the MPS/TT may be small. To address this issue, we can utilize truncated singular value decompositions (SVD) of the matrices {A k [u]} d k=2 to define projectors that reduce the rank of the MPS/TT representation.
By performing a truncated SVD on each A k [u], we obtain a low-rank approximation of the unfolding matrix u(x 1:k−1 , x k:d ). This allows us to define projectors that effectively trim the rank of the MPS/TT. The truncation parameter in the SVD determines the level of approximation and controls the reduction in rank. Let
$$A_k[u](\xi_{k-1},\gamma_{k-1}) = \sum_{\alpha_{A,k-1}=1}^{r_{A,k-1}} U_{A,k}(\xi_{k-1},\alpha_{A,k-1})\,S_{A,k}(\alpha_{A,k-1},\alpha_{A,k-1})\,V^T_{A,k}(\alpha_{A,k-1},\gamma_{k-1}), \quad k = 2,\dots,d, \tag{2.12}$$
be the truncated SVD with rank $r_{A,k-1}$ for $A_k[u]$. By solving (2.9), we obtain an MPS/TT with tensor cores $\{G_k\}_{k=1}^{d}$ (see Fig. 2.2a). We can insert the projectors $\{V_{A,k}V^T_{A,k}\}_{k=2}^{d}$ between all cores to get the "trimmed" MPS/TT, as shown in Fig. 2.2b. We use thick legs to denote tensor contractions with large dimensionalities (the $\gamma_k$'s) and thin legs to denote tensor contractions with small dimensionalities (the $\alpha_{A,k}$'s). Then we can redefine the reduced tensor cores by grouping the tensor nodes:
$$\bar G_k := \sum_{\gamma_{k-1}}\sum_{\gamma_k} V^T_{A,k}(\alpha_{A,k-1},\gamma_{k-1})\,G_k(\gamma_{k-1},x_k,\gamma_k)\,V_{A,k+1}(\gamma_k,\alpha_{A,k}), \quad k = 2,\dots,d-1, \tag{2.13}$$
and
$$\bar G_1 := \sum_{\gamma_1} G_1(x_1,\gamma_1)\,V_{A,2}(\gamma_1,\alpha_{A,1}), \qquad \bar G_d := \sum_{\gamma_{d-1}} V^T_{A,d}(\alpha_{A,d-1},\gamma_{d-1})\,G_d(\gamma_{d-1},x_d).$$
The regrouping operations are highlighted with red dashed boxes in Fig. 2.2b. Now the new tensor core $\bar G_k$ is of shape $r_{A,k-1} \times |X_k| \times r_{A,k}$ (Fig. 2.2c). We thereby reduce the bond dimensions of the original MPS/TT from $(|\Gamma_1|, \dots, |\Gamma_{d-1}|)$ to $(r_{A,1}, \dots, r_{A,d-1})$.
Proposed Framework
In this section, we present a framework for solving initial value problems (IVPs) that arise in both statistical and quantum mechanical systems. In many applications, the evolution of a d-dimensional physical system in time is described by an IVP of the form
$$\frac{\partial \phi(x,t)}{\partial t} = -A\phi(x,t), \quad t \ge 0, \qquad \phi(x,0) = \phi_0(x), \tag{3.1}$$
where A is a positive-semidefinite operator that generates a semigroup $\{\exp(-At)\}_t$ [13,21,22], and $x = (x_1, x_2, \cdots, x_d)$ is a d-dimensional spatial point.
The solution of the IVP (3.1) can then be obtained by applying the semigroup operator $\exp(-At)$ to the initial function, i.e. $\phi(x,t) = \exp(-At)\phi_0(x)$ for all $t \ge 0$. Our goal is to obtain $\phi^* = \phi(\cdot,t)$ as $t \to \infty$. When (3.1) is a Fokker-Planck equation, $\phi^*$ corresponds to the equilibrium distribution of a Langevin dynamics. When (3.1) is the imaginary time evolution of a Schrödinger equation, $\phi^*$ is the lowest energy state wavefunction.
Our method alternates between the following steps:
(1) Apply the semigroup operator $\exp(-A\delta t)$ to the tensor-network approximation of the solution $\phi_{\theta_t}(x)$ using particle simulations for $\delta t > 0$, i.e.
$$\phi_{t+1}(x) = \exp(-A\delta t)\,\phi_{\theta_t}(x) = \mathbb{E}_{y\sim\mu}[f(x;y)] \approx \frac{1}{N}\sum_{i=1}^{N} f(x;x^i) =: \hat\phi_{t+1}(x), \tag{3.2}$$
where $\{x^i\}_{i=1}^{N} \subset \mathbb{R}^d$ is a collection of N i.i.d. samples according to a distribution µ (depending on the application; see Section 4), and $f(x;x^i)$ is a d-dimensional function with parameters $x^i$. In the traditional Monte Carlo simulation, f is simply the Dirac delta function at the given sample point, i.e. $f(x;x^i) = \delta(x - x^i)$.
(2) Estimate $\phi_{t+1}(x)$ as a tensor-network $\phi_{\theta_{t+1}}(x)$ from its particle approximation $\hat\phi_{t+1}(x)$ via the parallel TT-sketching method, which has linear time complexity with respect to the number of samples and constant time with respect to the dimension if distributed computing is used (Section 2.3). Often, certain normalization constraints (with respect to a certain norm ∥ · ∥) need to be enforced for $\phi_{\theta_{t+1}}$. In this case one simply adds an extra step, letting $\phi_{\theta_{t+1}} \leftarrow \phi_{\theta_{t+1}}/\|\phi_{\theta_{t+1}}\|_2$, which can be done with O(d) complexity (Section 2.2.2).

Notice that we introduce three versions of the state ϕ: $\phi_t$ is the ground truth state function at time t; $\hat\phi_t$ is a particle approximation of $\phi_t$; $\phi_{\theta_t}$ is a tensor-network representation of $\phi_t$. The significance of these two operations can be described as follows. In the first step, we use a particle-based simulation to bypass the need of applying $\exp(-A\delta t)\phi_{\theta_t}$ exactly, which may run into the curse of dimensionality. In particular, we generate $f(x;x^i)$ that can be represented easily as a low-rank TT. In the second step, the use of the sketching algorithm [24] bypasses a direct application of a recursive SVD-based compression scheme [27], which may run into $O(dN^2)$ complexity. Furthermore, since the variance of an empirical distribution $\mathrm{Var}(\hat\phi_{t+1})$ generally scales exponentially in d, one cannot apply a standard tensor compression scheme to $\hat\phi_{t+1}$ directly, as it tends to preserve the exponential statistical variance. Therefore it is crucial to use the techniques proposed in [24], which are designed to control the variance due to the empirical distribution $\hat\phi_{t+1}$, since they only involve estimation of low-order moments of $\hat\phi_{t+1}$. How to obtain the particle representation of the time evolution (3.1) depends on the application, and we present different schemes for the many-body Schrödinger and Fokker-Planck equations in Section 4. In this section, we focus on the details of the second step.
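In pseudocode, the overall scheme is a simple alternation of these two steps (the helper names below are hypothetical placeholders for the operations described above, not functions from an existing library):

def evolve(phi_tt, n_steps, dt, N):
    # phi_tt: current MPS/TT representation phi_{theta_t}
    for _ in range(n_steps):
        particles = simulate_particles(phi_tt, dt, N)  # step (1): apply exp(-A dt), eq. (3.2)
        phi_tt = sketch_to_tt(particles)               # step (2): TT-sketching, Section 2.3
        phi_tt = normalize_tt(phi_tt)                  # optional O(d) normalization
    return phi_tt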
3.1. MPS/TT Sketching for random walkers represented as a sum of TT. The concept of MPS/TT sketching or general tensor network sketching for density estimations has been extensively explored in [24,31,35]. The choice of tensor network representation in practice depends on the problem's structure. In this work, we demonstrate the workflow using MPS/TT, but the framework can be readily extended to other tensor networks.
A crucial assumption underlying our approach is that each particle $f(x;x^i)$ exhibits a simple structure and can be efficiently represented or approximated in MPS/TT format. For instance, in the case of a vanilla Markov Chain Monte Carlo (MCMC), we have $f(x;x^i) = \delta(x - x^i) = \prod_{j=1}^{d}\delta(x_j - x^i_j)$, which can be expressed as a rank-1 MPS/TT. This assumption holds true for various sampling-based methods, as we will discuss in later sections, e.g. Section 4.1 and Section 4.2.
Based on the aforementioned discussions, we assume that each f (x; x i ) can be efficiently represented or approximated in MPS/TT format. Consequently, the right-hand side of (3.2) becomes a summation of N MPS/TTs. The tensor diagram illustrating this general particle approximation is shown in Fig. 3.1a. Directly adding these MPS/TTs yields an MPS/TT with a rank of N [27]. This linear growth in rank with the number of samples N leads to scaling of downstream tensor operations with poly(N ).
On the other hand, depending on the problem, if we know that the intrinsic MPS/TT rank of the solution is small, it should be possible to represent the MPS/TT with a smaller size. To achieve this, we apply the method described in Section 2.3 to estimate a low-rank tensor directly from the sum of N particles. Specifically, we construct $A_k[\hat\phi_{t+1}]$ and $B_k[\hat\phi_{t+1}]$ from the empirical samples $\hat\phi_{t+1}$ and use them to solve for the tensor cores through (2.9), resulting in $\phi_{\theta_{t+1}}$. This process is illustrated in Fig. 3.1b, Fig. 3.1c, and Fig. 3.1d. It is important to note that we approximate the ground truth $\phi_{t+1}$ using the stochastic samples $\hat\phi_{t+1}$. The estimated tensor cores form an MPS/TT representation of $\phi_{t+1}$, denoted as $\phi_{\theta_{t+1}}$.
3.2. Complexity Analysis. In this section, we assume we are given $\hat\phi_{t+1}$ in (3.2) as a sum of N constant-rank MPS/TTs. In terms of particles, forming $A_k[\hat\phi_{t+1}]$ and $B_k[\hat\phi_{t+1}]$ in (2.10) and (2.11) can be done as
$$A_k[\hat\phi_{t+1}](\xi_{k-1},\gamma_{k-1}) = \frac{1}{N}\sum_{i=1}^{N}\Big[\sum_{x_{1:k-1}}\sum_{x_{k:d}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,f(x_{1:k-1},x_{k:d};x^i)\,T_k(x_{k:d},\gamma_{k-1})\Big], \tag{3.3}$$
$$B_k[\hat\phi_{t+1}](\xi_{k-1},x_k,\gamma_k) = \frac{1}{N}\sum_{i=1}^{N}\Big[\sum_{x_{1:k-1}}\sum_{x_{k+1:d}} S_{k-1}(x_{1:k-1},\xi_{k-1})\,f(x_{1:k-1},x_k,x_{k+1:d};x^i)\,T_{k+1}(x_{k+1:d},\gamma_k)\Big].$$
$B_k$ is a 3-tensor of shape $|\Xi_{k-1}| \times |X_k| \times |\Gamma_k|$, and $A_k$ is a matrix of shape $|\Xi_{k-1}| \times |\Gamma_{k-1}|$.
The size of the linear system is independent of the number of dimensions d and the total number of particles N. The numbers of sketch functions $|\Xi_k|$ and $|\Gamma_k|$ are hyperparameters we can control. Let $n = \max_k |X_k|$ and $r = \max_k\{|\Xi_k|, |\Gamma_k|\}$. First we consider the complexity of forming the core determining equations, i.e. evaluating $A_k$ and $B_k$. We note that the complexity is dominated by evaluating the tensor contractions between the sketch functions $S_k$'s, the right sketch functions $T_k$'s, and the particles $f(\cdot;x^i)$'s. If the particle $f(\cdot;x^i)$ and the sketch functions $S_k(\cdot,\xi_k)$, $T_k(\cdot,\gamma_{k-1})$ are separable, then the tensor contractions in the square brackets in (3.3) can be evaluated in O(d) time. Each term in the summation $\sum_{i=1}^{N}$ can be computed independently, potentially even in parallel, giving our final $A_k$'s and $B_k$'s. Taking all d dimensions into account, the total complexity of evaluating the $A_k$'s and $B_k$'s is $O(nr^2Nd)$.
Next, the complexity of solving the linear system (2.9) is $O(nr^3)$. For all d dimensions, the total complexity for solving the core determining equations is $O(nr^3d)$. Combining everything together, the total computational complexity of MPS/TT sketching for a discrete particle system is
$$O(nr^2Nd) + O(nr^3d). \tag{3.4}$$
Remark 2. MPS/TT sketching is a method to control the complexity of the solution via estimating an MPS/TT representation in terms of the particles. Another, more conventional, approach to reducing the rank of an MPS/TT is TT-rounding [28]. With this approach, one can first compute the summation in Fig. 3.1a to form a rank-O(N) MPS/TT and then round the resulting MPS/TT to a constant rank. There are two main drawbacks to this approach. Firstly, the QR decomposition in TT-rounding has complexity $O(N^2)$. Secondly, TT-rounding tries to make $\phi_{\theta_{t+1}} \approx \hat\phi_{t+1}$, which at the same time makes $\mathrm{Var}(\phi_{\theta_{t+1}}) \approx \mathrm{Var}(\hat\phi_{t+1})$, and such an error can scale exponentially.
Figure 3.1: (a) Particle approximation of $\phi_{t+1}$. (b) Form $\{A_k\}_{k=1}^{d}$. (c) Form $\{B_k\}_{k=1}^{d}$. (d) Solve the core determining equations. Step (a) shows how $\phi_{t+1}$ is represented as the empirical distribution $\hat\phi_{t+1}$. Steps (b), (c), (d) show how to form $A_k[\hat\phi_{t+1}]$, $B_k[\hat\phi_{t+1}]$ in (3.3) and use them to solve for $G_k$. Here we use the determination of $G_k$ for k = 3 as an example.
Applications
In this section, to demonstrate the generality of the proposed method, we show how our algorithm can be used in two applications: quantum Monte Carlo (Section 4.1) and solving the Fokker-Planck equation (Section 4.2). For these applications, we focus on discussing how to approximate $\exp(-\beta A)\phi_{\theta_t}$ as a sum of MPS/TTs when $\phi_{\theta_t}$ is already in a MPS/TT representation, as required in (3.1).
Quantum Imaginary Time Evolutions.
In this subsection, we apply the proposed framework to ground state energy estimation problems in quantum mechanics for spin systems. Statistical sampling based approaches have been widely applied to these types of problems; see e.g. [25]. In this problem, we want to solve
$$\frac{\partial\phi(x,t)}{\partial t} = -H\phi(x,t), \qquad \|\phi(\cdot,t)\|_2 = 1, \tag{4.1}$$
where $\phi(\cdot,t) : \{\pm 1\}^d \to \mathbb{C}$, and H is the Hamiltonian operator.
Let $O := [O_i]_{i=1}^{d}$, where
$$O_i = I_2 \otimes I_2 \otimes \cdots \otimes \underbrace{\tilde O}_{i\text{-th term}} \otimes \cdots \otimes I_2, \tag{4.2}$$
for some $\tilde O \in \mathbb{C}^{2\times 2}$ with $O_i^2 = I_{2^d}$. Usually H takes the form $H_1[O] = \sum_{i=1}^{d} O_i$ or $H_2[O] = \sum_{i,j=1}^{d} J_{ij}O_iO_j$, or both. When $t \to \infty$, $\phi(\cdot,t) = \frac{\exp(-Ht)\phi_0}{\|\exp(-Ht)\phi_0\|_2}$
gives the lowest eigenvector of H. This can be done with the framework detailed in Section 3 with A = H. In what follows, we detail how exp(−δtH)ϕ θt can be approximated as a sum of N functions f (x; x i ) via a specific version of quantum Monte Carlo, the auxiliary-field quantum Monte Carlo (AFQMC). The AFQMC method [3,5] is a powerful numerical technique that has been developed to overcome some of the limitations of traditional Monte Carlo simulations. AFQMC is based on the idea of introducing auxiliary-fields to decouple the correlations between particles by means of the application of the Hubbard-Stratonovich transformation [23]. This reduces the many-body problem to the calculation of a sum or integral over all possible auxiliary-field configurations. The method has been successfully applied to a wide range of problems in statistical mechanics, including lattice field theory, quantum chromodynamics, and condensed matter physics [7,26,30,34,40].
We first focus on how to apply $\exp(-\delta t H_2[O])$, where $H_2[O] = \sum_{i,j=1}^{d} J_{ij}O_iO_j$. We assume J is positive semi-definite, which can easily be achieved by adding a diagonal to J since $O_i^2 = I_{2^d}$. To decouple the interactions through orthogonal transformations, we first perform an eigenvalue decomposition $J = U\,\mathrm{diag}(\lambda)\,U^T$ and compute
$$\exp(-\delta t H_2[O]) = \exp\Big(-\delta t\sum_{j=1}^{d}\lambda_j (u_j\cdot O)^2\Big) = \prod_{j=1}^{d}\exp\big(-\delta t\lambda_j (u_j\cdot O)^2\big) + O(\delta t^2), \tag{4.3}$$
where $U = [u_1, \dots, u_d]$ and $u_j\cdot O := \sum_{i=1}^{d} u_{ji}O_i$.
The approximation follows from the Trotter decomposition [18]. Using the identity
$$\exp(-ax^2) = \frac{1}{2\sqrt{\pi a}} \int_{-\infty}^{\infty} \exp\left(-\frac{k^2}{4a}\right) \exp(-\sqrt{-1}\,kx)\,dk, \tag{4.4}$$
we can write (4.3) as
$$\exp(-\delta t H_2[O]) = \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} (2\pi)^{-d/2} \prod_{j=1}^d \exp\big(-k_j^2/2\big)\, \prod_{j=1}^d \exp\Big(\sqrt{-2\delta t \lambda_j}\, k_j (u_j \cdot O)\Big)\, dk_1 \cdots dk_d + O(\delta t^2). \tag{4.5}$$
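As a sanity check, the Gaussian-integral identity (4.4), in the convention reconstructed above, can be verified numerically with a simple Riemann sum over a truncated $k$-range; the grid and truncation bounds below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the Hubbard-Stratonovich identity (4.4):
# exp(-a x^2) = 1/(2 sqrt(pi a)) * Int exp(-k^2/(4a)) exp(-i k x) dk.
a = 0.7
k, dk = np.linspace(-60.0, 60.0, 200001, retstep=True)
for x in (0.0, 0.5, 1.3):
    integrand = np.exp(-k**2 / (4.0 * a) - 1j * k * x)
    rhs = (integrand.sum() * dk).real / (2.0 * np.sqrt(np.pi * a))
    assert abs(rhs - np.exp(-a * x**2)) < 1e-10
```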
Instead of sampling directly from the spin space, AFQMC approximates the d-dimensional integration in (4.5) by Monte Carlo integration in the k-space. Replacing the integration with Monte Carlo samples, we get the following approximation,
$$\exp(-\delta t H_2[O]) \approx \frac{1}{N} \sum_{i=1}^N \prod_{j=1}^d \exp\Big(\sqrt{-2\delta t \lambda_j}\, k_j^i (u_j \cdot O)\Big) + O(\delta t^2) = \frac{1}{N} \sum_{i=1}^N \exp\big(\sqrt{-1}\,(\bar k^i \cdot O)\big) + O(\delta t^2), \tag{4.6}$$
where $N$ is the total number of Monte Carlo samples, $k_j^i$ is the $i$-th sample for the frequency variable of the $j$-th dimension, and $\bar k^i := \sum_{j=1}^d \sqrt{2\delta t \lambda_j}\, k_j^i\, u_j$. All samples $\{k_j^i\}_{j=1,\ldots,d,\;i=1,\ldots,N}$ are drawn i.i.d. from the standard normal distribution. The structure of such an approximation to $\exp(-\delta t H_2[O])$, a sum of rank-1 MPOs, is illustrated in Fig. 4.1a. The application of $\exp(-\delta t H_2[O])$ to a function $\phi_t$ that is already represented as an MPS/TT in the form of Definition 1 can be written as
$$\exp(-\delta t H_2[O])\,\phi_t \approx \frac{1}{N} \sum_{i=1}^N \exp\big(\sqrt{-1}\,(\bar k^i \cdot O)\big)\,\phi_t. \tag{4.7}$$
Notably, since each $\exp(\sqrt{-1}\,(\bar k^i \cdot O))$ is a rank-1 MPO, $\exp(\sqrt{-1}\,(\bar k^i \cdot O))\,\phi_t$ has the same rank as $\phi_t$. Lastly, the application of $\exp(-\delta t H_1[O])$ to some $\phi_t$ that is represented as an MPS can be done easily, since $\exp(-\delta t H_1[O]) = \prod_{i=1}^d \exp(-\delta t O_i)$ is a rank-1 MPO.
To summarize, we can approximate the imaginary-time propagator $\exp(-\delta t(H_1[O] + H_2[O]))$ by a summation of $N$ rank-1 MPOs. Applying the propagator to the many-body wavefunction $\phi_t$ then reduces to MPO-MPS contractions. The corresponding tensor diagram is shown in Fig. 4.1b.
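The following sketch illustrates the resulting propagator application: each sampled auxiliary field yields a rank-1 MPO $\bigotimes_j \exp(\sqrt{-1}\,\bar k^i_j\,\tilde O)$ that is applied site-by-site to the MPS cores without increasing the TT-ranks. The random PSD coupling $J$, the choice $\tilde O = \mathrm{diag}(1,-1)$, and the product-state initialization are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def apply_rank1_mpo(site_mats, cores):
    """Apply a rank-1 MPO  M_1 x ... x M_d  to MPS cores of shape (r0, n, r1):
    each physical index is contracted with M_j, so the TT-ranks are unchanged."""
    return [np.einsum('xy,ayb->axb', M, G) for M, G in zip(site_mats, cores)]

rng = np.random.default_rng(1)
d, n, N, dt = 10, 2, 100, 0.01
Otil = np.diag([1.0, -1.0])                   # single-site operator (e.g. Pauli-Z)
A = rng.normal(size=(d, d))
J = A @ A.T / d                               # a PSD coupling, for the demo only
lam, U = np.linalg.eigh(J)

cores = [rng.normal(size=(1, n, 1)) for _ in range(d)]   # product-state MPS

terms = []
for _ in range(N):
    k = rng.standard_normal(d)                # k^i_j ~ N(0, 1), i.i.d.
    kbar = U @ (np.sqrt(2.0 * dt * np.clip(lam, 0.0, None)) * k)
    site_mats = [expm(1j * kbar[j] * Otil) for j in range(d)]
    terms.append(apply_rank1_mpo(site_mats, cores))
# exp(-dt * H2[O]) phi is approximated by (1/N) * (sum of the N MPS in `terms`),
# each term having the same rank as phi; the sum is then re-compressed by sketching.
```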
4.2. Classical Parabolic Partial Differential Equation Evolutions. In this subsection, we demonstrate the application of the proposed framework to numerical simulations of parabolic PDEs, specifically focusing on the overdamped Langevin process and its corresponding Fokker-Planck equation. We consider a particle system governed by the following overdamped Langevin process,
$$dx_t = -\nabla V(x_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t, \tag{4.8}$$
where $x_t \in \Omega \subseteq \mathbb{R}^d$ is the state of the system, $V : \Omega \subset \mathbb{R}^d \to \mathbb{R}$ is a smooth potential energy function, $\beta = 1/T$ is the inverse of the temperature $T$, and $W_t$ is a $d$-dimensional Wiener process. If the potential energy function $V$ is confining for $\Omega$ (see, e.g., [4, Definition 4.2]), it can be shown that the equilibrium probability distribution of the Langevin dynamics (4.8) is the Boltzmann-Gibbs distribution,
$$\phi_*(x) = \frac{1}{Z_\beta} \exp(-\beta V(x)), \tag{4.9}$$
where $Z_\beta = \int_\Omega \exp(-\beta V(x))\,dx$ is the partition function. Moreover, the evolution of the distribution of the particle system is described by the corresponding time-dependent Fokker-Planck equation,
$$\frac{\partial \phi}{\partial t} = \beta^{-1}\Delta\phi + \nabla\cdot(\nabla V\,\phi) =: -A\phi, \qquad \phi(x,0) = \phi_0(x), \qquad \|\phi(\cdot,t)\|_1 = 1, \tag{4.10}$$
where $\phi_0$ is the initial distribution. The constraint $\|\phi(\cdot,t)\|_1 = 1$ ensures that $\int |\phi(x,t)|\,dx = 1$. Therefore, (4.10) is the counterpart of (3.1) in our framework. We now need to approximate $\exp(-A\delta t)\phi_t$ by a particle system. Assuming the current density $\phi_{\theta_t}$ is an MPS/TT, we use the following procedure to generate a particle approximation of $\phi_{t+1}$:
(1) We apply conditional sampling to the current density estimate $\phi_{\theta_t}$ (Section 2.2.3) to generate $N$ i.i.d. samples $x^1, \ldots, x^N \sim \phi_{\theta_t}$. (2) We then simulate the overdamped Langevin dynamics (4.8) with the Euler-Maruyama method over a time interval $\delta t$ for each of the $N$ initial stochastic samples $x^1, \ldots, x^N \sim \phi_{\theta_t}$. By the end of $\delta t$ we have final particle positions $x^1, \ldots, x^N \sim \phi_{t+1}$, and
$$\hat\phi_{t+1}(x) = \frac{1}{N}\sum_{i=1}^N \delta(x - x^i), \tag{4.11}$$
by standard Monte Carlo approximation. Note that the only difference between this application and the quantum ground-state problem is the conversion of the empirical distribution into an MPS/TT $\phi_{\theta_{t+1}}(x)$: here we employ a version of sketching for continuous distributions instead of the discrete distributions used in quantum ground-state energy estimation (Section 4.1). For more details on MPS/TT sketching for continuous distributions, we refer readers to Appendix C of [24].
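A minimal sketch of step (2), the Euler-Maruyama discretization of (4.8), is given below. The number of substeps is an illustrative choice, and the gradient shown is that of the double-well potential (5.6) used later in Section 5.2.1.

```python
import numpy as np

def euler_maruyama(x0, grad_V, dt_total, beta=1.0, n_steps=20, rng=None):
    """Simulate the overdamped Langevin SDE (4.8) for each particle over a
    window of length dt_total, using n_steps Euler-Maruyama substeps."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    h = dt_total / n_steps
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x += -grad_V(x) * h + np.sqrt(2.0 * h / beta) * noise
    return x

# Gradient of the double-well potential (5.6):
# V(x) = (x_1^2 - 1)^2 + 0.3 * sum_{j>=2} x_j^2.
def grad_V(x):
    g = 0.6 * x
    g[:, 0] = 4.0 * x[:, 0] * (x[:, 0] ** 2 - 1.0)
    return g

x0 = np.random.default_rng(2).uniform(-2.5, 2.5, size=(10_000, 10))
x1 = euler_maruyama(x0, grad_V, dt_total=0.02, beta=1.0)
```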
5. Numerical Experiments
5.1. Quantum Ground State Energy Estimation.
In this subsection, we consider the ground-state energy estimation problem for the transverse-field Ising model with the quantum Hamiltonian
$$H = -\sum_{i,j=1}^d J_{ij}\, S^z_i S^z_j - h\sum_i S^x_i, \tag{5.1}$$
where $S^z_j, S^x_j$ are the Pauli matrices [16],
$$S^z_j = I_2 \otimes I_2 \otimes \cdots \otimes \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}}_{j\text{-th dimension}} \otimes \cdots \otimes I_2, \tag{5.2}$$
$$S^x_j = I_2 \otimes I_2 \otimes \cdots \otimes \underbrace{\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}}_{j\text{-th dimension}} \otimes \cdots \otimes I_2, \tag{5.3}$$
and $I_2$ is the $2\times 2$ identity matrix. When $h = 1$ and $J$ is the adjacency matrix of a 1D cycle graph, the system undergoes a quantum phase transition. We consider three models in this category: (a) $d = 16$ sites with $h = 1$ and $J$ the adjacency matrix of a 1D cycle graph; (b) $d = 32$ sites with $h = 1$ and $J$ the adjacency matrix of a 1D cycle graph; (c) $d = 32$ sites with $h = 1$ and $J$ the adjacency matrix of a 2D periodic square lattice. For the 1D models, the dimensions of the MPS/TT are naturally ordered according to the sites on the 1D chain. In the 2D Ising model, we use a space-filling curve [32] to order the dimensions; for example, Fig. 5.1 shows the space-filling curve and the resulting ordering of the MPS/TT dimensions for a $4\times 4$ lattice.
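For small $d$, the Hamiltonian (5.1) with the site operators (5.2)-(5.3) can be assembled densely via Kronecker products, which is useful for validating the MPS/TT results. The sketch below is such a reference construction; it is feasible only for small $d$, since $H$ is $2^d \times 2^d$.

```python
import numpy as np
from functools import reduce

def site_op(op, j, d):
    """I_2 x ... x op (j-th slot) x ... x I_2, as in (5.2)-(5.3)."""
    mats = [np.eye(2)] * d
    mats[j] = op
    return reduce(np.kron, mats)

Sz = np.array([[1.0, 0.0], [0.0, -1.0]])
Sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def ising_hamiltonian(J, h):
    """Dense H = -sum_ij J_ij Sz_i Sz_j - h sum_i Sx_i, cf. (5.1)."""
    d = J.shape[0]
    sz = [site_op(Sz, i, d) for i in range(d)]
    H = -h * sum(site_op(Sx, i, d) for i in range(d))
    for i in range(d):
        for j in range(d):
            if J[i, j] != 0.0:
                H -= J[i, j] * sz[i] @ sz[j]
    return H

d = 8                                    # small chain for validation only
J = np.zeros((d, d))
for i in range(d):                       # 1D cycle graph adjacency
    J[i, (i + 1) % d] = J[(i + 1) % d, i] = 1.0
H = ising_hamiltonian(J, h=1.0)
```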
We set the infinitesimal time step $\delta t$ to 0.01 and use 2000 samples in each power-method iteration to approximate the propagator $\exp(-\delta t H)$. The remaining parameters are set as follows: we use $r = 60$ (the size of $|\Gamma_k|, |\Xi_k|$) for the random tensor sketches (Section 2.3), and we use $10^{-3}$ as the singular-value threshold when solving the core-determining equations (2.9). We initialize the wavefunction $\phi_0$ as a random MPS/TT. The imaginary-time evolution of the energy is shown in Fig. 5.2. Here we use the symmetric energy estimator
$$E_{\mathrm{symmetric}} = \frac{\langle \phi_t, H\phi_t\rangle}{\langle \phi_t, \phi_t\rangle}, \tag{5.4}$$
where $\phi_t$ is the wavefunction at the $t$-th iteration. Theoretically, the energy given by the symmetric estimator can only be larger than the ground-state energy. Often the mixed estimator
$$E_{\mathrm{mixed}} = \frac{\langle \phi, H\phi_t\rangle}{\langle \phi, \phi_t\rangle}, \tag{5.5}$$
where $\phi$ is a fixed reference wavefunction, is used to reduce the bias error originating from the variance in $\phi_t$ [34,40]. The mixed estimator oscillates around the ground-state energy after convergence, so one can further average the mixed energy estimators over several iterations to reduce the variance.
We report the energies based on the symmetric estimator in Fig. 5.2 and compare them with the ground-truth energies for the quantum Hamiltonian. The ground-state energy of the 1D Ising model with 16 and 32 sites is reported in [33]. For the 4 x 4 2D Ising model, we are still able to store the Hamiltonian exactly in memory, so we obtain the ground-state energy by eigendecomposition.
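Continuing the previous snippet (and assuming its dense `H`), the ground-truth energy and the estimators (5.4)-(5.5) can be computed directly; the perturbed trial state is an illustrative stand-in for the MPS/TT iterate.

```python
import numpy as np

evals, evecs = np.linalg.eigh(H)        # H from the previous snippet
E0 = evals[0]                           # ground-truth ground-state energy

rng = np.random.default_rng(3)
phi = evecs[:, 0] + 0.1 * rng.standard_normal(H.shape[0])  # noisy trial state

E_symmetric = phi @ H @ phi / (phi @ phi)              # (5.4); always >= E0
E_mixed = evecs[:, 0] @ H @ phi / (evecs[:, 0] @ phi)  # (5.5), exact reference
assert E_symmetric >= E0 - 1e-10
# With the exact ground state as reference, E_mixed equals E0 up to roundoff,
# illustrating the bias reduction that motivates the mixed estimator.
```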
For the 1D Ising model with 16 sites, the ground-truth energy is -20.4046 and our approach converges to -20.4008, a relative error of 1.86 x 10^-4. For the 1D Ising model with 32 sites, the ground-truth energy is -40.7600 and our approach converges to -40.7183, a relative error of 1.00 x 10^-3. In the 2D case, the ground-truth energy is -10.8605 and our approach converges to -10.7900, a relative error of 6.5 x 10^-3. Our approach achieves stable convergence and accurate ground-state energy estimates with only the symmetric estimator. Moreover, we obtain the final ground-state wavefunction in a low-complexity MPS/TT format, enabling efficient downstream applications.
Tensor-network sketching makes it possible to work with a large number of samples while maintaining a memory complexity that is independent of the number of samples. To demonstrate the advantage of tensor-network sketching over other recompression methods, we consider a natural alternative that also has constant memory usage: we add up the sample MPS/TTs into a higher-rank tensor and truncate the rank, applying the SVD-based TT-rounding procedure [27] whenever the rank reaches 100 during the additions. The energy evolution for the $d = 16$ 1D model and the 4 x 4 2D Ising model is shown in Fig. 5.3, for both the symmetric and mixed estimators. We clearly observe that the energy convergence is more stable, and even faster, with tensor-network sketching.
5.2. Fokker-Planck Equation.
In this numerical experiment, we solve the Fokker-Planck equation by parameterizing the density with an MPS/TT. We consider two systems in this subsection: one with a simple double-well potential of separable form, which is intrinsically a 1D potential, and one with the Ginzburg-Landau potential, for which we only compare the obtained marginals with the ground-truth marginals, since the true density is of exponential size. 5.2.1. Double-well Potential. We consider the following double-well potential
$$V(x) = (x_1^2 - 1)^2 + 0.3\sum_{j=2}^{d} x_j^2, \tag{5.6}$$
and the particle dynamics governed by the overdamped Langevin equation (4.8). Since the potential function is separable, the equilibrium Boltzmann density is a product of univariate densities for each dimension, i.e.,
$$\frac{1}{Z_\beta}\exp(-\beta V(x)) = \frac{1}{Z_\beta}\exp\big(-\beta(x_1^2-1)^2\big)\prod_{j=2}^{d} \exp\big(-0.3\beta x_j^2\big), \tag{5.7}$$
which gives us a convenient way to visualize the distribution. In this example, we use $\beta = 1$ and $d = 10$. The support of the domain is a hypercube $[-M, M]^d$ with $M = 2.5$. To obtain a continuous MPS/TT approximation, we use Gaussian kernel functions as univariate basis functions $\{b_l\}_{l=1}^{20}$, where
$$b_l(\cdot) = \exp\left(-\frac{(\,\cdot + M - (l-1)\Delta x)^2}{2\Delta x^2}\right), \quad l = 1, \ldots, 20, \qquad \Delta x = 5/18, \tag{5.8}$$
to form the cluster basis for sketching, as mentioned in Section 2.3. After sketching, we obtain a continuous analog of (2.9) (where each $G_k$ is a set of continuous univariate functions), and we solve this linear least-squares problem in function space using the same set of univariate basis functions. All basis functions are visualized in Fig. 5.4. We start from the uniform distribution over the hypercube $[-M, M]^d$ and evolve the distribution toward equilibrium. To approximate the solution at each time as an MPS/TT $\phi_{\theta_t}$, we first sample from this distribution via conditional sampling (Section 2.2.3), then simulate the overdamped Langevin process forward for the sampled particles up to time $\delta t$, as detailed in Section 4.2, and finally estimate a new MPS/TT representation $\phi_{\theta_{t+1}}$. We choose $\delta t = 0.02$ and use $N = 10^4$ samples in all iterations. In Fig. 5.5 we visualize the simulated Langevin particles, the fitted continuous MPS/TT density, and the target equilibrium density for the first dimension at iterations 1, 3, 5, 7, 20, and 30. We observe that the particle distribution is evolved effectively by the Langevin dynamics and that the fitted continuous MPS/TT density accurately captures the histograms of the particle samples. The low-complexity continuous MPS/TT format also serves as an extra regularization and, as a result, is not prone to overfitting. To quantify the performance of our algorithm, we evaluate the relative error metric $E = \|\phi^1_* - \phi^1_{\theta_j}\|/\|\phi^1_*\|$, where $\phi^1_*$ and $\phi^1_{\theta_j}$ are the first marginal distributions of the ground-truth and the MPS/TT-represented distribution, respectively. At iteration 30, the relative error is $E = 3.8 \times 10^{-2}$. The performance of the algorithm can be improved further by choosing more basis functions and generating more stochastic samples.
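A small sketch of the Gaussian kernel basis (5.8) is given below, together with a least-squares fit of the (unnormalized) first-coordinate factor of the equilibrium density (5.7), illustrating that the 20 kernels can represent the target marginal on $[-M, M]$; the evaluation grid is an illustrative choice.

```python
import numpy as np

M, dx = 2.5, 5.0 / 18.0
centers = -M + np.arange(20) * dx        # Gaussian kernel centers, cf. (5.8)

def basis(x):
    """Evaluate the 20 univariate Gaussian kernels b_l of (5.8) at points x."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * dx ** 2))

x = np.linspace(-M, M, 501)
Phi = basis(x)                                    # (501, 20) design matrix
target = np.exp(-((x ** 2 - 1.0) ** 2))           # beta = 1 factor of (5.7)
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
rel_err = np.linalg.norm(Phi @ coef - target) / np.linalg.norm(target)
```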
5.2.2. Ginzburg-Landau Potential. The Ginzburg-Landau theory was developed to provide a mathematical description of phase transitions [20]. In this numerical example, we consider a simplified Ginzburg-Landau model, in which the potential energy is defined as
$$V(U) := \sum_{i=1}^{d+1} \left[\frac{\lambda}{2}\left(\frac{U_i - U_{i-1}}{h}\right)^2 + \frac{1}{4\lambda}\big(1 - U_i^2\big)^2\right], \tag{5.9}$$
with boundary values $U_0 = U_{d+1} = 0$. We fix $d = 16$, $\lambda = 0.03$, and the temperature $\beta = 1/8$. We use the same set of 20 basis functions for all dimensions, as shown in Fig. 5.4. For this example, we solve the Fokker-Planck equation starting from the uniform distribution over the hypercube $[-M, M]^d$ and evolve the distribution with time step $\delta t = 0.002$ and $N = 10^4$ samples. In Fig. 5.6 we visualize the 8-th marginal distribution of the particle dynamics, the MPS/TT density, and the equilibrium density at iterations 1, 3, 5, 7, 20, and 30. At iteration 30, the relative error of the 8-th marginal distribution is $E = 9.8 \times 10^{-2}$.
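The potential (5.9) and its gradient, which plug directly into the Euler-Maruyama sketch of Section 4.2, can be implemented as follows; the grid spacing $h = 1/(d+1)$ is an assumption, since $h$ is not specified explicitly in the text.

```python
import numpy as np

d, lam = 16, 0.03
h = 1.0 / (d + 1)   # assumed grid spacing; not stated explicitly in the text

def V_gl(U):
    """Ginzburg-Landau potential (5.9) with U_0 = U_{d+1} = 0; U has shape (N, d)."""
    Upad = np.pad(U, ((0, 0), (1, 1)))                # attach the boundary zeros
    diffs = (Upad[:, 1:] - Upad[:, :-1]) / h          # (U_i - U_{i-1})/h, i = 1..d+1
    quartic = (1.0 - Upad[:, 1:] ** 2) ** 2           # i = 1, ..., d+1
    return (lam / 2.0) * (diffs ** 2).sum(axis=1) + quartic.sum(axis=1) / (4.0 * lam)

def grad_V_gl(U):
    """Gradient of (5.9) with respect to the interior values U_1, ..., U_d."""
    Upad = np.pad(U, ((0, 0), (1, 1)))
    lap = (2.0 * U - Upad[:, :-2] - Upad[:, 2:]) / h ** 2   # discrete Laplacian term
    return lam * lap - (1.0 / lam) * (1.0 - U ** 2) * U
```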
6. Conclusion
In this paper, we propose a novel and general framework that combines particle simulation with MPS/TT ansatz. By leveraging the advantages of both approaches, our method offers an efficient way to apply the semigroup/time-evolution operator and control the variance and population of particles using tensor-sketching techniques.
The performance of our algorithm is determined by two factors: the rank of the randomized sketches and the number of particles used. Our algorithm is expected to succeed when the true solution of the evolution (3.1) can be well-approximated by a low-complexity MPS/MPO format, indicating weak correlations between dimensions. Additionally, since we employ particles to determine the MPS/TT representation, it is essential to integrate the MPS/TT (or, from another perspective, to form moments corresponding to the sketch functions in TT-sketching) with Monte Carlo samples without suffering from the curse of dimensionality. This necessitates imposing smoothness constraints on the MPS/TT-represented function beyond its low-rank nature. Hence, it is crucial to investigate the function space under time evolution and to understand how the MPS/TT representation of the solution can be determined efficiently from particles in a statistically optimal manner.

7. Acknowledgements. Y.C. and Y.K. acknowledge partial support from NSF Award No. DMS-211563 and DOE Award No. DE-SC0022232. The authors also thank Michael Lindsey and Shiwei Zhang for discussions on potential improvements to the proposed method.
Figure 2.1. Tensor diagram for a d-dimensional MPS/TT and MPO. (a) An MPS/TT representing $u$ in Definition 1; the exposed legs indicate that the MPS/TT takes $x_1, \ldots, x_d$ as inputs, and the connected legs indicate the summation over $\alpha_1, \ldots, \alpha_{d-1}$. (b) An MPO representing $O$.

Figure 2.2. Tensor diagrams for reducing the bond dimensions of an MPS/TT via truncated SVD: (a) original MPS/TT; (b) inserting projectors and regrouping; (c) reduced MPS/TT. The regrouping operation for each reduced tensor $\bar G_k$ is highlighted by red dashed boxes.

Figure 3.1. Tensor diagram for the workflow of estimating an MPS/TT from particles.

Figure 4.1. Tensor diagram for (a) approximating the imaginary-time propagator $\exp(-\delta t H_2[O])$ as a summation of rank-1 MPOs and (b) the MPS approximation of $\exp(-\delta t(H_1[O]+H_2[O]))\phi_t$ via MPO-MPS products. The internal legs connecting the tensor cores of each MPO are removed to indicate that the MPO has rank 1.

Figure 5.1. Example of a 2D space-filling curve for an Ising model on a 4 x 4 lattice.

Figure 5.2. Imaginary-time evolution of the energy. The ground-state energy is shown as horizontal dashed lines.

Figure 5.3. Imaginary-time evolution of the energy when iteratively adding the sample MPS/TTs and rounding to constant rank.

Figure 5.4. Visualization of the univariate basis functions for each dimension; univariate Gaussian kernel functions are used as basis functions.

Figure 5.5. Evolution of the first marginal for the double-well potential. The blue histograms correspond to the sample histograms after the Langevin simulation at each iteration ($\hat\phi_{t+1}$ in (3.1)); the estimated continuous MPS/TT density $\phi_{\theta_{t+1}}$ and the target equilibrium density $\phi_*$ are shown as red solid lines and black dashed lines, respectively.

Figure 5.6. Evolution of the 8-th marginal distribution for the Ginzburg-Landau potential. The blue histograms correspond to the sample histograms after the Langevin simulation at each iteration ($\hat\phi_{t+1}$ in (3.1)); the estimated continuous MPS/TT density $\phi_{\theta_{t+1}}$ and the target equilibrium density $\phi_*$ are shown as red solid lines and black dashed lines, respectively.
References

[1] Thomas D. Ahle and Jakob B. T. Knudsen, Almost optimal tensor sketch, arXiv preprint arXiv:1909.01821 (2019).
[2] Ilona Ambartsumyan, Wajih Boukaram, Tan Bui-Thanh, Omar Ghattas, David Keyes, Georg Stadler, George Turkiyyah, and Stefano Zampini, Hierarchical matrix approximations of Hessians arising in inverse problems governed by PDEs, SIAM Journal on Scientific Computing 42 (2020), no. 5, A3397-A3426.
[3] S. A. Baeurle, Computation within the auxiliary field approach, Journal of Computational Physics 184 (2003), no. 2, 540-558.
[4] Rabi N. Bhattacharya and Edward C. Waymire, Stochastic processes with applications, SIAM, 2009.
[5] Richard Blankenbecler, D. J. Scalapino, and R. L. Sugar, Monte Carlo calculations of coupled boson-fermion systems. I, Physical Review D 24 (1981), no. 8, 2278.
[6] Yu Cao and Jianfeng Lu, Stochastic dynamical low-rank approximation method, Journal of Computational Physics 372 (2018), 564-586.
[7] J. Carlson, Stefano Gandolfi, Kevin E. Schmidt, and Shiwei Zhang, Auxiliary-field quantum Monte Carlo method for strongly paired fermions, Physical Review A 84 (2011), no. 6, 061602.
[8] Garnet Kin-Lic Chan and Sandeep Sharma, The density matrix renormalization group in quantum chemistry, Annual Review of Physical Chemistry 62 (2011), 465-481.
[9] Yian Chen and Mihai Anitescu, Scalable Gaussian process analysis for implicit physics-based covariance models, International Journal for Uncertainty Quantification 11 (2021), no. 6.
[10] Yian Chen and Mihai Anitescu, Scalable physics-based maximum likelihood estimation using hierarchical matrices, arXiv preprint arXiv:2303.10102 (2023).
[11] Yian Chen, Jeremy Hoskins, Yuehaw Khoo, and Michael Lindsey, Committor functions via tensor networks, Journal of Computational Physics 472 (2023), 111646.
[12] Andrei Chertkov and Ivan Oseledets, Solution of the Fokker-Planck equation by cross approximation method in the tensor train format, Frontiers in Artificial Intelligence 4 (2021), 668215.
[13] Alfred H. Clifford and Gordon B. Preston, The algebraic theory of semigroups, vol. 1, AMS Surveys 7 (1961).
[14] Sergey Dolgov, Karim Anaya-Izquierdo, Colin Fox, and Robert Scheichl, Approximation and sampling of multivariate probability distributions in the tensor train decomposition, Statistics and Computing 30 (2020), 603-625.
[15] Sergey Dolgov, Boris N. Khoromskij, and Ivan V. Oseledets, Fast solution of multi-dimensional parabolic problems in the TT/QTT-format with initial application to the Fokker-Planck equation (2011).
[16] Stephen Gull, Anthony Lasenby, and Chris Doran, Imaginary numbers are not real: the geometric algebra of spacetime, Foundations of Physics 23 (1993), no. 9, 1175-1201.
[17] Jiequn Han, Arnulf Jentzen, and Weinan E, Solving high-dimensional partial differential equations using deep learning, Proceedings of the National Academy of Sciences 115 (2018), no. 34, 8505-8510.
[18] Naomichi Hatano and Masuo Suzuki, Finding exponential product formulas of higher orders, in Quantum Annealing and Other Optimization Methods, 2005, pp. 37-68.
[19] Kenneth L. Ho and Lexing Ying, Hierarchical interpolative factorization for elliptic operators: differential equations, Communications on Pure and Applied Mathematics 69 (2016), no. 8, 1415-1451.
[20] K.-H. Hoffmann and Qi Tang, Ginzburg-Landau phase transition theory and superconductivity, vol. 134, Birkhäuser, 2012.
[21] Christopher Hollings, The early development of the algebraic theory of semigroups, Archive for History of Exact Sciences 63 (2009), 497-536.
[22] John Mackintosh Howie, Fundamentals of semigroup theory, Oxford University Press, 1995.
[23] John Hubbard, Calculation of partition functions, Physical Review Letters 3 (1959), no. 2, 77.
[24] Yoonhaeng Hur, Jeremy G. Hoskins, Michael Lindsey, E. Miles Stoudenmire, and Yuehaw Khoo, Generative modeling via tensor train sketching, arXiv preprint arXiv:2202.11788 (2022).
[25] Joonho Lee, Unbiasing fermionic quantum Monte Carlo with a quantum computer, APS March Meeting Abstracts, 2022, pp. N40-007.
[26] Joonho Lee, Hung Q. Pham, and David R. Reichman, Twenty years of auxiliary-field quantum Monte Carlo in quantum chemistry: an overview and assessment on main group chemistry and bond-breaking, Journal of Chemical Theory and Computation 18 (2022), no. 12, 7024-7042.
[27] Ivan V. Oseledets, Tensor-train decomposition, SIAM Journal on Scientific Computing 33 (2011), no. 5, 2295-2317.
[28] Ivan V. Oseledets, Tensor-train decomposition, SIAM Journal on Scientific Computing 33 (2011), no. 5, 2295-2317.
[29] Yifan Peng, Yian Chen, E. Miles Stoudenmire, and Yuehaw Khoo, Generative modeling via hierarchical tensor sketching, arXiv preprint arXiv:2304.05305 (2023).
[30] Mingpu Qin, Hao Shi, and Shiwei Zhang, Benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method, Physical Review B 94 (2016), no. 8, 085103.
[31] Yinuo Ren, Hongli Zhao, Yuehaw Khoo, and Lexing Ying, High-dimensional density estimation with tensorizing flow, arXiv preprint arXiv:2212.00759 (2022).
[32] Hans Sagan, Space-filling curves, Springer Science & Business Media, 2012.
[33] Anders W. Sandvik and Guifre Vidal, Variational quantum Monte Carlo simulations with tensor-network states, Physical Review Letters 99 (2007), no. 22, 220602.
[34] Hao Shi and Shiwei Zhang, Some recent developments in auxiliary-field quantum Monte Carlo for real materials, The Journal of Chemical Physics 154 (2021), no. 2, 024107.
[35] Xun Tang, Yoonhaeng Hur, Yuehaw Khoo, and Lexing Ying, Generative modeling via tree tensor network states, arXiv preprint arXiv:2209.01341 (2022).
[36] Yining Wang, Hsiao-Yu Tung, Alexander J. Smola, and Anima Anandkumar, Fast and guaranteed tensor decomposition via sketching, Advances in Neural Information Processing Systems 28 (2015).
[37] Steven R. White, Density matrix formulation for quantum renormalization groups, Physical Review Letters 69 (1992), no. 19, 2863.
[38] Bing Yu et al., The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems, Communications in Mathematics and Statistics 6 (2018), no. 1, 1-12.
[39] Jiayu Zhai, Matthew Dobson, and Yao Li, A deep learning method for solving Fokker-Planck equations, Mathematical and Scientific Machine Learning, 2022, pp. 568-597.
[40] Shiwei Zhang, Auxiliary-field quantum Monte Carlo for correlated electron systems, Lecture Notes of the Autumn School Correlated Electrons 3 (2013).
Integrating Generative Artificial Intelligence in Intelligent Vehicle Systems
Lukas Stappen
Jeremy Dillmann
Serena Striegel
Hans-Jörg Vögel
Nicolas Flores-Herr
Björn W Schuller
This paper aims to serve as a comprehensive guide for researchers and practitioners, offering insights into the current state, potential applications, and future research directions for generative artificial intelligence and foundation models within the context of intelligent vehicles. As the automotive industry progressively integrates AI, generative artificial intelligence technologies hold the potential to revolutionize user interactions, delivering more immersive, intuitive, and personalised in-car experiences. We provide an overview of current applications of generative artificial intelligence in the automotive domain, emphasizing speech, audio, vision, and multimodal interactions. We subsequently outline critical future research areas, including domain adaptability, alignment, multimodal integration and others, as well as, address the challenges and risks associated with ethics. By fostering collaboration and addressing these research areas, generative artificial intelligence can unlock its full potential, transforming the driving experience and shaping the future of intelligent vehicles.
I. INTRODUCTION
Generative artificial intelligence has witnessed unprecedented growth, with applications such as ChatGPT becoming the fastest-adopted consumer software in history [1]. These generative, general-purpose (language) models, characterized by their capability to creatively generate text, audio [2], and images [3] through natural interaction, have swiftly permeated our daily lives. In the face of such rapid progress, it is essential to consider the impact of these generative artificial intelligence technologies on the future of intelligent vehicles and the transformative potential they hold for the automotive industry.
The vehicle presents a distinctive scenario, in which the driver is occupied with primary driving tasks, such as steering and accelerating, as well as secondary tasks like activating windshield wipers [4]. Generative artificial intelligence will primarily enhance tertiary tasks, focusing on the control of in-vehicle infotainment systems, which contribute to the growing prominence of intelligent vehicles and the pursuit of realizing the vision of an all-round personal assistant within automobiles. BMW, for instance, envisions a digital companion [5] that engages with users emotionally, cultivating a natural relationship through dialogue. Inputs and outputs across various modalities are vital to achieving the highest quality of interaction in high-stakes driving situations. This encompasses two components: streamlining necessary tasks, such as multi-turn point-of-interest searches for charging, parking, and recreation, and providing entertainment and productivity opportunities during otherwise unused driving time. Additionally, several hedonic elements are crucial in crafting an emotional experience within the vehicle [6], including in-car visualization (e.g., lighting), audio functions (e.g., e-vehicle sounds), and exterior features (e.g., light carpet). The creative prowess of generative artificial intelligence offers the ideal tools for actualizing this vision and forging the future of intelligent vehicles.
The spectrum of generative artificial intelligence approaches is largely organized by training type, each with unique advantages in specific situations, as shown in Table I. Unsupervised learning employs models such as generative adversarial networks (GANs) and variational autoencoders (VAEs) to generate data samples. The StableDiffusion architecture, for instance, combines the latter with a reconstruction (diffusion) process to create photo-realistic images [3]. Besides images, audio can also be generated without requiring labelled data, making this setting valuable when labels are scarce [7]. Supervised generative modelling uses architectures such as conditional GANs (e.g., pix2pix), which are trained with large amounts of labelled data, allowing them to generate data samples conditioned on specific attributes and making them most useful when the generated data must adhere to certain conditions [8]. Most foundation models, i.e., models adaptable to a variety of downstream tasks, are trained with self-supervised learning; autoregressive transformer models such as GPT-3 [9] create their own supervision signal from the input data and are thereby able to utilise vast quantities of data. They do so by predicting parts of the input from the rest of the input, such as the next word in a sentence, which is advantageous when seeking to generate coherent and contextually relevant output [9]. Lastly, reinforcement learning has recently become more popular for learning complex behaviours, for instance, fine-tuning a model to align with human expectations using proximal policy optimization [10]. By first learning a supervised reward function, which then acts as a judge for the generated data samples, the model outputs can be sharpened automatically within a closed-loop reward system. AI-enhanced vehicle functions will presumably play a vital role in the mobility of the future and can be broadly categorized into perception and generation systems.

TABLE I
Training | Model | Modality | Task | Description | Automotive example
Unsupervised | StableDiffusion [3] | Vision | Image Generation | Combines VAE with a reconstruction process to create photo-realistic images | e.g., exterior LED projections
(Self-)Supervised | GPT-3/-4 [12] | Text | Language Generation | Generates human-like text based on input and context | e.g., natural assistant interaction
(Self-)Supervised | RETRO [13] | Text | Language Generation | Improves language models by retrieving from trillions of tokens | e.g., intuitive retrieval from manual via Q&A
(Self-)Supervised | Tacotron 2 [14] | Speech | Text-to-Speech | Generates high-quality, natural-sounding speech from text | e.g., assistant voice synthesis
(Self-)Supervised | MusicLM [15] | Audio | Music Generation | Generates music with a coherent structure and long-term dependencies | e.g., EV sound modelling
(Self-)Supervised | pix2pix [8] | Vision | Image-to-Image | Converts images from one domain to another, such as sketches to photos | e.g., passenger entertainment
(Self-)Supervised | DALL-E [16] | Multimodal | Image Generation | Generates images from textual descriptions | e.g., theme-based lighting

Perception systems have been explored extensively in this context [17], [18], including for anticipating passengers' behaviour and emotional states [19]. Research involving generative models is considerably less substantial. Generative models have been employed to enhance visual classification using VAEs for unsupervised domain adaptation between automotive sensors [20]. Similarly, the authors of [7] utilized conditional GANs to improve the capabilities of autonomous vehicles by simulating more realistic and varied real-world scenarios. In the realm of audio generation, SoundsRide [21] introduces an in-car audio system that synchronizes music with sound affordances along the ride in real time, providing an immersive music experience that could potentially influence driving safety, with both positive and negative effects depending on the mix and user. For dialogue and personalised voice characters in in-car speech interfaces, [22] examines their design and impact, comparing four assistant personalities (friend, admirer, aunt, and butler); the findings suggest that matching the user's personality results in higher likability and trust. However, existing generation research is limited in terms of the breadth and depth of applications, as well as in multimodal modelling. This paper adopts a practical approach, seeking to develop a research agenda that explores the application of generative artificial intelligence technologies in intelligent vehicles. We examine the role of AI in augmenting intelligent vehicle functions and the potential of generative artificial intelligence to facilitate multimodal interaction, encompassing audio, video, and speech in these systems. Our research agenda is guided by key principles, such as model capabilities, ethical considerations, and the alignment of AI models. Furthermore, by presenting specific use cases along the modalities, including productivity and relationship-building functions, we lay the foundation for the research challenges and opportunities associated with these applications, paving the way for future research directions in this burgeoning field.
The paper is organised as follows: Section II explores generative artificial intelligence use cases across various modalities. Section III delves into the principles guiding our research agenda and discusses the challenges and opportunities presented by the integration of generative artificial intelligence in intelligent vehicles. Section IV covers implications, such as an interdisciplinary approach to collaboration and standardization, as well as an examination of potential risks and system limitations. Overall, our research agenda endeavours to advance the understanding and application of generative artificial intelligence technologies in intelligent vehicles, with far-reaching implications for safety, user experience, and industry innovation. By casting a discerning eye on the potential of these technologies and their impact on the automotive landscape, we hope to contribute to the discourse surrounding the future of intelligent vehicles and the exciting possibilities that lie ahead.
II. POTENTIAL APPLICATIONS OF GENERATIVE AI IN INTELLIGENT VEHICLES
In this section, we explore various use-case ideas for generative artificial intelligence in intelligent vehicles, categorized across the three modalities of speech, audio, and video (cf. Table I). The focus is primarily on human-machine interaction. As generative artificial intelligence holds the potential to significantly enhance various aspects of intelligent vehicles, it is imperative that future perception systems accurately identify and target specific interactors, thereby preventing confusion for those who are not the intended recipients of the interaction.
A. Speech
Generative voice AI can greatly enhance the voice capabilities of intelligent vehicles, offering users a more engaging experience. Voice interaction is an ideal modality for drivers to search for information and receive decision support, e.g., which charging station to use or which road to choose, without having to take their eyes off the road. This modality can offer both user-experience and safety benefits. However, voice interaction will only be engaged if the system does not disappoint drivers and trust is built. By facilitating more humanlike dialogues [12], a generative-model-based in-vehicle assistant can provide context-aware and personalised responses to queries and requests, making interactions seamless and efficient. Furthermore, a productivity assistant can help users draft and edit emails, prepare presentations, or perform other tasks through enhanced voice interactions, contributing to a more productive in-car environment, as captured in Fig. 1. Emotional voice [14] and personality can also be generated by generative artificial intelligence systems, allowing the voice assistant to express emotions and adapt its tone to match the brand, user preferences, or specific scenarios. This personalisation fosters a stronger connection between the user and the vehicle. Additionally, generative artificial intelligence can provide real-time translation services [23] for radio traffic warnings or conversations between driver and passenger, breaking down language barriers and aiding communication during foreign travel or with passengers speaking different languages.

Fig. 1: Example communication flow between a driver and the generative artificial intelligence personal assistant, reading and answering emails in a productivity in-car suite and performing generative tasks such as summarization, interactive formulation, and search with question answering. Example prompts for preparing a presentation: "Explain the top 3 trends in China regarding JVs in automotive." "What are the main supply chain challenges?" "How would you start the presentation with a catchy example?" "What questions might a typical audience with background knowledge in supply chains ask me?" "Please give me a one-sentence summary of the e-mail from Manuel Meier this afternoon."
B. Audio
Generative audio AI possesses the capacity to revolutionize the auditory experience, providing users with an unparalleled level of personalization and immersion. By generating bespoke welcome melodies upon the user's entry, vehicles can evoke a sense of familiarity and belonging [24], [2]. With the increase in semi-autonomous vehicle functions, and thus more regular engagement of the driver by the vehicle, personalised sets of auditory warning signals become possible, as does an audio guidance system for people with disabilities in fully autonomous driving. Furthermore, electric vehicles can benefit from AI-generated in-car sound modelling that simulates engine noises or other bespoke audio experiences [15] for passengers, adhering to safety requirements by not affecting exterior noise levels.
C. Vision
The realm of vision presents yet another avenue, offering a myriad of visually engaging and interactive features. AI-generated persona avatars [11], for instance, can actively listen, empathize, and respond with emotions, fostering a more profound connection between the user and the vehicle's assistant. The appearance can also be individualised by projecting visuals onto the intended recipient's screen, e.g., the driver's head-up display. This enriched interaction transcends the limitations of conventional voice-only communication. Furthermore, the navigation and information system can leverage generative artificial intelligence to adapt its visual appearance [16] in a context-aware fashion, drawing on ambient light sensors, weather data, user preferences, interior design, and the type of road. In addition, it can create custom album art, artist portraits, or visualizations based on the audio being played. AI-generated visual content [3] can also be employed to create personalised LED night-sky displays and interactive animations projected from the door handle. These personalised touches can elevate the in-car entertainment experience, adding an element of wonder and fascination. Moreover, generative artificial intelligence can assist in creating visual summaries of accident reports, which can be relayed to emergency services, streamlining the process and potentially improving response times. As shown in [25], generative artificial intelligence also holds promise for enhancing training data for driver assistance systems by creating realistic, diverse, and high-quality synthetic data. A visual style transfer of road signs from one country's conventions to another, in terms of typical colours and symbols, is also possible. As mentioned before, however, the practical implementation of such models in these safety-critical domains can be challenging due to the stringent regulatory environment.
D. Multimodal
The ongoing evolution of generative artificial intelligence systems is characterized by a shift toward multimodal models [26], capable of emulating human cognition by seamlessly processing and integrating multiple modalities. This comprehensive approach allows for a more profound understanding of user needs, fostering a richer, more intuitive user experience [17]. Take, for instance, vehicle issue diagnosis: A multimodal AI system could astutely comprehend a user's verbal description of a problem while concurrently analysing visual cues from the vehicle's sensors or userprovided images. This synergistic processing of information potentially results in a faster, more precise diagnosis and customized suggestions. Furthermore, when elucidating vehicle functions, the AI assistant can employ a harmonious blend of voice and visual modalities to offer lucid, methodical guidance, thereby enhancing user comprehension and the overall driving experience. By embracing the sophistication of multimodal generative artificial intelligence , intelligent vehicle systems are poised to revolutionize user interactions, ushering in a new era of captivating and instinctive in-car experiences.
III. KEY PRINCIPLES GUIDING FUTURE RESEARCH ON GENERATIVE AI IN INTELLIGENT VEHICLE SYSTEMS
In this section, we argue that the following principles are fundamental to the advancement and implementation of generative artificial intelligence for intelligent vehicles:
• Domain adaptability and personalisation
• Reliability, alignment, and controllability
• Multimodal integration
• End-to-end architecture
• Ethical considerations

These principles serve as a foundation for the successful design and implementation of innovative in-vehicle functions that are both efficacious and user-centric. In the following, we describe these principles in detail and show how they are instrumental in identifying crucial research topics and potential avenues in the domain of generative artificial intelligence in intelligent vehicles.
A. Domain Adaptability and Personalisation
Generative artificial intelligence systems should be designed to adapt to specific domains, integrating non-public knowledge into the model. In addition, models should adapt to user behaviour, preferences, and contexts to deliver personalised experiences. This principle ensures that the AI algorithms can tailor their interactions and responses to individual industries and users, leading to greater satisfaction and engagement. In practice, this might involve generating customized routing explanations based on the driver's preferred scenic routes or dynamically adjusting the in-car lighting environment to suit their mood and preferences.
A decisive research direction lies in the adaptation of general-purpose generative models to the automotive domain, especially for context-driven language models. From a practical standpoint, automotive companies must either inject domain knowledge into the models of providers or find themselves compelled to train their own general-purpose models. This domain knowledge is often not publicly accessible and may change rapidly; therefore, existing prompt-based fine-tuning methods are not sufficient. One direction could be retrieval transformers, which incorporate external knowledge sources and long-term context through a retrieval mechanism [13] while staying aligned with the overarching vision of artificial general intelligence.
Personalisation requires catering accurately for individual user preferences and holding context over long periods, whereas models currently retain only a limited context. For example, language models have experienced a significant increase in token capacity, expanding from a mere 2k tokens to a seemingly ample 32k tokens with GPT-4 [12]. When considering the vast scope of user interactions that span hours, weeks, or even months, this limit still appears rather restrictive for personalised services.

Fig. 2: A schematic illustration of a controllable generative artificial intelligence voice user interface. The user prompt is first moderated and then given to the general-purpose model, which either answers it directly or optionally enriches it with non-model data via external API calls, before the natural answer is returned to the user through a moderation layer.

In speech personalisation, synthesis still suffers from unclear and missing words, which can be attributed to disordered attention alignments in the phoneme-to-acoustic part of autoregressive models [27]. This, combined with a lack of diversity in training corpora, currently results in shortcomings in accurately representing key aspects of speaking style, such as accent and prosody, and should be addressed in the near future. Tailoring music and visualisations to individuals using foundation models is still in its infancy. A promising starting point involves expanding the sources and representations of input data beyond conventional text inputs. This may include incorporating sensor data or time series of events that implicitly capture changes in human behaviour, potentially enabling a deeper, more automatic form of generation. Central research questions therefore remain open, such as which approaches can be employed to develop AI systems that dynamically adapt to an individual user's context; these questions will steer future investigations toward the development of highly domain-adaptable and personalised generative artificial intelligence solutions.
B. Reliability, Alignment, and Controllability
Ensuring that generative artificial intelligence models for intelligent vehicles are reliable, aligned, and controllable is crucial. Reliability encompasses the model's consistent performance across diverse inputs resulting in trustworthy outputs, while alignment refers to adherence to human as well as company values and standards. Controllability enables fine-grained control over the model's output, ensuring generated content is contextually relevant and brand-aligned. By addressing these concerns, generative artificial intelligence models can be developed that meet user needs and preserve brand integrity, while mitigating risks associated with inappropriate or misaligned content. This principle fosters trust in AI systems integrated into intelligent vehicles.
This pursuit demands the exploration of effective fine-tuning strategies, encompassing both broad domains and specific use-cases, to steer the system reliably toward its designers' intended goals and interests and to minimize the risk of generating inappropriate content. First, it is crucial to develop strategies for identifying appropriate internal and external sources with which to align the model [28]. By meticulously training with well-selected data samples, the model should minimize the risk of generating hallucinations, i.e., seemingly coherent but ultimately nonsensical or incorrect outputs, during inference. Second, despite recent improvements in local (follow user instructions) and global (follow tone rules) language model alignment, it is questionable whether the inherent nature of an autoregressive model will ever enable sufficient alignment [12]. Practically speaking, to ensure that generation remains under each operator's own responsibility and control, robust moderation mechanisms (cf. Figure 2) are imperative. Thus, the guiding research question is how generative artificial intelligence systems can be made more reliable so that their outputs align with user intentions, company values, and safety requirements, particularly when dealing with uncontrolled inputs and outputs.
Another pivotal research area involves controlling how the model orchestrates its output. Long-lasting information can be stored and retrieved within the model itself. Short-lived information requires real-time system data, such as traffic, for example for efficient route comparison. Thus, models must effectively learn when, which, and with what arguments an external API should be called, and how to integrate this information controllably into the output (cf. Figure 2). Early approaches such as Toolformer [29] might be a first step in a fully generic or automotive-specific direction. A component of this research may involve transitioning between legacy and generative artificial intelligence systems. Particularly in the primary application of personal assistant systems, the shift towards a fully developed generative artificial intelligence system will occur progressively. Future studies must explore hybrid systems that enable seamless transitions between responses generated from both generative artificial intelligence and static, legacy system components.
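A minimal sketch of the moderation and routing flow of Figure 2 is given below. All names (`moderate`, `route_intent`, `traffic_api`) and the keyword-based logic are hypothetical placeholders; a production system would use dedicated safety classifiers and learned tool routing, and routing is simplified here to occur before generation.

```python
from typing import Callable, Dict, Optional

def moderate(text: str) -> bool:
    """Hypothetical moderation layer; a real system would call a dedicated
    safety and brand-alignment classifier here."""
    banned = ("medical advice", "illegal")
    return not any(term in text.lower() for term in banned)

def route_intent(prompt: str) -> Optional[str]:
    """Hypothetical intent router deciding whether real-time data is needed."""
    if "traffic" in prompt.lower() or "route" in prompt.lower():
        return "traffic_api"
    return None

def run_turn(prompt: str,
             llm: Callable[[str, str], str],
             tools: Dict[str, Callable[[], str]]) -> str:
    if not moderate(prompt):                  # input moderation
        return "I'm sorry, I cannot help with that."
    tool = route_intent(prompt)               # optional enrichment via API
    context = tools[tool]() if tool else ""
    answer = llm(prompt, context)             # general-purpose model
    if not moderate(answer):                  # output moderation
        return "Let me get back to you on that."
    return answer

# Example wiring with stubbed components:
reply = run_turn(
    "Compare my two routes given current traffic",
    llm=lambda p, c: f"Based on {c or 'my general knowledge'}: ...",
    tools={"traffic_api": lambda: "the live traffic feed"},
)
```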
C. Multimodal Integration
A paramount principle in the development of generative artificial intelligence for intelligent vehicles is the seamless integration of different modalities, such as speech, audio, and vision. This integration fosters context-aware and natural interactions, enhancing the user experience. For example, a holistic approach to understanding driver commands and gestures can lead to improved responses from the vehicle's assistant, creating a more intuitive experience.
The pursuit of advanced multimodal integration in generative artificial intelligence systems for intelligent vehicles presents a multifaceted and intriguing avenue for future research. Moving beyond the conventional image-to-text pairings present in current model developments [3], scholars and practitioners alike must delve into the intricacies of time-continuous vehicular scenes, investigating practical sampling rates for sensing and exploring methodologies for feeding inputs from multiple modalities into models. By addressing such inquiries, researchers will gain insights into the precise circumstances under which reliance on a single modality is sufficient, as opposed to scenarios where multimodal integration is indispensable for a thorough understanding of user needs and context. Consequently, the overarching research question, namely how generative artificial intelligence models can be designed and trained to effectively integrate and process information across multiple modalities such as speech, audio, and vision, shall propel the field toward the development of sophisticated, context-aware systems that enhance the user experience.
D. End-to-End Architecture
While many existing use-cases demonstrated by prominent AI companies operate on API-driven software platforms with no hardware involvement, the nature of vehicular systems necessitates a more intricate principle. The fusion of hardware and software components calls for end-to-end thinking, spanning the user interface, computation separation (hardware), and model sizing (software), as illustrated in Figure 3.
1) User interface: In spite of their potential to facilitate an unprecedented user experience, generative artificial intelligence solutions will only be adopted if the multimodal systems are able to gain users' trust. This is especially true for in-vehicle generative artificial intelligence use-cases performed alongside the driving task. To develop trust, classical user-interface design guidelines such as transparency, consistency, controllability, and easy error correction may not suffice [30]. User-interface solutions need to take into account the degree of uncertainty present when interacting with generative artificial intelligence systems [31]. For example, research is necessary to understand whether and how drivers recognize and manage potentially faulty predictions by the generative artificial intelligence system [32]. The technical improvement of generative artificial intelligence solutions must be accompanied by human-computer interaction research, involving iterative prototyping and user testing [33]. Future generative artificial intelligence interfaces must allow users to acquire and apply knowledge of how they can interact with their generative artificial intelligence system to achieve use-case-specific goals [33].
2) Distributed computation: Another decisive research direction is the balance between in-vehicle and off-vehicle computation. The choice between an edge, hybrid, or cloud architecture for generative artificial intelligence applications in intelligent vehicles can be influenced by the available hardware, the complexity of the task, and the desired performance. The latter two are strongly linked to the size of the model (cf. Section III-D.3). Tasks may range from shallow software and hardware integration (cf. the night sky use-case in Section II-C) to deep integration such as intent routing within existing personal assistants. An example of deep integration is depicted in Figure 3, where intent recognition directs user prompts between in-vehicle recognition and execution (e.g., adjusting music volume), external world information (e.g., details about a tennis star), and automotive use-cases (e.g., complex route comparison) solved by generative artificial intelligence, which are also performed off-vehicle due to a hybrid or legacy architecture. We expect that such systems will form the transition until all parts can be reliably taken over by generative artificial intelligence systems. Furthermore, as generative artificial intelligence models evolve rapidly, the practicality of over-the-air updates must be weighed against the computational demands of model inference and the highly specialized nature of current hardware. Critical aspects of the overall architectural design and integration remain ongoing research matters.
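A toy version of such intent routing between edge and cloud might look as follows; the intent labels, confidence threshold, and handler names are hypothetical, not an existing assistant's interface:

def handle_in_vehicle(prompt):
    return "[edge] executed locally: " + prompt

def handle_cloud(prompt):
    return "[cloud] sent to OEM backend: " + prompt

def route(prompt, intent, confidence):
    # Latency-critical, well-understood vehicle functions stay on the edge;
    # world knowledge, complex tasks, and low-confidence intents go off-vehicle.
    if intent == "vehicle_function" and confidence >= 0.8:
        return handle_in_vehicle(prompt)
    return handle_cloud(prompt)

print(route("set music volume to 40%", "vehicle_function", 0.95))
print(route("compare my two commute routes", "complex_task", 0.90))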
3) Model type and size: Closely connected to where the computation takes place is the type and size of the model. Generative artificial intelligence models are generally large and therefore resource-intensive compared to other machine learning models. For instance, smaller models may be employed for relatively simple tasks, such as generating personalised welcome melodies. These models may offer faster response times and lower computational requirements, making them suitable for in-vehicle integration (cf. Figure 3). Conversely, more complex tasks, such as reasoning over the internet, real-time and historical data when creating natural language or detailed visual content, may necessitate larger models with billions of parameters, computed in the cloud, that can capture the nuances and intricacies of human language or visual elements. Researchers must strive to explore novel techniques for model pruning, since current generative artificial intelligence models are undertrained and thus inefficient [34]. Furthermore, fine-tuning that simultaneously promotes low-latency natural interactions and maximizes the efficiency of model architectures is key. It is unclear yet how automotive-specific generative artificial intelligence models can be developed and optimized to reconcile these competing demands. Current model sizes also significantly influence training and inference costs and run contrary to green AI initiatives. As the field of generative artificial intelligence advances, the prospect of devising a unified model that adeptly accommodates a vast array of automotive use-cases emerges as a compelling research direction. Such a versatile model would necessitate striking an optimal balance between performance, computational requirements, and resource constraints, especially within the realm of embedded systems and real-time applications.
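The under-training point [34] implies a sizing rule of thumb: compute-optimal training uses roughly 20 tokens per parameter, with training cost of about 6 * N * D FLOPs. The constants below are rounded illustrations of that heuristic, not exact values from the paper:

def compute_optimal(flops_budget):
    # Solve C = 6 * N * D with D = 20 * N for the parameter count N.
    n_params = (flops_budget / (6 * 20)) ** 0.5
    return n_params, 20 * n_params

for budget in (1e21, 1e23):   # e.g., an edge-scale vs. a cloud-scale budget
    n, d = compute_optimal(budget)
    print(f"{budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e12:.2f}T tokens")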
Future investigations should focus on optimizing integrated architectures, model size, and computational efficiency toward the creation of streamlined AI systems that can adeptly navigate the multifarious challenges and complexities inherent in the modern automotive landscape.
E. Ethical Considerations
As generative artificial intelligence becomes increasingly integrated into intelligent vehicles, it is pivotal to address ethical concerns that may arise, including privacy, security, and fairness. Transparent data collection and processing practices, as well as rigorous security measures, must be in place to safeguard user data and ensure trust. These principles are not new in the area of AI; however, they are remarkably still treated with negligence, resulting in a complete ban of ChatGPT in Italy and ongoing debates in Germany arising from unaddressed concerns surrounding training and user data.
It is essential to develop generative artificial intelligence models that are unbiased and equitable, fostering fair and inclusive in-vehicle experiences for all users. For instance, when imitating voice profiles, the system should be capable of accurately simulating various accents and dialects without perpetuating stereotypes or biases. By doing so, AI-generated voices can provide a more diverse and inclusive representation. For text generation, limiting harmful and incorrect content is crucial. Challenges arise from our limited understanding of what models learn, typically addressed by explainable AI. Since the predictions of generative models are not discrete labels but rather generated artifacts, interpreting them becomes difficult due to their high degree of variability. Atman presents an intriguing direction by employing perturbations to alter the attention mechanisms, enabling the suppression or emphasis of specific tokens within the transformer model [35]. By exploring alternative generations synthesized through this manipulation, researchers can gain a better understanding of how foundation models generate content, potentially leading to more controlled and adaptive content generation.
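The perturbation idea can be illustrated on a toy single-head attention computation: suppressing one token's keys before the softmax and measuring how much the output shifts hints at that token's influence. This numpy sketch is our own toy, not the Atman implementation [35]:

import numpy as np

def attention(q, k, v, suppress=None):
    # Scaled dot-product attention; optionally mask one key position so the
    # corresponding token can no longer be attended to.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if suppress is not None:
        scores[:, suppress] = -1e9
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(1)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))   # 4 tokens, dim 8
base = attention(q, k, v)
for t in range(4):
    shift = np.linalg.norm(base - attention(q, k, v, suppress=t))
    print(f"token {t}: output shift {shift:.3f}")       # larger = more influential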
Navigating the intricate landscape of ethical considerations in the development and integration of generative artificial intelligence systems for automotive applications necessitates meticulous collaboration between researchers and policymakers. Establishing clear ethical standards and guidelines will be instrumental in directing the responsible development and deployment of these technologies. Drawing from the EU AI Act [36], which might provide a basis for other countries in developing their regulatory frameworks related to AI in the future, several pertinent research questions emerge: How can AI systems be accurately classified, and how can the potential risks associated with generative artificial intelligence systems in the automotive sector be assessed? How do we ensure that generative artificial intelligence systems interacting with humans, generating content, or recognizing emotions align with stringent ethical categories and comply with legal requirements regarding transparency, accountability, data quality, documentation, and human oversight? What practical approaches can be devised to store and manage sensitive data, striking a balance between external service providers and original equipment manufacturers, while adhering to privacy regulations and user preferences? Considering the planned transitional phase of 24 to 36 months, starting from Q2 2023, before the EU AI Act comes into full effect, these research questions, among others, will have to be addressed swiftly to shape future inquiries, ultimately guiding the ethical development and integration of generative artificial intelligence technologies in the automotive domain.
IV. IMPLICATIONS FOR GENERATIVE AI RESEARCH
The principles outlined emphasise the need for an interdisciplinary approach, fostering collaboration and addressing challenges in generative artificial intelligence technologies for intelligent vehicles. This involves cooperation among researchers, industry stakeholders, and academia to establish best practices, benchmarks, and shared resources, while also exploring potential risks and system limitations.
A. Collaboration and Standardisation
Given that the research landscape and goals (e.g., alignment [10]) are defined and heavily influenced by corporate entities, such as OpenAI and Microsoft, collaboration between AGI players, domain adopters, and research institutions is crucial to identify areas where cooperation can be bolstered. Furthermore, training data, open-source models, and the establishment of best practices and benchmarks are integral components for fostering a shared understanding and facilitating advancements. Consequently, effective collaboration between researchers and industry stakeholders is essential to establish best practices, benchmarks, and shared resources for the development, evaluation, and deployment of generative artificial intelligence technologies in intelligent vehicles, while balancing the interests and contributions of both corporate and academic entities.
B. Potential Risks and System Limitations
As these technologies ultimately transform our lives and the way we interact with the world, it is essential to recognize and address potential risks and limitations that may never be entirely mitigated. Even if generative artificial intelligence models are thoroughly tested and validated to operate safely, free-form user input is prone to adversarial attacks, which can likely never be fully mitigated by countermeasures. Implementing stringent privacy and data protection measures is vital to maintain user trust and adhere to regulatory requirements. For example, the vulnerability to cyberattacks can be reduced but never completely ruled out, making an exposure of highly private data possible on an immense scale. Developing guidelines to address system breaches, involving not only individual operators but also entire systems, such as nations, could be a valuable endeavour.
Even though they are virtual systems, such systems directly influence the physical world when people act on their suggestions. As trust in their reliability grows, so does the blind dependence on these systems, especially since humans are often unaware of black swan events. The combination of overconfidence (as seen in hallucinations) and concurrent sensory gaps may result in life-threatening consequences. A straightforward example in the automotive industry could involve a user being inadvertently guided into a hazardous situation due to a faulty sensor falsely indicating a cold engine, prompting the system to suggest an oil change on a hot engine.
V. CONCLUSION
The advent of generative artificial intelligence technologies has opened up a plethora of opportunities and challenges for the automotive industry, paving the way for the development of intelligent vehicles that provide more immersive, intuitive, and personalised in-car experiences. This paper has presented an overview of current applications and future research directions in the domain of generative artificial intelligence and intelligent vehicles, highlighting the potential of these technologies to revolutionize user interactions and drive innovation in the sector. Key future research areas in generative artificial intelligence and intelligent vehicles include multimodal integration, model optimization, personalisation, reliability, and architecture. Additionally, ethical implications, particularly concerning user privacy, data security, and potential misuse, are of importance. Addressing these areas by fostering collaboration will unlock generative artificial intelligence's full potential, transforming the driving experience and shaping the future of intelligent vehicles. One limitation of the work is that we have maintained a perspective where driving remains the primary task of the driver. While many of the scenarios discussed may still be relevant in a fully automated context, it is important to acknowledge that a shift in priorities could occur. For instance, the interaction modality might transition from a focus on voice towards a greater emphasis on graphic user interfaces, and use-cases designed to combat boredom (passive fatigue) might see an increase in demand. Nonetheless, as generative artificial intelligence continues to advance, the development of intelligent vehicles that cater to the diverse needs and preferences of users will become increasingly important, ultimately redefining the way we interact with and experience the world around us.
Fig. 3: Multimodal flow from user interaction, hybrid computation distribution depending on task complexity and considering the type of model, to the execution of a vehicle function.
TABLE I: Excerpt of the latest generative models categorised by training technique and domain, including potential intelligent vehicle applications.

... models are extensively researched for understanding interactions within vehicles.
Chaining representations of context in the form of embeddings, as provided by open-source libraries (1), could be one direction. To preserve control over the model, major non-open-source providers do not yet offer such capabilities. (1) https://github.com/hwchase17/langchain
[Figure 2 residue: a GPT model is flanked by an Enrichment API, which supplies factual knowledge where needed (examples: current traffic, personal emails), and a Moderation API, which checks whether input and output content complies with provider usage policies (examples: hate, threatening, self-harm, sexual, minors, violence).]

[Figure 3 residue: a voice prompt, gesture, etc. enters an in-car intent recognition controller & distributor, which routes (1) simple in-vehicle functions (e.g., music volume, night sky brightness) to in-vehicle local models for vehicle function execution, (2) external world information (e.g., a famous tennis star) to the internet, and (3) complex task querying (e.g., routing, writing emails) to global models in the OEM cloud.]
1 BMW Group Research and Technology, Munich, Germany, [email protected]. 2 Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Dresden, Germany. 3 Imperial College London, London, UK; University of Augsburg, Augsburg, Germany; and audEERING GmbH, Munich, Germany.
Krystal Hu. ChatGPT sets record for fastest-growing user base - analyst note, Feb 2023.

Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A generative model for music. arXiv preprint arXiv:2005.00341, 2020.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.

Heiner Bubb. Fahrerassistenz - primaer ein Beitrag zum Komfort oder fuer die Sicherheit? / Driver assistance - firstly a contribution to primary safety or rather to comfort? VDI-Berichte, (1768), 2003.

Digital World Premiere: BMW i Vision Dee - your ultimate companion. YouTube, Jan 2023.

Hans-Jörg Vögel, Christian Süß, Thomas Hubregtsen, Viviane Ghaderi, Ronee Chadowitz, Elisabeth André, Nicholas Cummins, Björn Schuller, Jérôme Härri, Raphaël Troncy, et al. Emotion-awareness for intelligent vehicle assistants: A research agenda. In Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems, pages 11-15, 2018.

Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8798-8807, 2018.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125-1134, 2017.

Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, pages 1-14, 2020.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.

OpenAI. GPT-4 technical report, 2023.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206-2240. PMLR, 2022.

Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783. IEEE, 2018.

Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al. MusicLM: Generating music from text. arXiv preprint arXiv:2301.11325, 2023.

Pamela Mishkin, Lama Ahmad, Miles Brundage, Gretchen Krueger, and Girish Sastry. DALL·E 2 preview - risks and limitations. 2022.

Lukas Stappen, Georgios Rizos, and Björn Schuller. X-AWARE: Context-aware human-environment attention fusion for driver gaze prediction in the wild. In Proceedings of the 2020 International Conference on Multimodal Interaction, ICMI '20, pages 858-867, New York, NY, USA, 2020. ACM.

Lukas Stappen, Xinchen Du, Vincent Karas, Stefan Müller, and Björn W Schuller. Domain adaptation with joint learning for generic, optical car part recognition and detection systems (Go-CaRD). arXiv preprint arXiv:2006.08521, 2020.

Sebastian Zepf, Javier Hernandez, Alexander Schmitt, Wolfgang Minker, and Rosalind W Picard. Driver emotion recognition for intelligent vehicles: A survey. ACM Computing Surveys (CSUR), 53(3):1-30, 2020.

Simon Reiß, Alina Roitberg, Monica Haurilet, and Rainer Stiefelhagen. Deep classification-driven domain adaptation for cross-modal driver behavior recognition. In 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1042-1047, 2020.

Mohamed Kari, Tobias Grosse-Puppendahl, Alexander Jagaciak, David Bethge, Reinhard Schütte, and Christian Holz. SoundsRide: Affordance-synchronized music mixing for in-car audio augmented reality. In The 34th Annual ACM Symposium on User Interface Software and Technology, UIST '21, pages 118-133, New York, NY, USA, 2021. ACM.

Michael Braun, Anja Mainz, Ronee Chadowitz, Bastian Pfleging, and Florian Alt. At your service: Designing voice assistant personalities to improve automotive user interfaces. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pages 1-11, New York, NY, USA, 2019. ACM.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. arXiv preprint arXiv:2212.04356, 2022.

Christine Payne. MuseNet. OpenAI Blog, 3, 2019.

Zijie Guo, Rong Zhi, Wuqaing Zhang, Baofeng Wang, Zhijie Fang, Vitali Kaiser, Julian Wiederer, and Fabian Flohr. Generative model based data augmentation for special person classification. In 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1675-1681, 2020.

Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual description. arXiv preprint arXiv:2210.02399, 2022.

Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al. Neural codec language models are zero-shot text to speech synthesizers. arXiv preprint arXiv:2301.02111, 2023.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.

Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.

Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4):105-120, 2014.

Advait Sarkar. Confidence, command, complexity: metamodels for structured interaction with machine intelligence. In PPIG, page 3, 2015.

Taina Bucher. The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1):30-44, 2017.

Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. Bringing transparency design into practice. In 23rd International Conference on Intelligent User Interfaces, pages 211-223, 2018.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

Mayukh Deb, Björn Deiseroth, Samuel Weinbach, Patrick Schramowski, and Kristian Kersting. Atman: Understanding transformer predictions through memory efficient attention manipulation. arXiv preprint arXiv:2301.08110, 2023.

Michael Veale and Frederik Zuiderveen Borgesius. Demystifying the draft EU Artificial Intelligence Act - analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4):97-112, 2021.
| [
"https://github.com/hwchase17/langchainEnrichment"
] |
[
"GENERALIZED THREE PERSON HAT GAME",
"GENERALIZED THREE PERSON HAT GAME"
] | [
"Theo Van Uem "
] | [] | [] | The Hat Game (Ebert's Hat Problem) got much attention in the beginning of this century; not in the last place by its connections to coding theory and computer science. All players guess simultaneously the color of their own hat observing only the hat colors of the other players. It is also allowed for each player to pass: no color is guessed. The team wins if at least one player guesses his or her hat color correct and none of the players has an incorrect guess. This paper studies Ebert's hat problem for three players and two colors, where the probabilities of the colors may be different for each player. Our goal is to maximize the probability of winning the game and to describe winning strategies. In this paper we introduce the notion of an adequate set. The construction of adequate sets is independent of underlying probabilities and we can use this fact in the analysis of our general case. arXiv:1704.04244v7 [math.CO] | null | [
"https://export.arxiv.org/pdf/1704.04244v7.pdf"
] | 259,095,454 | 1704.04244 | a4a0ea8c7cab599bfc915ae83e6476348a11fd4a |
GENERALIZED THREE PERSON HAT GAME
Theo Van Uem
GENERALIZED THREE PERSON HAT GAME
The Hat Game (Ebert's Hat Problem) got much attention at the beginning of this century, not least because of its connections to coding theory and computer science. All players simultaneously guess the color of their own hat, observing only the hat colors of the other players. It is also allowed for each player to pass: no color is guessed. The team wins if at least one player guesses his or her hat color correctly and none of the players has an incorrect guess. This paper studies Ebert's hat problem for three players and two colors, where the probabilities of the colors may be different for each player. Our goal is to maximize the probability of winning the game and to describe winning strategies. In this paper we introduce the notion of an adequate set. The construction of adequate sets is independent of the underlying probabilities, and we can use this fact in the analysis of our general case.
Introduction
Hat puzzles have been formulated at least since Martin Gardner's 1961 article [8]. They received a new impulse from Todd Ebert's Ph.D. thesis in 1998 [6]. Buhler [2] stated: "It is remarkable that a purely recreational problem comes so close to the research frontier". Articles about this subject in The New York Times [15], Die Zeit [1] and abcNews [14] also got broad attention. This paper studies the generalized Ebert's hat problem for three players and two colors. The probabilities of the colors may be different for each player, but are known to all players. All players simultaneously guess the color of their own hat, observing only the hat colors of the other two players. It is also allowed for each player to pass: no color is guessed. The team wins if at least one player guesses his or her hat color correctly and none of the players has an incorrect guess. Our goal is to maximize the probability of winning the game and to describe winning strategies. The symmetric two color hat problem (equal probability 0.5 for each color) with N = 2^k - 1 players is solved in [7], using Hamming codes, and with N = 2^k players in [5], using extended Hamming codes. Burke et al. [3] tried to solve the symmetric hat problem with N = 3, 4, 5, 7 players using genetic programming. Their conclusion: the N-prisoners puzzle (alternative names: Hat Problem, Hat Game) gives evolutionary computation and genetic programming a new challenge to overcome. Lenstra and Seroussi [13] show that in the symmetric case of two hat colors, and for any value of N, playing strategies are equivalent to binary covering codes of radius one.
Krzywkowski [12] describes applications of the hat problem and its variations, and their connections to different areas of science. Johnson [10] ends his presentation with an open problem: if the hat colors are not equally likely, how will the optimal strategy be affected? We will answer this question, and our method also gives interesting results in the symmetric case. In section 2 we define our main tool: an adequate set. In section 3 we obtain results for the three person two color Hat Game, where each player i may have different probabilities $(p_i, q_i)$ of getting a specific colored hat. In section 4 we obtain results for the asymmetric three person two color Hat Game, where each player has the same set of probabilities (p, q) to get a specific colored hat, but the probabilities are different. In section 5 we find old and new results for the well-known symmetric case $p = q = \frac{1}{2}$, using the method of adequate sets.
Adequate sets
In this section we have N players and q colors. The N persons in our game are distinguishable, so we can label them from 1 to N. We label the q colors 0, 1, .., q - 1. The probabilities of the colors are known to all players. The probability that color i will be on a hat of player j is $p_{i,j}$ ($i \in \{0, 1, \ldots, q-1\}$, $j \in \{1, 2, \ldots, N\}$; $\forall j \in \{1, 2, \ldots, N\}: \sum_{i=0}^{q-1} p_{i,j} = 1$). Each possible configuration of the hats can be represented by an element of $B = \{b_1 b_2 \ldots b_N \mid b_i \in \{0, 1, \ldots, q-1\},\ i = 1, 2, \ldots, N\}$. The S-code represents what the N different players see. Player i sees the q-ary code $b_1 \ldots b_{i-1} b_{i+1} \ldots b_N$ with decimal value

$s_i = \sum_{k=1}^{i-1} b_k \, q^{N-k-1} + \sum_{k=i+1}^{N} b_k \, q^{N-k},$

a value between 0 and $q^{N-1} - 1$. Let S be the set of all S-codes: $S = \{s_1 s_2 \ldots s_N \mid s_i = \sum_{k=1}^{i-1} b_k \, q^{N-k-1} + \sum_{k=i+1}^{N} b_k \, q^{N-k},\ b_i \in \{0, 1, \ldots, q-1\},\ i = 1, 2, \ldots, N\}$. Each player has to make a choice out of q + 1 possibilities: 0 = 'guess color 0', 1 = 'guess color 1', ..., q - 1 = 'guess color q - 1', q = 'pass'. We define a decision matrix $D = (a_{i,j})$ where $i \in \{1, 2, \ldots, N\}$ (players); $j \in \{0, 1, \ldots, q^{N-1} - 1\}$ (S-code of a player); $a_{i,j} \in \{0, 1, \ldots, q\}$. The meaning of $a_{i,j}$ is: player i sees S-code j and takes decision $a_{i,j}$ (guess a color or pass). We observe the total probability (sum) of our guesses.
For each $b_1 b_2 \ldots b_N$ in B we have: IF $a_{1,s_1} \in \{q, b_1\}$ AND $a_{2,s_2} \in \{q, b_2\}$ AND ... AND $a_{N,s_N} \in \{q, b_N\}$ AND NOT ($a_{1,s_1} = a_{2,s_2} = \cdots = a_{N,s_N} = q$) THEN $sum = sum + p_{b_1,1} \cdot p_{b_2,2} \cdots p_{b_N,N}$.
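A direct Python transcription of this evaluation may clarify the bookkeeping; the helper names are ours, the formulas are the paper's (q is the number of colors, with pass encoded as the value q):

from itertools import product
from math import prod

def s_codes(b, q):
    # S-code of player i: the q-ary number formed by the other players'
    # hat colors b_1..b_{i-1} b_{i+1}..b_N (0-indexed version of the formula).
    n = len(b)
    return [sum(bk * q ** (n - k - 2 if k < i else n - k - 1)
                for k, bk in enumerate(b) if k != i) for i in range(n)]

def win_probability(D, p, q=2):
    # D[i][s] in {0..q-1} is player i's guess on S-code s; D[i][s] == q is pass.
    # p[i][c] is the probability that player i wears color c.
    n = len(D)
    total = 0.0
    for b in product(range(q), repeat=n):
        guesses = [D[i][s] for i, s in enumerate(s_codes(b, q))]
        no_wrong = all(g in (q, b[i]) for i, g in enumerate(guesses))
        if no_wrong and any(g != q for g in guesses):   # a GOOD CASE
            total += prod(p[i][b[i]] for i in range(n))
    return total

# Classic symmetric strategy (guess the opposite color upon seeing two equal hats):
alpha = [[1, 2, 2, 0]] * 3
print(win_probability(alpha, [[0.5, 0.5]] * 3))   # 0.75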
Any choice of the $a_{i,j}$ in the decision matrix determines which CASES $b_1 b_2 \ldots b_N$ have a positive contribution to sum (we call it a GOOD CASE) and which CASES don't contribute positively to sum (we call it a BAD CASE).

Definition 1. Let A ⊂ B. A is adequate to B - A if for each q-ary element x in B - A there are q - 1 elements in A which are equal to x up to one fixed q-ary position.

Theorem 1. BAD CASES are adequate to GOOD CASES.

Proof. Any GOOD CASE has at least one $a_{i,j}$ not equal to q. Let this specific $a_{i,j}$ have value $b_{i_0}$. Then our GOOD CASE generates q - 1 BAD CASES by changing only the value $b_{i_0}$ into any value of $0, 1, \ldots, q - 1$ except $b_{i_0}$. □
The definition of an adequate set captures the same idea as the concept of strong covering, introduced by Lenstra and Seroussi [13]. The number of elements in an adequate set will be written as das (dimension of adequate set). Adequate sets are generated by an adequate set generator (ASG); see Appendix A for an implementation in a VBA/Excel program. Given an adequate set, we obtain a decision matrix D = (a_{i,j}) by the following procedure.
Procedure DMG (Decision Matrix Generator):

Begin Procedure. For each element in the adequate set:

• Determine the q-ary representation $b_1 b_2 \ldots b_N$.
• Calculate the S-codes $s_i = \sum_{k=1}^{i-1} b_k \, q^{N-k-1} + \sum_{k=i+1}^{N} b_k \, q^{N-k}$ (i = 1, .., N).
• For each player i: fill the decision matrix with $a_{i,s_i} = b_i$ (i = 1, .., N), where each cell may contain several values.

Matrix D is filled with BAD COLORS. We can extract the GOOD COLORS by considering all $a_{i,j}$ with q - 1 different BAD COLORS and then choosing the only missing color. In all situations with less than q - 1 different BAD COLORS we pass. When there is an $a_{i,j}$ with q different BAD COLORS all colors are bad, so the first option is to pass. But when we choose any color, we get a situation with q - 1 colors. So in case of q BAD COLORS we are free to choose any color or pass. The code for pass is q, but in our decision matrices we prefer a blank, which supports readability. The code for 'any color or pass will do' is defined as q + 1, but this will not happen in our specific Hat Game.
End Procedure.
This procedure is implemented in the VBA/Excel program DMG (See Appendix B).
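For the two-color case, a compact Python rendering of the DMG idea might look as follows (function and variable names are ours, not those of the paper's program):

def dmg(adequate_set, n=3, q=2):
    # From an adequate set of configuration numbers, derive each player's
    # decision per S-code; q encodes 'pass', one missing color is the good guess.
    bad = [[set() for _ in range(q ** (n - 1))] for _ in range(n)]
    for num in adequate_set:
        b = [(num // q ** (n - 1 - k)) % q for k in range(n)]    # q-ary digits
        for i in range(n):
            rest = [b[k] for k in range(n) if k != i]            # what player i sees
            s = sum(d * q ** (len(rest) - 1 - j) for j, d in enumerate(rest))
            bad[i][s].add(b[i])                                  # a BAD color
    return [[(set(range(q)) - c).pop() if len(c) == q - 1 else q
             for c in row] for row in bad]

print(dmg({0, 7}))  # alpha: [[1, 2, 2, 0], [1, 2, 2, 0], [1, 2, 2, 0]]
print(dmg({3, 4}))  # delta: [[0, 2, 2, 1], [2, 0, 1, 2], [2, 0, 1, 2]]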
Generalized three person two color Hat Game
Three distinguishable players are randomly fitted with a white or black hat. Each player i has his own probabilities $p_i$ and $q_i$ to get a white respectively a black hat, where $0 < p_i < 1$, $p_i + q_i = 1$ (i = 1, 2, 3). All probabilities are known to all players.
Part of the strategy is that the players give themselves an identification: 1, 2 and 3.
Our goal is to maximize the probability of winning the game and to describe winning strategies. Let X be an adequate set and let P(X) be the probability generated by the adequate set. The adequate set X dominates the adequate set Y if P(X) ≤ P(Y). We also define: X dominates the adequate set Y absolutely if P(X) < P(Y). Adequate sets X and Y are isomorphic when there is a bijection from {1, 2, 3} to itself which transforms X into Y. The decision matrices are then also isomorphic. A player i with $p_i < q_i$ gets an asterisk: when observing such a player we have to flip the colors: white becomes black and vice versa. In this way we have without loss of generality $p_i \geq q_i$ (i = 1, 2, 3). The next step is to renumber the players in such a way that $\frac{p_1}{q_1} \geq \frac{p_2}{q_2} \geq \frac{p_3}{q_3}$, which is equivalent to $p_1 \geq p_2 \geq p_3$. So: $1 > p_1 \geq p_2 \geq p_3 \geq \frac{1}{2}$ or, equivalently: $0 < q_1 \leq q_2 \leq q_3 \leq \frac{1}{2}$.
We define decision matrices (rows are players 1-3, columns are the S-codes 00, 01, 10, 11 a player observes; a dash denotes pass):

α:        00  01  10  11
player 1:  1   -   -   0
player 2:  1   -   -   0
player 3:  1   -   -   0

δ:        00  01  10  11
player 1:  0   -   -   1
player 2:  -   0   1   -
player 3:  -   0   1   -

ϵ:        00  01  10  11
player 1:  0   0   0   0
player 2:  -   -   -   -
player 3:  -   -   -   -

Theorem 2. Given $0 < q_1 \leq q_2 \leq q_3 \leq \frac{1}{2}$ we have:

CASE                  | q_2 = 1/2   | q_2 < 1/2
1/q_1 > 1/q_2 + 1/q_3 | p_1 (> 3/4) | p_1 (> 3/4)
                      | ϵ           | ϵ
1/q_1 < 1/q_2 + 1/q_3 | 3/4         | p_1 + q_1 q_2 q_3 (1/q_2 + 1/q_3 - 1/q_1) (> p_1)
                      | α, δ        | δ
1/q_1 = 1/q_2 + 1/q_3 | 3/4         | p_1 (> 3/4)
                      | α, δ, ϵ     | δ, ϵ
where in each case we give the optimal probability in the first line and the optimal decision matrices in the second line.
Proof. The adequate sets with das = 2 and their (BAD-case) probabilities are:

{0, 7}: $p_1 p_2 p_3 + q_1 q_2 q_3 = A$
{1, 6}: $p_1 p_2 q_3 + q_1 q_2 p_3 = B$
{2, 5}: $p_1 q_2 p_3 + q_1 p_2 q_3 = C$
{3, 4}: $p_1 q_2 q_3 + q_1 p_2 p_3 = D$

We have:

$A - B = q_1 q_2 q_3 (\frac{p_1}{q_1} \frac{p_2}{q_2} - 1)(\frac{p_3}{q_3} - 1)$
$A - C = q_1 q_2 q_3 (\frac{p_1}{q_1} \frac{p_3}{q_3} - 1)(\frac{p_2}{q_2} - 1)$
$A - D = q_1 q_2 q_3 (\frac{p_3}{q_3} \frac{p_2}{q_2} - 1)(\frac{p_1}{q_1} - 1)$
$B - C = q_1 q_2 q_3 (\frac{p_2}{q_2} - \frac{p_3}{q_3})(\frac{p_1}{q_1} - 1)$
$B - D = q_1 q_2 q_3 (\frac{p_1}{q_1} - \frac{p_3}{q_3})(\frac{p_2}{q_2} - 1)$
$C - D = q_1 q_2 q_3 (\frac{p_1}{q_1} - \frac{p_2}{q_2})(\frac{p_3}{q_3} - 1)$

So we have: $A \geq B \geq C \geq D$. {3, 4} is the winner when $p_1 q_2 q_3 + q_1 p_2 p_3 < q_1$, so when $\frac{1}{q_1} < \frac{1}{q_2} + \frac{1}{q_3}$; {4, 5, 6, 7} is the winner when $\frac{1}{q_1} > \frac{1}{q_2} + \frac{1}{q_3}$, where $0 < q_1 \leq q_2 \leq q_3 \leq \frac{1}{2}$ and $p_i + q_i = 1$ (i = 1, 2, 3). Adequate set {0, 7} has value A and decision matrix α. Adequate set {3, 4} has value D and decision matrix δ. Adequate set {4, 5, 6, 7} has value $q_1$ (winning probability $p_1$) and decision matrix ϵ. Using DMG (Appendix B) we find α, δ and ϵ.
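These closed forms and the ordering A ≥ B ≥ C ≥ D are easy to sanity-check numerically; the following illustrative snippet (variable names are ours) also evaluates the resulting optimal winning probability:

def das2_values(p1, p2, p3):
    q1, q2, q3 = 1 - p1, 1 - p2, 1 - p3
    A = p1*p2*p3 + q1*q2*q3       # BAD-case probability of {0, 7}
    B = p1*p2*q3 + q1*q2*p3       # {1, 6}
    C = p1*q2*p3 + q1*p2*q3       # {2, 5}
    D = p1*q2*q3 + q1*p2*p3       # {3, 4}
    return A, B, C, D, q1         # q1 = BAD-case probability of {4, 5, 6, 7}

for probs in [(0.9, 0.6, 0.55), (0.7, 0.6, 0.55), (0.75, 0.5, 0.5)]:
    A, B, C, D, q1 = das2_values(*probs)
    print(probs, A >= B >= C >= D, "win =", round(1 - min(D, q1), 4))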
For das = 4 there are 62 adequate sets, where 55 are absolutely dominated by {0, 7}, {1, 6}, {2, 5} or {3, 4}. Observe {0, 3, 5, 6}: an adequate set, so BAD CASES; the GOOD CASES are {1, 2, 4, 7} with probability:

$p_1 p_2 q_3 + p_1 q_2 p_3 + q_1 p_2 p_3 + q_1 q_2 q_3 = (p_1 p_2 p_3 + p_1 p_2 q_3) + (p_1 q_2 p_3 - p_1 p_2 p_3) + (q_1 p_2 p_3 + q_1 q_2 p_3) + (q_1 q_2 q_3 - q_1 q_2 p_3) = p_1 p_3 + p_1 p_2 (q_3 - p_3) + q_1 p_3 + q_1 q_2 (q_3 - p_3) = p_3 + (p_1 p_2 + q_1 q_2)(q_3 - p_3) \leq p_3.$

The same procedure works for {1, 2, 4, 7} and gives simple probabilities ($p_1$, $p_2$ or $p_3$) in the remaining 5 adequate sets. We have $\frac{1}{q_1} \geq \frac{1}{q_2} + \frac{1}{q_3}$, so $q_1 < q_2$; we get: $p_1 > p_2 \geq p_3$. Conclusion: {4, 5, 6, 7} is absolute dominant when das = 4.

CASE 1: $\frac{1}{q_1} > \frac{1}{q_2} + \frac{1}{q_3}$. {4, 5, 6, 7} is absolute dominant with winning probability $1 - q_1 = p_1$ and decision matrix ϵ, where $\frac{1}{q_1} > \frac{1}{q_2} + \frac{1}{q_3} \geq 4$, so $p_1 > \frac{3}{4}$.

CASE 2: $\frac{1}{q_1} < \frac{1}{q_2} + \frac{1}{q_3}$. There are 4 potential dominant sets: {0, 7}, {1, 6}, {2, 5}, {3, 4}. The last three are isomorphic and because of $A \geq B \geq C \geq D$ we analyze the battle between A and D.

CASE 2.1: α and δ are both optimal: A = D, so: $(\frac{p_3}{q_3}\frac{p_2}{q_2} - 1)(\frac{p_1}{q_1} - 1) = 0 \Leftrightarrow ((p_2 = q_2 = \frac{1}{2}) \wedge (p_3 = q_3 = \frac{1}{2})) \vee (p_1 = q_1 = \frac{1}{2}) \Leftrightarrow (\frac{1}{2} = p_3 = p_2 \leq p_1 < 1) \vee (p_1 = p_2 = p_3 = \frac{1}{2}) \Leftrightarrow (\frac{1}{2} = p_3 = p_2 \leq p_1 < 1) \Leftrightarrow p_2 = \frac{1}{2}$, and the optimal probability is $1 - (p_1 q_2 q_3 + q_1 p_2 p_3) = \frac{3}{4}$.

CASE 2.2: Only δ is optimal: A > D, so $p_2 > \frac{1}{2}$; optimal probability: $1 - (p_1 q_2 q_3 + q_1 p_2 p_3) = p_1 + q_1 q_2 q_3 (\frac{1}{q_2} + \frac{1}{q_3} - \frac{1}{q_1}) > p_1$.

CASE 3: $\frac{1}{q_1} = \frac{1}{q_2} + \frac{1}{q_3}$.

CASE 3.1: $p_2 = \frac{1}{2}$. Optimal probability: $1 - (p_1 q_2 q_3 + q_1 p_2 p_3) = \frac{3}{4}$.

CASE 3.2: $p_2 > \frac{1}{2}$. Optimal probability: $1 - (p_1 q_2 q_3 + q_1 p_2 p_3) = p_1$, and $\frac{1}{q_1} = \frac{1}{q_2} + \frac{1}{q_3} > 4$, so $p_1 > \frac{3}{4}$. □

Note 1: Instead of $\frac{1}{q}$ we can also use $\frac{p}{q}$. We get $\frac{p_1}{q_1} = \frac{p_2}{q_2} + \frac{p_3}{q_3} + 1$ instead of $\frac{1}{q_1} = \frac{1}{q_2} + \frac{1}{q_3}$.
Note 2: {1, 6}, {2, 5} and {3, 4} are isomorphic, but this doesn't imply B = C = D: the probabilities are not always invariant by a bijection of the players.
Note 3: We observe that the well-known strategy α is only dominant when $(\frac{1}{q_1} \leq \frac{1}{q_2} + \frac{1}{q_3}) \wedge (q_2 = \frac{1}{2}) \Leftrightarrow (p_2 = p_3 = \frac{1}{2}) \wedge (\frac{1}{2} \leq p_1 \leq \frac{3}{4})$.
Asymmetric three person two color Hat Game
In this section we study the three person two color asymmetric Hat Game. For each player, let p be the probability to get a white hat and q the probability to get a black hat. Without loss of generality we may assume (asymmetric case): $\frac{1}{2} < p < 1$.

Theorem 3. In the asymmetric three person two color hat game we have maximal probability $1 - pq$ of winning the game, with decision matrix (a dash denotes pass):

          00  01  10  11
player 1:  0   -   -   1
player 2:  -   0   1   -
player 3:  -   0   1   -

Proof. Use the result of CASE $(\frac{1}{q_1} < \frac{1}{q_2} + \frac{1}{q_3}) \wedge (p_2 > \frac{1}{2})$ in Theorem 2. The optimal probability is $1 - (p_1 q_2 q_3 + q_1 p_2 p_3) = 1 - pq$. □
Symmetric two color three person Hat Game
In this section we focus on the symmetric Hat Game with two colors and 3 players. Each player gets a white hat with probability $\frac{1}{2}$ and a black hat with probability $\frac{1}{2}$.

Theorem 4. For the symmetric three person two color Hat Game the maximal probability is $\frac{3}{4}$ and the optimal decision matrices are (a dash denotes pass):

          00  01  10  11
player 1:  0   -   -   1
player 2:  -   0   1   -
player 3:  -   0   1   -

and:

          00  01  10  11
player 1:  1   -   -   0
player 2:  1   -   -   0
player 3:  1   -   -   0

Proof. Use the result of CASE $(\frac{1}{q_1} < \frac{1}{q_2} + \frac{1}{q_3}) \wedge (p_2 = \frac{1}{2})$ in Theorem 2. □
W. Blum. Denksport für Hutträger. Die Zeit, May 3, 2001.

J. Buhler. Hat tricks. Math. Intelligencer 24 (2002), 44-49.

E. Burke, S. Gustafson, G. Kendall. A puzzle to challenge genetic programming. Genetic Programming, 136-147, Lecture Notes in Computer Science, Springer, 2002.

S. Butler, M. Hajiaghayi, R. Kleinberg, T. Leighton. Hat guessing games. SIAM J. Discrete Math. 22 (2008), 592-605.

G. Cohen, I. Honkala, S. Litsyn, A. Lobstein. Covering Codes. North-Holland, Mathematical Library 54, 1997.

T. Ebert. Applications of recursive operators to randomness and complexity. Ph.D. Thesis, University of California, Santa Barbara, 1998.

T. Ebert, W. Merkle, H. Vollmer. On the autoreducibility of random sequences. SIAM J. Comp. 32 (2003), 1542-1569.

M. Gardner. The 2nd Scientific American Book of Mathematical Puzzles & Diversions. Simon and Schuster, New York, 1961.

C. Hardin, A. Taylor. The Mathematics of coordinated inference: A study of generalized hat problems. Springer International Publishing Switzerland, 2013.

B. Johnson. http://mathstat.slu.edu/ johnson/public/maths/hatproblem.pdf

G. Kéri. Tables for bounds on covering codes. www.sztaki.hu/ keri/codes

M. Krzywkowski. On the hat problem, its variations, and their applications. Annales Universitatis Paedagogicae Cracoviensis Studia Mathematica 9 (1), 55-67, 2010.

H. Lenstra, G. Seroussi. On hats and other covers (Extended Summary). arXiv:cs/0509045v1 [cs.IT], 15 Sep 2005.

J. Poulos. Could you solve this 1 million hat trick? abcNews, November 29, 2001.

S. Robinson. Why mathematicians now care about their hat color. The New York Times, Science Times Edition, page D5, April 10, 2001.

Uthaipon Tantipongpipat. A combinatorial approach to Ebert's hat game with many colors. Electron. J. Combin., 21(4): Paper 4.33, 18, 2014.

P. Winkler. Mathematical Mind-Benders. A.K. Peters, Wellesley, Massachusetts, 2007.

Amsterdam University of Applied Sciences, Amsterdam, The Netherlands. Email address: [email protected]
| [] |
[
"Prefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure (Full Version)",
"Prefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure (Full Version)"
] | [
"Adi Kaufman \nTel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n\n",
"Moshik Hershcovitch \nTel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n\n",
"Adam Morrison \nTel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n\n"
] | [
"Tel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n",
"Tel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n",
"Tel Aviv University\nTel Aviv University & IBM Research\nTel Aviv University\n"
] | [] | Key-value stores typically leave access control to the systems for which they act as storage engines. Unfortunately, attackers may circumvent such read access controls via timing attacks on the key-value store, which use differences in query response times to glean information about stored data.To date, key-value store timing attacks have aimed to disclose stored values and have exploited external mechanisms that can be disabled for protection. In this paper, we point out that key disclosure is also a security threat-and demonstrate key disclosure timing attacks that exploit mechanisms of the key-value store itself.We target LSM-tree based key-value stores utilizing range filters, which have been recently proposed to optimize LSMtree range queries. We analyze the impact of the range filters SuRF and prefix Bloom filter on LSM-trees through a security lens, and show that they enable a key disclosure timing attack, which we call prefix siphoning. Prefix siphoning successfully leverages benign queries for non-present keys to identify prefixes of actual keys-and in some cases, full keys-in scenarios where brute force searching for keys (via exhaustive enumeration or random guesses) is infeasible. | null | [
"https://export.arxiv.org/pdf/2306.04602v1.pdf"
] | 259,095,506 | 2306.04602 | 0169cd1378877d58a3e5b49ea33914c265b843b1 |
Prefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure (Full Version)
Adi Kaufman
Tel Aviv University
Tel Aviv University & IBM Research
Tel Aviv University
Moshik Hershcovitch
Tel Aviv University
Tel Aviv University & IBM Research
Tel Aviv University
Adam Morrison
Tel Aviv University
Tel Aviv University & IBM Research
Tel Aviv University
Prefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure (Full Version)
Key-value stores typically leave access control to the systems for which they act as storage engines. Unfortunately, attackers may circumvent such read access controls via timing attacks on the key-value store, which use differences in query response times to glean information about stored data.To date, key-value store timing attacks have aimed to disclose stored values and have exploited external mechanisms that can be disabled for protection. In this paper, we point out that key disclosure is also a security threat-and demonstrate key disclosure timing attacks that exploit mechanisms of the key-value store itself.We target LSM-tree based key-value stores utilizing range filters, which have been recently proposed to optimize LSMtree range queries. We analyze the impact of the range filters SuRF and prefix Bloom filter on LSM-trees through a security lens, and show that they enable a key disclosure timing attack, which we call prefix siphoning. Prefix siphoning successfully leverages benign queries for non-present keys to identify prefixes of actual keys-and in some cases, full keys-in scenarios where brute force searching for keys (via exhaustive enumeration or random guesses) is infeasible.
Introduction
Key-value stores serve as the storage engines of many cloud and enterprise systems, from object caches [49,52,54] through stream processing [6,14,61] to database systems [2,31,33,46]. Performance of these modern data-intensive systems often depends on their key-value storage engine's performance [58]. Consequently, research on key-value stores overwhelmingly focuses on efficiency: from I/O efficiency of writes [22,23], point queries [20,21], and range queries [50,73] to memory efficiency [26,48], energy efficiency [5], multi-core scalability [39,65], and reducing I/O write amplification [58].
But systems also depend on their key-value storage engine for the security of stored data. This dependency is not obvious, because key-value stores typically provide only a dictionary abstraction without access control mechanisms [18,32,40,45], leaving access control to the system. Systems enforce access control by mediating user accesses to the key-value store, often based on access control lists (ACLs) stored as value metadata in the key-value store. While this approach blocks users from directly making unauthorized queries, users may still be able to indirectly glean information about restricted data if the key-value store is vulnerable to timing attacks [11].
A timing attack exploits differences in query response times to glean information about stored data. A system using a key-value store that is vulnerable to timing attacks can itself become vulnerable to such attacks, because the system's query response time depends on the storage engine's response time, making differences in key-value query response times manifest as differences in the system's response times.
To date, key-value store timing attacks [62,63] have aimed to disclose stored values. We point out, however, that key disclosure is also a security threat. In some systems, keys can explicitly contain secret data. For example, database systems that use key-value storage engines (e.g., CockroachDB, YugabyteDB, or MyRocks) encode rows (or subsets of rows) onto keys [7,28,30,34]. This makes key disclosure equivalent to database data disclosure. Keys may also be implicitly secret, with users expecting them to be hard to obtain. For instance, in object storage systems, such as Amazon S3, identifying valid keys may create an insecure direct object reference vulnerability [55], which enables attackers to probe access to the objects associated with the disclosed keys.
Unfortunately, resilience to timing attacks is not a goal in existing key-value efficiency work; in fact, such resiliency can be at odds with improved performance. In this paper, we demonstrate this trade-off: we analyze key-value store performance mechanisms through a security lens and show that they enable a key disclosure timing attack.
We focus on write-optimized key-value stores based on log-structured merge (LSM) trees [56], which are in widespread use [12,13,17,20,22,24,32,39,47,58,64,68]. In these designs, data in secondary storage consists of multiple immutable files called SSTables. LSM-trees can efficiently sustain write-intensive workloads, but queries may require multiple I/Os to search the many SSTables [56,64]. LSM-trees minimize unnecessary I/Os by issuing the I/O only if the queried key is likely to be in the SSTable. Likelihood is determined by querying an in-memory filter [10], which space-efficiently approximately represents the SSTable's contents. Specifically, filter queries can make "one-sided" errors: if the queried key is present in the SSTable, then the filter always returns true; but for a small fraction of non-present keys, the filter might return a false positive response.
Standard filters can answer point (single-key) queries [8,10,35], but do not support range queries of the form "does the SSTable contain a key in range [X,Y]." Consequently, LSM-tree range queries must search the many SSTables, performing multiple superfluous I/Os [73]. To address this problem, recent work has proposed range filters, which are filters that support range queries in addition to point queries. Range filters such as SuRF [73] and RocksDB's prefix Bloom filter (PBF) [27] compactly store some or all prefixes of each of the SSTable's keys, and leverage this information to answer range and point queries.
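To make the prefix-storing idea concrete, the following toy filter stores every prefix of every inserted key and answers point and prefix-range queries; it uses an exact set rather than a real bit-array Bloom filter, and the class and method names are ours, not RocksDB's API:

class ToyPrefixFilter:
    def __init__(self):
        self.prefixes = set()

    def insert(self, key):
        for i in range(1, len(key) + 1):      # store all prefixes of the key
            self.prefixes.add(key[:i])

    def may_contain(self, key):               # point query
        return key in self.prefixes

    def may_contain_prefix(self, prefix):     # range query by shared prefix
        return prefix in self.prefixes

f = ToyPrefixFilter()
f.insert("user123")
print(f.may_contain_prefix("user"))  # True: some stored key starts with "user"
print(f.may_contain("user999"))      # False: the SSTable read can be skipped
print(f.may_contain("user12"))       # True although "user12" is not stored:
                                     # the false-positive class prefix siphoning exploits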
From a security perspective, however, we show that certain range filters enable a key disclosure timing attack on LSM-trees. We describe an attack framework, called prefix siphoning, which exploits general range filter characteristics present in both SuRF and PBF. Prefix siphoning successfully leverages benign point queries for non-present keys to identify prefixes of actual keys-and in some cases, full keys-in scenarios where brute force searching for keys (via exhaustive enumeration or random guesses) is infeasible.
Prefix siphoning targets systems with the common design paradigm of storing a key's ACLs as part of its value [1,4], which means that to check access permissions, the system's query handling always tries to read the queried key's value from the key-value store. Prefix siphoning exploits this property to determine if a random key is one on which the LSM-tree's filter returns a false positive. This is possible because whether the filter returns true or false can be determined by the attacker observing the query's response time, as the filter's response decides whether the LSM-tree performs I/Os. For range filters meeting our characterization, finding a false-positive key implies that the false-positive key shares a prefix with some stored key. Prefix siphoning then performs further point queries, tweaking the queried key, to maximize the length of the disclosed prefix. Prefix siphoning can sometimes subsequently perform a limited enumeration search to fully identify the stored key. Our prefix siphoning implementation performs multiple such steps concurrently, ultimately extracting multiple keys or prefixes.
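A rough sketch of this probe-and-extend loop, built on the ToyPrefixFilter sketched above: the timing oracle is simulated by querying the filter directly, whereas the real attack infers the filter's answer from response times; the enumeration bounds and helper names are illustrative only:

import itertools, string

ALPHABET = string.ascii_lowercase + string.digits

def filter_positive(key, f):
    # Stand-in for the timing side channel: a slow response (an I/O was
    # issued) tells the attacker that the filter answered true for `key`.
    return f.may_contain(key)

def siphon_prefix(f, seed_len=3, max_len=16):
    # Phase 1: probe short non-present keys until one triggers a positive
    # (the paper uses random guesses; a toy key space allows enumeration).
    hit = next((''.join(t) for t in itertools.product(ALPHABET, repeat=seed_len)
                if filter_positive(''.join(t), f)), None)
    if hit is None:
        return None
    # Phase 2: greedily extend the disclosed prefix one character at a time.
    while len(hit) < max_len:
        ext = next((hit + c for c in ALPHABET if filter_positive(hit + c, f)), None)
        if ext is None:
            break
        hit = ext
    return hit

f = ToyPrefixFilter()
f.insert("user123")
print(siphon_prefix(f))  # "user123": the stored key is fully disclosed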
We evaluate prefix siphoning against SuRF and PBF analytically as well as empirically and demonstrate its feasibility in practice. For example, we successfully use prefix siphoning to extract 64-bit stored keys from a RocksDB [32] datastore employing SuRF in minutes, whereas brute force search of this key space is infeasible. Our analysis and evaluation also quantify the cost of prefix siphoning, showing that it effectively reduces the key search space size by multiple orders of magnitude. For instance, SuRF prefix siphoning requires ≈ 10 M queries to disclose a key from a 50 M 64-bit key dataset, implying a 40992× reduction of the key search space size.
Our results draw attention to the security vs. performance trade-offs in key-value store design, and encourage practitioners and researchers to evaluate the security impact of their work. We hope that our characterization of vulnerable range filters will spur research on more secure filters.
Background
This section provides background on key-value stores ( § 2.1), LSM-trees ( § 2.2), and filters ( § 2.3).
Key-value stores
A key-value store exposes a dictionary-like abstraction with the following operations.
• put(k, v). A put stores a mapping from key k to value v. If the key is already present in the store, its value is updated.
• get(k). A get (or point query) returns the value associated with the requested key.
• range_query(k1, k2). A range query returns all key-value pairs falling within the given range.
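To make this abstraction concrete, here is a minimal in-memory sketch of the interface in Python (purely illustrative; it is not the implementation of any particular store):

```python
class KVStore:
    """Minimal in-memory key-value store illustrating the dictionary-like API."""

    def __init__(self):
        self._data = {}

    def put(self, k, v):
        # Stores k -> v; overwrites the value if k is already present.
        self._data[k] = v

    def get(self, k):
        # Point query: returns the value for k, or None if k is not present.
        return self._data.get(k)

    def range_query(self, k1, k2):
        # Returns all pairs with k1 <= key <= k2. This linear scan is for
        # clarity; real stores keep keys sorted to answer this efficiently.
        return [(k, v) for k, v in sorted(self._data.items()) if k1 <= k <= k2]
```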
Due to their simple and general abstraction as well as high performance, key-value stores serve as the storage engines for many, more complex systems. Examples of such systems include database systems (e.g., Cassandra [47], MongoDB [2], and MySQL [3]) and storage systems (e.g., CEPH [1]).
LSM-based data stores
The log-structured merge (LSM) tree [56] is a popular choice as the core storage structure for write-optimized key-value stores, which must sustain write-intensive workloads. An LSM-tree consists of levels, each of which contains multiple immutable static sorted table (SSTable) files storing key/value pairs. Two SSTables at the same level never overlap in the key range they store, but SSTables at different levels may overlap.
A put request inserts the key-value pair into an in-memory buffer called the Memtable, which is the LSM-tree's only mutable storage object. Once the Memtable fills up, its data is flushed to secondary storage as an SSTable file. The LSM-tree periodically performs compaction, where it unifies SSTables between levels to eliminate duplicate (stale) key-value pairs.
A get query searches for the target key in a top-down manner: first in the Memtable and subsequently in the relevant SSTable (if it exists) in each level. Searching an SSTable requires I/Os to read it from secondary storage. Once the key is found, its value is returned and the query completes.
However, this design penalizes queries, which may require multiple I/Os to search many SSTables [56,64]. In particular, a get() for a non-present key (not associated with any value) searches every level before failing. This not only increases the query response time, but may "thrash" the page cache by reading in many SSTables which will not be accessed later.
LSM-trees minimize unnecessary I/Os by issuing the I/O only if the queried key is likely to be in the SSTable. Likelihood is determined by querying an in-memory filter (described in § 2.3), which space-efficiently and approximately represents the SSTable's contents. The LSM-tree only reads an SSTable from secondary storage if its filter returns true for the queried key. As a result, most non-present key queries can respond without performing I/Os. Likewise, a range filter ( § 2.3.1) can answer both point and range queries with one-sided errors. Using a range filter instead of a standard filter enables an LSM-tree to avoid superfluous I/Os also for range queries, which can improve range query throughput by orders of magnitude [50].
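The following Python sketch shows how the filter gates SSTable I/O on the get() path; the function and parameter names are our own, and read_from_disk stands in for the expensive I/O. The timing difference between the two branches is exactly what the attack of § 5 measures:

```python
def lsm_get(key, memtable, levels, read_from_disk):
    """Sketch of an LSM-tree point query. `levels` is a list of levels, each
    a list of (filter, sstable) pairs; `read_from_disk` performs the I/O and
    returns the stored value, or None if the key is absent from the SSTable."""
    if key in memtable:                          # served entirely from memory
        return memtable[key]
    for level in levels:
        for filt, sstable in level:
            if not filt.query(key):              # filter miss: no I/O at all
                continue
            value = read_from_disk(sstable, key)  # I/O happens only here
            if value is not None:
                return value                     # filter was a true positive
            # else: the filter returned a false positive; keep searching
    return None                                  # non-present key
```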
Filters
A filter [10] is a data structure used to approximately represent a set D of keys. A filter can be immutable or dynamic. An immutable filter is provided with D upon its creation and can subsequently only be queried. A dynamic filter learns D dynamically, via insert operations.
Responses for filter queries allow "one-sided" errors: if k ∈ D, then a query for k returns true; but for a fraction of keys k ∉ D, a query for k might answer true instead of false. We say k is a positive/negative key if a filter query for k answers true or false, respectively. A positive key k is a false positive if k ∉ D. We also say that the filter passes positive keys and rejects negative keys.
Filters are compared by their space efficiency and false-positive rates. Space efficiency is measured in bits per key. The false-positive rate (FPR) of a filter is the probability over keys not in D of being a false positive, i.e., FPR = FP/(FP + NK), where FP is the number of false-positive keys and NK is the number of negative keys. Filters typically have configurable FPRs, with lower FPRs requiring more bits per key for increased accuracy [8,10,35].
Bloom filters A Bloom filter [10] is a widely-used dynamic filter (e.g., the default filter of RocksDB). It consists of an m-bit array and j hash functions H1, . . . , Hj. The parameters m and j determine the filter's FPR and space. Insertion of key k sets the bit indexes H1(k), . . . , Hj(k). A query returns true if and only if all bit indexes H1(k), . . . , Hj(k) are set.
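A minimal Bloom filter sketch follows; the double-hashing scheme for deriving the j indexes is one common choice, not necessarily RocksDB's:

```python
import hashlib

class BloomFilter:
    def __init__(self, m, j):
        self.m, self.j = m, j
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _indexes(self, key: bytes):
        # Derive j indexes from two base hashes (Kirsch-Mitzenmacher style).
        h = hashlib.sha256(key).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big") | 1  # odd step avoids cycles
        return [(h1 + i * h2) % self.m for i in range(self.j)]

    def insert(self, key: bytes):
        for i in self._indexes(key):
            self.bits[i] = 1

    def query(self, key: bytes) -> bool:
        # Always true for inserted keys; may be a false positive otherwise.
        return all(self.bits[i] for i in self._indexes(key))
```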
Range filters
A range filter is a filter that also supports range queries with one-sided error: a query for [a, b] returns true if there exists k ∈ D ∩ [a, b], but might also return true if D ∩ [a, b] is empty.
Motivation: avoiding key disclosure
We observe that keys stored in a key-value storage engine can contain sensitive data. It is therefore desirable that users are not able to efficiently discover stored keys that they are not authorized to access. Of course, users can always guess such keys and check if their queries return an authorization error, but such brute force searches are infeasible on large key spaces. The goal is for brute force search to be the only attack option, i.e., to block more efficient key extraction attacks.
Explicitly secret keys Some systems encode secret data in stored keys, which makes key disclosure equivalent to disclosure of the encoded data. For example, database systems such as CockroachDB, YugabyteDB, and MyRocks store table rows as values in a key-value storage engine, with the associated key consisting of the table's id and the row's primary key (one of the cell values). The motivation for this technique is that it enables the database system to perform efficient primary key lookups using key-value store range queries [7,28,30,34].
Implicitly secret keys In many cases, keys are tacitly assumed to be secret or, at least, hard to guess. One example of implicitly secret keys are object identifiers. Many web applications and object storage systems maintain object id-to-value mappings in a key-value store. Key disclosure thus allows attackers to probe access to the associated objects, resulting in an insecure direct object reference vulnerability [55]. While objects typically have ACLs, users often neglect to configure these ACLs. This is not a hypothetical concern: for instance, there are numerous scanning tools for "open" (unprotected) Amazon S3 objects [9,25,57,60,69,70], and open S3 objects have led to exfiltration of employee information, personal identification information, and other sensitive data [29].
Threat model
We consider a high-level system, such as a database system or object store, that utilizes a key-value storage engine to respond to user queries. Key ACLs are stored as part of the value associated with the key. As the high-level system performs key-value queries to satisfy a user's query, it checks the ACL of each key it accesses by inspecting the key's value. If the user is not authorized to read a key, the system returns a failure response to the user.
The attacker's goal is to identify keys stored in the system's key-value storage engine. The attacker cannot compromise the system (e.g., to run attack code) and cannot eavesdrop on requests performed by other users and/or on their responses. The attacker can only interact with the system by making requests via its interfaces, such as a representational state transfer (REST) API [36,Chapter 5].
We assume that the attacker can craft their requests in a way that causes the high-level system to make key-value store point queries for arbitrary keys (i.e., chosen by the attacker) while processing the request. For simplicity, we refer to this process as the attacker "querying the key-value store."
We make no assumption about the attacker's physical location with respect to the attacked system. We only assume that the attacker can observe microsecond-level timing differences in the response times of queries for different keys. Prior work has shown that this assumption is true for attacks over both local and wide area networks. For instance, Crosby et al. were able to measure a difference of 20 µs over the circa 2009 Internet (and 100 ns over a local area network) [19]. This ability can be improved in specific cases. When attacking a system hosted in the public cloud, for example, the attacker can turn themselves into a local-area attacker by placing themselves in the datacenter hosting the target. Moreover, systems that process different requests concurrently (e.g., HTTP/2 servers) are vulnerable to concurrency-based timing attacks [38], which can observe timing differences of 100 ns over the Internet.
Prefix siphoning
Prefix siphoning is a general template for conducting timing attacks, extracting partial or full keys, on systems that use an LSM-tree based storage engine with a certain type of vulnerable range filter (for both point and range queries). The class of vulnerable range filters contains the filters SuRF [73] and RocksDB's prefix Bloom filter (PBF) [27].
Prefix siphoning exploits range filters that respond to point queries based on key prefix information, which exists to support range queries; i.e., filters where range query support affects the point query implementation. Accordingly, prefix siphoning is based only on point queries and does not perform range queries. Henceforth, therefore, the term "query" always refers to a get() point query. We leave exploring attacks against range queries to future work.
In the following, we describe the attack's high-level ideas ( § 5.1), characterize the class of vulnerable filters ( § 5.2), and present the attack template ( § 5.3). We describe instantiations of the attack against SuRF and PBF in § § 6-7.
Notation We treat keys as sequences of symbols over an alphabet Σ (e.g., bytes). When x denotes a key or a set, then |x| refers to the number of symbols or elements, respectively, that x contains.
High-level ideas
Prefix siphoning exploits an inherent trait of filter use in LSM-trees: that whether a key "passes" the filter determines if the LSM-tree searches the SSTable for the key to satisfy a query. This means that for SSTable files that do not reside in the OS page cache, the filter's output for a key significantly affects the LSM-tree's query response time. If the filter returns false for the key, the response is satisfied with only main memory access; otherwise, the LSM-tree needs to perform I/Os to read SSTables from secondary storage. Even for fast storage such as NVMe devices, the difference in query response times between these two cases is enough to affect the system's overall response time in an attacker-measurable way.
This basic filter trait suffices to mount an "approximate membership test" timing attack. The attack simply queries for the target key k and measures the response time. If the response time is fast (i.e., k is rejected by the filters), then k is definitely not stored in the LSM-tree. Otherwise (i.e., k passes some filter), k is likely in the LSM-tree. The key k might also be a filter false positive and not exist in the LSM-tree, which occurs with a probability bounded by the filter's FPR.
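A sketch of this timing-based membership test; the query function, cutoff value, and trial count are placeholders, and the cutoff comes from the learning phase of § 5.3.1:

```python
import time

def is_filter_positive(query, key, cutoff_s, trials=4):
    """Returns True if `key` appears to pass some filter, judged purely by
    query response time: fast responses are served from memory (filter
    rejection), slow responses indicate SSTable I/O (filter pass)."""
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        query(key)                     # point query against the target system
        total += time.perf_counter() - start
    return (total / trials) > cutoff_s
```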
Prefix siphoning starts by randomly generating keys until it finds a key that "passes" the membership test above. For random keys, passing the test overwhelmingly means that the key is a filter false positive. Crucially, it takes just hundreds of attempts to find a false-positive key, because filters are typically configured for FPRs of a few percents for space efficiency reasons [73].
Our main observation is that in vulnerable range filters, a false-positive key κ likely shares a prefix with some stored key k, whereas negative keys (rejected by the filter) do not (at least with high probability). The crux of a prefix siphoning attack is an algorithm exploiting this trait to identify the shared prefix through O(|κ|) further queries for modified keys iteratively derived from the initial false-positive key.
The revealed prefix of k can already contain sensitive information. But if the system's query responses distinguish between failures due to target key non-presence and lack of authorization, prefix siphoning can fully extract k by performing brute force search of the unknown suffix, thereby extending the revealed prefix to k.
Of course, a system whose responses distinguish between non-present and unauthorized keys is also vulnerable to "brute force" key guessing or enumeration attacks using the above "membership test" primitive. But such attacks are infeasible for many key spaces (e.g., 64-bit or string keys). The point of prefix siphoning is to narrow down the search space by exploiting vulnerable range filters. Moreover, prefix siphoning extracts key prefixes even if the target system's responses do not reveal whether a key is non-present or unauthorized, whereas the "membership test" primitive cannot.
Vulnerable range filter characterization
We denote an instance of the filter by F and the set of keys it represents by D. A range filter is vulnerable to prefix siphoning if it has the following characteristics, denoted C1-C2. They say that a false-positive key κ likely shares a prefix with some key from D and that an attacker can efficiently identify this prefix by making queries for keys derived from κ.
C1 If κ is a false-positive key for F, then with high probability, κ shares a prefix with some k ∈ D.
C2 There exist the following probabilistic algorithms, which work by querying the system:
1. FindFPK(): Using an expected constant number of queries, outputs a random false-positive key κ.
2. IdPrefix(κ): Given a false-positive key κ, uses O(|κ|) queries to identify the prefix k′ that κ shares with some key k ∈ D, if such a prefix exists; otherwise, the output is unspecified.
The FindFPK and IdPrefix algorithms are specific to the range filter design, and need to be developed by the attacker. 1 We refer to designing such algorithms for a range filter as instantiating the attack against that filter. C2 implies existence of a timing attack, and is therefore formally sufficient to characterize the vulnerability. In practice, however, our attack instantiations rely on fundamental properties of filter use in LSM-trees. To highlight this aspect of the attacks, we explicitly capture these properties in C3.

C3 1. A get(k) query's response time is measurably lower if k misses in every filter than if k hits in some filter.
2. The filter's FPR is small but non-negligible (e.g., 1% or 0.1%).
C3(1) implies that it is possible to distinguish negative from positive keys using query response times. It is trivially true because LSM-trees employ filters to speed up queries for which SSTable searching is superfluous, such as filter misses. Our attacks in this paper exploit microsecond-level time differences between queries satisfied completely from main memory and those that require I/O to secondary storage. (There remain time differences between queries that read an in-memory SSTable residing in the OS page cache and those that perform no SSTable read at all, due to a filter miss. We leave exploiting such smaller time differences to future work.)
C3 (2) implies that generating keys uniformly at random will generate a false-positive key with hundreds to thousands of attempts, on average. It holds because in practice, filters are typically configured with small but non-negligible FPRs (e.g., 0.5%-5%), as negligibly small FPRs blow up the filter's memory consumption [73]. 2
Prefix siphoning template
Prefix siphoning consists of two phases. First, a preliminary phase learns to distinguish queries of negative and positive keys ( § 5.3.1). The second phase consists of multiple rounds, each of which extracts a key or key prefix ( § 5.3.2). Rounds are run concurrently (see § 9).
Learning to distinguish positive from negative keys
The attack starts with a preliminary phase that builds a distribution of query response times, which is used by the second phase to distinguish positive from negative keys. The distribution is built by measuring response times of multiple get() requests for random keys. With large key spaces, such random keys are mostly negative keys, but a small (though non-negligible) fraction will be positive (due to C3). Such positive keys are overwhelmingly likely to be false positives, but that does not matter for this step, which is only concerned with distinguishing negative from positive keys, regardless of whether the positive output is correct.
The expected distribution observed is a bimodal distribution, with peaks corresponding to the average response time of negative and positive keys. From this distribution, the attacker can derive a cutoff value that likely distinguishes negative (fast) from a positive (slow) queries.
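A sketch of how such a cutoff could be derived; the bucket granularity and sparsity threshold are illustrative choices:

```python
import collections
import time

def learn_cutoff(query, rand_key, n=100_000, bucket_us=5):
    """Builds a histogram of response times for random-key queries and
    returns a cutoff (seconds) separating the dominant fast (negative)
    peak from the slow (positive) tail of the bimodal distribution."""
    hist = collections.Counter()
    for _ in range(n):
        key = rand_key()
        start = time.perf_counter()
        query(key)
        us = (time.perf_counter() - start) * 1e6
        hist[int(us // bucket_us)] += 1
    # Walk upward from the dominant peak to the first sparse bucket.
    b = max(hist, key=hist.get)
    while hist.get(b, 0) > n * 1e-4:  # sparsity threshold is illustrative
        b += 1
    return (b * bucket_us) / 1e6
```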
Extracting keys
This phase consists of multiple rounds, each of which extracts a key. Each round consists of three steps: (1) finding a false-positive key κ, (2) identifying the prefix that κ shares with some stored key k, and, when possible, (3) extending the prefix to extract k. Rounds are run concurrently ( § 9).
Steps (1) and (2) simply invoke the attacker's FindFPK and IdPrefix algorithms, respectively. These steps are actually the "meat" of the attack, and we later describe their instantiations for SuRF ( § 6) and RocksDB's prefix Bloom filter ( § 7).
Whether step (3) is possible depends on the properties of the attacked system (and this is why it is not part of the vulnerable range filter characterization). If the system's query responses distinguish between failures due to target key absence and lack of authorization, then the attacker can extend the revealed prefix k′ with some symbol sequence α and query for the key k′α. The response will indicate lack of authorization if and only if k′α is a valid key. The attacker can thus iterate over all possible suffixes until k is found. Because k is not known to the attacker, they must first try all possible single-symbol extensions, then all two-symbol extensions, and so on. This process requires O(|Σ|^(|k|−|k′|)) queries, which can be several orders of magnitude less than a full-key brute force search. Crucially, step (3) only attempts to extend "long" prefixes, for which extension is feasible. Other prefixes are discarded.
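A sketch of step (3)'s suffix search for fixed-length keys, where probe is an assumed primitive that returns "unauthorized" only for keys that are present but inaccessible to the requesting user:

```python
import itertools

def extend_prefix(probe, prefix: bytes, key_len: int, alphabet: bytes):
    """Tries every suffix of the required length behind a revealed prefix.
    With fixed-length keys only one suffix length needs to be searched;
    returns the full stored key, or None if no extension is present."""
    for suffix in itertools.product(alphabet, repeat=key_len - len(prefix)):
        candidate = prefix + bytes(suffix)
        if probe(candidate) == "unauthorized":  # present but inaccessible
            return candidate
    return None
```

Because the search is exponential in the unknown suffix length, the attack only invokes this step on prefixes long enough for the remaining space to be feasible.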
Rationale for step decomposition For fixed-length keys, it might seem that the IdPrefix algorithm (step (2)) for identifying the prefix is superfluous. After all, given that κ shares a prefix with some stored key k, the attacker can enumerate all possible suffixes from the end to the beginning, until identifying k. For example, suppose keys are 14-character strings and the attacker has found a false-positive key manchestercars because it shares the prefix manchesterc with the stored key manchestercity. Without knowing (or caring about) the shared prefix, the attacker can start querying for manchestercara, manchestercarb, . . . , manchestercaaa, manchestercaab, and so on (all of which fail due to key absence) until reaching manchestercity, which will fail due to lack of authorization. As before, this process requires O(|Σ|^(|k|−|k′|)) queries and so it theoretically achieves the same results directly, without requiring an IdPrefix algorithm.
Why, then, is existence of an IdPrefix algorithm defined as one of the characteristics of a vulnerable filter? The answer is that without knowledge of the prefix, the attacker cannot efficiently schedule their work in step (3). They cannot distinguish a small suffix space (as in the example above) from a huge space, e.g., if the false-positive key only shared the prefix m with manchestercity.
The IdPrefix algorithm protects us from the above pitfall. By identifying the shared prefix, it enables the attacker to decide whether to try and extend the prefix to a full key. Moreover, when multiple rounds execute concurrently, the attacker can collect many prefixes and then prioritize extending the longest ones.
SuRF prefix siphoning
Here, we instantiate a prefix siphoning attack against LSMtrees employing the SuRF [73] range filter. § 6.1 summarizes SuRF and § 6.2 shows that it is vulnerable to prefix siphoning.
SuRF primer
The succinct range filter (SuRF) [73] is the first proposed general range filter. Like the LSM-tree SSTables it approximates, SuRF is an immutable structure. SuRF can speed up LSM-tree range queries by 5×, but it imposes a modest cost on point queries due to having higher FPRs than a Bloom filter [73].
At a high level, SuRF is a pruned trie. A trie is a tree data structure that stores keys sorted according to the lexicographic order of Σ. Each edge is labeled with a symbol and each node corresponds to the concatenation of all edge labels on the path to that node. Each leaf thus corresponds to a key and each internal node to a key prefix (Figure 1(a)). An internal node can also correspond to a key (if the key set is not prefix-free), which is indicated by one of its fields. For space-efficiency, SuRF uses a succinct trie representation.
SuRF further saves space by pruning the trie. The basic SuRF variant (SuRF-Base) stores the minimum length key prefixes that uniquely identify each key, i.e., shared key prefixes plus the symbol following the shared prefix of each key (Figure 1(b)). SuRF's pruning results in a space-efficient but only approximate representation of the key set.
Both point and range queries are satisfied from the pruned trie structure. A get(k) returns true (possibly erroneously) if and only if the path induced by k terminates at a node associated with a key. For example, in Figure 1(b), BLOOD is a false positive. Range queries rely on the trie's ordered structure. For example, to check if the SuRF contains a key k ∈ [a, b], the query finds the node corresponding to the smallest key ≥ a. If it corresponds to a key > b, the query returns false; otherwise, it returns (possibly erroneously) true.
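Functionally, a SuRF-Base point query is equivalent to asking whether the queried key extends one of the stored pruned prefixes. The following sketch captures this behavior; it ignores the succinct encoding, which affects space but not query answers:

```python
def common_prefix_len(a: str, b: str) -> int:
    n = min(len(a), len(b))
    i = 0
    while i < n and a[i] == b[i]:
        i += 1
    return i

def minimal_unique_prefixes(keys):
    """SuRF-Base keeps, for each key, its shared prefix with the closest
    other key plus one distinguishing symbol. In sorted order, the closest
    key is one of the two neighbors."""
    ks = sorted(keys)
    out = []
    for i, k in enumerate(ks):
        longest = 0
        if i > 0:
            longest = max(longest, common_prefix_len(k, ks[i - 1]))
        if i + 1 < len(ks):
            longest = max(longest, common_prefix_len(k, ks[i + 1]))
        out.append(k[:min(len(k), longest + 1)])
    return out

def surf_base_point_query(prefixes, key: str) -> bool:
    # Positive iff the key extends some stored pruned prefix.
    return any(key.startswith(p) for p in prefixes)

# For {BLUE, BLACK, BLOND} the stored prefixes are {BLA, BLO, BLU}, so a
# query for BLOOD is positive: the false positive of Figure 1(b).
prefixes = minimal_unique_prefixes(["BLUE", "BLACK", "BLOND"])
assert surf_base_point_query(prefixes, "BLOOD")
```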
SuRF variants to reduce FPR SuRF-Base's FPR is data-dependent, i.e., depends on the key set. Compare, for example, two sets of 26 keys: A = {xα | x ∈ {A, . . . , Z}} and B = {αx | x ∈ {A, . . . , Z}}, where α is some long string. For A, SuRF's FPR is nearly 100%, as any key that begins with one of A, . . . , Z (other than the stored keys themselves) is a false positive. But for B, the FPR is extremely small, as only keys that begin with α pass the filter. To improve the FPR, SuRF offers variants that augment SuRF-Base's pruned structure with a few bits per leaf of information about the leaf's suffix. These bits reduce the FPR by allowing queries to reject keys that share a prefix with the stored key but have a different suffix, in exchange for increasing per-key memory consumption.
SuRF-Hash (Figure 1(c)) hashes the leaf's key and stores n bits from the hash value, where n is configurable. SuRF-Real (Figure 1(d)) stores the first m bits of the key's suffix, where m is configurable.
Vulnerability of SuRF
Every SuRF variant has the characteristics defined in § 5.2. C3(1) holds trivially. C3(2) holds empirically: SuRF-Base has an FPR of 4% for random 64-bit keys, and SuRF-Hash reduces this FPR to ≈ 0.1% [73]. C1 holds because in every SuRF variant, every false-positive key κ shares a prefix with some stored key k; in fact, C1 holds with probability 1.
To show that C2 holds, we describe how to efficiently find a false-positive key ( § 6.2.1) and how to identify the prefix that it shares with a stored key ( § 6.2.2). We assume the ability to check if a key is a filter positive or negative key based on measuring query response times. The implementation of this check is described in § 9.
Finding a false-positive key (FindFPK)
For SuRF, our FindFPK algorithm simply generates queries for uniformly random keys until it detects a positive response, based on the cutoff determined in the attack's preliminary learning phase ( § 5.3.1). Due to C3, this step is expected to terminate with a few hundreds to thousands of attempts.
We refer to the random positive key found as a false-positive key, because that is the overwhelmingly likely event. However, the attack still works if, unbeknownst to the attacker, the found key is actually a true positive key.
Identifying a shared prefix (IdPrefix)
For a false-positive key κ, let k = k(κ) be the stored key whose shared prefix k′ with κ is the longest among all stored keys. We write κ = k′α and k = k′β. Our algorithm will output k′.
SuRF-Base/Real
To find k′, we exploit SuRF's structure, namely that any key that is a proper prefix of k′ is a negative key. Let κ = κ1 . . . κn. We repeatedly remove the last symbol from the key, iteratively checking if the keys κ1 . . . κn−1, κ1 . . . κn−2, . . . are negative or positive keys. These keys will be positive until we remove a symbol from k′. Thus, the key checked before a negative key is found is k′. If the attacked system does not support variable-length keys, removing symbols is not possible. In this case, instead of removing symbols, we change them. We iteratively check if the keys κ1 . . . κn−1κ′n, κ1 . . . κ′n−1κn, . . . are negative or positive keys, where κ′i ≠ κi. Similarly to before, if the first negative key found is κ1 . . . κ′j . . . κn, then k′ = κ1 . . . κj.
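A sketch of the fixed-length variant of this IdPrefix algorithm, where is_positive is the timing-based check of § 9 and symbols are bytes:

```python
def id_prefix(is_positive, fp_key: bytes) -> bytes:
    """Given a false-positive key, returns the prefix it shares with a
    stored key by flipping one symbol at a time, from the end forward."""
    key = bytearray(fp_key)
    for i in range(len(key) - 1, -1, -1):
        original = key[i]
        key[i] = (original + 1) % 256      # any symbol != original works
        positive = is_positive(bytes(key))
        key[i] = original                  # restore before the next probe
        if not positive:
            # Changing position i broke the filter match, so the shared
            # prefix is exactly the first i+1 symbols of the key.
            return fp_key[:i + 1]
    return b""  # no shared prefix identified
```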
Overall, the number of requests made is O(|κ|).
SuRF-Hash SuRF-Hash complicates the attack, because modifying κ's suffix can change its hash value, leading to a key that is rejected by SuRF despite sharing the prefix k′. To address this problem, we assume SuRF's hash function hash is public knowledge. (This is a reasonable assumption, because the hash function's purpose is to reduce the FPR, not to provide security.) We perform essentially the same algorithm(s) as for SuRF-Base/Real, but we only query a modified key κ′ if hash(κ′) = hash(κ). We are still essentially assured to find keys to query, because SuRF-Hash stores only a small subset of the hash bits, for space-efficiency reasons. For example, with the recommended 4 hash bits [73] and using 8-bit symbols, on average 1 in 16 symbols tried will yield a hash collision and thus a key usable by the IdPrefix algorithm. Similarly, when trying to extend an identified prefix to a full key (step (3) in § 5.3.2), we can skip querying any candidate key whose hash does not match the false-positive key's hash.
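A sketch of this hash-consistency check; surf_hash_bits is a placeholder for SuRF's actual (public) hash function restricted to the stored bits:

```python
import hashlib

def surf_hash_bits(key: bytes, n_bits: int = 4) -> int:
    # Placeholder for SuRF's public hash, truncated to the n stored bits;
    # the real function is whatever the deployed SuRF was built with.
    return hashlib.sha256(key).digest()[0] >> (8 - n_bits)

def worth_querying(fp_key: bytes, candidate: bytes) -> bool:
    # Query a candidate only if its stored hash bits match the
    # false-positive key's; SuRF-Hash rejects every other candidate anyway.
    return surf_hash_bits(candidate) == surf_hash_bits(fp_key)
```

With 4 stored hash bits, an expected 1 in 16 candidates passes this check, so the attack skips roughly 94% of the queries it would otherwise waste.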
Prefix Bloom filter prefix siphoning
This section instantiates prefix siphoning against LSM-trees using the prefix Bloom filter (PBF) [27]. We describe the PBF in § 7.1 and show its vulnerability in § 7.2.
Prefix Bloom filter primer
The PBF is a Bloom filter-based range filter that supports range queries for ranges expressible as fixed-prefix queries. While PBFs do not provide general range queries, they are currently deployed in real-world key-value stores such as RocksDB [32] and LittleTable [59].
A PBF consists of a Bloom filter and a predetermined prefix length, l. When a key k is inserted into the PBF, both k and its l-bit prefix are inserted into the Bloom filter.
PBF range queries must be for ranges of the form "all keys starting with α," where α is an l-bit string. They are answered by querying the Bloom filter for α. If this query responds false, the dataset does not contain keys within the target range.
The PBF answers point queries by querying the Bloom filter for the queried key. We remark that if the high-level system does not prioritize point query efficiency, the PBF can be configured to only store key prefixes. In this case, the PBF implements a point query for key k by querying its Bloom filter for k's l-bit prefix. This option reduces the PBF's memory consumption but increases the FPR of point queries. This PBF configuration does not affect the success of our attack, so we do not discuss it further.
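A PBF sketch built on the Bloom filter sketched in § 2.3 (for simplicity, keys are byte strings and l is byte-aligned):

```python
class PrefixBloomFilter:
    def __init__(self, bloom, prefix_len_bytes):
        self.bloom = bloom                  # e.g., the BloomFilter of Sec. 2.3
        self.l = prefix_len_bytes

    def insert(self, key: bytes):
        self.bloom.insert(key)              # full key, for point queries
        self.bloom.insert(key[:self.l])     # l-byte prefix, for range queries

    def point_query(self, key: bytes) -> bool:
        # Note: a point query for a string that happens to equal a stored
        # key's l-byte prefix returns true -- the "prefix false positive"
        # exploited in Section 7.2.
        return self.bloom.query(key)

    def range_query(self, prefix: bytes) -> bool:
        # Range "all keys starting with prefix", where len(prefix) == self.l.
        return self.bloom.query(prefix)
```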
Vulnerability of the PBF
The PBF has the characteristics defined in § 5.2. As with SuRF, C3(1) holds trivially. C3(2) holds because the PBF's FPR is based on its Bloom filter's FPR.
The PBF has an important property: it not only has the usual Bloom filter false positives caused by hash collisions but also has what we call prefix false positives. These occur when a PBF point query falsely returns positive for an input κ that is an l-bit prefix of a dataset key, simply because the Bloom filter stores both dataset keys and their l-bit prefixes. This property implies that C1 holds: with probability 1 − FPR, an l-bit false-positive key is actually the prefix of some stored key.
To show that C2 holds, we need only describe how to find prefix false positives ( § 7.2.1). Finding them makes the IdPrefix algorithm of C2 trivial: given an l-bit false positive κ, it outputs κ.
Finding l-bit false-positive keys (FindFPK)
The FindFPK algorithm first determines the length of key prefixes stored in the PBF, l, and then proceeds to guess prefix false positives. Crucially, finding l needs to be performed only once per attack. That is, when running the attack's rounds concurrently ( § 9), we run this step only once.
Once l is known, generating queries for uniformly random l-bit strings will find false-positive keys, similarly to the SuRF attack's FindFPK ( § 6.2.1). Given a set of false-positive l-bit keys thus found, an expected fraction of p/2^l will be prefix false positives, where p is the number of distinct l-bit prefixes of dataset keys. The remaining false positives will be hash-collision Bloom filter false positives. Because we cannot distinguish between the two types of false positives, the attack's later steps must try to extend all of them to full keys.
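For concreteness, a quick calculation with the parameters used later in § 10.4 (50 M keys whose 40-bit prefixes are essentially all distinct, so p ≈ |D|):

```python
p = 50_000_000           # distinct l-bit prefixes of dataset keys (≈ |D| here)
l = 40
guesses = 1_000_000
expected_prefix_fps = guesses * p / 2**l
print(expected_prefix_fps)  # ≈ 45.5, matching the ~46 keys extracted in § 10.4
```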
The crux of the FindFPK algorithm is to identify l. To this end, we rely on the PBF property that made it vulnerable in the first place. For any prefix length l′ ≠ l, the probability of an l′-bit key being a false positive is exactly the filter's FPR. Only for l-bit keys will we observe a "bump" in the probability of a random l-bit key being a false positive, due to the presence of prefix false positives.
Accordingly, the FindFPK algorithm first generates j queries for uniformly random keys of length l′, for every non-trivial prefix length l′ (e.g., l′ ≥ 3). It observes the fraction of false positives found and deduces that l is the length l′ for which the fraction of false positives found is maximal.
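A sketch of this prefix-length detection; j and the length range are tunables:

```python
def find_prefix_length(is_positive, rand_key_of_len, max_len, j=10_000):
    """Returns the prefix length whose random keys test positive most
    often; the stored-prefix length l shows up as a bump above the
    baseline FPR, due to prefix false positives."""
    best_len, best_rate = None, -1.0
    for length in range(3, max_len + 1):     # skip trivial lengths
        hits = sum(is_positive(rand_key_of_len(length)) for _ in range(j))
        rate = hits / j
        if rate > best_rate:
            best_len, best_rate = length, rate
    return best_len
```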
Complexity analysis
The key factor determining prefix siphoning's effectiveness is the probability of FindFPK (step (1) in § 5.3.2) guessing an exploitable key k, which is a false positive whose longest shared prefix with stored keys is of length l, where l is a predetermined constant for which extending k into a full key is feasible (step (3)).
The full version of this paper [41] includes a theoretical analysis of the SuRF and PBF attacks, which is omitted here due to space constraints. We analyze the case of uniformly random keys, which is the worst case for our attack. (If the key distribution is skewed, then (1) the guessing and full-key extraction steps can incorporate this knowledge; and (2) the prefixes SuRF stores are longer, so our attack will identify longer prefixes and thus extend them to full keys faster.)
The analysis derives the probability of FindFPK guessing an exploitable key. This determines the expected number of queries to guess an exploitable key or, equivalently, the number of keys we ultimately expect to extract after investing G guesses in FindFPK. These values also allow comparing the cost (in queries) of prefix siphoning to brute force guessing.
Under the realistic constraint that |D| ≪ 2^l, where D is the dataset (e.g., |D| = 500 M and l = 40), we find that (1) prefix siphoning becomes more effective with growth in dataset size and better FPR, i.e., as the LSM-tree becomes more effective, so does prefix siphoning; and (2) prefix siphoning takes several orders of magnitude fewer queries to guess a key than brute force guessing.
Implementation issues
In previous sections, we assume the attacker can check if a key is a filter positive or negative key, based on measuring query response times. Here, we describe our implementation of this check.
The basic idea is simple. Prefix siphoning's preliminary phase ( § 5.3.1) derives a response time cutoff. Keys whose query response time is below this cutoff are considered negative; otherwise, they are considered positive. However, this cutoff only distinguishes between queries satisfied from memory and those involving I/Os. Once a query for a false-positive key completes, the I/O it performs reads the relevant SSTable into the in-memory page cache. Future queries for false-positive keys covered by this SSTable will thus get satisfied from memory.
To overcome this problem, we exploit the fact that the attack targets some production system, which is assumed to sustain heavy I/O load due to its legitimate operation. This property implies that if the attacker waits after performing a false-positive query, the SSTable brought in will be evicted from the page cache due to legitimate I/O traffic.
Unfortunately, waiting for even a few seconds after every query would make the attack impractical. We solve this challenge by performing attack rounds in a concurrent, breadth-first manner, as described below, instead of working depth-first (finding a false-positive key and proceeding to identify its prefix and then to extract the full key).
Step (1) of § 5.3.2 (FindFPK execution) generates N random keys (false-positive candidates) and measures a four-query average response time for each key to identify false-positive keys. The averages are computed in a breadth-first manner: there are four iterations, each of which performs one query for each key. Waiting for page cache evictions is done only between each iteration.
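A sketch of this breadth-first measurement loop, where wait_for_evictions stands in for the 20-second pause used in § 10.2.2:

```python
def measure_breadth_first(query_time, candidates, wait_for_evictions, rounds=4):
    """Average response time per candidate key, measured one query per key
    per round, so that each round sees a cold page cache for the SSTables
    that false-positive queries pulled in during the previous round."""
    totals = {k: 0.0 for k in candidates}
    for _ in range(rounds):
        for k in candidates:
            totals[k] += query_time(k)
        wait_for_evictions()  # legitimate I/O traffic evicts the SSTables
    return {k: t / rounds for k, t in totals.items()}
```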
Step (2) (IdPrefix) similarly executes iteratively, interleaving the next step of IdPrefix for each false-positive key in each iteration, until all invocations output a prefix. Again, waiting for page cache evictions is only done between iterations.
Step (3) (key extraction) likewise interleaves the searches extending each prefix, waiting only between iterations. We optimize step (3)'s general-case brute force suffix search by leveraging the fact that step (2) outputs a set of prefixes. This enables us to discard short prefixes, so that step (3) only attempts to extend prefixes where the suffix search is feasible.
The interleaved execution of each step can be sped up using parallelization over multiple cores by assigning each core a subset of the N random keys, false-positive keys, or prefixes to execute steps (1), (2), and (3), respectively, in the above described manner. This results in linear speedup (in the number of cores) of step execution time. Our implementation parallelizes step (3), whose execution time dominates the attack ( § 10.2.2), over 16 cores and leaves the other steps single-threaded.
Evaluation
In this section, we evaluate prefix siphoning attacks on SuRF and PBF in RocksDB. We demonstrate the attack's feasibility, successfully mounting it against a full-fledged RocksDB key-value store employing SuRF ( § 10.2). 3 We empirically analyze the SuRF attack's efficiency and sensitivity to data store size and filter FPR ( § 10.3). Consistent with our theoretical analysis, we find that the attack becomes more effective with growth in dataset size and better FPR, i.e., as the LSM-tree becomes more effective, so does prefix siphoning. Finally, we demonstrate the attack against the PBF ( § 10.4).
Experimental setup
Both clients and the attacked key-value store run on the same server. However, the time differences we exploit can be measured over the network using prior techniques (see § 4).
We use a server with two Intel Xeon Gold 6132 v6 (Skylake) processors, each of which has 14 2.6 GHz cores with two hyperthreads per core. The server is equipped with 192 GB DDR4 DRAM and two 0.5 TB NVMe SSDs. The server runs Ubuntu 18.04 and code is compiled with GCC 4.8.
RocksDB setup
We use a version of RocksDB modified by the SuRF authors to employ SuRF [73]. The target RocksDB instance uses the NVMe devices as secondary storage. We use Linux cgroup to limit RocksDB's available DRAM to 2 GB. This configuration emulates an industrial-scale, I/O heavy key-value store setup, in which storage capacity far exceeds DRAM capacity.
The RocksDB engine stores 64-bit keys and 1000-byte values, and uses the SuRF-Real variant. Unless noted otherwise, we use a datastore of 50 M uniformly random keys (generated using SHA1). We invoke RocksDB LSM-tree compaction after populating the datastore. We do this to emulate the compaction that naturally occurs in a real workload due to insertions, because our experiments perform only get()s.
Background load
In all experiments, we emulate a realistic, loaded system by running 32 threads that constantly perform get() queries for random keys, with 50% of the queries targeting stored keys and 50% targeting non-present keys.
RocksDB+SuRF-Real key extraction
We implement the attack as described in § 9. § 10.2.1 evaluates the attack's first phase ( § 5.3.1), demonstrating that query response times can be used to distinguish negative from positive keys in practice, even in the presence of heavy background load. § 10.2.2 evaluates the attack's second phase, which extracts full keys, and compares it to a brute force search.
Negative/positive query time differences
In this phase, the attacker performs 10 M get() queries for randomly generated keys to build the response time distribution. Table 1 shows the distribution of response times in terms of 5 microsecond buckets. The distribution is extremely skewed toward values < 25 µs, which our attack therefore assumes are associated with negative keys.
To validate this assumption, Figure 2 visualizes the distribution while breaking the response times by queried key type (negative or false-positive). This breakdown is presented for analysis purposes; it is not available to the attacker. For readability, we present the breakdown in two ways. Figure 2(a) shows only the buckets ≥ 25 µs, which are otherwise dwarfed by the lower end of the distribution. We show both the number of negative (purple) and false-positive (dark green) keys in each bucket and the percent of false-positive keys in each bucket (light green). Figure 2(b) shows the entire distribution, but bucket sizes (Y axis) are percentages instead of absolutes. For each bucket, we report the number of keys in the bucket as well as the percentage of false positives (out of all positives).

Figure 2(a) shows that the vast majority of false-positive queries have a response time of 25-35 µs. Conversely, Figure 2(b) shows that this response time range contains over 50% of the false-positive keys. Overall, these results show that picking a cutoff point of 25 µs for distinguishing a negative from a positive key, which is done based only on the distribution's shape, without knowledge of key types, yields a good distinguisher.
Key extraction
The attack executes as described in § 9; specifically, the wait between iterations is set to 20 seconds and each step is executed in a parallel, breadth-first manner, to minimize the amount of time spent waiting for page cache evictions. The attacker generates a set of 10 M random keys to find false-positive keys (step (1) of § 5.3.2). The attacker next identifies the prefix each false-positive key shares with a stored key (step (2)). Finally, the attacker discards every prefix of length < 40 bits and attempts to extend the remaining prefixes into full keys (step (3)). Figure 3 shows the number of keys extracted as a function of the total number of get() requests issued by the attack (aggregated over steps (1)-(3)). The figure also compares the attack to an idealized attack, which uses internal RocksDB debugging counters to accurately determine the filters' responses for each queried key, instead of relying on query response times.
Because the idealized attack never incorrectly classifies a key, it identifies more false positives than the actual attack in step (1). It thus requires more queries in step (2) to identify the shared prefixes of the keys provided to step (2), as there are more of them. Consequently, the idealized attack begins step (3) later (in terms of queries) than the actual attack, which is why its line is "shifted" compared to the actual attack. For this reason, the idealized attack also requires more queries overall. Ultimately, however, the actual attack extracts only 74 fewer keys than the idealized version.
The idealized attack is also faster (in real time) than the actual attack, because it does not require waiting for page cache evictions. The actual attack's key extraction rate is ≈ 10 minutes/key, while the idealized attack achieves 0.2 minutes/key. Table 2 shows a breakdown of the (actual) attack's queries across all three steps. The bulk of the attack is spent on step (3), extending prefixes into full keys. Our later analysis ( § 10.3.2) explains this number. The table also reports wasted queries, which are issued when the attack futilely tries to extract a key from an incorrect prefix, which was misidentified due to incorrectly classifying a key as a false positive (based on its query response time). Additional wasted queries (not shown) are spent identifying prefixes of length < 40 bits in steps (1)-(2), which are then discarded. While over 90% of prefixes identified by steps (1)-(2) are discarded, this waste is negligible, as they are discarded before the most expensive step.
Comparison to brute force We further evaluate a brute force attack that randomly guesses keys until a stored key is found. We allow this attack to run for 10× more time than the prefix siphoning experiment, but it fails to guess a key. Unsurprisingly, brute force search for a large key space is infeasible.
SuRF-Hash vs. SuRF-Real SuRF-Hash complicates the attack. Compared to SuRF-Real with the same per-key space budget, SuRF-Hash replaces key bits (SuRF-Real's suffix bits) with hash value bits. This means that possible prefixes to identify are shorter and that the filter's FPR is lower, making the number of false positives identified in step (2) lower. On the other hand, as discussed in § 6.2.2, when identifying the prefixes and performing key extraction, the attacker can use the false-positive key's hash value to ignore definitely incorrect guesses, potentially improving the attack's efficiency.
To evaluate this trade-off, we compare idealized attacks against the same dataset, with RocksDB using either SuRF-Real with 8-bit suffixes or SuRF-Hash with 8-bit hashes. Thus, in SuRF-Hash, the suffix search space when extracting a key is 256× larger than in SuRF-Real, but the attacker will ignore 255/256 of its guesses on average. To compensate for SuRF-Hash's lower FPR, the initial false-positive key search of the SuRF-Hash attack uses 3× the number of candidate keys used for SuRF-Real. Figure 4 therefore compares the attacks' amortized cost, in terms of a moving average of queries per extracted key as a function of attack progress. 4 The SuRF-Hash attack's extra initial queries (for finding false positives) manifest as the peak of the per-key cost, when all these extra queries are amortized across only a handful of keys. The extra cost is eventually amortized away, into a per-key cost of 12 M vs. 10 M queries for SuRF-Hash vs. SuRF-Real, respectively. For this similar cost, the SuRF-Hash attack extracts 2490 keys vs. 2171 keys for the SuRF-Real attack.
Attack analysis
This section analyzes the attack's efficiency ( § 10.3.1) and sensitivity to data store size ( § 10.3.2) and filter FPR ( § 10.3.3).

Efficiency

Figure 5 shows the attack's efficiency, measured as average get()s per extracted key as a function of attack progress. We compare across three 50 M random 64-bit key sets to show the results are not a function of the specific key set. The average number of queries per extracted key converges to about 9 M ≈ 2^23. This indicates that the attack extracts keys with roughly the work required to search a 23-bit space, 40992× better than a brute force search of the full key space (2^64/50 M ≈ 2^38.4). The attack also extracts a substantial number of keys (375, 419, and 423 keys).
Sensitivity to dataset size
To evaluate the attack's sensitivity to the dataset size, we progressively shrink our original 50 M key set into smaller subsets of size c · 10 M keys for c ∈ {1, . . . , 5}. We then perform an idealized attack against the system with each dataset, but using the same set of random keys for step (1), so any difference in attack behavior can be related only to the datastore size and not the key distribution. Figure 6 shows the number of keys extracted as the attack progresses. Prefix siphoning is more effective as the dataset size increases: it extracts ≈ 100 keys from the 10 M dataset, but almost 400 keys from the 50 M dataset.
Sensitivity to SuRF FPR
We show that prefix siphoning becomes more effective as SuRF's FPR improves, i.e., the attack becomes more harmful to the system as SuRF becomes more beneficial to it. To demonstrate this effect, we compare idealized attacks against the same dataset, with RocksDB using either SuRF-Base or SuRF-Real. SuRF-Base stores shared key prefixes, padded to the next full byte (which adds 1-8 bits to the prefix). SuRF-Real does the same, plus stores a byte from the key's unique suffix, and thereby improves its FPR (see § 6.1). We carry out the attacks against each SuRF variant using the same initial random key set, used to identify false-positive keys. Figure 7 reports the attack's amortized cost (queries per extracted key) as the attack progresses.
In both cases, the attack has similar efficiency of ≈ 10 M queries per extracted key, as evident from the similar slope of the two lines. However, the attack is more successful against SuRF-Real, where it extracts 420 keys, than against SuRF-Base, where it extracts 21 keys. The reason for the improved effectiveness is that SuRF-Real's extra key byte storage makes an initial false-positive key much more likely to have a prefix length of > 40 bits, resulting in more false positives making it to step (3).
The situation is similar with SuRF-Hash, which further improves the FPR over SuRF-Real ( Figure 4). As mentioned in § 10.2.2, the idealized SuRF-Hash attack extracts 2490 keys vs. 2171 keys for the idealized SuRF-Real attack.
RocksDB+PBF key extraction
We evaluate an idealized prefix siphoning attack against RocksDB's PBF. We use a dataset of 50 M uniformly random 64-bit keys. We configure the PBF to store prefixes of length l = 40 bits and to consume 18 bits/key (which is roughly the space usage of SuRF in our experiments).
Step (1) (FindFPK) performs 1 M queries for uniformly random 40-bit keys, which result in 457 false-positive keys. The attack then attempts to extend these false positives into full keys. It eventually extracts 46 keys, which matches the expected number of prefix false positives observed in 1 M random guesses (1 M · 50 M/2^40 ≈ 45.4). Figure 8 plots the attack's amortized cost (queries per extracted key) as the attack progresses. The PBF attack makes 160 M queries per extracted key, which is 20× more queries/key than the SuRF attack, but still three orders of magnitude better than a brute force search. The reason for this difference is that the PBF attack wastes effort trying to extend Bloom filter false positives that are not prefix false positives.
Mitigation
Here, we discuss approaches for mitigating prefix siphoning attacks. Unfortunately, every potential solution constitutes some trade-off, whether in query performance, memory efficiency, complexity, or other system aspects.
System-level approaches A system can block prefix siphoning attacks by only querying its key-value storage engine for keys the requesting user is allowed to access. This approach requires re-architecting the system so that a key's ACL is kept outside of the key-value store. In addition, a system can rate limit user requests, thereby slowing down prefix siphoning attacks. This approach is viable only if the system is not meant to handle a high rate of normal, benign requests.
Key-value store mitigation A key-value engine can block prefix siphoning by maintaining separate filters for point and range queries for each SSTable file. Unfortunately, this approach will double filter memory consumption. In addition, it will not block attacks that target range queries (which we believe are possible, and are currently exploring).
Filter-level mitigation A natural mitigation is for key-value stores to employ non-vulnerable range filters. Like the separate filter approach described above, this mitigation carries the risk of being vulnerable to future extensions of prefix siphoning to range queries.
In addition, the properties that make a range filter non-vulnerable to point query-based prefix siphoning may limit its utility in practice. For example, Rosetta (Robust Space-Time Optimized Range Filter) [50] is a range filter that does not conform to our vulnerable range filter characterization ( § 5.2), but it lacks support for variable-length keys, which are important in practice.
Rosetta uses Bloom filters for SuRF-like prefix-based filtering. Rosetta assumes a bound on the possible key length in bits, L. A Rosetta instance consists of L Bloom filters, B1, . . . , BL. When a key k is inserted into the filter, each i-bit prefix of k is inserted into the i-th Bloom filter Bi. A Rosetta point query thus simply queries BL, making Rosetta non-vulnerable to prefix siphoning.
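A sketch of the Rosetta structure just described; keys are represented as bit strings, and make_bloom constructs a Bloom filter such as the one sketched in § 2.3:

```python
class Rosetta:
    def __init__(self, L, make_bloom):
        self.L = L
        self.blooms = [make_bloom() for _ in range(L)]  # B_1 .. B_L

    def insert(self, key_bits: str):
        # Keys must be padded to exactly L bits for point queries to work.
        assert len(key_bits) == self.L
        for i in range(1, self.L + 1):
            self.blooms[i - 1].insert(key_bits[:i].encode())

    def point_query(self, key_bits: str) -> bool:
        # Only B_L is consulted, so no per-prefix information leaks to
        # point queries -- the property that defeats prefix siphoning.
        return self.blooms[self.L - 1].query(key_bits.encode())
```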
The Rosetta paper does not specify how variable-length keys are handled. Its design is clearly incompatible with such keys if there is no predetermined bound on their size. Even if such a bound exists (and can thus be used for L), Rosetta requires every key to be padded to L bits, so that point queries function correctly. This requirement significantly increases the filter's memory consumption.
Encrypted key-value stores Disclosed keys reveal no sensitive information if they are stored encrypted in the storage engine. However, encrypting key-value pairs requires re-architecting the entire system so it can query on encrypted data [71,72]. Most if not all deployed key-value stores do not support such encryption.
Related Work
Key-value store timing attacks Existing key-value store timing attacks aim to disclose stored values. These attacks work by exploiting external mechanisms such as memory deduplication [62] or memory compression [63], which can be disabled for protection. In contrast, prefix siphoning exploits a mechanism of the key-value store itself, which cannot be disabled for protection without suffering significant throughput degradation and additional I/O traffic.
Storage engine timing attacks Timing attacks mostly target cryptographic software rather than storage engines. Futoransky et al. [37] extract private keys from a MySQL database with a timing attack, but the attack relies on insertions of attacker-chosen data. Wang et al. [67] show a practical timing attack on a multi-user search system, such as Elasticsearch.
Filter attacks Privacy attacks against Bloom filters have been explored in the context of privacy-preserving record linkage [15, 16, 42-44, 51, 53, 66], where Bloom filters serve as encodings of sensitive information. These attacks reveal stored keys by exploiting skewness of the stored key distribution. In contrast, prefix siphoning does not require skewness and succeeds on uniformly random keys.
Conclusion
This paper shows that certain range filters make LSM-trees vulnerable to novel prefix siphoning timing attacks, which exploit differences in query response times to reveal keys and prefixes of keys stored in the LSM-tree. Our results show that key-value store performance improvements may trade security in exchange, and encourage practitioners and researchers to evaluate the security impact of their work. We also hope that our characterization of vulnerable range filters will spur research on more secure filters.
Figure 1: Trie and SuRF variants over the key set BLUE, BLACK, and BLOND. (Figure adapted from [73].)

Figure 2: Breakdown of query response time distribution.

Figure 3: Actual vs. idealized prefix siphoning against SuRF-Real: Number of keys extracted as attack progresses.

Figure 4: SuRF-Hash vs. SuRF-Real: Moving average of queries per extracted key as a function of attack progress (measured in queries).

Figure 5: Attack efficiency: average number of get()s per extracted key as attack progresses.

Figure 6: Idealized attack against SuRF-Real: Number of keys extracted for different dataset sizes.

Figure 7: SuRF-Real vs. SuRF-Base: Moving average of queries per extracted key as a function of attack progress (measured in queries).

Figure 8: Idealized prefix siphoning against PBF (l = 40 bits).

Table 2: Attack queries per stage (columns: attack step, # queries (millions), queries/total (%)). Wasted queries futilely attempt to extend an incorrectly identified prefix into a full key.
1. Existence of FindFPK and IdPrefix is required in addition to C1 because a filter satisfying only C1 may not allow an attacker to extract the prefixes.

2. Prefix siphoning can still be performed for exponentially low false-positive rates, but its cost (in terms of number of queries needed) increases proportionally to the decrease in the false-positive rate.
3. We use the SuRF authors' implementation, https://github.com/efficient/SuRF.
4. I.e., the Y axis reports the number of get()s issued divided by the number of keys extracted up to the current X-axis point.
Acknowledgments

We extend our deepest thanks to Yuvraj Patel, the paper's shepherd, and the anonymous reviewers for their dedication and assistance in improving this paper and their valuable feedback. We thank Guy Khazma for his work on an earlier stage of this project.
References

[1] CEPH. https://github.com/ceph/ceph.

[2] MongoDB. https://www.mongodb.com/.

[3] MySQL Server. https://github.com/mysql/mysql-server.

[4] Amazon. Amazon S3. https://aws.amazon.com/s3/, 2020.

[5] David G. Andersen, Jason Franklin, Michael Kaminsky, Amar Phanishayee, Lawrence Tan, and Vijay Vasudevan. FAWN: A Fast Array of Wimpy Nodes. In SOSP, 2009.

[6] Apache. Apache Flink: Stateful Computations over Data Streams. https://flink.apache.org, 2022.

[7] Mikhail Bautin. How We Built a High Performance Document Store on RocksDB? https://www.yugabyte.com/blog/how-we-built-a-high-performance-document-store-on-rocksdb/, 2019.

[8] Michael A. Bender, Martin Farach-Colton, Rob Johnson, Russell Kraner, Bradley C. Kuszmaul, Dzejla Medjedovic, Pablo Montes, Pradeep Shetty, Richard P. Spillane, and Erez Zadok. Don't Thrash: How to Cache Your Hash on Flash. In VLDB, 2012.

[9] Peter Benjamin. s3-fuzzer. https://github.com/pbnj/s3-fuzzer, 2017.

[10] Burton H. Bloom. Space/Time Trade-offs in Hash Coding with Allowable Errors. CACM, 13(7), 1970.

[11] David Brumley and Dan Boneh. Remote Timing Attacks are Practical. In USENIX Security Symposium, 2003.
[12] Zhichao Cao, Siying Dong, Sagar Vemuri, and David H.C. Du. Characterizing, Modeling, and Benchmarking RocksDB Key-Value Workloads at Facebook. In FAST, 2020.

[13] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. Bigtable: A Distributed Storage System for Structured Data. ACM TOCS, 26(2), 2008.

[14] Guoqiang Jerry Chen, Janet L. Wiener, Shridhar Iyer, Anshul Jaiswal, Ran Lei, Nikhil Simha, Wei Wang, Kevin Wilfong, Tim Williamson, and Serhat Yilmaz. Realtime Data Processing at Facebook. In SIGMOD, 2016.

[15] Peter Christen, Rainer Schnell, Dinusha Vatsalan, and Thilina Ranbaduge. Efficient Cryptanalysis of Bloom Filters for Privacy-Preserving Record Linkage. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2017.

[16] Peter Christen, Anushka Vidanage, Thilina Ranbaduge, and Rainer Schnell. Pattern-Mining Based Cryptanalysis of Bloom Filters for Privacy-Preserving Record Linkage. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2018.

[17] Alex Conway, Martín Farach-Colton, and Philip Shilane. Optimal Hashing in External Memory. In ICALP, 2018.

[18] Alexander Conway, Abhishek Gupta, Vijay Chidambaram, Martin Farach-Colton, Richard Spillane, Amy Tai, and Rob Johnson. SplinterDB: Closing the Bandwidth Gap for NVMe Key-Value Stores. In USENIX ATC, 2020.

[19] Scott A. Crosby, Dan S. Wallach, and Rudolf H. Riedi. Opportunities and Limits of Remote Timing Attacks. ACM Transactions on Information and System Security, 12(3), 2009.
Monkey: Optimal Navigable Key-Value Store. Niv Dayan, Manos Athanassoulis, and Stratos Idreos. SIG-MODNiv Dayan, Manos Athanassoulis, and Stratos Idreos. Monkey: Optimal Navigable Key-Value Store. In SIG- MOD, 2017.
Optimal Bloom Filters and Adaptive Merging for LSM-Trees. Niv Dayan, Manos Athanassoulis, and Stratos Idreos. 43Niv Dayan, Manos Athanassoulis, and Stratos Idreos. Optimal Bloom Filters and Adaptive Merging for LSM- Trees. ACM TODS, 43(4), dec 2018.
Dostoevsky: Better Space-Time Trade-Offs for LSM-Tree Based Key-Value Stores via Adaptive Removal of Superfluous Merging. Niv Dayan, Stratos Idreos, SIGMOD. Niv Dayan and Stratos Idreos. Dostoevsky: Better Space-Time Trade-Offs for LSM-Tree Based Key-Value Stores via Adaptive Removal of Superfluous Merging. In SIGMOD, 2018.
The Log-Structured Merge-Bush & the Wacky Continuum. Niv Dayan, Stratos Idreos, SIGMOD. Niv Dayan and Stratos Idreos. The Log-Structured Merge-Bush & the Wacky Continuum. In SIGMOD, 2019.
Chucky: A Succinct Cuckoo Filter for LSM-Tree. Niv Dayan, Moshe Twitto, SIGMOD. 2021Niv Dayan and Moshe Twitto. Chucky: A Succinct Cuckoo Filter for LSM-Tree. In SIGMOD, 2021.
Vries Tom De, Teh s3 bucketeers. Tom de Vries. Teh s3 bucketeers. https://github. com/tomdev/teh_s3_bucketeers/, 2021.
SkimpyStash: RAM Space Skimpy Key-Value Store on Flash-Based Storage. Biplob Debnath, Sudipta Sengupta, Jin Li, SIGMOD. Biplob Debnath, Sudipta Sengupta, and Jin Li. SkimpyS- tash: RAM Space Skimpy Key-Value Store on Flash- Based Storage. In SIGMOD, 2011.
Optimizing Space Amplification in RocksDB. Siying Dong, Mark Callaghan, Leonidas Galanis, Dhruba Borthakur, Tony Savor, Michael Stumm, CIDR. Siying Dong, Mark Callaghan, Leonidas Galanis, Dhruba Borthakur, Tony Savor, and Michael Stumm. Optimizing Space Amplification in RocksDB. In CIDR, 2017.
What's the big deal about key-value databases like FoundationDB and RocksDB?. Phil Eaton, Phil Eaton. What's the big deal about key-value databases like FoundationDB and RocksDB? https://notes.eatonphil.com/whats-the- big-deal-about-key-value-databases.html, 2022.
Nathan Eddy, Cloud Misconfig Exposes 3TB of Sensitive Airport Data in Amazon S3 Bucket: 'Lives at Stake. Nathan Eddy. Cloud Misconfig Exposes 3TB of Sensi- tive Airport Data in Amazon S3 Bucket: 'Lives at Stake'. https://www.darkreading.com/application- security/cloud-misconfig-exposes-3tb- sensitive-airport-data-amazon-s3-bucket, 2022.
Structured data encoding in Cock-roachDB SQL. David Eisenstat, David Eisenstat. Structured data encoding in Cock- roachDB SQL. https://github.com/cockroachdb/ cockroach/blob/master/docs/tech-notes/ encoding.md, 2021.
HyperDex: A Distributed, Searchable Key-Value Store. Robert Escriva, Bernard Wong, Emin Gün Sirer, SIGCOMM. Robert Escriva, Bernard Wong, and Emin Gün Sirer. HyperDex: A Distributed, Searchable Key-Value Store. In SIGCOMM, 2012.
. Facebook, Rocksdb, Facebook. RocksDB. https://github.com/ facebook/rocksdb.
. Facebook, Myrocks, Facebook. MyRocks. http://myrocks.io/, 2015.
MyRocks record format. Facebook, Facebook. MyRocks record format. https://github. com/facebook/mysql-5.6/wiki/MyRocks-record- format, 2019.
Cuckoo Filter: Practically Better Than Bloom. Bin Fan, David G Andersen, Michael Kaminsky, Michael Mitzenmacher, CoNEXT. Bin Fan, David G. Andersen, Michael Kaminsky, and Michael Mitzenmacher. Cuckoo Filter: Practically Bet- ter Than Bloom. In CoNEXT, 2014.
Architectural Styles and the Design of Network-based Software Architectures. Roy Thomas Fielding, University of California, IrvinePhD thesisRoy Thomas Fielding. Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, University of California, Irvine, 2000.
The ND2DB Attack: Database Content Extraction Using Timing Attacks on the Indexing Algorithms. Ariel Futoransky, Damián Saura, Ariel Waissbein, WOOT. Ariel Futoransky, Damián Saura, and Ariel Waissbein. The ND2DB Attack: Database Content Extraction Using Timing Attacks on the Indexing Algorithms. In WOOT, 2007.
Timeless Timing Attacks: Exploiting Concurrency to Leak Secrets over Remote Connections. Tom Van Goethem, Christina Pöpper, Wouter Joosen, Mathy Vanhoef, USENIX Security Symposium. Tom Van Goethem, Christina Pöpper, Wouter Joosen, and Mathy Vanhoef. Timeless Timing Attacks: Exploit- ing Concurrency to Leak Secrets over Remote Connec- tions. In USENIX Security Symposium, 2020.
Scaling Concurrent Log-Structured Data Stores. Guy Golan-Gueta, Edward Bortnikov, EuroSys. Eshcar Hillel, and Idit KeidarGuy Golan-Gueta, Edward Bortnikov, Eshcar Hillel, and Idit Keidar. Scaling Concurrent Log-Structured Data Stores. In EuroSys, 2015. [40] Google. LevelDB. https://github.com/google/ leveldb.
Adi Kaufman, Moshik Hershcovitch, Adam Morrison, ab- s/XXXX.XXXPrefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure. arXiv e-prints. Adi Kaufman, Moshik Hershcovitch, and Adam Mor- rison. Prefix Siphoning: Exploiting LSM-Tree Range Filters For Information Disclosure. arXiv e-prints, ab- s/XXXX.XXX, 2023.
Who Is 1011011111. . .1110110010? Automated Cryptanalysis of Bloom Filter Encryptions of Databases with Several Personal Identifiers. Martin Kroll, Simone Steinmetzer, International Joint Conference on Biomedical Engineering Systems and Technologies. Martin Kroll and Simone Steinmetzer. Who Is 1011011111. . .1110110010? Automated Cryptanalysis of Bloom Filter Encryptions of Databases with Several Personal Identifiers. In International Joint Conference on Biomedical Engineering Systems and Technologies, 2015.
A Constraint Satisfaction Cryptanalysis of Bloom Filters in Private Record Linkage. Mehmet Kuzu, Murat Kantarcioglu, Elizabeth Durham, Bradley Malin, PETS. Mehmet Kuzu, Murat Kantarcioglu, Elizabeth Durham, and Bradley Malin. A Constraint Satisfaction Crypt- analysis of Bloom Filters in Private Record Linkage. In PETS, 2011.
A practical approach to achieve private medical record linkage in light of public resources. Mehmet Kuzu, Murat Kantarcioglu, Elizabeth Ashley Durham, Csaba Toth, Bradley Malin, Journal of the American Medical Informatics Association. 202Mehmet Kuzu, Murat Kantarcioglu, Elizabeth Ashley Durham, Csaba Toth, and Bradley Malin. A practical ap- proach to achieve private medical record linkage in light of public resources. Journal of the American Medical Informatics Association, 20(2), 2013.
. Redis Lab, Redis, Redis Lab. Redis. https://github.com/redis/ redis.
. Cockroach Labs. CockroachDB. 2022Cockroach Labs. CockroachDB. https://www. cockroachlabs.com/, 2022.
Cassandra: A Decentralized Structured Storage System. Avinash Lakshman, Prashant Malik, SIGOPS Operating Systems Review. 442Avinash Lakshman and Prashant Malik. Cassandra: A Decentralized Structured Storage System. SIGOPS Operating Systems Review, 44(2), 2010.
SILT: A Memory-efficient, Highperformance Key-value Store. Hyeontaek Lim, Bin Fan, David G Andersen, Michael Kaminsky, SOSP. Hyeontaek Lim, Bin Fan, David G. Andersen, and Michael Kaminsky. SILT: A Memory-efficient, High- performance Key-value Store. In SOSP, 2011.
FollowFeed: LinkedIn's Feed Made Faster and Smarter. Linkedin, LinkedIn. FollowFeed: LinkedIn's Feed Made Faster and Smarter. https://engineering.linkedin.
Rosetta: A Robust Space-Time Optimized Range Filter for Key-Value Stores. Siqiang Luo, Subarna Chatterjee, Rafael Ketsetsidis, Niv Dayan, Wilson Qin, Stratos Idreos, SIGMOD. Siqiang Luo, Subarna Chatterjee, Rafael Ketsetsidis, Niv Dayan, Wilson Qin, and Stratos Idreos. Rosetta: A Ro- bust Space-Time Optimized Range Filter for Key-Value Stores. In SIGMOD, 2020.
A Graph Traversal Attack on Bloom Filter Based Medical Data Aggregation. William Mitchell, Rinku Dewri, Ramakrishna Thurimella, Max Roschke, International Journal of Big Data Intelligence. 44William Mitchell, Rinku Dewri, Ramakrishna Thurimella, and Max Roschke. A Graph Traversal At- tack on Bloom Filter Based Medical Data Aggregation. International Journal of Big Data Intelligence, 4(4), 2017.
Application Data Caching using SSDs. Netflix, Netflix. Application Data Caching using SSDs. http: //techblog.netflix.com/2016/05/application- data-caching-using-ssds.html, 2016.
Cryptanalysis of Basic Bloom Filters Used for Privacy Preserving Record Linkage. Frank Niedermeyer, Simone Steinmetzer, Martin Kroll, Rainer Schnell, Journal of Privacy and Confidentiality. 62Frank Niedermeyer, Simone Steinmetzer, Martin Kroll, and Rainer Schnell. Cryptanalysis of Basic Bloom Fil- ters Used for Privacy Preserving Record Linkage. Jour- nal of Privacy and Confidentiality, 6(2), 2014.
Rajesh Nishtala, Hans Fugal, Steven Grimm, Marc Kwiatkowski, Herman Lee, Harry C Li, Ryan Mcelroy, Tung, and Venkateshwaran Venkataramani. Scaling Memcache at Facebook. In NSDI. Mike Paleczny, Daniel Peek, Paul Saab, David Stafford, TonyRajesh Nishtala, Hans Fugal, Steven Grimm, Marc Kwiatkowski, Herman Lee, Harry C. Li, Ryan McElroy, Mike Paleczny, Daniel Peek, Paul Saab, David Stafford, Tony Tung, and Venkateshwaran Venkataramani. Scal- ing Memcache at Facebook. In NSDI, 2013.
. OWASP. Insecure Direct Object Reference Prevention Cheat Sheet. OWASP. Insecure Direct Object Reference Prevention Cheat Sheet. https://cheatsheetseries.owasp. org/cheatsheets/Insecure_Direct_Object_ Reference_Prevention_Cheat_Sheet.html, 2021.
The Log-Structured Merge-Tree (LSM-Tree). O' Patrick, Edward Neil, Dieter Cheng, Elizabeth O' Gawlick, Neil, Acta Informatica. 334Patrick O'Neil, Edward Cheng, Dieter Gawlick, and Eliz- abeth O'Neil. The Log-Structured Merge-Tree (LSM- Tree). Acta Informatica, 33(4):351-385, 1996.
. Jordan Potti, Awsbucketdump, Jordan Potti. Awsbucketdump. https://github.com/ jordanpotti/AWSBucketDump, 2018.
PebblesDB: Building Key-Value Stores Using Fragmented Log-Structured Merge Trees. Pandian Raju, Rohan Kadekodi, Vijay Chidambaram, Ittai Abraham, SOSP. Pandian Raju, Rohan Kadekodi, Vijay Chidambaram, and Ittai Abraham. PebblesDB: Building Key-Value Stores Using Fragmented Log-Structured Merge Trees. In SOSP, 2017.
LittleTable: A Time-Series Database and Its Uses. Sean Rhea, Eric Wang, Edmund Wong, Ethan Atkins, Nat Storer, SIGMOD. Sean Rhea, Eric Wang, Edmund Wong, Ethan Atkins, and Nat Storer. LittleTable: A Time-Series Database and Its Uses. In SIGMOD, 2017.
. Dan Salmon, S3scanner, 2022Dan Salmon. S3scanner. https://github.com/ sa7mon/S3Scanner, 2022.
State Management. Apache Samza, Apache Samza. State Management. http: //samza.apache.org/learn/documentation/
/container/state-management.html. /container/state-management.html, 2017.
Remote Memory-Deduplication Attacks. Martin Schwarzl, Erik Kraft, Moritz Lipp, Daniel Gruss, NDSS. 2022Martin Schwarzl, Erik Kraft, Moritz Lipp, and Daniel Gruss. Remote Memory-Deduplication Attacks. In NDSS, 2022.
Practical Timing Side-Channel Attacks on Memory Compression. Martin Schwarzl, Pietro Borrello Gururaj, Hanna Saileshwar, Michael Müller, Daniel Schwarz, Gruss, IEEE S&P. Martin Schwarzl, Pietro Borrello Gururaj Saileshwar, Hanna Müller, Michael Schwarz, and Daniel Gruss. Practical Timing Side-Channel Attacks on Memory Compression. In IEEE S&P, 2023.
BLSM: A General Purpose Log Structured Merge Tree. Russell Sears, Raghu Ramakrishnan, SIGMOD. Russell Sears and Raghu Ramakrishnan. BLSM: A Gen- eral Purpose Log Structured Merge Tree. In SIGMOD, 2012.
Cooperative Concurrency Control for Write-Intensive Key-Value Workloads. Mark Sutherland, Babak Falsafi, Alexandros Daglis, ASPLOS. Mark Sutherland, Babak Falsafi, and Alexandros Daglis. Cooperative Concurrency Control for Write-Intensive Key-Value Workloads. In ASPLOS, 2023.
Efficient Pattern Mining Based Cryptanalysis for Privacy-Preserving Record Linkage. Anushka Vidanage, Thilina Ranbaduge, Peter Christen, Rainer Schnell, ICDE. Anushka Vidanage, Thilina Ranbaduge, Peter Christen, and Rainer Schnell. Efficient Pattern Mining Based Cryptanalysis for Privacy-Preserving Record Linkage. In ICDE, 2019.
Side-Channel Attacks on Shared Search Indexes. Liang Wang, Paul Grubbs, Jiahui Lu, Vincent Bindschaedler, David Cash, Thomas Ristenpart, IEEE S&P. Liang Wang, Paul Grubbs, Jiahui Lu, Vincent Bind- schaedler, David Cash, and Thomas Ristenpart. Side- Channel Attacks on Shared Search Indexes. In IEEE S&P, 2017.
An Efficient Design and Implementation of LSM-Tree Based Key-Value Store on Open-Channel SSD. Peng Wang, Guangyu Sun, Song Jiang, Jian Ouyang, Shiding Lin, Chen Zhang, Jason Cong, EuroSys. Peng Wang, Guangyu Sun, Song Jiang, Jian Ouyang, Shiding Lin, Chen Zhang, and Jason Cong. An Efficient Design and Implementation of LSM-Tree Based Key- Value Store on Open-Channel SSD. In EuroSys, 2014.
. Brian Warehime, Brian Warehime. insp3ctor. https://github.com/ brianwarehime/inSp3ctor, 2018.
Bucket finder. Ian Williams, Ian Williams. Bucket finder. https://github.com/ FishermansEnemy/bucket_finder, 2013.
EncKV: An Encrypted Key-Value Store with Rich Queries. Xingliang Yuan, Yu Guo, Xinyu Wang, Cong Wang, Baochun Li, Xiaohua Jia, ASIA CCS. Xingliang Yuan, Yu Guo, Xinyu Wang, Cong Wang, Baochun Li, and Xiaohua Jia. EncKV: An Encrypted Key-Value Store with Rich Queries. In ASIA CCS, 2017.
Building an Encrypted, Distributed, and Searchable Key-Value Store. Xingliang Yuan, Xinyu Wang, Cong Wang, Chen Qian, Jianxiong Lin, ASIA CCS. Xingliang Yuan, Xinyu Wang, Cong Wang, Chen Qian, and Jianxiong Lin. Building an Encrypted, Distributed, and Searchable Key-Value Store. In ASIA CCS, 2016.
SuRF: Practical Range Query Filtering with Fast Succinct Tries. Huanchen Zhang, Hyeontaek Lim, Viktor Leis, David G Andersen, Michael Kaminsky, Kimberly Keeton, Andrew Pavlo, SIGMOD. Huanchen Zhang, Hyeontaek Lim, Viktor Leis, David G. Andersen, Michael Kaminsky, Kimberly Keeton, and Andrew Pavlo. SuRF: Practical Range Query Filtering with Fast Succinct Tries. In SIGMOD, 2018.
| [
"https://github.com/ceph/ceph.",
"https://github.com/mysql/",
"https://github.com/cockroachdb/",
"https://github.com/google/",
"https://github.com/redis/"
] |
[
"Complex Isotropic α-Stable-Rician Model for Heterogeneous SAR Images",
"Complex Isotropic α-Stable-Rician Model for Heterogeneous SAR Images"
] | [
"ErcanMutong Li \nTsinghua-Berkeley Shenzhen Institute\nTsinghua University\n\n",
"Engin Kuruoglu \nTsinghua-Berkeley Shenzhen Institute\nTsinghua University\n\n"
] | [
"Tsinghua-Berkeley Shenzhen Institute\nTsinghua University\n",
"Tsinghua-Berkeley Shenzhen Institute\nTsinghua University\n"
This article introduces a novel probability distribution model, namely Complex Isotropic α-Stable-Rician (CIαSR), for characterizing the data histogram of synthetic aperture radar (SAR) images. Having its foundation situated on the Lévy α-stable distribution suggested by a generalized Central Limit Theorem, the model promises great potential in accurately capturing SAR image features of extreme heterogeneity. A novel parameter estimation method based on the generalization of the method of moments to expectations of Bessel functions is devised to resolve the model in a relatively compact and computationally efficient manner. Experimental results based on both synthetic and empirical SAR data exhibit the CIαSR model's superior capacity in modelling scenes of a wide range of heterogeneity when compared to other state-of-the-art models as quantified by various performance metrics. Additional experiments are conducted utilizing large-swath SAR images which encompass mixtures of several scenes to help interpret the CIαSR model parameters, and to demonstrate the model's potential application in classification and target detection.
"https://export.arxiv.org/pdf/2306.04383v1.pdf"
] | 259,095,624 | 2306.04383 | b05b380d4380607491c65598ba0579f811318de6 |
Complex Isotropic α-Stable-Rician Model for Heterogeneous SAR Images
Mutong Li
Tsinghua-Berkeley Shenzhen Institute
Tsinghua University
Ercan Engin Kuruoglu
Tsinghua-Berkeley Shenzhen Institute
Tsinghua University
Complex Isotropic α-Stable-Rician Model for Heterogeneous SAR Images
SAR image processing, SAR amplitude modelling, α-stable distribution, generalized method of moments, classification, target detection
This article introduces a novel probability distribution model, namely Complex Isotropic α-Stable-Rician (CIαSR), for characterizing the data histogram of synthetic aperture radar (SAR) images. Having its foundation situated on the Lévy α-stable distribution suggested by a generalized Central Limit Theorem, the model promises great potential in accurately capturing SAR image features of extreme heterogeneity. A novel parameter estimation method based on the generalization of the method of moments to expectations of Bessel functions is devised to resolve the model in a relatively compact and computationally efficient manner. Experimental results based on both synthetic and empirical SAR data exhibit the CIαSR model's superior capacity in modelling scenes of a wide range of heterogeneity when compared to other state-of-the-art models as quantified by various performance metrics. Additional experiments are conducted utilizing large-swath SAR images which encompass mixtures of several scenes to help interpret the CIαSR model parameters, and to demonstrate the model's potential application in classification and target detection.
Introduction
Statistical modelling of SAR image data has attracted broad attention and effort since the very introduction of this potent radar technology. A successful model not only contributes to a faithful representation of the image data population, but also lays the foundation for post-processing applications such as terrain classification [1] and automatic target detection [2,3]. Amongst SAR images of various terrains, heterogeneous scenes such as urban areas and sea surfaces with ships are the most problematic for models to tackle, since steel plates and concrete constructions prove to be exceptionally dominant reflectors of radio waves. These images generally contain a good proportion of pixels with extremely high values, contributing to a long, thick tail in the data histogram.
SAR models with assorted motivations have been designed alongside the development of SAR technology itself. Empirical models such as the Weibull distribution [4] and the log-normal distribution [5] are known for their ability to capture SAR image data characteristics in rather homogeneous scenes. The K family [6] and its special class G⁰ [7] are derived as product models of both texture and speckle distributions for more heterogeneous images. In the context of this paper, we would like to emphasize complex isotropic models induced by the Rayleigh distribution. The Rayleigh distribution [8] has been the classical model for problems involving complex wave reflections in applications including telecommunications, sensing, and radar imaging. Under the assumption of Gaussian real and imaginary components, as suggested by the Central Limit Theorem applied to the sum of contributions from a collection of infinitesimal reflectors with finite variance, the amplitude is obtained as Rayleigh distributed, which forms the simplest complex isotropic model. Yet, the Rayleigh model assumes zero-mean reflections, which is violated in the case of a dominant reflector in the scene. To accommodate said scenario, the model is generalized to the Rician model [9,10], which has an additional location parameter that reflects the existence of dominant reflectors. By considering the data population as the amplitude distribution of a complex isotropic random variable, these models practically mimic the amplitude and phase constitution of the complex radar signal.
While the Rician model takes care of a single or weak dominant reflector in an otherwise homogeneous scene, in the case of heterogeneous SAR images such as urban scenes, the reflectance of certain surface types such as automobiles or buildings becomes so strong that these reflectors' contribution to the entire population can no longer be considered infinitesimal. As a result, the Rician model established upon said assumptions collapses.
A good number of models have been proposed to intentionally cater for the heavy-tailed histogram by replacing the Gaussian distribution in the Rician model with a generalized version. One option for substitution is the Generalized Gaussian (GG), a.k.a. exponential power distribution. By introducing an additional shape parameter α into the exponential argument of the conventional Gaussian probability density function (pdf), the GG distribution is capable of accommodating histogram tails of varying thickness. The GG distribution includes the Gaussian and Laplace distributions as its special cases; its pdf is given as follows.
$$f(x\mid\alpha,\gamma,\delta)=\frac{\alpha}{2\gamma\,\Gamma(1/\alpha)}\exp\left(-\left|\frac{x-\delta}{\gamma}\right|^{\alpha}\right)\tag{1}$$
Moser et al. [11] first demonstrated a SAR model named Generalized Gaussian-Rayleigh as the amplitude distribution of independent real and imaginary components modelled as zero-mean GG random variables. The GG-Rayleigh model was later generalized by Karakus et al. [12] into the Generalized Gaussian-Rician (GGR) model by including the missing location parameter δ. The GGR model fully uncovers the potential of the GG distribution and includes other competitive models such as Nakagami-Rice [13] and Laplace-Rician [14] as its special cases. However, despite GGR's capability of modelling SAR images over a considerably wide range of heterogeneity, the model still fails to accommodate extremely heterogeneous urban data, and the lack of an analytical parameter estimator has also restricted its implementation. The probability density function of GGR can be expressed as follows:
$$f(x\mid\alpha,\gamma,\delta)=\frac{\alpha^{2}x}{4\gamma^{2}\,\Gamma^{2}(1/\alpha)}\int_{0}^{2\pi}\exp\left(-\frac{|x\cos\theta-\delta|^{\alpha}+|x\sin\theta-\delta|^{\alpha}}{\gamma^{\alpha}}\right)d\theta\tag{2}$$
Another strain of the complex isotropic model family is established upon Paul Lévy's α-stable distribution [15,16]. The α-stable distribution S_α(γ, β, δ) adopts four parameters α, β, γ, and δ to characterize its impulsiveness, skewness, scale, and location, respectively, and is commonly presented through its characteristic function (CF) due to the lack of a compact analytical pdf. The α-stable distribution includes the Gaussian (α=2), Cauchy (α=1, β=0), and Lévy (α=0.5, β=1) distributions as its special cases, the only ones with a compact analytical pdf. The CF of the α-stable distribution is given as follows:
$$\varphi(\omega\mid\alpha,\beta,\gamma,\delta)=
\begin{cases}
\exp\left\{j\delta\omega-\gamma^{\alpha}|\omega|^{\alpha}\left[1+j\beta\,\mathrm{sgn}(\omega)\tan\left(\tfrac{\alpha\pi}{2}\right)\right]\right\}, & \text{if }\alpha\neq 1\\
\exp\left\{j\delta\omega-\gamma|\omega|\left[1+j\beta\,\mathrm{sgn}(\omega)\tfrac{2}{\pi}\log|\omega|\right]\right\}, & \text{if }\alpha=1
\end{cases}\tag{3}$$
One immediate advantage of adopting the α-stable distribution in a complex isotropic SAR model lies in the distribution's coverage of Gnedenko and Kolmogorov's generalization of the Central Limit Theorem (GCLT), which states that the summation of large numbers of random variables drawn from a distribution with power-shaped Paretian tails tends to an α-stable distribution [17]. The heavier tail of the α-stable distribution coincides with the impulsive features of SAR images of heterogeneous scenes, making the distribution a promising choice for modelling the real and imaginary parts of a complex radar signal.
Kuruoglu and Zerubia [18] first devised a model named Heavy-Tailed Rayleigh (HTR), which is the amplitude distribution of a complex isotropic α-stable random variable with arbitrary characteristic exponent α and scale parameter γ. The model proved successful in modelling the tail of urban SAR data and was also adapted by Achim et al. [19] to filter speckle. The pdf of HTR is as follows.
$$f(x\mid\alpha,\gamma)=x\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\,J_{0}(\omega x)\,d\omega\tag{4}$$
Later, Karakus et al. [20] developed the Cauchy-Rician (CR) model, which utilizes the Cauchy distribution (the special case of α-stable with α=1 and arbitrary γ and δ parameters) as a designated treatment of the non-zero-mean reflections in extremely heterogeneous urban scenes. The CR model made amends for HTR's incapability of modelling urban areas with dominant reflectors such as large residential blocks, but at the cost of losing the characteristic exponent α, an important degree of freedom that gives flexibility in modelling various levels of impulsiveness. As a consequence, Cauchy-Rician is frequently more impulsive than the actual SAR data. The pdf of the CR model is as follows:
$$f(x\mid\gamma,\delta)=\frac{\gamma x}{2\pi}\int_{0}^{2\pi}\frac{d\theta}{\left[\gamma^{2}+x^{2}+2\delta^{2}+2x\delta(\cos\theta+\sin\theta)\right]^{3/2}}\tag{5}$$
In this article, we propose a generalization of the HTR model, producing a novel SAR model named Complex Isotropic α-Stable-Rician (CIαSR), which includes the characteristic exponent, scale parameter, and location parameter from the α-stable family. It combines the advantages of the varying tail thickness of the HTR model and the non-zero-mean reflections of the CR and Rician models, and manifests great potential in accurately modelling heterogeneous SAR images. Details regarding the model are arranged in the following manner: Section II provides a mathematical derivation of the model; Section III introduces a quasi-analytical parameter estimator for the model based on a generalized method of moments; Section IV exemplifies the practical significance of the model by applying it to SAR images of various heterogeneity; Section V concludes the paper and discusses potential future work.
Complex Isotropic α-Stable Rician Model
The CIαSR model proposed in this work describes the amplitude distribution of a complex isotropic α-stable random variable. The derivation starts by generalizing the complex symmetric α-stable distribution [21] with an additional location parameter δ. The characteristic function of an isotropic bivariate α-stable distribution with characteristic exponent α, scale parameter γ, and location parameter δ is
$$\varphi(\omega_{1},\omega_{2})=\exp\left[j(\delta_{1}\omega_{1}+\delta_{2}\omega_{2})-\gamma^{\alpha}|\vec{\omega}|^{\alpha}\right]\tag{6}$$
where ω⃗ = ω₁ + jω₂ is the bivariate vector in the frequency domain, and δ = √(δ₁² + δ₂²) is the location parameter, with δ₁ and δ₂ arbitrarily allocated to the real and imaginary parts. By performing the inverse Fourier transform of the above characteristic function, one acquires the joint pdf of the real and imaginary parts of the signal.
$$f_{X_{re},X_{im}}(x_{re},x_{im})=\frac{1}{4\pi^{2}}\iint_{\omega_{1},\omega_{2}}\exp\left[j(\delta_{1}\omega_{1}+\delta_{2}\omega_{2})-\gamma^{\alpha}|\vec{\omega}|^{\alpha}\right]\exp\left[j(\omega_{1}x_{re}+\omega_{2}x_{im})\right]d\omega_{1}\,d\omega_{2}\tag{7}$$
By letting ω = |ω⃗| = √(ω₁² + ω₂²) and θ_ω = arctan(ω₂/ω₁), the above integral can be converted into polar coordinates.
$$f_{X_{re},X_{im}}(x_{re},x_{im})=\frac{1}{4\pi^{2}}\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\int_{0}^{2\pi}\exp\left[j(\delta_{1}\omega_{1}+\delta_{2}\omega_{2})\right]\exp\left[j(\omega_{1}x_{re}+\omega_{2}x_{im})\right]d\theta_{\omega}\,d\omega\tag{8}$$
By further letting x = √(x_re² + x_im²) and θ_x = arctan(x_im/x_re), one can convert the pdf from that of the real and imaginary parts to that of amplitude and phase.
$$f_{X,\Theta_{x}}(x,\theta_{x})=x\,f_{X_{re},X_{im}}(x\cos\theta_{x},x\sin\theta_{x})=\frac{x}{4\pi^{2}}\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\int_{0}^{2\pi}\exp\left[j\omega(\delta_{1}\cos\theta_{\omega}+\delta_{2}\sin\theta_{\omega})\right]\exp\left[j\omega x(\cos\theta_{x}\cos\theta_{\omega}+\sin\theta_{x}\sin\theta_{\omega})\right]d\theta_{\omega}\,d\omega\tag{9}$$
Utilizing the identity cos A cos B + sin A sin B = cos(A − B), and integrating the pdf given above with respect to Θ_x, one obtains the marginal pdf of the signal amplitude X.
$$f_{X}(x)=\int_{\theta_{x}}f_{X,\Theta_{x}}(x,\theta_{x})\,d\theta_{x}=\frac{x}{4\pi^{2}}\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\int_{0}^{2\pi}\exp\left[j\omega(\delta_{1}\cos\theta_{\omega}+\delta_{2}\sin\theta_{\omega})\right]\int_{0}^{2\pi}\exp\left[j\omega x\cos(\theta_{x}-\theta_{\omega})\right]d\theta_{x}\,d\theta_{\omega}\,d\omega\tag{10}$$
By invoking the identity a cos x + b sin x = √(a² + b²) sin(x + arctan(a/b)) to merge the trigonometric terms within the exponentials, together with the zeroth-order Bessel function of the first kind J₀(x) = (1/2π)∫₀^{2π} exp(jx sin θ) dθ, the above pdf can be reduced to a relatively compact form:
$$f_{X}(x)=x\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\,J_{0}(\omega\delta)\,J_{0}(\omega x)\,d\omega\tag{11}$$
The integral-form Equation (11) gives the expression for the CIαSR pdf. A careful reader may readily notice its resemblance to its degenerate version, the Heavy-Tailed Rayleigh, which is obtained for δ = 0. As a simple check, the model reduces to the Rician distribution at α=2 via the following identity [22], just as a univariate α-stable distribution degenerates to a Gaussian:

$$\int_{0}^{\infty}x\exp(-\rho^{2}x^{2})\,J_{p}(\alpha x)\,J_{p}(\beta x)\,dx=\frac{1}{2\rho^{2}}\exp\left(-\frac{\alpha^{2}+\beta^{2}}{4\rho^{2}}\right)I_{p}\left(\frac{\alpha\beta}{2\rho^{2}}\right)\quad[\mathrm{Re}\,p>-1,\ |\arg\rho|<\pi/4,\ \alpha>0,\ \beta>0]\tag{12}$$

Figure 1a shows that the characteristic exponent α decides the heaviness of the tail; Figure 1b shows that the scale parameter γ determines how dispersed the peak is along the x-axis; and Figure 1c shows that the location parameter δ mainly controls where the pdf of the distribution is placed along the x-axis.
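Since the pdf in Equation (11) has no closed form, it has to be evaluated numerically in practice. The following is a minimal Python sketch of such an evaluation using SciPy quadrature; the function name and the truncation point `w_max` are our own assumptions (not from the paper), and the cutoff should be increased until the result stabilises, especially for small γ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def ciasr_pdf(x, alpha, gamma, delta, w_max=200.0):
    # Integrand of Eq. (11): w * exp(-(gamma*w)^alpha) * J0(w*delta) * J0(w*x)
    integrand = lambda w: w * np.exp(-(gamma * w) ** alpha) * j0(w * delta) * j0(w * x)
    # Truncate the semi-infinite oscillatory integral at w_max (assumption);
    # the exponential factor makes the tail contribution negligible. For very
    # large x the integrand oscillates quickly, hence the generous subdivision limit.
    val, _ = quad(integrand, 0.0, w_max, limit=500)
    return x * val
```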
Parameter Estimation with Generalized Method of Moments
As the reader might have noticed, complex isotropic models in general have pdfs of integral form, which makes their parameter estimation complicated. Past literature has utilized the method of log-cumulants (MoLC) [1] for Heavy-Tailed Rayleigh [19] and Markov chain Monte Carlo for Cauchy-Rician [20], but these prior approaches have proven futile for the more complicated CIαSR model; the reader can refer to the Appendix for details of these failed efforts. In this section, we propose a quasi-analytical parameter estimator based on a generalization of the method of moments. Despite not being completely analytical, due to the involvement of a necessary root-finding algorithm, the estimator is otherwise fast and succinct.
The method of moments requires one to derive the mathematical expectation of a given function and compare it to the empirical expectation calculated from samples, forming:
$$\int_{X}f(x)\,p(x)\,dx=E[f(x)]=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\tag{13}$$
Conventionally, f(x) is chosen to be a power function due to its mathematical simplicity, both in the evaluation of the expectation integral on the left-hand side of Equation (13) and in the numerical calculation of the empirical moment from data on the right-hand side. However, expectations of power functions have proven difficult to derive even for the HTR model [19]. A novel method of Bessel moments (MoBM) is proposed in this work to help resolve the obstinate parameter estimation problem concerning models derived from the α-stable family. We discovered that by choosing f(x) in Eq. (13) to be a Bessel function of the first kind, the function conveniently cancels with the pdf term to yield a compact result. The derivation of this Bessel moment for CIαSR can be summarized as follows.
By setting the function to f(x) = J₀(ax), the expectation of said function under a CIαSR distribution is
$$E[f(x)]=\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\,J_{0}(\omega\delta)\left[\int_{0}^{\infty}x\,J_{0}(\omega x)\,J_{0}(ax)\,dx\right]d\omega\tag{14}$$
By invoking the following identity [22], x can be integrated out:
$$\int_{0}^{\infty}k\,J_{n}(ka)\,J_{n}(kb)\,dk=\frac{1}{a}\,\delta_{D}(b-a)\qquad[n=0,1,2,\ldots]\tag{15}$$
where δ_D(·) denotes the Dirac delta function. This results in
$$E[J_{0}(ax)]=\frac{1}{a}\int_{0}^{\infty}\omega\exp(-\gamma^{\alpha}\omega^{\alpha})\,J_{0}(\omega\delta)\,\delta_{D}(\omega-a)\,d\omega=\exp(-\gamma^{\alpha}a^{\alpha})\,J_{0}(a\delta)\tag{16}$$
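A quick numerical check of Equation (16) is possible in the Gaussian special case α = 2, where the two signal components are simply Gaussian with variance 2γ² and no stable-variate sampler is needed. The snippet below is a sketch with illustrative parameter values of our own choosing; the two printed columns should agree closely.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(1)
gamma, delta, n = 1.0, 3.0, 1_000_000
d1 = d2 = delta / np.sqrt(2.0)  # any split with sqrt(d1^2 + d2^2) = delta works
x = np.hypot(rng.normal(d1, np.sqrt(2.0) * gamma, n),
             rng.normal(d2, np.sqrt(2.0) * gamma, n))
for a in (0.2, 0.5, 1.0):
    empirical = np.mean(j0(a * x))                       # left-hand side of Eq. (16)
    analytical = np.exp(-(gamma * a) ** 2) * j0(a * delta)  # right-hand side at alpha = 2
    print(a, empirical, analytical)
```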
Despite having a Bessel term in its final form, E[J₀(ax)] as a function of a is well behaved: the first term exp(−γ^α a^α) is positive and monotonically decreasing; the second term J₀(aδ) oscillates with zeros asymptotically spaced by roughly π/δ as a becomes large, and is positive and monotonically decreasing before its first root.
In fact, the shape of E[J₀(ax)] behaves similarly to the zeroth-order Bessel function in respect of zeros and signs, but rapidly closes in on the a-axis due to the exponential term. Since the roots of the Bessel function are readily available to high precision, it is feasible to increase the value of a in steps until the empirical expectation ∑ᵢ J₀(a₀xᵢ) goes down to zero; the estimate of δ then equals the quotient of the first Bessel root (approximate value 2.405) by a₀. Another fact that helps with finding the root a₀ is that one can hazard a guess of the approximate δ value: since it is in general of the same magnitude as the sample mean, this approximate value can be used as a reference for the step of increase in a. Pseudo-code for the δ estimation procedure is presented in Algorithm 1.

Algorithm 1: Pseudo-code for δ estimation

    initialize a ← 0
    step ← 0.01 / x̄, where x̄ = (1/N) ∑ᵢ xᵢ
    while ∑ᵢ J₀(a · xᵢ) > 0 do
        a ← a + step
    end while
    δ̂ ← 2.405 / a
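A minimal, runnable Python version of Algorithm 1 might look as follows; function and variable names are our own. Tying the step size to the reciprocal of the sample mean follows the reading of Algorithm 1 in which the sample mean serves as the reference scale for the sweep, so that a·δ advances by roughly 0.01 per step when δ is of the order of the mean.

```python
import numpy as np
from scipy.special import j0

J0_FIRST_ROOT = 2.404826  # first positive zero of the Bessel function J0

def estimate_delta(x):
    """Sweep a upward until the empirical Bessel moment mean(J0(a*x_i))
    crosses zero, then map the crossing back through J0's first root.
    Assumes delta > 0, so that a zero crossing actually occurs."""
    x = np.asarray(x, dtype=float)
    step = 0.01 / x.mean()  # sweep resolution of ~0.01 in units of a*delta
    a = step
    while np.mean(j0(a * x)) > 0:
        a += step
    return J0_FIRST_ROOT / a
```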
Once the location parameter estimate δ̂ is acquired, the remaining two parameters can be solved for in closed form as follows:
$$\hat{\alpha}=\ln\left\{\frac{\ln E[J_{0}(a_{1}x)]-\ln J_{0}(a_{1}\hat{\delta})}{\ln E[J_{0}(a_{2}x)]-\ln J_{0}(a_{2}\hat{\delta})}\right\}\bigg/\ln\left(\frac{a_{1}}{a_{2}}\right),\qquad\hat{\gamma}=\frac{1}{a_{3}}\left\{\ln\frac{J_{0}(a_{3}\hat{\delta})}{E[J_{0}(a_{3}x)]}\right\}^{1/\hat{\alpha}}\tag{17}$$
where aᵢ (i = 1, 2, 3) are arbitrary hyperparameters of different values. During experiments with synthetic CIαSR data, it was discovered that they yield the best performance when chosen around the magnitude of 0.01. One possible explanation for this choice is that the resulting value of aᵢδ falls around the first root of the Bessel function: far enough from zero that the slope is large and the Bessel term is sensitive to changes of δ, and not so far that the Bessel term can no longer be treated as an injective mapping.
Since the estimator offers analytical solutions for the parameters α̂ and γ̂, and only involves a simple iterative operation in the acquisition of δ̂, it is computationally economical and makes a thorough experiment possible on an enormous amount of synthetically generated data, as displayed in the following section. The generalization of the method of moments to a special function like the Bessel function may also inspire the development of alternative generic moment options for solving other analytically complicated distributions.
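For completeness, a sketch of the closed-form step of Equation (17), building on `estimate_delta` above. The particular hyperparameter values a₁, a₂, a₃ are illustrative assumptions in the 0.01 range discussed in the text.

```python
import numpy as np
from scipy.special import j0

def estimate_alpha_gamma(x, delta_hat, a1=0.01, a2=0.02, a3=0.01):
    """Plug the empirical Bessel moments into Eq. (17)."""
    x = np.asarray(x, dtype=float)
    m1, m2, m3 = (np.mean(j0(a * x)) for a in (a1, a2, a3))
    num = np.log(m1) - np.log(j0(a1 * delta_hat))  # equals -gamma^alpha * a1^alpha
    den = np.log(m2) - np.log(j0(a2 * delta_hat))  # equals -gamma^alpha * a2^alpha
    alpha_hat = np.log(num / den) / np.log(a1 / a2)
    gamma_hat = np.log(j0(a3 * delta_hat) / m3) ** (1.0 / alpha_hat) / a3
    return alpha_hat, gamma_hat
```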
Experimental Study
Simulations with Synthetic Data
To verify the robustness of the estimator, the method of Bessel moments is first implemented on synthetic data generated according to the CIαSR model. Synthetic data are generated using the following steps [23]:
1. Generate a one-sided α-stable random variable $A\sim S_{\alpha/2}\left(\left(\cos\frac{\alpha\pi}{4}\right)^{2/\alpha},\,1,\,0\right)$ using the CMS method [24].
2. Generate two i.i.d. zero-mean Gaussian random variables $G_{i}\sim\mathcal{N}(\mu=0,\,\sigma^{2}=2\gamma^{2})$.
3. Combine the random variables as $\vec{X}=(A^{1/2}G_{1}+\delta_{1},\,A^{1/2}G_{2}+\delta_{2})$. The amplitude $X=|\vec{X}|$ is a CIαSR random variable. (A Python sketch of this procedure is given below.)
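The following is a minimal sketch of this sub-Gaussian sampling recipe. It assumes that SciPy's `levy_stable` follows the S_α(scale, skew, location) convention used in the text (worth verifying against your SciPy version), and the even split of δ between the two components is an arbitrary choice, since only √(δ₁² + δ₂²) = δ matters.

```python
import numpy as np
from scipy.stats import levy_stable

def sample_ciasr(alpha, gamma, delta, n, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: one-sided (totally skewed, beta = 1) alpha/2-stable mixing variable A
    scale = np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)
    A = levy_stable.rvs(alpha / 2.0, 1.0, loc=0.0, scale=scale,
                        size=n, random_state=rng)
    # Step 2: two i.i.d. zero-mean Gaussians with variance 2 * gamma^2
    g1 = rng.normal(0.0, np.sqrt(2.0) * gamma, size=n)
    g2 = rng.normal(0.0, np.sqrt(2.0) * gamma, size=n)
    # Step 3: shift by (delta1, delta2) and take the amplitude
    d = delta / np.sqrt(2.0)
    return np.hypot(np.sqrt(A) * g1 + d, np.sqrt(A) * g2 + d)
```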
To begin with, a quick experiment is conducted to study the effect of dataset size on the performance of the MoBM estimator. Figure 3 reports the estimator's performance for synthetic dataset sizes from 1 × 10⁴ to 1 × 10⁷ samples. In general, a smooth surface in the subfigures suggests that the estimation is stable, and a surface parallel to one or two axes suggests that the estimation of one parameter is independent of the other parameters. Apparently, the stability of estimation of all three parameters depends directly on the number of samples, and a larger dataset ensures more accurate estimates. In the subsequent sections, we conduct synthetic experiments with a data size of 1 × 10⁶ samples per dataset as a trade-off between computational efficiency and stability.
In this subsection, a total of 1500 synthetic datasets are generated to cover as wide a range of parameter combinations as possible, namely 10 values of α from 0.1 to 1.9, by 10 values of γ from 10 to 100, by 15 values of δ from 10 to 150. Each synthetic dataset contains 1 × 10⁶ samples to simulate the typical size of a SAR image patch of 1000 by 1000 pixels. For brevity, Fig. 4 displays only the results for α at 0.3, 0.7, 1.1, 1.5, and 1.9.
A quick comparison between the sub-figures in Fig. 4 offers a straightforward reference on the performance of the Bessel moment estimator. The estimation of α fluctuates as the actual α value used in data generation gets higher, and is most sensitive to extremely small γ values. The estimation of γ also experiences minor influence as the actual α gets high or extremely low (α close to 0, which does not occur in any practical scenario), but the effect is negligible. The estimation of the δ parameter is the most problematic, since it becomes unstable when the actual α and γ are large while the actual δ is small. This is most probably because high α and γ lead to a homogeneous, dispersed data histogram, and the proposed estimator lacks an adequate ability to distinguish this dispersed feature from a dataset with a large shift. Fig. 5 reports performance metrics gathered in the synthetic data experiment, such as the mean square error (MSE) and the Kullback-Leibler divergence (KL-div), to evaluate the estimator quantitatively. Fig. 5a shows the MSE of the three estimated parameters as α ranges over its domain (0, 2]. Fig. 5b shows the KL-div for the α=1.9 case, where the estimation accuracy is affected the most.
The MSE performance generally shows a tendency of the estimation becoming unstable at large α, the only exception being the MSE of the γ parameter, which increases as α approaches zero. A main reason for this phenomenon is that the estimator tends to confuse the dispersion of the pdf with the strong heterogeneity characterized by low α values. The KL-div results also confirm that the estimation becomes unstable when γ or δ is very small. One intriguing fact is that although the α and γ parameters are estimated on the basis of δ, the deterioration of the δ estimation at high α and γ does not seem to drastically contaminate the two parameters or significantly encumber the overall fitting performance, since the KL-div remains of the magnitude of 1 × 10⁻⁴. One possible explanation is that a less accurate δ estimation is compensated by an overestimated dispersion, subsequently reducing its damage to the overall fitting performance. Still, Fig. 5b and Fig. 4o suggest that the goodness-of-fit is essentially affected by inaccurate estimation of the characteristic exponent α and location parameter δ.

Figure 4: Estimated parameter values for a comprehensive spectrum of synthetically generated CIαSR data: the X-axis and Y-axis within each subfigure reflect how the estimation is affected by the change of the actual γ and δ values during data generation; the three columns of subfigures correspond to the estimated α, γ, and δ parameters from left to right; and the five rows correspond to synthetic data generated with α values of 0.3, 0.7, 1.1, 1.5, and 1.9 from top to bottom.
Experiments on Real Data
In this section, the proposed model and estimator are applied to SAR image data of various heterogeneity. The images used in this section were captured by ESA's C-band SAR satellite Sentinel-1 and include scenes such as desert, vegetation, mountain, sea with ships (w/Sea), and urban areas. Performance of the model is compared with state-of-the-art models including Weibull, log-normal, G⁰ [7], and GGR [12], and is evaluated using the Kullback-Leibler divergence (KL-div) and the Kolmogorov-Smirnov score (KS-score). Experimental results are shown in the following table with the winning models in bold font. There are certain cases where the winning models by the KL-div and KS-score standards differ; this reflects a draw between the CIαSR and GGR models, since both are good at modelling moderately heterogeneous data. The performance metrics in Table 1 show that the proposed CIαSR model excels in modelling urban and portal sea data with almost unanimous advantage; this observation coincides with CIαSR's theoretical foundation in the GCLT, which implies that the data populations of these scenes have heavy power-law tails. We attribute the model's superiority in modelling heterogeneous data to the comprehensiveness of having three independent parameters. The estimated α in heterogeneous data generally lies between 1.3 and 1.9, which makes amends for the Cauchy-Rician model (α=1) assuming a heterogeneity well above that of the actual SAR images; the estimated δ lies between 20 and 140, which in turn resolves the problem of dominant reflectors neglected by the Heavy-Tailed Rayleigh model. However, CIαSR often fails or comes second when modelling homogeneous data such as images of vegetation or desert, despite its theoretical capability of modelling data over a wide range of heterogeneity by varying the characteristic exponent α.
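As an aside, both goodness-of-fit metrics are straightforward to compute from a fitted pdf. The helper below is a hypothetical sketch (the name, binning, and discretization are our own choices), pairing a normalized data histogram with any model pdf callable such as `ciasr_pdf` above.

```python
import numpy as np
from scipy.stats import entropy, kstest

def fit_metrics(x, pdf, bins=256):
    x = np.asarray(x, dtype=float)
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = np.array([pdf(c) for c in centers])
    # KL divergence between the (normalized) histogram and model densities;
    # the small epsilon avoids log-of-zero in empty bins
    kl = entropy(hist + 1e-12, model + 1e-12)
    # KS statistic against an approximate model cdf obtained by numerical integration
    cdf_grid = np.cumsum(model * np.diff(edges))
    ks = kstest(x, lambda t: np.interp(t, centers, cdf_grid / cdf_grid[-1])).statistic
    return kl, ks
```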
To further study the reasons behind the fitting performance of the CIαSR model, the pdf curves of the candidate models are plotted alongside the SAR image data histograms to provide a direct demonstration of goodness-of-fit. Three sets of image data, namely Urban4, w/Sea2, and Vegetation3 from Table 1, are selected to cover different levels of heterogeneity. According to Figure 6, CIαSR provides the best representation of the heavy histogram tails in all three scenes, but the results in Table 1 show that the model only wins on Urban4 and w/Sea2 and not on Vegetation3. A closer look at the SAR image histograms suggests the answer: while the CIαSR curve fits accurately throughout the data histogram in Figures 6b and 6d, the model fails to accommodate a large part of the transition area between peak and tail (x from 220 to 800) in sub-figure (f); the contrast between bright and dark pixels in sub-figures (a-c) also suggests that Vegetation3 has a less heterogeneous texture than the other two images. One explanation is that the CIαSR model tends to sacrifice accurate fitting in the peak-tail transition area in exchange for a better representation of the histogram tail, yet this preference drags down its rating under performance metrics such as KL-div and KS-score. One possible solution is to further generalize the model by adding a skewness parameter β for additional flexibility of the pdf shape [25], which requires an adequate parameter estimator that the authors wish to address in future work.
Interpretation of Model Parameters
In order to provide a practical interpretation of the CIαSR model parameters, as well as to explore the model's potential applications in image classification and target detection, two large SAR images combining several different scenes, covering Shanghai and Eastern Jiangsu Province (Figure 7a) and the portal city of Tianjin (Figure 7f), are segmented into patches, and the corresponding model parameters are estimated. Both images have an original dimension of 20000 by 15000 pixels and are equally divided into 40 by 30 square patches 500 pixels wide. The patches are processed by the proposed CIαSR model to produce matrices of the three model parameters α, γ, and δ, which are displayed as heatmaps in Figure 7 (c-e) and (h-j). Pseudo-RGB images are subsequently generated by assigning the normalized values of the three parameters to the red, green, and blue channels, respectively. Please note that both the colour bar of the α heatmaps and the corresponding red-channel intensity in the pseudo-RGB images are inverted, since a smaller value of the α parameter corresponds to data with a heavier tail, and consequently to brighter pixels in the original images.
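A small sketch of how such a pseudo-RGB composite could be assembled from the per-patch parameter maps; the min-max normalization and the function name are our own assumptions rather than details given in the text.

```python
import numpy as np

def pseudo_rgb(alpha_map, gamma_map, delta_map):
    def norm(m):
        m = np.asarray(m, dtype=float)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    r = 1.0 - norm(alpha_map)  # inverted: small alpha (heavy tail) -> bright red
    g = norm(gamma_map)        # dispersion, i.e. texture complexity
    b = norm(delta_map)        # average reflection intensity
    return np.dstack([r, g, b])
```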
Comparison between the CIαSR-generated pseudo-RGB images and the originals in Fig. 7 helps in understanding the meaning of the CIαSR parameters. The characteristic exponent α represents the tail heaviness of the data population; red regions of small α in the pseudo-RGB images indicate a higher proportion of extremely large data values in contrast to the background, marking out the major residential area in Figure 7b and voyaging ships in Figure 7g. The scale parameter γ represents the dispersion of the data population; green regions of large γ usually indicate a mixture of different scenes, which contributes to a histogram peak spread thin, marking out texture-complex patches that involve a change of terrain, such as the lakeside in Figure 7b and the coastline in Figure 7g. The location parameter δ represents the average reflection intensity of the area; blue regions of different shades indicate the intrinsic reflectance to SAR radio illumination, distinguishing land (high reflectance) from water (low reflectance). Sometimes all three channels have high values and produce a white region, which corresponds to an extremely heterogeneous area with high base reflection and a mixture of terrains; an example is the image segment in row 7, column 1 of Figure 7b, an extremely crowded residential area with a river running past it, coinciding with the geographical landmark of Lujiazui, one of the most crowded districts in Shanghai. Likewise, it is possible to analyze image scene composition using the CIαSR parameters for the purposes of terrain classification and target detection.
Conclusion
A novel statistical model named CIαSR is proposed for SAR images to cater for scenes of various heterogeneity. The model describes the amplitude distribution of a complex isotropic α-stable random variable and is theoretically justified by a generalized Central Limit Theorem. Detailed experiments on both synthetic and real data have proved CIαSR capable of faithfully representing the heavy tails of impulsive data populations, as particularly observed in SAR image data of urban areas or sea surfaces dotted with ships.
Alongside the model, a quasi-analytical parameter estimator that combines a root-finding method with a generalized method of Bessel moments (MoBM) is devised to acquire the CIαSR parameters at low computational cost. This generalization introduces, for the first time, the Bessel function into the method of moments and may serve as a generic solution for other complicated distribution models. The physical significance of the CIαSR model is explored by implementing the model on large-swath SAR images containing a mixture of different scenes. The three model parameters prove to be good characterizations of the heavy-tailedness, texture complexity, and average reflection of a SAR image, respectively. The proposed CIαSR model demonstrates admirable potential for classification and target recognition in radar images, and perhaps other types of image data.
Despite its dominant advantage on heterogeneous data, the CIαSR model falls short in modelling homogeneous data. One possible way to compensate for this drawback is to further extend the model with an adjustable skewness parameter β. However, a novel parameter estimator is required to support this more complicated model, which will be addressed by the authors in future work.
The curves in Figure 1 are placed hereby to offer an outright example of how the model pdf varies according to changes of each parameter. The figure also provides an insight into how the CIαSR model includes Heavy-Tailed Rayleigh, Cauchy-Rician, and Rician as its special cases.

Figure 1: CIαSR model curves with varying parameters: (a) characteristic exponent α (γ=1, δ=1); (b) scale parameter γ (α=1.5, δ=1); and (c) location parameter δ (α=1.5, γ=1).
Figure 2: Shape of the Bessel moment and the two terms contributing to it: (a) the exponential term; (b) the Bessel term; (c) the Bessel moment.
Figure 3: Effect of the synthetic dataset size on the stability of the MoBM estimator: the X-axis and Y-axis within each subfigure reflect how the estimation is affected by the change of the actual γ and δ values during data generation; the four columns of subfigures correspond to the estimated parameters for synthetic dataset sizes from 1 × 10⁴ to 1 × 10⁷, left to right; the three rows correspond to the estimation of α, γ, and δ, respectively.
Figure 5: Performance metrics for the synthetic data experiment: (a) MSE of the estimated parameters for varying α values; (b) KL-div of the estimation when α=1.9.
w/Sea: SAR image of sea surface with ships.
Figure 6: SAR images and histograms versus candidate model fits for (a, b) Urban 4; (c, d) w/Sea 1; and (e, f) Vegetation 2.
Figure 7: Demonstration of CIαSR's application to SAR feature extraction: (a) original SAR image of Shanghai and Eastern Jiangsu Province with a grid indicating how patches are segmented; (c-e) heatmaps of the estimated CIαSR parameters α, γ, and δ, respectively; (b) pseudo-RGB image generated from the estimated parameters. (f-j) are the counterparts for the SAR image of the portal city Tianjin.
Table 1: Performance of different models on various SAR data: KL-div and KS-score of the CIαSR, Weibull, log-normal, G⁰, and GGR models for each scene (No., Scene), with winning models in bold.
References

[1] C. Tison, J.-M. Nicolas, F. Tupin, H. Maître, A new statistical model for Markovian classification of urban areas in high-resolution SAR images, IEEE Transactions on Geoscience and Remote Sensing 42 (10) (2004) 2046-2057.
[2] K. Copsey, A. Webb, Bayesian gamma mixture model approach to radar target recognition, IEEE Transactions on Aerospace and Electronic Systems 39 (4) (2003) 1201-1217.
[3] G. Gao, K. Ouyang, Y. Luo, S. Liang, S. Zhou, Scheme of parameter estimation for generalized gamma distribution and its application to ship detection in SAR images, IEEE Transactions on Geoscience and Remote Sensing 55 (3) (2016) 1812-1832.
[4] F. T. Ulaby, F. Kouyate, B. Brisco, T. L. Williams, Textural information in SAR images, IEEE Transactions on Geoscience and Remote Sensing (2) (1986) 235-245.
[5] S. F. George, The detection of nonfluctuating targets in log-normal clutter, Tech. rep., Naval Research Lab, Washington DC (1968).
[6] E. Jakeman, P. Pusey, A model for non-Rayleigh sea echo, IEEE Transactions on Antennas and Propagation 24 (6) (1976) 806-814.
[7] A. C. Frery, H.-J. Muller, C. d. C. F. Yanasse, S. J. S. Sant'Anna, A model for extremely heterogeneous clutter, IEEE Transactions on Geoscience and Remote Sensing 35 (3) (1997) 648-659.
[8] J. W. Strutt, J. W. S. B. Rayleigh, The theory of sound, Vol. 1, Macmillan, 1894.
[9] S. O. Rice, Mathematical analysis of random noise, The Bell System Technical Journal 23 (3) (1944) 282-332.
[10] J.-M. Nicolas, F. Tupin, A new parameterization for the Rician distribution, IEEE Geoscience and Remote Sensing Letters 17 (11) (2019) 2011-2015.
[11] G. Moser, J. Zerubia, S. B. Serpico, SAR amplitude probability density function estimation based on a generalized Gaussian model, IEEE Transactions on Image Processing 15 (6) (2006) 1429-1442.
[12] O. Karakuş, E. E. Kuruoglu, A. Achim, A generalized Gaussian extension to the Rician distribution for SAR image modeling, IEEE Transactions on Geoscience and Remote Sensing 60 (2021) 1-15.
[13] R. A. Dana, D. L. Knepp, The impact of strong scintillation on space based radar design II: Noncoherent detection, IEEE Transactions on Aerospace and Electronic Systems (1) (1986) 34-46.
[14] O. Karakuş, E. E. Kuruoglu, A. Achim, Modelling sea clutter in SAR images using Laplace-Rician distribution, in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 1454-1458.
[15] B. Mandelbrot, The Pareto-Levy law and the distribution of income, International Economic Review 1 (2) (1960) 79-106.
[16] V. M. Zolotarev, One-dimensional stable distributions, Vol. 65, American Mathematical Soc., 1986.
[17] B. Gnedenko, A. Kolmogorov, Limit distributions for sums of independent random variables, Addison-Wesley, 1968.
[18] E. E. Kuruoglu, J. Zerubia, Modeling SAR images with a generalization of the Rayleigh distribution, IEEE Transactions on Image Processing 13 (4) (2004) 527-533.
[19] A. Achim, E. E. Kuruoglu, J. Zerubia, SAR image filtering based on the heavy-tailed Rayleigh model, IEEE Transactions on Image Processing 15 (9) (2006) 2686-2693.
[20] O. Karakuş, E. E. Kuruoglu, A. Achim, M. A. Altınkaya, Cauchy-Rician model for backscattering in urban SAR images, IEEE Geoscience and Remote Sensing Letters 19 (2022) 1-5.
[21] S. Cambanis, Complex symmetric stable variables and processes, University of North Carolina, 1982.
[22] I. S. Gradshteyn, I. M. Ryzhik, Table of integrals, series, and products, Academic Press, 2014.
[23] G. Samorodnitsky, Stable non-Gaussian random processes: stochastic models with infinite variance, Routledge, 2017.
[24] J. M. Chambers, C. L. Mallows, B. Stuck, A method for simulating stable random variables, Journal of the American Statistical Association 71 (354) (1976) 340-344.
[25] E. E. Kuruoglu, J. Zerubia, Skewed α-stable distributions for modelling textures, Pattern Recognition Letters 24 (1-3) (2003) 339-348.
| [] |
[
"RotorPy: A Python-based Multirotor Simulator with Aerodynamics for Education and Research",
"RotorPy: A Python-based Multirotor Simulator with Aerodynamics for Education and Research"
] | [
"Spencer Folk ",
"James Paulos ",
"Vijay Kumar "
] | [] | [] | Simulators play a critical role in aerial robotics both in and out of the classroom. We present RotorPy, a simulation environment written entirely in Python intentionally designed to be a lightweight and accessible tool for robotics students and researchers alike to probe concepts in estimation, planning, and control for aerial robots. RotorPy simulates the 6-DoF dynamics of a multirotor robot including aerodynamic wrenches, obstacles, actuator dynamics and saturation, realistic sensors, and wind models. This work describes the modeling choices for RotorPy, benchmark testing against real data, and a case study using the simulator to design and evaluate a modelbased wind estimator. | null | [
"https://export.arxiv.org/pdf/2306.04485v1.pdf"
] | 259,095,708 | 2306.04485 | b44943f1ade779798fe2138f9f05279cfd879653 |
RotorPy: A Python-based Multirotor Simulator with Aerodynamics for Education and Research
Spencer Folk
James Paulos
Vijay Kumar
RotorPy: A Python-based Multirotor Simulator with Aerodynamics for Education and Research
Simulators play a critical role in aerial robotics both in and out of the classroom. We present RotorPy, a simulation environment written entirely in Python, intentionally designed to be a lightweight and accessible tool for robotics students and researchers alike to probe concepts in estimation, planning, and control for aerial robots. RotorPy simulates the 6-DoF dynamics of a multirotor robot including aerodynamic wrenches, obstacles, actuator dynamics and saturation, realistic sensors, and wind models. This work describes the modeling choices for RotorPy, benchmark testing against real data, and a case study using the simulator to design and evaluate a model-based wind estimator.
I. INTRODUCTION
Dynamics simulation environments aid robotics education and research, providing a playground for rapid experimentation and evaluation of robotic design, perception, and action. Aerial robots, or UAVs, are a complicated application domain-unstable dynamics requiring high speed sensors, actuators, controllers, and planners, and complex aerodynamic interactions with the environment and other UAVs-placing added demands on simulation tools necessary for synthesis and analysis. Existing simulators, driven by target applications, tend to prioritize compute speed with hardware integration like RotorS [1] and Agilicious [2], or photorealistic visualization like with Airsim [3] and Flightmare [4]. Also, reinforcement learning (RL) for UAVs is sprouting simulation environments like Gym-PyBullet-Drones [5] purpose-built for use with common Python-based RL toolkits. The trend seems to be towards increasingly complex and elaborate codebases requiring a high level of expertise to navigate and understand their modeling choices, making it hard to decide whether or not a simulator will fit the needs of a new user. To that end, we developed a new simulation environment called RotorPy 1 , which prioritizes accessibility, transparency, and educational value, serving as a tool for learning and exploration in aerial robotics both for students and researchers. Initially created as a teaching aid for a robotics course at the University of Pennsylvania, RotorPy was designed to be lightweight, easy to install, and accessible to engineers with working knowledge of Python.
This paper introduces RotorPy's modeling choices, structure, and features contributing to an effective environment for probing aspects of UAVs. We then present a case study using the simulator to design and evaluate a model-based wind estimator.
II. MODELING
RotorPy includes a quadrotor UAV model with aerodynamic wrenches, inertial and motion capture sensors, cuboid obstacle environments, and spatio-temporal wind fields.
A. Multirotor dynamics
Following Figure 1, we model a multirotor UAV with coplanar rotors using the Newton-Euler equations:
$$\dot{x} = v \quad (1)$$
$$\dot{v} = \frac{1}{m} R (f_c + f_a) - g e_3 \quad (2)$$
$$\dot{R} = R \hat{\Omega} \quad (3)$$
$$\dot{\Omega} = J^{-1} (m_c + m_a - \Omega \times J \Omega) \quad (4)$$
where $x \in \mathbb{R}^3$ and $v \in \mathbb{R}^3$ are respectively the position and velocity vectors; $R \in SO(3)$ is the rotation from the body frame to the world frame; $\Omega \in \mathbb{R}^3$ is the angular velocity, with $\hat{\Omega}$ its skew-symmetric matrix form; $m$ is the total mass and $J$ is the inertia tensor expressed in the body frame. The terms $f_c \in \mathbb{R}^3$ and $m_c \in \mathbb{R}^3$ constitute the control wrench, i.e., the forces and torques produced by the rotor thrust and drag torque. We model the control wrench with
$$f_c = k_\eta \sum_{i=1}^{n} \eta_i^2 \, b_3 \quad (5)$$
$$m_c = k_m \sum_{i=1}^{n} \epsilon_i \eta_i^2 \, b_3 + \sum_{i=1}^{n} r_i \times f_{c_i} \quad (6)$$
Here, η i is the i'th rotor speed, r i is the vector from the center of mass to the rotor hub, and ϵ i ∈ {−1, 1} is the rotor's direction of rotation. We assume that each rotor has the same static thrust, k η , and drag torque, k m , coefficients which can be identified using thrust stand tests.
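To make Eqs. (5)-(6) concrete, below is a minimal NumPy sketch of the control wrench computation. The function name and array conventions are our own illustration, not RotorPy's API; $f_{c_i} = k_\eta \eta_i^2 b_3$ is the per-rotor thrust.

```python
import numpy as np

def control_wrench(eta, r, eps, k_eta, k_m):
    """Control wrench from rotor speeds, following Eqs. (5)-(6).

    eta   : (n,) rotor speeds
    r     : (n, 3) rotor positions relative to the center of mass (body frame)
    eps   : (n,) spin directions, each +1 or -1
    k_eta : static thrust coefficient;  k_m : drag torque coefficient
    """
    b3 = np.array([0.0, 0.0, 1.0])
    f_ci = k_eta * eta**2                       # per-rotor thrust magnitudes
    f_c = f_ci.sum() * b3                       # Eq. (5): collective thrust
    m_yaw = k_m * np.sum(eps * eta**2) * b3     # reaction (drag) torques about b3
    m_arm = np.cross(r, f_ci[:, None] * b3).sum(axis=0)  # thrust moment arms
    return f_c, m_yaw + m_arm                   # Eq. (6)
```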
B. Aerodynamic wrenches
Together, the vectors $f_a \in \mathbb{R}^3$ and $m_a \in \mathbb{R}^3$ make up the aerodynamic wrench, which is a collection of forces and torques produced by the relative motion of the UAV through a fluid medium. There are multiple physical phenomena that produce $f_a$ and $m_a$ on the UAV (see [6] and references therein). These effects all depend on the relative body airspeed, $v_a = R^\top (\dot{x} - w)$, where $w \in \mathbb{R}^3$ is the local wind vector in the world frame. We lump these effects into three contributions to the aerodynamic wrench: parasitic drag, rotor drag, and blade flapping.
1) Parasitic drag: Parasitic drag is the combination of skin friction and pressure drag acting on the body of the UAV. It is characteristically proportional to the airspeed squared:
$$D_p = -C \|v_a\|_2 \, v_a \quad (7)$$

where $C = \mathrm{diag}(c_{Dx}, c_{Dy}, c_{Dz})$ is a matrix of parasitic drag coefficients corresponding to each body axis.
2) Rotor drag: In contrast to parasitic drag, which is dominant at higher airspeeds, rotor drag can have a surprisingly large presence on small UAVs even at lower airspeeds. The physical phenomenon responsible for rotor drag is the dissymmetry of lift produced by a rotor in forward flight, whereby the advancing blade experiences a higher airspeed than the retreating blade producing an imbalance of forces on the rotor. We adopt the rotor drag model used in [6] in which the drag force is proportional to the product of the airspeed and the rotor speed.
$$D_{r_i} = -K \eta_i v_{a_i} \quad (8)$$

where $K = \mathrm{diag}(k_d, k_d, k_z)$ is a matrix of rotor drag coefficients corresponding to each body axis.²
3) Blade flapping: Dissymmetry of lift at the advancing and retreating sides of the rotor will also cause the rotor blades to deflect up and down as they revolve in a flapping motion. Svacha et al. [6] provides experimental evidence for flapping moments even for small UAVs with rigid rotors. This is a very complex phenomenon that can produce both longitudinal and lateral moments depending on the rigidity of the blades [7]. Our model expresses blade flapping as a longitudinal moment following [6]
$$m_{\mathrm{flap}_i} = -k_{\mathrm{flap}} \, \eta_i \, v_{a_i} \times b_3 \quad (9)$$
with k f lap being the flapping coefficient.
The total aerodynamic force in the body frame is $f_a = D_p + \sum_{i=1}^{n} D_{r_i}$, and the total moment is $m_a = \sum_{i=1}^{n} (m_{\mathrm{flap}_i} + r_i \times D_{r_i})$.

² As noted in [6], the $k_z$ term isn't actually a source of drag, but rather a linear approximation of the loss of thrust due to a change in inflow. However, it resembles an effective drag on the body z-axis.
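As an illustration, here is a minimal NumPy sketch that assembles the aerodynamic wrench from Eqs. (7)-(9). For simplicity it approximates each rotor-local airspeed $v_{a_i}$ by the center-of-mass airspeed $v_a$; all names are ours rather than RotorPy's.

```python
import numpy as np

def aero_wrench(v_a, eta, r, C, K, k_flap):
    """Aerodynamic force/torque sketch following Eqs. (7)-(9).

    v_a : (3,) body-frame airspeed (also used for each rotor here)
    eta : (n,) rotor speeds;  r : (n, 3) rotor positions in the body frame
    C, K: (3, 3) diagonal drag coefficient matrices
    """
    b3 = np.array([0.0, 0.0, 1.0])
    D_p = -C @ v_a * np.linalg.norm(v_a)                 # Eq. (7): parasitic drag
    D_r = -(K @ v_a)[None, :] * eta[:, None]             # Eq. (8): per-rotor drag
    m_flap = -k_flap * eta[:, None] * np.cross(v_a, b3)  # Eq. (9): flapping moments
    f_a = D_p + D_r.sum(axis=0)
    m_a = m_flap.sum(axis=0) + np.cross(r, D_r).sum(axis=0)
    return f_a, m_a
```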
C. Actuator dynamics
Even for very small UAVs, the motors take time to settle to a commanded speed. Capturing this effect has proven to be important especially for RL applications [5]. We model the actuator delay using a first order process:
$$\dot{\eta} = \frac{1}{\tau_m} (\eta_c - \eta) \quad (10)$$
where $\eta_c \in \mathbb{R}^n$ are the commanded rotor speeds and $\tau_m$ is the motor time constant, which can be identified using static thrust stand testing.
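Under a zero-order hold on the command, Eq. (10) admits an exact discrete-time update, sketched below (the helper name is ours):

```python
import numpy as np

def motor_step(eta, eta_c, dt, tau_m):
    # Exact zero-order-hold solution of Eq. (10) over a step of length dt:
    # the rotor speed decays exponentially toward the commanded value.
    return eta_c + (eta - eta_c) * np.exp(-dt / tau_m)
```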
D. Sensors 1) Inertial measurement unit: The simulator's inertial measurement unit (IMU) measurement is given by:
$$h_{\mathrm{IMU}} = \begin{bmatrix} R_B^I (\dot{v} + g e_3) + a_{\mathrm{IMU}} + b_a + \nu_a \\ \Omega + b_g + \nu_g \end{bmatrix} \quad (11)$$

where $\dot{v}$ is given by Eq. (2), $a_{\mathrm{IMU}} = \Omega \times (\Omega \times r_{\mathrm{IMU}})$, $r_{\mathrm{IMU}}$ and $R_B^I$ are the position and orientation of the sensor in the body frame, and $\nu_{(\cdot)} \sim \mathcal{N}(0, \Sigma_{(\cdot)})$ are sensor noises. The biases $b_{(\cdot)}$ are driven by a random walk to simulate drift.
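A minimal sketch of this sensor model in NumPy might look as follows; argument names are our assumptions, and we take $R_B^I$ to be the rotation mapping the frame of $\dot{v} + g e_3$ into the IMU frame.

```python
import numpy as np

def imu_measurement(v_dot, Omega, R_BI, r_imu, b_a, b_g, Sigma_a, Sigma_g, g=9.81):
    """Sketch of Eq. (11): accelerometer (specific force) and gyro readings."""
    e3 = np.array([0.0, 0.0, 1.0])
    a_imu = np.cross(Omega, np.cross(Omega, r_imu))   # lever-arm acceleration
    nu_a = np.random.multivariate_normal(np.zeros(3), Sigma_a)
    nu_g = np.random.multivariate_normal(np.zeros(3), Sigma_g)
    accel = R_BI @ (v_dot + g * e3) + a_imu + b_a + nu_a
    gyro = Omega + b_g + nu_g
    # In the simulator, b_a and b_g would additionally be advanced by a
    # random walk between calls to model drift.
    return np.concatenate([accel, gyro])
```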
2) External motion capture: The external motion capture sensor provides information about the pose and twist of the robot in the world frame.
$$h_{\mathrm{MC}} = \begin{bmatrix} x + \nu_x \\ v + \nu_v \\ q \oplus q_\nu \\ \Omega + \nu_\Omega \end{bmatrix} \quad (12)$$

Above, $q$ is the quaternion representation of $R$, $\oplus$ is the quaternion group operation, and $q_\nu$ is a quaternion formed from small noise perturbations following [8].
E. Wind
Wind is modeled by treating the local average wind acting on the center of mass as an additional state vector. How this state evolves depends on the chosen wind profile. RotorPy offers flexibility by supporting both spatial and temporal wind profiles. Several profiles like step changes, sinusoids, and the Dryden wind turbulence model are included for evaluating controller robustness or estimation accuracy.
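To illustrate what a pluggable profile can look like, here is a toy example; the interface below is our own sketch and may differ from RotorPy's actual wind classes.

```python
import numpy as np

class SinusoidWind:
    """Toy wind profile: spatially uniform, temporally sinusoidal gust."""
    def __init__(self, amplitude=(2.0, 0.0, 0.0), period=5.0):
        self.amp = np.asarray(amplitude, dtype=float)
        self.omega = 2.0 * np.pi / period

    def update(self, t, position):
        # Returns the local wind vector w used in v_a = R^T (xdot - w);
        # position is accepted so spatial profiles can share this interface.
        return self.amp * np.sin(self.omega * t)
```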
III. SIMULATION FRAMEWORK
RotorPy is written entirely in Python, a deliberate choice originally made to serve instructional needs. While this choice might come at a performance cost, the readability and widespread use of Python in scientific computing are the key to this simulator's accessibility and low barrier to entry, which is beneficial for education, as originally intended, but also for research. Python also makes installation of both RotorPy and its dependencies possible with one command.
A. Usage
RotorPy is a collection of modules that can be imported into scripts anywhere. The Environment class makes it possible to create, run, and analyze multiple simulations, all with potentially unique configurations, in just a single Python file. We believe this design principle makes RotorPy stand out among other simulators, which typically run in a self-contained manner. In contrast, our simulator is purposefully exportable in a manner conducive to studies that require lots of data (e.g., reinforcement learning, design parameter search, controller verification).
The environment needs a vehicle, controller, and planner; examples of these are all provided out of the box. The Environment also has options to add wind and obstacles, configure sensor intrinsics and extrinsics, and more. Running the simulator only takes one line:
results = sim_instance.run(duration, *args)
The output of run() is a dictionary containing the ground truth states, desired state from the trajectory planner, sensor measurements, and controller commands. In addition to optional auto-generated plots and simple animations for quick user assessment, we provide a script for automatically converting the results into a Pandas DataFrame for larger scale data analysis.
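A sketch of typical end-to-end usage follows. The module paths and constructor arguments below are assumptions based on the paper's description, not a verbatim API; consult the RotorPy repository for the actual names.

```python
# Hypothetical usage sketch -- import paths are assumed, not guaranteed.
from rotorpy.environments import Environment
from rotorpy.vehicles.multirotor import Multirotor
from rotorpy.vehicles.crazyflie_params import quad_params
from rotorpy.controllers.quadrotor_control import SE3Control
from rotorpy.trajectories.circular_traj import CircularTraj

sim_instance = Environment(vehicle=Multirotor(quad_params),
                           controller=SE3Control(quad_params),
                           trajectory=CircularTraj(radius=1.5))
results = sim_instance.run(10.0)  # duration in seconds; returns a dict of
                                  # states, sensor readings, and commands
```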
B. Numerical integration
UAVs are hybrid systems: the dynamics are continuous, but control occurs at discrete instants, motivating an approach to numerical integration that preserves the continuity of the dynamics between controller updates. To that end, RotorPy uses an RK45 integrator with variable step size 3 . An added benefit of the variable step size is that we can run simulations with larger time steps, reducing compute cost, while preserving integration accuracy.
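In practice this pattern amounts to integrating the continuous dynamics over one control interval with the command held constant, which SciPy's solve_ivp does directly; the sketch below is our own illustration of that pattern.

```python
import numpy as np
from scipy.integrate import solve_ivp

def step(dynamics, x0, u, t0, dt):
    """Integrate continuous dynamics over one control interval with a
    zero-order hold on the command u (variable-step RK45 under the hood)."""
    sol = solve_ivp(lambda t, x: dynamics(t, x, u), (t0, t0 + dt), x0,
                    method="RK45", rtol=1e-6, atol=1e-9)
    return sol.y[:, -1]

# Example: the first-order motor lag of Eq. (10) as the integrated dynamics
tau_m = 0.05
motor = lambda t, x, u: (u - x) / tau_m
x_next = step(motor, np.array([0.0]), np.array([500.0]), 0.0, 0.01)
```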
IV. BENCHMARKING
In order to verify our models, especially for the sensors, we collected flight data from a Crazyflie 2.1 4 performing a series of aggressive maneuvers. In this paper, we present one instance of our hardware trials in which the Crazyflie is commanded to fly in a tight circle at speeds up to 2.5 m/s.
A. Hardware setup
A motion capture system sends pose and twist data at 100Hz to a base station computer, which uses a nonlinear geometric controller to generate a command based on the current state and desired trajectory. RotorPy's controller and trajectory generator are designed to be compatible with our lab hardware. The Crazyflie uses onboard PID controllers to track the collective thrust and attitude commands from the simulator's controller.
B. Circle comparison
In Figure 2, we compare the IMU measurements from simulation and the Crazyflie. This comparison principally highlights the simulator's sensor and aerodynamic models. For our co-planar configuration, the accelerations in the x and y body axes isolate the drag forces for comparison, since we would otherwise expect zero accelerations in the absence of the aerodynamic wrenches [9].
V. CASE STUDY: WIND ESTIMATION
We demonstrate the utility of RotorPy by evaluating a custom Bayesian filter for estimating the local wind vector, w, using measurements from the navigation system rather than a dedicated wind sensor-this approach can be classified as indirect wind estimation [10]. The estimator is implemented in simulation as an unscented Kalman Filter (UKF), using the simulator's accelerometer and the motion capture sensors to observe the wind vector. The process model makes several simplifications: linearized attitude dynamics and a version of the aerodynamics that only considers parasitic drag. For each evaluation, a calibration procedure collects simulated flight data of the quadrotor with randomized parameters, and then fits quadratic drag coefficients for the process model. Figure 3 summarizes the average RMSE over 50 evaluations of the filter using randomized quadrotor parameters in Table I. In half of the trials, the filter's RMSE falls around or under 0.5 m/s; however, performance is poor in cases where the calibration procedure fails to find good drag coefficients, like when the real drag coefficients are small. Figure 4 is one instance of the evaluation, comparing the actual wind components to that estimated from the filter. This trial highlights an important model discrepancy: the process model uses the commanded thrust, not the actual thrust, produced by the rotors. In cases of overwhelming winds like in Figure 4, the motors are saturated which causes a model discrepancy between the commanded and actual thrust, leading to estimation error.
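To make the indirect principle concrete, the toy computation below inverts a quadratic lateral drag model for airspeed and recovers the wind as the difference between ground velocity and air velocity. This is our own simplified illustration, not the paper's UKF; names, shapes, and the isotropic drag coefficient are assumptions.

```python
import numpy as np

def wind_from_drag(accel_xy, R, v_world, c_d, m):
    """Toy indirect wind estimate from lateral specific force.
    Assumes f = -c_d * |v_a| * v_a acting in the body x-y plane."""
    f = m * np.asarray(accel_xy)              # lateral drag force, body frame
    f_norm = np.linalg.norm(f)
    if f_norm < 1e-9:                         # zero airspeed: wind equals
        return v_world.copy()                 # the ground velocity
    speed = np.sqrt(f_norm / c_d)             # |v_a| from |f| = c_d |v_a|^2
    v_a_body = -f / (c_d * speed)             # drag opposes relative airflow
    v_air_world = R @ np.array([v_a_body[0], v_a_body[1], 0.0])
    return v_world - v_air_world              # local wind estimate w = v - v_air
```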
VI. SUMMARY
This work presents RotorPy, a UAV simulation environment that is designed to be accessible to engineers with working knowledge of Python. The simulator is packaged with a 6-DoF model of a quadrotor UAV with aerodynamics and motor dynamics, realistic sensors, obstacles, and wind models. In addition, we provide a tracking controller, multiple trajectory generation methods, and a wind estimation filter for convenience. For verification, we compare our simulator to real data collected from a Crazyflie performing aggressive trajectories that highlight the aerodynamic forces present in high-speed flight. We believe that RotorPy can be a useful tool both in and out of the classroom as a way to dig deep into concepts in estimation, planning, and control for UAVs in the presence of high winds. This is demonstrated in our case study, which uses RotorPy to evaluate a model-based wind estimator. Future developments include broader support for different UAV archetypes and incorporation of a fast fluid dynamics solver for native spatio-temporal wind field generation.

Fig. 4: A simulated instance of the unscented Kalman filter estimating the local wind velocity vector for a quadrotor subject to Dryden wind gusts.
Fig. 1: Free body diagram of a UAV subject to control and aerodynamic wrenches. Relative airflow through the fluid medium, $v_a$, produces additional wrenches in the form of aerodynamic drag on the frame, $D_p$, and rotors, $D_{r_i}$.

Fig. 2: A comparison between the simulated and actual measurements from the Crazyflie's IMU while tracking a 1.5 m circle at 2.5 m/s. (a) Linear accelerometer measurements in the body frame. (b) Angular velocity measurements in the body frame.

Fig. 3: Monte Carlo evaluation of the wind filter over 50 simulations; each instance has randomized mass, drag coefficients, and average wind magnitudes.

TABLE I: Randomized quadrotor parameters for the Monte Carlo evaluation of the wind filter. Symbols are consistent with Section II.

Parameter   Unit            Range (min-max)
m           kg              0.375-0.9375
c_Dx        N-(m/s)^-2      0-1(10^-3)
c_Dy        N-(m/s)^-2      0-1(10^-3)
c_Dz        N-(m/s)^-2      0-2(10^-2)
k_d         N-rad-m-s^-2    0-1.19(10^-3)
k_z         N-rad-m-s^-2    0-2.32(10^-3)

3 docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html
4 www.bitcraze.io/
[1] F. Furrer, M. Burri, M. Achtelik, and R. Siegwart, "RotorS-A modular Gazebo MAV simulator framework," in Robot Operating System (ROS): The Complete Reference (Volume 1). Cham: Springer International Publishing, 2016, pp. 595-625. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-26054-9_23
[2] P. Foehn, E. Kaufmann, A. Romero, R. Penicka, S. Sun, L. Bauersfeld, T. Laengle, G. Cioffi, Y. Song, A. Loquercio, and D. Scaramuzza, "Agilicious: Open-source and open-hardware agile quadrotor for vision-based flight," AAAS Science Robotics, 2022.
[3] S. Shah, D. Dey, C. Lovett, and A. Kapoor, "Airsim: High-fidelity visual and physical simulation for autonomous vehicles," in Field and Service Robotics: Results of the 11th International Conference. Springer, 2018, pp. 621-635.
[4] Y. Song, S. Naji, E. Kaufmann, A. Loquercio, and D. Scaramuzza, "Flightmare: A flexible quadrotor simulator," in Conference on Robot Learning, 2020.
[5] J. Panerati, H. Zheng, S. Zhou, J. Xu, A. Prorok, and A. P. Schoellig, "Learning to fly-a gym environment with pybullet physics for reinforcement learning of multi-agent quadcopter control," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 7512-7519.
[6] J. Svacha, J. Paulos, G. Loianno, and V. Kumar, "Imu-based inertia estimation for a quadrotor using newton-euler dynamics," IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 3861-3867, 2020.
[7] R. W. Allen, "Flapping characteristics of rigid rotor blades," Journal of the Aeronautical Sciences, vol. 13, no. 4, pp. 183-186, 1946. [Online]. Available: https://doi.org/10.2514/8.11343
[8] J. Sola, "Quaternion kinematics for the error-state kalman filter," arXiv preprint arXiv:1711.02508, 2017.
[9] P. Martin and E. Salaün, "The true role of accelerometer feedback in quadrotor control," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 1623-1629.
[10] P. Abichandani, D. Lobo, G. Ford, D. Bucci, and M. Kam, "Wind measurement and simulation techniques in multi-rotor small unmanned aerial vehicles," IEEE Access, vol. 8, pp. 54 910-54 927, 2020.
| [] |
[
"Fair Column Subset Selection",
"Fair Column Subset Selection"
] | [
"Antonis Matakos [email protected] ",
"Bruno Ordozgoiti [email protected] ",
"Suhas Thejaswi [email protected] ",
"\nAalto University Espoo\nFinland\n",
"\nMax Planck Institute for Software Systems\nQueen Mary University of London London\nKaiserslauternUK, Germany\n"
] | [
"Aalto University Espoo\nFinland",
"Max Planck Institute for Software Systems\nQueen Mary University of London London\nKaiserslauternUK, Germany"
] | [] | We consider the problem of fair column subset selection. In particular, we assume that two groups are present in the data, and the chosen column subset must provide a good approximation for both, relative to their respective best rank-approximations. We show that this fair setting introduces significant challenges: in order to extend known results, one cannot do better than the trivial solution of simply picking twice as many columns as the original methods. We adopt a known approach based on deterministic leverage-score sampling, and show that merely sampling a subset of appropriate size becomes NP-hard in the presence of two groups. Whereas finding a subset of two times the desired size is trivial, we provide an efficient algorithm that achieves the same guarantees with essentially 1.5 times that size. We validate our methods through an extensive set of experiments on real-world data. | null | [
"https://export.arxiv.org/pdf/2306.04489v1.pdf"
] | 259,095,734 | 2306.04489 | 89df01b3190a80c863ad64115034b4216f9c91ae |
Fair Column Subset Selection
Antonis Matakos [email protected]
Bruno Ordozgoiti [email protected]
Suhas Thejaswi [email protected]
Aalto University, Espoo, Finland
Max Planck Institute for Software Systems, Kaiserslautern, Germany
Queen Mary University of London, London, UK
Fair Column Subset Selection
CCS CONCEPTS: Computing methodologies → Feature selection; Mathematics of computing → Dimensionality reduction. KEYWORDS: Algorithmic Fairness, Column Subset Selection, Leverage Scores
We consider the problem of fair column subset selection. In particular, we assume that two groups are present in the data, and the chosen column subset must provide a good approximation for both, relative to their respective best rank-$k$ approximations. We show that this fair setting introduces significant challenges: in order to extend known results, one cannot do better than the trivial solution of simply picking twice as many columns as the original methods. We adopt a known approach based on deterministic leverage-score sampling, and show that merely sampling a subset of appropriate size becomes NP-hard in the presence of two groups. Whereas finding a subset of two times the desired size is trivial, we provide an efficient algorithm that achieves the same guarantees with essentially 1.5 times that size. We validate our methods through an extensive set of experiments on real-world data.
INTRODUCTION
Dimensionality reduction techniques such as principal component analysis (PCA) [31] and non-negative matrix factorization (NMF) [26] have proven to be useful for machine learning and data analysis. Among other uses, they are essential for analyzing high-dimensional data to uncover its underlying structure, reduce noise, and enhance computational efficiency. Consequently, these techniques are commonly employed both independently, for gaining insights into the data, and as integral components of larger machine learning pipelines.
In recent years, machine learning models have been deployed as part of decision-making processes that affect the lives of millions of people. Many of these models make use of dimensionality reduction in the form of feature selection and extraction. However, these processes have been found to amplify biases against certain social groups, especially towards minorities [24]. For example, a tool used by courts in the United States to make pretrial detention and release decisions was found to be biased against African-Americans [3]. Perhaps the most well-known dimensionality reduction technique, PCA, has been shown in many real-world datasets to incur higher average reconstruction error for a subset of the population than the rest [37,39]. Therefore, there has been increased emphasis on developing techniques that can produce low-dimensional representations which balance the representation power of different groups. Samadi et al. [37] and Tantipongpipat et al. [39] proposed fair variants of PCA, where the objective is to minimize the maximum reconstruction error for any group. However, a drawback of PCA is that the results are often hard to interpret, since its output consists of abstract variables, which might not necessarily be part of the input data. As an alternative to PCA, one may opt for algorithms that choose a (small) subset of the original variables of the dataset to act as a low-dimensional representation.
Column subset selection (CSS) is one such technique, where we are given a real matrix and the problem asks for a subset of its columns with good representation power. The quality of the solution is measured by the residual norm when the input matrix is projected onto the subspace spanned by the chosen columns. This problem has been extensively studied, and approximation results are known for different polynomial-time algorithms under a variety of quality criteria [1,5,7,13,14,30].
In this paper we study CSS in a fair setting, where two groups are present in the data and the reconstruction error must be small for both. More specifically, we assume that the rows of the input matrix $A$ can be partitioned into two subsets, forming submatrices $A_1$ and $A_2$. Traditional methods for CSS target the overall reconstruction error and may neglect one of the groups.

This setting introduces significant challenges to CSS, making it necessary to look at the problem under a new lens. As an example, consider the goal of achieving a relative-error approximation with respect to the best possible $k$-dimensional subspace in the Frobenius norm. That is, suppose the chosen column subset forms a matrix $C$, and $A_k$ is the best rank-$k$ approximation of $A$, so that $\|A - A_k\|_F^2$ is minimal over all possible rank-$k$ matrices of appropriate size. Relative-error approximation algorithms seek to minimize the ratio $\|A - CC^+A\|_F^2 / \|A - A_k\|_F^2$, where $C^+$ is the pseudoinverse of $C$.

While $A_k$ is easily obtained, for instance by computing a singular value decomposition (SVD) of $A$, the corresponding optimal $k$-column subset is NP-hard to find [38]. Thus, polynomial-time approximation algorithms are usually sought. A ratio of $O(k)$ is possible if we are allowed to choose at most $k$ columns [13], and better results can be obtained if we allow more than $k$ columns [6].
In the two-group fair setting, these approximation bounds are trivially reproduced if we simply allow twice as many columns in the solution; we can just optimize for both groups separately. However, in this work we are interested in a single set of representative columns for both groups. In fact, treating the two groups separately may result in ethical and legal considerations [27,37].
Unfortunately, as we show through a simple example, it may not be possible to do better than simply choosing twice as many columns. Can we hope to obtain guarantees of any kind in this fair setting? We answer this question in the affirmative. To achieve this, we will adopt an approach based on leverage score sampling.
Leverage scores are obtained from the SVD of a matrix, and can be used to find provably good subsets for CSS. In particular, Papailiopoulos et al. showed that by simply choosing columns so that their leverage scores add up to a certain threshold, a relative-error approximation is possible [30]. While the necessary number of columns cannot be bounded in general, it is possible to do so if the leverage scores follow power-law decay, a condition often seen in real-world data [30]. In standard CSS, finding an appropriate column subset is trivial once the leverage scores are known, as it simply involves picking the largest elements of a sorted list. In the fair setting, however, we show that finding an appropriate subset of minimal size is NP-hard. While the original result is again trivially extended to the fair setting by doubling the number of columns employed, we present an efficient algorithm that achieves this with essentially 1.5 times as many columns. Whether this factor can be improved is left as an open question. Finally, we introduce efficient heuristics for the problem based on QR factorizations with column pivoting, and assess their performance in our empirical evaluation.
In various experiments on real-world datasets, we show that our approach produces column subsets with balanced results for the two groups, where conventional methods can neglect one of them.
Our contributions are summarized as follows:
• We introduce the novel problem of fair column subset selection, where two groups are present in the data and the largest reconstruction error of the two must be minimized. • We extend the approach of deterministic leverage score sampling to the two-group fair setting. We show that the smallest column subset that achieves the desired guarantees is NPhard to find, and provide a polynomial-time algorithm which achieves relative-error guarantees with a column subset of essentially 1.5 times the minimum possible size. • We introduce efficient heuristics for the problem based on QR factorizations with column pivoting. • We empirically evaluate our algorithms on real-world datasets and show that our algorithms are able to select fair columns with high accuracy. • We analyse the price of fairness of our formulation.
RELATED WORK
Our work builds upon related work in the areas of algorithmic fairness and column subset selection.
Algorithmic fairness. Dwork et al.'s influential work [16] established a formal notion of fairness in algorithmic decision-making, which has served as a foundation for subsequent research in the field. Pedreschi et al. addressed discrimination in data mining problems [32], while Kamiran and Calders proposed a framework for mitigating discrimination in classification tasks [22, 23]. Since then, there has been extensive research on algorithmic fairness from various disciplines, including economics, game theory, statistics, ethics, and computer science [8, 34-36]. Fairness objectives have been introduced into many classical computer science problems [2, 11, 17, 19, 37, 41, 42]. Bellamy et al. [4] developed a comprehensive toolkit for assessing, analyzing, and improving fairness in machine learning models. We also refer interested readers to reviews by Pessach and Shmueli [33], Mitchell et al. [28], and Kleinberg et al. [25].
Column Subset Selection. A wide range of machine learning applications require identifying a subset of the data that is the most relevant to the task at hand. Examples include selecting a subset of labelled/unlabelled data points, features or parameters for improving the performance of a machine learning model, or for selecting a subset of columns that aims to determine the most influential features/columns of a dataset (feature selection).
Early work in CSS traces back to the numerical linear algebra community and the seminal work of Golub [20] on pivoted QR factorizations, which was followed by works addressing the problem of finding efficient rank-revealing QR factorizations (RRQR) [9, 10, 21]. The problem has attracted interest more recently in the computer science community, with approaches that combine sampling with RRQR factorizations to achieve improved outcomes [7, 30], while greedy methods have also demonstrated effectiveness [1].
Fair PCA. Closest to our work is fair principal component analysis (FairPCA) by Samadi et al. [37], where the goal is to find a low-dimensional projection of the data that optimizes a min-max objective ensuring fairness with respect to different groups. Tantipongpipat et al. [39] extended FairPCA to a multi-objective setting. Furthermore, Olfat and Aswani [29] presented a polynomial-time algorithm using convex programming for the general instance with more groups. Although we also employ a min-max objective, in contrast to FairPCA, we want to find a subset of actual columns that approximates the reconstruction error of the (two) different groups. We show that finding a fair subset of columns is a significantly harder problem than finding a fair projection. For instance, in the case of FairPCA, the problem can be solved optimally in polynomial time for two groups [39]. In contrast, CSS is NP-hard [38], and the hardness result carries over to FairCSS: when the two groups are identical, solving FairCSS is equivalent to solving CSS.
PROBLEM STATEMENT
In this section we describe the relevant terminology and formally define CSS before introducing FairCSS. Throughout this paper, for a positive integer $n$ we denote $[n] = \{1, 2, \ldots, n\}$.

Let $A \in \mathbb{R}^{m \times n}$ be a matrix with $m$ rows and $n$ columns. Let $C \in \mathbb{R}^{m \times k}$ be a matrix comprised of $k$ columns of $A$. By $P_C = CC^+$ we denote the projection operator onto the $k$-dimensional subspace spanned by the columns of $C$. Problem 1 (Column Subset Selection (CSS)). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a positive integer $k$, the goal is to choose $k$ columns of $A$ to form a matrix $C \in \mathbb{R}^{m \times k}$ such that the reconstruction error $\|A - P_C A\|_F$ is minimized, where $\|\cdot\|_F$ denotes the Frobenius norm.
Fairness
The solution to CSS provides the best possible column approximation to the matrix overall. However, if various groups are present in the data, it may be the case that the reconstruction error for each of them is overly disparate, especially if one of these groups is a minority.
We assume that the matrix rows are partitioned into two groups, forming submatrices $A_1$ and $A_2$. Our goal is to choose a subset of columns which achieves a good reconstruction of both groups.

Various considerations are necessary. First, even if we choose a common column subset for both groups, the optimal reconstruction error for each of them is attained by a different projection operator, namely $C_1 C_1^+$ and $C_2 C_2^+$, where $C_1$ and $C_2$ denote the rows of $C$ belonging to each group. Second, it might be the case that one of the groups can simply attain much better reconstruction error than the other. We would not like to severely penalize the reconstruction of $A_1$, say, simply to obtain a slight improvement in the reconstruction of $A_2$. To prevent this, we will measure the error in relation to the best possible rank-$k$ approximation for each of the groups.

Definition 1 (Normalized group-wise reconstruction error). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a row subset $G$, such that the rank of $A_G$ is larger than $k$, and a matrix $C \in \mathbb{R}^{m \times k}$ formed by choosing $k$ columns of $A$, the normalized reconstruction error of $A_G$ is defined as:

$$E_k(A_G, C) = \frac{\|A_G - C_G C_G^+ A_G\|_F}{\|A_G - (A_G)_k\|_F},$$

where $C_G$ denotes the rows of $C$ in $G$ and $(A_G)_k$ is the best rank-$k$ approximation of $A_G$.
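A direct NumPy computation of this quantity might look as follows; this is a sketch with our own names, and it assumes the rank condition of the definition holds so that the denominator is nonzero.

```python
import numpy as np

def group_error(A_g, C_g, k):
    """Normalized reconstruction error of Definition 1 for one group.
    A_g: rows of the group; C_g: the same rows of the chosen columns."""
    proj = C_g @ np.linalg.pinv(C_g) @ A_g          # projection onto span(C_g)
    num = np.linalg.norm(A_g - proj, "fro")
    s = np.linalg.svd(A_g, compute_uv=False)
    den = np.sqrt(np.sum(s[k:] ** 2))               # ||A_g - (A_g)_k||_F
    return num / den
```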
This enables us to define our problem formulation.
Problem 2 (FairCSS-MinMax). Given a matrix $A \in \mathbb{R}^{m \times n}$ with non-empty groups $A_1$ and $A_2$, and a positive integer $k$, the goal is to choose $k$ columns of $A$ to form a matrix $C \in \mathbb{R}^{m \times k}$ that optimizes

$$\min_{C \in \mathbb{R}^{m \times k},\ C \subset A} \ \max\left\{ \frac{\|A_1 - C_1 C_1^+ A_1\|_F}{\|A_1 - (A_1)_k\|_F},\ \frac{\|A_2 - C_2 C_2^+ A_2\|_F}{\|A_2 - (A_2)_k\|_F} \right\}.$$

In the following section we extend a well-known method from the CSS literature in order to pick a fair set of columns for a given matrix $A$ with groups $A_1$ and $A_2$, according to FairCSS-MinMax.
Limitations
Here we argue that it may not be possible to do better than solving for the two groups separately, as claimed in Section 1. In particular, any algorithm that attempts to solve CSS for more than one group, that is, to achieve errors smaller than $\|A_1\|_F^2$ and $\|A_2\|_F^2$ respectively, cannot achieve meaningful bounds on the relative error with respect to both $\|A_1 - (A_1)_k\|_F^2$ and $\|A_2 - (A_2)_k\|_F^2$. Consider the following example:

$$A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix},$$

where $A_1, A_2$ are matrices of rank $k + 1$.

If we can only pick fewer than $2k$ columns, then at least one group receives at most $k - 1$ of its own columns, and the normalized error is at least

$$\min\left\{ \frac{\sigma_k(A_1)}{\sigma_{k+1}(A_1)},\ \frac{\sigma_k(A_2)}{\sigma_{k+1}(A_2)} \right\},$$

which can be unbounded if the rank of either submatrix is numerically close to $k$, that is, if $\sigma_k(A_1) \gg \sigma_{k+1}(A_1)$ or $\sigma_k(A_2) \gg \sigma_{k+1}(A_2)$.

Clearly, the only way to prevent this is to pick $2k$ columns. The instance need not be this idiosyncratic, as a similar result is yielded by matrices where the blocks of zeroes are replaced by small numbers instead. Despite this limitation, in the following section we provide an algorithm with a bounded error relative to the best rank-$k$ approximation, by relaxing the requirement on the number of selected columns.
PAIRS OF LEVERAGE SCORES
A useful concept that is extensively studied in CSS literature is leverage scores, which is defined as follows:
Definition 2 (Leverage scores). Let $V_k \in \mathbb{R}^{n \times k}$ denote the top-$k$ right singular vectors of an $m \times n$ matrix $A$ with rank $\rho = \mathrm{rank}(A) \ge k$. Then, the rank-$k$ leverage score of the $i$-th column of $A$ is defined as:

$$\ell_i^{(k)}(A) = \left\| [V_k]_{i,:} \right\|_2^2 \quad \text{for all } i \in [n].$$

Here, $[V_k]_{i,:}$ denotes the $i$-th row of $V_k$.
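Computing these scores takes a single SVD; the sketch below (names are ours) does exactly that.

```python
import numpy as np

def leverage_scores(A, k):
    """Rank-k column leverage scores of A (Definition 2)."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k]                     # top-k right singular vectors, shape (k, n)
    return np.sum(Vk ** 2, axis=0)  # squared row norms of V_k, one per column
```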
Leverage scores are used to find a solution with approximation guarantees for CSS. In particular, we focus on the result by Papailiopoulos et al. [30]:
Theorem 1 (Papailiopoulos et al. [30]). Consider a matrix $A \in \mathbb{R}^{m \times n}$ and an integer $k < \mathrm{rank}(A)$. Define $\theta = k - \epsilon$ for some $\epsilon \in (0, 1)$. Let $S$ be a subset of column indices such that $\sum_{i \in S} \ell_i^{(k)}(A) \ge \theta$, and let $C \in \mathbb{R}^{m \times |S|}$ be the submatrix of $A$ formed by keeping the columns whose indices are in $S$. Then we have that

$$\|A - CC^+ A\|_F^2 \le (1 - \epsilon)^{-1} \|A - A_k\|_F^2.$$

That is, if we pick a subset of columns whose leverage scores add up to a certain threshold, we obtain relative-error guarantees with respect to the best rank-$k$ approximation. Papailiopoulos et al. proposed a deterministic algorithm that picks the columns with the largest leverage scores, until the threshold is satisfied. The running time is dominated by the computation of the SVD, i.e., $O(mn \min\{m, n\})$.

Naturally, it is possible that we may need to pick more than $k$ columns to satisfy the threshold imposed by $\theta$. This depends on the distribution of the leverage scores, and in the worst case we might require $\Omega(n)$ columns. Nevertheless, Papailiopoulos et al. showed that when the leverage scores follow power-law decay, a small multiple of $k$ suffices to satisfy the threshold [30].
Fair deterministic leverage-score sampling
Of course, the algorithm of Papailiopoulos et al. discussed above cannot be trivially extended for the two-group setting, as it may neglect one of the groups. We extend the method of deterministic leverage-score sampling to satisfy our aspirations of fairness.
In order to achieve guarantees for both groups, we need to ensure that their respective leverage scores add up to the desired threshold. While this is trivially achieved in the single-group setting by simply picking the columns with the largest leverage scores until the threshold is attained, it is significantly harder in the presence of two groups. We formally define and analyse the complexity of the problem next.
Problem 3 (MinFairnessScores). Given two matrices $A_1 \in \mathbb{R}^{m_1 \times n}$ and $A_2 \in \mathbb{R}^{m_2 \times n}$, an integer $k \in \mathbb{N}$ with $0 < k < \mathrm{rank}(A_1), \mathrm{rank}(A_2)$, and a threshold $\theta = k - \epsilon$, for some $\epsilon \in (0, 1)$, find the smallest set of indices $S \subseteq [n]$ such that:

$$\sum_{i \in S} \ell_i^{(k)}(A_1) \ge \theta \quad \text{and} \quad \sum_{i \in S} \ell_i^{(k)}(A_2) \ge \theta.$$

If we successfully find a subset of columns such that both of the above inequalities are satisfied, then due to Theorem 1 we have that

$$\|A_1 - C_1 C_1^+ A_1\|_F \le (1 - \epsilon)^{-1/2} \|A_1 - (A_1)_k\|_F \implies E_k(A_1, C) \le (1 - \epsilon)^{-1/2},$$
$$\|A_2 - C_2 C_2^+ A_2\|_F \le (1 - \epsilon)^{-1/2} \|A_2 - (A_2)_k\|_F \implies E_k(A_2, C) \le (1 - \epsilon)^{-1/2}.$$
A solution to Problem 3 gives us an upper bound on the reconstruction error of both groups. Unfortunately, the two-dimensional variant of the problem is harder than its one-dimensional counterpart. In fact, we show that Problem 3 is NP-complete.
Theorem 2. MinFairnessScores is NP-complete.
Proof. To establish hardness we reduce the equal cardinality partition problem (EqCardPartition) to a decision version of MinFairnessScores, called $\kappa$-FairnessScores. The decision version asks to find exactly $\kappa$ indices, i.e., $S \subseteq [n]$, $|S| = \kappa$, such that $\sum_{i \in S} \ell_i^{(k)}(A_1) \ge \theta$ and $\sum_{i \in S} \ell_i^{(k)}(A_2) \ge \theta$. Given a set $X = \{x_1, \ldots, x_n\}$ of positive integers, EqCardPartition asks to partition $X$ into two disjoint subsets $X_1, X_2$ such that $X_1 \cup X_2 = X$, $|X_1| = |X_2|$ and $\sum_{x \in X_1} x = \sum_{x \in X_2} x$. EqCardPartition is known to be NP-complete [18, SP12].

Given an instance $(X, n)$ of EqCardPartition, we reduce it to a $\kappa$-FairnessScores instance $(A_1, A_2, \kappa, \theta)$ as follows. Let $T = \sum_{x \in X} x$ and let $Q \gg T$ be some constant. Let

$$A_1 = \left[\sqrt{x_1}, \ldots, \sqrt{x_n}\right] \quad \text{and} \quad A_2 = \left[\sqrt{Q - x_1}, \ldots, \sqrt{Q - x_n}\right]$$

be input matrices such that $A_1 \in \mathbb{R}^{1 \times n}$, $A_2 \in \mathbb{R}^{1 \times n}$. Finally, we set $\theta = 1/2$ and $\kappa = n/2$. We claim that EqCardPartition is a yes instance if and only if $\kappa$-FairnessScores is a yes instance. The reduction is polynomial in the input size.

From the singular value decomposition (SVD), we get

$$\left(\ell_i^{(1)}(A_1),\ \ell_i^{(1)}(A_2)\right) = \left(\frac{(A_1)_i^2}{\|A_1\|_2^2},\ \frac{(A_2)_i^2}{\|A_2\|_2^2}\right) = \left(\frac{x_i}{T},\ \frac{Q - x_i}{nQ - T}\right) \quad \text{for all } i \in [n].$$

For ease of notation we denote $a_i = \ell_i^{(1)}(A_1)$ and $b_i = \ell_i^{(1)}(A_2)$.

Let us assume that $\kappa$-FairnessScores is a yes instance. Then we have a subset $S \subseteq [n]$ such that $|S| = n/2$,

$$\sum_{i \in S} a_i = \sum_{i \in S} \frac{x_i}{T} \ge \frac{1}{2}, \ \text{which implies} \ \sum_{i \in S} x_i \ge \frac{T}{2}, \quad (1)$$

$$\sum_{i \in S} b_i = \frac{\sum_{i \in S} (Q - x_i)}{nQ - T} \ge \frac{1}{2}, \ \text{which implies} \ \sum_{i \in S} x_i \le \frac{T}{2}. \quad (2)$$

For Eq. (1) and (2) to hold simultaneously, we must have $\sum_{i \in S} x_i = T/2$. Thus, $X_1 = \{x_i : i \in S\}$, $X_2 = \{x_i : i \in [n] \setminus S\}$ is a solution to EqCardPartition.

For the sake of contradiction, assume that $\kappa$-FairnessScores is a no instance and $X_1, X_2$ are a solution to EqCardPartition. Let $S$ be the set of indices of elements in $X_1$. We can choose $S$ as a solution for $\kappa$-FairnessScores since $|S| = n/2$,

$$\sum_{i \in S} a_i = \sum_{i \in S} \frac{x_i}{T} = \frac{1}{2}, \quad \text{and} \quad \sum_{i \in S} b_i = \frac{\sum_{i \in S} (Q - x_i)}{nQ - T} = \frac{nQ/2 - T/2}{nQ - T} = \frac{1}{2},$$

which is a contradiction. Thus, EqCardPartition must be a no instance. Given a solution $S \subseteq [n]$ for $\kappa$-FairnessScores, we can verify in polynomial time whether $|S| = \kappa$, compute the SVD, as well as check whether $\sum_{i \in S} a_i \ge \theta$ and $\sum_{i \in S} b_i \ge \theta$. Thus, $\kappa$-FairnessScores is NP-complete. Naturally, $\kappa$-FairnessScores reduces to MinFairnessScores, as the latter finds the smallest $\kappa$ such that a solution to $\kappa$-FairnessScores exists. Thus, MinFairnessScores is NP-complete, which concludes the proof. □

Theorem 2 highlights that MinFairnessScores is NP-complete even if the matrices $A_1$, $A_2$ have exactly one row. A similar reduction from $k$-SubsetSum establishes that MinFairnessScores is NP-hard even if the thresholds are different for the two groups, i.e., $\sum_{i \in S} \ell_i^{(k)}(A_1) \ge \theta_1$, $\sum_{i \in S} \ell_i^{(k)}(A_2) \ge \theta_2$ with $\theta_1 \ne \theta_2$. The proof is omitted due to space constraints.
Even though we establish the challenging nature of MinFairnessScores, it is trivial to obtain a constant-factor approximation as follows: sort the $\ell_i^{(k)}(A_1)$'s in decreasing order and choose the indices with the highest leverage scores until they sum to $\theta$. Then, repeat the process for the $\ell_i^{(k)}(A_2)$'s, which results in at most $2p$ columns, where $p$ is the optimal number of columns needed to satisfy the threshold $\theta$. Next, we present an algorithm that finds a solution to MinFairnessScores with at most $\lceil 3p/2 \rceil + 1$ columns, which is a $\approx 1.5$-factor approximation. The pseudocode is available in Algorithm 1. For ease of notation, we denote $a_i = \ell_i^{(k)}(A_1)$ and $b_i = \ell_i^{(k)}(A_2)$. At each iteration we add to $S$ the index $i$ such that the cumulative gain $a_i + b_i$ is maximized, until a step $t \le p$ where at least one of the inequalities is satisfied (see Lines 2-4), i.e., either $\sum_{i \in S} a_i \ge \theta$ or $\sum_{i \in S} b_i \ge \theta$. In the second step, we sort the tuples of leverage scores in $[n] \setminus S$ based on their contribution to the unsatisfied inequality only, in descending order. Finally, we pick the rest of the tuples based on this order, until the threshold is satisfied (Lines 5-8). The following theorem (Theorem 3, stated alongside the pseudocode) establishes our approximation result. We note that the approximation is in terms of the number of columns in the optimal solution, as we have already established a $(1 - \epsilon)^{-1/2}$ approximation in terms of our objective function.

Proof. Given $P = \{(a_1, b_1), \ldots, (a_n, b_n)\}$, the task is to find the smallest subset $S \subseteq P$ such that $\sum_{i \in S} a_i \ge \theta$ and $\sum_{i \in S} b_i \ge \theta$. At each iteration we select a tuple with the maximum contribution, i.e., $a_i + b_i$, until some step $t$ where either $\sum_{i \in S} a_i \ge \theta$ or $\sum_{i \in S} b_i \ge \theta$ is satisfied. Without loss of generality, at step $t$ we assume that $\sum_{i \in S} a_i \ge \theta$. Let $S^*$ be the optimal solution and $(a_i^*, b_i^*)$ denote the contribution of the $i$-th tuple in $S^*$. We assume $S^*$ is sorted in decreasing order according to $a_i^* + b_i^*$. We note that this assumption makes our analysis easier without losing generality.
First we establish that $t \le p$. As discussed earlier, we assume that $\sum_{i=1}^{t} a_i \ge \theta$. If $\sum_{i=1}^{t} b_i \ge \theta$ as well, then by establishing that $t \le p$ we are done. Otherwise, the algorithm has not terminated, which implies that $\sum_{i=1}^{t} b_i < \theta$.

From the optimality of $a_i + b_i$ in each step $i \le p$ it holds that

$$\sum_{i=1}^{p} a_i + \sum_{i=1}^{p} b_i \ge \sum_{i=1}^{p} a_i^* + \sum_{i=1}^{p} b_i^*,$$

so that

$$\sum_{i=1}^{p} a_i \ge \sum_{i=1}^{p} a_i^* + \sum_{i=1}^{p} b_i^* - \sum_{i=1}^{p} b_i \ge 2\theta - \sum_{i=1}^{p} b_i \ge \theta,$$

whenever $\sum_{i=1}^{p} b_i \le \theta$ (otherwise the $b$-threshold is already reached); thus $1 \le t \le p$. We now discern two cases for the value of $t$.

Case $t \le \lceil p/2 \rceil + 1$: We have $\sum_{i=1}^{t} a_i \ge \theta$. To satisfy the second inequality we choose tuples in $[n] \setminus S$ in decreasing order according to $b_i$, which adds at most $p$ columns, since the optimal solution has $p$ columns. So, we have a solution with size $|S| \le t + p \le \lceil 3p/2 \rceil + 1$.

Case $\lceil p/2 \rceil + 1 < t \le p$: Again, from the optimality of $a_i + b_i$ at each step $1 \le i \le t - 1$ we have

$$\sum_{i=1}^{t-1} a_i + \sum_{i=1}^{t-1} b_i \ge \sum_{i=1}^{t-1} (a_i^* + b_i^*) = \sum_{i=1}^{\lceil p/2 \rceil} (a_i^* + b_i^*) + \sum_{i=\lceil p/2 \rceil + 1}^{t-1} (a_i^* + b_i^*) \ge \theta + \sum_{i=\lceil p/2 \rceil + 1}^{t-1} (a_i^* + b_i^*).$$

The third step follows from the assumption that the optimal solution $S^*$ has tuples sorted in decreasing order according to $a_i^* + b_i^*$: since $\sum_{i=1}^{p} (a_i^* + b_i^*) \ge 2\theta$, the top $\lceil p/2 \rceil$ pair sums account for at least $\theta$. We also observe that $\sum_{i=1}^{t-1} a_i \le \theta$. Therefore we have

$$\sum_{i=1}^{t-1} b_i \ge \sum_{i=\lceil p/2 \rceil + 1}^{t-1} a_i^* + \sum_{i=\lceil p/2 \rceil + 1}^{t-1} b_i^*.$$

Suppose $t = \lceil p/2 \rceil + 1 + j$. This means we can afford to add at most $p - j$ columns to our solution if we want to satisfy our bound on the cardinality of $S$.

At this stage, the algorithm will pick the columns with the largest values of $b_i$. This means that, from those in the optimal solution, we miss at most $j$ of the $b_i^*$'s, from among the bottom ones.

From above, we have

$$\sum_{i=1}^{t-1} b_i \ge \sum_{i=\lceil p/2 \rceil + 1}^{t-1} (a_i^* + b_i^*) \ge \sum_{i=p-j+1}^{p} b_i^*.$$

This holds because the tuples of the optimal solution are sorted in decreasing order of the value of the pair sums. This means that the amount we miss from not adding the last $j$ of the $b_i^*$'s is already covered by what we had, $\sum_{i=1}^{t-1} b_i$, and so

$$\sum_{i \in S} b_i \ge \sum_{i=1}^{t-1} b_i + \sum_{i=1}^{p-j} b_i^* \ge \sum_{i=1}^{p} b_i^* \ge \theta.$$

So the solution has at most $t - 1 + p - j = \lceil p/2 \rceil + j + p - j \le \lceil 3p/2 \rceil + 1$ columns and satisfies the threshold. □

Even though Algorithm 1 offers a guarantee in terms of the number of chosen columns, recall that in FairCSS-MinMax we ask for exactly $k$ columns. Depending on the task at hand, it may be undesirable to obtain more than $k$ columns. Thus, in the following section we extend two well-known algorithms from the CSS literature that allow us to obtain exactly $k$ columns. Recall the impossibility result from Section 3.2: unless we pick $2k$ columns, it may be impossible to achieve a relative-error approximation in terms of the rank-$k$ reconstruction error. Thus, the following algorithms are heuristics. They can be used either directly for FairCSS-MinMax or as part of a two-stage approach that we describe in detail in Section 6.
FAIR QR DECOMPOSITIONS
Usually, the challenge is to find a permutation strategy to identify the numerical rank of a matrix, that is, the number of singular values that are far from zero. This is achieved by finding a permutation matrix Π and a QR decomposition where ∥ 22 ∥ is bounded from above. Decompositions that fulfill this promise are known as rank revealing QR decompositions (RRQR), and they provide approximation guarantees for CSS. In particular, let Π denote the first columns of a permutation matrix Π. Then for = Π , it can be easily shown that ∥ − ∥ = ∥ 22 ∥ [7]. Here we propose an adaptation of two classic QR algorithms, the first one proposed by Chan [10] (often abbreviated as High-RRQR) and the second one by Chan and Hansen [9] (called Low-RRQR). The algorithms are iterative: At each step, High-RRQR chooses a column to be moved to the back, so as to achieve a small ∥ 22 ∥, whereas Low-RRQR chooses a column to move to the front to obtain a large ∥ 11 ∥. Here we propose alternative pivoting strategies to take fairness into account. For details, please refer to [9,10].
Fair pivoting
We consider the following framework, as in [9, 10]: at step $i$ of the RRQR, we compute appropriate matrices $R^{A_1}(i)$ and $R^{A_2}(i)$ obtained by performing RRQR on each group for $i$ iterations. Our pivoting strategy now comprises two steps. In the first step, we pick a right singular vector through inspection of the spectra of both groups. In the second step, we pivot the selected column in the same way as in standard RRQR.

Fair H-RRQR. At step $i$, we select the right singular vector $v$ of either $A_1$ or $A_2$ corresponding to $\min\{\sigma_{\min}(R_{11}^{A_1}(i)),\ \sigma_{\min}(R_{11}^{A_2}(i))\}$. We then select the column with the highest coefficient in said singular vector and permute it to the back. In other words, we construct a permutation $\Pi$ such that $|(\Pi v)_n| = \|v\|_\infty$. The final permutation is obtained by concatenating these $\Pi$.

Fair L-RRQR. Similarly, at the $i$-th step of L-RRQR we select the right singular vector $v$ corresponding to $\max\{\sigma_1(R_{22}^{A_1}(i)),\ \sigma_1(R_{22}^{A_2}(i))\}$, and then we select $\Pi$ such that $|(\Pi v)_1| = \|v\|_\infty$.
For the full algorithms' pseudocode we refer to the Appendix.
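To convey the flavor of the fair pivoting step, here is a simplified Python sketch of the Low-RRQR idea. For clarity it forms residuals by explicit projection rather than by updating QR factorizations, so it is an illustration of the pivoting rule, not the actual algorithm implementation.

```python
import numpy as np

def fair_low_pivot(A1, A2, k):
    """Pick k columns: at each step, inspect the residuals of both groups,
    take the top right singular vector of the residual with the larger
    leading singular value, and choose the column with its largest
    coefficient (the column that would be moved to the front)."""
    R1 = A1.astype(float).copy()
    R2 = A2.astype(float).copy()
    chosen = []
    for _ in range(k):
        s1 = np.linalg.svd(R1, compute_uv=False)[0]
        s2 = np.linalg.svd(R2, compute_uv=False)[0]
        M = R1 if s1 >= s2 else R2
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        v = np.abs(Vt[0])
        v[chosen] = -np.inf                    # skip already-chosen columns
        j = int(np.argmax(v))
        chosen.append(j)
        for R in (R1, R2):                     # project column j out of both
            c = R[:, [j]]
            denom = float(c.T @ c)
            if denom > 1e-12:
                R -= c @ (c.T @ R) / denom
    return chosen
```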
EXPERIMENTS
This section describes the experimental setup and the datasets utilized, and presents the results of the experimental evaluation. Experimental setup. Our implementation is written in Python. We use numpy, scipy and scikit-learn for preprocessing, linear algebra operations, as well as parallelization. Experiments are executed on a compute node with 32 cores and 256 GB of RAM. The source code is available in an anonymized repository: https://anonymous.4open.science/r/fair-css-C2C8. Datasets. We use juvenile recidivism data (recidivism) from Catalunya [40] and medical expenditure survey data 2015 (meps) [12], as well as various datasets from the UCI machine learning repository [15]: "heartcleveland" (heart), "adult" (adult), "german-credit" (german), "creditcard" (credit), "student performance" (student), "compas-recidivism" (compas), "communities" (communities). Data is processed by removing protected attributes, converting categorical variables to one-hot encoding, and normalizing each column to unit $\ell_2$-norm. Group membership is based on Sex, except for the communities dataset, where groups are based on either a majority white or a non-white community. Dataset statistics are reported in Table 1. Experimental evaluation. Our experiments are threefold. First, we assess the efficacy of the proposed algorithms in addressing the problem of FairCSS-MinMax. This evaluation involves comparing the performance of the proposed algorithms, considering various experimental setups. Second, we evaluate the effectiveness of the FairCSS-MinMax objective in selecting column subsets that result in fair reconstruction errors. Specifically, we compare the reconstruction errors of each group in the optimal solutions obtained using the vanilla CSS objective versus the FairCSS-MinMax objective. Last, we explore the concept of the price of fairness. This entails investigating any potential trade-offs or costs associated with attaining fairness according to FairCSS-MinMax.
Algorithms Evaluation
Algorithms. We refer to Algorithm 1 as FairScoresSampler, and to the two fair RRQR algorithms as Low QR and High QR. The computational complexity of most of the algorithms is dominated by the computation of the SVD. Low QR and High QR compute an SVD in every step, $O(k)$ and $O(n - k)$ times respectively. FairScoresSampler, on the other hand, only computes an SVD once for each group, and then requires essentially $O(n \log n)$ time to sort the tuples.
We complement our set of algorithms with a straightforward greedy algorithm (called Greedy): at each step it picks the column with the highest direct gain according to MinMaxLoss. The complexity of Greedy is dominated by matrix multiplication. Finally, we also test the performance of picking a random set of columns, with 100 repetitions. We call this method Random.
Picking $c > k$ columns. Recall that FairScoresSampler chooses columns according to a predefined threshold $\theta$, and thus it may pick $c > k$ columns. In the first experiment, we evaluate how well the algorithms perform with respect to a specific low-rank subspace of each group, over different values of $c$. Thus, we evaluate the performance according to MinMaxLoss while keeping $\|A_1 - (A_1)_k\|_F$ and $\|A_2 - (A_2)_k\|_F$ fixed. We do this for the six largest datasets, choosing $k = 20$ in all cases except for meps, where $k = 50$. Figure 1 shows the results. The figure shows a limitation of Low QR and High QR: they can only sample as many columns as $\min\{\mathrm{rank}(A_1), \mathrm{rank}(A_2)\}$. However, Low QR seems to achieve good performance, comparable to Greedy, which is the best performing algorithm. In the largest dataset, meps, the higher costs of the Greedy and High QR algorithms did not allow them to terminate within a reasonable amount of time. We also observe that in that dataset FairScoresSampler performs worse than Random for $c < 300$. This means that the rank-50 leverage scores have not yet decayed enough for this value of $c$, and the algorithm picks many correlated columns. In the Appendix, we plot the rank-$k$ leverage scores of the datasets for different values of $k$.
Two-stage sampling. Recall that the definition of Problem 2 asks for exactly $k$ columns. In order to combine the efficiency of FairScoresSampler and the effectiveness of algorithms such as Greedy, we introduce and evaluate here a two-stage sampling approach which returns exactly $k$ columns. Similar ideas have been explored before in the CSS literature [6]. The approach is as follows: in the first stage, we run FairScoresSampler, which takes as input the desired threshold $\theta$ and the rank-$k$ leverage scores of $A_1$ and $A_2$. This will return $c > k$ columns. In the second stage, we run Low QR, High QR or Greedy on the subset of columns returned during the first stage, to obtain a final subset of size $k$. We refer to these methods as S-Low QR, S-High QR and S-Greedy. Table 2 shows the results for the seven largest datasets, and for various values of $k$. The threshold $\theta$ was set to $k - 1/2$ in all datasets, apart from meps, where it was set to $3k/4$ in order to reduce the number of sampled columns. Thus, in all datasets (apart from meps), the columns selected in the first stage guarantee a $\sqrt{2}$-approximation of the best rank-$k$ reconstruction error for both groups (Theorem 1 with $\epsilon = 1/2$). Column "c" in Table 2 indicates the number of columns returned from the sampling stage. For each experiment, we highlight in bold the best performing algorithm.
We observe that Greedy is the best performing algorithm; however, it does not manage to terminate everywhere within a reasonable amount of time. S-Greedy is much faster, due to the sampling stage, and comes close in performance, while S-Low QR is also competitive. On the other hand, we observe that S-High QR is the worst performing algorithm; however, we expect it to perform better for $k$ closer to $c$. Finally, we observe, as demonstrated in meps, that a smaller $k$ does not necessarily mean fewer columns sampled in the first stage. This is because the lower-rank leverage scores decay faster; thus, many more columns are needed to ensure they sum to $\theta$. For a visualization of the decay of the leverage scores, we refer to the Appendix.
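A minimal sketch of the two-stage pipeline follows, reusing the leverage_scores helper from Section 4 and fair_scores_sampler, a Python sketch of Algorithm 1 that accompanies its pseudocode in the appendix; reduce_fn stands in for any exact-size second-stage method (Greedy, Low QR, ...).

```python
def two_stage(A1, A2, k, reduce_fn, eps=0.5):
    """Stage 1: over-sample c >= k candidate columns with Algorithm 1.
    Stage 2: run an exact-size selector restricted to those candidates;
    reduce_fn(A1, A2, k) is assumed to return k local column indices."""
    theta = k - eps
    a = leverage_scores(A1, k)
    b = leverage_scores(A2, k)
    cand = sorted(fair_scores_sampler(list(zip(a, b)), theta))
    local = reduce_fn(A1[:, cand], A2[:, cand], k)
    return [cand[j] for j in local]            # map back to original indices
```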
Evaluation of the Fair CSS objective
We examine the imbalance in the reconstruction errors between the "vanilla" CSS objective of Problem 1 and the FairCSS-MinMax objective for groups $A_1$ and $A_2$. We compute the optimal solution for each objective for various $k$, through exhaustive enumeration. In the case of vanilla CSS, we use $\mathrm{opt}(A)$ to represent the reconstruction error of the matrix $A$, while $\mathrm{opt}(A_1)$ and $\mathrm{opt}(A_2)$ denote the separate reconstruction errors of groups $A_1$ and $A_2$, respectively, that correspond to the optimal solution of vanilla CSS. Similarly, for the FairCSS-MinMax objective, we denote by $\mathrm{minmax}(A)$ the objective value attained on the matrix $A$. Additionally, $\mathrm{fair}(A_1)$ and $\mathrm{fair}(A_2)$ denote the reconstruction errors of groups $A_1$ and $A_2$, respectively, that correspond to the optimal solution of FairCSS-MinMax. Lastly, $\mathrm{fair}(A)$ represents the reconstruction error of $A$ corresponding to the optimal solution of FairCSS-MinMax. We note that brute-force enumeration is computationally expensive, even after parallelization and extensive optimization, so we only report results for heart, student and german.
Fairness of solutions. In Figure 2, we compare $\mathrm{opt}(A_1)$, $\mathrm{opt}(A_2)$, which correspond to the optimal solution of vanilla CSS, and $\mathrm{fair}(A_1)$, $\mathrm{fair}(A_2)$, which correspond to the optimal solution of FairCSS-MinMax. In most instances, we observe that the reconstruction errors of the two groups are disproportionate in vanilla CSS. However, the degree of imbalance is not uniform across datasets. One source of imbalance could be vastly different group sizes. However, as can be seen in the case of student, the imbalance is not simply explained by the group sizes. We believe this further highlights the need for sophisticated approaches to fairness in CSS, beyond a mere normalization of groups.
Price of fairness. Next, we verify how the fairness objective influences the quality of the solution in terms of reconstruction error. In Figure 3, we report the optimal solutions of vanilla CSS and FairCSS-MinMax to assess the trade-off between fairness and reconstruction error. This analysis quantifies the extent to which we sacrifice reconstruction error to achieve our fairness objective. In most cases, we do not observe a significant difference in the reconstruction errors between vanilla CSS and FairCSS-MinMax, that is, between $\mathrm{opt}(A)$ and $\mathrm{fair}(A)$, even though the value of $\mathrm{minmax}(A)$ is significantly higher than $\mathrm{opt}(A)$. Note that the observations cannot be generalized across all datasets, and we cannot conclusively claim that there is no trade-off between fairness and reconstruction error. For instance, in the case of the heart dataset, we observe a significant difference between the values of $\mathrm{opt}(A)$ and $\mathrm{fair}(A)$ for $k = 5$.
CONCLUSION
We introduced a novel variant of CSS for a fair setting. In this setting, we assume that two groups are present in the data, and we seek a subset of columns that minimizes two group-specific reconstruction errors. By relaxing the requirement on the number of columns, we showed how we can use the theory of leverage scores to obtain an algorithm with quality guarantees. Additionally, we proposed two algorithms that return a preset number of columns, by adapting two classical algorithms from CSS. Through an extensive experimental evaluation, we compared how our proposed algorithms fare on real-world data with two groups present, and showed how using our proposed fair objective results in increased fairness. Our work also leaves some open questions. It may be desirable, as we briefly mention in Section 4, to aim for a different quality of approximation for the two groups, by setting two different thresholds in Problem 3. One such case is when the reconstruction error of one of the two groups suffers too much from aiming to achieve an equal approximation factor for both groups. It may also be worth exploring whether an algorithm for Problem 3 with a better approximation guarantee is possible. Finally, a natural question to ask is whether the proposed framework can be extended to more than two groups.
APPENDIX A: DECAY OF LEVERAGE SCORES
We plot the leverage scores of the two groups for the experiments in Table 2 (Figure 4). The leverage scores are sorted separately for the two groups and plotted in decreasing order of their value.
Algorithm 1: FairScoresSampler. Input: the columns together with their pair of group leverage scores, and the target threshold; Output: a subset of columns. Starting from the empty set, the column with the largest remaining leverage score is added repeatedly, while either group's collected scores remain below the threshold. [Pseudocode garbled in extraction.]

Theorem 3. Algorithm 1 returns a solution with at most ⌈3ℓ/2⌉ + 1 columns for MinFairnessScores, where ℓ is the number of columns in the optimal solution.¹

A family of useful algorithms for CSS comes from the numerical linear algebra literature, and is based on QR decompositions with column pivoting.

Definition 3 (QR decomposition with column pivoting). Given a matrix with at least as many rows as columns, and a target number of columns not exceeding the number of columns, assume QR decompositions of the form: [equation garbled in extraction].

Figure 1: MinMaxLoss for different values of the sampling threshold and fixed target rank.
Figure 2: Comparison of reconstruction error of CSS and FairCSS for the two groups.
Figure 3: Price of fairness.
Figure 4: Leverage scores of the two groups for the experiments in Table 2.
Table 1: Dataset statistics. The instance counts and ranks are reported separately for the two groups; "Columns" is the number of columns.

Dataset       Columns   Instances (group 1)   Instances (group 2)   Rank (group 1)   Rank (group 2)
heart              14                   201                    96               13               13
german             63                   690                   310               49               47
credit             25                18 112                11 888               24               24
student            58                   383                   266               42               42
adult             109                21 790                10 771               98               98
compas            189                 9 336                 2 421              167               73
communities       104                 1 685                   309              101              101
recidivism        227                 1 923                   310              175              113
meps            1 247                18 414                17 013            1 217            1 200
Table 2: Performance comparison. "c" is the number of columns returned by the first (sampling) stage and k the size of the final subset; entries are the MinMax loss ("-" marks runs that did not terminate).

Dataset        c    k   S-Low QR  S-High QR  S-Greedy   Low QR    Greedy    Random
communities   89   10   1.27323   1.50763    1.17121    1.25939   1.16976   1.47485
              94   23   1.32462   1.54204    1.2717     1.41701   1.23969   1.65518
              98   51   1.40469   1.57761    1.42658    1.40934   1.39823   2.95985
compas       118   10   1.04747   1.2738     1.03356    1.05041   1.03057   1.19477
             165   19   1.08617   1.34321    1.05688    1.08617   1.0537    1.27599
             177   37   1.37174   1.41811    1.14835    1.47138   1.1291    1.67851
adult         70   10   1.02345   1.09485    1.02111    1.02345   1.01768   1.05641
              96   22   1.03347   1.12764    1.0374     1.03347   -         1.0589
             103   49   1.07796   1.19301    1.40252    1.08317   -         1.0994
german        53   10   1.08088   1.30176    1.08488    1.07711   1.07349   1.14205
              54   15   1.1439    1.34599    1.11798    1.11871   1.11088   1.1966
              54   24   1.20605   1.38489    1.192      1.20246   1.18624   1.36138
recidivism   134   10   1.02485   1.17864    1.02313    1.02236   1.01483   1.12757
             174   24   1.05569   1.29805    1.04054    1.05567   1.03202   1.22332
             212   57   1.31688   1.52031    1.16871    1.27495   1.13311   1.59933
student       45   10   1.10833   1.42094    1.10559    1.11333   1.10597   1.17856
              46   14   1.14467   1.39265    1.14375    1.15592   1.14361   1.26605
              47   21   1.18932   1.5756     1.17832    1.209     1.18771   1.56265
meps         428   10   1.14642   1.83209    -          1.05601   -         1.17759
             382   32   1.20093   1.85843    1.777      1.11665   -         1.26749
             338  100   1.33916   1.70488    2.48787    -         -         1.47403
Algorithm 3: Fair L-RRQR. Input: pivoted QR factorizations of the two group matrices; Output: a permutation Π. For i = 1, ..., k: form the trailing blocks R22 of both factorizations, evaluate the pivoting criterion on each group's block and take the larger of the two, compute the permutation that moves the selected column to the pivot position, apply it to both factorizations, and update the QR factors of the deflated trailing blocks. [Pseudocode garbled in extraction; a simplified sketch is given below.]
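Since the pseudocode above did not survive extraction, here is a heavily simplified, hypothetical rendering of the group-aware pivoting idea in Python/NumPy, with plain residual-norm pivoting standing in for the singular-vector criterion of the original algorithm:

```python
import numpy as np

def fair_pivoted_qr(A, B, k):
    # At each step, pick the column whose larger residual norm across the two
    # groups is maximal, then deflate both residual matrices against that pivot.
    Ra, Rb = A.astype(float).copy(), B.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.maximum(np.linalg.norm(Ra, axis=0), np.linalg.norm(Rb, axis=0))
        norms[selected] = -np.inf          # never re-pick a column
        j = int(np.argmax(norms))
        selected.append(j)
        for R in (Ra, Rb):                 # rank-1 deflation in both groups
            q = R[:, j]
            nq = np.linalg.norm(q)
            if nq > 1e-12:
                q = q / nq
                R -= np.outer(q, q @ R)
    return selected
```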
¹ An additional column (the +1) is required in case ℓ is odd.
REFERENCES
[1] Jason Altschuler, Aditya Bhaskara, Gang Fu, Vahab Mirrokni, Afshin Rostamizadeh, and Morteza Zadimoghaddam. 2016. Greedy column subset selection: New bounds and distributed algorithms. In ICML. PMLR, 2539-2548.
[2] Aris Anagnostopoulos, Luca Becchetti, Adriano Fazzone, Cristina Menghini, and Chris Schwiegelshohn. 2020. Spectral relaxations and fair densest subgraphs. In CIKM. ACM, 35-44.
[3] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed: 02-05-2023.
[4] Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, and Yunfeng Zhang. 2018. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias.
[5] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. 2014. Near-optimal column-based matrix reconstruction. SIAM J. Comput. 43, 2 (2014), 687-717.
[6] Christos Boutsidis, Michael W. Mahoney, and Petros Drineas. 2008. Unsupervised feature selection for principal components analysis. In KDD. 61-69.
[7] Christos Boutsidis, Michael W. Mahoney, and Petros Drineas. 2009. An Improved Approximation Algorithm for the Column Subset Selection Problem. In SODA '09. SIAM, 968-977.
[8] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In FAccT. PMLR, 77-91.
[9] Tony F. Chan and Per Christian Hansen. 1994. Low-rank revealing QR factorizations. Numerical Linear Algebra with Applications 1, 1 (1994), 33-44.
[10] Tony F. Chan. 1987. Rank revealing QR factorizations. Linear Algebra and its Applications 88-89 (1987), 67-82.
[11] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. 2017. Fair Clustering Through Fairlets. In NeurIPS. Curran Associates, Inc., 5029-5037.
[12] Joel W. Cohen, Steven B. Cohen, and Jessica S. Banthin. 2009. The medical expenditure panel survey: a national information resource to support healthcare cost research and inform policy and practice. Medical Care (2009), S44-S50.
[13] Amit Deshpande and Luis Rademacher. 2010. Efficient volume sampling for row/column subset selection. In FOCS. IEEE, 329-338.
[14] Amit Deshpande, Luis Rademacher, Santosh S. Vempala, and Grant Wang. 2006. Matrix approximation and projective clustering via volume sampling. Theory of Computing 2, 1 (2006), 225-247.
[15] Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml
[16] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Innovations in Theoretical Computer Science. ACM, 214-226.
[17] Vincent Froese, Leon Kellerhals, and Rolf Niedermeier. 2022. Modification-fair cluster editing. In AAAI, Vol. 36. 6631-6638.
[18] M. R. Garey and D. S. Johnson. 1979. Computers and Intractability. W. H. Freeman.
[19] Mehrdad Ghadiri, Samira Samadi, and Santosh Vempala. 2021. Socially fair k-means clustering. In FAccT. ACM, 438-448.
[20] G. Golub. 1965. Numerical Methods for Solving Linear Least Squares Problems. Numerische Mathematik 7, 3 (1965), 206-216. https://doi.org/10.1007/BF01436075
[21] Y. Hong and C. T. Pan. 1992. Rank-Revealing QR Factorizations and the Singular Value Decomposition. Mathematics of Computation 58 (1992), 213-232.
[22] Faisal Kamiran and Toon Calders. 2010. Classification with no discrimination by preferential sampling. In Machine Learning Conference, Vol. 1. Citeseer.
[23] Faisal Kamiran and Toon Calders. 2012. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems 33, 1 (2012), 1-33.
[24] Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. In CHI '15. ACM, 3819-3828. https://doi.org/10.1145/2702123.2702520
[25] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Ashesh Rambachan. 2018. Algorithmic fairness. In AEA Papers and Proceedings, Vol. 108. 22-27.
[26] Daniel D. Lee and H. Sebastian Seung. 1999. Learning the parts of objects by nonnegative matrix factorization. Nature 401 (1999), 788-791.
[27] Zachary Lipton, Julian McAuley, and Alexandra Chouldechova. 2018. Does mitigating ML's impact disparity require treatment disparity? In NeurIPS, Vol. 31. Curran Associates, Inc.
[28] Shira Mitchell, Eric Potash, Solon Barocas, Alexander D'Amour, and Kristian Lum. 2021. Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application 8 (2021), 141-163.
[29] Matt Olfat and Anil Aswani. 2019. Convex Formulations for Fair Principal Component Analysis. In AAAI. AAAI Press, 663-670.
[30] Dimitris S. Papailiopoulos, Anastasios Kyrillidis, and Christos Boutsidis. 2014. Provable deterministic leverage score sampling. In KDD. ACM, 997-1006.
[31] Karl Pearson. 1901. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, 11 (1901), 559-572.
[32] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In KDD. ACM, 560-568.
[33] Dana Pessach and Erez Shmueli. 2022. A review on fairness in machine learning. ACM Computing Surveys 55, 3 (2022), 1-44.
[34] Inioluwa Deborah Raji and Joy Buolamwini. 2022. Actionable Auditing Revisited: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. Communications of the ACM 66, 1 (2022), 101-108.
[35] Ashesh Rambachan, Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan. 2020. An economic perspective on algorithmic fairness. In AEA Papers and Proceedings, Vol. 110. 91-95.
[36] Henrik Skaug Saetra, Mark Coeckelbergh, and John Danaher. 2022. The AI Ethicist's Dirty Hands Problem. Communications of the ACM 66, 1 (2022), 39-41.
[37] Samira Samadi, Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, and Santosh Vempala. 2018. The Price of Fair PCA: One Extra Dimension. In NeurIPS (NIPS '18). Curran Associates Inc., 10999-11010.
[38] Yaroslav Shitov. 2021. Column subset selection is NP-complete. Linear Algebra and its Applications 610 (2021), 52-58.
[39] Uthaipon (Tao) Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, and Santosh Vempala. 2019. Multi-Criteria Dimensionality Reduction with Applications to Fairness. In NeurIPS. Curran Associates Inc., Article 1358, 11 pages.
[40] Songül Tolan, Marius Miron, Emilia Gómez, and Carlos Castillo. 2019. Why machine learning may lead to unfairness: Evidence from risk assessment for juvenile justice in Catalonia. In International Conference on Artificial Intelligence and Law. 83-92.
[41] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P. Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. PMLR, 962-970.
[42] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In ICML. PMLR, 325-333.
| [] |
[] | [] | [] | [] | Our purpose in this note is to investigate the order properties of positive operators from a locally convex space into its conjugate dual. We introduce a natural generalization of the Busch-Gudder strength function and we prove Kadison's anti-lattice theorem and Ando's result on the infimum of positive operators in that context.2010 Mathematics Subject Classification. Primary 47B65, 46A20. | null | [
"https://export.arxiv.org/pdf/2306.04402v1.pdf"
] | 259,095,570 | 2306.04402 | 8d569a519686e5ab1312e810831d52898a8c7a40 |
ZSIGMOND TARCSAY AND ÁBEL GÖDE
Dedicated to Professor Zoltán Sebestyén on the occasion of his 80th birthday
Key words and phrases: positive operator, anti-dual pair, supremum, infimum, ordering, Lebesgue decomposition, strength function
Our purpose in this note is to investigate the order properties of positive operators from a locally convex space into its conjugate dual. We introduce a natural generalization of the Busch-Gudder strength function and we prove Kadison's anti-lattice theorem and Ando's result on the infimum of positive operators in that context. 2010 Mathematics Subject Classification: Primary 47B65, 46A20.
Introduction
Let H be a complex (possibly infinite dimensional) Hilbert space. It is well known that the set of bounded self-adjoint operators B sa (H) is a partially ordered set with respect to the most natural Löwner order
A ≤ B ⇐⇒ (Ax | x) ≤ (Bx | x) (∀x ∈ H).
That is to say, '≤' is a reflexive, transitive and anti-symmetric relation. It is however proved by Kadison [4] that the partially ordered set (B sa (H), ≤) is as far from being a lattice as possible. In fact, the supremum A ∨ B of the self-adjoint operators A and B can exist only in the trivial case when they are comparable. The same is true for the question of the infimum, as we have
A ∧ B = −((−A) ∨ (−B)).
If we examine the same questions in the cone B+(H) of positive operators, the answer does not change in the case of the supremum. However, it is not difficult to see that the infimum of two orthogonal projections P, Q with disjoint ranges is P ∧ Q = 0 in B+(H). This simple example suggests that the infimum problem considered over the cone of positive operators is a more sophisticated problem than that over B_sa(H). The latter issue was completely solved by T. Ando [2]. He showed that the infimum problem is closely related to the so-called Lebesgue-type decomposition of positive operators, the theory of which was also developed by him [1]. Namely, given two positive operators A, B on H, there exist two positive operators B_a, B_s on H, where B_a is absolutely continuous with respect to A, whilst B_s is singular to A, such that B_a + B_s = B (see [1]). It is also proved by Ando that such a decomposition is not unique in general, but there is a distinguished positive operator (denoted by [A]B := B_a) which satisfies the decomposition and which is the largest among those operators C ≥ 0 such that C ≤ B and C ≪ A. (We refer the reader to [1] and [6] for the details.) This maximal operator [A]B is called the A-absolutely continuous part of B. From the maximality property of the absolutely continuous parts it is easy to check that if [B]A and [A]B are comparable, then the infimum of A and B exists, namely, A ∧ B = min{[A]B, [B]A}.
The main result of Ando's paper [2] states that the reverse of this is also true: A ∧ B exists if and only if the corresponding absolutely continuous parts are comparable. The aim of this note is to investigate the supremum and infimum problem in a much more general context, among the positive cone of operators on so-called anti-dual pairs. By anti-dual pair we mean a pair (E, F) of complex vector spaces which are connected by a so-called anti-duality function ⟨·, ·⟩ : F × E → C. The latter differs from the well-known duality function (see [5, Chapter IV]) only in that it is conjugate linear in its second variable. The notion of anti-dual pairs allows us to define positivity of linear operators of type A : E → F in a way that is formally identical to the positivity of operators over Hilbert spaces (cf. [7] and [8]). The set L+(E, F) of positive operators from E to F is then a partially ordered set via the ordering
A ≤ B ⇐⇒ ⟨Ax, x⟩ ≤ ⟨Bx, x⟩ (∀x ∈ E).
In this article we are going to present the corresponding analogous versions to Kadison's and Ando's theorems in the ordered set (L + (E, F ), ≤). In the case of supremum, the key is a corresponding generalization of the Busch-Gudder strength function [3] to positive operators in L + (E, F ). This will enable us to provide a numerical characterization of the ordering. The solution to the infimum problem is based on the Lebesgue decomposition theory for operators on anti-dual pairs developed in [7].
Preliminaries
Throughout the paper, let E and F be complex vector spaces which are intertwined via a function ⟨·, ·⟩ : F × E → C, which is linear in its first argument and conjugate linear in its second one, and which separates the points of E and F. We shall refer to ⟨·, ·⟩ as an anti-duality function, and the triple (E, F, ⟨·, ·⟩) will be called an anti-dual pair, shortly denoted by ⟨F, E⟩. The prototype of anti-dual pairs is the triple (H, H, (· | ·)), where H is a Hilbert space and (· | ·) is a scalar product over H. In that case, of course, the concept of symmetric and positive operators coincides with the usual concept of those in functional analysis.
However, the most general example of an anti-dual pair is (X, X̄′, ⟨·, ·⟩), where X is a locally convex Hausdorff space, X̄′ denotes the set of continuous and conjugate linear functionals on X, and ⟨·, ·⟩ is the evaluation
⟨f, x⟩ := f(x) (x ∈ X, f ∈ X̄′).
If (E, F ) is an anti-dual pair, then we may consider the corresponding weak topologies w := w(E, F ) on E, and w * := w(F, E) on F , respectively. Both (E, w) and (F, w * ) are locally convex Hausdorff spaces such that
(2.1) Ē′ = F, F̄′ = E.
This fact and (2.1) enable us to define the adjoint (that is, the topological transpose) of a weakly continuous operator. Let F 1 , E 1 and F 2 , E 2 be anti-dual pairs and T : E 1 → F 2 a weakly continuous linear operator, then the (necessarily weakly continuous) linear operator T * :
E₂ → F₁ satisfying ⟨T x₁, x₂⟩₂ = the complex conjugate of ⟨T* x₂, x₁⟩₁, for x₁ ∈ E₁, x₂ ∈ E₂,
is called the adjoint of T. In the following, we will use two particularly important special cases of this: on the one hand, the adjoint T* of a weakly continuous operator T : E → F is also an operator of type E → F. On the other hand, if H is a Hilbert space, then the adjoint of a weakly continuous operator T : E → H acts as an operator T* : H → F, so that their composition T*T : E → F satisfies
(2.2) ⟨T*T x, x⟩ = (T x | T x) (x ∈ E).
A linear operator S : E → F is called symmetric if ⟨Sx, y⟩ coincides with the complex conjugate of ⟨Sy, x⟩ for all x, y ∈ E, and positive if ⟨Sx, x⟩ ≥ 0 for all x ∈ E.
It can be readily checked that every positive operator is symmetric, and every symmetric operator S is weakly continuous (see [7]). We shall denote the set of weakly continuous operators T :
E → F by L(E, F), while we write L+(E, F) for the set of positive operators A : E → F. According to (2.2), T*T ∈ L(E, F) is a positive operator if T ∈ L(E; H). However, if we assume F to be w*-sequentially complete, then we can also state the converse: every positive operator A ∈ L+(E, F) can be written in the form A = T*T for some auxiliary Hilbert space H and T ∈ L(E; H) (cf. [7]). Recall that the anti-dual pair (X, X̄′) is w*-sequentially complete if X is a barreled space (e.g., a Banach space). Also, (X, X̄*) is w*-sequentially complete with X being an arbitrary vector space and X̄* its algebraic anti-dual space.
Let us now recall briefly the Hilbert space factorization method for a positive operator A. Let (E, F) be a w*-sequentially complete anti-dual pair and let A ∈ L+(E, F).
Then (Ax | Ay)_A := ⟨Ax, y⟩ (x, y ∈ E)
defines an inner product on the range space ran A under which it becomes a pre-Hilbert space. We shall denote the completion by H A . The canonical embedding operator
(2.3) J A (Ax) = Ax (x ∈ E)
of ran A ⊆ H_A into F is weakly continuous. By w*-sequential completeness, it uniquely extends to a weakly continuous operator J_A ∈ L(H_A, F). It can be checked that the adjoint operator J*_A ∈ L(E, H_A) satisfies
(2.4) J*_A x = Ax ∈ H_A (x ∈ E),
which yields the following factorization of A:
(2.5) A = J_A J*_A.
In order to discuss the infimum problem of positive operators, we will need some concepts and results related to the Lebesgue decomposition theory of positive operators. Hence, for the sake of the reader, we briefly summarize the content of [7], which is essential for the understanding of this article. The details can be found in the mentioned paper.
We say that the positive operator B ∈ L+(E, F) is absolutely continuous with respect to another positive operator A ∈ L+(E, F) (in notation, B ≪ A) if, for every sequence (x_n)_{n∈N} of E, ⟨Ax_n, x_n⟩ → 0 and ⟨B(x_n − x_m), x_n − x_m⟩ → 0 imply ⟨Bx_n, x_n⟩ → 0. A and B are said to be mutually singular (in notation, A ⊥ B) if the only positive operator C ∈ L+(E, F) for which C ≤ A and C ≤ B are satisfied is C = 0. It is easy to check that B ≤ A implies B ≪ A; however, the converse is apparently not true (cf. [7, Theorem 5.1]).
In [7, Theorem 3.3] it was proved that every positive operator B has a Lebesgue-type decomposition with respect to any other positive operator A. More precisely, there exists a pair B_a and B_s of positive operators such that
(2.6) B = B a + B s , B a ≪ A, B s ⊥ A.
Such a decomposition is not unique in general (cf. [7, Theorem 7.2]); however, there is a unique operator B_a ∈ L+(E, F) satisfying (2.6) which is maximal among those positive operators C such that C ≤ B and C ≪ A. We shall call this uniquely determined operator B_a the A-absolutely continuous part of B, and adopting Ando's [1] notation, we shall write [A]B := B_a for it.
The strength of a positive operator
Throughout the section let (E, F ) be a w * -sequentially complete anti-dual pair. Let us introduce the partial order on L + (E, F ) by
A ≤ B ⇐⇒ ⟨Ax, x⟩ ≤ ⟨Bx, x⟩ (x ∈ E).
In what follows, inspired by the paper [3] of Busch and Gudder, we associate a function to each positive operator A (called the strength function) which can be used to characterize that ordering. Let f ∈ F be a non-zero vector and set
(f ⊗ f)(x) := ⟨f, x⟩ · f (x ∈ E),
so that f ⊗ f ∈ L + (E, F ) is a rank one positive operator with range space C · f . For a given positive operator A ∈ L + (E, F ) we set
λ(A, f ) := sup{t ≥ 0 : t · f ⊗ f ≤ A}.
Following the terminology of [3], the nonnegative (finite) number λ(A, f ) will be called the strength of A along the ray f , whilst the function
λ(A, ·) : F \ {0} → [0, +∞)
is the strength function of A. To see that λ(A, f) is always finite, consider an x ∈ E such that ⟨f, x⟩ ≠ 0. Then λ(A, f) = +∞ would result in the equality ⟨Ax, x⟩ = +∞, which is impossible. At this point, we note that in [3] the strength function was only defined along vectors from the unit sphere of a Hilbert space. In that case, the strength function has the uniform upper bound ‖A‖. By the above interpretation of the strength function, λ(A, ·) will not be bounded, but this fact is not of considerable importance for the applications.
In our first result, we examine along which 'rays' f the strength function takes a positive value. The factorization (2.5) of the positive operator A and the auxiliary Hilbert space H A will play an important role in this.
Theorem 3.1. Let A ∈ L+(E, F) and f ∈ F, f ≠ 0. The following are equivalent:
(i) λ(A, f) > 0;
(ii) there is a constant m ≥ 0 such that |⟨f, x⟩|² ≤ m · ⟨Ax, x⟩ (x ∈ E);
(iii) there is a (unique) ξ_f ∈ H_A such that J_A ξ_f = f.
In any case, we have
(3.1) λ(A, f) = 1/‖ξ_f‖²_A = 1/m_f,
where m_f > 0 is the smallest constant m that satisfies (ii).
Proof. (i)⇒(ii): Fix a real number 0 < t < λ(A, f); then ⟨(f ⊗ f)x, x⟩ ≤ (1/t) · ⟨Ax, x⟩ for x ∈ E, which implies (ii). (ii)⇒(iii): Inequality (ii) expresses just that
ϕ(Ax) := ⟨f, x⟩, x ∈ E,
defines a continuous conjugate linear functional from ran A ⊆ H_A to C, with ‖ϕ‖² ≤ m. Denote by ξ_f ∈ H_A the corresponding Riesz representing vector. Then
⟨f, x⟩ = (ξ_f | Ax)_A = (ξ_f | J*_A x)_A = ⟨J_A ξ_f, x⟩, x ∈ E,
and hence f = J_A ξ_f. (iii)⇒(i): Suppose that J_A ξ_f = f for some ξ_f ∈ H_A. Then
|⟨f, x⟩|² = |(ξ_f | Ax)_A|² ≤ ‖ξ_f‖²_A ⟨Ax, x⟩,
hence λ f ⊗ f ≤ A with λ = ‖ξ_f‖⁻²_A.
As a corollary we retrieve [3, Theorem 3]. The proof presented here is significantly shorter and simpler.
Corollary 3.2. Let H be a Hilbert space, A ∈ B(H) and f ∈ H. Then λ(A, f) > 0 if and only if f ∈ ran A^{1/2}. In that case,
(3.2) λ(A, f) = 1/‖A^{−1/2} f‖²,
where A^{−1/2} denotes the (possibly unbounded) inverse of A^{1/2} restricted to (ker A)^⊥.
Proof. The first part of the statement is clear because A^{1/2} A^{1/2} = A = J_A J*_A implies ran A^{1/2} = ran J_A. In order to prove (3.2), consider the linear functional
ϕ : ran A^{1/2} → C, ϕ(A^{1/2} x) := (x | f) (x ∈ H),
which is well-defined and bounded with ‖ϕ‖² = λ(A, f)⁻¹. By the Riesz representation theorem, there is a unique vector ζ in the closure of ran A^{1/2}, which equals (ker A)^⊥, such that
(x | f) = (A^{1/2} x | ζ) = (x | A^{1/2} ζ) (x ∈ H).
Hence A^{1/2} ζ = f and thus ζ = A^{−1/2} f. Furthermore, ‖ζ‖² = ‖ϕ‖² = λ(A, f)⁻¹, which completes the proof.
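A quick finite-dimensional sanity check of (3.2) (a sketch of ours, not from the paper: a random positive matrix, with f chosen in ran A):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M.conj().T @ M                       # positive operator on C^4
f = A @ rng.standard_normal(4)           # ensures f lies in ran A ⊆ ran A^{1/2}

# lambda(A, f) = 1 / ||A^{-1/2} f||^2, with A^{-1/2} the pseudoinverse square root
w, U = np.linalg.eigh(A)
Ainvhalf = U @ np.diag(np.where(w > 1e-12, w, np.inf) ** -0.5) @ U.conj().T
lam = 1.0 / np.linalg.norm(Ainvhalf @ f) ** 2

# Cross-check the definition: t * (f ⊗ f) <= A iff A - t f f* is positive semidefinite
for t in (0.999 * lam, 1.001 * lam):
    print(t, np.linalg.eigvalsh(A - t * np.outer(f, f.conj())).min() >= -1e-9)
# expected: True just below lam, False just above
```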
Below we establish the most useful property of the strength function for our purposes; namely, it can be used to characterize the ordering.
Theorem 3.3. Let A, B ∈ L+(E, F) be positive operators; then the following assertions are equivalent:
(i) A ≤ B;
(ii) λ(A, f) ≤ λ(B, f) for every 0 ≠ f ∈ F.
Proof. (i)⇒(ii): If A ≤ B and λ · f ⊗ f ≤ A for some λ, then also λ · f ⊗ f ≤ B, hence λ(A, f) ≤ λ(B, f). (ii)⇒(i): Assume λ(A, ·) ≤ λ(B, ·) everywhere on F \ {0}.
By contradiction, suppose ⟨Ax, x⟩ > ⟨Bx, x⟩ for some x and set f := ⟨Ax, x⟩⁻¹ · Ax. For every y ∈ E,
⟨(f ⊗ f)(y), y⟩ = |⟨Ax, y⟩|² / ⟨Ax, x⟩ ≤ ⟨Ay, y⟩
by the Cauchy-Schwarz inequality |⟨Ax, y⟩|² ≤ ⟨Ax, x⟩ ⟨Ay, y⟩. Hence λ(A, f) ≥ 1 and thus, by assumption,
⟨Bx, x⟩ ≥ ⟨(f ⊗ f)(x), x⟩ = ⟨Ax, x⟩,
which leads to a contradiction.
Corollary 3.4. Two positive operators are identical if and only if so are their strength functions.
Remark 3.5. From the very definition of the strength function one readily checks the following elementary properties: let A, B ≥ 0 and 0 ≠ f ∈ F; then
(1) λ(αA, f) = αλ(A, f) for every α ≥ 0,
(2) λ(αA + (1 − α)B, f) ≥ αλ(A, f) + (1 − α)λ(B, f) for every α ∈ [0, 1],
(3) λ(A + B, f) ≥ λ(A, f) + λ(B, f).
Supremum of positive operators
In this section we are going to prove Kadison's anti-lattice theorem in the setting of positive operators on an anti-dual pair. Namely, we show that the supremum of two positive operators A and B exists if and only if they are comparable (that is, A ≤ B or B ≤ A).
Theorem 4.1. Let (E, F) be a w*-sequentially complete anti-dual pair and let A, B ∈ L+(E, F) be positive operators. Then the following two statements are equivalent: (i) the supremum A ∨ B exists; (ii) A and B are comparable.
Proof. The comparability of A and B trivially implies the existence of A ∨ B. For the opposite direction we suppose that T ∈ L+(E, F) \ {A, B} is such that T ≥ A and T ≥ B, and show that there exists S ∈ L+(E, F) such that S ≥ A and S ≥ B, but S is not comparable with T.
In the first case we suppose that the intersection of ran(J T −A ) and ran(J T −B ) contains a non-zero vector e ∈ ran(J T −A )∩ran(J T −B ). By Theorem 3.1 there exists λ > 0 such that λ · e ⊗ e ≤ T − A and λ · e ⊗ e ≤ T − B. Let f ∈ F \ C · e, then with S := T − λ · e ⊗ e + f ⊗ f we have S ≥ A and S ≥ B, but S is apparently not comparable with T .
In the second case we suppose that ran(J T −A ) ∩ ran(J T −B ) = {0}. By Theorem 3.1 there exist e ∈ ran(J T −A ) and f ∈ ran(J T −B ) such that e ⊗ e ≤ T − A and f ⊗ f ≤ T − B. Let us define
S₀ := e ⊗ e + 2e ⊗ f + 2f ⊗ e + f ⊗ f and S := T + (1/3)S₀.
Clearly, S₀ and S are symmetric operators. We claim that S ≥ A and S ≥ B. Indeed, we have
S₀ + 3e ⊗ e = (2e + f) ⊗ (2e + f) ≥ 0,
which implies −(1/3)S₀ ≤ e ⊗ e ≤ T − A, therefore (1/3)S₀ ≥ A − T. A similar argument shows that (1/3)S₀ ≥ B − T. As a consequence we see that
S = T + (1/3)S₀ ≥ T + (B − T) = B,
and similarly, S ≥ A. However, the symmetric operator S₀ is not positive: for x ∈ E such that ⟨e, x⟩ = 1 and ⟨f, x⟩ = −1 we have ⟨S₀x, x⟩ = −2. Consequently, S and T are not comparable, which proves the theorem.

Infimum of positive operators

Let A, B ∈ L+(E, F) be positive operators and consider the corresponding A- and B-absolutely continuous parts [A]B and [B]A, arising from the corresponding Lebesgue decompositions in the sense of (2.6). Thus,
(5.1) [A]B = max{C ∈ L+(E, F) : C ≤ B, C ≪ A},
and
(5.2) [B]A = max{C ∈ L+(E, F) : C ≤ A, C ≪ B},
see [7, Theorem 3.3]. Assume for a moment that [A]B ≤ [B]A; then clearly [A]B ≤ A, [A]B ≤ B, and for every C ∈ L+(E, F) such that C ≤ A, C ≤ B we also have C ≤ [A]B. This in turn means that [A]B is the infimum of A and B in the cone L+(E, F). The situation is essentially the same if we assume that [B]A ≤ [A]B. Ando's result [2, Theorem 6] states that, in the context of positive operators on Hilbert spaces, the infimum of two operators exists only in one of the above two cases. That is, if A ∧ B exists in the cone B+(H), then [A]B and [B]A are comparable (and the smaller one is the infimum).
Our aim in the present section is to establish the anti-dual pair analogue of Ando's mentioned result:
Theorem 5.1. Let (E, F) be a w*-sequentially complete anti-dual pair and let A, B ∈ L+(E, F) be positive operators. Then the following two statements are equivalent:
(i) the infimum A ∧ B exists in L+(E, F);
(ii) the corresponding absolutely continuous parts [A]B and [B]A are comparable.
In any case, A ∧ B = min{[A]B, [B]A}.
Proof. We only prove the non-trivial implication (i)⇒(ii). First of all we remark that A ∧ B exists if and only if [A]B ∧ [B]A exists, and in that case these two operators coincide. This is easily obtained from the maximality properties (5.1) and (5.2). Recall also that [A]B and [B]A are mutually absolutely continuous according to [7, Theorem 3.6], so we may assume without loss of generality that A ≪ B and B ≪ A.
Consider now the auxiliary Hilbert space H := H_{A+B} associated with the sum A + B, denote by (· | ·) its inner product and by J := J_{A+B} the corresponding canonical embedding of H into F. Since the nonnegative forms
((A + B)x, (A + B)y) ↦ ⟨Ax, y⟩, ((A + B)x, (A + B)y) ↦ ⟨Bx, y⟩,
defined on the dense subspace ran(A + B) ⊆ H, are obviously bounded with norm ≤ 1, the Lax-Milgram lemma provides two positive operators Ã, B̃ ∈ B(H) with ‖Ã‖, ‖B̃‖ ≤ 1 which satisfy
(Ã(A + B)x | (A + B)y) = ⟨Ax, y⟩, (B̃(A + B)x | (A + B)y) = ⟨Bx, y⟩.
Using the canonical property J*x = (A + B)x, x ∈ E, we obtain that A = JÃJ* and B = JB̃J*. Observe also that
((Ã + B̃)J*x | J*y) = ⟨(A + B)x, y⟩ = (J*x | J*y), (x, y ∈ E),
whence Ã + B̃ = I, where I stands for the identity operator on H. Note also that
(5.3) ker Ã = ker B̃ = {0},
because A ≪ B and B ≪ A (see [7, Lemma 5.2]).
Denote by C := A ∧ B and let C̃ ∈ B(H) be the positive operator such that C = JC̃J*. We claim that C̃ = Ã ∧ B̃ in B+(H). Indeed, it is clear on the one hand that C̃ ≤ Ã and C̃ ≤ B̃. On the other hand, consider a positive operator D̃ ∈ B+(H) such that D̃ ≤ Ã, B̃. It is readily seen that D := JD̃J* ∈ L+(E, F) satisfies D ≤ A, B, whence D ≤ C, which implies D̃ ≤ C̃.
Let E be the spectral measure of Ã. Following the reasoning of [2, Lemma 1] it follows that
(5.4) Ã ∧ B̃ = Ã ∧ (I − Ã) = ∫₀¹ min{t, 1 − t} dE(t).
The statement of the theorem will be proved if we show that either A ∧ B = A or A ∧ B = B. By (5.4) this is equivalent to proving that min{t, 1 − t} coincides either with t or with 1 − t on the spectrum of Ã. Suppose that is not the case. By (5.3) we have E({0}) = E({1}) = 0, hence there exists some ε > 0 such that none of the spectral projections P₁ := E([1/2 + 3ε, 1 − 3ε]) and P₂ := E([3ε, 1/2 − 3ε]) is zero. Suppose that 0 < dim P₂ ≤ dim P₁ and take a partial isometry V ∈ B(H) such that (ker V)^⊥ ⊆ ran P₂ and ran V ⊆ ran P₁. Then, following Ando's treatment used in the proof of [2, Theorem 1], one can prove that
D̃ := (Ã − ε) · P₂ + (B̃ − ε) · P₁ + √(2ε) (V P₂ + P₂ V*)
satisfies 0 ≤ D̃ ≤ Ã, B̃, but D̃ is not comparable with C̃. This, however, contradicts the fact that C̃ = Ã ∧ B̃, which proves the theorem.
As a direct application of Theorems 4.1 and 5.1, we obtain the following result regarding the supremum and infimum of non-negative forms over a given vector space. With the latter statement we retrieve [9, Theorem 3].
Corollary 5.2. Let s and t be nonnegative forms on a complex vector space D.
(a) The supremum t ∨ s exists if and only if t ≤ s or s ≤ t.
(b) The infimum t ∧ s exists if and only if [t]s ≤ [s]t or [s]t ≤ [t]s.
Proof. Denote by D̄* the conjugate algebraic dual of D (that is, D̄* consists of all anti-linear forms f : D → C). Then (D, D̄*), endowed with the natural anti-duality ⟨·, ·⟩, forms a w*-sequentially complete anti-dual pair. Note that every nonnegative form t on D induces a positive operator T : D → D̄* by the correspondence ⟨Tx, y⟩ := t(x, y), (x, y ∈ D). Conversely, every positive operator T : D → D̄* defines a nonnegative form in the most natural way. The correspondence t ↦ T is an order preserving bijection between nonnegative forms and positive operators, so that both statements (a) and (b) of the theorem follow from Theorems 4.1 and 5.1, respectively.
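For readers who want to experiment, here is a finite-dimensional sketch (our illustration; it assumes the classical matrix fact, going back to Ando, that the absolutely continuous part arises as an increasing limit of parallel sums, [A]B = lim_n (nA) : B with X : Y = X(X + Y)⁺Y):

```python
import numpy as np

def parallel_sum(X, Y):
    return X @ np.linalg.pinv(X + Y) @ Y

def ac_part(A, B, n=1e8):
    # numerical stand-in for [A]B = lim_{n -> inf} (nA) : B
    return parallel_sum(n * A, B)

A = np.diag([1.0, 0.0])   # supported on the first coordinate
B = np.eye(2)
AB = ac_part(A, B)        # [A]B ~ diag(1, 0): the part of B living on ran A
BA = ac_part(B, A)        # [B]A ~ A, since A << B
print(np.round(AB, 6), np.round(BA, 6))
# Here [A]B = [B]A = diag(1, 0); they are comparable, so by Theorem 5.1 the
# infimum exists and A ∧ B = min{[A]B, [B]A} = diag(1, 0).
```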
References
[1] T. Ando, Lebesgue-type decomposition of positive operators, Acta Sci. Math. (Szeged) 38 (1976), no. 3-4, 253-260.
[2] T. Ando, Problem of infimum in the positive cone, Analytic and Geometric Inequalities and Applications (1999), 1-12.
[3] P. Busch and S. P. Gudder, Effects as functions on projective Hilbert space, Letters in Mathematical Physics 47 (1999), 329-337.
[4] R. Kadison, Order properties of bounded self-adjoint operators, Proceedings of the American Mathematical Society 2 (1951), no. 3, 505-510.
[5] H. H. Schaefer, Topological vector spaces, Springer-Verlag, New York-Berlin, 1971.
[6] Zs. Tarcsay, Lebesgue-type decomposition of positive operators, Positivity 17 (2013), 803-817.
[7] Zs. Tarcsay, Operators on anti-dual pairs: Lebesgue decomposition of positive operators, Journal of Mathematical Analysis and Applications 484 (2020), no. 2, 123753.
[8] Zs. Tarcsay and T. Titkos, Operators on anti-dual pairs: Generalized Krein-von Neumann extension, Mathematische Nachrichten 294 (2021), no. 9, 1821-1838.
[9] T. Titkos, Ando's theorem for nonnegative forms, Positivity 16 (2012), no. 4, 619-626.
Zs. Tarcsay, Department of Mathematics, Corvinus University of Budapest, IX. Fővám tér 13-15., Budapest H-1093, Hungary, and Department of Applied Analysis and Computational Mathematics, Eötvös Loránd University, Pázmány Péter sétány 1/c., Budapest H-1117, Hungary. Email address: [email protected]
Á. Göde, Department of Applied Analysis and Computational Mathematics, Eötvös Loránd University, Pázmány Péter sétány 1/c., Budapest H-1117, Hungary.
| [] |
[
"Glitch systematics on the observation of massive black-hole binaries with LISA",
"Glitch systematics on the observation of massive black-hole binaries with LISA"
] | [
"Alice Spadaro \nDipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nINFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n",
"Riccardo Buscicchio \nDipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nINFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nInstitute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK\n",
"Daniele Vetrugno \nDepartment of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly\n",
"Antoine Klein \nInstitute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK\n",
"Davide Gerosa \nDipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nINFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nInstitute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK\n",
"Stefano Vitale \nDepartment of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly\n",
"Rita Dolesi \nDepartment of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly\n",
"William Joseph Weber \nDepartment of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly\n",
"Monica Colpi \nDipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n\nINFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n"
] | [
"Dipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"INFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"Dipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"INFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"Institute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK",
"Department of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly",
"Institute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK",
"Dipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"INFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"Institute for Gravitational Wave Astronomy\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTBirminghamUK",
"Department of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly",
"Department of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly",
"Department of Physics\nTrento Institute for Fundamental Physics and Applications\nUniversity of Trento\nINFN\n38123Povo, TrentoItaly",
"Dipartimento di Fisica \"G. Occhialini\"\nUniversità degli Studi di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly",
"INFN\nSezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly"
] | [] | Detecting and coherently characterizing thousands of gravitational-wave signals is a core dataanalysis challenge for the Laser Interferometer Space Antenna (LISA). Transient artifacts, or "glitches", with disparate morphologies are expected to be present in the data, potentially affecting the scientific return of the mission. We present the first joint reconstruction of short-lived astrophysical signals and noise artifacts. Our analysis is inspired by glitches observed by the LISA Pathfinder mission, including both acceleration and fast displacement transients. We perform full Bayesian inference using LISA time-delay interferometric data and gravitational waveforms describing mergers of massive black holes. We focus on a representative binary with a detector-frame total mass of 6 × 10 7 M⊙ at redshift 7, yielding a signal lasting ∼ 30 h in the LISA sensitivity band. We explore two glitch models of different flexibility, namely a fixed parametric family and a shapelet decomposition. In the most challenging scenario, we report a complete loss of the gravitational-wave signal if the glitch is ignored; more modest glitches induce biases on the black-hole parameters. On the other hand, a joint inference approach fully sanitizes the reconstruction of both the astrophysical and the glitch signal. We also inject a variety of glitch morphologies in isolation, without a superimposed gravitational signal, and show we can identify the correct transient model. Our analysis is an important stepping stone toward a realistic treatment of LISA data in the context of the highly sought-after "global fit".I. | null | [
"https://export.arxiv.org/pdf/2306.03923v1.pdf"
] | 259,095,820 | 2306.03923 | bd4fd03a4d8ea218135d5b3500d12ae37c7cf193 |
Glitch systematics on the observation of massive black-hole binaries with LISA
Alice Spadaro
Dipartimento di Fisica "G. Occhialini"
Università degli Studi di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
INFN
Sezione di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
Riccardo Buscicchio
Dipartimento di Fisica "G. Occhialini"
Università degli Studi di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
INFN
Sezione di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
Institute for Gravitational Wave Astronomy
School of Physics and Astronomy
University of Birmingham
B15 2TTBirminghamUK
Daniele Vetrugno
Department of Physics
Trento Institute for Fundamental Physics and Applications
University of Trento
INFN
38123Povo, TrentoItaly
Antoine Klein
Institute for Gravitational Wave Astronomy
School of Physics and Astronomy
University of Birmingham
B15 2TTBirminghamUK
Davide Gerosa
Dipartimento di Fisica "G. Occhialini"
Università degli Studi di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
INFN
Sezione di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
Institute for Gravitational Wave Astronomy
School of Physics and Astronomy
University of Birmingham
B15 2TTBirminghamUK
Stefano Vitale
Department of Physics
Trento Institute for Fundamental Physics and Applications
University of Trento
INFN
38123Povo, TrentoItaly
Rita Dolesi
Department of Physics
Trento Institute for Fundamental Physics and Applications
University of Trento
INFN
38123Povo, TrentoItaly
William Joseph Weber
Department of Physics
Trento Institute for Fundamental Physics and Applications
University of Trento
INFN
38123Povo, TrentoItaly
Monica Colpi
Dipartimento di Fisica "G. Occhialini"
Università degli Studi di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
INFN
Sezione di Milano-Bicocca
Piazza della Scienza 320126MilanoItaly
Glitch systematics on the observation of massive black-hole binaries with LISA
(Dated: June 8, 2023)
Detecting and coherently characterizing thousands of gravitational-wave signals is a core dataanalysis challenge for the Laser Interferometer Space Antenna (LISA). Transient artifacts, or "glitches", with disparate morphologies are expected to be present in the data, potentially affecting the scientific return of the mission. We present the first joint reconstruction of short-lived astrophysical signals and noise artifacts. Our analysis is inspired by glitches observed by the LISA Pathfinder mission, including both acceleration and fast displacement transients. We perform full Bayesian inference using LISA time-delay interferometric data and gravitational waveforms describing mergers of massive black holes. We focus on a representative binary with a detector-frame total mass of 6 × 10 7 M⊙ at redshift 7, yielding a signal lasting ∼ 30 h in the LISA sensitivity band. We explore two glitch models of different flexibility, namely a fixed parametric family and a shapelet decomposition. In the most challenging scenario, we report a complete loss of the gravitational-wave signal if the glitch is ignored; more modest glitches induce biases on the black-hole parameters. On the other hand, a joint inference approach fully sanitizes the reconstruction of both the astrophysical and the glitch signal. We also inject a variety of glitch morphologies in isolation, without a superimposed gravitational signal, and show we can identify the correct transient model. Our analysis is an important stepping stone toward a realistic treatment of LISA data in the context of the highly sought-after "global fit".I.
INTRODUCTION
The Laser Interferometer Space Antenna (LISA) [1], currently planned to be launched in the early 2030s, will detect gravitational waves (GWs) from space. LISA will extend the exploration of the GW spectrum in the milliHertz band -from about 10⁻⁴ to 1 Hz -providing observations of astrophysical sources ranging from Galactic white-dwarf binaries to mergers of massive black-holes at high redshift [2,3].
The detection and characterization of different astrophysical sources is an extremely challenging problem of data-analysis. This is due to the combined effect of the all-sky detector sensitivity and the large number, O(10 4 ), of long-lived GW signals overlapping both in time and frequency. Maximizing the payoff of the LISA mission requires an accurate, efficient, and global analysis [4,5], simultaneously fitting data models for an unknown number of detectable GW sources and uncertain detector noise.
In addition to the abundance of astrophysical sources, the LISA data stream will be polluted by noise transients. These artifacts, also called "glitches" from a terminology borrowed from ground-based detectors, have been observed at a rate of about one per day and extensively characterized by the LISA Pathfinder (LPF) mission [6,7].
* [email protected]
† [email protected]
Efforts are ongoing to understand the origin of the LPF glitches by capitalizing on the collected data and eliminating them by design in the LISA hardware. Previous studies stressed the need to assess their impact on the scientific return of the LISA mission [8,9]. The physical nature of glitches in LPF still needs to be fully understood, with possible interpretations including outgassing phenomena, electronics events, and eddy current transients [8]. Moreover, new types of unexpected noise artifacts can appear in LISA because of the increased complexity of both spacecraft and payload design compared to LPF. Because the occurrence and morphology of glitches in the full LISA setup are uncertain, a conservative approach is to prepare a robust data analysis strategy to mitigate their impact downstream. Tackling the fundamental challenge of including glitches in parameter-estimation pipelines is well recognized by the LISA Consortium as part of the core preparation activities for the imminent mission adoption. To this end, a set of LISA Data Challenges (LDCs) [10] are in progress to develop and demonstrate data-analysis readiness. Among others, the LDC nicknamed Spritz is devoted to investigating glitches and gaps in the reconstruction of signals from massive black-hole binaries (MBHBs).
A recent analysis suggests the adoption of heavy-tailed likelihoods to mitigate the effect of noise transients upon the inference of GW sources [11]. In this work, we instead assess for the first time the impact of glitches on shortlived MBHB signals performing direct, joint parameter estimation. We present a complete analytical derivation of the LISA response to two types of instrumental artifacts as detected by LPF, namely force and displacement transients of the test masses. We then report results by including both models in a large, multi-source parameter estimation framework for LISA data analysis. This infrastructure, called Balrog, is currently under active development and has already been tested against different astrophysical sources (see e.g. Refs. [12][13][14][15][16]).
The paper is organized as follows. In Sec. II, we introduce the phenomenology of the expected instrumental artifacts. In Sec. III, we present our glitch models and provide a brief summary of the fiducial GW-source and glitch parameters. In Sec. IV, we derive an alternative set of time-delay interferometric (TDI) variables suitable for the simultaneous treatment of glitches and GW signals. In Sec. V, we provide definitions of relevant statistical quantities and details on our parameter-estimation runs. In Sec. VI, we present our inference results. Finally, in Sec. VII, we summarize our findings and describe future developments. Throughout this paper, we use units where c = 1.
II. LPF GLITCHES IN LISA DATA
A. Phenomenology of LPF glitches
Glitches are observed as additional signals in the data stream. They can thus be modeled and subtracted from the data as such. The strategy here is to (i) get a consistent estimate of the power spectral density (PSD) of the underlying quasi-stationary noise over the entire data stream and thus (ii) improve the astrophysical signal inference by making it robust against glitch-induced biases. The latter constitutes a key element of the LISA data processing pipeline in view of the targeted "global fit" [4,5]. The properties of glitches, namely amplitude, duration, and time morphology, depend both on the measurement system and the originating physical process.
LPF observed two main kinds of glitches: a first class treated as an effective displacement-measurement artifact in the optical metrology chain and another class due to spurious forces acting on the test masses (TMs). Displacement glitches have been rarely observed in nominal conditions, have a typical duration comparable with the LISA sampling cadence, and carry negligible impulse per unit of mass as compared to the typical forces acting on the TMs [8]. As a consequence, fast, low-impulse glitches could be expected to affect the geodesic motion of the LISA constellation only mildly. On the contrary, force events result in impulse-carrying glitches lasting from tens of seconds to several hours, have a significant impact on the noise performance, and can potentially contaminate GW detection and parameter estimation.
During its ordinary runs, LPF observed 102 impulse-carrying glitches and 81 of these were visible in the data stream as a sharp, positive offset of the residual force-per-unit-mass (henceforth loosely referred to as "acceleration") [8]. These acceleration glitches correspond to the two TMs moving toward each other along the sensitive axis of the pair, i.e. the direction joining their respective centers of mass. The rate of these events has been estimated to be about 1 per day and compatible with a Poisson distribution [8]. Several possible physical origins for glitches have been vetoed by extensive cross-checking and correlation analysis on LPF data, with the most plausible explanation pointing to either gas outbursts or virtual leaks in the vacuum chamber and the material surrounding the TMs. Dedicated experimental studies are underway to corroborate this hypothesis [8].
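The quoted rate is easy to turn into synthetic event times. Below is a minimal NumPy sketch, assuming a homogeneous Poisson process at one event per day over a 30-day stretch; the numbers and the function name are illustrative choices of ours, not LPF outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_onset_times(rate_per_day=1.0, span_days=30.0):
    # Homogeneous Poisson process: independent exponential waiting
    # times between consecutive glitch onsets.
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_per_day)
        if t > span_days:
            return np.array(times)
        times.append(t)

onsets = poisson_onset_times()  # onset times in days; ~30 events on average
```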
B. Guiding principles for LISA differential acceleration measurements
We now list a few guiding principles behind our modeling choices:
• Long-lived glitches related to force phenomena such as those observed by LPF are the most relevant for LISA. For these, we adopt a phenomenological parameterization suitable to describe their temporal evolution in terms of differential test-mass accelerations.
• Constructing the corresponding signal model for fractional phase observables in the frequency domain is more complex, although doable.
• Long-lived transients present in a displacement (optical phase) or velocity (optical frequency) observable disappear in an acceleration observable, with the signal disturbance limited to the duration of the external force transient. Likewise, glitch parameters related to the initial conditions (position and velocity) are eliminated with an acceleration observable.
• In a realistic operational setup, systematic errors arising from force disturbances (e.g. stiffness coupling) could be subtracted directly in acceleration. Thus, our fitting model does not require any additional integration or whitening filter.
• When the effective glitch "signal" has spectral content mainly near the low-frequency end of the LISA sensitivity range, differentiation is numerically safer than integration. In this regime, data correction from systematics in the displacement variables is still viable.
• The corresponding TDI variables written in acceleration allow for a straightforward inclusion of LPF glitches in a Bayesian inference framework.
• GW signal models can be easily rewritten as effective accelerations by differentiating those already available in phase or fractional frequency.

These broad considerations are mostly inspired by the observational equivalence between GWs and tidal forces accelerating TMs relative to their local inertial frames [17]. We thus opt to implement our joint inference for glitches and GWs with suitable acceleration TDI variables.
III. TRANSIENTS MODELING
The fundamental observable in LISA is the phase evolution ∆ϕ of a one-way propagating laser along each of the six links connecting the satellites. This can be equivalently written as an optical pathlength
L = ∆ϕ/ω_l ,   (1)

where ω_l is the central frequency of the laser signal, which is assumed to be constant. We now focus on three different mechanisms perturbing the phase readout.
A. Acceleration transients
The two TMs housed in each of the LISA satellites are expected to independently exchange momentum with their surrounding environment (see Fig. 1 for a schematic representation). We model the resulting transient acceleration profile ⃗a_i of the i-th test mass as in Ref. [9]. We use a two-damped-exponential model inspired by glitches observed in LPF, namely
g(t; A, β_1, β_2, τ) = [A/(β_1 − β_2)] [e^{−(t−τ)/β_1} − e^{−(t−τ)/β_2}] Θ(t − τ),   (2)
which we refer to as Model A1. Equation (2) integrates to the net transferred momentum per unit mass:
∫_{−∞}^{+∞} g(t; A, β_1, β_2, τ) dt = A .   (3)
The parameters β_1, β_2 describe the typical timescales of the two exponentials, while τ is the glitch onset time entering the Heaviside step function Θ. The corresponding Fourier-domain representation is
g̃(ω; A, β_1, β_2, τ) = −A e^{−iτω} / [(β_1ω − i)(β_2ω − i)] .   (4)
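As a sanity check on Eqs. (2)-(4), here is a small NumPy sketch of Model A1 in both domains; the function names are ours, and the final assertion verifies the normalization of Eq. (3) numerically.

```python
import numpy as np

def glitch_a1_time(t, A, beta1, beta2, tau):
    # Eq. (2): difference of two damped exponentials, zero before onset tau.
    dt = np.maximum(t - tau, 0.0)
    amp = A / (beta1 - beta2) * (np.exp(-dt / beta1) - np.exp(-dt / beta2))
    return np.where(t >= tau, amp, 0.0)

def glitch_a1_freq(omega, A, beta1, beta2, tau):
    # Eq. (4): analytic Fourier-domain counterpart of Eq. (2).
    return -A * np.exp(-1j * tau * omega) / ((beta1 * omega - 1j) * (beta2 * omega - 1j))

# The time integral equals the transferred momentum per unit mass A, Eq. (3):
t = np.linspace(0.0, 5e4, 200_000)
g = glitch_a1_time(t, A=2.0, beta1=900.0, beta2=400.0, tau=1e3)
assert abs(np.trapz(g, t) - 2.0) < 1e-3
```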
Accommodating glitches of unknown shape requires a more flexible model. We construct this using a superposition of S Gabor-Morlet shapelets
g(t) = Σ_{i=1}^{S} σ(t; A_i, τ_i, β_i, n_i) ,   (5)
where

σ(t; A, τ, β, n) = c_n ψ_n((t − τ)/β) ,
ψ_n(t) = (2t/n) e^{−t/n} L^{(1)}_{n−1}(2t/n) Θ(t) ,   (6)
c_n = (−1)^{n−1} A/(2βn^2) ,   (7)
and L^{(α)}_n(t) is the n-th generalized Laguerre polynomial [18]. We refer to these expressions as Model A2. Comparing to Ref. [9], we use a different normalization c_n for the individual shapelets such that

∫_{−∞}^{+∞} σ(t; A, τ, β, n) dt = A ,   ∀ n ∈ ℕ .
In the frequency domain Eq. (6) reads
σ̃(ω; A, τ, β, n) = (−1)^n e^{−iωτ} A (nβω + i)^{n−1} / (nβω − i)^{n+1} .   (10)
Shapelets in this parametric family are quasi-orthogonal, i.e.
∫_{−∞}^{+∞} σ̃(ω; A, τ, β, n) σ̃*(ω; A′, τ, β, m) dω = δ_{nm} πAA′/(2nβ) ,   (11)

∫_{−∞}^{+∞} σ̃(ω; A, τ, β, n) σ̃*(ω; A′, τ′, β, n) dω = [πAA′/(2n^2β^2)] e^{−|τ−τ′|/(nβ)} (nβ + |τ − τ′|) .   (12)
From Eqs. (4) and (10) it is immediate to show that Model A1 tends to Model A2 with n = 1 in the limit where β_1 → β_2.
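For concreteness, the shapelet of Eqs. (6)-(7) can be evaluated with SciPy's generalized Laguerre polynomials; the sketch below is ours, and the loop checks numerically that every order integrates to the same net effect A, as stated above.

```python
import numpy as np
from scipy.special import genlaguerre

def shapelet_time(t, A, tau, beta, n):
    # Eqs. (6)-(7): c_n * psi_n((t - tau) / beta), zero before onset tau.
    x = np.maximum((t - tau) / beta, 0.0)
    c_n = (-1) ** (n - 1) * A / (2.0 * beta * n**2)
    psi = (2.0 * x / n) * np.exp(-x / n) * genlaguerre(n - 1, 1)(2.0 * x / n)
    return np.where(t >= tau, c_n * psi, 0.0)

# Every order integrates to the same transferred quantity A:
t = np.linspace(0.0, 4e3, 400_000)
for n in (1, 2, 3):
    s = shapelet_time(t, A=1.0, tau=100.0, beta=40.0, n=n)
    assert abs(np.trapz(s, t) - 1.0) < 1e-3
```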
B. Displacement transients
The interferometer readout system is also expected to generate transient phase fluctuations. From Eq. (1), we model these as effective displacement transients with the same agnostic shapelet parameterization used in Eq. (5). We use a superposition of S shapelets
∆L(t) = Σ_{i=1}^{S} σ(t; D_i, τ_i, β_i, n_i) ,   (13)

where

∫_{−∞}^{+∞} ∆L(t) dt = Σ_{i=1}^{S} D_i   (14)
is the net integrated displacement experienced by the test mass before returning asymptotically to its free-fall condition. We refer to this parametric family of glitches as Model D. The frequency domain representation follows from Eq. (10) and reads
σ̃(ω; D, τ, β, n) = (−1)^n e^{−iωτ} D (nβω + i)^{n−1} / (nβω − i)^{n+1} .   (15)
C. GW transients
Among the large variety of typical sources populating the LISA sensitivity band, the most massive detectable binary systems produce hours- to years-long transient signals. To leading order, the binary time to merger t_m from a reference frequency f_ref is [19, 20]

t_m ∼ (3/4η) (f_ref / 0.1 mHz)^{−8/3} (M_z / 10^7 M_⊙)^{−5/3} days ,   (16)

where η ≡ m_1 m_2/(m_1 + m_2)^2 is the symmetric mass ratio and M_z = (1 + z)(m_1 + m_2) is the solar-system barycenter frame total mass for a source of component masses m_1 and m_2. By contrast, glitches observed by LPF have typical durations of seconds to hours and are positively correlated with the transferred momentum per unit mass, ranging from 10^{−2} to 10^3 pm/s [8]. Their broadband, short-lived morphology makes them the most likely to impact parameter estimation for GW transient sources of comparable duration.
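The scaling of Eq. (16) is simple to evaluate; a one-function sketch (ours) with the normalizations written out explicitly:

```python
def time_to_merger_days(f_ref_mhz, total_mass_msun, eta):
    # Leading-order Eq. (16): t_m in days, for f_ref in mHz, the redshifted
    # total mass M_z in solar masses, and symmetric mass ratio eta.
    return (3.0 / (4.0 * eta)) \
        * (f_ref_mhz / 0.1) ** (-8.0 / 3.0) \
        * (total_mass_msun / 1e7) ** (-5.0 / 3.0)

# An equal-mass 10^7 Msun binary entering the band at 0.1 mHz lasts ~3 days:
print(time_to_merger_days(0.1, 1e7, 0.25))  # -> 3.0
```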
We select three fiducial noise transients and superimpose them on a short-lived (t_m = 30 hours) high-mass (M_z = 6 × 10^7 M_⊙, η = 3/16) MBHB at redshift z = 5. We assume zero sensitivity below 0.1 mHz [16]. We consider a short-duration Model D glitch (β = 5 s), a moderate-duration Model A2 glitch (β = 40 s), and a long-duration Model A1 glitch with β_1 + β_2 = 3300 s. All three glitches have peak amplitudes close to the merger time of the GW source, as shown in Fig. 2. For a conservative approach, we fine-tune the glitch onset times to maximally impact the reconstruction of GW source parameters. This is done by maximizing the match between the glitch and GW waveforms as shown in Fig. 3 (see Sec. V for more details).
We model the GW signal with the IMRPhenomXHM [21, 22] waveform approximant, which captures the full coalescence of a quasi-circular, non-precessing black-hole binary. The implementation of the LISA response to this GW signal in the Balrog code has been presented in Ref. [16].
We choose to parametrize the injected GW signal as follows: m_{1z,2z} and χ_{1,2} denote the binary component redshifted masses and aligned dimensionless spins, respectively; t_m, ϕ_0, ψ denote the time to merger introduced in Eq. (16), initial phase, and polarization, respectively; sin β, λ denote the (sine-)ecliptic latitude and longitude; d_L and ι denote the source luminosity distance and inclination. Tables III, IV, and V list the parameter values of our fiducial GW source, which has an SNR of 187 and is common across all of our runs.
IV. ACCELERATION TDIS
We use Eqs. (4), (10), and (15) to model the TDI variables [23] s̃_k(f; θ) entering the likelihood, cf. Sec. V. We work in the constant equal-armlength approximation and label the three TDI variables M_X, M_Y, and M_Z, respectively. In this approximation, one needs a single time-delay operator

D[f(t)] = f(t − L) .   (17)

This is applied to the single-link phase measurements y_{ijk}. Signals denoted by y_{ijk} or y_{ij′k} are emitted by the i-th satellite, received by the k-th satellite, therefore traveling along either L_j or L_{j′} (see Fig. 1 for a schematic representation). The indexes j and j′ are used to denote cyclic and anti-cyclic permutations of 123, respectively. We thus obtain the TDI variables

M_X = y_{231} + D y_{13′2} − y_{32′1} − D y_{123} ,   (18)
M_Y = y_{312} + D y_{32′1} − y_{21′3} − D y_{231} ,   (19)
M_Z = y_{123} + D y_{21′3} − y_{13′2} − D y_{231} .   (20)
Incorporating Model A1 and Model A2 signals into Eqs. (18), (19), and (20) requires integrating the single-link differential accelerations twice. However, any nonzero total transferred momentum necessitates artificial regularization or ad-hoc approximations to construct a Fourier-domain representation of the signal. We solve this problem by introducing a set of "acceleration TDIs" G_{X,Y,Z} which are trivially related to Eqs. (18), (19), and (20) by double differentiation. In the frequency domain one has

F[G_X] = (2πf)^2 F[M_X] ,   (21)
G_X = g_{231} + D g_{13′2} − g_{32′1} − D g_{123} ,   (22)
where F denotes the Fourier transform operator and

g_{ijk}(t) = (d^2/dt^2) y_{ijk}(t) .   (23)

Similar definitions hold for G_Y, G_Z upon cyclic permutation of indices.
The key advantage of introducing a new set of TDIs lies in its instrumental robustness. Equation (21) also allows us to conveniently recycle signal models available in fractional displacement, by including both Model D glitches and GW signals. Furthermore, Eq. (23) does not require a transfer function to model acceleration glitches.
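Equation (23) itself needs nothing beyond numerical differentiation of a uniformly sampled single-link series; a minimal finite-difference sketch (assuming a constant sampling step dt):

```python
import numpy as np

def single_link_acceleration(y, dt):
    # Eq. (23): second time derivative of the single-link series y_ijk(t),
    # here via two applications of a central finite difference.
    return np.gradient(np.gradient(y, dt), dt)
```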
Following the conventions shown in Fig. 1, the single-link perturbation g_{ijk}(t) is obtained from the instantaneous accelerations ⃗g_i(t) and ⃗g_k(t − L) which are experienced by sender i and receiver k along the link j, and projected along the unit vectors â_j(t − L) and â_{j′}(t), respectively. We associate a unit vector â_j to each test mass M_j pointing in the direction opposite to L_j. For simplicity, we denote the associated vector components a_j. Given the choice of the local reference system, a positive value a_i corresponds to a negative displacement ∆L_i. The three TDI observables in terms of the individual test-mass accelerations are
G_X = (1 + D^2)(a_{2′} − a_3) + 2D(a_2 − a_{3′}) ,   (24)
G_Y = (1 + D^2)(a_{3′} − a_1) + 2D(a_3 − a_{1′}) ,   (25)
G_Z = (1 + D^2)(a_{1′} − a_2) + 2D(a_1 − a_{2′}) .   (26)
It is important to note how the acceleration TDI variable G_X (G_Y, G_Z) is insensitive to glitches acting on links L_1 and L_{1′} (L_2 and L_{2′}, L_3 and L_{3′}). This would no longer be true if a single glitch affected more than one TM (or more optical phase measurements); further modeling on this point will be presented elsewhere. Following the standard procedure [23], we combine G_X, G_Y, and G_Z into three noise-orthogonal variables
G_A = (G_Z − G_X)/√2 ,   (27)
G_E = (G_X − 2G_Y + G_Z)/√6 ,   (28)
G_T = (G_X + G_Y + G_Z)/√3 .   (29)
Equations (27), (28), and (29) define the data pieces entering our inference pipeline.
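Putting Eqs. (24)-(29) together in the frequency domain, where the delay operator of Eq. (17) becomes a phase factor, gives a compact helper; the dictionary keys and function name below are our own conventions.

```python
import numpy as np

def acceleration_aet(a, L, f):
    """Eqs. (24)-(29) in the Fourier domain.

    a: dict of Fourier-domain test-mass accelerations keyed by
       '1', '2', '3', '1p', '2p', '3p' ('p' marks the primed test mass);
    L: constant, equal armlength in seconds; f: frequency array in Hz.
    """
    D = np.exp(-2j * np.pi * f * L)  # delay operator of Eq. (17)
    GX = (1 + D**2) * (a['2p'] - a['3']) + 2 * D * (a['2'] - a['3p'])
    GY = (1 + D**2) * (a['3p'] - a['1']) + 2 * D * (a['3'] - a['1p'])
    GZ = (1 + D**2) * (a['1p'] - a['2']) + 2 * D * (a['1'] - a['2p'])
    GA = (GZ - GX) / np.sqrt(2.0)
    GE = (GX - 2.0 * GY + GZ) / np.sqrt(6.0)
    GT = (GX + GY + GZ) / np.sqrt(3.0)
    return GA, GE, GT
```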
V. INFERENCE
The initial search for a GW in noisy data is achieved through matched-filtering techniques [24] which provide initial guesses on the signal parameters. If glitches are present, their preliminary detection and subtraction might not be sufficient to provide data that are sufficiently clean to accurately infer the parameters of the astrophysical source [9]. Previous studies presented a matching-pursuit algorithm for automated and systematic glitch detection [25], showing that, while the search grid on the damping parameter is too coarse to accurately obtain the best-fit glitch, it provides a reliable initial guess. For practical purposes, here we assume that such a guess has been identified from the data and can be used to inform our subsequent analyses.
We perform a joint parameter estimation, fitting simultaneously for GW signals and noise artifacts. We construct posteriors on parameters θ,

p(θ|d) ∝ L(d|θ) π(θ) ,   (30)
through stochastic sampling of the likelihood L(d|θ) under a prior π(θ). We employ a coherent analysis on the three noise-orthogonal TDI channels d = {d_k; k = M_A, M_E, M_T} when considering displacement variables and d = {d_k; k = G_A, G_E, G_T} when considering acceleration variables. We use a Gaussian likelihood [26]

ln L(d|θ) = −Σ_k (d_k − s̃_k(θ) | d_k − s̃_k(θ))_k / 2 + const. ,   (31)
where s̃_k is the k-th TDI output frequency series associated to the injected signal s̃(f; θ). The output s̃_k represents either accelerations or fractional displacements depending on the chosen TDI variable set, thus containing acceleration glitches, displacement glitches, GW transients, or a combination of these (cf. Sec. III). The noise-weighted inner product is defined as
(a | b)_k = 4ℜ ∫_{f_min}^{f_max} [ã*(f) b̃(f) / S_k(f)] df ,   (32)
where ℜ denotes the real part, ã(f) is the Fourier transform of the time series a(t), and S_k(f) is the one-sided noise spectral density of the k-th TDI channel. We use the match between two signals,
M(a, b) = (a | b) / [(a | a)^{1/2} (b | b)^{1/2}] ,   (33)
to optimize the onset time of the injected glitches as discussed in Sec. III C. Model selection is performed using log-Bayes factors

log_{10} B^{j}_{i} = log_{10} Z_i − log_{10} Z_j ,   (34)
where i and j are labels identifying the competing models, and
Z(d) = ∫ dθ L(d | θ) π(θ)   (35)
is the evidence of each parameter estimation. We consider a LISA mission lifetime of T_LISA = 4 years, roughly equivalent to a calendar observation time of 4.5 years with an effective duty cycle of 82%. Our frequency resolution is therefore ∆f ≈ 1/T_LISA = 1.7 × 10^{−8} Hz. We set f_min = 0.1 mHz and f_max = 30 mHz, which is well above the fiducial GW and the maximum frequencies of all glitch signals. We use a semi-analytical noise spectral density model S_k(f) [27] describing the superposition of LISA stationary instrumental noise and astrophysical confusion noise from unresolved Galactic binaries [28]. In order to reduce the computational cost, we evaluate inner products from Eq. (32) using a Clenshaw-Curtis integration algorithm [29]; see e.g. Ref. [13] for a summary of its application to LISA data.
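On a uniform frequency grid, Eqs. (32) and (33) reduce to weighted sums. The sketch below uses a plain Riemann sum for illustration, whereas the pipeline itself relies on Clenshaw-Curtis quadrature as stated above.

```python
import numpy as np

def inner_product(a_f, b_f, psd, df):
    # Discretized Eq. (32): noise-weighted inner product of two
    # frequency-domain series sampled with uniform spacing df.
    return 4.0 * np.real(np.sum(np.conj(a_f) * b_f / psd)) * df

def match(a_f, b_f, psd, df):
    # Eq. (33): overlap normalized by the two optimal SNRs.
    aa = inner_product(a_f, a_f, psd, df)
    bb = inner_product(b_f, b_f, psd, df)
    return inner_product(a_f, b_f, psd, df) / np.sqrt(aa * bb)
```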
Parameter estimation is performed with the Balrog code, which is designed to work with different stochastic samplers. In particular, in this paper we use the nested sampling algorithm [30] as implemented in Nessai [31]. We choose uniform priors on each parameter over either its entire definition domain or a range that is sufficiently large to enclose the entire posterior.
TABLE I. Summary of our runs containing single glitches and the fiducial GW signal. Rows highlighted in teal denote runs where the recovery signal model matches that of the injection. We first perform disjoint parameter estimation on our fiducial GW source and three glitch models (IDs 1-5). We then generate signals from the superposition of a GW signal with single glitches and study them both ignoring (IDs 6-8) and including (IDs 9-11) the glitch in the data model. Finally, we generate GW signals and perform parameter estimation on them including glitches in the data model (IDs 12-14).

ID | Injected glitch | Injected GW | Recovered glitch | Recovered GW | log10 Z | Figure | Table
1 | ✗ | ✓ (acceleration TDI) | ✗ | ✓ (acceleration TDI) | −35.27 | - | -
2 | ✗ | ✓ (displacement TDI) | ✗ | ✓ (displacement TDI) | −35.19 | - | -
3 | A1 | ✗ | A1 | ✗ | −14.0 | - | -
4 | A2 | ✗ | A2 | ✗ | −9.1 | - | -
5 | D | ✗ | D | ✗ | −8.8 | - | -
6 | A1 | ✓ | ✗ | ✓ | −14537.8 | 4 | III
7 | A2 | ✓ | ✗ | ✓ | −296.5 | 5 | IV
8 | D | ✓ | ✗ | ✓ | −48.8 | 6 | V
9 | A1 | ✓ | A1 | ✓ | −46.8 | 4 | III
10 | A2 | ✓ | A2 | ✓ | −43.9 | 5 | IV
11 | D | ✓ | D | ✓ | −40.8 | 6 | V
12 | ✗ | ✓ | A1 | ✓ | −75.0 | - | -
13 | ✗ | ✓ | A2 | ✓ | −44.1 | - | -
14 | ✗ | ✓ | D | ✓ | −52.2 | - | -
TABLE II. Summary of a large set of injected glitches and associated recoveries. Injected glitches are labeled by X(i, n), with X, i, and n describing the glitch model, the injection point, and the shapelet order (when applicable), respectively. We explore the number of components and shapelet order (IDs 16-20), the number of glitches (ID 21), and the potential misidentification of the injection point (IDs 22-27). Additionally, we simulate data from glitches of increasing complexity (IDs 28, 29) and consider three representative glitches inspired by LPF data (IDs 30-32). These are a short duration and small amplitude glitch (A1s), a medium duration and amplitude glitch (A1m), and a long duration and large amplitude glitch (A1l). Runs with the same injected signals are grouped together.

ID | Injected glitches | Recovered glitches | log10 Z | Figure | Table
15 | D(1,1), D(1,2) | D(1,1), D(1,2) | −16.1 | - | VII
16 | D(1,1), D(1,2) | D(1,1), D(1,3) | −18.0 | - | -
17 | D(1,1), D(1,2) | D(1,2), D(1,3) | −20.1 | - | -
18 | D(1,1), D(1,2) | D(1,1) | −22.9 | - | -
19 | D(1,1), D(1,2) | D(1,2) | −23.9 | - | -
20 | D(1,1), D(1,2) | D(1,3) | −34.4 | - | -
21 | D(1,1), D(1,2) | D(1,1), D(1,2), D(1,3) | −17.0 | - | -
22 | D(1,1) | D(1,1) | −15.2 | - | VII
23 | D(1,1) | D(2,1) | −3650.20 | - | -
24 | D(1,1) | D(3,1) | −3650.18 | - | -
25 | D(1,1) | D(1′,1) | −224.1 | - | -
26 | D(1,1) | D(2′,1) | −3628.6 | - | -
27 | D(1,1) | D(3′,1) | −3640.8 | - | -
28 | D(1,2), D(1,3), D(3,1), D(3,2) | D(1,2), D(1,3), D(3,1), D(3,2) | −34.8 | 9 | VII
29 | A2(1,1), A2(2′,2) | A2(1,1), A2(2′,2) | −15.9 | 8 | VI
30 | A1s(1) | A1s(1) | −13.6 | 7 | VI
31 | A1m(1) | A1m(1) | −18.0 | 7 | VI
32 | A1l(1) | A1l(1) | −16.6 | 7 | -
VI. RESULTS
We perform two sets of parameter-estimation runs:
(i) Joint inference runs on both GW signal and glitches (Sec. VI A), listed with IDs 1 to 14 in Table I.

(ii) Inference runs where we inject and recover glitches without GW signal (Sec. VI B), listed with IDs 15 to 32 in Table II.
A. Joint inference with glitches and GWs
If a preliminary search fails to identify and remove a glitch from the data, it is important to assess its impact on the parameters of the overlapping GW source. We thus tackle the following cases for each of the three signals illustrated in Fig. 2:

• Parameter estimation in the absence of a glitch in the data ("reference" runs, with IDs 1 and 2).

• Parameter estimation ignoring a glitch when present in the data ("glitch-ignorant" runs, with IDs 6-8).

• Parameter estimation including in the signal model a glitch that is present in the data ("glitch-complete" runs, with IDs 9-11).

While accounting for the presence of a glitch (ID 9) allows for joint unbiased reconstruction of all parameters, ignoring its potential occurrence (ID 6) yields large systematic biases. Ignoring the presence of a glitch is disfavored with log10 B^6_9 = −14491. Joint posterior distributions for both these runs are shown in Fig. 4. For comparison, the bottom row shows our reference run where we only inject the GW source (ID 1). The subset of parameters common across runs 1 and 10 does not show appreciable differences.
Bayesian evidence for each run is listed in Tab. I. We report log10 B^6_9, log10 B^7_10, and log10 B^8_11 much greater than 2, indicating "decisive" evidence [32] in favor of a glitch being present in the data.
Summaries are provided in Tables III, IV, and V. We find no appreciable differences in the posterior distribution of the GW-source parameters when comparing reference runs and glitch-complete runs, which is encouraging for LISA science. Individual parameters are well reconstructed, which is expected given the brightness of the source (SNR ≃ 187). In particular, the MBHB component masses, the primary aligned spin component, and the time to merger are measured with an accuracy of ∆m_i/m_i ≈ 8-40%, ∆χ_1 ≈ 0.2, and ∆t_m ≈ 600 s (where we quote the 90% credible interval of the marginal posterior distributions). Figures 4, 5, and 6 show the posterior distribution for the fiducial MBHB of each glitch-complete run. Similarly, we do not report any appreciable difference when using either fractional displacement or acceleration TDIs to model the same GW signal (see runs 1 and 2).
On the contrary, glitch-ignorant runs point to a different conclusion. The resulting posterior depends on the chosen duration and amplitude of each transient (see runs 7, 8, and 9). We find a long-duration, small-amplitude Model A1 glitch massively contaminates the reconstruction of the GW parameters, to a point that the signal cannot be recovered at all. This is shown in Fig. 4, where the glitch-ignorant distribution (red) shows evident issues in the underlying stochastic-sampling procedure. This has to be contrasted with the regularity of the glitch-complete posterior distribution (blue), where instead the parameters of both GW signal and noise transient are successfully recovered. In particular, when the glitch is ignored we find that the posterior on the luminosity distance rails heavily against the lower bound of its prior, thus making the GW source reconstruction highly biased, even in a parameter space that largely encloses the posterior of the glitch-complete run.
As shown in Fig. 5, a Model A2 glitch with moderate duration and amplitude induces milder biases. Although the posterior support is far from the prior boundaries, the injected values lie outside the 99% credible interval for both mass and spin parameters. For the merger time, the true value lies at the 97% confidence level of the corresponding marginalized posterior distribution. The injected values of polarization, initial phase, inclination, and source position are within their one-dimensional 90% confidence intervals.
Equivalent runs for a Model D glitch are shown in Fig. 6. This is a noise transient that overlaps with the GW signal only for a small fraction of a cycle. As expected, we find such a glitch does not significantly impact the measurement of the GW parameters.
Finally, we note that our glitch-complete runs do not exhibit significant cross-correlations between the glitch and GW parameters, thus effectively decoupling the inference on the two signals.
B. Inference with glitches alone, without GWs
We consider all three glitch models presented in Sec. III and inject them separately in the LISA data stream. Results are shown in Figs. 7, 8, and 9 as well as Tables VI and VII.
We perform model selection with different (i) number and order of shapelet components, (ii) number of glitches, and (iii) injection point. In particular, in Tab. II we report "strong" evidence in favor of the correct noise-transient model for the selection of the number and order of shapelets; these are discrete parameters we can confidently identify using log10 B^j_15 with j = 16, ..., 20. We obtain a "substantial" evidence log10 B^21_15 = 0.9 for selecting the correct number of glitches. Injection points are selected with a "decisive" evidence given by B^n_22 with n = 23, ..., 27.
All runs point to the same, encouraging result: glitch parameters are confidently reconstructed. In particular, we recover amplitudes across all models (i.e. A, A_{0,1}, D_{0,1,2,3}) with accuracies of 1%-30% at the 90% credible level. Glitch-onset times are recovered with fractional accuracy ≲ 0.1%. The parameters β_i in Model D glitches are recovered with an accuracy of 20%. On the other hand, Model A1 glitches exhibit correlations and multimodalities in the joint posterior on β_1 and β_2. This is expected given the waveform degeneracy upon exchange of these two parameters, cf. Eqs. (2) and (4).
VII. CONCLUSIONS
We presented a parameter-estimation strategy to simultaneously extract GWs from MBHBs and glitches from future LISA data. We developed several models for noise transients inspired by those observed by LPF. Crucially, we point out that dealing with glitches in the frequency domain greatly benefits from expressing the LISA response function (i.e. the TDIs) in terms of acceleration instead of displacement as usually done.
Accounting for potential noise transients in the data leads to accurate reconstruction of all GW parameters without significant correlations with the glitch properties. On the contrary, ignoring glitches when present in the data might introduce significant systematic biases on the reconstructed parameters of the MBHB. Our analysis shows that the most crucial property is the length of the glitch, with results ranging from a complete loss of the GW signal to a negligible impact. When considering glitches in isolation, our procedure allows for confident identification of their number, location, and morphology in each of the models considered.
It is important to stress that all glitch models in our suite have a relatively low number of parameters and these are largely uncorrelated to those of the GW source. The computational overhead of including potential glitches in the signal model is therefore negligible, thus making our approach promising for a future "global fit" procedure.
This study is restricted to a single fiducial GW source, and glitches are conservatively placed at the time location that maximizes their match with the GW signal. A broader injection-recovery study over the full MBHB and glitch parameter space is needed to forecast the impact of noise transients on GW signals in the future LISA catalog; this is left to future work.
Overall, this paper showcases our readiness to model and precisely recover glitches when present in the LISA data stream, even when overlapping with GW sources of similar duration such as a MBHB. Table III. When the glitch is included in the inference, each model injected parameter is recovered within the 90% one-dimensional credible region. We do not report notable correlations between glitch and GW parameters. If the glitch is excluded, all MBHB parameters except the initial phase ϕ0 and the polarization angle ψ are systematically biased. In particular, the posterior on the luminosity distance dL rails heavily against the prior lower bound. FIG. 5. Posterior distribution in blue (red) corresponding to run ID 6 (9) where a Model A2 glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table IV. When the glitch is ignored, the MBHB parameters are somewhat biased; see in particular the black-hole masses and spins. When the glitch is included in the recovery process, all model parameters are recovered within their 90% one-dimensional credible regions. We do not report notable correlations between glitch and GW parameters. Table V. When the glitch is ignored, the MBHB parameters are very mildly biased. In both cases, all model parameters are recovered within their 90% one-dimensional credible regions.
ACKNOWLEDGMENTS

We thank Chris Moore, Federico Pozzoli, Eleonora Castelli, Natalia Korsakova, Stas Babak, Martina Muratore, and all Balrog developers for useful comments and inputs. A.S. and D.G. are supported by ERC Starting Grant No. 945155-GWmining, Cariplo Foundation Grant No. 2021-0555, and MUR PRIN Grant No. 2022-Z9X4XS. A.S., D.G., and R.B. are supported by the ICSC National Research Center funded by NextGenerationEU. R.D., M.C., S.V., D.V., and W.J.W. acknowledge funding from MUR under the grant PRIN 2017-MB8AEZ. R.B. acknowledges support through the Italian Space Agency grant Phase A activity for LISA mission, Agreement n. 2017-29-H.0. D.G. is supported by Leverhulme Trust Grant No. RPG-2019-350. Computational work was performed using the University of Birmingham BlueBEAR High Performance Computing facility and CINECA with allocations through INFN, Bicocca, and ISCRA project HP10BEQ9JB.

Software: We acknowledge usage of Mathematica [33] and of the following Python [34] packages for modeling, analysis, post-processing, and production of results throughout: Nessai [31], matplotlib [35], numpy [36], scipy [37].
FIG. 1. Schematics of single laser links and glitch reference system conventions. The constellation is made of three satellites (white circles), each housing two TMs (right inset, yellow and gray boxes). Each satellite is connected to the other two by four links, two for each TM. Signals denoted by y_{ijk} or y_{ij′k} are emitted by the i-th satellite, received by the k-th satellite, therefore traveling along either L_j or L_{j′}. The indexes j and j′ are used to denote cyclic and anti-cyclic permutations of 123, respectively. Unit vectors â_j parametrize the glitch component along the incoming (outgoing) link L_{j′} (L_j) associated with the test mass M_{j′}. On satellite 1, generic acceleration glitches acting on test masses M_{2′} and M_3 are described by the components a_{2′} and a_3, respectively. The former [latter] affects link y_{32′1}(t) [y_{231}(t)] at reception and link y_{123}(t − L) [y_{13′2}(t − L)] at emission.
FIG. 2. Fiducial waveforms for our parameter-estimation runs. Black solid curves show the MBHB signal we consider (M_z = 6 × 10^7 M_⊙ and z = 5), which is identical across the three panels. Colored curves in the top, middle, and bottom panels describe the Model A1, Model A2, and Model D glitch amplitudes, respectively. Signals shown in the three panels correspond to injections in runs 9, 10, and 11 and exemplify glitches lasting hours, minutes, and seconds, respectively (cf. Table I). The parameters of the injected signals are shown in Tables III, IV, and V.
FIG. 3. Match between the GW and glitch signals as a function of onset time. The blue, green, and red solid curves correspond to the Model A1, A2, and D glitches shown in Fig. 2, respectively. The GW signal is fixed to that of our fiducial MBHB. Dashed vertical lines with matching color denote the onset time that maximizes the match. The black dotted line denotes the GW source nominal merger time. The inset shows a 40-minute interval zoom-in around the merger and glitch onset times.
TABLE V. Parameter estimation results for a GW signal contaminated by a Model D glitch. Results are organized as in Table III. This glitch, if present in data and ignored upon inference, introduces negligible biases when compared to runs with IDs 6 and 7. This is due to its very short duration, which superimposes with the GW signal only for a few seconds, yielding a low match. Joint posterior distributions for both runs are shown in Fig. 6.
FIG. 4. Posterior distribution in blue (red) corresponding to run ID 7 (10) where a Model A1 glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table III. When the glitch is included in the inference, each model injected parameter is recovered within the 90% one-dimensional credible region. We do not report notable correlations between glitch and GW parameters. If the glitch is excluded, all MBHB parameters except the initial phase ϕ_0 and the polarization angle ψ are systematically biased. In particular, the posterior on the luminosity distance d_L rails heavily against the prior lower bound.

FIG. 5. Posterior distribution in blue (red) corresponding to run ID 6 (9) where a Model A2 glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table IV. When the glitch is ignored, the MBHB parameters are somewhat biased; see in particular the black-hole masses and spins. When the glitch is included in the recovery process, all model parameters are recovered within their 90% one-dimensional credible regions. We do not report notable correlations between glitch and GW parameters.
FIG. 6. Posterior distribution in blue (red) corresponding to run ID 8 (11) where a Model D glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table V. When the glitch is ignored, the MBHB parameters are very mildly biased. In both cases, all model parameters are recovered within their 90% one-dimensional credible regions.
TABLE III. Parameter estimation results for a GW signal contaminated by a Model A1 glitch. The injected parameters are listed in the white rows. Medians and 90% credible intervals for the recovered posteriors are listed in the two rows highlighted in teal.
TABLE IV. Parameter estimation results for a GW signal contaminated by a Model A2 glitch. Results are organized as in Table III. This glitch, if present in data and ignored upon inference, introduces milder biases when compared to the run with ID 6: this is due to its shorter duration resulting in a smaller match with the GW waveform. Joint posterior distributions for both runs are shown in Fig. 5.
TABLE VI. Parameter-estimation results on Model A1 (IDs 30-32) and Model A2 (ID 29) glitches. In particular, the former corresponds to glitches inspired by LPF observations, with varying duration and amplitudes. White rows show the injected values and teal rows show the recovered median and 90% confidence interval. The posterior distribution for these runs is provided in Figs. 7 and 8.

Model A1:
ID | A [pm/s] | β_1 [s] | β_2 [s] | τ [h]
30 (injected) | 0.3 | 21.0 | 20.0 | 12.0
30 (recovered) | 0.300^{+0.004}_{−0.004} | 20^{+13}_{−17} | 20^{+13}_{−17} | 12.001^{+0.003}_{−0.003}
31 (injected) | 2.0 | 900.0 | 400.0 | 12.0
31 (recovered) | 2.00^{+0.02}_{−0.02} | 439^{+485}_{−55} | 848^{+75}_{−465} | 12.000^{+0.002}_{−0.002}
32 (injected) | 100.0 | 7500.0 | 7400.0 | 12.0
32 (recovered) | 102^{+5}_{−4} | 7453^{+2211}_{−1417} | 7453^{+2201}_{−1402} | 12.000^{+0.002}_{−0.002}

Model A2:
ID | A_0 [pm/s] | β_0 [s] | τ_0 [h] | A_1 [pm/s] | β_1 [s] | τ_1 [h]
29 (injected) | 1.48 | 3600.0 | 11.94 | 3.72 | 3600.0 | 36.94
29 (recovered) | 1.6^{+0.6}_{−0.4} | 3735^{+770}_{−543} | 11.94^{+0.03}_{−0.03} | 4.2^{+2.4}_{−1.4} | 3848^{+993}_{−719} | 36.93^{+0.04}_{−0.04}

TABLE VII. Parameter estimation results assuming Model D glitches of increasing complexity. White rows show the injected values and teal rows show the recovered median and 90% confidence interval. In particular, we consider a single-component glitch (ID 22), a glitch with two components (ID 15), and two glitches separated by 200 seconds with two components each (ID 28). The posterior distribution for the latter, most complex case is shown in Fig. 9.

Glitch 1, Component 1:
ID | D_0 [pm·s] | β_0 [s] | τ_0 [h]
22 (injected) | 2480.0 | 20.0 | 12.0
22 (recovered) | 2481^{+64}_{−64} | 20.0^{+0.9}_{−0.9} | 12.0000^{+0.0003}_{−0.0004}
15 (injected) | 542.0 | 40.0 | 12.0
15 (recovered) | 672^{+536}_{−278} | 69^{+60}_{−29} | 11.997^{+0.005}_{−0.007}
28 (injected) | 5000.0 | 100.0 | 11.111
28 (recovered) | 5021^{+1461}_{−1355} | 101^{+25}_{−26} | 11.111^{+0.004}_{−0.003}

Glitch 1, Component 2:
ID | D_1 [pm·s] | β_1 [s] | τ_1 [h]
15 (injected) | 1420.0 | 80.0 | 12.0
15 (recovered) | 1336^{+690}_{−578} | 77^{+16}_{−15} | 12.015^{+0.010}_{−0.012}
28 (injected) | 1000.0 | 10.0 | 11.111
28 (recovered) | 986^{+489}_{−333} | 10.2^{+2.2}_{−1.9} | 11.111^{+0.003}_{−0.004}

Glitch 2, Component 1:
ID | D_2 [pm·s] | β_2 [s] | τ_2 [h]
28 (injected) | 5000.0 | 40.0 | 13.89
28 (recovered) | 4975^{+820}_{−821} | 40.3^{+3.8}_{−3.5} | 13.889^{+0.001}_{−0.002}

Glitch 2, Component 2:
ID | D_3 [pm·s] | β_3 [s] | τ_3 [h]
28 (injected) | 20000.0 | 120.0 | 13.89
28 (recovered) | 19822^{+3942}_{−3998} | 120.2^{+8.4}_{−7.6} | 13.89^{+0.02}_{−0.02}
P. Amaro-Seoane et al., (2017), arXiv:1702.00786 [astro-ph.IM].
P. Amaro-Seoane et al., Living Rev. Relativ. 26, 2 (2023), arXiv:2203.06016 [gr-qc].
P. Auclair et al., (2022), arXiv:2204.05434 [astro-ph.CO].
N. J. Cornish and J. Crowder, Phys. Rev. D 72, 043005 (2005), arXiv:gr-qc/0506059 [astro-ph].
T. B. Littenberg and N. J. Cornish, Phys. Rev. D 107, 063004 (2023), arXiv:2301.03673 [gr-qc].
M. Armano et al., Phys. Rev. Lett. 116, 231101 (2016).
M. Armano et al., Phys. Rev. Lett. 120, 061101 (2018).
M. Armano et al., Phys. Rev. D 106, 062001 (2022).
Q. Baghi, N. Korsakova, J. Slutsky, E. Castelli, N. Karnesis, and J.-B. Bayle, Phys. Rev. D 105, 042002 (2022), arXiv:2112.07490 [gr-qc].
A. Sasli, N. Karnesis, and N. Stergioulas, (2023), arXiv:2305.04709 [gr-qc].
E. Roebber, R. Buscicchio, A. Vecchio, C. J. Moore, A. Klein, V. Korol, S. Toonen, D. Gerosa, J. Goldstein, S. M. Gaebel, and T. E. Woods, Astrophys. J. Lett. 894, L15 (2020), arXiv:2002.10465 [astro-ph.GA].
R. Buscicchio, A. Klein, E. Roebber, C. J. Moore, D. Gerosa, E. Finch, and A. Vecchio, Phys. Rev. D 104, 044065 (2021), arXiv:2106.05259 [astro-ph.HE].
A. Klein, G. Pratten, R. Buscicchio, P. Schmidt, C. J. Moore, E. Finch, A. Bonino, L. M. Thomas, N. Williams, D. Gerosa, S. McGee, M. Nicholl, and A. Vecchio, (2022), arXiv:2204.03423 [astro-ph.HE].
E. Finch, G. Bartolucci, D. Chucherko, B. G. Patterson, V. Korol, A. Klein, D. Bandopadhyay, H. Middleton, C. J. Moore, and A. Vecchio, (2022), arXiv:2210.10812 [astro-ph.SR].
G. Pratten, A. Klein, C. J. Moore, H. Middleton, N. Steinle, P. Schmidt, and A. Vecchio, (2022), arXiv:2212.02572 [gr-qc].
G. Congedo, R. Dolesi, M. Hueller, S. Vitale, and W. J. Weber, Phys. Rev. D 88, 082003 (2013).
M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover, 1965).
P. C. Peters and J. Mathews, Phys. Rev. 131, 435 (1963).
L. Blanchet, Living Rev. Relativ. 17, 2 (2014), arXiv:1310.1528 [gr-qc].
C. García-Quirós, M. Colleoni, S. Husa, H. Estellés, G. Pratten, A. Ramos-Buades, M. Mateu-Lucena, and R. Jaume, Phys. Rev. D 102, 064002 (2020), arXiv:2001.10914 [gr-qc].
G. Pratten, S. Husa, C. García-Quirós, M. Colleoni, A. Ramos-Buades, H. Estellés, and R. Jaume, Phys. Rev. D 102, 064001 (2020), arXiv:2001.11412 [gr-qc].
M. Tinto and S. V. Dhurandhar, Living Rev. Relativ. 24, 1 (2021), arXiv:gr-qc/0409034 [gr-qc].
J. Neyman and E. S. Pearson, Philos. Trans. R. Soc. 231, 289 (1933).
S. G. Mallat and Z. Zhang, IEEE Trans. Signal Process. 41, 3397 (1993).
C. Cutler and É. E. Flanagan, Phys. Rev. D 49, 2658 (1994), arXiv:gr-qc/9402014 [gr-qc].
LISA Science Study Team, LISA Science Requirements Document, ESA-L3-EST-SCI-RS-001 (2018).
S. Babak, J. Gair, A. Sesana, E. Barausse, C. F. Sopuerta, C. P. L. Berry, E. Berti, P. Amaro-Seoane, A. Petiteau, and A. Klein, Phys. Rev. D 95, 103012 (2017), arXiv:1703.09722 [gr-qc].
C. Clenshaw and A. Curtis, Numer. Math. 2 (1960).
J. Skilling, AIP Conf. Proc. 735, 395 (2004).
M. J. Williams, J. Veitch, and C. Messenger, Phys. Rev. D 103, 103006 (2021), arXiv:2102.11056 [gr-qc].
R. E. Kass and A. E. Raftery, J. Am. Stat. Assoc. 90, 773 (1995).
G. Van Rossum and F. L. Drake, Python 3 Reference Manual (CreateSpace, 2009).
J. D. Hunter, Comput. Sci. Eng. 9, 90 (2007).
C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, et al., Nature 585, 357 (2020), arXiv:2006.10256 [cs.MS].
P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, et al., Nat. Methods 17, 261 (2020), arXiv:1907.10121 [cs.MS].
| [] |
[
"Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification",
"Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification"
] | [
"Christopher Neff [email protected] \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"· Armin \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"Danesh Pazho [email protected] \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"Hamed Tabkhi [email protected] \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"Armin Danesh \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"Pazho \nUniversity of North Carolina at Charlotte\nNCUSA\n",
"Hamed Tabkhi \nUniversity of North Carolina at Charlotte\nNCUSA\n"
] | [
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA",
"University of North Carolina at Charlotte\nNCUSA"
] | [
"Journal of Real-Time Image Processing"
] | Following the popularity of Unsupervised Domain Adaptation (UDA) in person re-identification, the recently proposed setting of Online Unsupervised Domain Adaptation (OUDA) attempts to bridge the gap towards practical applications by introducing a consideration of streaming data. However, this still falls short of truly representing real-world applications. This paper defines the setting of Real-world Real-time Online Unsupervised Domain Adaptation (R 2 OUDA) for Person Re-identification. The R 2 OUDA setting sets the stage for true real-world real-time OUDA, bringing to light four major limitations found in real-world applications that are often neglected in current research: system generated person images, subset distribution selection, time-based data stream segmentation, and a segment-based time constraint. To address all aspects of this new R 2 OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R 2 MMT), a novel multi-camera system for real-world person re-identification. Taking a popular person re-identification dataset, R 2 MMT was used to construct over 100 data subsets and train more than 3000 models, exploring the breadth of the R 2 OUDA setting to understand the training time and accuracy trade-offs and limitations for real-world applications.R 2 MMT, a real-world system able to respect the strict constraints of the proposed R 2 OUDA setting, achieves accuracies within 0.1% of comparable OUDA methods that cannot be applied directly to real-world applications. | null | [
"https://export.arxiv.org/pdf/2306.03993v1.pdf"
] | 259,095,846 | 2306.03993 | 4f3d1973eb8970b9bd19e2430cbce0c168a0eecd |
Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification
Christopher Neff [email protected]
University of North Carolina at Charlotte
NCUSA
· Armin
University of North Carolina at Charlotte
NCUSA
Danesh Pazho [email protected]
University of North Carolina at Charlotte
NCUSA
Hamed Tabkhi [email protected]
University of North Carolina at Charlotte
NCUSA
Armin Danesh
University of North Carolina at Charlotte
NCUSA
Pazho
University of North Carolina at Charlotte
NCUSA
Hamed Tabkhi
University of North Carolina at Charlotte
NCUSA
Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification
Journal of Real-Time Image Processing
Received: date / Accepted: date (Under Review)

Keywords: Person Re-identification · Online Learning · Unsupervised Learning · Domain Adaptation · Real-World · Real-Time · Computer Vision · Domain Shift · Mutual-Mean Teaching
Following the popularity of Unsupervised Domain Adaptation (UDA) in person re-identification, the recently proposed setting of Online Unsupervised Domain Adaptation (OUDA) attempts to bridge the gap towards practical applications by introducing a consideration of streaming data. However, this still falls short of truly representing real-world applications. This paper defines the setting of Real-world Real-time Online Unsupervised Domain Adaptation (R 2 OUDA) for Person Re-identification. The R 2 OUDA setting sets the stage for true real-world real-time OUDA, bringing to light four major limitations found in real-world applications that are often neglected in current research: system generated person images, subset distribution selection, time-based data stream segmentation, and a segment-based time constraint. To address all aspects of this new R 2 OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R 2 MMT), a novel multi-camera system for real-world person re-identification. Taking a popular person re-identification dataset, R 2 MMT was used to construct over 100 data subsets and train more than 3000 models, exploring the breadth of the R 2 OUDA setting to understand the training time and accuracy trade-offs and limitations for real-world applications.R 2 MMT, a real-world system able to respect the strict constraints of the proposed R 2 OUDA setting, achieves accuracies within 0.1% of comparable OUDA methods that cannot be applied directly to real-world applications.
Introduction
Person re-identification (ReID) is the task of matching a person in an image with other instances of that person in other images, either from the same camera or a different one. More specifically, it is associating a person's query with its match in a gallery of persons [46]. Person ReID is a common task in many real-world applications. Such applications include video surveillance (e.g. determining when unauthorized people are present in an area), public safety (e.g. understanding pedestrian motion to avoid accidents), and smart health (e.g. mobility assessment and fall detection for seniors needing assistance). Thus, achieving accurate and robust person ReID for any environment is an important research goal for the community.
Many methods have been developed for person ReID [18,41,52,53], and many high quality datasets have been created for the task [25,35,42,49,50]. Deep learning approaches have been able to achieve incredible accuracies, nearly reaching saturation in some cases [32,40,43,55]. However, person ReID is a highly context-specific task, and models trained on one dataset often fail to perform well on others [46]. Unsupervised
Challenge | UDA | OUDA | R 2 OUDA (Ours)
Data from target domain is only available through a data stream. | ✗ | ✓ † | ✓
Person crops are not provided and must be generated online. | ✗ | ✗ | ✓
There is no guarantee that every identity will be available during training. | ✗ | ✗ | ✓
The distribution of person crops must be determined online. | ✗ | ✗ | ✓
Training time must be accounted for. | ✗ | ✗ | ✓

Table 1: Challenges of Real-World Applications and if they are addressed in the UDA, OUDA, and R 2 OUDA settings. † Streaming data is simulated.
Domain Adaptation (UDA) has been studied to combat this domain shift [2,8,28,36,42,46]. In UDA, initial training is performed on the labeled data of the source domain, and then inference is done in a different target domain. UDA methods generally achieve lower accuracies than State-of-the-Art (SotA) deep learning approaches that train directly on the target domain. However, recent approaches have begun to close that gap [11,12,48].
One common thread among these approaches is the reliance on having the entirety of the target domain available at training time. While this is convenient for research, many practical applications do not have unrestricted access to the entire target domain. Recently, [33] introduced the setting of Online Unsupervised Domain Adaptation (OUDA). OUDA specifies that data from the target domain can only be accessed through a data stream, bringing research more in line with real-world applications. OUDA adopts a batch-based relaxation [9] where different identities are separated among batches to simulate streaming data. OUDA also argues that confidentiality regulations make it such that many real-world applications can only store data for a limited amount of time, applying a restriction that image data cannot be stored beyond the batch in which it was collected. Table 1 shows the challenges of real-world applications, and how UDA and OUDA fail to fully address them. Like UDA before it, OUDA uses hand-crafted person ReID datasets for the target domain. Not only is the data stream only simulated, but the provided person images were hand selected by the creators of the dataset. In a real-world system, person images need to be generated by the system itself, creating a layer of noise not present in hand-crafted datasets. Further, by using hand-crafted datasets, the distribution of person images is guaranteed to be suitable for training. Specifically, most person ReID datasets tend to have a fairly uniform distribution, having around the same number of person images for each identity [27]. However, in real-world applications, there is no guarantee that person images generated from streaming data will form a uniform distribution in identities. There is also no guarantee that every identity in the dataset will be available for training.
To bring the field closer to the real-world, this paper proposes Real-World Real-Time Online Unsupervised Domain Adaptation (R 2 OUDA), a setting designed to address the challenges found in real-world applications, as seen in Table 1. R 2 OUDA defines four major considerations beyond the OUDA setting needed to develop systems for the real world. First, R 2 OUDA considers that person images must be generated algorithmically from streaming data. Second, the distribution of data to be used in training must also be determined algorithmically. Third, R 2 OUDA expands the batch-based relaxation [9] of online learning to use time segments, relating the conceptual mini-batch to the real-world notion of time inherent in streaming data. Fourth, R 2 OUDA defines a time constraint such that the time spent training a single time segment cannot interfere with the training for subsequent time segments.
To address all aspects of the new R 2 OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R 2 MMT). R 2 MMT is an end-to-end multi-camera system designed for real-world person ReID. Using object detection, pedestrian tracking, human pose estimation, and a novel approach for Subset Distribution Selection (SDS), R 2 MMT is able to generate person crops directly from a data stream, filter them based on representation quality, and create a subset for training with a suitable distribution. To show the viability of R 2 MMT to meet the challenges of real-world applications, and to explore the breadth of the R 2 OUDA setting, an exhaustive set of experiments were conducted on the popular and challenging DukeMTMC dataset [35]. Using R 2 MMT, over 100 data subsets were created and more than 3000 models were trained, capturing the trade-offs and limitations of real-world applications and the R 2 OUDA setting. R 2 MMT is a real-world system that can meet the demanding requirements of the proposed R 2 OUDA setting, and is able to achieve over 73% Top-1 accuracy on DukeMTMC-reid, within 0.1% of comparable OUDA methods that cannot be directly applied for real-world applications.
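The SDS approach itself is described in the following sections; purely to illustrate the idea of shaping the identity distribution of system-generated crops, a toy sketch is given below. The `crops` input as (pseudo_id, image) pairs and the per-identity cap are our own illustrative assumptions, not the actual R 2 MMT algorithm.

```python
import random
from collections import defaultdict

def uniform_subset(crops, per_id, seed=0):
    # Group system-generated crops by pseudo-identity, then cap each
    # identity at per_id samples to approximate a uniform distribution.
    random.seed(seed)
    by_id = defaultdict(list)
    for pid, image in crops:
        by_id[pid].append(image)
    subset = []
    for pid, images in by_id.items():
        random.shuffle(images)
        subset.extend((pid, im) for im in images[:per_id])
    return subset
```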
To summarize, this paper's contributions are as follows:
- We define the setting of Real-World Real-Time Online Unsupervised Domain Adaptation, accounting for the challenges of real-world applications and bridging the gap between research and application.
- We propose Real-World Real-Time Online Streaming Mutual Mean-Teaching, a novel end-to-end multi-camera person ReID system designed to meet the challenges of R 2 OUDA and real-world applications.
- We perform exhaustive experimentation, creating over 100 data subsets and training over 3000 models, to explore the breadth of the R 2 OUDA setting and understand the trade-offs and limitations of real-world applications.
Related Work
The UDA setting for person ReID has been extensively explored by the research community [24,36,46,51]. In general, there are two main categories of algorithms used to perform UDA for person ReID: style transfer methods and target domain clustering methods.
Style Transfer
Style transfer based methods generally use Generative Adversarial Networks (GANs) [15] to perform imageto-image translation [20], modifying images from the source domain to look like the target domain without affecting the context of the original images. [4] uses self-similarity and domain-dissimilarity to ensure transferred images maintain cues to the original identity without matching to other identities in the target domain, while [14] introduces an online relation-consistency regularization term to ensure relations of the source domain are kept after transfer to the target domain. [28] separates transfers into factor-wise sub-transfers, across illumination, resolution, and camera view, to better fit the source images into the target domain. [2] uses a dual conditional GAN to transfer source domain images to multiple styles in the target domain, creating a multitude of training instances for each source identity. [42] uses a cycle consistent loss [54] with an emphasis on the foreground to better maintain identities between styles. [19] looks at domain shift as background shift and uses a GAN to remove backgrounds without damaging foregrounds, while a densely associated 2-stream network integrates identity related cues present in backgrounds.
Target Domain Clustering
Target domain clustering approaches focus on using clustering algorithms to group features of the target domain for use as labels to fine tune a neural network pre-trained on the source domain [7]. This is usually done in an iterative fashion, where clustering is performed between training epochs to update the group labels as the model learns. [45] proposes using a dynamic graph matching framework to better handle large cross-camera variations. [10] introduces a self-similarity group to leverage part-based similarity to build clusters from different camera views. [27] utilizes a diversity regularization term to enforce a uniform distribution among the sizes of clusters. [13] introduces hybrid memory to dynamically generate instance-level supervisory signal for feature representation learning. [11] builds on [38], using two teacher models and their temporally averaged weights to produce soft pseudo labels for target domain clustering. [3] utilizes both target domain clustering and adversarial learning to create camera invariant features and improve target domain feature learning.
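Abstracting over the specific methods above, the shared clustering-then-fine-tuning skeleton looks roughly as follows; `extract_features` and `fine_tune_one_epoch` are placeholder callables standing in for a concrete backbone and optimizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_training(images, extract_features, fine_tune_one_epoch,
                          n_clusters, n_rounds):
    # Generic target-domain clustering loop: re-cluster current features
    # into pseudo labels between epochs, then fine-tune on those labels.
    for _ in range(n_rounds):
        feats = np.asarray(extract_features(images))          # (N, d)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        fine_tune_one_epoch(images, labels)
```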
Online Unsupervised Domain Adaptation
While Online Unsupervised Domain Adaptation has been explored for other AI tasks [6,17,23,29,30,39,47], it was first defined for the field of person ReID in [33]. OUDA for Person ReID aims to create a practical online setting similar to that found in practical applications. OUDA builds upon the UDA setting by adding two considerations. First, data from the target domain is accessed via a data stream and not available all at once. Second, due to confidentiality concerns common in many countries, data from the target domain can only be stored for a limited time and only model parameters trained on that data may be persistent.
Proposed R 2 OUDA Setting
The proposed setting of Real-World Real-Time Online Unsupervised Domain Adaptation, building off OUDA [33], considers that we have access to a completely annotated source dataset D S as well as partial access to an unlabeled target dataset D T in the domain of our target application. In contrast to standard UDA, in both OUDA and R 2 OUDA the data from D T is only accessible as an online stream of data. Whereas both UDA and OUDA use person crops from hand crafted datasets, R 2 OUDA specifies that person crops from D T must be generated algorithmically from the data stream. This reflects how data is gathered in the real world. Where hand selected crops from datasets are generally highly representative, crops generated from a data stream will have varying levels of quality. This introduces noise in D T , both in quality and in the inevitable missed detections, which needs to be accounted for.
Additionally, hand crafted datasets choose person images to fit a distribution suitable for training. However, since crops in R 2 OUDA are generated from streaming data, such a distribution cannot be assumed. This leads to the second consideration of R 2 OUDA, that the distribution of data to be used in training must be determined algorithmically. Instead of relying on a predefined set of person images, systems must generate their own data subset, determining its size and distribution appropriately. This also reflects the real-world, as it is rarely known beforehand the amount and distribution of person crops that will be collected by an application.
Continuing with the batch-based relaxation [9] of the online learning scenario proposed in [33], we further introduce a time constraint for R 2 OUDA. First, instead of separating our "mini-batches" ("tasks" as defined in [33]) across identities, since R 2 OUDA requires actual streaming data, the data stream is separated into discrete time segments. We consider that for a chosen time segment of length τ , the streaming data will be divided into equal, non-overlapping time segments of length τ whose combined contents are equivalent to the original data stream.
For R 2 OUDA, we must account both for applications that run continuously (i.e. the total length of the data stream is infinite) and the fact that, in the real world, computation resources are not unlimited. This leads to the necessity of a time constraint, but one that is not simple to define. Training time is inherently linked to hardware, and there are many techniques to hide latency or increase throughput in system design. As such, we simply define the time constraint such that, for any time segment τ i , the length of time spent training on data collected during τ i must be such to not interfere with the training for the data collected during τ i+1 . This is to prevent the training time deficit from increasing infinitely as i increases.
In summary, R 2 OUDA introduces four new considerations to better match real-world applications:
- Person crops from the target domain must be generated algorithmically from a data stream.
- The selection and distribution of data to be used in training must be determined algorithmically.
- An expansion of the batch-based relaxation to use time segments, relating the conceptual mini-batch to the real-world notion of time inherent in streaming data.
- An additional time constraint such that the time spent training a single time segment cannot interfere with the training for any subsequent time segments.
Real-World Real-Time Online Streaming MMT
To address the challenges of R 2 OUDA, we present Real-World Real-Time Online Streaming Mutual Mean-Teaching, a novel multi-camera system for real-world person ReID. Similar to [31], R 2 MMT is comprised of multiple Local Nodes and a single Global Node. Local nodes have access to the data stream directly from the cameras and are responsible for generating quality person images. The Global Node has access to all data generated by Local Nodes and is responsible for global ReID, subset distribution selection, and target domain training. An overview of R 2 MMT can be seen in Fig. 1.
On the Local Node, YOLOv5 [22] is used as an object detector to find people in the video stream. Image crops are created for each person and sent to both a pose estimator (HRNet [37]) and a ReID feature extractor (ResNet-50 [16]). Coordinates for each person and features generated by the feature extractor are sent to a tracker [44] for local ReID. Afterward, feature and crop selection are performed to ensure that features and person crops sent to the Global Node for global ReID and crop collection are highly representative. This process utilizes person bounding box coordinates from the tracker to filter out any persons that have significant overlap (IoU >= 0.3) with other persons. This limits the number of crops used for training and features used for ReID that contain multiple persons. The pose estimator is used to determine the quality of the features themselves. We reason that if a highly representative feature is present, then poses generated from the person crop should be of high confidence, while the number of keypoints present can help determine if there is significant occlusion or cutoff. Only crops and features with poses containing 15 or more keypoints (out of 17 total [26]) with at least 50% confidence are sent to the Global Node.
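A minimal sketch of this crop and feature selection logic is given below; the function and field names are hypothetical, while the IoU threshold (0.3), keypoint count (15 of 17), and confidence cutoff (50%) follow the text.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_crops(detections, poses, min_keypoints=15, min_conf=0.5, iou_thresh=0.3):
    """Keep only crops that are unoccluded and have confident, complete poses.

    detections: list of dicts with a 'box' key; poses: per-detection
    (17, 3) numpy arrays of (x, y, confidence) COCO keypoints.
    """
    selected = []
    for i, det in enumerate(detections):
        # Reject crops that overlap significantly with another person.
        overlapped = any(
            j != i and iou(det["box"], other["box"]) >= iou_thresh
            for j, other in enumerate(detections)
        )
        if overlapped:
            continue
        # Reject crops whose poses suggest occlusion, cutoff, or low confidence.
        confident_kps = int((poses[i][:, 2] >= min_conf).sum())
        if confident_kps >= min_keypoints:
            selected.append(det)
    return selected
```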
On the Global Node, local identities and features are received from the Local Nodes and sent to a matching algorithm. This matching algorithm, as described in [31], performs global (i.e. multi-camera) ReID. Concurrently, person crops from all cameras are collected for a single time segment. Generally, far more features will be collected than can reasonably be used during training. For instance, when DukeMTMC-Video [35] is sampled every frame, the system produces over 4 million crops that pass feature selection. To reduce redundancy and computation, R 2 MMT samples crops for selection once every 60 frames.
After all person crops from a single time segment are collected, the Subset Distribution Selection algorithm is used to create a subset that maintains a distribution and number of crops suitable for training. R 2 MMT uses an SDS algorithm based on the metric facility location problem [34]. We define that given a number of features in a metric space, we wish to find a subset of k features such that the minimum distance between any two features within the subset is maximized. However, this problem is known to be NP-hard [21], making it unsuitable for our real-world applications. R 2 MMT instead uses a greedy implementation of the algorithm proven to be Ω(log k)-competitive with the optimal solution while proving to be significantly faster, especially for larger sets of data [1]. For ease of readability, we adopt the nomenclature of K to mean the number of instances per identity. Therefore the total number of person crops in a subset k is equal to the number of identities in the dataset times K. To further reduce complexity, SDS is performed on the data from each camera individually, and their results are combined to form the complete subset.
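The paper's SDS is a greedy approximation to this k-center objective; the sketch below uses the classic farthest-first traversal as one plausible realization of such a greedy selection (function names are ours, and the real system additionally runs this per camera and merges the results, as described above).

```python
import numpy as np

def greedy_k_center(features, k, seed=0):
    """Greedy farthest-first traversal: pick k features so that the
    selected subset spreads out over the feature space.

    features: (n, d) array of ReID embeddings from one camera.
    Returns the indices of the k selected features.
    """
    n = features.shape[0]
    k = min(k, n)
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(n))]          # arbitrary first center
    # Distance from every point to its nearest selected center so far.
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())               # farthest remaining point
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```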
Once the training subset is created, domain adaptation is performed using Mutual-Mean Teaching (MMT) [11]. R 2 MMT follows the training methodology described in [11], except that epochs and iterations are variable. Clustering is done using DBSCAN [5], as GPU acceleration allows it to perform much faster than CPU based approaches. Exact training parameters, both for pretraining on the source domain and domain transfer on the target domain, are as detailed in [11] unless otherwise noted.
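For concreteness, a hedged sketch of the pseudo-labeling step is shown below; it uses scikit-learn's CPU DBSCAN rather than the GPU-accelerated variant mentioned in the text, and the eps and min_samples values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster L2-normalized ReID features and return pseudo identity
    labels; DBSCAN marks outliers as -1, which are dropped from training.
    """
    feats = normalize(features)                # cosine-like geometry
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    keep = labels != -1                        # discard un-clustered samples
    return labels[keep], np.flatnonzero(keep)
```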
Both SDS and training are time consuming, particularly when dealing with large amounts of data. To meet the time constraint of the R 2 OUDA setting, R 2 MMT utilizes a pipelined processing model, taking advantage of parallel computing resources while hiding the latency of the aforementioned tasks. An illustration of this pipelined approach can be seen in Fig. 2. Crop collection, SDS, and training are separated into their own pipeline stages. This means that while a model collects data for the current time segment, SDS on that data will occur the following time segment, and the training for that subset will occur the time segment after that. More formally, during a single time segment T N , a model trained on data from T N −3 is used to collect data from time segment T N , while subset distribution selection is performed on data collected during T N −1 and another network is being trained on a subset created from data from T N −2 . All of these processes will finish before T N +1 . This means there will always be a latency of two time segments between collection and inference for a single time segment. However, due to the pipeline structure, training throughput remains at a rate of one time segment per time segment. This satisfies the time constraint of R 2 OUDA.
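The scheduling logic can be sketched as a three-stage software pipeline; the snippet below is a simplified single-process illustration (collect, sds, and train are hypothetical callables standing in for the system's actual stages).

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(stream, collect, sds, train):
    """Overlap the three stages across time segments: while segment T_N is
    collected in the foreground, SDS runs on T_{N-1} crops and training runs
    on the subset built from T_{N-2}, so throughput stays at one segment per
    segment, satisfying the R2OUDA time constraint.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        prev_crops, prev_subset = None, None
        for segment in stream:                  # each item spans tau minutes
            train_job = pool.submit(train, prev_subset) if prev_subset is not None else None
            sds_job = pool.submit(sds, prev_crops) if prev_crops is not None else None
            crops = collect(segment)            # foreground, real time
            prev_subset = sds_job.result() if sds_job is not None else None
            prev_crops = crops
            if train_job is not None:
                train_job.result()              # must finish within the segment
```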
Experimental Results
To explore the setting of R 2 OUDA, we select the Market 1501 dataset [50] as the source domain and the DukeMTMC dataset [35] as the target domain. The DukeMTMC dataset is desirable as a target domain because it has both a video dataset (DukeMTMC-video) and a hand crafted person ReID dataset (DukeMTMC-reid), both in the same domain. The video dataset is required in order to satisfy the streaming data constraint of the R 2 OUDA setting. The hand crafted ReID dataset brings two benefits. First, it allows us to directly observe the effect of noisy system generated crops compared to hand selected person images when used for training. Second, testing on the ReID dataset allows direct comparison with works done in the UDA and OUDA space. As such, all our Top-1 accuracies are reported on the DukeMTMC-reid dataset. Similarly, when determining subset size, we treat the number of identities for both DukeMTMC-reid and DukeMTMC-video to be 702, as described in [35]. The number of person crops in a subset k is always equal to K × 702.
For all experiments, R 2 MMT is used to perform domain adaptation. Parameters in all experiments are the same as in [11], except where noted otherwise. All Local Nodes are run on a single server with two AMD EPYC 7513 CPUs, 256 GB of RAM, and three Nvidia V100 GPUs. The Global Node is run on a workstation with an AMD Threadripper Pro 3975WX CPU, 256 GB RAM, and three Nvidia RTX A6000 GPUs. All timing results presented in this section are using this Global Node.
Subset Distribution Selection
We first explore the effect of using our baseline Subset Distribution Selection algorithm for training on the DukeMTMC-reid dataset. By using hand selected person crops from the dataset, we remove the effect of noise generated by our system and single out the impact of our SDS algorithm and the reduction in amount of data on domain adaptation. We vary the number of person images per identity K, iterations per epoch I, and total epochs E as shown below. Note that using the entire DukeMTMC-reid dataset would be equivalent to K = 25.
K ∈ [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
I ∈ [100, 250, 500, 750, 1000, 1500]
E ∈ [1, 2, 3, 5]    (1)
These variable ranges lead to 240 training permutations, which is difficult to list in a single table. Instead, the results are plotted in a three-dimensional space and can be seen in Fig. 3. Training Time and Top-1 make up the x and y axes, Epochs are the z axis, Iterations are noted by color, and k is indicated by size, with bigger circles representing higher values of k. As the purpose of these experiments is to focus on the effects of our SDS algorithm, the system pipeline described in Section 4 is ignored and timing results count SDS and training sequentially. More detailed information on these experiments can be found in the supplementary materials.
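For reference, the sweep is a plain Cartesian product over the three variables of Eq. (1), giving 10 × 6 × 4 = 240 configurations; run_experiment below is a hypothetical trainer entry point.

```python
from itertools import product

K_VALUES = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
ITERATIONS = [100, 250, 500, 750, 1000, 1500]
EPOCHS = [1, 2, 3, 5]

configs = list(product(K_VALUES, ITERATIONS, EPOCHS))
assert len(configs) == 240  # 10 x 6 x 4 permutations
for K, I, E in configs:
    pass  # run_experiment(K=K, iterations=I, epochs=E)  # hypothetical call
```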
From these graphs, we can understand the general trend of the data. Intuitively, we see a fairly linear trend where more data generally results in higher Top-1 accuracy. Likewise, more iterations per epoch and more epochs also tend to result in higher accuracy. Interestingly, with lower values of k we see the reverse effect; more time spent training results in decreased accuracy, sometimes even below the pre-trained accuracy of 42.0%. In general, at least 6 person images per identity are needed to consistently learn, while we start to see diminishing returns at around 16 person images per identity. The top result occurs when K = 20, I = 1500, and E = 5, achieving a Top-1 accuracy of 74.55% with a training time of 82 minutes. This is only 3.5% less than what comparable algorithms are able to achieve in the UDA setting [11] and over 2% greater than the same algorithm in the OUDA setting [33]. When using the same hardware, R 2 MMT is 2.6× faster than its UDA counterpart.
System Generated Data
As explained in Section 3, one of the requirements of the R 2 OUDA setting is that person crops must be generated algorithmically from a data stream. As such, it is necessary to explore the effects of the noise this introduces. The structure of these experiments is exactly the same as in Section 5.1, except that instead of using DukeMTMC-reid, R 2 MMT generates data from the DukeMTMC-video dataset. Similar to Section 5.1, we ignore the system pipeline and focus on the effects of the generated data. Based on the larger amount of data available in DukeMTMC-video, the ranges for our experimental variables are adjusted as shown below. Using all generated data would be equivalent to K = 99.
K ∈ [16, 18, 20, 25, 30, 40]
I ∈ [100, 250, 500, 1000, 1500]
E ∈ [1, 2, 3, 5]    (2)
The results of this exploration can be seen in Fig. 4, with more details available in the supplementary materials. Axes are identical to Fig. 3, with color and size representing iterations and k respectively. These graphs show a somewhat similar trend as in Section 5.1 with some interesting deviations. While the trend starts off with accuracy increasing as k gets larger, there is a sharp decrease in accuracy when k increases beyond a certain point. The scale of the decrease, as well as how early it occurs, lessens with both iterations and epochs. This is likely a byproduct of how many identities are present in DukeMTMC-video. While DukeMTMC only labels a total of 1404 identities, our system is able to detect far more. Increasing iterations has such a drastic effect here because it determines how many of and how often these identities are seen during an epoch. Further increasing iterations and epochs could help mitigate this, but would also increase overall training time. This, combined with the fact that more epochs and more iterations always result in higher accuracy, suggests that accuracy saturation has not been reached here, and the main limiting factor is training time. The highest accuracy achieved on this noisy data was a Top-1 of 69.34%, with K = 20, I = 1500, E = 5, and a total training time of just under 57 minutes. This is notably worse than both the 74.55% achieved in Section 5.1 and the 72.3% MMT achieves in the OUDA setting [33]. This demonstrates the extreme impact noisy data can have on unsupervised domain adaptation, and why the extra considerations of the R 2 OUDA setting are a necessity when designing algorithms for real-world applications.
R 2 MMT
Finally, we make the first attempt at addressing the R 2 OUDA setting. An exhaustive set of experiments are conducted with R 2 MMT, producing a fully functional, end-to-end system that meets all the requirements of the R 2 OUDA setting. R 2 MMT generates person crops from a stream of data, uses SDS to construct training subsets, operates on the notion of time segments, and must adhere to the strict time constraint outlined in Section 3. A successful implementation will conform to all of those standards while achieving the highest accuracy possible, ideally within range of what was seen in Section 5.1.
One hour of DukeMTMC-video is used as the data stream, split into equal sized continuous segments of size τ . SDS is performed at each time segment on each camera individually, and k refers to the total number of person crops across all training subsets for the full hour. Two methods are used to determine the number of crops needed at each time segment. In the standard method, only data collected in a time segment may be used for training related to that time segment. The second method uses a form of memory, allowing the use of data from the current time segment and previous time segments still in memory. For these experiments, we assume a memory length of up to 60 minutes. Equation 3 and Equation 4 are used to calculate the number of person crops needed from each camera at each time segment, for the standard and memory based methods respectively.
$$k = \sum_{t=0}^{60/\tau - 1} \sum_{i=1}^{8} P(C_i)\, P(C_i \cap \tau_t) \tag{3}$$

$$k = \sum_{t=0}^{60/\tau - 1} \sum_{i=1}^{8} P(C_i) \sum_{\eta=0}^{t} P(C_i \cap \tau_\eta) \tag{4}$$
where k is the total number of person crops desired for the training subset over an hour of video stream, τ_t is a time segment of length τ minutes that begins at τ × t minutes, C_i is the i-th camera, P(C_i) is the percentage of total person crops received from C_i compared to all cameras over an hour of video, and P(C_i ∩ τ_t) is the percentage of person crops received during τ_t from C_i compared to all person crops received from C_i over an hour of video.
This ensures the number of person crops selected for a subset from each camera at each time segment is proportional to the number of person crops received. The variable ranges used in these experiments are shown below.
K ∈ [18, 20, 25, 30, 40, 50]
I ∈ [100, 250, 500, 750, 1000, 1500]
E ∈ [1, 2, 3, 5]
τ ∈ [15, 20, 30]
t ∈ Z : {0 ≤ t ≤ 60/τ − 1}    (5)
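Under one natural reading of Eqs. (3) and (4), each camera-segment pair is allocated k · P(C_i) · P(C_i ∩ τ_t) crops (cumulative over remembered segments in the memory variant); the sketch below implements that allocation from raw per-segment crop counts.

```python
def crop_budget(counts, k, memory=False):
    """Per-camera, per-segment crop budgets in the spirit of Eqs. (3)/(4),
    read as allocating k * P(C_i) * P(C_i ∩ tau_t) crops per term.

    counts[i][t]: crops received from camera i during segment t
    (60/tau segments per hour, 8 cameras in DukeMTMC).
    """
    total = sum(sum(cam) for cam in counts) or 1       # crops over the hour
    n_seg = len(counts[0])                             # = 60 / tau
    budget = [[0] * n_seg for _ in counts]
    for i, cam in enumerate(counts):
        cam_total = sum(cam) or 1
        p_cam = cam_total / total                      # P(C_i)
        for t in range(n_seg):
            if memory:                                 # Eq. (4): remembered segments
                p_seg = sum(cam[: t + 1]) / cam_total
            else:                                      # Eq. (3): current segment only
                p_seg = cam[t] / cam_total
            budget[i][t] = round(k * p_cam * p_seg)
    return budget
```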
This creates over 2500 data points across the two methods, becoming difficult to visualize even in three-dimensional space. Fig. 5 displays the distribution of training accuracies for each τ at each time segment.
Out of the 864 configurations tested, more than half of them failed to consistently meet the time requirement of R 2 OUDA and are not included in the statistics. Most notably, all configurations that used memory failed to consistently meet the time requirement when given a τ of 15. When memory is utilized, the time required for SDS greatly increases for successive time segments as more images accumulate. This limits how large k can be, restricting K to 20 or below when τ = 20 and 30 or below when τ = 30. Even without memory, the time constraint proves very limiting. Only when τ = 20 is the entire range of K able to be utilized. For a more fine grain look at all 2500+ data points in this experiment, please see the supplementary materials.
The data in general follows similar trends as seen in Section 5.1 and Section 5.2, but to more of an extreme. In addition to disqualifying several configurations off the bat, the segmented data stream and time constraint generally mean R 2 MMT has less data to work with during any given training. Unlike in the previous experiments, the time constraint prevents the system from just throwing more data and more training at the problem. Instead, a balance must be found. We see an overall increase in top accuracies when τ increases, both in standard and memory configurations. Top accuracies also increase over time, with one notable exception. When τ = 15, accuracy actually drops in the final time segment. This is due to the extremely low amount of data available in that particular time segment.
Another interesting observation can be made by looking at τ = 20 both with and without memory. While the standard R 2 MMT achieves higher overall top accuracies, the distribution is a lot more varied when compared to R 2 MMT with memory. Many configurations actually lose accuracy, far more than when memory is present. This suggests that while memory is limiting, it may add stability to training over time. This is further demonstrated when τ = 30. When memory is used the maximum accuracy is lower in the first time segment, being restricted to a lower value of K, but is higher in the second time segment due to the increased range of available data.

Fig. 6 shows the best configurations of R 2 MMT, both with and without memory, for each τ. The overall highest accuracy is achieved with memory when τ = 30, K = 30, E = 5, and I = 500, reaching an impressive 73.2% Top-1. Despite the much harsher requirements of the R 2 OUDA setting, this is within 0.1% of the best possible accuracy using MMT in the OUDA setting [33]. However, with a τ of 30 it also has a latency of 60 minutes between collecting data and inferencing with a model trained on that data. This can be reduced to 30 minutes by changing τ to 15, but then accuracy drops to a disappointing 58.08%. A τ of 20 splits the difference, achieving a final Top-1 of 69.97% while reducing the inference latency to 40 minutes. This is within 4% of our best overall result, and reduces the delay by over 30%.
The strict time constraint disqualified many of the configurations in Section 5.3. However, if we ignore the time constraint for a moment we see accuracies reaching up to 76.53% when τ = 15, K = 40, E = 5, and I = 1500 in a system with memory, putting it within 1.5% of MMT in the UDA setting [11]. With further optimization or more powerful hardware, R 2 MMT might be able to achieve higher accuracies with decreased latency between collection and inference. This shows that there is a lot of room for improvement and growth in the R 2 OUDA setting. The explorations in this paper can serve as a guideline for future works.
Conclusion
This paper proposed the setting of R 2 OUDA, to better represent the unique challenges of real-world applications. R 2 MMT was introduced as the first attempt at a real-world, end-to-end system that can address all the demands of the R 2 OUDA setting. An exhaustive set of experiments were conducted, using R 2 MMT to create over 100 data subsets and train more than 3000 models, exploring the breadth of the R 2 OUDA setting. While meeting the harsh requirements of R 2 OUDA, R 2 MMT was able to achieve over 73% Top-1 accuracy, reaching within 0.1% of comparable SotA OUDA approaches that cannot be directly applied to real-world applications.
Fig. 1: System view of Real-World Real-Time Online Streaming Mutual Mean-Teaching.
Fig. 2: Illustration of computation overlap through time.
Fig. 3: Results exploring SDS on the hand crafted DukeMTMC-reid dataset [35]. (a) and (b) show two views of the results plotted in three-dimensional space, while (c) shows a two-dimensional view when E = 5. Larger circles represent larger values of k.
Fig. 4: Results exploring the use of system generated data using DukeMTMC-video [35]. (a) and (b) show two views of the results plotted in three-dimensional space, while (c) shows a two-dimensional view when E = 5. Larger circles represent larger values of k.
Fig. 5: Distribution of accuracies achieved on DukeMTMC [35] with R 2 MMT.
Fig. 6: Best results for each system configuration. Dashed lines (--) represent standard configurations. Solid lines (-) represent configurations with memory. Green, blue, and purple denote τ values of 15, 20, and 30 respectively.
Table 2: Distribution of accuracies achieved on DukeMTMC [35] with R 2 MMT.
References

1. Arthur, D., Vassilvitskii, S.: K-means++: The advantages of careful seeding. In: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '07), pp. 1027-1035. Society for Industrial and Applied Mathematics, USA (2007)
2. Chen, Y., Zhu, X., Gong, S.: Instance-guided context rendering for cross-domain person re-identification. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 232-242 (2019)
3. Delorme, G., Xu, Y., Lathuiliere, S., Horaud, R., Alameda-Pineda, X.: CANU-ReID: A conditional adversarial network for unsupervised person re-identification. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4428-4435. IEEE Computer Society, Los Alamitos, CA, USA (2021)
4. Deng, W., Zheng, L., Ye, Q., Kang, G., Yang, Y., Jiao, J.: Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
5. Ester, M., Kriegel, H.P., Sander, J., Xu, X., et al.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: KDD, vol. 96, pp. 226-231 (1996)
6. Ewen, N., Khan, N.: Online unsupervised learning for domain shift in COVID-19 CT scan datasets. In: 2021 IEEE International Conference on Autonomous Systems (ICAS), pp. 1-5 (2021)
7. Fan, H., Zheng, L., Yan, C., Yang, Y.: Unsupervised person re-identification: Clustering and fine-tuning. ACM Trans. Multimedia Comput. Commun. Appl. 14(4) (2018)
8. Feng, H., Chen, M., Hu, J., Shen, D., Liu, H., Cai, D.: Complementary pseudo labels for unsupervised domain adaptation on person re-identification. IEEE Transactions on Image Processing 30, 2898-2907 (2021)
9. Fini, E., Lathuilière, S., Sangineto, E., Nabi, M., Ricci, E.: Online continual learning under extreme memory constraints. In: Computer Vision - ECCV 2020, pp. 720-735. Springer International Publishing, Cham (2020)
10. Fu, Y., Wei, Y., Wang, G., Zhou, Y., Shi, H., Huang, T.S.: Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
11. Ge, Y., Chen, D., Li, H.: Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification. In: International Conference on Learning Representations (2020)
12. Ge, Y., Zhu, F., Chen, D., Zhao, R., Li, H.: Self-paced contrastive learning with hybrid memory for domain adaptive object re-id. In: Advances in Neural Information Processing Systems (2020)
13. Ge, Y., Zhu, F., Chen, D., Zhao, R., Li, H.: Self-paced contrastive learning with hybrid memory for domain adaptive object re-id. In: Advances in Neural Information Processing Systems (2020)
14. Ge, Y., Zhu, F., Chen, D., Zhao, R., Wang, X., Li, H.: Structured domain adaptation with online relation regularization for unsupervised person re-id (2020). arXiv:2003.06650
15. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, vol. 27. Curran Associates, Inc. (2014)
16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778 (2016)
17. He, W., Ye, Y., Li, Y., Pan, T., Lu, L.: Online cross-subject emotion recognition from ECG via unsupervised domain adaptation. In: 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 1001-1005 (2021)
18. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv:1703.07737 (2017)
19. Huang, Y., Wu, Q., Xu, J., Zhong, Y.: SBSGAN: Suppression of inter-domain background shift for person re-identification. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9526-9535 (2019)
20. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.: Image-to-image translation with conditional adversarial networks. In: 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976 (2017)
21. Jali, N., Karamchandani, N., Moharir, S.: Greedy k-center from noisy distance samples. IEEE Transactions on Signal and Information Processing over Networks 8, 330-343 (2022)
22. Jocher, G., Chaurasia, A., Stoken, A., Borovec, J., NanoCode012, Kwon, Y., TaoXie, Michael, K., Fang, J., imyhxy, Lorna, Wong, C., Yifu, Z., V, A., Montes, D., Wang, Z., Fati, C., Nadar, J., Laughing, UnglvKitDe, tkianai, yxNONG, Skalski, P., Hogan, A., Strobel, M., Jain, M., Mammana, L., xylieong: ultralytics/yolov5: v6.2 - YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci.ai integrations (2022). DOI 10.5281/zenodo.7002879
23. Kuznietsov, Y., Proesmans, M., Gool, L.V.: Towards unsupervised online domain adaptation for semantic segmentation. In: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), pp. 261-271 (2022)
24. Leng, Q., Ye, M., Tian, Q.: A survey of open-world person re-identification. IEEE Transactions on Circuits and Systems for Video Technology 30(4), 1092-1108 (2020)
25. Li, W., Zhao, R., Xiao, T., Wang, X.: DeepReID: Deep filter pairing neural network for person re-identification. In: CVPR (2014)
26. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: Computer Vision - ECCV 2014, pp. 740-755. Springer International Publishing, Cham (2014)
27. Lin, Y., Dong, X., Zheng, L., Yan, Y., Yang, Y.: A bottom-up clustering approach to unsupervised person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence 33(01), 8738-8745 (2019)
28. Liu, J., Zha, Z.J., Chen, D., Hong, R., Wang, M.: Adaptive transfer network for cross-domain person re-identification. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7195-7204 (2019)
29. Moon, J., Das, D., George Lee, C.S.: A multistage framework with mean subspace computation and recursive feedback for online unsupervised domain adaptation. IEEE Transactions on Image Processing 31, 4622-4636 (2022)
30. Moon, J.H., Das, D., Lee, C.G.: Multi-step online unsupervised domain adaptation. In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4172-4176 (2020)
31. Neff, C., Mendieta, M., Mohan, S., Baharani, M., Rogers, S., Tabkhi, H.: REVAMP2T: Real-time edge video analytics for multi-camera privacy-aware pedestrian tracking. IEEE Internet of Things Journal 7(4), 2591-2602 (2020)
32. Ni, X., Rahtu, E.: FlipReID: Closing the gap between training and inference in person re-identification. In: 2021 9th European Workshop on Visual Information Processing (EUVIP), pp. 1-6 (2021)
33. Rami, H., Ospici, M., Lathuilière, S.: Online unsupervised domain adaptation for person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 3830-3839 (2022)
34. Rana, R., Garg, D.: Heuristic approaches for k-center problem. In: 2009 IEEE International Advance Computing Conference, pp. 332-335 (2009)
35. Ristani, E., Solera, F., Zou, R., Cucchiara, R., Tomasi, C.: Performance measures and a data set for multi-target, multi-camera tracking. In: European Conference on Computer Vision Workshop on Benchmarking Multi-Target Tracking (2016)
36. Song, L., Wang, C., Zhang, L., Du, B., Zhang, Q., Huang, C., Wang, X.: Unsupervised domain adaptive re-identification: Theory and practice. Pattern Recognition 102 (2020)
37. Sun, K., Xiao, B., Liu, D., Wang, J.: Deep high-resolution representation learning for human pose estimation. In: CVPR (2019)
38. Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), pp. 1195-1204. Curran Associates Inc., Red Hook, NY, USA (2017)
39. Termöhlen, J.A., Klingner, M., Brettin, L.J., Schmidt, N.M., Fingscheidt, T.: Continual unsupervised domain adaptation for semantic segmentation by online frequency domain style transfer. In: 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pp. 2881-2888 (2021)
40. Wang, G., Lai, J., Huang, P., Xie, X.: Spatial-temporal person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence 33(01), 8933-8940 (2019)
41. Wang, Y., Chen, Z., Wu, F., Wang, G.: Person re-identification with cascaded pairwise convolutions. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1470-1478 (2018)
42. Wei, L., Zhang, S., Gao, W., Tian, Q.: Person transfer GAN to bridge domain gap for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
43. Wieczorek, M., Rychalska, B., Dabrowski, J.: On the unreasonable effectiveness of centroids in image retrieval. arXiv:2104.13643 (2021)
44. Wojke, N., Bewley, A., Paulus, D.: Simple online and realtime tracking with a deep association metric. In: 2017 IEEE International Conference on Image Processing (ICIP), pp. 3645-3649 (2017)
45. Ye, M., Li, J., Ma, A.J., Zheng, L., Yuen, P.C.: Dynamic graph co-matching for unsupervised video-based person re-identification. IEEE Transactions on Image Processing 28(6), 2976-2990 (2019)
46. Ye, M., Shen, J., Lin, G., Xiang, T., Shao, L., Hoi, S.C.H.: Deep learning for person re-identification: A survey and outlook (2020). arXiv:2001.04193
47. Ye, Y., Pan, T., Meng, Q., Li, J., Shen, H.T.: Online unsupervised domain adaptation via reducing inter- and intra-domain discrepancies. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15 (2022)
48. Zeng, K.: Hierarchical clustering with hard-batch triplet loss for person re-identification (2019). arXiv:1910.12278
49. Zheng, L., Bie, Z., Sun, Y., Wang, J., Su, C., Wang, S., Tian, Q.: MARS: A video benchmark for large-scale person re-identification. In: Computer Vision - ECCV 2016, pp. 868-884. Springer International Publishing, Cham (2016)
50. Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., Tian, Q.: Scalable person re-identification: A benchmark. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1116-1124 (2015)
51. Zheng, L., Yang, Y., Hauptmann, A.: Person re-identification: Past, present and future. arXiv:1610.02984 (2016)
52. Zheng, L., Zhang, H., Sun, S., Chandraker, M., Yang, Y., Tian, Q.: Person re-identification in the wild. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3346-3355 (2017)
53. Zhong, Z., Zheng, L., Cao, D., Li, S.: Re-ranking person re-identification with k-reciprocal encoding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
54. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2242-2251 (2017)
55. Zhu, Z., Jiang, X., Zheng, F., Guo, X., Huang, F., Zheng, W., Sun, X.: Viewpoint-aware loss with angular regularization for person re-identification (2019). arXiv:1912.01300
| [] |
[
"On the Design Fundamentals of Diffusion Models: A Survey",
"On the Design Fundamentals of Diffusion Models: A Survey"
] | [
"Ziyi Chang ",
"George A Koulieris ",
"Senior Member, IEEEHubert P H Shum "
] | [] | [] | Diffusion models are generative models, which gradually add and remove noise to learn the underlying distribution of training data for data generation. The components of diffusion models have gained significant attention with many design choices proposed. Existing reviews have primarily focused on higher-level solutions, thereby covering less on the design fundamentals of components. This study seeks to address this gap by providing a comprehensive and coherent review on component-wise design choices in diffusion models. Specifically, we organize this review according to their three key components, namely the forward process, the reverse process, and the sampling procedure. This allows us to provide a fine-grained perspective of diffusion models, benefiting future studies in the analysis of individual components, the applicability of design choices, and the implementation of diffusion models. | null | [
"https://export.arxiv.org/pdf/2306.04542v1.pdf"
] | 259,095,911 | 2306.04542 | 5d43cd1d4c25744f74d3fd22adc8bf4581962aab |
On the Design Fundamentals of Diffusion Models: A Survey
Ziyi Chang
George A Koulieris
Hubert P H Shum, Senior Member, IEEE
On the Design Fundamentals of Diffusion Models: A Survey
Index Terms-Diffusion Model, Forward Process, Reverse Process, Sampling Procedure, Deep Learning
Diffusion models are generative models, which gradually add and remove noise to learn the underlying distribution of training data for data generation. The components of diffusion models have gained significant attention with many design choices proposed. Existing reviews have primarily focused on higher-level solutions, thereby covering less on the design fundamentals of components. This study seeks to address this gap by providing a comprehensive and coherent review on component-wise design choices in diffusion models. Specifically, we organize this review according to their three key components, namely the forward process, the reverse process, and the sampling procedure. This allows us to provide a fine-grained perspective of diffusion models, benefiting future studies in the analysis of individual components, the applicability of design choices, and the implementation of diffusion models.
INTRODUCTION
Diffusion models are an emerging class of generative models to obtain novel data [1]. During training, a diffusion model is trained by adding and removing noise when given a set of training samples. During inference, it generates a new sample using random noise as input.
Diffusion models have been applied in a wide range of fields. Diffusion models have shown impressive results in inpainting [2], text-to-image [3], super-resolution [4], colorization [5], and instance segmentation [6]. Diffusion models have also demonstrated superiority in speech generation [7], music synthesis [8], and audio enhancement [9]. Besides, they become increasingly popular in 3D shape generation [10], human motion synthesis [11], video synthesis [12], molecule synthesis [13], molecule trajectory prediction [14], and astronomical data synthesis [15].
The generic pipeline of diffusion models involves a forward process and a reverse process to learn a data distribution, as well as a sampling procedure to generate novel data. The pipeline is featured by a time-indexed multistep chain where the three components move either forwards, i.e., timestep increases, or backwards, i.e., timestep decreases. The forward process moves forwards on the chain and perturbs the training data by adding noise at each timestep. The reverse process moves on the chain in the opposite direction. It optimizes a network to remove the aforementioned perturbation. The two processes jointly enable the network to be trained for distribution modelling. Finally, the sampling procedure obtains a random noise and uses the optimized network for data generation. It also moves backwards on the chain as the reverse process does. The main difference is that the network during the sampling procedure is already optimized and used for inference only. Typically, two distinctive formulations are available for realizing the three components in the generic pipeline. First, diffusion models may be formulated with discrete timesteps [16], [17]. Second, diffusion models may also be formulated with continuous timesteps. These components are defined by stochastic differential equations (SDEs) [5], [18]. This formulation unifies the one with discrete timesteps, and facilitates the theoretical analysis of diffusion models [19].
Although these three components generally define the generic pipeline, they are currently lacking a comprehensive survey. Some existing survey papers focus on higher-level solutions to facilitate applications, and explore less on the design details of each component [20], [21]. Some concentrate on specific domains or aspects of diffusion models, lacking insights on the holistic design fundamentals of the generic pipeline [22], [23], [24], [25], [26], [27]. Others focus on the application side, and thereby provide fewer observations on theoretical designs [28], [29], [30]. Overall, a survey that concentrates specifically on designing the aforementioned components of diffusion models is lacking.
This survey bridges the gap in the literature by offering a thorough and cohesive review of component-wise design fundamentals in diffusion models. In particular, we have organized design fundamentals of diffusion models into the forward process, the reverse process, and the sampling procedure, as shown in Figure 1. This breakdown is aligned with the generic pipeline. Drawing upon the latest research and implementations, we have summarized the main streams of design fundamentals for each component. Overall, this survey offers a fine-grained perspective of diffusion models, and facilitates future analysis of individual components, the applicability of design fundamentals, and the implementation of diffusion models.
The rest of this survey is organised as follows. Section 2 introduces the preliminaries of diffusion models, including the generic pipeline and its two formulations. Sections 3, 4, and 5 respectively review the design fundamentals of the forward process, the reverse process, and the sampling procedure, as visualised in Figure 1. Section 6 provides insights on the future trends and Section 7 gives a brief conclusion on diffusion models.

Fig. 1: The overview of diffusion models. The forward process, the reverse process, and the sampling procedure are the three core components of diffusion models, which are responsible for adding noise, training networks, and generating samples, respectively.
PRELIMINARIES
This survey uses commonly-used notations and terminologies in existing papers, as shown in Table 1, and represents concepts with figures whose legends are defined in Table 2.

Table 1: Notations.
p(x_0): Original distribution at timestep 0
p(x_t): Perturbed distribution at timestep t
p(x_T): Terminal distribution, i.e., data distribution at timestep T
θ: Parameters in denoising network
θ*: Optimal parameters in denoising network
p_θ(·): Predicted distribution by denoising network
p(x_t|x_{t−1}): Forward transition
p_θ(x_{t−1}|x_t): Reverse transition
p_{θ*}(x_{t−1}|x_t): Sampling transition
∇_x log p(·): Score, i.e., gradient of a log distribution
I: Identity matrix
x*_0: Newly generated data
The Generic Pipeline
The generic pipeline in diffusion models is generally specified by the forward process, the reverse process, and the sampling procedure. The generic pipeline is characterized by a timestep-indexed multi-step chain where the three components move forwards or backwards. The forward and reverse processes are implemented jointly to optimize a denoising network by gradually adding and removing noise [17]. After the optimization, the sampling procedure leverages the trained network to generate a novel sample x*_0 ∼ p_{θ*}(x_0) ≈ p(x_0), whose distribution p_{θ*}(x_0) conforms to the same distribution as the original one p(x_0) [16].

Table 2: Legends of figure symbols (symbols omitted in extraction): "Combination, e.g., Addition"; "With probability p for dropping out".

The forward process perturbs a training sample x_0 to {x_t}_{t=1}^{T} as the timestep t increases, as shown in Figure 2. A forward transition p(x_t|x_{t−1}) describes such a perturbation where a small amount of noise ε_t is added between two timesteps. In other words, as the forward process moves on the chain, more and more noise is added through p(x_t|x_{t−1}), and thereby the perturbed sample x_t becomes noisier and noisier. Through multiple timesteps, the original distribution p(x_0) is eventually perturbed to a tractable terminal distribution p(x_T), which is usually full of noise [31]. Since only noise is added through the chain, the forward process does not require any parameters to be trained. In particular, the forward process is represented as a chain of forward transitions:
$$p(x_T|x_0) := p(x_1|x_0) \cdots p(x_t|x_{t-1}) \cdots p(x_T|x_{T-1}) = \prod_{t=1}^{T} p(x_t|x_{t-1}), \tag{1}$$
where t is the timestep, T is the total number of timesteps, x_0 is a training sample at t = 0 that is perturbed into x_T after T timesteps, and p(x_t|x_{t−1}) is the forward distribution transition between two consecutive timesteps.
The forward process has both similarities and differences with variational autoencoders (VAEs) [32]. Similar to VAEs, it usually perturbs p(x_0) into the commonly-used isotropic Gaussian distribution $p(x_T) = \mathcal{N}(0, I)$ as the terminal distribution [33]. In contrast to learning an encoder to obtain p(x_T) as in VAEs, the forward process has no trainable parameters and only adds noise to x_0 for perturbation [17].

Fig. 3. The reverse process trains a neural network θ to recursively remove the noise that has been previously added by the forward process.
The reverse process trains a denoising network to recursively remove the noise, as shown in Figure 3. Instead of removing all noise in a single timestep as GANs do [34], the denoising network is trained to iteratively remove the noise between two consecutive timesteps. The reverse process moves backwards on the multi-step chain as t decreases from T to 0. Such iterative noise removal is termed the reverse transition p_θ(x_{t−1}|x_t), which is approximated by optimizing the trainable parameters θ of the denoising network [35]. In particular, the reverse process is formulated as a chain of reverse transitions:
$$p_\theta(x_0) := p(x_T)\, p_\theta(x_{T-1}|x_T) \cdots p_\theta(x_{t-1}|x_t) \cdots p_\theta(x_0|x_1) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1}|x_t), \tag{2}$$
where θ denotes the parameters of the denoising network and p_θ(x_{t−1}|x_t) is the reverse distribution transition. In particular, the reverse transition is usually parameterized as:
$$p_\theta(x_{t-1}|x_t) := \mathcal{N}\big(x_{t-1};\, \mu_\theta(x_t, t),\, \Sigma_\theta(x_t, t)\big), \tag{3}$$
where $\mu_\theta(x_t, t)$ and $\Sigma_\theta(x_t, t)$ are, respectively, the Gaussian mean and variance to be estimated by the network θ.
The denoising network is trained by minimizing the standard variational bound on the negative log-likelihood:
$$\mathcal{L} = \mathbb{E}\Big[D_{KL}\big(p(x_T|x_0)\,\|\,p(x_T)\big) + \sum_{t>1} D_{KL}\big(p(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t)\big) - \log p_\theta(x_0|x_1)\Big], \tag{4}$$
where $D_{KL}(\cdot\|\cdot)$ is the Kullback-Leibler (KL) divergence, which computes the difference between two distributions. Overall, minimizing the objective $\mathcal{L}$ reduces the discrepancy between p_θ(x_0) and p(x_0).

Fig. 4. The sampling procedure uses the trained denoising network θ* and usually follows the same transitions as the reverse process.
The sampling procedure leverages the optimized denoising network θ* to generate novel data x_0^*, as illustrated in Figure 4. It moves backwards on the chain to recursively apply the optimized network θ* [36]. Concretely, it first obtains a sample x_T from the terminal distribution p(x_T) and then uses the trained network to iteratively remove noise through the sampling transition p_{θ*}(x_{t−1}|x_t). Through a chain of such transitions, it finally generates new data $x_0^* \sim p_{\theta^*}(x_0) \approx p(x_0)$.
In particular, the sampling procedure is defined as a chain of sampling transitions:
$$p_{\theta^*}(x_0) := p(x_T)\, p_{\theta^*}(x_{T-1}|x_T) \cdots p_{\theta^*}(x_0|x_1) = p(x_T) \prod_{t=1}^{T} p_{\theta^*}(x_{t-1}|x_t), \tag{5}$$
where θ* represents the optimized parameters of the denoising network, p(x_T) is the terminal distribution, and p_{θ*}(x_{t−1}|x_t) is the sampling transition.
Discrete and Continuous Formulations
Diffusion models can be formulated in two distinct ways, i.e., discrete and continuous formulations, which differ in the definition of the timesteps [37]. The discrete formulation defines timesteps to be integer values, ranging from 0 to T [17], and results in a finite number of timesteps that is specified by the value of T . In contrast, the continuous formulation defines the timesteps as continuous values, which are constrained within the interval of [0, 1], and allows for an infinite number of timesteps in theory [5].
The Discrete Formulation
Regarding the discrete formulation, the denoising diffusion probabilistic model (DDPM) [17] is a popular configuration of such diffusion models. It is straightforward to define, efficient to train, and capable of achieving high quality and high diversity in the generated samples [38].
Concretely, the forward transition in DDPM is defined to add isotropic Gaussian noise $\epsilon_t \sim \mathcal{N}(0, I)$:

$$p(x_t|x_{t-1}) := \mathcal{N}\big(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t I\big), \tag{6}$$
where β_t is the noise schedule, a hyper-parameter that controls the amount of noise to be added at each timestep. As all forward transitions (Eq. 6) are Gaussian, the forward process (Eq. 1) in DDPM simplifies to the closed form:
$$p(x_t|x_0) := \mathcal{N}\big(x_t;\, \sqrt{\bar{\alpha}_t}\, x_0,\, (1-\bar{\alpha}_t) I\big), \tag{7}$$
where $\bar{\alpha}_t$ is defined as $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$ with $\alpha_t = 1 - \beta_t$. In theory, $\bar{\alpha}_t$ has an effect similar to that of β_t in Eq. 6. Please refer to Appendix A for the detailed derivation of Eq. 7 from Eq. 6.
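To make Eq. 7 concrete, the following is a minimal sketch of the closed-form forward perturbation with a linear noise schedule; the schedule constants, tensor shapes, and function names are illustrative assumptions rather than details from any specific implementation.

```python
# A minimal sketch of the closed-form forward perturbation (Eq. 7),
# assuming a linear beta schedule; all names here are illustrative.
import torch

T = 1000
beta = torch.linspace(1e-4, 0.02, T)          # linear noise schedule beta_t
alpha = 1.0 - beta
alpha_bar = torch.cumprod(alpha, dim=0)       # \bar{alpha}_t = prod_{s<=t} alpha_s

def perturb(x0: torch.Tensor, t: torch.Tensor):
    """Sample x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I) in a single step."""
    eps = torch.randn_like(x0)                               # epsilon_t ~ N(0, I)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))      # broadcast over batch
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    return xt, eps

x0 = torch.randn(8, 3, 32, 32)                # a toy batch standing in for images
t = torch.randint(0, T, (8,))
xt, eps = perturb(x0, t)
```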
The reverse process has the same functional form as the forward process [1]. In the DDPM configuration, the transition of Eq. 3 in the reverse process is formulated as:
$$p_\theta(x_{t-1}|x_t) := \mathcal{N}\Big(x_{t-1};\, \frac{1}{\sqrt{\alpha_t}}\Big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\Big),\, \beta_t I\Big), \tag{8}$$
where the variance $\Sigma_\theta(x_t, t)$ in Eq. 3 is empirically fixed to the noise schedule β_t, and $\mu_\theta(x_t, t)$ is reparameterized through the noise prediction $\epsilon_\theta(x_t, t)$. Accordingly, the training objective defined in Eq. 4 is also simplified as:
$$\mathcal{L} = \mathbb{E}_{x_t, t}\big[\|\epsilon_t - \epsilon_\theta(x_t, t)\|_2^2\big]. \tag{9}$$
Finally, the sampling procedure obtains $x_T \sim p(x_T)$ and applies $p_{\theta^*}(x_{t-1}|x_t)$ iteratively to generate $x_0^*$.
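The DDPM objective (Eq. 9) and one sampling transition (Eq. 8) can be sketched as follows; `model` stands for an assumed ε-predicting denoising network, and its interface is an assumption of this sketch.

```python
# A hedged sketch of DDPM training (Eq. 9) and one reverse step (Eq. 8);
# `model(xt, t)` is an assumed epsilon-predicting network, e.g., a U-Net.
import torch

def ddpm_loss(model, x0, t, alpha_bar):
    """Simplified DDPM objective (Eq. 9) for a batch x0 and integer timesteps t."""
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps           # closed-form forward (Eq. 7)
    return ((eps - model(xt, t)) ** 2).mean()

@torch.no_grad()
def ddpm_step(model, xt, t, alpha, alpha_bar, beta):
    """One sampling transition p_{theta*}(x_{t-1}|x_t) (Eq. 8); t is a Python int."""
    t_batch = torch.full((xt.shape[0],), t, dtype=torch.long)
    eps_hat = model(xt, t_batch)
    mean = (xt - (1 - alpha[t]) / (1 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    if t > 0:
        return mean + beta[t].sqrt() * torch.randn_like(xt)   # stochastic transition
    return mean                                               # no noise at t = 0
```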
The Continuous Formulation
The continuous formulation manipulates data distributions in continuous time, where noise is added within an infinitesimal interval between timesteps. Therefore, a stochastic differential equation (SDE) is adopted in such diffusion models to describe the changes over continuous timesteps [5]. Concretely, the forward transition that adds noise is formulated as a forward SDE:
$$dx = f(x, t)\, dt + g(t)\, dw, \tag{10}$$
where w is the standard Wiener process, which accounts for the noise in the forward transition, and f(x, t) and g(t) are the drift and diffusion coefficients, which account for the mean and variance of the forward transitions, respectively. At the same time, a reverse SDE for the reverse transition (Eq. 3) is also determined by these coefficients [39]:
$$dx = \big[f(x, t) - g^2(t)\, s_\theta(x_t, t)\big]\, dt + g(t)\, dw, \tag{11}$$
where the output of the denoising network, $s_\theta(x_t, t) = \nabla_x \log p(x_t)$, is the score [40]. Likewise, $f(x, t) - g^2(t)\, s_\theta(x_t, t)$ and g(t) account for the mean and the variance in Eq. 3. The training objective is defined as:
$$\mathcal{L} = \mathbb{E}_t\Big[\lambda(t)\, \mathbb{E}_{x_0}\, \mathbb{E}_{x_t|x_0}\big[\|s_\theta(x_t, t) - \nabla_x \log p(x_t|x_0)\|_2^2\big]\Big], \tag{12}$$
where λ(t) is the weighting function. Appendix C presents the equivalence of using $\nabla_x \log p(x_t)$ and $\nabla_x \log p(x_t|x_0)$ in the optimization objective.
Finally, the sampling procedure obtains x_T and applies the trained network θ* to generate novel data.
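As an illustration of Eq. 11, below is a minimal Euler-Maruyama discretization of the reverse SDE for a variance-preserving SDE with f(x, t) = -0.5 β(t) x and g(t) = √β(t); both the linear β(t) and the `score_net` interface are assumptions of this sketch.

```python
# A minimal Euler-Maruyama sampler for the reverse SDE (Eq. 11) of a VP-SDE;
# `score_net(x, t)` is an assumed network predicting s_theta(x, t).
import torch

def beta_fn(t):                        # an assumed linear beta(t) on t in [0, 1]
    return 0.1 + (20.0 - 0.1) * t

@torch.no_grad()
def reverse_sde_sample(score_net, shape, n_steps=1000):
    dt = -1.0 / n_steps                                    # integrate from t=1 to t=0
    x = torch.randn(shape)                                 # x_T ~ N(0, I)
    for i in range(n_steps):
        t = torch.full((shape[0],), 1.0 + i * dt)          # current continuous time
        beta = beta_fn(t).view(-1, *([1] * (len(shape) - 1)))
        drift = -0.5 * beta * x - beta * score_net(x, t)   # f - g^2 * score
        x = x + drift * dt + beta.sqrt() * abs(dt) ** 0.5 * torch.randn_like(x)
    return x
```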
THE FORWARD PROCESS
The forward process defines the way data are perturbed by configuring the noise and specifying a transition chain. The way the perturbation happens affects not only the forward process but also the reverse process and the sampling procedure [41]. The noise configuration involves the schedule and the type of noise to be added [42]. The transition chain specifies how the data distribution is transformed [43].
The Noise Configuration
The schedule [41] and the type of noise [44] are configured for effective and expressive diffusion models. Specifically, a suitable schedule provides a noise intensity that perturbs the data samples at an appropriate speed [18], [45]. Moreover, different types of noise influence the modeling capabilities of diffusion models [45].
The Noise Schedule
The noise schedule controls the amount of noise ε_t to be added at timestep t. It schedules a value β_t as the noise level for each timestep t [1]. The noise ε_t is scaled by this level and then used to perturb x_0 [16].
A suitable schedule encourages a balance between exploration and exploitation. Exploration describes the ability to generalize to data not seen during training [46], while exploitation refers to the convergence situation where a model fits the training data well [47]. According to the definition of the forward process in Eq. 1, the larger the amount of noise used, the faster the data structures are destroyed, and vice versa. On the one hand, a sufficient amount of noise is necessary to encourage exploration and generalize well to unseen data, while excessive noise may result in sub-optimal convergence or divergence of a model that cannot adequately recover the details of the data [48]. On the other hand, too little noise boosts exploitation to fit distributions well but undermines generalization [49], [50]. Empirically, smaller noise levels are scheduled at early timesteps for exploitation, and higher levels are assigned at late timesteps for exploration, yielding a better balance [51].
Data are often considered when a noise schedule is designed. From the viewpoint of data representation, the number of data dimensions and the maximum Euclidean distance between training samples need to be considered when designing the noise schedule [52]. Furthermore, the noise schedule should align with the complexity and redundancy of the data [41]. For example, larger images may require more noise than smaller ones [53]. In the context of data reconstruction, adding sufficient noise is beneficial for easy modeling of the distribution, while adding little noise helps accurate reconstruction [54].

TABLE 3
Comparison of several streams of noise schedules: Linear [17], Simple Linear [41], Cosine [53], Exponential [16], and Sigmoid [55] (visualizations omitted).

The noise schedule can be learned by a network or designed empirically using mathematical formulations. To learn a noise schedule, existing methods treat it as a parameter to be learned jointly with the other parameters [54]. These parameters are usually optimized by maximizing a variational lower bound on the log-likelihood [56]. Since the noise schedule is learned by the network, it can differ between training and sampling to achieve optimal results [50]. In contrast, manually-designed noise schedules are formulated with a wide variety of mathematical heuristics. Some design the noise schedule to be affine in the timestep [17], [41], or to have an exponential relationship with it [16]. Both are thought to perturb data quicker than necessary. Consequently, other functions, such as cosine [53], sigmoid [55], and mathematical integrals [18], are used to achieve a more appropriate perturbation speed in the forward process. Table 3 shows several examples of designed noise schedules.
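The linear [17] and cosine [53] schedules from Table 3 can be expressed compactly through their cumulative products ᾱ_t; the sketch below uses commonly reported constants, which are assumptions here rather than values stated in this survey.

```python
# A sketch comparing two hand-designed schedules from Table 3 in terms of
# \bar{alpha}_t; constants follow common choices and are assumptions.
import math
import torch

def alpha_bar_linear(T=1000, beta_start=1e-4, beta_end=0.02):
    beta = torch.linspace(beta_start, beta_end, T)        # affine in the timestep
    return torch.cumprod(1.0 - beta, dim=0)

def alpha_bar_cosine(T=1000, s=0.008):
    t = torch.arange(T + 1) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2   # squared-cosine profile
    return (f[1:] / f[0]).clamp(max=0.999)                # normalised at t = 0
```

The cosine schedule destroys structure more slowly at early timesteps than the linear one, which is one way the exploration-exploitation balance above is tuned in practice.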
The Noise Type
The selection of the noise type leads to improved distribution approximation and greater degrees of freedom, accentuating its significance for the expressiveness of diffusion models. Specifically, selecting an appropriate noise type enhances the model capacity, as the perturbed distributions at different timesteps are fitted more accurately [33], [45]. Additionally, different types offer varying degrees of freedom [42], bringing more flexibility in modeling distributions.
TABLE 4
Comparison of several streams of noise types: Gaussian [16], [17], Gamma [42], [45], and Soft [44], [57] (visualizations omitted).

Different noise types have been developed based on empirical experiments. Inherited from denoising score matching [58], isotropic Gaussian noise is commonly used for its simplicity and compatibility [1], [16], [17]. Several variants of isotropic Gaussian noise, such as mixtures of Gaussian noise [42] and non-isotropic Gaussian noise [59], have also been applied for increased expressiveness. When correlation exists within a data sample, e.g., across frames of a video, the noise is designed to be correlated as well [60]. Additionally, the Gamma distribution is another feasible and promising alternative: it has one more degree of freedom and fits distributions better [45]. Soft corruptions can be recognized as a generalized noise for perturbation. This type of noise goes beyond traditional statistical distributions and supports various operators, like masking, to perturb data [44]. Such operators destroy data structures just as the aforementioned noise does, which greatly extends the expressive power as a wide variety of operators become available [57]. Overall, these attempts at alternative noise types pave the way towards more generalized diffusion models. On the other hand, the re-formulations imposed by a new noise type may be costly [42]. Table 4 lists the different streams of noise types.
The Transition Chain
The chain of transitions controls the way the given data distribution is perturbed. Varying the terminal distribution of the forward process helps train the denoising network in the reverse process effectively and efficiently [61]. The systematic method is an emerging transition design that increases expressiveness [62]. The chain is also adapted to different data properties [63]. We discuss all of those next.
The Terminal Distribution
A large discrepancy between the original distribution p(x_0) and the terminal distribution p(x_T) may lead to a sub-optimal learning outcome for diffusion models. Through the chain of transitions, p(x_T) is determined by adding noise to the original distribution p(x_0) over T timesteps. Their discrepancy is usually significant because x_T is full of noise, with almost no original structures remaining [64]. A denoising network is trained to overcome the discrepancy by transforming x_T ∼ p(x_T) back to x_0 ∼ p(x_0). Consequently, such a large discrepancy may lead to inefficiencies and slow convergence of the optimization in the reverse process; in other words, more timesteps may be required to correct it through the denoising network [65]. The chain, instead, seeks to maintain as many structures of x_0 as possible in the terminal distribution to reduce the discrepancy, as shown in Figure 5. An isotropic Gaussian distribution carries no information about the given data samples, despite its simplicity of use [1], [17]. Instead, other approaches typically consider statistics of the training dataset, like the mean and variance, to indicate data structures [66]. This mitigates the potential discrepancy and improves convergence [62]. Directly involving such statistics still requires prior knowledge of the exact formulation of the terminal distribution [64], [67]. An extra network is thereby developed to learn the distribution p(x_t) at an earlier timestep t < T, and p(x_t) is then taken as the new terminal distribution. As t < T, more data structures remain in p(x_t), where less noise has been added [68]. The network is often pre-trained to avoid optimizing it jointly with the denoising network, easing the optimization [69].
The Systematic Method
The systematic method involves more than one chain of transitions in the generic pipeline. It is becoming increasingly popular for its impressive performance and flexibility [70]. Multiple chains are jointly applied to flexible inputs; for example, the inputs may be a pair of data and its label [5], or data decomposed in a subspace [71]. Figure 6 shows an example of using two transition chains on decomposed data. The systematic method has various benefits. It increases the expressive power by supporting multiple data components and types in the forward process. With more than one chain, the given samples can be decomposed into several components that are compatible with the generic pipeline [72]. This increases the expressiveness and avoids inaccurate high-dimensional extrapolation and high computational cost [73]. Moreover, the transitions can apply different perturbation speeds in the forward process based on the properties of each component [74], [75]. Different data types are also supported by the systematic method; for instance, discrete and continuous data are transformed using two different transitions simultaneously [13], [76]. Additionally, the systematic method enables incorporating conditions by perturbing the given condition with an extra chain. For example, label vectors are perturbed and learned jointly with the data [5], [77]. Sometimes domain knowledge becomes applicable when the systematic method is used; for example, statistical mechanics from physics suits multiple transition chains and leads to better performance [43].
Despite those advantages, the systematic method may require changing the equations of the forward process, and thus those of the reverse process and the sampling procedure. The generic pipeline may need to be re-derived [13], which is usually more complicated computationally [73].
Data Properties
Generalizing diffusion models to a wide range of applications requires adaptations according to data properties. Data in tasks such as text generation and speech generation are represented with different types [78]. Furthermore, the data manifold itself differs in some areas, like scientific computing [79]. Latent features are also important in large-scale tasks for more compact representations [80].
Different data types require adaptations of the transition chain, as different data types have inherently varying characteristics. For instance, categorical data are represented as discrete distributions, while continuous data are modeled as arbitrary values [81], [82]. Although both discrete and continuous data are compatible with diffusion models, the generic pipeline has an inductive bias towards continuous data [13]. This inherent difference requires adapting the transition chain to improve the modeling accuracy for different data types [66], [83]. Some efforts focus on converting discrete data into continuous representations before computation [84], [85]. This is straightforward, but accuracy may be lost. Others concentrate on re-designing the transitions with discrete distributions [63], [86], [87], such as the Bernoulli distribution. Despite their superior performance, such derivations are sometimes task-specific and expensive to adapt to other tasks.
Data manifolds also demand modifications of the transition chain in the generic pipeline. Apart from the commonly used Euclidean manifold, the Riemannian manifold is also explored for handling some scientific data [88]. Riemannian-manifold-based diffusion models are motivated by their ability to preserve the geometric structures of the underlying data, which are important in many scientific fields [89], and have shown promising results in various applications such as medical imaging and neuroscience [90]. Latent representations are another feasible choice for transitions, as illustrated in Figure 7. Despite positive generation results in the original space, the high dimensionality of data often leads to considerable computational cost and redundancy [91]. Empirical evidence has shown that some transitions in diffusion models are responsible for learning latent representations, which are usually in a low-dimensional space [92] and semantically meaningful [93], [94]. A pre-trained encoder is developed to map x_0 into a latent code z_0 ahead of the diffusion model, and its corresponding decoder maps the generated latent code back to x_0 [95]. Latent features are usually more densely represented, filtering out less relevant information; therefore, all the capacity of the diffusion model is encouraged to learn more abstract, semantic relationships effectively [80].
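The latent setup of Figure 7 can be sketched as follows; the `encoder`, `decoder`, and `denoiser` modules and their interfaces are illustrative assumptions, not components named in this survey.

```python
# An illustrative sketch of running the transition chain in latent space
# (Figure 7); all module names and interfaces are assumptions.
import torch

def latent_diffusion_loss(encoder, denoiser, x0, t, alpha_bar):
    z0 = encoder(x0)                                  # pre-trained encoder: x0 -> z0
    eps = torch.randn_like(z0)
    ab = alpha_bar[t].view(-1, *([1] * (z0.dim() - 1)))
    zt = ab.sqrt() * z0 + (1 - ab).sqrt() * eps       # forward process on latents
    return ((eps - denoiser(zt, t)) ** 2).mean()      # usual objective, in z-space

@torch.no_grad()
def latent_diffusion_sample(decoder, denoise_loop, z_shape):
    z0 = denoise_loop(torch.randn(z_shape))           # sampling procedure on latents
    return decoder(z0)                                # decode generated z0 back to x0
```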
THE REVERSE PROCESS
The reverse process focuses on training a denoising network to remove noise [96]. The denoising network is configured by its network architecture [97] and its output parameterization [98]. To train the configured network, optimization designs have also been developed [52].
Network Architectures
U-Net [99] and Transformer [100] are the two commonly adapted architectures for denoising networks [101]. The denoising network is trained to realize the reverse transitions (Eq. 3). It is theoretically flexible to incorporate a wide variety of architectures that keep the dimensionality unchanged across all transitions [102]. Both U-Net and Transformer have become increasingly popular for their high capacity to model complex relationships in a wide range of applications. While other architectures, such as GANs [103], may also be theoretically compatible as long as dimensions are unchanged, they are often adopted for task-specific purposes, e.g., adopting a GAN for fast generation [48], and may not be generally applicable to other purposes.
U-Net
From a theoretical standpoint, U-Net is a U-shaped encoder-decoder architecture for general purposes. It was originally proposed for image segmentation [99] and later adapted to a large variety of tasks. Its encoder extracts high-level features from data and usually contains down-sampling layers to compress the data. Its decoder leverages such features for different purposes and usually up-samples back to the original dimensionality of the data. This architecture forms an information bottleneck [104] and encourages the network to learn features effectively. Therefore, U-Net has been adopted for a wide variety of tasks.
Various architecture modifications are applied to adapt U-Net as the denoising network. One implementation, known as PixelCNN++ [105] and based on a Wide ResNet [106], is adapted as the denoising network by replacing weight normalization [107] with group normalization [108] for learning efficiency [17]. Cross-attention [109] is introduced for higher capacity [110]. Normalization layers are also explored to inject conditions into diffusion models [38], [53]. Although various architectural modifications have been implemented, the overall architecture of U-Net remains intact [111].
Transformer
Transformer is also an encoder-decoder architecture. Both its encoder and decoder feature self-attention functions, which differentially measure the significance of all inputs regardless of their spatial locations [100]. This enables global dependencies to be captured better in many tasks.
Transformer is increasingly adapted as an alternative architecture for the denoising network. In principle, a transformer can directly substitute a U-Net because it can also maintain the data dimensions [112]. Nevertheless, direct substitution empirically does not yield better quality, because transformers are known to model global relations and may suffer from losing short-range dependencies [113]. Therefore, some U-Net structures are usually added to transformers to retain the benefits of U-Net as much as possible [114]. For example, the extra long skip connections of U-Net also play a central role in transformer-based denoising networks [115]. Despite these improvements, not all U-Net structures are essential; for example, the down-sampling and up-sampling operators are not necessary for transformer-based denoising networks [115].
Transformer-based denoising networks also exhibit extra desirable properties when compared with U-Net architectures. For example, the transformer-based denoising network achieves comparable performance in conditional image generation [116], [117], and better quality with less network complexity in unconditional image generation task [118]. Graph transformers are also explored to capture graph relationships [119]. Transformers also encourage scalability [120], [121], and multi-modality [122] in diffusion models. The transformer encoder is also separately adopted with other task-specific decoders in the denoising network. This is enabled by the disentanglement in an encoder-decoder structure [97]. For example, a strong transformer encoder is used to achieve better performance in image generation [97] and motion synthesis [11].
Parameterizations of the Reverse Mean
The output of the denoising network is applied to parameterize the reverse mean $\mu_\theta(x_t, t)$ in the reverse transition (Eq. 3). The different parameterizations all center on the estimation of the original data x_0. Specifically, the true value of the reverse mean, denoted as μ(x_t, t), is formulated as:
$$\mu(x_t, t) := \frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\, x_t + \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_t)\, x_0}{1-\bar{\alpha}_t}, \tag{13}$$
where x_0 is the original data, which is unavailable during the reverse process. Therefore, x_0 needs to be estimated from the observed perturbed data x_t and the timestep t by the denoising network. One parameterization directly outputs the estimate x̂_0 from the denoising network and replaces x_0 with x̂_0 in Eq. 13. An indirect parameterization designs the denoising network to predict the noise ε̂_t, which is the residual between the unknown x_0 and the observed x_t [17]. Another indirect parameterization takes the probabilistic viewpoint and predicts the score ŝ_t via the denoising network; ŝ_t is the gradient that points towards the unknown x_0 from the current position x_t in data space. Combinations of the aforementioned outputs are also proposed for special tasks. Table 5 compares the different outputs and their corresponding parameterizations. The different outputs of denoising networks are equivalent to each other [18], [54]; please refer to Appendix B for a detailed explanation of the equivalence. While essentially equivalent, each output and its corresponding parameterization shows unique characteristics in particular aspects. Using x̂_0 mainly supports better accuracy in the initial stage of the reverse process, while ε̂_t is preferable in the late stage. Employing ŝ_t avoids computing the normalizing constant, a common problem in the context of distribution modelling. Combining the aforementioned outputs provides the flexibility to retain their respective benefits.

TABLE 5
Comparison of output parameterizations (visualizations of each output omitted).

Data x̂_0: $\mu_\theta(x_t, t) := \frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\, x_t + \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_t)\, \hat{x}_0}{1-\bar{\alpha}_t}$
Score ŝ_t: $dx := \big[f(x, t) - g^2(t)\, \hat{s}_t\big]\, dt + g(t)\, dw$
Noise ε̂_t: $\mu_\theta(x_t, t) := \frac{1}{\sqrt{\alpha_t}}\, x_t - \frac{1-\alpha_t}{\sqrt{\alpha_t(1-\bar{\alpha}_t)}}\, \hat{\epsilon}_t$
Data
Predicting the original data x_0 provides a straightforward denoising direction: x̂_0 indicates a denoising goal towards which x_t should be changed. In particular, given the observation x_t at timestep t, the parameterization is defined as:
$$\mu_\theta(x_t, t) := \frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\, x_t + \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_t)\, \hat{x}_0}{1-\bar{\alpha}_t}, \tag{14}$$
where α_t indicates the noise level at timestep t, as previously defined in Section 2.2.1. Parameterizing with x̂_0 is advantageous at the beginning of the sampling procedure, while it leads to inaccuracy when approaching the end of the sampling procedure. Empirical results show that the estimated mean $\mu_\theta(x_t, t)$, parameterized by x̂_0, is closer to the ground truth μ(x_t, t) at the beginning of the sampling procedure [3], [123]. This is because x̂_0 helps the denoising network with an overall understanding of the global structure [22]. On the contrary, when approaching the end of the sampling procedure, where substantial structures have already been formed and only small noise artifacts need to be removed, finer details are difficult to recover [124]. In other words, the information brought by x̂_0 becomes less effective in this case.
Score
Score is the gradient of the logarithm of a distribution [40]. The gradient indicates the most probable changes between two timesteps [125]. Therefore, as shown in Figure 8, denoising samples by the score forms a trajectory in data space [126]. In particular, given the observed x_t and timestep t, the predicted score is defined as $\hat{s}_t := \nabla_x \log p(x_t)$, and the corresponding parameterization is the reverse SDE:
$$dx := \big[f(x, t) - g^2(t)\, \hat{s}_t\big]\, dt + g(t)\, dw, \tag{15}$$
where f(x, t) and g(t) are the coefficients previously introduced in Section 2.2.2. Predicting the score in the reverse process avoids estimating the constant that normalizes the probability distribution. Instead of representing a probability distribution by its probability values [127], the score computes the gradient of the logarithm of the distribution. This avoids estimating the normalizing constant, which is computationally expensive or sometimes infeasible [128]. In particular, the predicted distribution is usually defined as:
$$p_\theta(x) = \frac{\exp\big(-f_\theta(x)\big)}{Z_\theta}, \tag{16}$$
where $Z_\theta > 0$ is a normalizing constant to be estimated so that $\int p_\theta(x)\, dx = 1$. Predicting the score avoids this problem:
$$\nabla_x \log p_\theta(x) = -\nabla_x f_\theta(x) - \nabla_x \log Z_\theta = -\nabla_x f_\theta(x), \tag{17-19}$$
where $\nabla_x \log Z_\theta = 0$ because $Z_\theta$ is a constant with respect to x.
Noise
Noise estimation predicts the noise added in the forward process. Generally, the predicted noise is scaled according to the noise schedule and then subtracted from the observation [17], [129], as shown in Figure 9. In particular, given the observation at the current timestep, the noise prediction is denoted as ε̂_t and the parameterization is defined as:
$$\mu_\theta(x_t, t) := \frac{1}{\sqrt{\alpha_t}}\, x_t - \frac{1-\alpha_t}{\sqrt{\alpha_t(1-\bar{\alpha}_t)}}\, \hat{\epsilon}_t, \tag{20}$$
where α_t indicates the noise level at timestep t, as previously defined in Section 2.2.1.

Fig. 9. Visualization of the noise-based parameterization: ε̂_t is subtracted from x_t, which results in x_{t−1}.
The consistent magnitude and residual effect of ε̂_t are advantageous. The fixed statistics of the noise, e.g., ε_t ∼ N(0, I), lead to a consistent magnitude to be predicted, which eases the learning of the denoising network [54]. Besides, a residual effect that preserves the input x_t in x_{t−1} is available by predicting zero noise [130]. This becomes increasingly beneficial towards the end of the reverse process, where only minor modifications are needed [124].
A large deviation between the ground-truth noise ε_t and the predicted noise ε̂_t may occur at the beginning of the sampling procedure and is hard to correct in the following timesteps. The sampling procedure starts from samples full of noise, giving the denoising network almost no clue to predict noise accurately [17], which potentially leads to a deviation [124]. The deviation is scaled up by the noise schedule in Eq. 20: the scheduled noise level is usually large at the beginning of the sampling procedure, so even a small noise estimation error is sharply enlarged. Moreover, the denoising network is limited to predicting noise, which has a residual effect in the noise-based parameterization. The magnitude of the potential correction at each timestep is relatively small, and thereby more timesteps are required to correct such a deviation [22].
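The equivalences between the three outputs (derived in Appendix B, Eqs. 29-31 and 36) reduce to one-line conversions at a given timestep; the helper names below are illustrative and all tensors are assumed broadcastable.

```python
# A sketch of converting between the equivalent outputs of Section 4.2;
# ab denotes \bar{alpha}_t at the current timestep.
import torch

def x0_from_eps(xt, eps_hat, ab):       # Eq. 31: x0 = (xt - sqrt(1-ab)*eps)/sqrt(ab)
    return (xt - (1 - ab).sqrt() * eps_hat) / ab.sqrt()

def eps_from_x0(xt, x0_hat, ab):        # inverse relation, from Eq. 29
    return (xt - ab.sqrt() * x0_hat) / (1 - ab).sqrt()

def x0_from_score(xt, score_hat, ab):   # Eq. 36, via Tweedie's formula
    return (xt + (1 - ab) * score_hat) / ab.sqrt()

def score_from_eps(eps_hat, ab):        # score = -eps / sqrt(1 - ab)
    return -eps_hat / (1 - ab).sqrt()
```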
Combinations
Combining two or more predictions is also possible for task-specific benefits. Abstractly, the combination is denoted as $\hat{c}_t := C(\hat{x}_0, \hat{s}_t, \hat{\epsilon}_t)$, where C stands for a combination operator. The corresponding parameterization is:
$$\mu_\theta(x_t, t) = \hat{c}_t. \tag{21}$$
This admits a wide variety of feasible implementations, as both the outputs to be combined and the combination operators can be very diverse [20]. Velocity prediction is one example that linearly combines x̂_0 and ε̂_t [131]; it is designed as:
$$\mu_\theta(x_t, t) := \alpha_t\, \hat{\epsilon}_t - \sigma_t\, \hat{x}_0, \tag{22}$$
where α_t and σ_t are the scaling factor and noise schedule, respectively. Such a combination offers better stability when distilling diffusion models [98]. Combination is also used to avoid noise remaining in x̂_0 [132] or to achieve higher likelihood [133]. Dynamically alternating between x̂_0 and ε̂_t has been found to accelerate generation [124].
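The velocity combination of Eq. 22 can be sketched as follows; the choices α_t = √ᾱ_t and σ_t = √(1-ᾱ_t) follow common practice for v-prediction [131] and are assumptions of this sketch, and the x_0 recovery formula follows algebraically from them.

```python
# A sketch of the velocity combination (Eq. 22), assuming
# alpha_t = sqrt(abar_t) and sigma_t = sqrt(1 - abar_t).
import torch

def v_target(x0, eps, ab):
    """Velocity target: v = alpha_t * eps - sigma_t * x0."""
    return ab.sqrt() * eps - (1 - ab).sqrt() * x0

def x0_from_v(xt, v_hat, ab):
    """Recover x0 from a predicted velocity: x0 = alpha_t * xt - sigma_t * v."""
    return ab.sqrt() * xt - (1 - ab).sqrt() * v_hat
```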
Optimization Designs
Many optimization designs for the learning objectives have been developed to train the denoising network [134], with the reverse variance and the learning weight being the two main factors. Optimizing the reverse variance assists the fitting of the denoising network, while the learning weight controls the attention paid to different learning priorities.
The Reverse Variance
Modelling the reverse variance improves the training efficiency of diffusion models. An appropriate variance minimizes the discrepancy between the predicted reverse transition p_θ(x_{t−1}|x_t) and the forward transition p(x_t|x_{t−1}), fitting the forward process better [135]. This allows fewer timesteps to be used and improves overall efficiency. Many efforts have attempted to model the reverse variance. Some empirically adopt a handcrafted value for each timestep: the noise schedule is a popular option for its simplicity and empirical performance [136], and scaling the schedule by a factor has also been investigated but does not make a large difference [17]. The two choices are considered as upper and lower bounds on the reverse process entropy [1], and an interpolation between them can be learned for flexibility [53]. Others find that the optimal variance can be solved analytically: its formulation is explicitly derived from the predicted score [135] and improves the efficiency of generation [137].
The Learning Weight
Learning priorities are balanced by weights in the learning objective to enhance the learning quality. The change of learning priorities is common in deep learning [138] and has also been observed in the reverse process [139], [140]. Generally, a diffusion model learns features with semantic correspondence [141]: it pays more attention to global structures at the beginning of the reverse process [142] and shifts to local details when approaching the end [143]. Besides, it is shown to understand high-level phenomena like compositionality [144]. A balance is achievable by adjusting the weights and is beneficial for training [145]. Weights are usually related to the noise schedule. Directly using the schedule as the weight emphasizes learning the global structure better, through a larger learning weight at the beginning of the reverse process [16], [146]. While simple to use, a pre-defined schedule is not flexible and may deviate from the actual demands. A function of the noise schedule, such as the signal-to-noise ratio (SNR), can instead be designed to compute the weights: the actual remaining noise is measured rather than the scheduled one [54]. It takes the data into account and better balances the learning of local details and global structures [143].
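An SNR-based weighting can be sketched as below, with SNR(t) = ᾱ_t/(1-ᾱ_t); the exact weighting function varies across papers, so this particular form is an assumption.

```python
# A sketch of weighting the objective by the signal-to-noise ratio of the
# noise schedule; `model` is an assumed epsilon-predicting network.
import torch

def snr(alpha_bar: torch.Tensor) -> torch.Tensor:
    return alpha_bar / (1.0 - alpha_bar)              # SNR(t) = abar_t / (1 - abar_t)

def snr_weighted_loss(model, x0, t, alpha_bar):
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    w = snr(alpha_bar[t])                             # weight measures remaining signal
    per_sample = ((eps - model(xt, t)) ** 2).flatten(1).mean(dim=1)
    return (w * per_sample).mean()
```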
SAMPLING PROCEDURE
The sampling procedure in a diffusion model usually follows the transitions of the reverse process but uses the trained denoising network. It moves backwards on the chain and is responsible for transforming a sample x_T from the terminal distribution p(x_T) into new data $x_0^* \sim p_{\theta^*}(x_0) \approx p(x_0)$.
Conditional and fast generation are two focus areas of the sampling procedure in diffusion models. Without modelling conditions, diffusion models usually do not generate high-quality data when the data follow a conditional distribution [38], [147]; e.g., images from LSUN [148] with ten scene categories are considered to follow a conditional distribution. Effective guidance mechanisms are designed to modify the transitions in the sampling procedure to be compatible with conditions. Furthermore, the sampling procedure is several times slower than that of other generative models [149]. The long generation time is mainly attributed to the large number of timesteps [61]. Thus, acceleration designs are explored to reduce the timesteps without heavily impairing quality.
Guidance Mechanisms
A guidance mechanism modifies the denoising direction. It corrects the unconditional direction based on the given conditions [150], thereby reducing the discrepancy between the modified and the true conditional distributions [151], [152]. The condition c ∈ C can be diverse, e.g., images [153], texts [154], or 2D poses [155]. Without loss of generality, guidance is discussed using the score as the output.
A wide variety of mechanisms have been proposed to incorporate a condition c into diffusion models. As temporal conditions, i.e., timesteps t, are inherently compatible with diffusion models [17], vanilla guidance extends them to incorporate c through operations such as addition [38]. Despite its convenience, such a guidance mechanism is weak and sometimes does not work well in modifying the denoising direction [22]. To achieve an effective and adjustable strength of conditions, classifier guidance leverages an extra pre-trained classifier to change the denoising direction [38]; the strength is adjusted by scaling the change with a weight. Nonetheless, training a classifier on noisy data leads to extra cost and training instability [156]. To avoid such problems, classifier-free guidance combines vanilla guidance and an unconditional model for guidance. Moreover, learning the modification as guidance, instead of manually deriving it, is emerging for its flexibility [155].
Vanilla Guidance
Vanilla guidance incorporates the given condition c jointly with the timestep t as the guidance. A timestep t itself is inherently taken as a condition by the denoising network [157], [158]. A variety of operations, such as addition [159], [160], [161], [162] and attention layers [163], [164], [165], [166], are available for this guidance mechanism. Figure 11 shows the condition being added to a timestep in this mechanism.

Fig. 11. Vanilla guidance merges the condition c with the denoising network θ* by adding it to each timestep t.
While vanilla guidance is simple, its effectiveness is undermined by the lack of adjustable conditional strength [156]. Empirical evidence shows that a conditional diffusion model trained with vanilla guidance may not conform to the conditions or may under-perform in conditional generation [22].
Classifier Guidance
For an effective and adjustable strength of conditions, an extra classifier with a weight is trained for classifier guidance. The gradient of the classifier is scaled by the weight and then used to modify the unconditional denoising direction, as shown in Figure 12. In other words, the weight controls how much the gradient-based modification is encouraged [167]. To obtain a gradient that is as accurate as possible, the classifier is trained on data with noise at each timestep. In particular, classifier guidance is formulated as:
$$\nabla_x \log p(x|c) = \nabla_x \log p(x) + w\, \nabla_x \log p(c|x), \tag{23}$$
where $\nabla_x \log p(x|c)$ and $\nabla_x \log p(x)$ are the conditional and unconditional scores, respectively, $\nabla_x \log p(c|x)$ is the gradient of a classifier, and w is the weight. When w = 0, this mechanism becomes unconditional. As the weight increases, the denoising network is increasingly constrained to produce samples that satisfy the conditions. Classifier guidance provides control in a way similar to gradient-based adversarial attacks [168], [169]: a sample is updated to satisfy the classifier by using the classifier gradient [156]. Such updates increase the probability that the classifier assigns a high likelihood to the correct label for the sample [170]. Furthermore, different pre-trained classifiers can be applied in a plug-and-play manner [171].
Additionally learning a classifier, however, may lead to extra cost and training instability. The extra expense is further scaled up because the classifier is trained on data at every scheduled noise level [156]. Moreover, training the classifier on noisy data tends to be unstable [172], [173]: the data structure is almost destroyed as more and larger noise is added according to the noise schedule. Therefore, the quality of the classifier gradient may not be consistent [174]. Sometimes its direction is arbitrary or even opposite [175], [176], leading to less effective or wrong guidance [177].
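Eq. 23 can be sketched as below; `classifier` is an assumed noise-aware classifier returning class logits for a noisy input, and the interface is an assumption of this sketch.

```python
# A sketch of classifier guidance (Eq. 23): the unconditional score is
# shifted by the scaled gradient of a noise-aware classifier.
import torch

def classifier_guided_score(score_net, classifier, xt, t, label, w=1.0):
    uncond = score_net(xt, t)                            # grad_x log p(x)
    with torch.enable_grad():
        x = xt.detach().requires_grad_(True)
        log_p = classifier(x, t).log_softmax(dim=-1)     # log p(c | x) over classes
        selected = log_p[torch.arange(x.shape[0]), label].sum()
        grad = torch.autograd.grad(selected, x)[0]       # grad_x log p(c | x)
    return uncond + w * grad                             # Eq. 23; w scales the guidance
```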
Classifier-Free Guidance
To avoid the extra classifier, classifier-free guidance replaces the classifier with a mixture of an unconditional model and vanilla guidance. It encourages the model in the direction of the guidance and simultaneously discourages it from the unconditional direction [178]. As shown in Figure 13, instead of training two models, a conditional model and an unconditional one are formulated uniformly by dropping out the condition c with a probability p [156]. The two models are learned jointly as if they were a single conditional model [22]. In particular, classifier-free guidance is formulated as:

$$\nabla_x \log p(x|c) = w\, \nabla_x \log p(x|c) + (1 - w)\, \nabla_x \log p(x), \tag{24}$$

where w is the weight of the conditions and the left-hand side denotes the modified, guided score. The weight is slightly different from its counterpart in classifier guidance. When w = 0, classifier-free guidance reduces to the unconditional model without vanilla guidance. Vanilla guidance is the special case w = 1, where the unconditional model is suppressed and conditions are incorporated through vanilla guidance [35]. If w > 1, classifier-free guidance restrains the unconditional model and further prioritizes the conditions through the larger weight: the guided score deviates quickly from the unconditional score, and thus samples that better satisfy the conditions are generated [179]. Instead of removing classifiers, self-guidance [150] reduces or removes the requirement of annotation by using internal values like activations and attention maps [180] to compute the guidance. Such a design enables finer-grained control and is compatible with the aforementioned guidance mechanisms.
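Eq. 24 amounts to evaluating one network twice, with and without the condition; in the sketch below, `null_token` marks the dropped-out condition, and the exact interface of `score_net` is an assumption.

```python
# A sketch of classifier-free guidance (Eq. 24) with a single jointly
# trained network; `null_token` stands for the dropped-out condition.
import torch

def cfg_score(score_net, xt, t, cond, null_token, w=3.0):
    s_cond = score_net(xt, t, cond)             # conditional score
    s_uncond = score_net(xt, t, null_token)     # unconditional score (c dropped out)
    return w * s_cond + (1.0 - w) * s_uncond    # Eq. 24; w = 1 recovers vanilla guidance
```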
Learned Modifications
Modifications for guidance can be learned for more flexibility. A network is deployed to directly learn the modification to the output of an unconditional network [181]. This is more flexible than the aforementioned mechanisms because fewer manual designs are involved. For example, a pre-trained unconditional denoising network is commonly augmented by an identical copy of itself, including the parameters. The outputs of the two networks are usually added, though more complex operations are also possible. During training, only the copied network is updated, so it automatically learns suitable modifications that correct the output of the pre-trained network. Although fixing the original network and copying its parameters to the identical one avoids complete retraining, learning the copied network may still be difficult due to the large number of parameters [182]. Nevertheless, this flexibility greatly encourages the pursuit of unified and multi-modality guidance [183].
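A hedged sketch of this copied-network design follows; the zero-initialized projection and the way the condition enters are assumptions inspired by common practice, not details stated in this survey.

```python
# An illustrative sketch of learned guidance via a trainable copy of a
# frozen pre-trained network; interfaces and shapes are assumptions.
import copy
import torch
import torch.nn as nn

class GuidedDenoiser(nn.Module):
    def __init__(self, pretrained: nn.Module, out_dim: int):
        super().__init__()
        self.frozen = pretrained.eval()              # original network, kept fixed
        for p in self.frozen.parameters():
            p.requires_grad_(False)
        self.trainable_copy = copy.deepcopy(pretrained)   # identical trainable copy
        self.proj = nn.Conv2d(out_dim, out_dim, 1)        # learned correction head
        nn.init.zeros_(self.proj.weight)                  # start with zero modification
        nn.init.zeros_(self.proj.bias)

    def forward(self, xt, t, cond):
        base = self.frozen(xt, t)                         # unconditional output
        corr = self.proj(self.trainable_copy(xt + cond, t))  # learned modification
        return base + corr                                # outputs are added
```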
Acceleration Designs
Reducing the number of timesteps for generation is the main goal of acceleration. Generally, the denoising network needs to wait for the result from timestep t+1 to accomplish the transition at the current timestep t [61]. Inference is significantly slowed down, especially when a large number of timesteps is required in the sampling procedure [48]. Efforts have been contributed to reduce the timesteps. Truncation directly cuts the sampling procedure at a certain timestep [69]. Knowledge distillation is adopted to learn a student model that has fewer timesteps in its sampling procedure [184]. Selection strategies are also developed to select a subset of timesteps for fast generation [61].
Truncation
Truncation involves a partial sampling procedure with an extra network. It usually selects an intermediate timestep t′ and obtains a sample from the corresponding distribution p(x_{t′}) for generation [185], as shown in Figure 15. In other words, the process truncates the whole chain at t′, and thereby fewer timesteps remain in the partial chain [186]. An extra network needs to be trained additionally to model p(x_{t′}), which may not be tractable [17]. Overall, truncation is theoretically effective for acceleration [65], as proved by stochastic contraction theory [187]. Truncation effects can be two-sided. On the one hand, truncation comes with several benefits. Not only inference but also training can be accelerated [68], as both the forward and reverse processes require no computation at the truncated timesteps. Truncation also strikes a balance between acceleration and quality: it has an adaptable intercepting point to balance generation quality and efficiency, whose selection depends on the data complexity [68] and the degree of corruption [65]. Besides, truncation takes advantage of the properties of the involved extra network, which is often another generative model [69]. On the other hand, truncation may lead to an increased training expense, because the extra network needs to learn p(x_{t′}) as accurately as possible [69].
Knowledge Distillation
Knowledge distillation is a network compression technique. It involves compressing an expensive but high-performing teacher model into a smaller student model [188]. After the training, the student network can perform as well as the teacher with a smaller model size.
The same idea is applied to learn a new sampling procedure with fewer timesteps. For diffusion models, the original sampling procedure serves as the teacher model and a new one with fewer timesteps as the student model [189], [190]. Figure 16 shows an example of progressive distillation of the sampling procedure, where knowledge distillation merges several timesteps into a single timestep of the new sampling procedure. Directly distilling all timesteps of the teacher sampling procedure into a single timestep of the student theoretically reduces generation time significantly [191], [192]. However, this relies on collecting a large dataset of samples from the teacher model for knowledge distillation, which is itself computationally expensive [193], [194]. Instead, the number of timesteps can be reduced progressively [98]: after the student sampling procedure is trained to merge two timesteps of the teacher sampling procedure, it becomes the teacher, and a new student procedure is trained to further reduce the number of sampling steps [195].
Selection Strategies
Many strategies are developed to select a subset of timesteps without undermining quality, thereby forming a shorter sampling procedure [196], [197]. Timesteps around the end of the sampling procedure influence quality less and are often dropped [198]. Figure 17 shows a shorter sampling procedure with selected timesteps. Some strategies may not directly reduce the number of timesteps but select subsets of timesteps for parallel computation [199]; in this way, each single processor deals with fewer timesteps.
Applying differential equation solvers is a popular strategy for diffusion models with continuous formulations. These solvers are usually based on well-established mathematical machinery to handle adaptive step sizes and noise schedules. As the continuous formulation itself is based on SDEs, such solvers are straightforward to adopt. Many existing SDE solvers are available for the sampling procedure, like the stochastic Runge-Kutta solver [5], the Diffusion Exponential Integrator Sampler (DEIS) [200], and Itô-Taylor sampling by higher-order numerical schemes [201]. Solvers do not necessarily select timesteps uniformly [149]: a dynamic step size is used in the Euler-Maruyama (EM) solver [5], and the selection sometimes also improves quality [202]. More general strategies are developed for both formulations. They sometimes also modify the forward and reverse processes for fast inference, which requires retraining the diffusion model. Some strategies leverage particular mathematical tools for acceleration: a non-Markovian reverse process enables few-step sampling [136], which is theoretically equivalent to the probability flow sampler [5]; an exact analytical solution of diffusion models has been derived, but sacrifices stochasticity [203]; and a Taylor expansion process is applied to disregard non-contributory diffusion steps [204]. Others rely on extra networks or algorithms for fast sampling. Some strategies modify the noise schedule: if continuous noise levels are used, an extra network can adjust the noise levels of a few-step discrete-time reverse diffusion process [50], [205]. The selection of timesteps can also be learned directly by extra networks [206], [207], and a dynamic programming algorithm [208] has been designed to reduce the number of timesteps.
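A deterministic update on a selected subset of timesteps, in the spirit of the non-Markovian reverse process of [136], can be sketched as below; `model` is an assumed ε-predicting network, and the uniform subset is just one possible selection strategy.

```python
# A sketch of few-step sampling on a selected subset of timesteps with a
# deterministic DDIM-style update (eta = 0); names are illustrative.
import torch

@torch.no_grad()
def few_step_sample(model, shape, alpha_bar, n_steps=50, T=1000):
    steps = torch.linspace(T - 1, 0, n_steps).long()       # selected subset of timesteps
    x = torch.randn(shape)                                  # x_T ~ N(0, I)
    for i in range(n_steps - 1):
        t, t_prev = int(steps[i]), int(steps[i + 1])
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t_prev]
        eps_hat = model(x, torch.full((shape[0],), t, dtype=torch.long))
        x0_hat = (x - (1 - ab_t).sqrt() * eps_hat) / ab_t.sqrt()      # Eq. 31
        x = ab_prev.sqrt() * x0_hat + (1 - ab_prev).sqrt() * eps_hat  # deterministic jump
    return x
```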
FUTURE TRENDS
We provide our insights on emerging or important trends in this field to encourage more studies in the future.
Theory
The theory of diffusion models can inspire significant future research. Diffusion models are heavily based on mathematical formulations [19], which facilitates future improvements from mathematical perspectives. For example, deriving new formulations for destroying data dimensions [209] leads to a generalized diffusion model. Building connections with well-established fields is also beneficial for achieving a better understanding of the theory of diffusion models [210], [211]. Explainable techniques can also assist the understanding of diffusion model theory [212]. Besides, the success of diffusion models highlights the virtue of auto-regressive generation, where a self-correction mechanism enables better quality [213], [214]. This will contribute to the theoretical design of generative models in the future.
Architecture
The improvement of the architecture in diffusion models has a lot of future potential. Current backbones of denoising networks are mainly U-Net [17] and Transformer [116]. They have achieved impressive results but still have inherent drawbacks in some applications. At the same time, a wide variety of well-established network architectures are available in machine learning with appealing advantages. Applying and adapting these architectures as the denoising network will introduce additional benefits and unleash the potential of diffusion models [215]. Compression of the architecture to fewer parameters is being actively researched [216]. The conditioning mechanism is also developing fast towards multi-grained guidance. Besides the backbone choice, optimization is also a promising avenue. A deeper understanding of model behaviour has been developed as more experiments are conducted and more theories are proposed [22]. For example, reinforcement-learning strategies have been explored to train diffusion models [217]. These understandings will facilitate improving training speed and efficiency in the future.
Data
Developing data-efficient diffusion models is an important future trend. Conventionally, diffusion models implicitly assume abundant data to train on, especially labelled data for conditional models [218]. This assumption may be violated when acquiring data is expensive or sometimes impossible. Some research explores improving training on low-quality data [219], [220], [221]. Other research seeks to combine few-shot learning to efficiently leverage the limited data for training [222], [223]. Several pioneering explorations also consider one-shot [224], [225] and zero-shot [226], [227] learning with diffusion models, further reducing the dependency on abundant training data [228]. These attempts highlight the possibility of applying diffusion models in the absence of data. Besides, changing the form of supervision will help train conditional models with limited labelled data. Currently, unsupervised [229], self-supervised [230], and semi-supervised [231] manners have been explored to train conditional diffusion models. They have shown encouraging results in different areas and will facilitate future exploration in the absence of labelled data. Their success also indicates future opportunities for applying other available supervision schemes, like weakly-supervised learning [232], to conditional diffusion models.
Supporting multiple data modalities in a diffusion model also has broad prospects. Incorporating information from several modalities will greatly promote the generative power and applicable flexibility of diffusion models [122]. Currently, many efforts focus on two modalities, usually leveraging text as an extra modality in addition to the data of interest [233]. Initial attempts have also supported other modalities from a wide variety, like human pose, for the guidance in conditional diffusion models [155]. In the future, exploring other modalities is still worth more effort for better flexibility. More importantly, finding a way to incorporate three or more data modalities in a single diffusion model is another potential boost to their generative power [234], [235].
Applications
A wider range of applications beyond generation is promising. Since diffusion models were proposed, they have been applied to a large variety of tasks [236] because of their powerful generative ability. For example, diffusion models have recently been applied to policy representation in reinforcement learning [237], neural architecture search [238], and named entity recognition [239]. Data generated by diffusion models are also employed as a proxy for training new models when the original data are limited [240], [241]. Adapting pre-trained diffusion models to another domain is also popular, e.g., applying an image model to video generation [242]. Recent advances in diffusion models have also witnessed a significant increase in successful interdisciplinary applications, especially AI for Science (AI4Science) [243]. Diffusion models have increasingly been combined with physics [244], [245], chemistry [246], [247], etc. These works not only apply and adapt diffusion models to solve problems in those domains, but also leverage domain knowledge to theoretically improve diffusion models [248], [249], [250]. Expanding the application scenarios of diffusion models will bring more opportunities to both academia and industry.
Diffusion models are also actively investigated for and against adversarial attacks. Diffusion models themselves are analysed against adversarial attacks, usually measured by robustness: diffusion models with higher robustness work vigorously and consistently [251]. Developing diffusion models with higher robustness has attracted more and more public attention for defending against adversarial attacks [252], [253]. Diffusion models are also deployed to help other systems defend against adversarial attacks [254], [255], [256], [257], and are employed to attack other systems by generating adversarial samples [258], [259].
Societal Impacts
Despite many successful applications, the potential misuse of diffusion models needs to be regulated. Benefiting from academic and industrial efforts, diffusion models have become more and more powerful at producing synthetic data, which may be increasingly difficult to distinguish from real-world data. While synthetic data reduce the cost of obtaining real data in some fields, this ability may be abused with harmful societal impacts [260]. For example, misinformation and spam can be spread via manipulated data [261], [262]. Distinguishing these manipulated data becomes more expensive and even more difficult [263]. This requires continuous investment in techniques and policies as diffusion models keep developing swiftly.
Pre-trained diffusion models may carry a risk of data leakage. Concerns about data privacy are quickly gaining public attention due to the possibility of recovering training data from pre-trained diffusion models [264], [265]. Such data may contain confidential or personal information [266]. Training on privacy-sensitive data requires additional safety considerations and efforts in the future [267].
Reproducing or exacerbating biases through diffusion models is another concern. Diffusion models can be biased in data generation for various reasons, such as biased datasets [268], [269]. They are found to be prone to replicating their training data because such data usually have higher likelihoods [270], [271]. This preference may lead to a systematically biased model when the training data themselves are biased [272]. Biased models present users with unfair stereotypes of some concepts and lead to harmful societal impacts [273]. For example, a diffusion model trained for face generation on the FFHQ dataset [274] (whose images are collected mainly from people between the ages of 21 and 40) will be more inclined to generate results from that age group and less supportive of other age groups. More efforts are required in the future to encourage fair and unbiased diffusion models.
CONCLUSION
Diffusion models are an emerging type of deep generative model involving three main components: a forward process and a reverse process for optimization, and a sampling procedure for generation. Diffusion models have become popular for their high-quality results in various tasks, and their success is closely related to various design fundamentals of the three components. The forward process perturbs p(x_0) towards p(x_T) by gradually adding noise to x_0 through the transitions p(x_t | x_{t−1}); it is designed around noise injection and a multi-step chain of transitions. The reverse process trains a denoising network to remove the noise and involves both network-related and optimization-related choices. The sampling procedure uses the trained denoising network for generation and mainly concerns guidance and acceleration: to achieve conditional generation and fast sampling, guidance and acceleration techniques are designed for this procedure. These designs have all contributed to the current powerful diffusion models. Several emerging and important future trends have been introduced to further boost this field.
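To make the three components concrete, the following minimal sketch (an illustrative example, not code from any cited work; the linear schedule, toy 1-D data, ϵ-prediction parameterization, network size, and training budget are all assumptions chosen for brevity) trains a denoising network on a two-mode toy distribution and then generates samples by ancestral sampling:

```python
# Minimal diffusion model: forward process, reverse process, sampling procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 200
betas = torch.linspace(1e-4, 0.02, T)     # assumed linear noise schedule
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)       # \bar{alpha}_t

# Denoising network: predicts the injected noise from (x_t, t/T).
net = nn.Sequential(nn.Linear(2, 64), nn.SiLU(),
                    nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample_data(n):                       # toy q(x_0): two modes at -2 and +2
    return torch.randint(0, 2, (n, 1)).float() * 4 - 2 + 0.1 * torch.randn(n, 1)

for step in range(2000):                  # reverse process: train the denoiser
    x0 = sample_data(256)
    t = torch.randint(0, T, (256, 1))
    eps = torch.randn_like(x0)
    xt = abar[t].sqrt() * x0 + (1 - abar[t]).sqrt() * eps   # forward perturbation
    pred = net(torch.cat([xt, t / T], dim=1))
    loss = ((pred - eps) ** 2).mean()                       # epsilon-prediction loss
    opt.zero_grad(); loss.backward(); opt.step()

x = torch.randn(1000, 1)                  # sampling procedure: ancestral sampling
for t in reversed(range(T)):
    with torch.no_grad():
        eps_hat = net(torch.cat([x, torch.full_like(x, t / T)], dim=1))
    mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
    x = mean + (betas[t].sqrt() * torch.randn_like(x) if t > 0 else 0)
```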
APPENDIX A DERIVATION OF THE FORWARD PROCESS
According to the definition of the diffusion process, we have:

x_t = √α_t x_{t−1} + √β_t ϵ_t,    (25)

where each ϵ_t is an i.i.d. standard Gaussian. Then, by recursion, we have:

x_t = √(α_t α_{t−1}) x_{t−2} + √(α_t β_{t−1}) ϵ_{t−1} + √β_t ϵ_t
    = √(α_t α_{t−1} α_{t−2}) x_{t−3} + √(α_t α_{t−1} β_{t−2}) ϵ_{t−2} + √(α_t β_{t−1}) ϵ_{t−1} + √β_t ϵ_t
    = ···
    = √ᾱ_t x_0 + √(α_t α_{t−1} ··· α_2 β_1) ϵ_1 + ··· + √(α_t β_{t−1}) ϵ_{t−1} + √β_t ϵ_t.    (26)

As a result, q(x_t | x_0) is still Gaussian. Its mean vector is √ᾱ_t x_0, and its covariance matrix is (α_t α_{t−1} ··· α_2 β_1 + ··· + α_t β_{t−1} + β_t) I = (1 − ᾱ_t) I. Formally, we have:

x_t = √ᾱ_t x_0 + √(1 − ᾱ_t) ϵ_t,    (27)

and, from a probabilistic perspective:

q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I),    (28)

which is the same as Eq. 7.
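As a quick sanity check of this derivation, the following snippet (a minimal illustrative sketch; the linear beta schedule, step count, and toy scalar sample are assumptions, not taken from any cited work) simulates the stepwise chain of Eq. 25 and compares the empirical moments of x_T with the closed form of Eq. 27:

```python
# Monte-Carlo check of the closed form q(x_t | x_0) (Eqs. 25 and 27).
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule, beta_t = 1 - alpha_t
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # \bar{alpha}_t

x0 = 1.7                             # a fixed scalar "training sample"
n = 200_000                          # number of independent chains

# Simulate x_t = sqrt(alpha_t) x_{t-1} + sqrt(beta_t) eps_t for t = 1..T (Eq. 25).
x = np.full(n, x0)
for t in range(T):
    x = np.sqrt(alphas[t]) * x + np.sqrt(betas[t]) * rng.standard_normal(n)

# The empirical moments should match sqrt(abar_T) x0 and 1 - abar_T (Eq. 27).
print("mean:", x.mean(), "vs", np.sqrt(alpha_bar[-1]) * x0)
print("var :", x.var(),  "vs", 1.0 - alpha_bar[-1])
```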
APPENDIX B PROOF OF EQUIVALENT OUTPUTS
For completeness, we provide the proof of equivalent output representations; the original one can be found in [22]. The distribution formulation of q(x_t | x_0) (Eq. 7) can be explicitly written as:

x_t = √ᾱ_t x_0 + √(1 − ᾱ_t) ϵ_t,    (29)

and can be reorganized as:

x_0 = (x_t − √(1 − ᾱ_t) ϵ_t) / √ᾱ_t,    (30)

where x_0 depends on ϵ_t when x_t is given. This allows predicting ϵ_t for an indirect estimation of x_0. Therefore, we have:

x̂_0 = (x_t − √(1 − ᾱ_t) ϵ̂_t) / √ᾱ_t,    (31)

which shows the equivalence of predicting x_0 and ϵ_t. Mathematically, Tweedie's formula for a Gaussian variable states that:

E[µ_z | z] = z + Σ_z ∇_z log p(z),    (32)

where µ_z and Σ_z are the mean and variance of z, respectively. From Eq. 7, we know its mean is:

µ_{x_t} = √ᾱ_t x_0,    (33)

and its variance is:

Σ_{x_t} = (1 − ᾱ_t) I.    (34)

Then, by Tweedie's formula, we have:

√ᾱ_t x_0 = x_t + (1 − ᾱ_t) ∇_{x_t} log p(x_t),    (35)

where x_0 depends on the score ∇_{x_t} log p(x_t). Therefore, an estimation of the score derives an estimation of x_0:

x̂_0 = (x_t + (1 − ᾱ_t) ∇_{x_t} log p(x_t)) / √ᾱ_t.    (36)
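This equivalence can be verified numerically. In the sketch below (an illustrative example; the value of ᾱ_t and the toy dimensionality are assumptions), the exact conditional score ∇_{x_t} log q(x_t | x_0) = −ϵ_t / √(1 − ᾱ_t) is plugged into Eq. 36, and both Eq. 30 and Eq. 36 recover the same x_0:

```python
# Check that Eq. 30 (noise-based) and Eq. 36 (score-based) agree.
import numpy as np

rng = np.random.default_rng(0)
abar_t = 0.35                        # assumed value of \bar{alpha}_t at some t
x0 = rng.standard_normal(5)          # a toy 5-dimensional sample
eps = rng.standard_normal(5)

x_t = np.sqrt(abar_t) * x0 + np.sqrt(1 - abar_t) * eps               # Eq. 29

x0_from_eps = (x_t - np.sqrt(1 - abar_t) * eps) / np.sqrt(abar_t)    # Eq. 30

score = -eps / np.sqrt(1 - abar_t)   # exact score of q(x_t | x_0)
x0_from_score = (x_t + (1 - abar_t) * score) / np.sqrt(abar_t)       # Eq. 36

assert np.allclose(x0_from_eps, x0) and np.allclose(x0_from_score, x0)
```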
APPENDIX C EQUIVALENT OPTIMIZATION OBJECTIVES
For completeness, we provide the proof that the two objectives are equivalent for optimization; the original proof can be found in the appendix of [58]. Here x denotes a clean sample from q_0 and x̃ its perturbed version, so that q_σ(x̃, x) = q_0(x) q_σ(x̃|x) and q_σ(x̃) = ∫ q_0(x) q_σ(x̃|x) dx. In principle, diffusion models should be optimized with the unconditional form:

J_uncon(θ) = E_{q_σ(x̃)} [ ‖s_θ(x̃) − ∂ log q_σ(x̃)/∂x̃‖² ],    (37)

where s_θ is the prediction of our network with learnable parameters θ. However, they are usually optimized with the conditional form:

J_con(θ) = E_{q_σ(x̃,x)} [ ‖s_θ(x̃) − ∂ log q_σ(x̃|x)/∂x̃‖² ].    (38)

We can prove that J_uncon(θ) = J_con(θ) − C_2 + C_1, where C_1 and C_2 are both constants that do not depend on θ. Given the unconditional form J_uncon(θ):

J_uncon(θ) = E_{q_σ(x̃)} [ ‖s_θ(x̃) − ∂ log q_σ(x̃)/∂x̃‖² ]
           = E_{q_σ(x̃)} [ ‖s_θ(x̃)‖² ] − 2φ(θ) + C_1,    (39)

where C_1 = E_{q_σ(x̃)} [ ‖∂ log q_σ(x̃)/∂x̃‖² ] is a constant that does not depend on θ, and

φ(θ) = E_{q_σ(x̃)} [ ⟨s_θ(x̃), ∂ log q_σ(x̃)/∂x̃⟩ ]
     = ∫ q_σ(x̃) ⟨s_θ(x̃), ∂ log q_σ(x̃)/∂x̃⟩ dx̃
     = ∫ q_σ(x̃) ⟨s_θ(x̃), (∂q_σ(x̃)/∂x̃) / q_σ(x̃)⟩ dx̃
     = ∫ ⟨s_θ(x̃), ∂q_σ(x̃)/∂x̃⟩ dx̃
     = ∫ ⟨s_θ(x̃), (∂/∂x̃) ∫ q_0(x) q_σ(x̃|x) dx⟩ dx̃
     = ∫ ⟨s_θ(x̃), ∫ q_0(x) (∂q_σ(x̃|x)/∂x̃) dx⟩ dx̃
     = ∫ ⟨s_θ(x̃), ∫ q_0(x) q_σ(x̃|x) (∂ log q_σ(x̃|x)/∂x̃) dx⟩ dx̃
     = ∫∫ q_0(x) q_σ(x̃|x) ⟨s_θ(x̃), ∂ log q_σ(x̃|x)/∂x̃⟩ dx dx̃
     = ∫∫ q_σ(x̃, x) ⟨s_θ(x̃), ∂ log q_σ(x̃|x)/∂x̃⟩ dx dx̃
     = E_{q_σ(x̃,x)} [ ⟨s_θ(x̃), ∂ log q_σ(x̃|x)/∂x̃⟩ ].    (40)

Therefore, the unconditional form can be written as:

J_uncon(θ) = E_{q_σ(x̃)} [ ‖s_θ(x̃)‖² ] − 2 E_{q_σ(x̃,x)} [ ⟨s_θ(x̃), ∂ log q_σ(x̃|x)/∂x̃⟩ ] + C_1.    (41)

At the same time, the conditional form can be written as:

J_con(θ) = E_{q_σ(x̃,x)} [ ‖s_θ(x̃) − ∂ log q_σ(x̃|x)/∂x̃‖² ]
         = E_{q_σ(x̃)} [ ‖s_θ(x̃)‖² ] − 2 E_{q_σ(x̃,x)} [ ⟨s_θ(x̃), ∂ log q_σ(x̃|x)/∂x̃⟩ ] + C_2,    (42)

where C_2 = E_{q_σ(x̃,x)} [ ‖∂ log q_σ(x̃|x)/∂x̃‖² ]. In conclusion, the two objectives are equivalent in optimizing θ because J_uncon(θ) = J_con(θ) − C_2 + C_1.
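The constant gap between the two objectives can also be observed empirically. The sketch below is a toy illustration (the four-point dataset, the Gaussian perturbation kernel q_σ(x̃|x) = N(x̃; x, σ²), and the one-parameter "network" are all assumptions); it estimates both objectives by Monte Carlo and shows that their difference stays roughly constant (≈ C_1 − C_2) for every θ, up to sampling error:

```python
# Empirical check that J_uncon(theta) - J_con(theta) does not depend on theta.
import numpy as np

rng = np.random.default_rng(0)
data = np.array([-2.0, 0.5, 1.0, 3.0])   # toy dataset; q_0 is its empirical dist.
sigma = 0.7
n = 400_000

# Draw (x, x_tilde) ~ q_sigma(x_tilde, x): pick a data point, add Gaussian noise.
x = rng.choice(data, size=n)
x_tilde = x + sigma * rng.standard_normal(n)

# Conditional score of q_sigma(x_tilde | x) = N(x_tilde; x, sigma^2).
cond_score = -(x_tilde - x) / sigma**2

# Exact marginal score of q_sigma(x_tilde) = (1/N) sum_i N(x_tilde; x_i, sigma^2).
d = x_tilde[:, None] - data[None, :]
w = np.exp(-0.5 * (d / sigma) ** 2)
w /= w.sum(axis=1, keepdims=True)
marg_score = (w * (-d / sigma**2)).sum(axis=1)

# A one-parameter "network" s_theta(x_tilde) = -theta * x_tilde.
for theta in [0.2, 1.0, 5.0]:
    s = -theta * x_tilde
    J_uncon = np.mean((s - marg_score) ** 2)
    J_con = np.mean((s - cond_score) ** 2)
    print(theta, J_uncon - J_con)        # ~ the same value for every theta
```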
Fig. 1. The overview of diffusion models. The forward process, the reverse process, and the sampling procedure are the three core components of diffusion models, which are responsible for adding noise, training networks, and generating samples, respectively.

Fig. 2. The forward process perturbs the original unknown distribution by gradually adding noise to a given set of data samples through a chain of distribution transitions with multiple time steps. Each time step of the chain is denoted by a circle.

Fig. 3. The reverse process trains a neural network θ to recursively remove the noise that has been previously added by the forward process.

Fig. 5. The transition chain no longer seeks an isotropic Gaussian distribution as the terminal distribution. The grey, dashed parts represent that the transition no longer approaches the isotropic Gaussian distribution.

Fig. 6. A systematic method allows the forward process to separately transform the original data in orthogonal subspaces.

Fig. 7. The transition chain in a latent space. ψ* is a pre-trained encoder. Data are no longer manipulated in the original space (dashed, grey); they are now transformed within the latent space (rounded rectangle).

Fig. 8. Visualization of the trajectory obtained by predicting the score. A score is a direction for the next timestep; samples are denoised along this direction at each position. Colors represent the trajectories of different samples.

Fig. 10. Learning priority changes in the reverse process, which is denoted by different colours.

Fig. 12. Classifier guidance leverages an extra classifier network ξ* to compute a gradient ∇ as the modification on the denoising network θ*. The timestep condition t is omitted here for visualization.

Fig. 13. Classifier-free guidance is based on a mixture of vanilla guidance and an unconditional model θ*. A probability p controls whether to drop out the conditions during training.

Fig. 14. Applying an extra network ϑ to directly learn the required modification for guidance. The timestep condition is omitted here.

Fig. 15. The sampling procedure is truncated and starts from a selected timestep. The grey, dashed parts are discarded for generation.

Fig. 16. Knowledge distillation learns student denoising networks δ and η with fewer timesteps, based on the teacher denoising network θ*.

Fig. 17. Selection strategies for the sampling procedure skip the selected timesteps for generation.
TABLE 1
Notations and terminologies.

Notation      Terminology
E             Expectation
N(x; µ, Σ)    Variable x follows the Gaussian distribution with mean µ and variance Σ
x_0           Training sample
t             Timestep
T             Total number of timesteps
c             Condition
x_t           Perturbed sample at timestep t
ϵ_t           Noise at timestep t
p(·)          Distribution
p(x_0)        Distribution of the original data

TABLE 2
Figure legends. The graphical symbols used throughout the figures denote, respectively: a trainable network with parameters θ; a fixed network with parameters ξ*; a component not in use; data distributions; the distribution at a timestep; a condition c; and a timestep t.

TABLE 5
Visualization of parameterization ways. Outputs are the values predicted by a denoising network. The parameterization is the formulation used to apply the corresponding output in the reverse process.
Ziyi Chang is a PhD student in the Department of Computer Science at Durham University. His research focuses on human-related data in 3D computer vision and graphics, including 3D reconstruction, 3D style analysis and synthesis, and human motion analysis and synthesis. He graduated from the University of Edinburgh in 2020 and Renmin University of China in 2019.
REFERENCES

[1] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, "Deep unsupervised learning using nonequilibrium thermodynamics," in International Conference on Machine Learning. PMLR, 2015, pp. 2256-2265.
[2] C. Saharia, W. Chan, H. Chang, C. Lee, J. Ho, T. Salimans, D. Fleet, and M. Norouzi, "Palette: Image-to-image diffusion models," in Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, 2022, pp. 1-10.
[3] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, "Hierarchical text-conditional image generation with clip latents," arXiv preprint arXiv:2204.06125, 2022.
[4] J. Ho, C. Saharia, W. Chan, D. J. Fleet, M. Norouzi, and T. Salimans, "Cascaded diffusion models for high fidelity image generation," J. Mach. Learn. Res., vol. 23, pp. 47-1, 2022.
[5] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, "Score-based generative modeling through stochastic differential equations," arXiv preprint arXiv:2011.13456, 2020.
[6] B. Kim, Y. Oh, and J. C. Ye, "Diffusion adversarial representation learning for self-supervised vessel segmentation," Sep. 2022, arXiv:2209.14566 [cs, eess].
[7] H. Liu, Z. Chen, Y. Yuan, X. Mei, X. Liu, D. Mandic, W. Wang, and M. D. Plumbley, "AudioLDM: Text-to-audio generation with latent diffusion models," Jan. 2023, arXiv:2301.12503 [cs, eess].
[8] G. Mittal, J. Engel, C. Hawthorne, and I. Simon, "Symbolic music generation with diffusion models," arXiv preprint arXiv:2103.16091, 2021.
[9] Y.-J. Lu, Y. Tsao, and S. Watanabe, "A study on speech enhancement based on diffusion probabilistic model," in 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2021, pp. 659-666.
[10] G. Nam, M. Khlifi, A. Rodriguez, A. Tono, L. Zhou, and P. Guerrero, "3D-LDM: Neural implicit 3D shape generation with latent diffusion models," Dec. 2022, arXiv:2212.00842 [cs].
[11] G. Tevet, S. Raab, B. Gordon, Y. Shafir, D. Cohen-Or, and A. H. Bermano, "Human motion diffusion model," arXiv preprint arXiv:2209.14916, 2022.
[12] D. Zhou, W. Wang, H. Yan, W. Lv, Y. Zhu, and J. Feng, "MagicVideo: Efficient video generation with latent diffusion models," Nov. 2022, arXiv:2211.11018 [cs].
[13] H. Huang, L. Sun, B. Du, and W. Lv, "Conditional diffusion based on discrete graph structures for molecular graph generation," arXiv preprint arXiv:2301.00427, 2023.
[14] S. Luo, C. Shi, M. Xu, and J. Tang, "Predicting molecular conformation via dynamic graph score matching," Advances in Neural Information Processing Systems, vol. 34, pp. 19784-19795, 2021.
[15] B. Rémy, F. Lanusse, Z. Ramzi, J. Liu, N. Jeffrey, and J.-L. Starck, "Probabilistic mapping of dark matter by neural score matching," arXiv preprint arXiv:2011.08271, 2020.
[16] Y. Song and S. Ermon, "Generative modeling by estimating gradients of the data distribution," Advances in Neural Information Processing Systems, vol. 32, 2019.
[17] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," Advances in Neural Information Processing Systems, vol. 33, pp. 6840-6851, 2020.
[18] T. Karras, M. Aittala, T. Aila, and S. Laine, "Elucidating the design space of diffusion-based generative models," arXiv preprint arXiv:2206.00364, 2022.
[19] D. McAllester, "On the mathematics of diffusion models," arXiv preprint arXiv:2301.11108, 2023.
[20] H. Cao, C. Tan, Z. Gao, G. Chen, P.-A. Heng, and S. Z. Li, "A survey on generative diffusion model," arXiv preprint arXiv:2209.02646, 2022.
[21] L. Yang, Z. Zhang, Y. Song, S. Hong, R. Xu, Y. Zhao, Y. Shao, W. Zhang, B. Cui, and M.-H. Yang, "Diffusion models: A comprehensive survey of methods and applications," arXiv preprint arXiv:2209.00796, 2022.
[22] C. Luo, "Understanding diffusion models: A unified perspective," 2022.
[23] W. Fan, C. Liu, Y. Liu, J. Li, H. Li, H. Liu, J. Tang, and Q. Li, "Generative diffusion models on graphs: Methods and applications," arXiv preprint arXiv:2302.02591, 2023.
[24] A. Ulhaq, N. Akhtar, and G. Pogrebna, "Efficient diffusion models for vision: A survey," arXiv preprint arXiv:2210.09292, 2022.
[25] Z. Guo, J. Liu, Y. Wang, M. Chen, D. Wang, D. Xu, and J. Cheng, "Diffusion models in bioinformatics: A new wave of deep learning revolution in action," arXiv preprint arXiv:2302.10907, 2023.
[26] A. Kazerouni, E. K. Aghdam, M. Heidari, R. Azad, M. Fayyaz, I. Hacihaliloglu, and D. Merhof, "Diffusion models for medical image analysis: A comprehensive survey," arXiv preprint arXiv:2211.07804, 2022.
[27] C. Zhang, C. Zhang, S. Zheng, M. Zhang, M. Qamar, S.-H. Bae, and I. S. Kweon, "A survey on audio diffusion models: Text to speech synthesis and enhancement in generative ai," arXiv preprint arXiv:2303.13336, vol. 2, 2023.
[28] F.-A. Croitoru, V. Hondru, R. T. Ionescu, and M. Shah, "Diffusion models in vision: A survey," arXiv preprint arXiv:2209.04747, 2022.
[29] C. Zhang, C. Zhang, M. Zhang, and I. S. Kweon, "Text-to-image diffusion model in generative ai: A survey," arXiv preprint arXiv:2303.07909, 2023.
[30] Y. Zhu and Y. Zhao, "Diffusion models in nlp: A survey," arXiv preprint arXiv:2303.07576, 2023.
[31] G. Franzese, S. Rossi, L. Yang, A. Finamore, D. Rossi, M. Filippone, and P. Michiardi, "How much is enough? a study on diffusion times in score-based generative models," arXiv preprint arXiv:2206.05173, 2022.
[32] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," arXiv preprint arXiv:1312.6114, 2013.
[33] S. J. Prince, Understanding Deep Learning. MIT Press, 2023.
[34] A. Brock, J. Donahue, and K. Simonyan, "Large scale gan training for high fidelity natural image synthesis," arXiv preprint arXiv:1809.11096, 2018.
[35] T. Pang, C. Lu, C. Du, M. Lin, S. Yan, and Z. Deng, "On calibrating diffusion probabilistic models," arXiv preprint arXiv:2302.10688, 2023.
[36] K. Pandey, A. Mukherjee, P. Rai, and A. Kumar, "Vaes meet diffusion models: Efficient and high-fidelity generation," in NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.
[37] Z. G. Zhang, Fundamentals of Stochastic Models. CRC Press, 2023.
[38] P. Dhariwal and A. Nichol, "Diffusion models beat gans on image synthesis," Advances in Neural Information Processing Systems, vol. 34, pp. 8780-8794, 2021.
[39] B. D. Anderson, "Reverse-time diffusion equation models," Stochastic Processes and their Applications, vol. 12, no. 3, pp. 313-326, 1982.
[40] A. Hyvärinen and P. Dayan, "Estimation of non-normalized statistical models by score matching," Journal of Machine Learning Research, vol. 6, no. 4, 2005.
[41] T. Chen, "On the importance of noise scheduling for diffusion models," arXiv preprint arXiv:2301.10972, 2023.
[42] E. Nachmani, R. S. Roman, and L. Wolf, "Non gaussian denoising diffusion models," arXiv preprint arXiv:2106.07582, 2021.
[43] T. Dockhorn, A. Vahdat, and K. Kreis, "Score-based generative modeling with critically-damped langevin diffusion," arXiv preprint arXiv:2112.07068, 2021.
[44] G. Daras, M. Delbracio, H. Talebi, A. G. Dimakis, and P. Milanfar, "Soft diffusion: Score matching for general corruptions," arXiv preprint arXiv:2209.05442, 2022.
[45] E. Nachmani, R. S. Roman, and L. Wolf, "Denoising diffusion gamma models," arXiv preprint arXiv:2110.05948, 2021.
[46] M. Yi, J. Sun, and Z. Li, "On the generalization of diffusion model," 2023.
[47] V. De Bortoli, "Convergence of denoising diffusion models under the manifold hypothesis," arXiv preprint arXiv:2208.05314, 2022.
[48] Z. Xiao, K. Kreis, and A. Vahdat, "Tackling the generative learning trilemma with denoising diffusion gans," arXiv preprint arXiv:2112.07804, 2021.
[49] Z. Li, Y. Chen, and F. T. Sommer, "Learning energy-based models in high-dimensional spaces with multi-scale denoising score matching," arXiv preprint arXiv:1910.07762, 2019.
[50] N. Chen, Y. Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan, "Wavegrad: Estimating gradients for waveform generation," arXiv preprint arXiv:2009.00713, 2020.
[51] C. Meng, Y. Song, W. Li, and S. Ermon, "Estimating high order gradients of the data distribution by denoising," Advances in Neural Information Processing Systems, vol. 34, pp. 25359-25369, 2021.
[52] Y. Song and S. Ermon, "Improved techniques for training score-based generative models," Advances in Neural Information Processing Systems, vol. 33, pp. 12438-12448, 2020.
[53] A. Q. Nichol and P. Dhariwal, "Improved denoising diffusion probabilistic models," in International Conference on Machine Learning. PMLR, 2021, pp. 8162-8171.
[54] D. Kingma, T. Salimans, B. Poole, and J. Ho, "Variational diffusion models," Advances in Neural Information Processing Systems, vol. 34, pp. 21696-21707, 2021.
[55] A. Jabri, D. Fleet, and T. Chen, "Scalable adaptive computation for iterative generation," arXiv preprint arXiv:2212.11972, 2022.
[56] Q. Huang, D. S. Park, T. Wang, T. I. Denk, A. Ly, N. Chen, Z. Zhang, Z. Zhang, J. Yu, C. Frank et al., "Noise2music: Text-conditioned music generation with diffusion models," arXiv preprint arXiv:2302.03917, 2023.
[57] A. Bansal, E. Borgnia, H.-M. Chu, J. S. Li, H. Kazemi, F. Huang, M. Goldblum, J. Geiping, and T. Goldstein, "Cold diffusion: Inverting arbitrary image transforms without noise," arXiv preprint arXiv:2208.09392, 2022.
[58] P. Vincent, "A connection between score matching and denoising autoencoders," Neural Computation, vol. 23, no. 7, pp. 1661-1674, 2011.
[59] V. Voleti, C. Pal, and A. Oberman, "Score-based denoising diffusion with non-isotropic gaussian noise models," arXiv preprint arXiv:2210.12254, 2022.
[60] S. Ge, S. Nah, G. Liu, T. Poon, A. Tao, B. Catanzaro, D. Jacobs, J.-B. Huang, M.-Y. Liu, and Y. Balaji, "Preserve your own correlation: A noise prior for video diffusion models," arXiv preprint arXiv:2305.10474, 2023.
[61] Z. Kong and W. Ping, "On fast sampling of diffusion probabilistic models," arXiv preprint arXiv:2106.00132, 2021.
[62] S.-g. Lee, H. Kim, C. Shin, X. Tan, C. Liu, Q. Meng, T. Qin, W. Chen, S. Yoon, and T.-Y. Liu, "Priorgrad: Improving conditional denoising diffusion models with data-driven adaptive prior," arXiv preprint arXiv:2106.06406, 2021.
[63] L. Zheng, J. Yuan, L. Yu, and L. Kong, "A reparameterized discrete diffusion model for text generation," arXiv preprint arXiv:2302.05737, 2023.
[64] H. Zheng, P. He, W. Chen, and M. Zhou, "Truncated diffusion probabilistic models and diffusion-based adversarial auto-encoders," in The Eleventh International Conference on Learning Representations, 2023.
[65] H. Chung, B. Sim, and J. C. Ye, "Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12413-12422.
[66] V. Popov, I. Vovk, V. Gogoryan, T. Sadekova, and M. Kudinov, "Grad-tts: A diffusion probabilistic model for text-to-speech," in International Conference on Machine Learning. PMLR, 2021, pp. 8599-8608.
[67] J. Liu, C. Li, Y. Ren, F. Chen, and Z. Zhao, "Diffsinger: Singing voice synthesis via shallow diffusion mechanism," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 10, 2022, pp. 11020-11028.
[68] H. Zheng, P. He, W. Chen, and M. Zhou, "Truncated diffusion probabilistic models and diffusion-based adversarial auto-encoders," arXiv preprint arXiv:2202.09671, 2022.
[69] Z. Lyu, X. Xu, C. Yang, D. Lin, and B. Dai, "Accelerating diffusion models via early stop of the diffusion process," arXiv preprint arXiv:2205.12524, 2022.
[70] P. Avdeyev, C. Shi, Y. Tan, K. Dudnyk, and J. Zhou, "Dirichlet diffusion score model for biological sequence generation," arXiv preprint arXiv:2305.10699, 2023.
[71] Y. Duan, J. Zhou, Z. Wang, Y.-C. Chang, Y.-K. Wang, and C.-T. Lin, "Domain-specific denoising diffusion probabilistic models for brain dynamics," arXiv preprint arXiv:2305.04200, 2023.
[72] H.-Y. Choi, S.-H. Lee, and S.-W. Lee, "Dddm-vc: Decoupled denoising diffusion models with disentangled representation and prior mixup for verified robust voice conversion," 2023.
[73] B. Jing, G. Corso, R. Berlinghieri, and T. Jaakkola, "Subspace diffusion generative models," arXiv preprint arXiv:2205.01490, 2022.
[74] G. Batzolis, J. Stanczuk, C.-B. Schönlieb, and C. Etmann, "Non-uniform diffusion models," arXiv preprint arXiv:2207.09786, 2022.
[75] N. Liu, S. Li, Y. Du, A. Torralba, and J. B. Tenenbaum, "Compositional visual generation with composable diffusion models," arXiv preprint arXiv:2206.01714, 2022.
[76] J. Jo, S. Lee, and S. J. Hwang, "Score-based generative modeling of graphs via the system of stochastic differential equations," arXiv preprint arXiv:2202.02514, 2022.
[77] G. Batzolis, J. Stanczuk, C.-B. Schönlieb, and C. Etmann, "Conditional image generation with score-based diffusion models," arXiv preprint arXiv:2111.13606, 2021.
[78] S. Xu, "Clip-diffusion-lm: Apply diffusion model on image captioning," arXiv preprint arXiv:2210.04559, 2022.
[79] V. De Bortoli, E. Mathieu, M. Hutchinson, J. Thornton, Y. W. Teh, and A. Doucet, "Riemannian score-based generative modeling," arXiv preprint arXiv:2202.02763, 2022.
[80] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684-10695.
[81] E. Hoogeboom, D. Nielsen, P. Jaini, P. Forré, and M. Welling, "Argmax flows and multinomial diffusion: Learning categorical distributions," Advances in Neural Information Processing Systems, vol. 34, pp. 12454-12465, 2021.
[82] J. Lee, W. Im, S. Lee, and S.-E. Yoon, "Diffusion probabilistic models for scene-scale 3d categorical data," arXiv preprint arXiv:2301.00527, 2023.
[83] J. E. Santos, Z. R. Fox, N. Lubbers, and Y. T. Lin, "Blackout diffusion: Generative diffusion models in discrete-state spaces," arXiv preprint arXiv:2305.11089, 2023.
[84] T. Chen, R. Zhang, and G. Hinton, "Analog bits: Generating discrete data using diffusion models with self-conditioning," arXiv preprint arXiv:2208.04202, 2022.
[85] S. Gong, M. Li, J. Feng, Z. Wu, and L. Kong, "Diffuseq: Sequence to sequence text generation with diffusion models," arXiv preprint arXiv:2210.08933, 2022.
[86] S. Dieleman, L. Sartran, A. Roshannai, N. Savinov, Y. Ganin, P. H. Richemond, A. Doucet, R. Strudel, C. Dyer, C. Durkan et al., "Continuous diffusion for categorical data," arXiv preprint arXiv:2211.15089, 2022.
[87] M. Reid, V. J. Hellendoorn, and G. Neubig, "Diffuser: Diffusion via edit-based reconstruction," in The Eleventh International Conference on Learning Representations, 2022.
[88] Y.-H. Park, M. Kwon, J. Jo, and Y. Uh, "Unsupervised discovery of semantic latent directions in diffusion models," arXiv preprint arXiv:2302.12469, 2023.
[89] J. Thornton, M. Hutchinson, E. Mathieu, V. De Bortoli, Y. W. Teh, and A. Doucet, "Riemannian diffusion Schrödinger bridge," arXiv preprint arXiv:2207.03024, 2022.
[90] C.-W. Huang, M. Aghajohari, A. J. Bose, P. Panangaden, and A. Courville, "Riemannian diffusion models," arXiv preprint arXiv:2208.07949, 2022.
[91] K. Abstreiter, S. Mittal, S. Bauer, B. Schölkopf, and A. Mehrjou, "Diffusion-based representation learning," Aug. 2022, arXiv:2105.14257 [cs].
[92] K. Pandey, A. Mukherjee, P. Rai, and A. Kumar, "Diffusevae: Efficient, controllable and high-fidelity generation from low-dimensional latents," arXiv preprint arXiv:2201.00308, 2022.
[93] K. Preechakul, N. Chatthee, S. Wizadwongsa, and S. Suwajanakorn, "Diffusion autoencoders: Toward a meaningful and decodable representation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10619-10629.
[94] M. Kwon, J. Jeong, and Y. Uh, "Diffusion models already have a semantic latent space," Oct. 2022, arXiv:2210.10960 [cs].
[95] C. H. Wu and F. De la Torre, "Unifying diffusion models' latent space, with applications to cyclediffusion and guidance," arXiv preprint arXiv:2210.05559, 2022.
[96] D. Kim, S. Shin, K. Song, W. Kang, and I.-C. Moon, "Soft truncation: A universal training technique of score-based diffusion model for high precision score estimation," in International Conference on Machine Learning. PMLR, 2022, pp. 11201-11228.
[97] H. Cao, J. Wang, T. Ren, X. Qi, Y. Chen, Y. Yao, and L. Zhang, "Exploring vision transformers as diffusion learners," arXiv preprint arXiv:2212.13771, 2022.
[98] T. Salimans and J. Ho, "Progressive distillation for fast sampling of diffusion models," arXiv preprint arXiv:2202.00512, 2022.
[99] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234-241.
[100] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[101] H. Phung, Q. Dao, and A. Tran, "Wavelet diffusion models are fast and scalable image generators," 2023.
[102] R. O'Connor, "Introduction to diffusion models for machine learning," Sep. 2022.
[103] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Communications of the ACM, vol. 63, no. 11, pp. 139-144, 2020.
[104] N. Tishby and N. Zaslavsky, "Deep learning and the information bottleneck principle," in 2015 IEEE Information Theory Workshop (ITW). IEEE, 2015, pp. 1-5.
[105] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma, "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications," arXiv preprint arXiv:1701.05517, 2017.
[106] S. Zagoruyko and N. Komodakis, "Wide residual networks," arXiv preprint arXiv:1605.07146, 2016.
[107] T. Salimans and D. P. Kingma, "Weight normalization: A simple reparameterization to accelerate training of deep neural networks," Advances in Neural Information Processing Systems, vol. 29, 2016.
[108] Y. Wu and K. He, "Group normalization," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
[109] C.-F. R. Chen, Q. Fan, and R. Panda, "Crossvit: Cross-attention multi-scale vision transformer for image classification," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 357-366.
[110] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S. K. S. Ghasemipour, B. K. Ayan, S. S. Mahdavi, R. G. Lopes et al., "Photorealistic text-to-image diffusion models with deep language understanding," arXiv preprint arXiv:2205.11487, 2022.
[111] P. Zhuang, S. Abnar, J. Gu, A. Schwing, J. M. Susskind, and M. Á. Bautista, "Diffusion probabilistic fields," in The Eleventh International Conference on Learning Representations, 2023.
[112] H. Lu, G. Yang, N. Fei, Y. Huo, Z. Lu, P. Luo, and M. Ding, "Vdt: An empirical study on video diffusion with transformers," 2023.
[113] X. Yang, S.-M. Shih, Y. Fu, X. Zhao, and S. Ji, "Your vit is secretly a hybrid discriminative-generative diffusion model," arXiv preprint arXiv:2208.07791, 2022.
[114] X. Jing, Y. Chang, Z. Yang, J. Xie, A. Triantafyllopoulos, and B. W. Schuller, "U-dit tts: U-diffusion vision transformer for text-to-speech," 2023.
[115] F. Bao, C. Li, Y. Cao, and J. Zhu, "All are worth words: a vit backbone for score-based diffusion models," arXiv preprint arXiv:2209.12152, 2022.
[116] P. Chahal, "Exploring transformer backbones for image diffusion models," arXiv preprint arXiv:2212.14678, 2022.
[117] S. Bond-Taylor, P. Hessey, H. Sasaki, T. P. Breckon, and C. G. Willcocks, "Unleashing transformers: parallel token prediction with discrete absorbing diffusion for fast high-resolution image generation from vector-quantized codes," in Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIII. Springer, 2022, pp. 170-188.
[118] W. Peebles and S. Xie, "Scalable diffusion models with transformers," arXiv preprint arXiv:2212.09748, 2022.
[119] F. Giuliari, G. Scarpellini, S. James, Y. Wang, and A. Del Bue, "Positional diffusion: Ordering unordered sets with diffusion probabilistic models," arXiv preprint arXiv:2303.11120, 2023.
[120] H. Liu, R. Huang, X. Lin, W. Xu, M. Zheng, H. Chen, J. He, and Z. Zhao, "Vit-tts: Visual text-to-speech with scalable diffusion transformer," 2023.
[121] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[122] F. Bao, S. Nie, K. Xue, C. Li, S. Pu, Y. Wang, G. Yue, Y. Cao, H. Su, and J. Zhu, "One transformer fits all distributions in multi-modal diffusion at scale," arXiv preprint arXiv:2303.06555, 2023.
[123] J. Guan, W. W. Qian, X. Peng, Y. Su, J. Peng, and J. Ma, "3d equivariant diffusion for target-aware molecule generation and affinity prediction," arXiv preprint arXiv:2303.03543, 2023.
[124] Y. Benny and L. Wolf, "Dynamic dual-output diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11482-11491.
[125] Y. Song, S. Garg, J. Shi, and S. Ermon, "Sliced score matching: A scalable approach to density and score estimation," in Uncertainty in Artificial Intelligence. PMLR, 2020, pp. 574-584.
[126] D. Chen, Z. Zhou, J.-P. Mei, C. Shen, C. Chen, and C. Wang, "A geometric perspective on diffusion models," 2023.
[127] Y. Song and D. P. Kingma, "How to train your energy-based models," arXiv preprint arXiv:2101.03288, 2021.
[128] G. E. Hinton, "Training products of experts by minimizing contrastive divergence," Neural Computation, vol. 14, no. 8, pp. 1771-1800, 2002.
[129] M. Xu, L. Yu, Y. Song, C. Shi, S. Ermon, and J. Tang, "Geodiff: A geometric diffusion model for molecular conformation generation," arXiv preprint arXiv:2203.02923, 2022.
[130] C.-W. Huang, J. H. Lim, and A. C. Courville, "A variational perspective on diffusion-based generative models and score matching," Advances in Neural Information Processing Systems, vol. 34, pp. 22863-22876, 2021.
[131] J. Ho, W. Chan, C. Saharia, J. Whang, R. Gao, A. Gritsenko, D. P. Kingma, B. Poole, M. Norouzi, D. J. Fleet et al., "Imagen video: High definition video generation with diffusion models," arXiv preprint arXiv:2210.02303, 2022.
[132] S. Lin, B. Liu, J. Li, and X. Yang, "Common diffusion noise schedules and sample steps are flawed," arXiv preprint arXiv:2305.08891, 2023.
[133] K. Zheng, C. Lu, J. Chen, and J. Zhu, "Improved techniques for maximum likelihood estimation for diffusion odes," arXiv preprint arXiv:2305.03935, 2023.
[134] K. Oko, S. Akiyama, and T. Suzuki, "Diffusion models are minimax optimal distribution estimators," arXiv preprint arXiv:2303.01861, 2023.
[135] F. Bao, C. Li, J. Zhu, and B. Zhang, "Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models," arXiv preprint arXiv:2201.06503, 2022.
[136] J. Song, C. Meng, and S. Ermon, "Denoising diffusion implicit models," arXiv preprint arXiv:2010.02502, 2020.
[137] F. Bao, C. Li, J. Sun, J. Zhu, and B. Zhang, "Estimating the optimal covariance with imperfect mean in diffusion probabilistic models," arXiv preprint arXiv:2206.07309, 2022.
[138] J. Singh, S. Gould, and L. Zheng, "High-fidelity guided image synthesis with latent diffusion models," Nov. 2022, arXiv:2211.17084 [cs, stat].
[139] E. Hedlin, G. Sharma, S. Mahajan, H. Isack, A. Kar, A. Tagliasacchi, and K. M. Yi, "Unsupervised semantic correspondence using stable diffusion," 2023.
[140] K. Deja, A. Kuzina, T. Trzciński, and J. M. Tomczak, "On analyzing generative and denoising capabilities of diffusion-based deep generative models," arXiv preprint arXiv:2206.00070, 2022.
[141] J. Zhang, C. Herrmann, J. Hur, L. P. Cabrera, V. Jampani, D. Sun, and M.-H. Yang, "A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence," 2023.
[142] G. Luo, L. Dunlap, D. H. Park, A. Holynski, and T. Darrell, "Diffusion hyperfeatures: Searching through time and space for semantic correspondence," 2023.
[143] J. Choi, J. Lee, C. Shin, S. Kim, H. Kim, and S. Yoon, "Perception prioritized training of diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11472-11481.
[144] B. Krojer, E. Poole-Dayan, V. Voleti, C. Pal, and S. Reddy, "Are diffusion models vision-and-language reasoners?" 2023.
[145] A. Vahdat, K. Kreis, and J. Kautz, "Score-based generative modeling in latent space," Advances in Neural Information Processing Systems, vol. 34, pp. 11287-11302, 2021.
[146] Y. Song, C. Durkan, I. Murray, and S. Ermon, "Maximum likelihood training of score-based diffusion models," Advances in Neural Information Processing Systems, vol. 34, pp. 1415-1428, 2021.
[147] S. Chen, S. Chewi, J. Li, Y. Li, A. Salim, and A. R. Zhang, "Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions," arXiv preprint arXiv:2209.11215, 2022.
[148] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao, "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop," arXiv preprint arXiv:1506.03365, 2015.
[149] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021.
[150] V. T. Hu, D. W. Zhang, Y. M. Asano, G. J. Burghouts, and C. G. Snoek, "Self-guided diffusion models," arXiv preprint arXiv:2210.06462, 2022.
[151] W. Cho, H. Ravi, M. Harikumar, V. Khuc, K. K. Singh, J. Lu, D. I. Inouye, and A. Kale, "Towards enhanced controllability of diffusion models," arXiv preprint arXiv:2302.14368, 2023.
[152] K. Deja, T. Trzcinski, and J. M. Tomczak, "Learning data representations with joint diffusion models," Jan. 2023, arXiv:2301.13622 [cs, stat].
[153] Y. He, Z. Cai, X. Gan, and B. Chang, "Diffcap: Exploring continuous diffusion on image captioning," arXiv preprint arXiv:2305.12144, 2023.
[154] Z. Xue, G. Song, Q. Guo, B. Liu, Z. Zong, Y. Liu, and P. Luo, "Raphael: Text-to-image generation via large mixture of diffusion paths," 2023.
[155] L. Zhang and M. Agrawala, "Adding conditional control to text-to-image diffusion models," arXiv preprint arXiv:2302.05543, 2023.
[156] J. Ho and T. Salimans, "Classifier-free diffusion guidance," arXiv preprint arXiv:2207.12598, 2022.
[157] A. Bansal, H.-M. Chu, A. Schwarzschild, S. Sengupta, M. Goldblum, J. Geiping, and T. Goldstein, "Universal guidance for diffusion models," arXiv preprint arXiv:2302.07121, 2023.
[158] P.-C. Chen, H. Tsai, S. Bhojanapalli, H. W. Chung, Y.-W. Chang, and C.-S. Ferng, "A simple and effective positional encoding for transformers," arXiv preprint arXiv:2104.08698, 2021.
[159] O. Avrahami, O. Fried, and D. Lischinski, "Blended latent diffusion," Jun. 2022, arXiv:2206.02779 [cs].
[160] L. Zhou, Y. Du, and J. Wu, "3d shape generation and completion through point-voxel diffusion," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5826-5835.
[161] Z. Lyu, Z. Kong, X. Xu, L. Pan, and D. Lin, "A conditional point diffusion-refinement paradigm for 3d point cloud completion," arXiv preprint arXiv:2112.03530, 2021.
[162] Z. Chang, E. J. Findlay, H. Zhang, and H. P. Shum, "Unifying human motion synthesis and style transfer with denoising diffusion probabilistic models," arXiv preprint arXiv:2212.08526, 2022.
[163] H. Chefer, Y. Alaluf, Y. Vinker, L. Wolf, and D. Cohen-Or, "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models," arXiv preprint arXiv:2301.13826, 2023.
[164] W.-D. K. Ma, J. Lewis, W. B. Kleijn, and T. Leung, "Directed diffusion: Direct control of object placement through attention guidance," arXiv preprint arXiv:2302.13153, 2023.
[165] S. Hong, G. Lee, W. Jang, and S. Kim, "Improving sample quality of diffusion models using self-attention guidance," arXiv preprint arXiv:2210.00939, 2022.
[166] C. Peng, P. Guo, S. K. Zhou, V. Patel, and R. Chellappa, "Towards performant and reliable undersampled mr reconstruction via diffusion model sampling," arXiv preprint arXiv:2203.04292, 2022.
[167] M. W. Shen, E. Hajiramezanali, G. Scalia, A. Tseng, N. Diamant, T. Biancalani, and A. Loukas, "Conditional diffusion with less explicit guidance via model predictive control," arXiv preprint arXiv:2210.12192, 2022.
Simple black-box adversarial attacks. C Guo, J Gardner, Y You, A G Wilson, K Weinberger, International Conference on Machine Learning. PMLRC. Guo, J. Gardner, Y. You, A. G. Wilson, and K. Weinberger, "Simple black-box adversarial attacks," in International Conference on Machine Learning. PMLR, 2019, pp. 2484-2493.
Gradient-based adversarial attacks against text transformers. C Guo, A Sablayrolles, H Jégou, D Kiela, arXiv:2104.13733arXiv preprintC. Guo, A. Sablayrolles, H. Jégou, and D. Kiela, "Gradient-based adversarial attacks against text transformers," arXiv preprint arXiv:2104.13733, 2021.
Enhancing diffusion-based image synthesis with robust classifier guidance. B Kawar, R Ganz, M Elad, arXiv:2208.08664arXiv preprintB. Kawar, R. Ganz, and M. Elad, "Enhancing diffusion-based image synthesis with robust classifier guidance," arXiv preprint arXiv:2208.08664, 2022.
Plug & play generative networks: Conditional iterative generation of images in latent space. A Nguyen, J Clune, Y Bengio, A Dosovitskiy, J Yosinski, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionA. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski, "Plug & play generative networks: Conditional iterative genera- tion of images in latent space," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4467-4477.
A study and comparison of human and deep learning recognition performance under visual distortions. S Dodge, L Karam, 2017 26th international conference on computer communication and networks (ICCCN). IEEES. Dodge and L. Karam, "A study and comparison of human and deep learning recognition performance under visual distortions," in 2017 26th international conference on computer communication and networks (ICCCN). IEEE, 2017, pp. 1-7.
Google's cloud vision api is not robust to noise. H Hosseini, B Xiao, R Poovendran, 2017 16th IEEE international conference on machine learning and applications (ICMLA). IEEEH. Hosseini, B. Xiao, and R. Poovendran, "Google's cloud vision api is not robust to noise," in 2017 16th IEEE international confer- ence on machine learning and applications (ICMLA). IEEE, 2017, pp. 101-105.
End-to-end diffusion latent optimization improves classifier guidance. B Wallace, A Gokul, S Ermon, N Naik, arXiv:2303.13703arXiv preprintB. Wallace, A. Gokul, S. Ermon, and N. Naik, "End-to-end dif- fusion latent optimization improves classifier guidance," arXiv preprint arXiv:2303.13703, 2023.
Suboptimal behavior of bayes and mdl in classification under misspecification. P Grünwald, J Langford, Learning Theory: 17th Annual Conference on Learning Theory. Banff, CanadaSpringerP. Grünwald and J. Langford, "Suboptimal behavior of bayes and mdl in classification under misspecification," in Learning Theory: 17th Annual Conference on Learning Theory, COLT 2004, Banff, Canada, July 1-4, 2004. Proceedings 17. Springer, 2004, pp. 331-347.
Rethinking the role of gradient-based attribution methods for model interpretability. S Srinivas, F Fleuret, arXiv:2006.09128arXiv preprintS. Srinivas and F. Fleuret, "Rethinking the role of gradient-based attribution methods for model interpretability," arXiv preprint arXiv:2006.09128, 2020.
. C.-H Chao, W.-F Sun, B.-W Cheng, Y.-C Lo, C.-C Chang, Y.-L , C.-H. Chao, W.-F. Sun, B.-W. Cheng, Y.-C. Lo, C.-C. Chang, Y.-L.
Denoising likelihood score matching for conditional score-based data generation. Y.-L Liu, C.-P Chang, C.-Y. Chen, Lee, arXiv:2203.14206arXiv preprintLiu, Y.-L. Chang, C.-P. Chen, and C.-Y. Lee, "Denoising likelihood score matching for conditional score-based data generation," arXiv preprint arXiv:2203.14206, 2022.
Null-text guidance in diffusion models is secretly a cartoon-style creator. J Zhao, H Zheng, C Wang, L Lan, W Huang, W Yang, arXiv:2305.06710arXiv preprintJ. Zhao, H. Zheng, C. Wang, L. Lan, W. Huang, and W. Yang, "Null-text guidance in diffusion models is secretly a cartoon-style creator," arXiv preprint arXiv:2305.06710, 2023.
Imitating human behaviour with diffusion models. T Pearce, T Rashid, A Kanervisto, D Bignell, M Sun, R Georgescu, S V Macua, S Z Tan, I Momennejad, K Hofmann, arXiv:2301.10677arXiv preprintT. Pearce, T. Rashid, A. Kanervisto, D. Bignell, M. Sun, R. Georgescu, S. V. Macua, S. Z. Tan, I. Momennejad, K. Hofmann et al., "Imitating human behaviour with diffusion models," arXiv preprint arXiv:2301.10677, 2023.
Diffusion self-guidance for controllable image generation. D Epstein, A Jabri, B Poole, A A Efros, A Holynski, D. Epstein, A. Jabri, B. Poole, A. A. Efros, and A. Holynski, "Dif- fusion self-guidance for controllable image generation," 2023.
T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. C Mou, X Wang, L Xie, J Zhang, Z Qi, Y Shan, X Qie, arXiv:2302.08453arXiv preprintC. Mou, X. Wang, L. Xie, J. Zhang, Z. Qi, Y. Shan, and X. Qie, "T2i-adapter: Learning adapters to dig out more control- lable ability for text-to-image diffusion models," arXiv preprint arXiv:2302.08453, 2023.
A closer look at parameter-efficient tuning in diffusion models. C Xiang, F Bao, C Li, H Su, J Zhu, arXiv:2303.18181arXiv preprintC. Xiang, F. Bao, C. Li, H. Su, and J. Zhu, "A closer look at parameter-efficient tuning in diffusion models," arXiv preprint arXiv:2303.18181, 2023.
Uni-controlnet: All-in-one control to text-to-image diffusion models. S Zhao, D Chen, Y.-C Chen, J Bao, S Hao, L Yuan, K.-Y K Wong, S. Zhao, D. Chen, Y.-C. Chen, J. Bao, S. Hao, L. Yuan, and K.- Y. K. Wong, "Uni-controlnet: All-in-one control to text-to-image diffusion models," 2023.
Norespeech: Knowledge distillation based conditional diffusion model for noise-robust expressive tts. D Yang, S Liu, J Yu, H Wang, C Weng, Y Zou, arXiv:2211.02448arXiv preprintD. Yang, S. Liu, J. Yu, H. Wang, C. Weng, and Y. Zou, "Norespeech: Knowledge distillation based conditional diffu- sion model for noise-robust expressive tts," arXiv preprint arXiv:2211.02448, 2022.
Theoretical guarantees for sampling and inference in generative models with latent diffusions. B Tzen, M Raginsky, Conference on Learning Theory. PMLRB. Tzen and M. Raginsky, "Theoretical guarantees for sampling and inference in generative models with latent diffusions," in Conference on Learning Theory. PMLR, 2019, pp. 3084-3114.
Accelerating diffusion models for inverse problems through shortcut sampling. G Liu, H Sun, J Li, F Yin, Y Yang, G. Liu, H. Sun, J. Li, F. Yin, and Y. Yang, "Accelerating diffusion models for inverse problems through shortcut sampling," 2023.
Generative adversarial networks for markovian temporal dynamics: Stochastic continuous data generation. S W Park, D W Shu, J Kwon, International Conference on Machine Learning. PMLR, 2021. S. W. Park, D. W. Shu, and J. Kwon, "Generative adversarial net- works for markovian temporal dynamics: Stochastic continuous data generation," in International Conference on Machine Learning. PMLR, 2021, pp. 8413-8421.
Distilling the knowledge in a neural network. G Hinton, O Vinyals, J Dean, arXiv:1503.02531arXiv preprintG. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
Accelerating diffusion sampling with classifier-based feature distillation. W Sun, D Chen, C Wang, D Ye, Y Feng, C Chen, arXiv:2211.12039arXiv preprintW. Sun, D. Chen, C. Wang, D. Ye, Y. Feng, and C. Chen, "Accelerating diffusion sampling with classifier-based feature distillation," arXiv preprint arXiv:2211.12039, 2022.
Knowledge diffusion for distillation. T Huang, Y Zhang, M Zheng, S You, F Wang, C Qian, C Xu, T. Huang, Y. Zhang, M. Zheng, S. You, F. Wang, C. Qian, and C. Xu, "Knowledge diffusion for distillation," 2023.
Knowledge distillation in iterative generative models for improved sampling speed. E Luhman, T Luhman, arXiv:2101.02388arXiv preprintE. Luhman and T. Luhman, "Knowledge distillation in iterative generative models for improved sampling speed," arXiv preprint arXiv:2101.02388, 2021.
Fast sampling of diffusion models via operator learning. H Zheng, W Nie, A Vahdat, K Azizzadenesheli, A Anandkumar, arXiv:2211.13449arXiv preprintH. Zheng, W. Nie, A. Vahdat, K. Azizzadenesheli, and A. Anand- kumar, "Fast sampling of diffusion models via operator learn- ing," arXiv preprint arXiv:2211.13449, 2022.
On distillation of guided diffusion models. C Meng, R Gao, D P Kingma, S Ermon, J Ho, T Salimans, arXiv:2210.03142arXiv preprintC. Meng, R. Gao, D. P. Kingma, S. Ermon, J. Ho, and T. Sali- mans, "On distillation of guided diffusion models," arXiv preprint arXiv:2210.03142, 2022.
Consistency models. Y Song, P Dhariwal, M Chen, I Sutskever, arXiv:2303.01469arXiv preprintY. Song, P. Dhariwal, M. Chen, and I. Sutskever, "Consistency models," arXiv preprint arXiv:2303.01469, 2023.
A comprehensive survey on knowledge distillation of diffusion models. W Luo, arXiv:2304.04262arXiv preprintW. Luo, "A comprehensive survey on knowledge distillation of diffusion models," arXiv preprint arXiv:2304.04262, 2023.
Denoising diffusion samplers. F Vargas, W S Grathwohl, A Doucet, The Eleventh International Conference on Learning Representations. F. Vargas, W. S. Grathwohl, and A. Doucet, "Denoising diffusion samplers," in The Eleventh International Conference on Learning Representations, 2023.
Learning fast samplers for diffusion models by differentiating through sample quality. D Watson, W Chan, J Ho, M Norouzi, International Conference on Learning Representations. D. Watson, W. Chan, J. Ho, and M. Norouzi, "Learning fast samplers for diffusion models by differentiating through sample quality," in International Conference on Learning Representations, 2021.
Selective guidance: Are all the denoising steps of guided diffusion important. P A Golnari, Z Yao, Y He, arXiv:2305.09847arXiv preprintP. A. Golnari, Z. Yao, and Y. He, "Selective guidance: Are all the denoising steps of guided diffusion important?" arXiv preprint arXiv:2305.09847, 2023.
Parallel sampling of diffusion models. A Shih, S Belkhale, S Ermon, D Sadigh, N Anari, A. Shih, S. Belkhale, S. Ermon, D. Sadigh, and N. Anari, "Parallel sampling of diffusion models," 2023.
Fast sampling of diffusion models with exponential integrator. Q Zhang, Y Chen, arXiv:2204.13902arXiv preprintQ. Zhang and Y. Chen, "Fast sampling of diffusion models with exponential integrator," arXiv preprint arXiv:2204.13902, 2022.
It\ˆ{o}-taylor sampling scheme for denoising diffusion probabilistic models using ideal derivatives. H Tachibana, M Go, M Inahara, Y Katayama, Y Watanabe, arXiv:2112.13339arXiv preprintH. Tachibana, M. Go, M. Inahara, Y. Katayama, and Y. Watan- abe, "It\ˆ{o}-taylor sampling scheme for denoising diffusion probabilistic models using ideal derivatives," arXiv preprint arXiv:2112.13339, 2021.
Alleviating exposure bias in diffusion models through sampling with shifted time steps. M Li, T Qu, W Sun, M.-F Moens, M. Li, T. Qu, W. Sun, and M.-F. Moens, "Alleviating exposure bias in diffusion models through sampling with shifted time steps," 2023.
Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. C Lu, Y Zhou, F Bao, J Chen, C Li, J Zhu, arXiv:2206.00927arXiv preprintC. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu, "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps," arXiv preprint arXiv:2206.00927, 2022.
Structural pruning for diffusion models. G Fang, X Ma, X Wang, arXiv:2305.10924arXiv preprintG. Fang, X. Ma, and X. Wang, "Structural pruning for diffusion models," arXiv preprint arXiv:2305.10924, 2023.
Image super-resolution via iterative refinement. C Saharia, J Ho, W Chan, T Salimans, D J Fleet, M Norouzi, arXiv:2104.07636arXiv preprintC. Saharia, J. Ho, W. Chan, T. Salimans, D. J. Fleet, and M. Norouzi, "Image super-resolution via iterative refinement," arXiv preprint arXiv:2104.07636, 2021.
Noise estimation for generative diffusion models. R San-Roman, E Nachmani, L Wolf, arXiv:2104.02600arXiv preprintR. San-Roman, E. Nachmani, and L. Wolf, "Noise estimation for generative diffusion models," arXiv preprint arXiv:2104.02600, 2021.
Optimal linear subspace search: Learning to construct fast and highquality schedulers for diffusion models. Z Duan, C Wang, C Chen, J Huang, W Qian, Z. Duan, C. Wang, C. Chen, J. Huang, and W. Qian, "Optimal linear subspace search: Learning to construct fast and high- quality schedulers for diffusion models," 2023.
Learning to efficiently sample from diffusion probabilistic models. D Watson, J Ho, M Norouzi, W Chan, arXiv:2106.03802arXiv preprintD. Watson, J. Ho, M. Norouzi, and W. Chan, "Learning to efficiently sample from diffusion probabilistic models," arXiv preprint arXiv:2106.03802, 2021.
Trans-dimensional generative modeling via jump diffusion models. A Campbell, W Harvey, C Weilbach, V D Bortoli, T Rainforth, A Doucet, A. Campbell, W. Harvey, C. Weilbach, V. D. Bortoli, T. Rainforth, and A. Doucet, "Trans-dimensional generative modeling via jump diffusion models," 2023.
Unifying gans and score-based diffusion as generative particle models. J.-Y Franceschi, M Gartrell, L D Santos, T Issenhuth, E Bézenac, M Chen, A Rakotomamonjy, J.-Y. Franceschi, M. Gartrell, L. D. Santos, T. Issenhuth, E. de Bézenac, M. Chen, and A. Rakotomamonjy, "Unifying gans and score-based diffusion as generative particle models," 2023.
Expressiveness remarks for denoising diffusion models and samplers. F Vargas, T Reu, A Kerekes, arXiv:2305.09605arXiv preprintF. Vargas, T. Reu, and A. Kerekes, "Expressiveness remarks for denoising diffusion models and samplers," arXiv preprint arXiv:2305.09605, 2023.
Diffusion explainer: Visual explanation for text-to-image stable diffusion. S Lee, B Hoover, H Strobelt, Z J Wang, S Peng, A Wright, K Li, H Park, H Yang, D H Chau, arXiv:2305.03509arXiv preprintS. Lee, B. Hoover, H. Strobelt, Z. J. Wang, S. Peng, A. Wright, K. Li, H. Park, H. Yang, and D. H. Chau, "Diffusion explainer: Visual explanation for text-to-image stable diffusion," arXiv preprint arXiv:2305.03509, 2023.
Amd autoregressive motion diffusion. B Han, H Peng, M Dong, C Xu, Y Ren, Y Shen, Y Li, arXiv:2305.09381arXiv preprintB. Han, H. Peng, M. Dong, C. Xu, Y. Ren, Y. Shen, and Y. Li, "Amd autoregressive motion diffusion," arXiv preprint arXiv:2305.09381, 2023.
Ar-diffusion: Auto-regressive diffusion model for text generation. T Wu, Z Fan, X Liu, Y Gong, Y Shen, J Jiao, H.-T Zheng, J Li, Z Wei, J Guo, arXiv:2305.09515arXiv preprintT. Wu, Z. Fan, X. Liu, Y. Gong, Y. Shen, J. Jiao, H.-T. Zheng, J. Li, Z. Wei, J. Guo et al., "Ar-diffusion: Auto-regressive diffusion model for text generation," arXiv preprint arXiv:2305.09515, 2023.
Diffwave: A versatile diffusion model for audio synthesis. Z Kong, W Ping, J Huang, K Zhao, B Catanzaro, arXiv:2009.09761arXiv preprintZ. Kong, W. Ping, J. Huang, K. Zhao, and B. Catanzaro, "Dif- fwave: A versatile diffusion model for audio synthesis," arXiv preprint arXiv:2009.09761, 2020.
On architectural compression of text-to-image diffusion models. B.-K Kim, H.-K Song, T Castells, S Choi, B.-K. Kim, H.-K. Song, T. Castells, and S. Choi, "On architectural compression of text-to-image diffusion models," 2023.
Training diffusion models with reinforcement learning. K Black, M Janner, Y Du, I Kostrikov, S Levine, K. Black, M. Janner, Y. Du, I. Kostrikov, and S. Levine, "Training diffusion models with reinforcement learning," 2023.
Few-shot diffusion models. G Giannone, D Nielsen, O Winther, arXiv:2205.15463arXiv preprintG. Giannone, D. Nielsen, and O. Winther, "Few-shot diffusion models," arXiv preprint arXiv:2205.15463, 2022.
Gsure-based diffusion model training with corrupted data. B Kawar, N Elata, T Michaeli, M Elad, B. Kawar, N. Elata, T. Michaeli, and M. Elad, "Gsure-based diffusion model training with corrupted data," 2023.
Ambient diffusion: Learning clean distributions from corrupted data. G Daras, K Shah, Y Dagan, A Gollakota, A G Dimakis, A Klivans, G. Daras, K. Shah, Y. Dagan, A. Gollakota, A. G. Dimakis, and A. Klivans, "Ambient diffusion: Learning clean distributions from corrupted data," 2023.
Label-retrieval-augmented diffusion models for learning from noisy labels. J Chen, R Zhang, T Yu, R Sharma, Z Xu, T Sun, C Chen, J. Chen, R. Zhang, T. Yu, R. Sharma, Z. Xu, T. Sun, and C. Chen, "Label-retrieval-augmented diffusion models for learning from noisy labels," 2023.
Meta-dm: Applications of diffusion models on few-shot learning. W Hu, X Jiang, J Liu, Y Yang, H Tian, arXiv:2305.08092arXiv preprintW. Hu, X. Jiang, J. Liu, Y. Yang, and H. Tian, "Meta-dm: Appli- cations of diffusion models on few-shot learning," arXiv preprint arXiv:2305.08092, 2023.
Few-shot image generation with diffusion models. J Zhu, H Ma, J Chen, J Yuan, arXiv:2211.03264arXiv preprintJ. Zhu, H. Ma, J. Chen, and J. Yuan, "Few-shot image generation with diffusion models," arXiv preprint arXiv:2211.03264, 2022.
Dinar: Diffusion inpainting of neural textures for one-shot human avatars. D Svitov, D Gudkov, R Bashirov, V Lemptisky, arXiv:2303.09375arXiv preprintD. Svitov, D. Gudkov, R. Bashirov, and V. Lemptisky, "Dinar: Dif- fusion inpainting of neural textures for one-shot human avatars," arXiv preprint arXiv:2303.09375, 2023.
Relightify: Relightable 3d faces from a single image via diffusion models. F P Papantoniou, A Lattas, S Moschoglou, S Zafeiriou, arXiv:2305.06077arXiv preprintF. P. Papantoniou, A. Lattas, S. Moschoglou, and S. Zafeiriou, "Relightify: Relightable 3d faces from a single image via diffusion models," arXiv preprint arXiv:2305.06077, 2023.
Zero-shot-learning cross-modality data translation through mutual information guided stochastic diffusion. Z Wang, Y Yang, M Sermesant, H Delingette, O Wu, arXiv:2301.13743arXiv preprintZ. Wang, Y. Yang, M. Sermesant, H. Delingette, and O. Wu, "Zero-shot-learning cross-modality data translation through mu- tual information guided stochastic diffusion," arXiv preprint arXiv:2301.13743, 2023.
Zero-shot medical image translation via frequencyguided diffusion models. Y Li, H.-C Shao, X Liang, L Chen, R Li, S Jiang, J Wang, Y Zhang, arXiv:2304.02742arXiv preprintY. Li, H.-C. Shao, X. Liang, L. Chen, R. Li, S. Jiang, J. Wang, and Y. Zhang, "Zero-shot medical image translation via frequency- guided diffusion models," arXiv preprint arXiv:2304.02742, 2023.
Zet-speech: Zeroshot adaptive emotion-controllable text-to-speech synthesis with diffusion and style-based models. M Kang, W Han, S J Hwang, E Yang, M. Kang, W. Han, S. J. Hwang, and E. Yang, "Zet-speech: Zero- shot adaptive emotion-controllable text-to-speech synthesis with diffusion and style-based models," 2023.
Exploring diffusion models for unsupervised video anomaly detection. A O Tur, N Dall'asen, C Beyan, E Ricci, arXiv:2304.05841arXiv preprintA. O. Tur, N. Dall'Asen, C. Beyan, and E. Ricci, "Exploring diffusion models for unsupervised video anomaly detection," arXiv preprint arXiv:2304.05841, 2023.
Dds2m: Selfsupervised denoising diffusion spatio-spectral model for hyperspectral image restoration. Y Miao, L Zhang, L Zhang, D Tao, arXiv:2303.06682arXiv preprintY. Miao, L. Zhang, L. Zhang, and D. Tao, "Dds2m: Self- supervised denoising diffusion spatio-spectral model for hyper- spectral image restoration," arXiv preprint arXiv:2303.06682, 2023.
Diffusion models and semi-supervised learners benefit mutually with few labels. Z You, Y Zhong, F Bao, J Sun, C Li, J Zhu, arXiv:2302.10586arXiv preprintZ. You, Y. Zhong, F. Bao, J. Sun, C. Li, and J. Zhu, "Diffusion models and semi-supervised learners benefit mutually with few labels," arXiv preprint arXiv:2302.10586, 2023.
A brief introduction to weakly supervised learning. Z.-H Zhou, National science review. 51Z.-H. Zhou, "A brief introduction to weakly supervised learn- ing," National science review, vol. 5, no. 1, pp. 44-53, 2018.
Llm-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models. L Lian, B Li, A Yala, T Darrell, L. Lian, B. Li, A. Yala, and T. Darrell, "Llm-grounded diffu- sion: Enhancing prompt understanding of text-to-image diffusion models with large language models," 2023.
Anyto-any generation via composable diffusion. Z Tang, Z Yang, C Zhu, M Zeng, M Bansal, arXiv:2305.11846arXiv preprintZ. Tang, Z. Yang, C. Zhu, M. Zeng, and M. Bansal, "Any- to-any generation via composable diffusion," arXiv preprint arXiv:2305.11846, 2023.
Unicontrol: A unified diffusion model for controllable visual generation in the wild. C Qin, S Zhang, N Yu, Y Feng, X Yang, Y Zhou, H Wang, J C Niebles, C Xiong, S Savarese, arXiv:2305.11147arXiv preprintC. Qin, S. Zhang, N. Yu, Y. Feng, X. Yang, Y. Zhou, H. Wang, J. C. Niebles, C. Xiong, S. Savarese et al., "Unicontrol: A unified diffusion model for controllable visual generation in the wild," arXiv preprint arXiv:2305.11147, 2023.
Card: Classification and regression diffusion models. X Han, H Zheng, M Zhou, arXiv:2206.07275arXiv preprintX. Han, H. Zheng, and M. Zhou, "Card: Classification and regres- sion diffusion models," arXiv preprint arXiv:2206.07275, 2022.
Policy representation via diffusion probability model for reinforcement learning. L Yang, Z Huang, F Lei, Y Zhong, Y Yang, C Fang, S Wen, B Zhou, Z Lin, L. Yang, Z. Huang, F. Lei, Y. Zhong, Y. Yang, C. Fang, S. Wen, B. Zhou, and Z. Lin, "Policy representation via diffusion proba- bility model for reinforcement learning," 2023.
Diffusionnag: Taskguided neural architecture generation with diffusion models. S An, H Lee, J Jo, S Lee, S J Hwang, S. An, H. Lee, J. Jo, S. Lee, and S. J. Hwang, "Diffusionnag: Task- guided neural architecture generation with diffusion models," 2023.
Diffusionner: Boundary diffusion for named entity recognition. Y Shen, K Song, X Tan, D Li, W Lu, Y Zhuang, Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang, "Diffusion- ner: Boundary diffusion for named entity recognition," 2023.
Is synthetic data from diffusion models ready for knowledge distillation. Z Li, Y Li, P Zhao, R Song, X Li, J Yang, Z. Li, Y. Li, P. Zhao, R. Song, X. Li, and J. Yang, "Is synthetic data from diffusion models ready for knowledge distillation?" 2023.
. S Pan, E Abouei, J Wynne, T Wang, R L J Qiu, Y Li, C.-W , S. Pan, E. Abouei, J. Wynne, T. Wang, R. L. J. Qiu, Y. Li, C.-W.
Synthetic ct generation from mri using 3d transformer-based denoising diffusion model. J Chang, J Peng, P Roper, D S Patel, H Yu, X Mao, Yang, Chang, J. Peng, J. Roper, P. Patel, D. S. Yu, H. Mao, and X. Yang, "Synthetic ct generation from mri using 3d transformer-based denoising diffusion model," 2023.
Towards consistent video editing with text-to-image diffusion models. Z Zhang, B Li, X Nie, C Han, T Guo, L Liu, Z. Zhang, B. Li, X. Nie, C. Han, T. Guo, and L. Liu, "Towards consistent video editing with text-to-image diffusion models," 2023.
A survey on graph diffusion models: Generative ai in science for molecule, protein and material. M Zhang, M Qamar, T Kang, Y Jung, C Zhang, S.-H Bae, C Zhang, arXiv:2304.01565arXiv preprintM. Zhang, M. Qamar, T. Kang, Y. Jung, C. Zhang, S.-H. Bae, and C. Zhang, "A survey on graph diffusion models: Generative ai in science for molecule, protein and material," arXiv preprint arXiv:2304.01565, 2023.
Diffusion models for high-resolution solar forecasts. Y Hatanaka, Y Glaser, G Galgon, G Torri, P Sadowski, arXiv:2302.00170arXiv preprintY. Hatanaka, Y. Glaser, G. Galgon, G. Torri, and P. Sadowski, "Dif- fusion models for high-resolution solar forecasts," arXiv preprint arXiv:2302.00170, 2023.
End-to-end latent variational diffusion models for inverse problems in high energy physics. A Shmakov, K Greif, M Fenton, A Ghosh, P Baldi, D Whiteson, arXiv:2305.10399arXiv preprintA. Shmakov, K. Greif, M. Fenton, A. Ghosh, P. Baldi, and D. Whiteson, "End-to-end latent variational diffusion models for inverse problems in high energy physics," arXiv preprint arXiv:2305.10399, 2023.
Geometry-complete diffusion for 3d molecule generation. A Morehead, J Cheng, arXiv:2302.04313arXiv preprintA. Morehead and J. Cheng, "Geometry-complete diffusion for 3d molecule generation," arXiv preprint arXiv:2302.04313, 2023.
A latent diffusion model for protein structure generation. C Fu, K Yan, L Wang, W Y Au, M Mcthrow, T Komikado, K Maruhashi, K Uchino, X Qian, S Ji, arXiv:2305.04120arXiv preprintC. Fu, K. Yan, L. Wang, W. Y. Au, M. McThrow, T. Komikado, K. Maruhashi, K. Uchino, X. Qian, and S. Ji, "A latent dif- fusion model for protein structure generation," arXiv preprint arXiv:2305.04120, 2023.
Spontaneous symmetry breaking in generative diffusion models. G Raya, L Ambrogioni, G. Raya and L. Ambrogioni, "Spontaneous symmetry breaking in generative diffusion models," 2023.
Moldiff: Addressing the atom-bond inconsistency problem in 3d molecule diffusion generation. X Peng, J Guan, Q Liu, J Ma, arXiv:2305.07508arXiv preprintX. Peng, J. Guan, Q. Liu, and J. Ma, "Moldiff: Addressing the atom-bond inconsistency problem in 3d molecule diffusion gen- eration," arXiv preprint arXiv:2305.07508, 2023.
Se (3) diffusion model with application to protein backbone generation. J Yim, B L Trippe, V De, E Bortoli, A Mathieu, R Doucet, T Barzilay, Jaakkola, arXiv:2302.02277arXiv preprintJ. Yim, B. L. Trippe, V. De Bortoli, E. Mathieu, A. Doucet, R. Barzi- lay, and T. Jaakkola, "Se (3) diffusion model with application to protein backbone generation," arXiv preprint arXiv:2302.02277, 2023.
Mask, stitch, and re-sample: Enhancing robustness and generalizability in anomaly detection through automatic diffusion models. C I Bercea, M Neumayr, D Rueckert, J A Schnabel, C. I. Bercea, M. Neumayr, D. Rueckert, and J. A. Schnabel, "Mask, stitch, and re-sample: Enhancing robustness and generalizability in anomaly detection through automatic diffusion models," 2023.
Text-toimage diffusion models can be easily backdoored through multimodal data poisoning. S Zhai, Y Dong, Q Shen, S Pu, Y Fang, H Su, arXiv:2305.04175arXiv preprintS. Zhai, Y. Dong, Q. Shen, S. Pu, Y. Fang, and H. Su, "Text-to- image diffusion models can be easily backdoored through multi- modal data poisoning," arXiv preprint arXiv:2305.04175, 2023.
Densepure: Understanding diffusion models towards adversarial robustness. C Xiao, Z Chen, K Jin, J Wang, W Nie, M Liu, A Anandkumar, B Li, D Song, arXiv:2211.00322arXiv preprintC. Xiao, Z. Chen, K. Jin, J. Wang, W. Nie, M. Liu, A. Anandkumar, B. Li, and D. Song, "Densepure: Understanding diffusion models towards adversarial robustness," arXiv preprint arXiv:2211.00322, 2022.
On enhancing the robustness of vision transformers: Defensive diffusion. R Imam, M Huzaifa, M E , -A Azz, arXiv:2305.08031arXiv preprintR. Imam, M. Huzaifa, and M. E.-A. Azz, "On enhancing the robustness of vision transformers: Defensive diffusion," arXiv preprint arXiv:2305.08031, 2023.
Robust evaluation of diffusion-based adversarial purification. M Lee, D Kim, arXiv:2303.09051arXiv preprintM. Lee and D. Kim, "Robust evaluation of diffusion-based adver- sarial purification," arXiv preprint arXiv:2303.09051, 2023.
Raising the bar for certified adversarial robustness with diffusion models. T Altstidl, D Dobre, B Eskofier, G Gidel, L Schwinn, arXiv:2305.10388arXiv preprintT. Altstidl, D. Dobre, B. Eskofier, G. Gidel, and L. Schwinn, "Rais- ing the bar for certified adversarial robustness with diffusion models," arXiv preprint arXiv:2305.10388, 2023.
Diffusion theory as a scalpel: Detecting and purifying poisonous dimensions in pre-trained language models caused by backdoor or bias. Z Zhang, D Chen, H Zhou, F Meng, J Zhou, X Sun, arXiv:2305.04547arXiv preprintZ. Zhang, D. Chen, H. Zhou, F. Meng, J. Zhou, and X. Sun, "Diffusion theory as a scalpel: Detecting and purifying poisonous dimensions in pre-trained language models caused by backdoor or bias," arXiv preprint arXiv:2305.04547, 2023.
Diffusion models for imperceptible and transferable adversarial attack. J Chen, H Chen, K Chen, Y Zhang, Z Zou, Z Shi, arXiv:2305.08192arXiv preprintJ. Chen, H. Chen, K. Chen, Y. Zhang, Z. Zou, and Z. Shi, "Diffusion models for imperceptible and transferable adversarial attack," arXiv preprint arXiv:2305.08192, 2023.
Diffusion-based adversarial sample generation for improved stealthiness and controllability. H Xue, A Araujo, B Hu, Y Chen, H. Xue, A. Araujo, B. Hu, and Y. Chen, "Diffusion-based adver- sarial sample generation for improved stealthiness and control- lability," 2023.
Sex, lies, and videotape: Deep fakes and free speech delusions. M A Franks, A E Waldman, Md. L. Rev. 78892M. A. Franks and A. E. Waldman, "Sex, lies, and videotape: Deep fakes and free speech delusions," Md. L. Rev., vol. 78, p. 892, 2018.
Level up the deepfake detection: a method to effectively discriminate images generated by gan architectures and diffusion models. L Guarnera, O Giudice, S Battiato, arXiv:2303.00608arXiv preprintL. Guarnera, O. Giudice, and S. Battiato, "Level up the deepfake detection: a method to effectively discriminate images gener- ated by gan architectures and diffusion models," arXiv preprint arXiv:2303.00608, 2023.
Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models. Y Qu, X Shen, X He, M Backes, S Zannettou, Y Zhang, Y. Qu, X. Shen, X. He, M. Backes, S. Zannettou, and Y. Zhang, "Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models," 2023.
Detecting images generated by diffusers. D A Coccomini, A Esuli, F Falchi, C Gennaro, G Amato, arXiv:2303.05275arXiv preprintD. A. Coccomini, A. Esuli, F. Falchi, C. Gennaro, and G. Am- ato, "Detecting images generated by diffusers," arXiv preprint arXiv:2303.05275, 2023.
Extracting training data from diffusion models. N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, D Ippolito, E Wallace, arXiv:2301.13188arXiv preprintN. Carlini, J. Hayes, M. Nasr, M. Jagielski, V. Sehwag, F. Tramer, B. Balle, D. Ippolito, and E. Wallace, "Extracting training data from diffusion models," arXiv preprint arXiv:2301.13188, 2023.
A reproducible extraction of training images from diffusion models. R Webster, arXiv:2305.08694arXiv preprintR. Webster, "A reproducible extraction of training images from diffusion models," arXiv preprint arXiv:2305.08694, 2023.
Diffprotect: Generate adversarial examples with diffusion models for facial privacy protection. J Liu, C P Lau, R Chellappa, J. Liu, C. P. Lau, and R. Chellappa, "Diffprotect: Generate adver- sarial examples with diffusion models for facial privacy protec- tion," 2023.
Differentially private latent diffusion models. S Lyu, M Vinaroz, M F Liu, M Park, S. Lyu, M. Vinaroz, M. F. Liu, and M. Park, "Differentially private latent diffusion models," 2023.
Analyzing bias in diffusion-based face generation models. M V Perera, V M Patel, arXiv:2305.06402arXiv preprintM. V. Perera and V. M. Patel, "Analyzing bias in diffusion-based face generation models," arXiv preprint arXiv:2305.06402, 2023.
Don't play favorites: Minority guidance for diffusion models. S Um, J C Ye, arXiv:2301.12334arXiv preprintS. Um and J. C. Ye, "Don't play favorites: Minority guidance for diffusion models," arXiv preprint arXiv:2301.12334, 2023.
Generating high fidelity data from low-density regions using diffusion models. V Sehwag, C Hazirbas, A Gordo, F Ozgenel, C Canton, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition11V. Sehwag, C. Hazirbas, A. Gordo, F. Ozgenel, and C. Canton, "Generating high fidelity data from low-density regions using diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11 492-11 501.
Understanding and mitigating copying in diffusion models. G Somepalli, V Singla, M Goldblum, J Geiping, T Goldstein, G. Somepalli, V. Singla, M. Goldblum, J. Geiping, and T. Gold- stein, "Understanding and mitigating copying in diffusion mod- els," 2023.
Diversify your vision datasets with automatic diffusion-based augmentation. L Dunlap, A Umino, H Zhang, J Yang, J E Gonzalez, T Darrell, L. Dunlap, A. Umino, H. Zhang, J. Yang, J. E. Gonzalez, and T. Darrell, "Diversify your vision datasets with automatic diffusion-based augmentation," 2023.
Stable bias: Analyzing societal representations in diffusion models. A S Luccioni, C Akiki, M Mitchell, Y Jernite, arXiv:2303.11408arXiv preprintA. S. Luccioni, C. Akiki, M. Mitchell, and Y. Jernite, "Stable bias: Analyzing societal representations in diffusion models," arXiv preprint arXiv:2303.11408, 2023.
A style-based generator architecture for generative adversarial networks. T Karras, S Laine, T Aila, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionT. Karras, S. Laine, and T. Aila, "A style-based generator ar- chitecture for generative adversarial networks," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4401-4410.
| [] |
[
"On Quantum Simulation Of Cosmic Inflation",
"On Quantum Simulation Of Cosmic Inflation"
] | [
"Junyu Liu \nWalter Burke Institute for Theoretical Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Yue-Zhou Li \nPhysics Department\nMcGill University\n3600 University StreetH3A 2T8MontrealQCCanada\n"
] | [
"Walter Burke Institute for Theoretical Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Physics Department\nMcGill University\n3600 University StreetH3A 2T8MontrealQCCanada"
] | [] | In this paper, we generalize Jordan-Lee-Preskill, an algorithm for simulating flat-space quantum field theories, to 3+1 dimensional inflationary spacetime. The generalized algorithm contains the encoding treatment, the initial state preparation, the inflation process, and the quantum measurement of cosmological observables at late time. The algorithm is helpful for obtaining predictions of cosmic non-Gaussianities, serving as useful benchmark problems for quantum devices, and checking assumptions made about interacting vacuum in the inflationary perturbation theory. Components of our work also include a detailed discussion about the lattice regularization of the cosmic perturbation theory, a detailed discussion about the in-in formalism, a discussion about encoding using the HKLL-type formula that might apply for both dS and AdS spacetimes, a discussion about bounding curvature perturbations, a description of the three-party Trotter simulation algorithm for time-dependent Hamiltonians, a ground state projection algorithm for simulating gapless theories, a discussion about the quantum-extended Church-Turing Thesis, and a discussion about simulating cosmic reheating in quantum devices. | 10.1103/physrevd.104.086013 | [
"https://arxiv.org/pdf/2009.10921v5.pdf"
] | 221,857,380 | 2009.10921 | 70e3e9a6fa12bd7d99dfcefe678ad8f3bd45ad6b |
On Quantum Simulation Of Cosmic Inflation
12 Sep 2021
Junyu Liu
Walter Burke Institute for Theoretical Physics
Institute for Quantum Information and Matter
California Institute of Technology
91125, Pasadena, CA, USA
Yue-Zhou Li
Physics Department
McGill University
3600 University Street, H3A 2T8, Montreal, QC, Canada
In this paper, we generalize Jordan-Lee-Preskill, an algorithm for simulating flat-space quantum field theories, to 3+1 dimensional inflationary spacetime. The generalized algorithm contains the encoding treatment, the initial state preparation, the inflation process, and the quantum measurement of cosmological observables at late time. The algorithm is helpful for obtaining predictions of cosmic non-Gaussianities, serving as useful benchmark problems for quantum devices, and checking assumptions made about interacting vacuum in the inflationary perturbation theory. Components of our work also include a detailed discussion about the lattice regularization of the cosmic perturbation theory, a detailed discussion about the in-in formalism, a discussion about encoding using the HKLL-type formula that might apply for both dS and AdS spacetimes, a discussion about bounding curvature perturbations, a description of the three-party Trotter simulation algorithm for time-dependent Hamiltonians, a ground state projection algorithm for simulating gapless theories, a discussion about the quantum-extended Church-Turing Thesis, and a discussion about simulating cosmic reheating in quantum devices.
Introduction
Cosmic inflation is probably the most famous paradigm describing the physics of the very early universe immediately after the big bang. Based on solid predictions from general relativity and quantum field theory in curved spacetime, the theory of cosmic inflation successfully describes the homogeneous and isotropic universe measured by the Cosmic Microwave Background (CMB) and Large Scale Structure (LSS). In the standard theory of inflation, the exponential expansion of the universe is driven classically by a massless scalar field, the so-called inflaton, leading to a map of thermal CMB photons, while cosmic perturbations above the classical trajectories of the inflaton are predicted by a usually weakly-coupled quantum field theory in nearly de Sitter spacetime (see standard references about cosmic inflation). Moreover, the complete description of the history of the universe after inflation, including reheating [23-28], the electroweak phase transition and baryogenesis [29-32], etc., might be related to the consequences of inflation. A similar paradigm of exponential expansion also appears in the late history of the universe: dark energy [33-35]. From its birth, the theory of cosmic inflation, with its corresponding cosmic perturbation theory, has achieved great success in predicting cosmological observations. However, a large number of open problems remain for our generation, making it still an active research direction both in theory and in observations. To date, inflation has not been fully confirmed by observations (for instance, the lack of a relatively large tensor-to-scalar ratio makes it hard to distinguish from other theories). Even if the theory is correct, we still do not know the precise dynamics of the inflaton and other possible particles in the very early universe. A more accurate, quantitative picture of the epochs after inflation is also not fully understood. Theoretical discussions about a full quantum gravitational description of de Sitter space are still work in progress, drawing on technologies from string theory, holography and AdS/CFT [36-44]. A full resolution of a possible cosmic singularity and a comprehensive analysis of de Sitter quantum gravity might rely on more precise progress in string theory and Planckian physics, which are still in active development [45].
On the other hand, the rapid development of quantum information science provides many opportunities for the next generation of computing technologies, and many of these opportunities also belong to fundamental physicists. At present, and in the near future, we have the capability to control 50-100 qubits and use them to perform operations, but the noise of quantum systems limits the scale of quantum circuits [46, 47]. In the long run, we hope to be able to manufacture universal, fault-tolerant quantum computing devices that can perform precise calculations on specific problems and achieve quantum speedups. As high-energy physicists, we hope that future quantum computing devices can be used in the research of high-energy physics. Therefore, designing and optimizing quantum algorithms for given high-energy physical processes has gradually become an active research field [48-52]. At the same time, designing quantum algorithms for basic physical processes also has far-reaching theoretical significance. According to the quantum-extended Church-Turing Thesis, any physical process occurring in the real world can be simulated by a quantum Turing machine. Therefore, studying quantum algorithms for simulating physical processes can help support or disprove the quantum-extended Church-Turing Thesis. In practice, designing and running quantum algorithms from fundamental physics could also help us benchmark near-term quantum devices.
In this paper, we initiate the study of extending the Jordan-Lee-Preskill algorithm for particle scattering [48, 49] to inflationary spacetime. The Jordan-Lee-Preskill algorithm is a basic paradigm for simulating quantum field theory processes on a universal quantum computer. The algorithm consists of encoding, initial state preparation, time evolution, and measurement, and is designed to run in time polynomial in the system size, consistent with the quantum-extended Church-Turing Thesis. Here, we discuss the extension of the algorithm from particle scattering and the evaluation of scattering matrices in flat-space quantum field theory to the computation of correlation functions of scalar fields in the inflationary background, connecting with the CMB and LSS observables discussed in cosmology.
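To fix ideas, the following is a minimal structural sketch of such a four-stage pipeline in Python. This is a hypothetical skeleton of our own; the function names, register layout, and placeholder bodies are illustrative and do not come from the original algorithm's implementation.

```python
# Hypothetical skeleton of a Jordan-Lee-Preskill-type simulation; the stage
# names mirror the text above, and all bodies are placeholders.

def encode_fields(lattice_sites, qubits_per_site):
    """Assign a register of qubits to the discretized field amplitude
    at each lattice site (field-basis encoding)."""
    return [list(range(i * qubits_per_site, (i + 1) * qubits_per_site))
            for i in range(lattice_sites)]

def prepare_initial_state(registers):
    """Adiabatically interpolate from the free vacuum to the interacting
    vacuum (placeholder)."""
    ...

def evolve(registers, t0, t, trotter_steps):
    """Trotterized evolution under the time-dependent Hamiltonian
    (placeholder)."""
    ...

def measure_correlator(registers, operator_sites):
    """Estimate the desired correlation function from repeated
    measurements (placeholder)."""
    ...

registers = encode_fields(lattice_sites=4, qubits_per_site=3)
print(registers)  # e.g. [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
```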
In modern cosmology, the cosmic perturbation theory is constructed from a massless, weakly-coupled quantum field theory living in a nearly de Sitter spacetime, written in the Friedmann-Robertson-Walker (FRW) metric. The free theory describes Gaussian perturbations above the thermal spectrum of the CMB map, while small non-Gaussian corrections encode information, for instance, about gravitational nonlinearities and other fundamental fields appearing during inflation, which could in principle be detected by future experiments. Cosmic perturbation theory treats those perturbations in the standard quantum field theory language. Although in this paper we only discuss scalar and curvature perturbations, other (vector and tensor) modes have also been studied extensively, and some of them are closely related to primordial gravitational waves.
So, why should we simulate a weakly-coupled field theory of inflationary cosmology? In the standard description, the weakly-coupled nature is motivated by observations: no large non-Gaussianity has been observed in the CMB map. Since correlation functions can be computed from Feynman diagrams in a weakly-coupled quantum field theory, at least at tree level, we should already know the answer. However, we point out the following reasons showing that it is, in fact, important to simulate such theories.
• This work could serve as a starting point for understanding the Jordan-Lee-Preskill algorithm in curved spacetime. Simulating quantum field theories in curved space is an interesting topic, closely related to many open problems about quantum gravity. However, instead of considering a general spacetime, we can treat de Sitter space, with its rich symmetry structure, as a warmup example.
• When computing correlation functions, we use the in-in formalism of cosmic perturbation theory. The formalism is useful for determining correlation functions reliably under certain physical assumptions, while further implications of the formalism and a systematic understanding of cosmic non-Gaussianities, closely related to the nature of the interacting vacuum of quantum field theories in de Sitter space, are still under active research. We believe that our quantum simulation algorithm will have theoretical significance: it will help us validate the assumptions made in the in-in formalism, and explore, estimate, and bound rigorous errors away from these assumptions. (More details about this will be given in Section 3.)
• Strongly-coupled quantum field theory in curved spacetime is an interesting topic to study and simulate in its own right, despite the lack of phenomenological consequences. For instance, we might be interested in whether such theories have different phases and phase transitions.
• It might also be interesting to study loop diagrams beyond the tree-level diagrams of weakly-coupled theories. Quantum simulation of those theories could thus justify and enrich our understanding of cosmic perturbation theories at higher loops [53-56].
• This work might be regarded as a starting point for studying quantum simulation of quantum field theories in FRW spacetime, which is interesting for other cosmic phases. For instance, cosmic (p)reheating and cosmic phase transitions in the early universe might be described by strongly-coupled quantum field theories, which require the computational capability of quantum machines. Thus, this work could provide experience, insights and intuitions for future studies of quantum simulation of other cosmic phases and phase transitions in those theories (in Appendix B, we discuss a potential simulation problem about cosmic reheating).
• Explicit construction of a quantum algorithm simulating inflationary dynamics could support the statement of the quantum-extended Church-Turing Thesis. It is of great interest to study how the quantum-extended Church-Turing Thesis is compatible with general relativity, since the definition of complexity requires a definition of time coordinates. Our work on inflationary spacetime provides such an example.
• We can also take advantage of the fact that we already know some aspects of inflationary perturbation theory. If we could actually run the algorithm and simulate the dynamics on a quantum device, we could check the answer against our theoretical expectations. Thus, quantum simulation of such theories will be helpful for benchmarking our quantum devices.
• Explicit construction of such algorithms could also benefit future studies of the nature of quantum gravity. For instance, measuring similar correlation functions in AdS could be interpreted as holographic scattering experiments, which might reduce to flat-space scattering amplitudes in the limit of large AdS radius. Such experiments are important for understanding the nature of holographic theories, conformal bootstrap, and the consistency requirements of holographic CFTs [57-64].
• It is fun.
In this paper, we will construct a full quantum simulation algorithm for measuring correlation functions of the following form
$$\langle \Omega_{\rm in}(t_0) |\, \mathcal{O}_H(t)\, | \Omega_{\rm in}(t_0) \rangle\,, \qquad (1.1)$$

in an interacting scalar quantum field theory living in the four-dimensional inflationary spacetime. The state |Ω_in(t_0)⟩ is the vacuum state of the full interacting theory, and O_H(t) is a Heisenberg-picture operator made of multiple scalar fields. In this theory, the Hamiltonian is time-dependent, so we denote by t_0 the time when inflation starts and by t the time when inflation ends. The precise meaning of this correlation function will be given in Section 3; in fact, most experimental observables are given by field theory constructions of the above form.

The algorithm contains (adiabatic) initial state preparation, the time evolution/particle scattering process, and measurement, and is argued to run in polynomial time, analogous to the Jordan-Lee-Preskill experiment. Moreover, since the theory and the simulation targets are very different from flat-space quantum field theories, it is necessary to include new ingredients appearing in the theory, modifying the Jordan-Lee-Preskill experiment. Here, as an outline of this paper, we summarize the following most crucial points.

• Inflation in a lattice. We systematically construct the lattice quantum field theory of curvature perturbation, both for the free theory and the interacting theory in the Heisenberg picture. Furthermore, we compute the Bogoliubov transformation that diagonalizes the Hamiltonian and determine the gap of the free system, both for the continuum and the lattice field theories. The estimate of the mass gap is necessary for us to perform the adiabatic state preparation.
• Adiabatic state preparation to find the interacting vacuum. In our algorithm, as in Jordan-Lee-Preskill, time evolution is used twice. We first use adiabatic state preparation to find the interacting vacuum at the initial time t_0, and then we use time-dependent Trotter simulation to simulate the time evolution in the Heisenberg picture. This treatment will help us determine the nature of the interacting vacuum in curved spacetime.
• Encoding from the causal structure. When we simulate the Heisenberg evolution operator, we fix the computational basis to be the field basis at the initial time t_0. During the time evolution, we wish to encode our Hamiltonian at a general time t, and we define the basis used in the Heisenberg Hamiltonian at time t via the free theory at t_0. Thus, we need to express the field operator at time t as a linear combination of field operators at t_0. The integration kernel is related to Green's identity in the operator context. When we use conformal time, the past light cone, which supports the integration kernel, is the same as in flat space, since the spacetime is conformally flat. This could be regarded as a toy version of the Hamilton-Kabat-Lifschytz-Lowe (HKLL) formula in AdS/CFT [65, 66].
• Encoding bounds from the effective field theory (EFT) scale. In this paper, we choose the field basis for our quantum simulation. In the original algorithm [48, 49], the number of qubits needed to encode in this basis is bounded by the scattering energy of the scattering experiment. However, here we do not naively have a quantity like the flat-space scattering energy. Instead, we bound field (and field-momentum) fluctuations by an EFT scale Λ, which sets the energy scale of cosmic inflation, combined with some other physics in cosmology. The value of Λ is set by the inflationary physics we study.
• Exponential expansion and the Trotter errors. When applying the product formula (Lie-Trotter-Suzuki formula) to the inflationary Hamiltonian discretized on a lattice, we notice that the Trotter errors depend on the scale factor, which increases exponentially when measured in physical time. In conformal time, the error grows polynomially. The total time of inflation, or the e-folding number, thus limits the accuracy of the computation and the number of quantum gates (see the numerical sketch after this list).
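To make the last bullet concrete, here is a minimal numerical sketch of our own (not the paper's construction): we model the per-step error of a first-order product formula as dt² times a commutator norm C(t) that grows like a power of the scale factor, and count how many steps a fixed error budget requires as inflation proceeds. The error model, the power of a(t), and all parameter values are illustrative assumptions.

```python
import numpy as np

H, a0 = 1.0, 1.0  # Hubble rate and initial scale factor (illustrative units)

def scale_factor(t):
    """a(t) = a0 * exp(H t), cf. Eq. (2.2) with t0 = 0."""
    return a0 * np.exp(H * t)

def trotter_steps(n_efolds, err_budget=1e-2, norm_power=2.0, n_grid=4001):
    """Assume the per-step error is ~ dt_i^2 * C(t_i) with an (assumed)
    C(t) ~ a(t)**norm_power.  Optimizing the local step sizes at fixed
    total error gives:  steps ~ (integral of sqrt(C(t)) dt)^2 / err."""
    t = np.linspace(0.0, n_efolds / H, n_grid)
    sqrt_c = scale_factor(t) ** (norm_power / 2.0)
    # trapezoidal integral of sqrt(C(t)) over physical time
    integral = np.sum(0.5 * (sqrt_c[1:] + sqrt_c[:-1]) * np.diff(t))
    return integral ** 2 / err_budget

for n in (1, 5, 10):
    print(f"N = {n:2d} e-folds -> ~{trotter_steps(n):.2e} first-order steps")
# The step count grows exponentially with the e-folding number in physical
# time, illustrating why N limits the accuracy and the gate count.
```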
This paper is organized as follows. In Section 2, we discuss some inflationary quantum field theories and their lattice versions. In Section 3, we discuss the in-in formalism, the basic method for computing correlation functions in inflationary perturbation theory. In Section 4, we discuss the details of our algorithm, including encoding, state preparation, time evolution, measurement, and error estimates. In Section 5, we make some final remarks about quantum simulation of cosmic inflation, and more general perspectives on quantum information theory, quantum gravity, and cosmology related to observations. In Appendix A, we make some further comments about computation and quantum gravity, and in Appendix B, some comments about quantum simulation of cosmic reheating.
The theory of inflationary perturbations
In this section, we review some basics of the theory we will simulate: a simple real massless scalar field moving in an (approximately) de Sitter background written in FRW coordinates. We describe the free theory, its lattice version, and interactions. Some of the material in this section is standard, while some of it is new. To make this paper self-contained, we introduce the full setup with careful comments on our treatment, which we hope will be helpful for readers. For a more detailed study, see standard review articles [11, 13, 17, 19-21].
The free theory
The inflationary spacetime is given by the following metric
ds² = a²(τ) ( −dτ² + dx² ). (2.1)
Here, x is the spatial coordinate three-vector, and τ is called the conformal time, in which the metric is written in a manifestly conformally flat form. The scale factor a(τ) is given by
a(τ) = −1/(Hτ), a(t) = a_0 e^{H(t−t_0)}, (2.2)
where H is the Hubble parameter and a_0 is the initial scale factor at the initial time t_0. The conformal time τ is related to the physical time t by
dt = adτ . (2.3)
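As a quick numerical sanity check of the relations above (a minimal sketch with purely illustrative parameter values, not part of the original derivation), one can integrate dτ = dt/a(t) and compare against the closed form a(τ) = −1/(Hτ):

```python
import numpy as np

# Illustrative numbers only: H = 1 and a_0 = 1 at physical time t_0 = 0.
H, a0, t0 = 1.0, 1.0, 0.0
t = np.linspace(t0, 5.0, 100001)
a = a0 * np.exp(H * (t - t0))            # a(t) = a_0 exp(H (t - t_0)), eq. (2.2)

# Integrate d tau = dt / a(t) with the trapezoidal rule, starting from
# tau(t_0) = -1 / (H a_0) as required by a(tau) = -1 / (H tau).
dtau = np.diff(t) * (1.0 / a[:-1] + 1.0 / a[1:]) / 2.0
tau_numeric = -1.0 / (H * a0) + np.concatenate([[0.0], np.cumsum(dtau)])

tau_closed = -1.0 / (H * a)              # closed-form conformal time
print(np.max(np.abs(tau_numeric - tau_closed)))   # tiny: the two agree
```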
At the beginning of inflation, we have τ_0 ∼ −∞, while at the end, we have τ_end ∼ 0⁻. Thus, at the beginning of inflation, which corresponds to the big bang, we have a(τ_0) ∼ 0; at the end, the scale factor is very large. We often use the e-folding number
N = log(a_end/a_0), (2.4)
to measure the time of inflation. Now, we consider a massless scalar field σ moving in this background, with the action
S = ∫ d³x dt [ (a³/2) σ̇² − (a/2)(∂_i σ)² ] = ∫ d³x dτ (a²/2) [ σ′² − (∂_i σ)² ], (2.5)
where ∂_i denotes the spatial derivative. Furthermore, we define A′ = ∂_τ A and Ȧ = ∂_t A for a variable A. Since this is a free theory, it can be solved exactly. The solution of the field operator in the Heisenberg picture reads
σ(τ, x) = ∫ (d³k/(2π)³) ( u_k a_k + u_k^* a_{−k}^† ) e^{ik·x}. (2.6)
Here, a and a^† are annihilation and creation operators,

[a_k, a_{k′}] = 0, [a_k, a_{k′}^†] = (2π)³ δ³(k − k′), (2.7)
u satisfies the classical equation of motion, and k ≡ |k|. The definitions of the mode function u and of the vacuum have some ambiguities due to the nature of the curved spacetime. A canonical choice is the so-called Bunch-Davies vacuum [67],
u_k(τ) = (H/√(2k³)) (1 + ikτ) e^{−ikτ}. (2.8)
In the curved spacetime, the quantization defined by the creation and annihilation operators is analogous to the flat-space one: the mode k is created by the creation operator a_k^†. How is this related to cosmic perturbation theory? In fact, the free field σ we are considering here could be understood as the fluctuation of the inflaton field δφ(t, x), where
φ(t, x) = φ̄(t) + δφ(t, x). (2.9)
Here, φ̄(t) is the homogeneous and isotropic classical background, independent of the spatial coordinates, and δφ is understood as a quantum fluctuation (an operator) following the rules of quantum field theory. A similar treatment is used for computing the thermal radiation of a black hole, where one also expands fluctuations around a classical curved geometry. In principle, cosmic perturbation theory requires a full treatment expanding the metric together with the inflaton field. From the full expansion, one could decompose perturbations into different components based on their spin. In the geometry, the curvature perturbation comes together with the inflaton, and there are ambiguities due to redefinitions of coordinates and fields. The redundancy can be fixed by a gauge choice. A standard and convenient choice is to replace the role of δφ by another variable ζ, the curvature perturbation; this gauge is called the ζ-gauge. In the ζ-gauge, we can write down the second-order action
S = ∫ dt d³x ε [ a³ ζ̇² − a (∂_i ζ)² ], (2.10)
where ε is the so-called slow-roll parameter

ε ≡ −Ḣ/H² ≈ ε_V = (1/2) (∂_φ V / V)² ≈ φ̄̇² / (2H²), (2.11)
and V = V(φ) is the inflationary potential. More general single-field inflationary models have the following second-order action in the ζ-gauge,

S = ∫ dt d³x ε [ (a³/c_s²) ζ̇² − a (∂_i ζ)² ], (2.12)
where c_s is called the sound speed [14], defined below. In this case, the mode expansion reads
ζ(τ, x) = ∫ (d³k/(2π)³) ( v_k a_k + v_k^* a_{−k}^† ) e^{ik·x}, (2.15)
where
v k = H √ 4 c s k 3 (1 + ikc s τ ) e −ikcsτ . (2.16)
This action is exactly the free-theory piece of the whole action we wish to simulate, as the simplest example. Throughout, we use units in which the Planck mass M_pl = 1, and roughly speaking, one could interpret ζ as ζ = −Hδφ/φ̄̇. The sound speed c_s is defined as follows: writing the Lagrangian density of the whole inflaton field before perturbation as

L ⊃ √(−g) P(φ, X), (2.13)

where X ≡ −(∂_μ φ ∂^μ φ)/2, we could define the sound speed by

c_s² = P_,X / (P_,X + 2X P_,XX). (2.14)
The lattice free theory
Now, motivated by the goal of quantum simulation, we define our theory on the lattice. The lattice version in this paper means that we discretize the spatial directions, replacing the continuous space by

Ω³ = b Z³_{L̂}, (2.17)

where b is the lattice spacing, L is the total length of a single direction, and

L̂ = L/b. (2.18)

The total number of sites is then given by

V̂ = L̂³. (2.19)
The notation Z³_{L̂} denotes the three-dimensional integer periodic lattice with periodicity L̂ in each direction. One could define a lattice version of the scalar quantum field theory based on the action of the curvature perturbation. From the Lagrangian density in the continuum,
L_τ = ε a² [ (1/c_s²) ζ′² − (∂_i ζ)² ], (2.20)
we could write down a discrete version of the Lagrangian

L_τ = ε a² b³ Σ_{x∈Ω³} [ (1/c_s²) ζ′(x)² − (∇_i ζ(x))² ], (2.21)

where

∇_i ζ(x) ≡ ( ζ(x + b r_i) − ζ(x) ) / b, (2.22)
is defined as the discrete version of the spatial derivative, and r i is the unit vector along the spatial direction i. Note that the definition of the Lagrangian depends on the time coordinate, and here we are using the conformal time τ . Now we could perform the Legendre transform to define the Hamiltonian. The field momentum is
π_ζ = ∂L/∂ζ′ = (2ε a² / c_s²) ζ′. (2.23)
Thus, the Hamiltonian density in the continuum is
H_τ = π_ζ ζ′ − L_τ = (c_s² / (4ε a²)) π_ζ² + ε a² (∂_i ζ)². (2.24)
Thus, the lattice version is
H_τ = b³ Σ_{x∈Ω³} [ (c_s² / (4ε a²)) π_ζ²(x) + ε a² (∇_i ζ(x))² ]. (2.25)
Now we could write down the rule of canonical quantization
[ζ(τ, x), π ζ (τ, y)] = ib −3 δ x,y , [ζ(τ, x), ζ(τ, y)] = [π ζ (τ, x), π ζ (τ, y)] = 0 . (2.26)
Now we make some brief comments on the applicability of the above theory. In our theory, we choose the ultraviolet cutoff Λ = 1/b, corresponding to the spatial lattice spacing b. In cosmic inflation, we usually demand
H < √(ε H M_pl) = √(ε H) ∼ √(φ̄̇) ≪ 1/b = Λ ≪ M_pl = 1. (2.27)
The right-hand side expresses that we are avoiding Planckian physics, with the Planck mass set to 1 in our units. Now, we extend our mode expansions to the lattice version,
ζ(τ, x) = Σ_{k∈Γ³} (1/L³) ( v(k) a_k + v^*(k) a_{−k}^† ) e^{ik·x}, (2.28)
where Γ 3 is the dual lattice
Γ³ = (2π/L) Z³_{L̂}, (2.29)
and
[a_p, a_q] = 0, [a_p, a_q^†] = L³ δ_{p,q}. (2.30)
Note that in the formula, we assume that v(−k) = v(k) to make sure we have a real field. v(k) should satisfy the equation of motion for the corresponding classical discrete field theory. We find
v(k) = (H / (2√(ε c_s ω(k)³))) (1 + iω(k) c_s τ) e^{−iω(k) c_s τ}, ω(k) = (2/b) [ Σ_{i=1}^{3} sin²(b k_i / 2) ]^{1/2}. (2.31)
When we compare this result with the continuous mode function, it is easy to realize the following properties of the solution.
• We indeed have v(−k) = v(k).
• The form of the solution is intuitively understandable. The form of the dispersion relation is already presented in the case of flat space in the lattice theory [48,49].
Since the metric is conformally flat, the dispersion relation should stay the same in our coordinates. Using the dispersion relation ω(k) in the equation of motion in Fourier space, we obtain v(k).
• In the limit where τ → −∞, the mode function reduces to the flat-space lattice solution with the sound speed:

v(k) = (iHτ √c_s / (2√(ε ω(k)))) e^{−iω(k) c_s τ} ⇒ √(2ε) a v(k) = √(c_s / (2ω(k))) e^{−iω(k) c_s τ}. (2.32)

Note that √(2ε) a v(k) is the honest scalar-field mode function defined consistently in flat space; the coefficient √(2ε) comes from the relation between the curvature perturbation and the scalar field perturbation. This is indeed the mode function of the "phonons" in 3+1 dimensions. Thus, we indeed define a lattice version of the Bunch-Davies vacuum.
• We could also look at the continuum limit b → 0, where the solution reduces to the continuum answer

v(k) ⇒ v_k = (H / √(4ε c_s k³)) (1 + ik c_s τ) e^{−ik c_s τ}. (2.33)
Note that the existence of the cubic lattice breaks the rotational symmetry, so on the lattice v(k) is not a function of the norm k alone. We could also compute the next-to-leading short-distance correction to the dispersion relation,

ω(k) = k − (b² Σ_i k_i⁴) / (24k) + O(b⁴), (2.34)

which will in turn modify, for instance, the two-point function and the power spectrum. This is, in fact, a lattice effect of Lorentz symmetry breaking in cosmic inflation. Lattice effects of quantum gravity have been studied in a completely different context: there, one assumes the lattice theory to be some ultraviolet completion of general relativity and asks for phenomenological consequences bounding the lattice spacing b, for instance, the observational bound from the power spectrum of the CMB [68]. But we are in a completely different situation: we are using the lattice theory to simulate the quantum field theory in the continuum. A quick numerical check of the lattice dispersion relation and its continuum limit is sketched below.
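The following minimal sketch (with purely illustrative values) checks the lattice dispersion relation of eq. (2.31) against its continuum limit and the O(b²) correction of eq. (2.34):

```python
import numpy as np

b = 0.1                                  # lattice spacing (illustrative)

def omega(k, b):
    # Lattice dispersion omega(k) = (2/b) [sum_i sin^2(b k_i / 2)]^{1/2}
    return (2.0 / b) * np.sqrt(np.sum(np.sin(b * np.asarray(k) / 2.0) ** 2))

k = np.array([0.3, 0.2, 0.1])            # a sample momentum three-vector
knorm = np.linalg.norm(k)

lattice = omega(k, b)
# Continuum value plus the leading O(b^2) lattice correction, eq. (2.34)
corrected = knorm - b**2 * np.sum(k**4) / (24.0 * knorm)

print(lattice, knorm, corrected)
# lattice ~ corrected, and both approach |k| as b -> 0, as expected
```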
The gap
In flat space, we know that the massless scalar field theory is gapless if we don't impose an infrared cutoff when defining the continuum field theory. In the lattice version, the gap scales as 1/L, which is also the infrared cutoff of the theory. Similarly, a massless field in our inflationary FRW metric is also gapless. Although the Hamiltonian is time-dependent, the Bogoliubov transformation shows that the diagonal modes of the Hamiltonian remain gapless at all times during inflation.
A direct way to verify the statement is an explicit computation of the Bogoliubov transformation. To start, we take the free continuum Hamiltonian expressed in the ζ fields and field momenta in the conformal time τ. Replacing the fields and field momenta by their solutions, we obtain the Hamiltonian evaluated in the Heisenberg picture,
H_τ = ∫ (d³k/(2π)³) [ A_k ( 2a_k^† a_k + (2π)³ δ³(0) ) + ( B_k a_{−k}^† a_k^† + H.c. ) ], (2.35)
where H.c. means the Hermitian conjugate, and
A_k = (1 + 2c_s² τ² k²) / (4c_s τ² k), B_k = ((1 − 2ic_s τ k) / (4c_s τ² k)) e^{2ic_s τ k}. (2.36)
The existence of the off-diagonal terms in the Hamiltonian reveals its time-dependent nature. To diagonalize the Hamiltonian, we make the following Bogoliubov transformation
a † k = α k b † k + β k b −k , a k = α * k b k + β * k b † −k . (2.37)
Note that now we keep a_k^† and a_k static. The coefficients α_k, β_k, and the new operators b_k^† and b_k, are allowed to be time-dependent. We wish to ensure that b_k^† and b_k are still canonical,
[b_k(τ), b_{k′}(τ)] = 0, [b_k(τ), b_{k′}^†(τ)] = (2π)³ δ³(k − k′). (2.38)
Furthermore, we wish to cancel the off-diagonal terms. We find that we need to demand
|α_k|² = (1/2) ( 1 + A_k / √(|A_k|² − |B_k|²) ), β_k = ( (√(|A_k|² − |B_k|²) − A_k) / B_k ) α_k^*. (2.39)
Now the Hamiltonian looks like
H_τ = ∫ (d³k/(2π)³) (c_s k) [ b_k^† b_k + (1/2)(2π)³ δ³(0) ]. (2.40)
Ignoring the time-dependence of b_k, it seems that the Hamiltonian is formally the same as in flat space. This is because the nature of the above Bogoliubov transformation is to keep redefining the vacuum state during the time evolution, evaluated under the definition of b_k, so that the form of the Hamiltonian stays the same as at the initial time τ_0. The lattice calculation is very similar to the continuum version, since we only need to replace the continuous momenta by discrete ones. After the Bogoliubov transformation, the Hamiltonian is written as
H_τ = Σ_{k∈Γ³} (1/L³) (c_s ω(k)) [ b_k^† b_k + (1/2) L³ ]. (2.41)
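As a consistency check of the Bogoliubov diagonalization (a sketch with illustrative parameter values only), one can verify numerically that √(A_k² − |B_k|²) = c_s k/2, so that the diagonalized Hamiltonian takes the form of eq. (2.40), and that eq. (2.39) preserves the canonical commutator, i.e. |α_k|² − |β_k|² = 1:

```python
import numpy as np

cs, tau, k = 0.8, -10.0, 0.7     # illustrative values (conformal time tau < 0)

A = (1 + 2 * cs**2 * tau**2 * k**2) / (4 * cs * tau**2 * k)
B = (1 - 2j * cs * tau * k) / (4 * cs * tau**2 * k) * np.exp(2j * cs * tau * k)

E = np.sqrt(A**2 - abs(B) ** 2)  # diagonal single-mode energy
print(E, cs * k / 2)             # equal: H ~ c_s k (b^dag b + 1/2), eq. (2.40)

alpha = np.sqrt(0.5 * (1 + A / E))          # real phase convention for alpha
beta = (E - A) / B * np.conj(alpha)         # from eq. (2.39)
print(abs(alpha) ** 2 - abs(beta) ** 2)     # ~ 1: b, b^dag stay canonical
```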
From now on, we call the particle created by b^† the "diagonal mode" (normal mode), to distinguish it from the particle defined by a^†, the ζ-particle (the curvature perturbation). The above calculation in the continuum explicitly shows that the Hamiltonian has no mass gap at any time slice. Furthermore, from the energy alone, we cannot distinguish the vacuum from diagonal modes carrying zero momentum. In the lattice calculation, the gap scales as 1/L since our momenta are drawn from the discrete set Γ³. These two facts are consistent when we assign an infrared cutoff 1/L to the field theory. We wish to make the following further comments.
• Similar Bogoliubov-transformation calculations are done in the context of particle production in inflation and "cosmic decoherence": analogously to the Unruh effect and the black hole information problem, such transformations could lead to rich particle production. Particles produced by transformations of the vacua are closely related to decoherence, where the reduced density matrix becomes nearly diagonal, and quantum perturbations in inflation could become approximately "classical" when exiting the horizon. See the references [23,[69][70][71][72][73][74].
• We are performing the calculation in the conformal time τ . The story might be different when we mix time and space and define some other time coordinates.
In fact, the definition of the mass gap, which is related to energy, is naturally associated with the definition of time. In general, one might consider defining some version of the ADM mass, although this might face technical troubles when we are considering the de Sitter boundary condition. This question might be interesting for quantum simulation, since the value of the gap is naturally associated with the efficiency of the adiabatic state preparation process for finding the vacuum. This points to further problems about the compatibility between quantum algorithms based on quantum mechanics (requiring a definition of time) and relativity (admitting ambiguities in defining time and space), and about the nature of the quantum-extended Church-Turing Thesis in curved spacetime. It might also be interesting to consider different definitions of the gap in AdS. We leave those discussions for future research.
Interactions
Now we wish to introduce interactions beyond the second-order action. Currently, in the standard theory of inflation, we only assume a scalar field φ, but the nature of this field is not very clear. Furthermore, it might be possible that there are some other particles interacting with the inflaton, for instance, the standard model particles. There might also be contributions from higher-order cosmic perturbation theory, gravity nonlinearities, inflaton self-interaction, and so on. Those contributions will, in principle, go beyond the free theory, and one could compute them order by order in the perturbation theory framework.
In this paper, we could imagine that we are simulating the following specific model, although most discussions in this paper could be easily generalized to much more generic scenarios. We will consider the general single-field inflation framework, and we expand the action to third order [14]. We obtain
S_int = ∫ dt d³x a³ [ −(2λ/H³) ζ̇³ + (Σ/(a²H³)) (1 − c_s²) ζ̇ (∂_i ζ)² ], (2.42)
where λ and Σ are model parameters, defined below. This interaction gives a general form of cubic interaction, which will generate non-trivial three-point functions, namely, non-Gaussianities.
As we discussed in the introduction, the standard inflation theory is expected to be weakly coupled; namely, all interactions we include here should have small coupling constants, admitting perturbative expansions. In terms of quantum simulation, during the adiabatic procedure we will discuss later, we could extend the calculation towards the strongly-coupled regime. Thus, we do not really need to expand the action if we wish to simulate the theory of general single-field inflation: we could, in principle, even study the original action without perturbative expansion. However, here we are describing an example where we indeed do an expansion before encoding into quantum devices, with the motivations described in the introduction section of this paper. Now we write down the Hamiltonian of the above interaction. We have

L_τ = ε a² [ ζ′²/c_s² − (∂_i ζ)² ] + a [ −(2λ/H³) ζ′³ + (Σ/H³)(1 − c_s²) ζ′ (∂_i ζ)² ]. (2.44)
Up to the leading order in perturbation theory, the Legendre transform gives the following total Hamiltonian, split as the sum of the free and interacting pieces (from now on, we ignore the subscript τ when not necessary):

H = H_0 + H_I,
H_0 = ∫ d³x [ (c_s² π_ζ²)/(4ε a²) + ε a² (∂_i ζ)² ],
H_I = ∫ d³x [ (c_s⁶ λ/(4ε³ a⁵ H³)) π_ζ³ + (c_s² (c_s² − 1) Σ/(2ε a H³)) π_ζ (∂_i ζ)² ]. (2.45)
In fact, we have

Σ ≡ X P_,X + 2X² P_,XX = ε H²/c_s², λ ≡ X² P_,XX + (2/3) X³ P_,XXX, (2.43)

where P is defined above in the definition of the sound speed. Furthermore, the coupling constants in the cubic interaction are comparable to O(ε²).
We wish to split the whole Hamiltonian into three pieces:

H = H_1 + H_2 + H_3,
H_1 = ∫ d³x [ (c_s² π_ζ²)/(4ε a²) + (c_s⁶ λ/(4ε³ a⁵ H³)) π_ζ³ ],
H_2 = ∫ d³x ε a² (∂_i ζ)²,
H_3 = ∫ d³x (c_s² (c_s² − 1) Σ/(2ε a H³)) π_ζ (∂_i ζ)². (2.46)
Here, H_1 and H_2 stand for the pieces of the Hamiltonian written purely in terms of the field momenta and the field, respectively, and H_3 is the mixing term. The lattice version is given by

H = H_1 + H_2 + H_3,
H_1 = Σ_{x∈Ω³} b³ [ (c_s² π_ζ²(x))/(4ε a²) + (c_s⁶ λ/(4ε³ a⁵ H³)) π_ζ³(x) ],
H_2 = Σ_{x∈Ω³} b³ ε a² (∇_i ζ(x))²,
H_3 = Σ_{x∈Ω³} b³ (c_s² (c_s² − 1) Σ/(2ε a H³)) π_ζ(x) (∇_i ζ(x))². (2.47)
The Hamiltonian H here will be the Hamiltonian we wish to implement in the quantum device. We have the following comments.
• The above splitting H = H_1 + H_2 + H_3 is important when we are doing the Trotter simulation. In the product formula, the Trotter error is estimated by chains of commutators, and the commutation relation between the field and the field momentum will be important to determine those commutators. We will discuss details of this calculation later. (A minimal numerical sketch of field-basis operators and their canonical commutator appears after this list of comments.)
• We have dropped some higher-order corrections in the Legendre transformation. That is, the Legendre transformation we have performed from the Lagrangian to the Hamiltonian is correct up to the leading order in the coupling constants, because non-trivial couplings appear in the Legendre transformation itself. Although we have promised that we are able to simulate the theory non-perturbatively, what we simulate non-perturbatively is this Hamiltonian, not the theory defined by the original Lagrangian.
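To make the field-basis encoding and the role of the commutator concrete, here is a minimal single-site sketch (all sizes and cutoffs are illustrative assumptions): the field operator ζ is diagonal in the truncated field basis, the field momentum is built spectrally as the generator of field translations, and the canonical commutation relation (2.26) is checked on a smooth state:

```python
import numpy as np

b = 1.0                          # lattice spacing
n = 64                           # field grid points per site (~ log2(n) qubits)
grid = np.linspace(-5.0, 5.0, n)
dzeta = grid[1] - grid[0]

zeta = np.diag(grid)             # field operator: diagonal in the field basis

# Field momentum pi = -i b^{-3} d/dzeta, realized spectrally via the DFT.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
p = 2 * np.pi * np.fft.fftfreq(n, d=dzeta)      # conjugate "momenta"
pi = (F.conj().T @ np.diag(p) @ F) / b**3

# Check [zeta, pi] = i b^{-3} acting on a smooth (Gaussian) state, away from
# the boundaries of the truncated field range.
psi = np.exp(-grid**2 / 2.0)
comm_psi = zeta @ (pi @ psi) - pi @ (zeta @ psi)
print(comm_psi[n // 2] / psi[n // 2])           # ~ 1j * b**(-3)
```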
Further comments
Finally, we make some further comments on some aspects of inflationary perturbation theory.
• The field theory content and its lattice analog described in this section are generically applicable to other relativistic field theories. One might also adapt the techniques mentioned in this paper to problems of interest in non-relativistic quantum field theories.
• Choice of momentum/coordinates. When doing quantum simulation, in this paper, we use the action as an integral over local fields in spacetime, writing local fields as functions of the spacetime coordinates. However, cosmological observables are usually written directly in terms of momenta instead of coordinates. Thus, an extra Fourier transform will be performed classically after we perform quantum simulations of correlation functions measured at different space coordinates. The advantage of this treatment is that we still have a local quantum field theory when we are doing quantum simulation, admitting a quantum advantage when we use the product formula to do digital simulations. Of course, one could consider directly simulating the quantum field theory with couplings in momentum space. However, in this case, we lose our local quantum field theory description: for instance, we would have a summation of modes of the form σ_k σ_{−k}, which is not local in the space of momenta. Furthermore, the correlation functions measured in momentum space carry a delta function δ³(Σ_i k_i) due to momentum conservation, which is not easy to deal with numerically. In order to read off the prefactor in front of the delta function, we would probably need an extra integral over some momentum variables to integrate the delta function out, bringing extra numerical costs. In any case, to get the full correlation functions, we always need the information specified by different momenta. We leave the direct treatment in momentum space for future works. For simulating quantum field theories written in the non-local form, see a review in quantum computational chemistry [75], and recent applications [76].
• The Bunch-Davies vacuum. The rich symmetries of de Sitter space lead to a family of vacuum states for quantum field theories living in it, usually called the α-states or the α-vacua. Among those states, the Bunch-Davies vacuum plays a special role: it is the vacuum state that instantaneously looks like the Minkowski vacuum, namely, the zero-particle state observed by a free-fall observer. Furthermore, the Bunch-Davies vacuum minimizes the initial energy in our time-dependent Hamiltonian. Thus, the Bunch-Davies vacuum becomes a natural choice when we are doing cosmic perturbation theory. For cosmological consequences of non-Bunch-Davies vacua, see [77,78] for discussions. For recent discussions about simulation of the Bunch-Davies vacuum, see [79]. Exploring non-Bunch-Davies vacua and looking for possible simulation opportunities in quantum technology are important research directions; in fact, they are closely related to some fundamental assumptions in cosmic perturbation theory, see [78,80].
• Other coordinate systems. The coordinates we study in this paper are widely used in the cosmology literature, but they give only one possible slicing of global de Sitter space. With our coordinate system, it is easy to generalize the flat-space tricks, since in our coordinates the metric is manifestly conformally flat. However, one could indeed consider other coordinates. Other slicings and other patches of de Sitter space might mix the notions of space and time in the current definition. There are similar studies in AdS where people consider the discretization of the whole spacetime and study the corresponding lattice quantum field theories (see, for instance, [81], and some previous related research [82][83][84][85][86]).
• The picture. We are quantizing the field in the Heisenberg picture, and the Hamiltonian is also written in the Heisenberg picture. In the free theory, the mode function satisfies the equation of motion, so the quantum field satisfies the Heisenberg equation. Note that the Hamiltonian is now explicitly time-dependent, so the Hamiltonian in the Heisenberg picture is not the same as the Hamiltonian in the Schrödinger picture. In our time-dependent quantum field theory setup, it is more convenient to use the Heisenberg picture, while it is natural to understand quantum computing in the Schrödinger picture. However, this does not affect our current purpose. In the first step that we propose in the introduction, we will use adiabatic state preparation; since the time is fixed at τ_0 and we slowly turn on the coupling constant dynamically, the Heisenberg and Schrödinger Hamiltonians are the same. In the second step, we will use the Trotter time evolution to compute the expectation value of correlation functions evaluated at a later time. Then, we could use quantum computation to evaluate the unitary transformation in the Heisenberg picture, acting on the interacting vacuum. Hence, we are mimicking the Heisenberg evolution operator using the Schrödinger evolution of the quantum circuit. It might be interesting to consider using the Schrödinger picture completely in future works.
• Trans-Planckian problem. How early should we set τ_0 when we are doing the simulation? We know that the comoving scale should be exponentially small in the past, since the universe is exponentially expanding. Thus, early times correspond to extremely short distances, which might be smaller than the Planck length. This is called the Trans-Planckian problem in cosmology, since we don't have a Planckian theory of quantum gravity [87]. We are not addressing the conceptual resolution of the problem, but it is practically related to where we could choose τ_0 when we are doing the simulation. In fact, this could be understood as an advantage of our simulation: we expect τ_0 to lie in the very far past, and we could test different values of the initial time τ_0 other than the ideal −∞ of perturbative quantum field theory. This might help theoretical physicists address the Trans-Planckian issue; for instance, one could check if the vacuum is really adiabatic on sub-horizon scales. In fact, this issue is closely related to the in-in formalism we will discuss later, since we care about interactions in the Trans-Planckian regime happening at extremely early times.
The in-in formalism
The above section describes how to write down the action of the inflationary perturbations. However, it does not cover how we could make predictions from those actions. The way we do perturbation theory and make predictions in cosmology is quite different from what we usually do in flat space. In this section, we will briefly introduce the most popular framework in cosmology, the in-in formalism, and how its observables are directly connected to observations and experiments. Understanding the framework is important for us to perform quantum simulation, on which we will comment briefly in this section. Furthermore, exploring the bounded error beyond the approximations of the in-in formalism provides us with important motivations for quantum simulation.
The theory
The in-in formalism is a powerful approach for computing correlation functions for time-dependent Hamiltonians, and it is specifically useful for cosmic perturbation theory. Unlike the S-matrix in flat space, which runs from the infinite past to the infinite future (the in-out formalism), the in-in formalism concerns correlation functions where bras and kets are both from the past, namely, both are "in states". As we know, our observational data about the universe should, in principle, come from observables in quantum mechanics. Generically, we wish to compute the following expectation value in the Heisenberg picture,
⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩, (3.1)
where the operator is now evolving with time, and the state is static, equivalent to the state at the initial time τ 0 . In this section, we will describe how to evaluate the above variable in the cosmic perturbation theory, how it is connected to observations, and how it is related to quantum simulation. Since our theory could generically apply to time-dependent systems, we will use the notation τ for time, and when applying the theory to cosmic inflation, we will use the in-in formalism completely in the conformal time.
Basics
We start with some kindergarten quantum mechanics. Before we start our discussion, we need to quote the following celebrated Feynman's disentangling theorem.
Theorem 1 (Feynman's disentangling theorem). For time-dependent operators A and B, and τ_1 > τ_0, we have

T exp[ ∫_{τ_0}^{τ_1} (A(τ′) + B(τ′)) dτ′ ] = T exp[ ∫_{τ_0}^{τ_1} A(τ′) dτ′ ] · T exp[ ∫_{τ_0}^{τ_1} B̃(τ′) dτ′ ], (3.2)

where

B̃(τ′) ≡ T̄ exp[ −∫_{τ_0}^{τ′} A(τ″) dτ″ ] · B(τ′) · T exp[ ∫_{τ_0}^{τ′} A(τ″) dτ″ ], (3.3)

and T is the time-ordering operator, T̄ is the anti-time-ordering operator.
Using the theorem, we will discuss some time-dependent quantum mechanics. Consider the time-dependent Hamiltonian H_S(τ) in the Schrödinger picture (or we could call it the Schrödinger Hamiltonian). The evolution of the state |ψ_S(τ)⟩ is given by

|ψ_S(τ)⟩ = U_S(τ, τ_0) |ψ_S(τ_0)⟩, (3.4)
where U S is given by a Dyson series
U_S(τ, τ_0) = T exp[ −i ∫_{τ_0}^{τ} H_S(τ′) dτ′ ]. (3.5)
Now, let us consider measuring the observable O. In the Schrödinger picture, it is given by
⟨ψ_S(τ)| O |ψ_S(τ)⟩ = ⟨ψ_S(τ_0)| U_S^†(τ, τ_0) O U_S(τ, τ_0) |ψ_S(τ_0)⟩. (3.6)
Alternatively, we could define the Heisenberg picture. We define the time-dependent Heisenberg operator
U † S (τ, τ 0 ) OU S (τ, τ 0 ) = O H (τ ) ,(3.7)
and the time-dependent result of the measurement is given by
⟨ψ_S(τ)| O |ψ_S(τ)⟩ = ⟨ψ_S(τ_0)| O_H(τ) |ψ_S(τ_0)⟩. (3.8)
Specifically, we define the Hamiltonian in the Heisenberg picture (or we call it the Heisenberg Hamiltonian),
U † S (τ, τ 0 ) H S (τ )U S (τ, τ 0 ) = H H (τ ) . (3.9)
We could compute the conjugate of the unitary operator U S :
U_S^†(τ, τ_0) = U_S^{−1}(τ, τ_0) = [ 1 − i ∫_{τ_0}^{τ} dτ′ H_S(τ′) − ∫_{τ_0<τ_2<τ_1<τ} dτ_1 dτ_2 H_S(τ_1) H_S(τ_2) + … ]^†
= 1 + i ∫_{τ_0}^{τ} dτ′ H_S(τ′) − ∫_{τ_0<τ_2<τ_1<τ} dτ_1 dτ_2 H_S(τ_2) H_S(τ_1) + …
= T̄ exp[ i ∫_{τ_0}^{τ} dτ′ H_S(τ′) ]. (3.10)
Note that the Heisenberg Hamiltonian and the Schrödinger Hamiltonian are different in time-dependent quantum mechanics. Now we could use the Feynman's disentangling theorem to show that
Theorem 2. For τ_2 > τ_1, we have

O_H(τ_2) = U_H^†(τ_2, τ_1) O_H(τ_1) U_H(τ_2, τ_1), (3.11)

where

U_H(τ_2, τ_1) = T exp[ −i ∫_{τ_1}^{τ_2} H_H(τ′) dτ′ ]. (3.12)
Proof. Consider an operator O S in the Schrödinger picture. By definition, we have
O S = U S (τ 1 , τ 0 ) O H (τ 1 )U † S (τ 1 , τ 0 ) = U S (τ 2 , τ 0 ) O H (τ 2 )U † S (τ 2 , τ 0 ) . (3.13)
Then
O H (τ 2 ) = U † S (τ 2 , τ 0 ) U S (τ 1 , τ 0 ) O H (τ 1 )U † S (τ 1 , τ 0 ) U S (τ 2 , τ 0 ) . (3.14)
Namely we want to show
U † S (τ 1 , τ 0 ) U S (τ 2 , τ 0 ) = U H (τ 2 , τ 1 ) . (3.15)
Understand the upper time τ_1 in the statement of Theorem 1 as τ_2 here. Then take
A(τ ) = Boole(τ 0 < τ < τ 1 ) (−H S (τ )) , B(τ ) = Boole(τ 1 < τ < τ 2 ) (−H S (τ )) ,(3.16)
where
Boole(x) = x = yes : 1 x = no : 0 . (3.17)
Then we get the answer.
The Heisenberg picture will be the starting point in the following discussion. So when we are talking about the Hamiltonian H, we mean the Heisenberg Hamiltonian in this paper.
We consider a time-dependent Hamiltonian in the perturbation theory. We define the following splitting in the Heisenberg picture
H(τ ) = H 0 (τ ) + H I (τ ) ,(3.18)
where H 0 here is the free Hamiltonian, and H I is the small interaction term. Now we define the interaction picture. The states in the interaction picture are defined as
|ψ_I(τ)⟩ = T̄ exp[ i ∫_{τ_0}^{τ} dτ′ H_0(τ′) ] |ψ_S(τ)⟩ = T̄ exp[ i ∫_{τ_0}^{τ} dτ′ H_0(τ′) ] · T exp[ −i ∫_{τ_0}^{τ} H_S(τ′) dτ′ ] |ψ_S(τ_0)⟩. (3.19)
From this formula, we obtain the evolution operator in the interaction picture,
U_I(τ, τ_0) = T exp[ −i ∫_{τ_0}^{τ} dτ′ H̃_I(τ′) ], (3.20)
where H̃_I is the interaction Hamiltonian H_I in the interaction picture. Defining

U_0(τ, τ_0) = T exp[ −i ∫_{τ_0}^{τ} dτ′ H_0(τ′) ], (3.21)

we define H̃_I as

H̃_I(τ) = U_0^†(τ, τ_0) H_I(τ) U_0(τ, τ_0). (3.22)
Now, we could use Feynman's disentangling theorem to show that

Theorem 3.

U_I(τ, τ_0) = U_0^†(τ, τ_0) U_H(τ, τ_0). (3.23)
Proof. Understand τ 2 as τ in the statement of Theorem 1. Call
A = −H 0 , B = −H I . (3.24)
Then we get the answer.
Thus, one could obtain the expectation value of operators by defining operators in the interaction picture Õ,

Õ(τ) = U_0^†(τ, τ_0) O U_0(τ, τ_0), (3.25)

so that we have

⟨ψ_I(τ)| Õ(τ) |ψ_I(τ)⟩ = ⟨ψ_S(τ)| O |ψ_S(τ)⟩. (3.26)

Now, say that we are interested in computing the expectation value of the Heisenberg operator O_H(τ) in the vacuum of the interacting theory defined in the far past τ_0,

⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩. (3.27)
Here, notice that we define the vacuum by the ground state of the Heisenberg Hamiltonian H(τ 0 ), which is equal to the Schrödinger Hamiltonian H S (τ 0 ). We have
⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩ = ⟨Ω_in(τ_0)| U_I^†(τ, τ_0) Õ U_I(τ, τ_0) |Ω_in(τ_0)⟩. (3.28)
The iε-prescription and the in-in formalism
The evolution operator in the interaction picture, U_I, could also help us prepare the interacting vacuum from the free theory vacuum in flat-space quantum field theory. This happens not only for the time-independent Hamiltonian, but also for the time-dependent Hamiltonian. We will discuss this point as follows.
Suppose the Hamiltonians H, H_0 and H_I are all time-independent; we have

exp(−iH(τ − τ_0)) |Ω_free⟩ = Σ_n exp(−iE_n(τ − τ_0)) |n⟩⟨n|Ω_free⟩ = exp(−iE_0(τ − τ_0)) |Ω_in⟩⟨Ω_in|Ω_free⟩ + Σ_{n≠0} exp(−iE_n(τ − τ_0)) |n⟩⟨n|Ω_free⟩. (3.29)
Now, we write |Ω_free⟩ for the free theory vacuum, the ground state of H_0, and |Ω_in⟩ for the interacting vacuum, the ground state of H. Here |n⟩ denotes the eigenstates of H, and n = 0 is the ground state. We introduce the following prescription,

τ → τ̃ = τ(1 − iε), (3.30)

where ε is a small positive number. Now we have

Σ_n exp(−iE_n(τ − τ_0)) |n⟩⟨n|Ω_free⟩ → Σ_n exp(−iE_n(τ̃ − τ̃_0)) |n⟩⟨n|Ω_free⟩ = Σ_n exp(εE_n τ_0) exp(iE_n τ_0 − iE_n τ̃) |n⟩⟨n|Ω_free⟩. (3.31)
Now we take the limit where τ 0 → −∞. In this case, the summation over states is dominated by the first term, namely the ground state contribution,
exp(−iH(τ − τ̃_0)) |Ω_free⟩ ≈ exp(−iE_0(τ − τ̃_0)) |Ω_in⟩⟨Ω_in|Ω_free⟩. (3.32)

From now on, we will not distinguish τ and τ̃ in notation. So we get the formula for the interacting vacuum,

exp(−iH(τ − τ_0)) |Ω_in⟩ ≈ exp(−iH(τ − τ_0)) |Ω_free⟩ / ⟨Ω_in|Ω_free⟩. (3.33)
Here, |Ω_in⟩ is normalized but |Ω_free⟩ is not. In the time-dependent case, it is usually claimed that similar things should happen for the time-dependent Hamiltonian,

T exp[ −i ∫_{τ_0}^{τ} H̃_I(τ′) dτ′ ] |Ω_in(τ_0)⟩ ≈ T exp[ −i ∫_{τ_0}^{τ} H̃_I(τ′) dτ′ ] |Ω_free(τ_0)⟩ / ⟨Ω_in(τ_0)|Ω_free(τ_0)⟩, (3.34)

and the same iε-prescription for the time coordinates is used. One could argue that this will approximately happen if there is no pole for the Hamiltonian around time τ_0, so the contribution of the whole time-ordered exponential is dominated roughly by H_0 for a sufficiently long period of time. Thus we have
⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩ = ⟨Ω_in(τ_0)| U_I^†(τ, τ_0) Õ U_I(τ, τ_0) |Ω_in(τ_0)⟩
= ⟨Ω_in(τ_0)| U_H^†(τ, τ_0) U_0(τ, τ_0) Õ U_0^†(τ, τ_0) U_H(τ, τ_0) |Ω_in(τ_0)⟩
≈ ⟨Ω_free(τ_0)| U_H^†(τ, τ_0) U_0(τ, τ_0) Õ U_0^†(τ, τ_0) U_H(τ, τ_0) |Ω_free(τ_0)⟩ / |⟨Ω_in(τ_0)|Ω_free(τ_0)⟩|²
= ⟨Ω_free(τ_0)| U_I^†(τ, τ_0) Õ U_I(τ, τ_0) |Ω_free(τ_0)⟩ / |⟨Ω_in(τ_0)|Ω_free(τ_0)⟩|². (3.35)

Note that since |Ω_in(τ_0)⟩ is normalized, we could take the identity operator O = 1. If we believe that U_H, U_I and U_0 are still unitary (although now we use the iε-prescription), the operator O in the other pictures, O_H and Õ, is also the identity. Now we get

|⟨Ω_in(τ_0)|Ω_free(τ_0)⟩|² = 1. (3.36)
Note that this does not mean that those two states are equal, since |Ω_free(τ_0)⟩ is not normalized. Finally, we arrive at the main formula used throughout the literature for computing inflationary perturbations,

⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩ = ⟨Ω_free(τ_0)| U_I^†(τ, τ_0) Õ U_I(τ, τ_0) |Ω_free(τ_0)⟩. (3.37)
The full methodology is called the in-in formalism in the literature of cosmology. Now we comment on this formalism briefly,
• The approximation

T exp[ −i ∫_{τ_0}^{τ} H̃_I(τ′) dτ′ ] |Ω_in(τ_0)⟩ ≈ T exp[ −i ∫_{τ_0}^{τ} H̃_I(τ′) dτ′ ] |Ω_free(τ_0)⟩ / ⟨Ω_in(τ_0)|Ω_free(τ_0)⟩, (3.38)

is presumably reasonable, since the integral is dominated by the early time τ_0 in the inflationary universe. However, this assumption is not as solid as in perturbative quantum field theory in flat space, where we have a mathematically rigorous statement, the Gell-Mann and Low theorem [88]. Further issues include the statement |⟨Ω_in(τ_0)|Ω_free(τ_0)⟩|² = 1, which requires unitarity, while the evolution is in fact not unitary due to the Wick-rotated time. We believe that the current iε-prescription treatment in cosmology is physically correct, but can we rigorously prove it? Furthermore, can we bound the error of the above approximation? These are still open problems. Some related insightful discussions include [78,80]. (A toy numerical illustration of the iε projection appears after this list of comments.)
• However, our quantum simulation algorithm does not rely on the in-in formalism and the perturbative expansion at all: it operates directly in the Heisenberg picture, and the calculation could be performed beyond the perturbative regime. As long as we know the interacting vacuum |Ω_in(τ_0)⟩ in the quantum circuit, we could directly evaluate the following expression based on Theorem 2,
⟨Ω_in(τ_0)| O_H(τ) |Ω_in(τ_0)⟩ = ⟨Ω_in(τ_0)| U_H^†(τ, τ_0) O_H(τ_0) U_H(τ, τ_0) |Ω_in(τ_0)⟩, (3.39)

by constructing the unitary evolution

U_H(τ, τ_0) = T exp[ −i ∫_{τ_0}^{τ} H_H(τ′) dτ′ ], (3.40)

and measuring the expectation value using, for instance, post-selection. Thus, we claim that our quantum simulation program could justify the correctness and bound the error of the in-in formalism, hence solving those open problems. Furthermore, since we could extend the algorithm to other cosmic phases (for instance, a bouncing universe), it could help us compute correlation functions numerically for geometries beyond inflation, where there is no manifest early-time dominance in the integral.
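As a toy illustration of the iε projection mechanism discussed above (a sketch only, not the field theory itself; all numbers are illustrative), a two-level system suffices: evolving the free vacuum with a slightly Wick-rotated time filters out everything but the interacting ground state:

```python
import numpy as np

H0 = np.diag([0.0, 1.0])                     # "free" Hamiltonian (toy)
g = 0.3                                      # coupling (illustrative)
H = H0 + g * np.array([[0.0, 1.0], [1.0, 0.0]])

evals, evecs = np.linalg.eigh(H)
omega_in = evecs[:, 0]                       # interacting ground state
omega_free = np.array([1.0, 0.0])            # free vacuum |Omega_free>

# exp(-i H T (1 - i eps)) via the spectral decomposition of H
T, eps = 200.0, 0.05
U = evecs @ np.diag(np.exp(-1j * evals * T * (1 - 1j * eps))) @ evecs.T
state = U @ omega_free
state = state / np.linalg.norm(state)        # normalization removes exp(-i E_0 ...)

print(abs(np.vdot(omega_in, state)))         # ~ 1: projected onto |Omega_in>
```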
Experimental observables
Now we give a brief discussion of how the above observables are connected to experiments. Usually, in the experiments, we study correlation functions in momentum space. For a given operator O(τ, x), the Fourier transform is defined as

O_k(τ) = ∫ d³x O(τ, x) e^{−ik·x}, O(τ, x) = ∫ (d³k/(2π)³) O_k(τ) e^{ik·x}. (3.41)
For the two-point function, in the case of cosmology, we know that it is invariant under translation in the following sense,
⟨O(τ, x) O(τ, y)⟩ ≡ f_O(x − y). (3.42)

Then the two-point function looks like

⟨O_{k_1} O_{k_2}⟩ = ∫ d³x d³y f_O(x − y) e^{−ik_1·x − ik_2·y} = (2π)³ δ³(k_1 + k_2) ∫ d³u f_O(u) e^{−ik_1·u}. (3.43)
We could define the power spectrum P_O via

⟨O_{k_1} O_{k_2}⟩ = (2π)³ δ³(k_1 + k_2) (2π²/k_1³) P_O(k_1), P_O(k) = (k³/2π²) ∫ d³u f_O(u) e^{−ik·u}. (3.44)
For the curvature perturbation O = ζ, we could first compute the correlation function in coordinate space to obtain the function f_ζ using quantum simulation, and then perform the Fourier transform above to compute the power spectrum (a numerical sketch of this classical post-processing step appears at the end of this subsection). Observationally, the power spectrum P_ζ could be determined from the CMB map or the LSS, specifically at the time of "horizon exit" where k = aH. At this time, quantum perturbations decohere and become statistical perturbations. The power spectrum then satisfies P_ζ(k⃗) = P_ζ(k), becoming purely a function of the momentum norm (the wavenumber). To make further contact with experimental observables, we could define transfer functions that keep track of the cosmic evolution after horizon exit. For the CMB, the angular power spectrum of CMB temperature fluctuations C_ℓ could be written as an integral of the primordial power spectrum P_ζ. For the LSS, the late-time power spectrum of dark matter density fluctuations is proportional to P_ζ with certain transfer functions. For further knowledge, see the lecture notes [19] and references therein. Similarly, one could consider three-point functions. The translational invariance of the three-point function in coordinate space ensures that in momentum space there is a factor given by the delta function. Moreover, we could define the bispectrum F of the curvature perturbation ζ,
⟨ζ_{k_1} ζ_{k_2} ζ_{k_3}⟩ = (2π)⁷ δ³(k_1 + k_2 + k_3) (P_ζ² / (k_1² k_2² k_3²)) F(k_1/k_3, k_2/k_3), (3.45)
which is directly related to the non-Gaussianities of the corresponding inflationary model. Non-Gaussianities provide important primordial information about particles and interactions in the early universe, and they are directly related to observations.
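The classical post-processing step described above, Fourier transforming a measured coordinate-space correlator into a power spectrum, can be sketched as follows (the Gaussian f_O here is a stand-in test function with a known transform, not a physical correlator; for a radial f_O the 3D Fourier transform reduces to a 1D integral):

```python
import numpy as np

u = np.linspace(1e-4, 20.0, 4000)
f = np.exp(-u**2 / 2.0)          # stand-in for a measured radial f_O(u)

def power_spectrum(k, u, f):
    # P_O(k) = k^3/(2 pi^2) * Int d^3u f_O(u) e^{-i k.u}
    #        = k^3/(2 pi^2) * 4 pi Int du u^2 f_O(u) sin(ku)/(ku)
    integrand = 4.0 * np.pi * u**2 * f * np.sin(k * u) / (k * u)
    trap = np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(u))
    return k**3 / (2.0 * np.pi**2) * trap

k = 1.3
numeric = power_spectrum(k, u, f)
# Analytic Fourier transform of the Gaussian test function, for comparison
analytic = k**3 / (2.0 * np.pi**2) * (2.0 * np.pi) ** 1.5 * np.exp(-k**2 / 2.0)
print(numeric, analytic)         # agree to high accuracy
```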
Examples
Following our previous discussions, we present the results directly without derivation, referring to [14]. In the general single-field inflationary models discussed before, the leading-order power spectrum is computed directly from the free theory,

P_ζ = H² / (8π² ε c_s). (3.46)
The leading-order bispectrum is given by the tree-level diagram using the in-in formalism.
F = (1/c_s² − 1 − 2λ/Σ) · 3k_1 k_2 k_3 / (2(k_1 + k_2 + k_3)³)
+ (1/c_s² − 1) × [ −(k_1² k_2² + k_1² k_3² + k_2² k_3²) / (k_1 k_2 k_3 (k_1 + k_2 + k_3))
+ (k_1² k_2³ + k_1² k_3³ + k_2² k_3³ + k_2² k_1³ + k_3² k_1³ + k_3² k_2³) / (2 k_1 k_2 k_3 (k_1 + k_2 + k_3)²)
+ (k_1³ + k_2³ + k_3³) / (8 k_1 k_2 k_3) ], (3.47)

with the parameters λ and Σ defined above. Note that the result is perturbative in the slow-roll parameter ε.
Further comments
At the current stage, we wish to make some further comments.
• As we mentioned before, in the above sections we mostly describe the perturbative quantum field theory formalism theoretically. The progress and problems we have mentioned about this theoretical prescription provide us with motivations to perform quantum simulation on future quantum devices, and to benchmark quantum simulation algorithms and devices using the known answers from quantum field theory calculations. However, when we are doing quantum simulation, we are not using the perturbative method in the interaction picture at all. In fact, we don't even need to introduce the interaction picture; we could just focus on the Heisenberg picture when doing quantum simulation.
• Furthermore, we wish to mention that when doing quantum simulation, in this paper, we are quoting the perturbative action as an example. However, one could even imagine simulating the original action beyond the slow-roll expansion. Extra treatment is needed in this process to separate the classical and quantum parts of the action and, furthermore, to implement them in the quantum device. It could potentially be an interesting generalization of our current work.
• As we have described, the Hamiltonian we are considering is intrinsically time-dependent even in the Heisenberg picture, since the spacetime is exponentially expanding with time. Thus, it might be interesting to apply methods and tricks from the study of quantum open systems and quantum thermodynamics in quantum information science, for instance, the Lindblad equation and quantum resource theory. Moreover, the precise formulation of open systems in quantum field theory has yet to be formalized [89]. It might also be interesting to make use of Floquet dynamics in quantum many-body physics to study conceptual problems in cosmology and analog quantum simulation, since the time-dependent dynamics of Floquet systems, as periodically driven open systems, might be similar to the cosmic evolution of cyclic or bouncing cosmologies (see a related work [90]). Furthermore, cyclic or bouncing theories might be even more suitable targets for quantum algorithms, since they involve strongly-coupled processes, which are relatively harder to understand than perturbative dynamics.
Designing the algorithm

The original Jordan-Lee-Preskill algorithm
The Jordan-Lee-Preskill algorithm [48,49] could be regarded as a generic paradigm for simulating quantum field theories on a quantum computer. Here we briefly review the original version of the Jordan-Lee-Preskill algorithm; our generalization will be discussed later on. The original algorithm is designed for simulating the λφ⁴ scalar quantum field theory in general spacetime dimensions, where the lattice version of the Hamiltonian is

H_t = b³ Σ_{x∈Ω³} [ (1/2) π_φ²(x) + (1/2)(∇_i φ(x))² + (1/2) m_0² φ(x)² + (λ_0/4!) φ(x)⁴ ], (4.1)

where the coupling λ_0 can either be weak or strong. Although the λφ⁴ scalar quantum field theory is now defined on a lattice, the value of the scalar field φ(x) at each site is continuous and generally unbounded, while a digital quantum computer is only capable of managing a finite number of qubits. The idea of [48,49] is to bound the field value by φ_max with discretization step size δφ at each site, i.e.,
{−φ max , −φ max + δφ, . . . , φ max − δφ, φ max } ,(4.2)
such that the original Hilbert space is truncated to be finite-dimensional, allowing one to encode the scalar quantum field theory into a quantum computer with finitely many qubits.

A key result of [48,49] is to determine the truncation of the Hilbert space and the number of qubits per site by the scattering energy E. To simulate the quantum field scattering process, initial scattering states should be excited. As described in [49], this task can be fulfilled by preparing the initial vacuum state of the free theory |Ω_free⟩, followed by exciting wave packets (represented by |ψ⟩ = a_ψ^† |Ω_free⟩) on top of the prepared vacuum, where exciting the wave packets can be realized with an auxiliary Hamiltonian and an additional ancilla qubit:
H_ψ = a_ψ^† ⊗ |1⟩⟨0| + a_ψ ⊗ |0⟩⟨1|, e^{−iH_ψ π/2} |Ω_free⟩ ⊗ |0⟩ = −i |ψ⟩ ⊗ |1⟩. (4.3)

Adiabatically turning on the interaction, the wave packets evolve in time to a final time, which is precisely the scattering process. Simulating the time evolution is a well-known task; [49] does this by simply splitting the Hamiltonian into the field and field-momentum pieces and then applying the Trotter product formula (briefly introduced in subsection 4.6, with a toy illustration below). After the completion of the whole time evolution of the scattering process, one can measure any physical observable relevant to the simulation goal.
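The Trotter splitting just mentioned can be illustrated with a small product-formula check (random Hermitian matrices stand in for the field and field-momentum pieces here; this is a sketch, not the actual lattice Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
A = rng.normal(size=(d, d)); A = (A + A.T) / 2   # "field" piece (toy)
B = rng.normal(size=(d, d)); B = (B + B.T) / 2   # "field momentum" piece (toy)

def U(H, t):
    # exp(-i H t) through the spectral decomposition of a Hermitian H
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

t = 1.0
exact = U(A + B, t)
for r in (1, 10, 100):
    step = U(A, t / r) @ U(B, t / r)
    error = np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)
    print(r, error)              # first-order Trotter: error shrinks like 1/r
```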
As a summary of the above review, the original version of the algorithm consists of the following prescriptions:
Algorithm 4 (Jordan-Lee-Preskill). The original Jordan-Lee-Preskill algorithm is an algorithm for simulating the λφ⁴ scalar quantum field theory in general spacetime dimensions at both weak and strong couplings. It is given by the following steps.
• Encoding. We encode the lattice field theory Hilbert space into the quantum computer. The truncation of the original Hilbert space and the number of qubits encoded per site are determined by the scattering energy E.
• Initial state preparation. We construct the initial state using an algorithm proposed by Kitaev and Webb [91] for constructing multivariate Gaussian superpositions. The algorithm could be improved by some other classical methods [92,93]. (A minimal sketch of the target Gaussian amplitudes appears after this list.)
• Exciting the wave packets in the free theory. This part is done by introducing ancilla qubits.
• Adiabatic state preparation. We adiabatically turn on the interaction to construct the wave packet in the interacting theory. The speed of adiabatic state preparation should be slow enough to make sure the resulting wave packet is still a reasonable single-particle wave packet.
• Trotter simulation. We use the product formula to simulate the time evolution e^{−iHt} by splitting the Hamiltonian into the field piece and the field momentum piece.

• Measurement. After the time evolution, we compute the correlation functions of interest using the quantum circuit. We could measure field operators, number operators, the stress tensor, and other quantities of interest.
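The following is a minimal classical sketch of the state the Kitaev-Webb step targets (the quadratic form Ω below is an illustrative stand-in; in the actual algorithm it is fixed by the free-theory two-point function, and the state is prepared with quantum gates rather than tabulated classically): a multivariate Gaussian superposition over discretized field configurations of a tiny two-site lattice.

```python
import numpy as np

Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])               # illustrative quadratic form

grid = np.linspace(-4.0, 4.0, 32)            # 32 field values per site (5 qubits)
z1, z2 = np.meshgrid(grid, grid, indexing="ij")
configs = np.stack([z1.ravel(), z2.ravel()], axis=1)   # all (zeta_1, zeta_2)

quad = np.einsum("ni,ij,nj->n", configs, Omega, configs)
psi = np.exp(-quad / 2.0)                    # psi ~ exp(-zeta^T Omega zeta / 2)
psi /= np.linalg.norm(psi)                   # 1024 amplitudes on 10 qubits

# The resulting field covariance should match <zeta zeta^T> = Omega^{-1} / 2
cov = np.einsum("n,ni,nj->ij", psi**2, configs, configs)
print(cov)
print(np.linalg.inv(Omega) / 2.0)            # agree up to discretization effects
```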
Our generalization of Jordan-Lee-Preskill
The original Jordan-Lee-Preskill algorithm we have described above needs further modifications in order to be applicable to cosmic inflation. Compared to the original scattering process, the evolution process for computing cosmic correlation functions has the following features.
• In the initial state preparation, we are not setting the scattering energy to be E since we are not doing exactly the scattering experiment. Instead, in cosmic inflation, we are able to set the energy scale of the inflationary perturbation theory Λ = 1/b. We could use Λ instead to bound the energy scale, and then the field momentum fluctuation, which will restrict the dimension of the truncated Hilbert space. A similar analysis in inflationary physics could help us bound the field configuration itself.
• In the scattering experiment in the flat space, the Hamiltonian in the λφ 4 theory is static in the Heisenberg picture and also the Schrödinger picture 10 . However, in the cosmic perturbation theory, the Hamiltonian should be time-dependent in general in both pictures. So the field basis we are using should be generically different for different times. In order to encode the time-dependent Hamiltonian and simulate the Heisenberg evolution
U_H(τ_end, τ_0) = T exp[ −i ∫_{τ_0}^{τ_end} H_H(τ′) dτ′ ], (4.4)

we need to figure out the field basis transformation for different times. This could be realized by computing the Green's identity in the free theory. We discuss this transformation as a reduced case of the HKLL formula [65,66], which usually appears in AdS and involves both space and time.

• We still need the Kitaev-Webb algorithm to prepare the free theory vacuum, but we don't need to excite the wave packets, since our cosmologically-motivated correlation functions are evaluated in the vacuum state.
• In the adiabatic state preparation process, we start from the free theory at conformal time τ 0 , and then we use adiabatic state preparation to construct the interacting vacuum at τ 0 . Since the zero-momentum diagonal mode states and the ground state are degenerate in the free theory, as we have discussed before, extra treatment might be needed to split those states in the gapless regime. This treatment is called the ground state projection in the following discussions.
• Then we use the Trotter simulation to evolve the time-dependent Heisenberg evolution. Note that due to possible mixings between quantum fields and field momenta we have in the Lagrangian, we generalize the original calculation in the Jordan-Lee-Preskill to the three-party product formula case, making use of results from the paper [94] and references therein.
• Cosmic perturbation theories have certain measurement tasks that could be directly related to experimental observations. Here, we measure cosmic correlations
⟨Ω_in(τ_0)| O_H(τ_end) |Ω_in(τ_0)⟩ = ⟨Ω_in(τ_0)| U_H^†(τ_end, τ_0) O_H(τ_0) U_H(τ_end, τ_0) |Ω_in(τ_0)⟩, (4.5)
10 However, in the interaction picture, the Hamiltonian is time-dependent.
instead of other operators that are discussed in the original Jordan-Lee-Preskill algorithm.
Thus, we propose the following algorithm that is applicable to compute cosmic correlation functions.
Algorithm 5 (Jordan-Lee-Preskill for cosmic inflation). We consider the following generalization beyond the original Jordan-Lee-Preskill algorithm, specifically applicable for cosmic correlation functions at general couplings.
• Encoding. We use the field basis at time τ_0. The range and precision of the field basis are truncated based on the EFT energy scale Λ = 1/b and some other inflationary physics. Furthermore, to encode the time-dependent Hamiltonian in the Heisenberg picture, we need the transformation from the time τ_0 to a general time τ. The transformation is given by the Green's identity, as a special reduced case of the HKLL formula in the 3+1 dimensional de Sitter space.

• Initial state preparation. We still use the Kitaev-Webb algorithm and its improvements to prepare the Gaussian vacuum state |Ω_free(τ_0)⟩. The variance matrix is given by the two-point function of the free theory.

• Adiabatic state preparation. We use adiabatic state preparation to prepare the interacting vacuum |Ω_in(τ_0)⟩. Extra treatment, namely the ground state projection, is needed to filter out the zero-momentum states of the diagonal modes.

• Trotter simulation. We use the Trotter algorithm to simulate the time-dependent Heisenberg time evolution. Note that in both this evolution and the adiabatic state preparation, we might face mixing between fields and field momenta in the Hamiltonian; thus, we will use the three-party product formula to do the simulation.

• Measurement. We measure ⟨Ω_in(τ_0)| O_H(τ_end) |Ω_in(τ_0)⟩ after the evolution by, for instance, standard algorithms like post-selection.
More details in the above steps will be discussed in the following subsections.
Encoding from the HKLL formula
We start with the encoding problem in our quantum simulation program. At the time τ_0, we define our field basis

ζ̂(τ_0, x) |ζ(τ_0, x)⟩ = ζ(τ_0, x) |ζ(τ_0, x)⟩. (4.6)
On the left-hand side, ζ̂(τ_0, x) is understood as the curvature perturbation operator, and the state vector |ζ(τ_0, x)⟩ is the corresponding eigenvector. The field value ζ(τ_0, x) could be arbitrary for fixed position x and time τ_0, so for fixed x, the local Hilbert space dimension is infinite. Similarly, we define

ζ̂(τ, x) |ζ(τ, x)⟩ = ζ(τ, x) |ζ(τ, x)⟩, (4.7)

for an arbitrary time τ. (In most parts of the paper, we will not distinguish operators from their classical counterparts by the hat notation.) Note that for different time slices, the state vectors will be different in general, since the field operator ζ̂(τ, x) and its eigenspace are different. We will use the field basis |ζ(τ_0, x)⟩ to encode our Hamiltonian; namely, we will represent all terms in our Hamiltonian in the above basis. For terms containing field momentum operators, the matrix elements could be determined by the canonical commutation relation.

However, the above treatment only works for the initial time slice τ_0. The reason is that our Heisenberg Hamiltonian is manifestly time-dependent. So the question is: can we easily determine the field operator and the field momentum operator at time τ from those at τ_0?
The answer is: yes! In fact, one could naively expect this to happen from the field equation. Since our encoding is based on the free system, our field equation is linear. (One might worry that the existence of couplings could change the construction of the encoding basis. In fact, we don't really need to worry, since this is simply a basis choice; for instance, one could use the harmonic oscillator basis to encode the strongly-coupled λφ⁴ theory in flat space, although at strong coupling the harmonic oscillator of the free theory does not exist. The time evolution of the basis here illustrates how we define our time-dependent quantum field theory.) A naive choice is to use the Heisenberg evolution operator
ζ (τ ) = U † H (τ, τ 0 ) ζ (τ 0 ) U H (τ, τ 0 ) . (4.8)
But the evolution operator U_H itself contains operators at times later than τ_0. Instead, we solve the dynamical equation, and the answer is expected to be linear:

ζ(τ, x) = ∫_{∂_{τ_0}LC⁻(τ,x)} d³y ( K_ζ(τ, x; τ_0, y) ζ(τ_0, y) + K_π(τ, x; τ_0, y) π_ζ(τ_0, y) ). (4.9)
In the above equation, ζ and π_ζ are understood as operators, and the integration kernels K_ζ and K_π are scalar functions. The kernels are supported on the set ∂_{τ_0}LC⁻(τ, x), defined as the part of the time slice τ_0 intersecting the past light cone starting from the point (τ, x), as determined by the causal structure of the theory (see Figure 1).

Figure 1. The encoding process from the past light cone. Here, we use a two-dimensional plot to illustrate past light cones for different τ at a fixed x, intersected with the time slice τ_0. A one-dimensional real line represents the direction of x, but in the real world, we have three spatial dimensions.

Since our metric is manifestly conformally flat when we are using the conformal time, the light cone structures are the same as in flat space. Then we could directly write down
∂_{τ_0}LC⁻(τ, x) = { (τ_0, x̃) ∈ inflationary spacetime : |x̃ − x| < c_s |τ − τ_0| }. (4.10)
Here, the notation |…| denotes the Euclidean distance. Note that here we also consider the non-trivial sound speed c_s. So how do we determine the kernel? In our case, it is purely a PDE problem before we promote our variables to operators. In fact, it could be easily solved by Green's identity. Generally speaking, if we consider the relativistic theory of a Klein-Gordon scalar φ in 3+1 dimensions, the Green's function G will satisfy the following linear equation,

∇_μ ∇^μ G(τ, x; τ_0, y) = δ³(x − y) δ(τ − τ_0) / √(−g), (4.11)
where the covariant derivative ∇ µ is acting with respect to (τ 0 , y). The corresponding Green's identity then reads,
φ(τ, x) = ∫_{∂_{τ_0}LC⁻(τ,x)} d³y √|h| n^μ ( φ(τ_0, y) ∇_μ G(τ, x; τ_0, y) − G(τ, x; τ_0, y) ∇_μ φ(τ_0, y) ), (4.12)

where h and n are the induced metric and the unit normal vector of the surface ∂_{τ_0}LC⁻(τ, x). In our case, the equation of motion for the Green's function is modified to

(2ε / (c_s² H² τ_0³)) ( c_s² τ_0 ∂_y² + 2∂_{τ_0} − τ_0 ∂_{τ_0}² ) G(τ, x; τ_0, y) = δ³(x − y) δ(τ − τ_0). (4.13)
The corresponding Green's identity for our purpose is
ζ(τ, x) = (2ε / (c_s² H² τ_0²)) ∫_{∂_{τ_0}LC⁻(τ,x)} d³y ( G(τ, x; τ_0, y) ∂_{τ_0} ζ(τ_0, y) − ζ(τ_0, y) ∂_{τ_0} G(τ, x; τ_0, y) ). (4.14)
Thus the kernels are given by
K_ζ(τ, x; τ_0, y) = −(2ε / (c_s² H² τ_0²)) ∂_{τ_0} G(τ, x; τ_0, y), K_π(τ, x; τ_0, y) = G(τ, x; τ_0, y). (4.15)
In our case, since we already extract the past light cone, we could take the Green's function directly to be the Wightman's two-point function
G(τ, x; τ_0, y) = ∫ (d³k/(2π)³) v_k(τ) v_k^*(τ_0) e^{ik·(x−y)}. (4.16)
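A numerical sketch for evaluating this mode integral is given below (all parameter values are illustrative; the oscillatory UV tail of the integrand is only conditionally convergent, so we add a soft exponential damping by hand, in the spirit of the iε-prescription):

```python
import numpy as np

H, eps_sr, cs = 1.0, 0.01, 1.0    # Hubble rate, slow-roll epsilon, sound speed

def v(k, tau):
    # Mode function v_k(tau) of eq. (2.16)
    return (H / np.sqrt(4.0 * eps_sr * cs * k**3)
            * (1.0 + 1j * k * cs * tau) * np.exp(-1j * k * cs * tau))

def G(tau, tau0, r, kmax=200.0, n=200001, damping=0.05):
    # Radial reduction of eq. (4.16):
    # G = (1/2 pi^2) Int dk k^2 v_k(tau) v_k^*(tau0) sin(kr)/(kr)
    k = np.linspace(1e-3, kmax, n)
    soft = np.exp(-damping * k)   # soft UV regulator (by-hand assumption)
    f = k**2 * v(k, tau) * np.conj(v(k, tau0)) * np.sin(k * r) / (k * r) * soft
    return np.sum((f[:-1] + f[1:]) / 2.0 * np.diff(k)) / (2.0 * np.pi**2)

print(G(-1.0, -5.0, 2.0))         # complex kernel value at separation r = 2
```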
Taking a derivative with respect to the time coordinate, we get the corresponding formula for the field momentum. We will use the above formulas to encode the Hamiltonian. What is the nature of the above encoding? In fact, it could be regarded as a reduced version of the HKLL formula in the study of the AdS/CFT correspondence (see the paper [65] and the review [66]). In AdS/CFT, a typical problem is to determine the bulk data from the boundary dynamics. The HKLL formula describes how we write the bulk operator in terms of boundary operators in the semiclassical theory,

O_bulk = HKLL kernel × O_boundary, (4.17)

obtained from solving PDEs.

Figure 2. The AdS-Rindler bulk reconstruction. Here the bulk operators could be reconstructed from the boundary using the HKLL reconstruction formula. This is the standard example mentioned in, for instance, [66].
from solving PDEs. In the AdS case, the situation is more complicated since space and time are mixed: the boundary contains the time direction. The region of the bulk accessible from a given range of the boundary is called the causal wedge. Figure 2 illustrates a standard example of causal wedge reconstruction in the case of AdS-Rindler in three spacetime dimensions [95,96]. In our cosmological case, a small difference compared to AdS/CFT is that we now understand the time direction as the "boundary", so we only need the past light cone instead of the full causal wedge. See [42] for another discussion of the HKLL formula and bulk reconstruction. It is remarkable that we are able to use the HKLL formula for a purpose beyond the usual study of AdS/CFT; this seems to indicate that studying the nature of spacetime might be closely related to quantum simulation in quantum gravity. Furthermore, it is worth noticing that we use the word "encoding" in a completely different circumstance: here it means that we load the data defining the theory into the qubits of a quantum computer, while in the AdS/CFT literature "encoding" usually refers to the encoding map of a quantum error-correcting code [95-97]. Moreover, the encoding we discuss only requires the free theory; hence we are consistent with the semiclassical description and content with "causal wedge reconstruction".
It will be interesting to study how the encoding of different time slices changes when we turn on the coupling. From the perturbative point of view, the encoding will receive tree or loop corrections in the Witten diagrams. Non-perturbatively, causal wedge reconstruction might be replaced by entanglement wedge reconstruction, and the HKLL formula might be replaced by the modular Hamiltonian and the modular flow. It might be interesting to see how the story goes in both the AdS and dS cases; there is an insightful recent discussion about holographic scattering and the entanglement wedge [98]. Finally, when studying the scattering problem in AdS on a quantum computer, one might consider using the honest HKLL formula for the encoding since, in AdS, the space and time directions are mixed. Moreover, one might consider a discretized version of our de Sitter encoding formula on the lattice, which should not be very hard to obtain since we currently only care about the free theory.
We end this subsection by commenting on the complexity needed to perform the encoding. Obviously, the number of terms we need in the encoding map is proportional to the number of sites contained in the past light cone region on the time slice τ_0. So we have the complexity estimate
\text{Encoding Complexity} = O\!\left( \left( \frac{c_s |\tau_{\rm end} - \tau_0|}{b} \right)^3 \right) = O\!\left( \left( \frac{c_s |\tau_0| (1 - e^{-N})}{b} \right)^3 \right) . (4.18)
Here N is the e-folding number during inflation.
Encoding bounds from the EFT scale
Here, we continue our discussion of the encoding. The ideal study of a quantum field theory with an infinite-dimensional Hilbert space is not possible on a digital quantum computer, so we have to discretize our field basis and make further truncations. Let us consider the following prescription of truncations. We want to bound the range of the curvature perturbation ζ at the time τ_0 by ζ_max, and we wish the step size (precision) of the discretization of the field value to be δζ. Namely, our choices of field values on each site are exactly those of eq. (4.2), with φ replaced by ζ. As a result, the number of qubits is estimated as
n_b \sim \log \frac{\zeta_{\max}}{\delta\zeta} . (4.19)
How do we choose the values of δζ and ζ_max? Intuitively, if the field fluctuation ζ is bounded probabilistically (for instance, in terms of expectation values), then we cannot make much error if we choose ζ_max to be comparable to the field fluctuation bound. This intuition is explicitly proved in the original paper of Jordan, Lee, and Preskill [48,49]; we call it the "Jordan-Lee-Preskill bound". In fact, for a truncation error ε_JLP, the probability of the field values appearing outside the truncation window is controlled by the Chebyshev inequality for all possible probability distributions. For all sites, the total probability outside the truncation window, p_total, is controlled by the union bound V p_single, where V is the total number of sites and p_single is the probability of making an error on a single site.
Here, we just quote the result of the Jordan-Lee-Preskill bound without proving it. Say that we truncate the field to obtain the state |ψ_JLP⟩ and we introduce the error ε_JLP such that ⟨ψ_true|ψ_JLP⟩ = 1 − ε_JLP, where |ψ_true⟩ is the actual state. We have
\zeta_{\max} \sim \sqrt{ \frac{V}{\epsilon_{\rm JLP}} \langle \psi_{\rm true} | \zeta^2 | \psi_{\rm true} \rangle } . (4.20)
The square root of the prefactor comes from the quadratic relation in the Chebyshev inequality.
Then, how could we bound the precision? From the definition of the canonical commutation relation, it is easy to notice that
\pi_{\max,\zeta} \sim \frac{1}{b^3\, \delta\zeta} . (4.21)
Applying the same Jordan-Lee-Preskill bound to the field momentum, we get
\delta\zeta \sim \frac{1}{b^3} \sqrt{ \frac{\epsilon_{\rm JLP}}{V} \frac{1}{\langle \psi_{\rm true} | \pi_\zeta^2 | \psi_{\rm true} \rangle} } , \qquad n_b \sim \log\left( \frac{V b^3}{\epsilon_{\rm JLP}} \sqrt{ \langle \psi_{\rm true} | \pi_\zeta^2 | \psi_{\rm true} \rangle \langle \psi_{\rm true} | \zeta^2 | \psi_{\rm true} \rangle } \right) . (4.22)
So how could we bound ⟨ζ²⟩ and ⟨π_ζ²⟩? Now we need some knowledge about cosmology. We start from π_ζ. Since we already know that our ultraviolet cutoff is 1/b ∼ √H, the cutoff must work for a single term in the Hamiltonian, and the total energy then bounds the field momentum fluctuation. Note that here a_0 is the initial scale factor. The way of bounding quantities using the total energy is the same as what we did for the original Jordan-Lee-Preskill scattering experiment in flat space. Furthermore, the bound can serve as a general bound valid for all couplings.
However, how could we bound ζ? At the late time τ_end, we know experimentally that

\langle \zeta^2 \rangle \sim 10^{-10} , (4.25)

which is purely an experimental input. At early times the situation may not be the same, and we cannot naively use the late-time bound. In the free theory, the inflaton is massless, so we cannot bound ⟨ζ²⟩ directly from the Hamiltonian. However, we can estimate the curvature perturbation directly in the free theory. We have
\left| \langle \zeta(\tau_0, \mathbf{x}) \zeta(\tau_0, \mathbf{y}) \rangle \right| = \left| \int \frac{d^3k}{(2\pi)^3}\, v_k v_k^*\, e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})} \right| \le \int \frac{d^3k}{(2\pi)^3}\, v_k v_k^* = \frac{4\pi}{(2\pi)^3} \int k^2\, v_k v_k^*\, dk \sim \frac{4\pi}{(2\pi)^3} \int_{k_{\rm IR}}^{k_{\rm UV}} k^2\, v_k v_k^*\, dk \sim \frac{H^2 \left[ c_s^2 \tau_0^2 (k_{\rm UV}^2 - k_{\rm IR}^2) + 2\log \frac{k_{\rm UV}}{k_{\rm IR}} \right]}{16\pi^2 \epsilon c_s} \sim \frac{H^2 \left[ c_s^2 \tau_0^2 / b^2 + 2\log(L/b) \right]}{16\pi^2 \epsilon c_s} . (4.26)
In the last step, we take the cutoffs k_UV = 1/b and k_IR = 1/L. One can see that the above result depends at most logarithmically on the system size; thus, the dimension of the local Hilbert space should in general be at most polynomial in the system size. The above result holds only for the free theory. But what about the interacting theory at the time τ_0? In general, the result should not change drastically if we are in the perturbative regime. The leading correction to the above two-point function is the one-loop diagram shown in Figure 3, which is of order ε² if we call the coupling ε (footnote 13). The situation might change at strong coupling. If the system approaches a critical point with a second-order phase transition, the two-point function of the curvature perturbation should scale as a power law of the distance with some scaling dimension, which is not a drastic dependence for our quantum computer. But what happens in general, in the middle of the renormalization group flow? Although it seems unlikely that the field fluctuation grows exponentially with the system size, this is a non-perturbative problem, and without theoretical control we can only run numerical trials. In fact, assuming the field configuration is continuous, when constructing the state and measuring the field profile we could actually get indications that the size of the local Hilbert space is out of reach. Such trials will be helpful for determining an honest value of the field range up to a given error, with certain convergence conditions. We leave this topic for future research, especially for people with quantum devices and clean qubits.
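As a back-of-the-envelope illustration of the qubit counting, the sketch below evaluates the free-theory estimate (4.26) for ⟨ζ²⟩ and then the per-site qubit number via eqs. (4.19)-(4.22). The parameter values are illustrative placeholders, the logarithm is taken base 2 to count qubits, and ⟨π_ζ²⟩ is left as an input since the text bounds it separately via the total energy.

```python
import numpy as np

# Free-theory bounds (4.19)-(4.22) and the estimate (4.26), with toy inputs.
H, cs, eps = 1.0, 0.8, 0.01
tau0 = -10.0
b, L = 0.1, 10.0                      # UV cutoff 1/b, IR cutoff 1/L
V = int((L / b)**3)                   # number of lattice sites
eps_JLP = 1e-3                        # allowed truncation error

# <zeta^2> from the free-theory estimate (4.26)
zeta2 = H**2 * (cs**2 * tau0**2 / b**2 + 2 * np.log(L / b)) / (16 * np.pi**2 * eps * cs)

# Chebyshev-type truncation window (4.20)
zeta_max = np.sqrt(V / eps_JLP * zeta2)

# <pi_zeta^2> is a placeholder: the text bounds it via the total energy
pi2 = 1.0
n_b = np.log2(V * b**3 / eps_JLP * np.sqrt(pi2 * zeta2))   # qubits per site, (4.22)

print(f"V = {V}, <zeta^2> ~ {zeta2:.3g}, zeta_max ~ {zeta_max:.3g}, "
      f"qubits per site ~ {n_b:.1f}")
```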
Initial state preparation by Kitaev and Webb
After addressing the encoding problem, we now discuss the initial state preparation. At the beginning, we wish to construct |Ω_free(τ_0)⟩ in the quantum computer. In the free theory, the wave function is Gaussian in the field basis, with the probability distribution
p(\vec\zeta) = \frac{1}{(2\pi)^{V/2} |M|^{1/2}} \exp\left( -\frac{1}{2} \vec\zeta \cdot M^{-1} \cdot \vec\zeta \right) . (4.27)
Here, we define \vec\zeta = (\zeta(x_1), \zeta(x_2), \cdots, \zeta(x_V)), (4.28) and the matrix M is the two-point function
M ij = ζ(x i )ζ(x j ) = G(τ 0 , x i ; τ 0 , x j ) . (4.29)
The square root of this probability distribution defines the components of the state in the field basis. Thus, the problem of state preparation becomes that of preparing a Gaussian distribution over multiple variables. This problem is discussed and solved in [48,49], and here we describe the solution. We can directly use the Kitaev-Webb algorithm [91] to prepare the vacuum state: one first prepares a Gaussian distribution in diagonal form, and then transforms to the desired basis. The main time cost of the algorithm is the singular value decomposition of the inverse covariance matrix appearing in the Gaussian distribution, which can be improved using the classical algorithms of [92,93]. With the covariance matrix given by the known two-point function, the complexity scales as O(V^{2.376}), which is bounded by a polynomial in the system size.
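For intuition, a tiny classical reference computation of the Gaussian amplitudes in eq. (4.27) is sketched below; the Kitaev-Webb algorithm prepares the same amplitudes on a quantum register with polynomial cost. The covariance matrix here is a toy stand-in for the actual two-point function (4.29), and the grid of field values per site follows the discretization of eq. (4.2).

```python
import numpy as np

# Classical reference for the Gaussian vacuum (4.27) on a tiny lattice.
Vs = 3                                     # number of lattice sites
grid = np.linspace(-4.0, 4.0, 9)           # discretized field values per site

# toy covariance with nearest-neighbour correlations (stand-in for (4.29))
M = 0.5 * np.eye(Vs) + 0.2 * (np.eye(Vs, k=1) + np.eye(Vs, k=-1))
Minv = np.linalg.inv(M)

# enumerate all field configurations and evaluate sqrt(p) on each
configs = np.stack(np.meshgrid(*([grid] * Vs), indexing="ij"), axis=-1).reshape(-1, Vs)
logp = -0.5 * np.einsum("ci,ij,cj->c", configs, Minv, configs)
amps = np.exp(0.5 * logp)
amps /= np.linalg.norm(amps)               # normalized state vector in the field basis

print(amps.shape, np.sum(amps**2))         # (9**3,) and 1.0
```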
There might exist alternative methods for constructing Gaussian states. For instance, [99] describes another algorithm for Gaussian state preparation related to one-dimensional quantum systems, and [100] describes a variational algorithm for preparing Gaussian states. There might also be future improvements of the Kitaev-Webb algorithm in 3+1 dimensions.
Trotter simulation
Now, say that we already have the state |Ω_free(τ_0)⟩. The next steps are to construct the interacting vacuum |Ω_in(τ_0)⟩ and then evolve with the Heisenberg unitary operator. Both steps require Trotter simulation based on the product formula. Compared to flat space, the task in the inflationary spacetime is rather different in the following two aspects:
• Our time-dependent quantum field theory is different from the flat space by a scale factor. The scale factor will affect Trotter simulation errors by entering the commutators.
• In the original Jordan-Lee-Preskill algorithm, the λφ⁴ quantum field theory Hamiltonian could be split into two parts, H_φ and H_π, containing fields and field momenta separately. In our case, however, there is a generic feature that we have mixing terms between fields and field momenta.
The above differences motivate us to describe a generalized, time-dependent Trotter simulation theory, which is presented in this subsection. We mostly use the notation of [48,49] and [94]. We consider a general time-dependent time evolution operator, and we split the total time into n_product intervals,
T\exp\left( -i \int_{\tau_0}^{\tau} d\tau'\, H(\tau') \right) \approx \prod_{j=1}^{n_{\rm product}} \exp\left( -iH\!\left( \frac{j-1}{n_{\rm product}} (\tau - \tau_0) \right) \frac{\tau - \tau_0}{n_{\rm product}} \right) . (4.30)
We could define a short-hand notation
\frac{j-1}{n_{\rm product}} (\tau - \tau_0) = \tau_j . (4.31)
For each term in this exponential, we have the k-th order product formula
\exp\left( -iH(\tau_j) \frac{\tau - \tau_0}{n_{\rm product}} \right) = \text{product term} + O\!\left( \frac{\alpha_{\rm com} (\tau - \tau_0)^{2k+1}}{n_{\rm product}^{2k+1}} \right) . (4.32)
The detailed expressions in various forms are given in [94]. The error constant α_com is determined by commutators,

\alpha_{\rm com} = \sum_{\beta_1, \cdots, \beta_{2k+1} = 1}^{n_{\rm split}} \left\| \left[ H_{\beta_{2k+1}}, \cdots, [H_{\beta_2}, H_{\beta_1}] \right] \right\| , (4.33)
where we expand the total Hamiltonian by
H = \sum_{\beta=1}^{n_{\rm split}} H_\beta . (4.34)
In our case, we take n_split = 3. Physically, we have the field variable term, the field momentum term, and the mixing term, which we call H_1, H_2, and H_3.
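A toy numerical check of the time-dependent product formula (4.30) with a three-term split is sketched below. The 4×4 Hermitian matrices are random stand-ins for H_1, H_2 (carrying a 1/τ² geometric factor), and H_3, and are not derived from the actual lattice Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

# First-order time-dependent Trotterization vs a finely resolved reference.
rng = np.random.default_rng(1)

def herm():
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    return (A + A.conj().T) / 2

H1, H2_0, H3 = herm(), herm(), herm()
def H(t):                                     # H(tau) = H1 + H2_0/tau^2 + H3
    return H1 + H2_0 / t**2 + H3

tau0, tau = -10.0, -1.0
n = 400
dt = (tau - tau0) / n

# reference: time-ordered evolution with 10x finer midpoint steps
U_exact = np.eye(4, dtype=complex)
for j in range(10 * n):
    tj = tau0 + (j + 0.5) * dt / 10
    U_exact = expm(-1j * H(tj) * dt / 10) @ U_exact

# first-order Trotter: split each step into the three exponentials (4.30)
U_trot = np.eye(4, dtype=complex)
for j in range(n):
    tj = tau0 + j * dt
    for Hb in (H1, H2_0 / tj**2, H3):
        U_trot = expm(-1j * Hb * dt) @ U_trot

print("Trotter error:", np.linalg.norm(U_trot - U_exact))
```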
Adiabatic state preparation and the ground state projection
We start with the adiabatic state preparation, applying our Trotter simulation formula. Generically, for a Hamiltonian H(s) parametrized by s ranging over [0, 1], we call |v_ℓ(s)⟩ the ℓ-th eigenstate at a given s,

H(s)\, |v_\ell(s)\rangle = e_\ell(s)\, |v_\ell(s)\rangle . (4.35)
The state |v_ℓ⟩ can be approximately reached by the s-dependent time evolution from s = 0 to s = 1, starting at s = 0. We call the state so obtained |u_k⟩ for the energy level e_k. The transition amplitude, which we call the "adiabatic error" ε_ad, is given by
\epsilon_{\rm ad} \equiv |\langle v_\ell | u_k \rangle| \sim \frac{1}{T (e_k - e_\ell)^2} \left\| \frac{dH(s)}{ds} \right\| . (4.36)
Here, T is the total time used to turn on the interaction. In our situation, we wish to turn on the interaction slowly and linearly, so we have

\frac{dH(s)}{ds} = H_I , (4.37)

and the adiabatic error is bounded by
\epsilon_{\rm ad} \lesssim \frac{\|H_I\|}{T \times \text{gap}^2} . (4.38)
Here, we start from the vacuum state of the free theory, so we interpret the energy difference as the mass gap of the Heisenberg Hamiltonian. As we mentioned before, our Hamiltonian has a further problem: the theory is massless. We cannot use energy alone to distinguish the vacuum state from zero-momentum states with multiple diagonal modes. Thus, some extra treatment is needed in the adiabatic state preparation.
Here, we propose the following ground state projection algorithm. We notice that the degeneracy between diagonal modes and the vacuum should only occur in the free theory: generically, we do not expect the transition amplitude to be large at finite coupling, since the energy levels split when we slowly turn on the coupling. Thus, we only need to resolve the tunneling near the free-theory regime. Note that in the free theory, diagonal modes carry positive particle numbers. Thus, starting from the free-theory vacuum, we repeatedly measure the following operator in the quantum computer:
N_{k=0} = b^\dagger_{k=0}\, b_{k=0} . (4.39)
We only keep runs in which the measurement returns zero. The operator N_{k=0} is exactly the number operator for zero-momentum diagonal modes. We need to perform this measurement only for the first few steps of the simulation: after the coupling becomes significant, we no longer need it, and in the finite-coupling regime the number operator has no physical meaning anyway. Using the above ground state projection protocol, we can bound our adiabatic error. Here, n_ad is the number of time steps used during the adiabatic process, and the total gate estimate, n_ad,total, scales as
n_{\rm ad,total} \sim V \times n_{\rm ad} \sim O\!\left( \frac{1}{\epsilon^{1/2k}} \times \frac{V^{2+3/4k}}{\epsilon_{\rm JLP}^{1+3/4k}} \times T^{1+1/2k} \times |\tau_0|^{3/2k} \right) . (4.42)
Note that here we make use of the scaling of our basis norms for ζ and π_ζ. Finally, we comment briefly on the issue of the inflaton mass. Generically, the effective field theory of inflation might receive higher-derivative corrections suppressed by the cutoff, and if the inflaton is not protected, it will receive radiative corrections. The connection between an unstable inflaton mass and the slow-roll parameter η is the so-called η-problem, suggesting that radiative corrections to the inflaton might prevent the inflationary expansion of the spacetime. In our case, we consider the lattice regularization of the theory with ultraviolet cutoff 1/b, and the short-distance theory is still massless in the sense of lattice many-body systems; in a sense, we regularize the theory such that the inflaton mass is protected to remain zero in the free-theory case. Thus, we avoid the η-problem. However, when we turn on the interaction, the inflaton mass will receive corrections from the coupling, and the theory is generically gapped (although the gap might be very small).
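Returning to the projection step around eq. (4.39), the following toy sketch illustrates the post-selection on N_{k=0} = 0: a single truncated bosonic mode stands in for the zero-momentum sector, and the state is assumed to have a small leakage into N > 0 diagonal excitations.

```python
import numpy as np

# Toy illustration of the ground-state projection protocol of eq. (4.39):
# measure N_{k=0} = b^dag b and post-select on the outcome 0.
dim = 6
n_op = np.diag(np.arange(dim))            # number operator N = b^dag b

# a state that has leaked into diagonal (N > 0) zero-momentum excitations
psi = np.zeros(dim)
psi[0], psi[2] = 0.95, np.sqrt(1 - 0.95**2)

# measuring N and keeping only outcome 0 projects back onto the vacuum sector
P0 = np.zeros((dim, dim))
P0[0, 0] = 1.0
p_keep = np.linalg.norm(P0 @ psi)**2      # success probability of post-selection
psi_proj = (P0 @ psi) / np.sqrt(p_keep)

print(f"kept with probability {p_keep:.3f}; <N> after projection = "
      f"{psi_proj @ n_op @ psi_proj:.3f}")
```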
The efficiency of inflation
Now say that we have already obtained |Ω_in(τ_0)⟩ from the above adiabatic state preparation procedure. The next step is to simulate the following unitary operation acting on the state,
U_H(\tau_{\rm end}, \tau_0)\, |\Omega_{\rm in}(\tau_0)\rangle = T\exp\left( -i \int_{\tau_0}^{\tau_{\rm end}} H(\tau)\, d\tau \right) |\Omega_{\rm in}(\tau_0)\rangle , (4.43)
using our Heisenberg Hamiltonian H. Again, we use the Trotter formula to simulate the above computation. Note that now the Hamiltonian is honestly time-dependent: it depends on the conformal time, which differs from the adiabatic state preparation, where we slowly turn on the coupling. Here, we can estimate the efficiency of the Trotter simulation during the expansion of the scale factor. Denoting the Trotter error by ε_inflation and the number of time steps by n_inflation, we have
\epsilon_{\rm inflation} \sim O\!\left( \sum_{j=1}^{n_{\rm inflation}} \alpha_{\rm com}(\tau_j)\, \frac{(\tau_{\rm end} - \tau_0)^{2k+1}}{n_{\rm inflation}^{2k+1}} \right) \le O\!\left( \alpha_{\rm com}(\tau_{\rm end})\, \frac{(\tau_{\rm end} - \tau_0)^{2k+1}}{n_{\rm inflation}^{2k}} \right) . (4.44)
Here, we bound the time dependence by assuming the dominance of the late-time Hamiltonian, due to the expansion of the scale factor. Rigorously computing the error ε_inflation is a hard problem; here we make an intuitive analysis based on the time dependence. In fact, we expect the norms of our three Hamiltonians H_1, H_2 and H_3 at the time τ to be bounded by
\|H_1\| \lesssim O\!\left( \frac{V}{\epsilon_{\rm JLP}} \right) , \qquad \|H_2\| \lesssim O\!\left( \frac{V}{\epsilon_{\rm JLP}} \frac{1}{\tau^2} \right) , \qquad \|H_3\| \lesssim O\!\left( \left( \frac{V}{\epsilon_{\rm JLP}} \right)^{3/2} \right) . (4.45)
Here we briefly explain the above bounds. The factor V/ε_JLP comes from the bound on the field range from the Chebyshev inequality; the pure cubic term H_3 therefore scales with a power 3/2. The time dependence comes purely from counting the geometric factor and the time dependence of the quantum fields. A remarkable feature is the 1/τ² dependence appearing in ‖H_2‖; this comes, in fact, from the cutoff logarithmic term that is independent of time in the mode solutions. We expect that this term is also present in the interacting theory in the short-distance limit, especially since we work with an exact lattice regularization of the quantum field theory. Thus, we expect the Trotter constant to be dominated by late times. Assuming late-time dominance, an example of the dominant piece in the Trotter formula scales as
\left[ H_1, \left[ H_2, \left[ H_1, \left[ H_2, \cdots, \left[ H_1, [H_3, H_2] \right] \right] \right] \right] \right] \sim \left( \frac{V}{\epsilon_{\rm JLP}} \right)^{2k+\frac{3}{2}} \tau^{-2k} . (4.46)
Furthermore, each commutator brings an extra factor of V, since

[H_i, H_j] = \sum_x \sum_y \delta_{x,y} \cdots = \sum_x \cdots . (4.47)
Thus, we have an estimate on the Trotter error,
\epsilon_{\rm inflation} \sim O\!\left( \frac{V^{2k+5/2}}{\epsilon_{\rm JLP}^{2k+3/2}} \frac{(\tau_{\rm end} - \tau_0)^{2k+1}}{(|\tau_{\rm end}|\, n_{\rm inflation})^{2k}} \right) \sim O\!\left( \frac{V^{2k+5/2}}{\epsilon_{\rm JLP}^{2k+3/2}} \frac{|\tau_0| \left( 1 - e^{-N} \right)^{2k+1}}{\left( e^{-N} n_{\rm inflation} \right)^{2k}} \right) . (4.48)
Thus, the total gate counting is given by
n_{\rm inflation,total} \sim O\!\left( \frac{(\tau_{\rm end} - \tau_0)^{1+1/2k}\, V^{2+5/4k}}{|\tau_{\rm end}|\, \epsilon_{\rm JLP}^{1+3/2k}} \right) \sim O\!\left( \frac{|\tau_0|^{1/2k} \left( 1 - e^{-N} \right)^{1+1/2k} V^{2+5/4k}}{e^{-N}\, \epsilon_{\rm JLP}^{1+3/2k}} \right) . (4.49)
We leave two comments on the above formula.
• The above formula is a clear example of how the e-folding number N changes the efficiency of the Trotter simulation, and of how the expansion history, through its dependence on τ, changes the resources we demand. The calculation of Trotter constants might also be important in other situations, especially for other time-dependent quantum field theories or quantum many-body problems.
• We have to admit that the above analysis is a rough estimate based on certain assumptions. First, it assumes the late-time dominance of the Hamiltonian norms and drops the dependence on the coupling constants. Second, it assumes that certain terms dominate the Trotter series, based on the late-time dominance assumption. A more careful analysis would be needed to fully characterize the Trotter error of the inflation process; we leave those calculations for future research.
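As a rough numerical illustration of eq. (4.49), the sketch below evaluates the Trotter gate-count scaling as a function of the e-folding number N; all inputs (V, |τ_0|, ε_JLP, the order k) are illustrative placeholders, and the exponential growth with N is the point emphasized in the Church-Turing discussion later in the paper.

```python
import numpy as np

# Gate-count scaling (4.49) versus e-folding number N, up to O(1) factors.
def n_gates(N, V=10**6, tau0=10.0, eps_JLP=1e-3, k=2):
    return (tau0**(1 / (2 * k)) * (1 - np.exp(-N))**(1 + 1 / (2 * k))
            * V**(2 + 5 / (4 * k))
            / (np.exp(-N) * eps_JLP**(1 + 3 / (2 * k))))

for N in (10, 30, 60):
    print(f"N = {N:2d}: ~{n_gates(N):.3e} Trotter gates")
```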
Measurement
Here we briefly discuss the issue of measurement. Say that we can already construct the state
|{\rm result}\rangle = U_H(\tau_{\rm end}, \tau_0)\, |\Omega_{\rm in}(\tau_0)\rangle . (4.50)
Then, measuring the expectation value
\langle {\rm result} |\, O(\tau_0)\, | {\rm result} \rangle , (4.51)
is a standard problem in quantum computation. Since we already know the operator O(τ_0), we can perform a probabilistic calculation using a post-selection experiment. The method is statistical, and the total cost should scale at most polynomially in 1/error. We wish to make the following comments.
• It might be interesting to look at operators beyond simply curvature perturbations and non-Gaussianities. For instance, we might consider measuring tensor perturbations, other operators like energies or stress tensors in inflationary perturbation theory, or studying the nature of the interacting vacuum by checking the assumptions in the in-in formalism calculation in perturbative field theory.
• The post-selection algorithm mentioned here is statistical and probabilistic. One might consider other algorithms that are deterministic; for instance, the quantum signal processing algorithm discussed in [101]. Such algorithms might demand extra effort in block encoding and qubitization, and require some oracles to perform selections. We leave these developments, applied to cosmology, for future research.
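As a minimal illustration of the statistical estimate, the sketch below emulates repeated projective measurements of a two-outcome observable: the estimator error shrinks as 1/√shots, so a target error ε costs O(1/ε²) repetitions, polynomial in 1/error as claimed above. The outcome probability is a made-up number.

```python
import numpy as np

# Shot-noise estimation of an expectation value by repeated measurement.
rng = np.random.default_rng(0)
p_true = 0.37                       # true probability of outcome +1, so <O> = 2p - 1

for shots in (10**2, 10**4, 10**6):
    outcomes = rng.random(shots) < p_true
    est = 2 * outcomes.mean() - 1
    print(f"shots = {shots:>7}: <O> estimate = {est:+.4f} "
          f"(true {2 * p_true - 1:+.4f})")
```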
Errors, de Sitter trees, and loops
Here we make a brief analysis of the error induced by using a finite lattice to approximate the continuum. By the standard quantum field theory argument, the lattice effect is equivalent to a series of irrelevant operators in the Lagrangian, suppressed by the lattice spacing. The leading irrelevant operators preserving the shift symmetry are at least dimension six, so the leading corrections should be \dot\zeta^3 or \dot\zeta (\partial_i \zeta)^2. Thus, the lattice effect induces an error in the non-Gaussianity scaling as O(b²). Those corrections are at tree level. Note that those corrections amount to adding a small perturbative piece of couplings to the original interaction (which could be non-perturbative in principle), which makes sense because we assume the interactions of general single-field inflation.
Corrections from a finite b to the two-point function should appear at one-loop order, scaling as b⁴. The analysis and the diagram are the same as in Figure 3, although the motivation is completely different from the previous discussion. Now, taking the interaction \dot\zeta^3 as an example, we can borrow the result of [54] for this one-loop diagram, phrased in the language of the effective field theory of inflation. In our notation, we start from the interaction

S = b^2 \int d^3x\, dt\, \kappa\, a^3\, \dot{\zeta}^3 , (4.52)

where κ is a dimensionful coupling. The one-loop correction to the two-point function of the curvature perturbation in momentum space should scale as

\text{one-loop correction} \sim b^4 \kappa^2 H^8\, \frac{1}{k^3} \times \log(Hb) , (4.53)

where, abusing notation, H here is the Hubble constant, not the Hamiltonian. The paper [54] claims that the result is consistent between the cutoff and dimensional regularizations. In principle, one can use such one-loop formulas to control the systematic error produced by the lattice regularization.
Final remarks
In this paper, we present a complete analysis of the generalized Jordan-Lee-Preskill algorithm for inflationary spacetime. The algorithm contains the encoding analysis, initial state preparation, adiabatic state preparation, time evolution during inflation, and the measurement. We compute the time cost of the algorithm and argue that the complexity is polynomial in system size for quantum devices, sharpening the statement of the quantum-extended Church-Turing Thesis. The analysis includes various techniques from high energy physics to quantum information. We also make suggestions on the physical questions we could answer when running the algorithm in the quantum computer.
In this section, we will make further suggestions and comments on the future directions related to this paper.
Improvement of the algorithm
Regarding the algorithm itself, there is still room for future improvement. We point out some directions in the following.
• About the encoding treatment: in this paper, we work in the field basis at the initial time τ_0 and use the HKLL-type formula to encode the field and the field momentum at a general time τ. However, it might be useful to explore other bases. For instance, one could consider encoding the field from τ_end and evolving it back; if realized, this might make it more convenient to bound the field fluctuations, since we have late-time experimental data on the curvature perturbation. Furthermore, one could also consider encoding the Hamiltonian in the momentum basis or in the harmonic oscillator basis, as we have discussed before.
• About the Jordan-Lee-Preskill bound from the EFT: in this paper, we analyze the curvature perturbation and the field momentum perturbation using some knowledge from cosmology. It might be possible to improve those bounds by taking more physics into consideration; for instance, in specific models those fluctuations might be under better theoretical control. Moreover, it might be interesting to explore the phase diagram of the couplings we mentioned, in general single-field inflation or in other models.
• About the adiabatic state preparation: we discussed an algorithm to filter out the degeneracy of zero-momentum diagonal modes, projecting onto the ground state. It would be interesting to analyze in more detail how the error scales during this process, both in the free theory and in the interacting theory with small couplings. It would also be helpful to explore the dependence of the mass gap on the couplings, analytically or numerically; such research might benefit from near-term quantum devices. It might also be helpful to try other algorithms for the adiabatic state preparation process, for instance the Wan-Kim algorithm recently proposed in [102].
• About the Trotter evolution: in this paper, we mainly discuss estimates focused on the time dependence of the Hamiltonian. It would be helpful to analyze all other parameters in more detail, for instance the values of the couplings, and to perform exact computations of the Trotter error, numerically or analytically.
• About the measurement and the representation of the Hamiltonian, it might be helpful to discuss some improvements of the algorithm using methods with oracle constructions, for instance, quantum signal processing. Those might be associated with other methods to represent the Hamiltonian, for instance, qubitization. See [75] for a review about them.
The quantum Church-Turing Thesis
Here we make some comments about the quantum-extended Church-Turing Thesis.
The quantum-extended Church-Turing Thesis is a claim about the capacity of quantum computation to simulate our real world. It claims that every physical process happening in the real universe can be computed efficiently using the model of the quantum Turing machine or the quantum circuit; namely, the thesis is a claim about the efficiency of simulation. As we all know, a cost scaling polynomially with the system size will satisfy computer scientists, while a cost scaling exponentially with the system size is disappointing. However, implementing the quantum-extended Church-Turing Thesis in general relativity is challenging. The theory of general relativity admits ambiguities in defining space and time, while quantum mechanics and quantum computation require a specific definition of time. More precisely, if the time cost expressed in one coordinate system suggests that the complexity is polynomial, it might be exponential in some other time coordinate. So the statement of the quantum-extended Church-Turing Thesis seems to require a covariant definition, if we believe in the low-energy effective description of general relativity and the fundamental existence of time.
In general, the quantum-extended Church-Turing Thesis is widely believed to be correct, although we cannot really prove the statement. It would be very interesting if someone could prove it under some general assumptions, and it would also be interesting if someone could find cases where the claim is violated. Furthermore, we could imagine that the quantum-extended Church-Turing Thesis could even serve as a swampland criterion in the landscape of theories: if we find that the Thesis is violated in some model, then the model might be unphysical, or the operation corresponding to the time cost might not be allowed. Otherwise, we may accept that the model of the quantum Turing machine is not universal and powerful enough, and we might consider using some other physical process to perform quantum computation.
Our analysis of the Trotter simulation could potentially provide evidence for interpreting the quantum-extended Church-Turing Thesis as a swampland criterion, through a potential connection with the Distance Conjecture (DC) [103] and the Trans-Planckian Censorship Conjecture (TCC) [104], both known swampland criteria in the literature. More precisely, according to eq. (4.49), the resources demanded by the Trotter simulation of inflation grow approximately exponentially in the e-folding number, indicating the failure of the quantum-extended Church-Turing Thesis for a large enough e-folding number. Indeed, both the DC and the TCC can be written as bounds on the e-folding number; see, e.g., [104,105]:
\text{DC: } \Delta\phi \sim \sqrt{2\epsilon}\, N < O(1) , \qquad \text{TCC: } e^N < \frac{1}{H_{\rm end}} . (5.1)
Current research suggests further issues with the quantum-extended Church-Turing Thesis in gravity. First, general relativity seemingly introduces ambiguities not only in the efficiency but also in the computability of the Turing machine, in classical or quantum computation. A typical thought experiment of this type is named after Malament and Hogarth [106], suggesting that general relativity is able to solve the Halting Problem (see the comments in Appendix A, and a related discussion about computability and the gravitational path integral [107]). Furthermore, there are recent discussions about the quantum-extended Church-Turing Thesis and the holographic correspondence: we know that the entanglement entropy and the complexity of the boundary CFT should generically be computationally hard to evaluate, while on the dual side, the area and the volume in gravity seem computationally easy. This puzzle raises further questions about the nature of the holographic correspondence and the quantum-extended Church-Turing Thesis in quantum gravity (see [108-112]).
Here, we wish to point out that our calculations might provide a concrete framework to address the quantum-extended Church-Turing Thesis in curved spacetime. In this work, we argue that the time cost is polynomial when we use the conformal coordinates. But what about the physical time coordinate? What about mixing space and time? One could proceed with a similar analysis following our paper. One could also try to generalize our discussion to other spacetimes, like quantum field theories in pure AdS or in black hole backgrounds.
The role of constants in the Trotter algorithm
Here we wish to point out another perspective in our Trotter simulation. Usually, the Trotter constant appearing in the commutator terms may not be that important, especially when the Hamiltonian is time-independent. However, the constant α com appearing in our calculation is time-dependent during inflation. Here, the constant has clear physical meanings, carrying the geometric factor for the inflationary spacetime.
Here we wish to point out that the Trotter constant might be physically interesting in other circumstances as well, and that the product formula itself might carry physical meaning. For instance, we expect chaotic Hamiltonians to be harder to simulate than integrable ones, based on the intuition of the butterfly effect; the hardness of quantum simulation due to chaos might thus show up in the Trotter commutators. If one could explicitly establish quantitative connections between the commutators in the Trotter formula and out-of-time-ordered correlators, it would be interesting for a possible physical understanding of our quantum simulation algorithms. This might also be related to recent discussions about improving Trotter simulation using randomness techniques (see [113-116]).
Verifying statements in quantum field theories
In this paper, we point out another issue regarding the value of quantum simulation. As we know, quantum field theories are hard to study, especially when they are strongly coupled. Sometimes the arguments we make are physically intuitive but not rigorous enough to satisfy mathematicians and computer scientists. Thus, we claim that quantum computation, by making field theory problems exactly verifiable and numerically accessible, might provide new avenues to test and clarify our physical claims in quantum field theory. We believe that our example of the in-in formalism and the interacting vacua is not unique. The future computational power provided by quantum devices might force us to make everything precise and to bound the errors of our physical arguments. For instance, some claims in quantum field theory about dualities, from AdS/CFT to the duality webs recently applied in high energy and condensed-matter physics, might be checked and verified in similar ways.
Quantum gravity in the lab: analog and near-term simulation
Our paper can be regarded as simulating a certain class of quantum gravitational theories on a digital quantum computer. There are similar recent works on the quantum simulation of other aspects of quantum gravitational theories, mostly related to AdS/CFT and black hole physics. For instance, the celebrated Sachdev-Ye-Kitaev (SYK) model and related quantum gravity effects have been considered for simulation on analog platforms [117-120] (see also digital simulations in [121-123], and some early papers about analog simulation of cosmic inflation [124,125]). Thus, it might be interesting to consider those analog platforms in the cosmological setup (see [42,126]). For quantum simulation on near-term devices, it might be interesting to explore variational quantum simulation and hybrid quantum-classical algorithms making use of classical knowledge, for instance matrix product states [127].
Multi-field inflation and dS conjecture
In this work, we discuss the simulation of single-field inflation models of the early universe. The above calculations can naturally be extended to the multi-field case. One application of such calculations would be to examine the dS conjectures discussed in the recent literature.
Single-field inflation models are considered to possibly violate the dS conjectures [105]. The dS conjecture argues that a scalar field theory consistent with quantum gravity has to satisfy the following criterion,
|∂φV | ≥ cV ,(5.2)
where c represents an O(1) constant. For single field inflation, as we can observe in eq. (2.11), this swampland criterion then implies
\epsilon \sim \epsilon_V \ge \frac{1}{2} c^2 , (5.3)
which is in tension with the slow-roll condition ε ≪ 1. However, it is claimed that the dS conjecture does not exclude all inflationary models; e.g., it turns out that multi-field inflation can survive [128]. We expect that the algorithm in this work can be extended to multi-field inflation, which can then test the dS conjecture by simulating the physical process. For example, it is shown that when the angular velocity Ω of the inflation trajectory in multi-field space is nearly constant, heavy modes σ orthogonal to the inflation trajectory can be integrated out, and the resulting model can be effectively described by eq. (2.12) supplemented with further interaction terms [129-131], where the sound speed c_s encodes the details of the additional heavy modes,
c_s = \left( 1 + \frac{4\Omega^2}{M_\sigma^2} \right)^{-1/2} . (5.4)
Thus most of our algorithm and our analysis of complexity should still be applicable, except that we have to modify the interaction terms that may alter the details of evolution. On the other hand, this model does not necessarily violate the dS conjecture, because now we have
\epsilon = \epsilon_V \left( 1 + \frac{\Omega^2}{9H^2} \right)^{-1} \ge \frac{c^2}{2} \left( 1 + \frac{\Omega^2}{9H^2} \right)^{-1} . (5.5)
Large enough Ω can then rescue the slow roll condition, and the lower bound is given in [128]
\frac{\Omega}{H} \ge 3 \left[ \left( \frac{cN}{\Delta} \right)^2 - 1 \right]^{1/2} , (5.6)
where Δ is an O(1) constant associated with the DC. This bound is important if one aims to simulate such multi-field inflation without violating the dS conjecture: a successful simulation requires a delicate balance between the input value of Ω and the Trotter gate count of eq. (4.49) for a given e-folding number N.
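A small numerical evaluation of the bound (5.6) is sketched below; the constants c and Δ are illustrative O(1) inputs, not values fixed by the text.

```python
import numpy as np

# Lower bound (5.6) on Omega/H for sample O(1) constants.
def omega_over_H_min(N, c=0.1, Delta=1.0):
    x = (c * N / Delta)**2 - 1
    return 3 * np.sqrt(x) if x > 0 else 0.0   # bound is trivial if cN < Delta

for N in (20, 40, 60):
    print(f"N = {N}: Omega/H >= {omega_over_H_min(N):.2f}")
```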
Other phases of cosmology
It might also be interesting to extend this work to other phases of cosmology. For instance, there are early-universe paradigms that are alternatives to inflation (see, for instance, [132-134]). Several proposals, for instance the ekpyrotic universe proposal of [134], contain brane configurations in string theory and are related to non-perturbative physics. Furthermore, as mentioned before, it might be interesting to validate the in-in computations of correlation functions in those spacetime backgrounds using quantum devices. It will also be interesting to look at other cosmic eras. For instance, one might consider cosmic reheating (see Appendix B) and the electroweak phase transition in the early universe; those eras are closely related to strongly-coupled physics in quantum field theory and are also observationally relevant [135,136].
It is also important to extend the simulation algorithm of the current paper to quantum field theories living in FLRW universes arising from time-dependent compactifications of string theory [137,138]. Simulating these FLRW spacetimes may shed light on resolving the tension between the inflation scenario and the dS conjecture [139].
Holographic scattering, sub-AdS locality, and the computational complexity of AdS/CFT
Another possible generalization of the current work is to look at scattering experiments in AdS. Specifically, one could prepare similar states made of mode functions in AdS spacetime backgrounds. A typical example is the holographic scattering experiment, where we start from wave packets prepared near the boundary and shoot them into the bulk. Correlation functions might be computed semiclassically by tree-level Witten diagrams, or by the dual correlation functions in the boundary CFT. When turning on the coupling, we might receive loop corrections. There are several confusions and discussions regarding this process, related to the concept of locality beyond the semiclassical theory at the sub-AdS scale [57-64]. Those processes are well worth studying once we have the computational power of quantum devices. To simulate the above theories on a quantum computer, one should first discretize the theory on a lattice; some triangulation procedure might be needed to write a lattice theory in some AdS coordinates. Furthermore, one has to resolve similar problems, for instance the encoding process, in order to generalize the Jordan-Lee-Preskill algorithm to AdS spacetime. On the dual side, one has to construct a complete algorithm to estimate correlation functions in the boundary CFT, which might be built from critical points of spin chains.
This problem is also helpful for the conceptual issue we mentioned before about the quantum Church-Turing Thesis. Naively, if the boundary large-N CFT is nearly a generalized free theory, the dual theory is nearly a free particle propagating in pure AdS. In this setup, both sides should be computationally easy to simulate, although it might still be interesting to come up with concrete algorithms. The situation might change if we include a black hole in the AdS background, so that the dual theory has higher-energy states describing black hole microstates. We feel that this is a concrete setting in which to check the computational complexity of the AdS/CFT correspondence in the original sense of Witten [140]. If we could really find processes here that are computationally hard, it might indicate that the quantum-extended Church-Turing Thesis is violated. Furthermore, in this concrete setting, one might be able to verify the statement made by Susskind [110]. Digging deeper into this problem, we might find relations between the simulation problem and some fundamental aspects of holography and quantum information theory, for instance pseudorandomness [111] and quantum entanglement [98].
Other generalizations and further open problems
Here we propose some further generalizations of our work.
• It might be interesting to study the complexity class of the problems in cosmic perturbation theory. For instance, one could try to identify the complexity class of the computational tasks presented in this paper in a more rigorous way, similar to the study in [51]. More precisely, we wish to prove that the cosmic perturbation theory is BQP-complete rigorously in the quantum information language.
• It might be interesting to make a full investigation on the commutators appearing in the Trotter formula and precisely compute the Trotter constant, even for the quantum field theories in the flat space, for instance, the λφ 4 theory or even the free theory. It is also interesting to study the physical implications of the Trotter formula in the continuum limit.
• It might be interesting to relate the discussions about quantum simulation in the bulk of the de Sitter space to the future time slice τ end , namely, the "boundary". Those discussions are potentially related to some proposals of realizing de Sitter space in a dual theory, for instance, the dS/CFT correspondence and some other proposals [37][38][39][41][42][43][44].
• It might be interesting to investigate further the swampland conjectures that have been discussed recently (see, for instance, [104,139,141,142]). Although we focus on the effective field theory point of view in this paper, we expect that one day one could construct quantum algorithms from the ultraviolet perspective and investigate those conjectures further, beyond the Church-Turing swampland criterion discussed before.
A Comments on computation and quantum gravity
In this section, we make some brief discussions about computation and quantum gravity. There are several historical comments that are related to this topic, for instance, insightful discussions in [143].
As we mentioned before, the concept of computation is naturally associated with the definition of time, which requires more care when we mix the definitions of time and space in the theory of relativity. In fact, if we consider this problem in special relativity, we could boost the observer, and the computation might get some speedup in the observer's inertial frame; however, it requires extra energy to perform the boost. Thus, in special relativity, it seems that we have to account for the computational resources by merging the time cost and the energy cost, as suggested, in a sense, by the Bekenstein bound [144], which relates energy and entropy [143]. How is this related to cosmic inflation, where, as in this paper, we simulate a spacetime in which energy is not conserved and there is even a free lunch during the cosmic expansion?
More weirdness appears when we consider the theory of general relativity. Taking the boost example mentioned before, one could boost a particle drastically near the horizon of a black hole. Moreover, we can even construct spacetimes where one can see entire worldlines by traveling a finite amount of time in an inertial frame. This is the so-called Malament-Hogarth spacetime [106]. In the language of general relativity, it means that there exist a past-extendible timelike curve and a point (event) such that the curve itself is infinitely long, yet the curve is part of the timelike past of the point. One can construct such events in Kerr or AdS spacetimes, so they can be Malament-Hogarth (see [145]). In a Malament-Hogarth spacetime, one could even solve the Halting Problem.
Facing the above weirdness in general relativity, a natural response is to admit that we have to use quantum gravity when defining those computational tasks. The construction of infinite boosts in general relativity requires Trans-Planckian statements, where we have to discuss physics at distances smaller than the Planck length. Thus, it is expected that the Malament-Hogarth construction is not physical, and that quantum gravitational backreaction will prevent it from happening. Some related questions are addressed in [107,112,146] on black hole physics and the gravitational path integral.
However, before we are fully equipped with a theory of quantum gravity, we can make some "temporary choices" when simulating specific theories:
• We could make specific choices of time coordinates before studying how covariant the system is. For instance, we choose conformal coordinates in this paper. Similar coordinates include the Poincare coordinates in AdS, or the boundary coordinates in asymptotically flat spacetime (for instance, the Schwarzschild black hole in an asymptotically flat background). Those coordinates are not that different from flat-space coordinates, in the sense that we can take certain "limits" or conformal transformations. One could consider constructing algorithms to simulate dynamics in those spacetimes as a starting point.
• We could focus on some well-defined tasks in quantum many-body systems and quantum field theories. For instance, we could temporarily forget how special relativity is emergent from some quantum many-body systems, and just pick some time slices equipped with some well-defined quantum physics machinery.
The freedom of temporary choices we could make does not mean that we will stop thinking about improvements to the framework. We feel that in the future, the following research directions will be valuable.
• Starting from continuum physics, we could think about boosting particles or changing coordinates in specific systems: for instance, the Rindler transformation in flat space or around a black hole, or transforming from the conformal coordinate to the physical coordinate in cosmology. Moreover, in models with weakly coupled gravity, we could compute the perturbative corrections from quantum gravity to see if they change the answer, making the backreaction story discussed before more precise.
• Starting from discrete physics, we could think about how the light cone emerges from many-body systems, and how it could be related to quantum complexity. To do this, we might consider spin models with a second-order phase transition as a starting point. We could also redefine the computational resources by accounting for energy costs; this might be related to applications of quantum information theory in thermodynamics and resource theories (see the review [147]).
B Comments on quantum simulation of cosmic reheating
The results presented in this paper are about cosmic inflation, but many techniques introduced here can be directly applied to another problem in cosmology: cosmic reheating. Unlike cosmic inflation, the reheating process is naturally strongly coupled. Thus, it is challenging to formulate exact quantum field theory statements analytically for the reheating process, which naturally motivates numerical investigations and possible computational speedups using quantum devices (see some references on cosmic reheating [23-28]). The digital quantum simulation of cosmic reheating might deserve another lengthy paper; here, we only point out the idea and leave this possibility for future research. Inflation is a super "cool" process, in which the universe expands at a very low temperature. At the end of inflation, inflaton condensates decay into other particles. This process (preheating) is claimed to be violent and far from equilibrium, associated with non-perturbative phenomena such as stochastic resonances. Finally, the universe thermalizes (reheating), setting the stage for big-bang nucleosynthesis. The process might also be associated with cosmological observations, such as the CMB observations of inflation, gravitational waves, magnetic fields, and the baryon asymmetry.
Due to the non-perturbative nature of the (p)reheating process, it is challenging to make predictions relying only on perturbative analysis in quantum field theory. Several lattice simulations have been performed to explore the process, although a full calculation in the setup of quantum field theory in curved spacetime, with a complete description of the Hilbert space, is still in development [148]. Thus, we face a situation similar to lattice gauge theory in flat space: we might require quantum devices for future computations to make reliable predictions. As a result, studying cosmic reheating using quantum computers is well motivated.
For instance, a typical model in the early study of reheating [23] is to consider the following couplings between the inflaton and the matter fields
S = \int d^3x\, dt\, \sqrt{-g} \left( -\sigma \phi \chi^2 - h \phi \bar\psi \psi \right) . (B.1)
Here χ and ψ are scalar and fermionic decay products with the couplings σ and h. Simple perturbative analysis shows that the decay rate is given by
\Gamma_{\phi \to \chi\chi} = O\!\left( \frac{\sigma^2}{m} \right) , \qquad \Gamma_{\phi \to \bar\psi\psi} = O\!\left( h^2 m \right) , (B.2)
where m is the mass in the inflationary potential V = m²φ²/2. Using those formulas, one could study how the matter content is generated and estimate the temperature based on the above perturbative analysis. However, the analysis can only be trusted in a heuristic sense: the nature of the process is highly non-perturbative, and we cannot use tree-level perturbative quantum field theory techniques. Thus, a simple experiment to start with is to simulate the above process with large couplings on a digital quantum device. The situation here is closer to the original Jordan-Lee-Preskill setup, merged with our methods in curved spacetime: the background scale factor is different, so the mode functions differ, but the quantization of fields in the free theory is very similar. We could start from free-theory wave packet states of the inflaton, adiabatically turn on the interaction, evolve the state, and finally measure the decay rate of particles with a similar post-selection.
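For orientation, the sketch below evaluates the perturbative rates (B.2) for sample couplings, up to the O(1) factors suppressed in the text; such numbers would serve only as a classical cross-check for the proposed strongly-coupled simulation.

```python
# Order-of-magnitude evaluation of the perturbative decay rates (B.2).
# sigma and h are sample couplings in units where the mass scale is m.
m = 1.0
for sigma, h in [(0.01, 0.01), (0.3, 0.3)]:
    gamma_chi = sigma**2 / m          # Gamma(phi -> chi chi), up to O(1) factors
    gamma_psi = h**2 * m              # Gamma(phi -> psi psi), up to O(1) factors
    print(f"sigma=h={sigma}: Gamma_chi ~ {gamma_chi:.3g}, Gamma_psi ~ {gamma_psi:.3g}")
```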
The generalization of our algorithm to the above (p)reheating model should be straightforward, except that now we have to encode fermions. Even in flat space, it is challenging to encode fermion fields in higher dimensions. In [50], an algorithm for encoding fermions in 1+1 dimensions is established with the Bravyi-Kitaev and the Jordan-Wigner transformations. However, in a Trotter simulation, if the Hamiltonian is non-local after encoding (which is the case for the naive Bravyi-Kitaev or Jordan-Wigner transformations), we do not get a good complexity scaling (see [94] for a conclusion about k-local interactions). Fortunately, there are other algorithms that maintain locality (for instance, the "superfast simulation of fermions" of Bravyi-Kitaev [149], or [150-152]), suggesting novel orderings that preserve locality in the bosonization process. Such encodings should be useful not only for studying reheating, but also for fermionic quantum field theory simulations on quantum devices in flat space. In curved spacetime, several details, for instance the spin connection on the lattice, still need to be worked out. Furthermore, it might also be interesting to try analog simulations of cosmic reheating [153].
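To make the encoding issue concrete, below is a minimal sketch of the Jordan-Wigner transformation mentioned above, building fermionic annihilation operators on a small chain and checking the canonical anticommutators; the Pauli-Z string is exactly the non-locality that the locality-preserving encodings cited above are designed to avoid.

```python
import numpy as np

# Jordan-Wigner encoding of fermionic modes on a small chain.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner c_j = (prod_{i<j} Z_i) (X_j + i Y_j)/2 on n sites."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return kron_all(ops)

n = 3
c0, c1 = annihilation(0, n), annihilation(1, n)
# check the canonical anticommutators {c_i, c_j^dag} = delta_ij
acomm_01 = c0 @ c1.conj().T + c1.conj().T @ c0
acomm_00 = c0 @ c0.conj().T + c0.conj().T @ c0
print(np.allclose(acomm_01, 0), np.allclose(acomm_00, np.eye(2**n)))
```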
Figure 3. The one-loop diagram at the fixed time τ_0.
The 1/L gap introduces extra polynomial factors in the system size. The explicit determination of the evolution time T should come from the Trotter computation.
We apologize for abusing the notation such that ∫d³k means integration over the three-vector k; similar definitions work for x and x̃. We also use k_i to denote the i-th component, k_i = k · r_i, where r_i is the i-th unit vector.
The adiabaticity of the vacuum here is not exactly the same as the adiabatic quantum computation discussed later. In the latter treatment, we artificially turn on the interactions, so it is a time-dependent dynamical process at the fixed initial time; the adiabatic vacuum, by contrast, is defined by the time-dependent quantum field theory discussed here, whose time dependence is introduced by the scale factor.
Note that this iε-prescription is the standard iε-prescription of quantum field theory; it is not the slow-roll parameter of cosmic inflation.
Footnote 13: ε is a combination of Σ, λ, and 1 − c_s, which we discussed before.
| []
[
"Contextualize Me - The Case for Context in Reinforcement Learning"
] | [
"Carolin Benjamins [email protected] \nLeibniz University Hannover\n",
"Theresa Eimer [email protected] \nLeibniz University Hannover\n",
"Frederik Schubert [email protected] \nLeibniz University Hannover\n",
"Aditya Mohan [email protected] \nLeibniz University Hannover\n",
"Sebastian Döhler [email protected] \nLeibniz University Hannover\n",
"André Biedenkapp [email protected] \nUniversity of Freiburg\n",
"Bodo Rosenhahn [email protected] \nLeibniz University Hannover\n",
"Frank Hutter \nUniversity of Freiburg\n",
"Marius Lindauer [email protected] \nLeibniz University Hannover\n"
] | [
"Leibniz University Hannover",
"Leibniz University Hannover",
"Leibniz University Hannover",
"Leibniz University Hannover",
"Leibniz University Hannover",
"University of Freiburg",
"Leibniz University Hannover",
"University of Freiburg",
"Leibniz University Hannover"
] | [] | While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces. | null | [
"https://export.arxiv.org/pdf/2202.04500v2.pdf"
] | 246,680,013 | 2202.04500 | df8e5f2e19b696fc5ed4bec9b61835943c8e8a8f |
Contextualize Me - The Case for Context in Reinforcement Learning
2 Jun 2023
Carolin Benjamins [email protected], Leibniz University Hannover
Theresa Eimer [email protected], Leibniz University Hannover
Frederik Schubert [email protected], Leibniz University Hannover
Aditya Mohan [email protected], Leibniz University Hannover
Sebastian Döhler [email protected], Leibniz University Hannover
André Biedenkapp [email protected], University of Freiburg
Bodo Rosenhahn [email protected], Leibniz University Hannover
Frank Hutter, University of Freiburg
Marius Lindauer [email protected], Leibniz University Hannover
Published in Transactions on Machine Learning Research (06/2023). Reviewed on OpenReview: https://openreview.net/forum?id=Y42xVBQusn. *Equal Contribution.

Abstract

While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces.
Introduction
Reinforcement Learning (RL) has shown successes in a variety of domains, including (video-)game playing (Silver et al., 2016; Badia et al., 2020), robot manipulation (Lee et al., 2020a; Ploeger et al., 2020), traffic control (Arel et al., 2010), chemistry (Zhou et al., 2017), logistics (Li et al., 2019) and nuclear fusion (Degrave et al., 2022). At the same time, RL has shown little success in real-world deployments that require generalization, focusing instead on narrow domains (Bellemare et al., 2020; Degrave et al., 2022). We believe this can largely be explained by the fact that modern RL algorithms are not designed with generalization in mind, making them brittle when faced with even slight variations of their environment (Yu et al., 2019; Meng & Khushi, 2019; Lu et al., 2020).
To address this limitation, recent research has increasingly focused on generalization capabilities of RL agents. Ideally, general agents should be capable of zero-shot transfer to previously unseen environments and robust to changes in the problem setting while interacting with an environment (Ponsen et al., 2009; Henderson et al., 2018; Cobbe et al., 2020; Zhang et al., 2021b; Fu et al., 2021b; Abdolshah et al., 2021; Sodhani et al., 2021b; Adriaensen et al., 2022; Kirk et al., 2023). Steps in this direction have been taken by proposing new problem settings where agents can test their transfer performance, e.g. the Arcade Learning Environment's flavors (Machado et al., 2018) or benchmarks utilizing Procedural Content Generation (PCG) to increase task variation, e.g. ProcGen (Cobbe et al., 2020), NetHack (Küttler et al., 2020) or Alchemy (Wang et al., 2021). Furthermore, robustness to distribution shift as well as multi-task learning have been long-standing topics in meta-RL, both in terms of benchmarks (Yu et al., 2019; Sodhani et al., 2021a) and solution methods (Pinto et al., 2017; Finn et al., 2017; Zhu et al., 2020; Zhang et al., 2021d).
While these extended problem settings in RL have expanded the possibilities for benchmarking agents in diverse environments, the degree of task variation is often either unknown or cannot be controlled precisely. We believe that generalization in RL is held back by these factors, stemming in part from a lack of problem formalization (Kirk et al., 2023). In order to facilitate generalization in RL, cRL proposes to explicitly take environment characteristics, the so-called context (Hallak et al., 2015), into account. This inclusion enables precise design of train and test distributions with respect to this context. Thus, cRL allows us to reason about which types of generalization abilities RL agents exhibit and to quantify their performance on them. Overall, cRL provides a framework for both theoretical analysis and practical improvements.
In order to empirically study cRL, we introduce a benchmark library for Context-Adaptive Reinforcement Learning: CARL. CARL collects well-established environments from the RL community and extends them with the notion of context. To ensure interpretability, CARL considers context which is mainly based on physical properties and thus intuitive to humans. For example, CARL extends Brax (Freeman et al., 2021) environments with properties such as friction, gravity, or the mass of an object (see Figure 1). Through CARL's interface, it is possible to meticulously define the context distributions on which RL agents are trained and evaluated. We use our benchmark library to empirically show how different context variations can significantly increase the difficulty of training RL agents, even in simple environments. We further verify the intuition that allowing RL agents access to context information is beneficial for generalization tasks in theory and practice.
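To make the idea of precisely controlled train and test context distributions concrete, the following minimal sketch samples contexts from user-defined ranges and hands them to a context-extended environment. All names here (ContextualPendulum, the context dictionary keys) are hypothetical placeholders chosen for illustration; they are not CARL's actual interface.

```python
import random

# Hypothetical context-extended environment: the class name and the
# "context" argument are illustrative placeholders, not CARL's real API.
class ContextualPendulum:
    def __init__(self, context):
        # The context fixes physical properties for the whole episode.
        self.gravity = context["gravity"]
        self.pole_length = context["pole_length"]

def sample_context(gravity_range, length_range):
    """Draw one context from uniform distributions over physical properties."""
    return {
        "gravity": random.uniform(*gravity_range),
        "pole_length": random.uniform(*length_range),
    }

# Narrow training distribution, deliberately wider evaluation distribution:
train_contexts = [sample_context((9.0, 10.5), (0.9, 1.1)) for _ in range(100)]
eval_contexts = [sample_context((5.0, 15.0), (0.5, 2.0)) for _ in range(20)]

train_envs = [ContextualPendulum(c) for c in train_contexts]
eval_envs = [ContextualPendulum(c) for c in eval_contexts]
```

Because both distributions are explicit, the gap between them quantifies exactly which kind of generalization (interpolation or extrapolation over each context feature) is being tested.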
In short, our contributions are: (i) We provide a theoretical and empirical overview of why Contextual Reinforcement Learning (cRL) is useful for research into zero-shot generalization for RL; (ii) We introduce our benchmark library CARL which enables fine-grained context control in benchmarking cRL; and (iii) We demonstrate that even on simple environments, generalization is challenging for standard RL agents.
Contextual Markov Decision Processes
In order to facilitate generalization, we first have to rethink how we model the RL problem. While we could follow the common notion of modeling environments as Markov Decision Processes (MDPs), this way of modeling typically assumes a single, clearly defined environment. We believe this problem formulation is overly restrictive. Agents trained under such an assumption fail when the underlying environment does not behave exactly as they have experienced during training. If we instead model the problem as a contextual MDP (cMDP), following Hallak et al. (2015) and Modi et al. (2018), we assume that there are multiple related but distinct environments with which an agent might interact and which can be characterized through context. This notion of context provides us with the means necessary to study the generalization abilities of RL agents in a principled manner.
What is Context? To help build intuition on what context is and how it might influence the learning problem, we first give an informal treatment of context. In essence, context characterizes how the environment behaves and what its goals look like. In contrast to the observations of an MDP, which describe the changes to the environment step by step, context allows us to reason about how the state will evolve without requiring access to the true transition and reward functions. Further, context features are typically static (i.e., do not change during an episode) or change at a much slower time scale than state observations. In a robot, for example, joint friction could inform an RL controller how much torque to apply to execute some desired action. Over a short horizon, the friction will not change. However, especially if the robot is not well maintained, the friction can increase due to mechanical degradation, and the controller needs to adapt accordingly. Context information can help it compensate appropriately. Another example could be different payloads as context for a robot or different winds for a helicopter (Koppejan & Whiteson, 2009). Note that such a feature does not need to specify the exact change in transition dynamics; rather, it needs to provide a signal of how the transition (and reward) functions relate to each other.
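As a minimal worked example of context entering the transition function, consider the friction scenario just described for a one-dimensional point mass: the same action yields different next states under different contexts, while the per-step observation carries no friction information itself. The dynamics and numbers below are illustrative assumptions, not taken from any particular benchmark.

```python
def transition(state, action, context, dt=0.05):
    """Context-dependent dynamics of a 1-D point mass.

    state:   (position, velocity)
    action:  applied force
    context: dict with a static "friction" coefficient
    """
    pos, vel = state
    friction = context["friction"]  # context feature, fixed within an episode
    acc = action - friction * vel   # higher friction damps the same action more
    return (pos + vel * dt, vel + acc * dt)

# The same state-action pair evolves differently across contexts:
s, a = (0.0, 1.0), 2.0
print(transition(s, a, {"friction": 0.1}))  # lightly damped
print(transition(s, a, {"friction": 2.0}))  # strongly damped
```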
Context does not need to influence reward and transition functions at the same time. In goal-based reinforcement learning (e.g., Schaul et al., 2015;Eysenbach et al., 2019), the notion of goals influences the reward function, typically without changing the transition dynamics. For example, an agent needs to traverse an empty gridworld. A goal that is placed to the right of the agent yields high rewards by moving closer. If another goal is now on the left of the agent, the reward for actions switches without changing how the agent traverses the grid (i.e. the transition function).
By its nature, context enables agents to learn more discriminative and thus more general policies. This makes context very well suited to study the generalization of RL agents (Kirk et al., 2023). However, context has yet to be widely explored or leveraged in reinforcement learning, and there are many open questions to be addressed. A prominent one among them is how to use context during learning. We discuss contextual RL more formally in Section 4.

Contextual Markov Decision Processes (cMDP) Contextual Markov Decision Processes (Hallak et al., 2015; Modi et al., 2018) allow us to formalize generalization across tasks by extending the standard definition of an MDP in RL. An MDP M = (S, A, T, R, ρ) consists of a state space S, an action space A, transition dynamics T, a reward function R and a distribution over the initial states ρ. Through the addition of context, we can define, characterize and parameterize the environment's rules of behavior and therefore induce task instances as variations on the problem. In the resulting cMDP, the action space A and state space S stay the same; only the transition dynamics T_c, the reward R_c and the initial state distribution ρ_c change depending on the context c ∈ C. Through the context-dependent initial state distribution ρ_c, as well as the change in dynamics, the agent may furthermore be exposed to different parts of the state space for different contexts. The context space C can either be a discrete set of contexts or defined via a context distribution p_C. A cMDP M therefore defines a set of contextual MDPs M = {M_c}_{c∼p_C}.
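To make the formalism concrete, the following minimal sketch shows one way a cMDP could be represented in code. All names are illustrative; this is not CARL's API, only a direct transcription of the definition above.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# A context maps feature names to values, e.g. {"gravity": -9.8, "l": 1.0}.
Context = Dict[str, Any]

@dataclass
class ContextualMDP:
    """Shared state/action spaces; context-dependent dynamics and reward."""
    transition: Callable[[Any, Any, Context], Any]    # T_c(s, a) -> s'
    reward: Callable[[Any, Any, Context], float]      # R_c(s, a) -> r
    initial_state: Callable[[Context], Any]           # sample from rho_c

    def step(self, state: Any, action: Any, context: Context):
        """One transition of the MDP M_c induced by a fixed context c."""
        next_state = self.transition(state, action, context)
        return next_state, self.reward(state, action, context)
```

Fixing a context c recovers an ordinary MDP; sampling c from p_C yields the family of tasks the agent must generalize across.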
cMDPs Subsuming Other Notions of Generalization Even though a lot of work in RL makes no explicit assumptions about task variations and thus generalization, there are extensions of the basic MDP models beyond cMDPs that focus on generalization. One of these is Hidden-Parameter MDPs (Doshi-Velez & Konidaris, 2016), which allows for changes in dynamics just as in cRL, but keeps the reward function fixed. The reverse is true in goal-based RL (Florensa et al., 2018), where the reward function changes, but the environment dynamics stay the same. Block MDPs (Du et al., 2019) are concerned with a different form of generalization than cMDPs altogether; instead of zero-shot policy transfer, they aim at learning representations from large, unstructured observation spaces. An alternative approach is the epistemic POMDP (Ghosh et al., 2021) as a special case of a cMDP. Here, transition and reward functions may vary, but the context is assumed to be unobservable. The corresponding approaches then model the uncertainty about which instance of the cMDP the agent is deployed on during test time.
Settings that can be described as a collection of interacting systems, e.g. multi-agent problems, can be formalized as such through Factored MDPs (Boutilier et al., 1999;Guestrin et al., 2001). The generalization target here is again not necessarily zero-shot policy transfer, but generalization with respect to one or more of the factored components, e.g. the behavior of a competing agent. Both Block MDPs and Factored MDPs are compatible with cMDPs, i.e., we can construct a Block cMDP or a Factored cMDP in order to focus on multiple dimensions of generalization (Sodhani et al., 2021a).
Apart from these MDP variations, there are also less formalized concepts in RL related to or subsumed by cMDPs. Skill-based learning, for example, relates to cMDPs (da Silva et al., 2012). Some variations of the environment, and therefore areas of the context space, will require different action sequences than others. The past experience of an agent, i.e. its memory, can also be seen as a context, even though it is rarely treated as such. Frame stacking, as is common in, e.g., Atari (Bellemare et al., 2016) accomplishes the same thing, implicitly providing context by encoding the environment dynamics through the stacked frames.
Obtaining Context Features Not every task has an easily defined or measurable context that describes the task in detail. Therefore, it is important to examine how context can be obtained in such cases. Often we can only extract very simple context features for a task. For example, even though procedural generation (PCG) based environments do not allow control over the training and test distributions, they can still be considered contextual environments since they are usually seeded. This would give us the random seed as context information (Kirk et al., 2023). However, the seed provides no semantic information about the instance it induces. Obtaining more useful context features should therefore be a focus of cRL. Learned representations provide an opportunity to do this in a data-driven manner (Jaderberg et al., 2017b; Gelada et al., 2019; Zhang et al., 2021a). As these representations encode the tasks the agent needs to solve, subspaces of the context space requiring different policies should naturally be represented differently. This idea has previously been applied to detecting context changes in continuous environments (da Silva et al., 2006; Alegre et al., 2021) and finding similar contexts within a training distribution (da Silva et al., 2012). Thus, even without readily available context, representation learning can enable the reliable availability of information relevant for generalization to tasks both in- and out-of-distribution.
Related Work
Transferring and generalizing the performance of an RL agent from its training setting to some test variation has been at the center of several sub-communities within RL. Robustness, for example, can be seen as a subcategory of generalization where variations to the context are usually kept small, and the goal is to avoid failures due to exceptions on a single task (Morimoto & Doya, 2000; Pinto et al., 2017; Mehta et al., 2019; Zhang et al., 2021d). Policy transfer is also concerned with generalization in a sense, though here the goal has often been fine-tuning a pre-trained policy, i.e., few-shot generalization instead of zero-shot generalization (Duan et al., 2016; Finn et al., 2017; Nichol et al., 2018). The goal in Multi-Task Learning is to learn a fixed set of tasks efficiently (Yu et al., 2019), not necessarily being concerned with generalizing outside of this set. Meta-Learning in RL usually aims at zero-shot policy generalization similar to Contextual Reinforcement Learning (cRL). This field is very broad, with approaches ranging from learning to learn algorithms or their components (Duan et al., 2016; Wang et al., 2017), to generating task curricula (Matiisen et al., 2020; Nguyen et al., 2021), to meta-learning hyperparameters (Runge et al., 2019; Zhang et al., 2021c).
The previously mentioned methods were not conceived with cRL in mind but use context implicitly. Many meta-RL methods, however, can or do make use of context information to guide their optimization, either directly (Klink et al., 2020;Eimer et al., 2021) or by utilizing a learnt dynamics model (Kober et al., 2012). The idea of context-aware dynamics models has also been applied to model-based RL (Lee et al., 2020b). These approaches use context in different ways to accomplish some generalization goal, e.g. zero-shot generalization to a test distribution or solving a single hard instance. In contrast, we do not propose a specific Meta-Learning method but examine the foundations of cRL and how context affects policy learning in general.
Zero-shot generalization across a distribution of contexts, specifically, has become a common goal in standard RL environments, often in the form of generalization across more or less randomly generated tasks (Juliani et al., 2019; Cobbe et al., 2020; Samvelyan et al., 2021). The larger the degree of randomness in the generation procedure, however, the less context information and control are available for Meta-Learning methods (Kirk et al., 2023). Such underspecification of tasks can even make evaluations more challenging (Jayawardana et al., 2022). In contrast to other works on generalization in RL, we therefore focus on sampling context not via PCG but from explicitly defined distributions. This allows us to analyze the capabilities of our agents in a more fine-grained manner, e.g., how far away from their training distribution generalization performance starts to decrease, instead of relying only on the test reward across all task instances. Our benchmark is styled similarly to the early generalized helicopter environment of Koppejan & Whiteson (2009), in which wind can be controlled and varied. Later, Whiteson et al. (2011) proposed an evaluation protocol for general RL to avoid overfitting on particular training environments, arguing for generalized methodologies that assess the performance of an agent on a set or distribution of environments. cMDPs easily fit into this line of evaluation protocols, with the advantage of interpretable generalization capabilities due to the definition of interpretable context features. Kirk et al. (2021) similarly propose evaluation protocols, but already with cMDPs in mind. For further ways cRL opens new directions in ongoing work, see Appendix G.
Reinforcement Learning with Context
In this section, we provide an overview of how context can influence the training of RL agents. We discuss training objectives in the contextual setting and give a brief theoretical intuition of the implications of treating generalization problems that can be modeled as cMDPs like standard MDPs. This should serve as a demonstration of why using the cMDP framework for generalization problems is beneficial and should be explored further. Note that we assume standard cMDPs in this section, meaning the context, if provided to the agent, is fully observable and reflects the true environment behavior.
Solving cMDPs
Objectives Having defined context and cMDPs, we can now attempt to solve cMDPs. To this end, we must first formulate potential objectives being addressed by cRL. In contrast to standard RL, cRL offers several different objectives for the same training setting depending on what kind of generalization we are aiming for. We can provide this objective via a target context distribution that induces the target cMDP M. Depending on the relation between target and training distributions, we can measure interpolation performance, robustness to distribution shift, out-of-distribution generalization, and more, e.g., solving a single expensive hard task by only training on easy ones. Thus, the cRL objective is defined by the relationship between train and test settings, similar to supervised learning, but on the level of tasks rather than the level of data points (see Figure 2).
Figure 2: Different train and test relationships result in different generalization tasks: interpolation between known friction levels (left), generalizing to goals further away than seen in training (middle), generalizing to the goal distances in the training distribution with lower friction (right).

Optimality Regardless of the specific objective, we solve cMDPs in the same way we would solve standard MDPs, though we need to extend the definition of the return. Instead of maximizing the expected reward over time, we use the expected reward over both time and the target context distribution. Therefore, we define optimality in a cMDP in the following way:
Definition 1 A policy π* is optimal for a given cMDP M with target context distribution p_C iff π* acts optimally on every MDP in M (i.e., maximizes the return G_{c,π} for each context c):

$$\forall c \sim p_{\mathcal{C}} : \quad \pi^{*} \in \operatorname*{arg\,max}_{\pi \in \Pi} \mathbb{E}_{\pi}\left[ G_{c,\pi} \right]$$
Note that such an optimal policy does not necessarily exist. Malik et al. (2021) showed that the problem of learning a policy π that satisfies Definition 1 may be intractable for some context distributions, even if the contexts are similar.
In order to compare policies across a given target distribution, we propose to measure the gap between optimal and actual performance, which we call the Optimality Gap OG. Formally, we define OG as the gap between the optimal return G_{c,π*} over the target context distribution and the return of the given policy π:

$$OG := \mathbb{E}_{p_{\mathcal{C}}}\left[ G_{c,\pi^{*}} - G_{c,\pi} \right] \tag{1}$$
In settings where the optimal return is known, we can directly evaluate the optimality gap as the difference between the return of a trained policy and the optimal return. In cases where the optimal return is unknown or intractable, we can instead use an agent trained on each single context as an approximation of the optimal return. This approximation has two sources of uncertainty: first, the specialized agent might not reach the best performance achievable in the MDP; second, the quality of the estimate depends on the number of context samples used for the specialized agents.
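As a concrete illustration, the sketch below computes this empirical approximation of OG. The functions `eval_specialist` and `eval_general` are assumed placeholders returning the mean episodic return on a context of an agent trained only on that context (a proxy for the optimal return) and of the general agent, respectively.

```python
import numpy as np

def estimate_optimality_gap(contexts, eval_general, eval_specialist, episodes=10):
    """Monte Carlo estimate of OG = E_{p_C}[G_{c,pi*} - G_{c,pi}].

    `contexts` is a sample from the target context distribution p_C.
    """
    gaps = [eval_specialist(c, episodes) - eval_general(c, episodes)
            for c in contexts]
    # Mean gap plus its standard error over the sampled contexts.
    return float(np.mean(gaps)), float(np.std(gaps) / np.sqrt(len(gaps)))
```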
Optimal Policies Require Context
In this section, we give an intuition and a proof sketch of why conditioning the policy not only on the state space S but also on the context space C can be beneficial. This is analogous to how a larger degree of observability enables better learning in POMDPs (Kurniawati, 2022). As a reminder, a standard RL policy is defined as a mapping π : S → A from a state observation s ∈ S to an action a ∈ A (we assume throughout that the agent acts within one MDP at a time). In contrast to standard RL, in cRL the state space S is shared among all MDPs within a cMDP, meaning that states can occur in multiple MDPs even though the transition and reward functions might differ.
3-State cMDP
A simple way to exemplify this is through the 3-state cMDP in Figure 3. For the first MDP on the left, the optimal action would be a_0, leading to state S_1 with a high reward of 10. The MDP in the middle is a variation with the same state and action space, where the transition function has changed: while the reward in state S_1 remains 10, action a_0 now leads to state S_2 with a lower reward of 1. Similarly, the MDP on the right is another variation, changing the reward function instead of the transition function: a_0 still leads to S_1, but the associated reward now is 1. An agent exposed to such changes would not be able to react appropriately unless the policy is conditioned on the context c_i.
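The snippet below enumerates this example numerically; the reward of the action not described in the text for each context is an assumption chosen to be consistent with Figure 3.

```python
# Each context maps action a0/a1 taken in S0 to (next_state, reward),
# mirroring the three MDPs in Figure 3.
contexts = {
    "c1": {"a0": ("S1", 10), "a1": ("S2", 1)},  # original MDP
    "a2_changed_T": {"a0": ("S2", 1), "a1": ("S1", 10)},  # changed transitions
    "c3_changed_R": {"a0": ("S1", 1), "a1": ("S2", 10)},  # changed rewards
}

best = {c: max(acts, key=lambda a: acts[a][1]) for c, acts in contexts.items()}
print(best)  # {'c1': 'a0', 'a2_changed_T': 'a1', 'c3_changed_R': 'a1'}
# No single action is optimal across all contexts, so a policy pi(s) that
# ignores the context cannot be optimal, while pi(s, c) can pick best[c].
```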
In contrast, a context-conditioned policy can distinguish between the contexts and thus receives more guiding feedback during training. It can also be better equipped to act optimally at test time, given an approximation of the context. We define context-conditioned policies as follows:
Definition 2 A context-conditioned policy is a mapping π : S × C → A with state space S, context space C and action space A.
In order to formalize this intuitive explanation of why context is helpful in generalization, let us first recall what optimal performance means in cMDPs. Definition 1 requires that there exists a policy π * that is optimal on every context. We will now show that optimal context-oblivious policies like this do not exist in general, but that to obtain optimality, the policy needs to have access to the context.
Proposition 1 For a given cMDP M = {M_{c_1}, M_{c_2}} defined over two possible contexts c_1 and c_2, there is either a common (context-oblivious) optimal policy π* for both contexts, or there is at least one conflict state s′ at which the optimal policy differs between the contexts.

Proof Sketch Let us look at a given state s ∈ S that is reachable in M_{c_1}. Further, let us assume π*_{c_1} is the optimal policy for M_{c_1} that is defined only on states reachable in M_{c_1}. We have to consider the following three possible cases for each state s ∈ S:

(i) s is reachable in M_{c_2} and π*_{c_1}(s) is not optimal on M_{c_2};
(ii) s is reachable in M_{c_2} and π*_{c_1}(s) is optimal on M_{c_2};
(iii) s is not reachable in M_{c_2}.
If (i) is true for at least one state s ∈ S, the optimal policy obviously differs between the contexts in this state, and we have therefore found a conflict state s′. The other two cases do not produce such a conflict. We can, however, construct a policy π* that is optimal on M_{c_1} and M_{c_2} from π*_{c_1} if for all s ∈ S either (ii) or (iii) is true. For any state s where (ii) holds, we simply set π*(s) = π*_{c_1}(s). For states s that are not reachable in M_{c_1} as stated in (iii), π*_{c_1} is not defined. We can therefore extend π* by these states without changing its optimality on M_{c_1}. Let a* be the optimal action in such s on M_{c_2}. Then, we define π*(s) = a*. By construction, π* is then optimal on all states s ∈ S, and it exists iff there is no state reachable in both contexts where the optimal action for M_{c_1} differs from the one for M_{c_2}. ■

Theorem 1 An optimal policy π* for any given cMDP M is only guaranteed to exist if it is conditioned on the context: π : S × C → A.
Proof Sketch Let us assume we know an optimal policy π*_c for any context c ∈ C. As c induces an MDP and with it a corresponding optimal policy, we know that π*_c exists. Furthermore, let the sets of optimal policies between at least two MDPs M_{c_1} and M_{c_2} be disjoint. This means no policy exists that is optimal on both c_1 and c_2. Now let us examine the optimal policy π* and assume that it exists for this cMDP M. By definition, π* is optimal on c_1 and c_2. If it is only conditioned on the state, π*(s) results in the same action independent of the context. Because the sets of optimal policies for c_1 and c_2 are disjoint, there must be at least one state s′ where π*_{c_1}(s′) ≠ π*_{c_2}(s′) according to Proposition 1. As both are optimal in their respective contexts but not in the other and do not result in the same action for s′, π* cannot actually be optimal for both c_1 and c_2. Thus, the optimal policy π* does not exist. If, on the other hand, we can condition the policy on the context, such that π : S × C → A, we can circumvent this problem (for a discussion on how this relates to partial observability, see Appendix A): π*(s, c) = π*_c(s) is optimal for each context. ■

Discussion Theorem 1 raises the question of how performance changes if the policy is not conditioned on the context. Apart from the fact that we can construct cMDPs where a policy not conditioned on the context may perform arbitrarily poorly, we intuitively expect the optimality gap to grow the broader p_C becomes and the more impact slight changes in c have on the transitions and rewards. Formally assessing the optimality gap OG is another challenge in itself. Another question is how relevant the assumption of disjoint sets of optimal policies for different contexts is in practice. In the case where the observations implicitly encode the context, a single policy can solve different contexts optimally without ever encountering the conflict above. Presumably, this is how generalization is commonly handled in RL. However, we deem this not to be a reliable mechanism for avoiding conflicts between context-optimal policies on the same observations. In conclusion, depending on the environment and on how context reflects on the observations, it might be beneficial to explicitly include context, especially on harder and more abstract generalization tasks.
The CARL Benchmark Library
To analyze how the context and its augmentation influence the agent's generalization capabilities, learning, and behavior, we propose CARL: a library for Context Adaptive Reinforcement Learning benchmarks following the Contextual Reinforcement Learning formalism. In our release of CARL benchmarks, we include and contextually extend classic control and box2d environments from OpenAI Gym (Brockman et al., 2016), Google Brax' walkers (Freeman et al., 2021), a selection from the DeepMind Control Suite (Tassa et al., 2018), an RNA folding environment (Runge et al., 2019) as well as Super Mario levels (Awiszus et al., 2020;Schubert et al., 2021), see Figure 4.
Benchmark Categories
Often the physics simulations (brax, box2d, classic control and dm control) define a dynamic body in a static world, parameterized by common physical quantities such as gravity, the geometry of the moving body, and mass. In our example CARLFetch from Figure 1a, the goal is to move the agent to the target area. The context features joint stiffness, gravity, friction, (joint) angular damping, actuator strength, and torso mass, as well as target radius and distance, define the context and influence the exact instantiation and dynamics of the environment. In principle, the designer is free to select the context features from the set of parameters defining the environment; for Fetch, this could also be the limb length of the body. For practicality, we choose to vary the most common physical attributes across the environments. When selecting an environment's parameter to become a context feature, it must be guaranteed that the physics and the environment's purpose are not violated, e.g., by setting a negative gravity such that the body flies up and is never able to reach the goal. Please see Appendix H for all registered context features per environment.
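As a sketch of how such context sets can be specified, the snippet below instantiates a CARL environment with a handful of explicit contexts. The exact class name, import path, and keyword arguments are assumptions based on the library's documented design and may differ between versions.

```python
import numpy as np
from carl.envs import CARLPendulumEnv  # assumed import path

# Each context assigns values to registered context features; unspecified
# features keep their defaults. Here: eight pendulum lengths from U(0.5, 2.2).
rng = np.random.default_rng(0)
contexts = {i: {"l": float(l)} for i, l in enumerate(rng.uniform(0.5, 2.2, 8))}

env = CARLPendulumEnv(contexts=contexts)  # the env cycles through contexts
obs = env.reset()
```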
Figure 4: The CARL benchmarks
Besides physical simulation environments, CARL provides two more specific, challenging environments. The first is the CARLMarioEnv environment built on top of the TOAD-GAN level generator (Awiszus et al., 2020;Schubert et al., 2021). It provides a procedurally generated game-playing environment that allows customization of the generation process. This environment is therefore especially interesting for exploring representation learning for the purpose of learning to better generalize. Secondly, we move closer to real-world application by including the CARLRNADesignEnvironment (Runge et al., 2019). The challenge here is to design RNA sequences given structural constraints. As two different datasets of structures and their instances are used in this benchmark, it is ideally suited for testing policy transfer between RNA structures.
Properties of Benchmarks
While the categorization of the CARL benchmarks above provides an overview of the kinds of environments included, we also discuss them in terms of relevant environment attributes that describe the nature of their problem setting, see Figure 5.
State Space Most of our benchmarks have vector-based state spaces, allowing context information to be concatenated to the state. Their sizes range from only two state variables in the CARLMountainCar environments to 299 for the CARLHumanoid environment. The notable exceptions are CARLVehicleRacing and CARLToadGAN, which exclusively use pixel-based observations.

Action Space We provide both discrete and continuous environments, with six requiring discrete actions and the other 14 continuous ones. The number of actions ranges from a single action to 19 different actions.
Quality of Reward We cover different types of reward signals with our benchmarks, ranging from relatively sparse step-penalty-style rewards, where the agent only receives a reward of −1 each step, to complex composite reward functions in, e.g., the Brax-based environments. The latter type is quite informative, providing updates on factors like movement economy and progress toward the goal, whereas the former does not let the agent distinguish between transitions without looking at the whole episode. Further examples of sparse rewards are the CARLCartPoleEnv and CARLVehicleRacingEnv.
Context Spaces While the full details of all possible context configurations can be seen in Appendix H, for brevity we only discuss here the differences between context spaces and the configuration possibilities they provide. Depending on the environment, the context features have different influences on the dynamics and the reward. Of all 145 registered context features, 99% influence the dynamics, meaning that changing such a context feature affects, and likely changes, the transition from states to their successors. Only 4% of the context features shape the reward. Most context features (91%) are continuous; the rest are categorical or discrete.
Summary Comparing our benchmarks along these attributes, we see a wide spread in most of them (Figure 5). CARL focuses on popular environments and will grow over time, increasing the diversity of benchmarks. Already now, CARL provides a benchmarking collection that tasks agents with generalization in addition to the problem solving most common in modern RL, while providing a platform for reproducible research.
Experiments
Having discussed the framework of cRL and the implications of context in training, we now study several research questions regarding the empirical effects of context: (i) How much does varying context influence performance? Can agents compensate across context variations in a zero-shot manner? (ii) Can we observe the effects discussed in Section 4 in practice? I.e., is there an observable optimality gap on cMDPs, does the context visibility influence the performance and which role does the width of the context distribution play? (iii) How can we assess generalization performance, and how does the test behavior of agents change when exposed to the context information?
To explore our research questions, we use our benchmark library CARL. Details about the hyperparameter settings and used hardware for all experiments are listed in Appendix C. In each experiment, if not specified otherwise, we train and evaluate on 10 different random seeds and a set of 128 uniformly sampled contexts. All experiments can be reproduced using the scripts we provide with the benchmark library at https://github.com/automl/CARL.
How Does Varying Context Influence Performance?
To get an initial understanding of the generalization capabilities, we train a well-known SAC agent (Haarnoja et al., 2018) on the default version of the Pendulum (Brockman et al., 2016) environment. Pendulum is a very simple environment (see Appendix B for the dynamic equations) compared to the majority of RL benchmarks and has been considered solved by deep RL for years. However, we show that we can increase the difficulty of this environment substantially when varying even single context features. Note that this increase in difficulty also means the best achievable return may decrease even for an optimal agent, in extreme cases making instances impossible to solve. The agent is not provided with any explicit information about the context, i.e., it is context-oblivious. Then, for evaluation, we vary each defined context feature by magnitudes A = 0.1, 0.2, . . . , 2.2 times the default value for 10 test episodes. In Figure 6, we plot the empirical cumulative distribution functions (eCDF) of the return, showing the range of observed returns on the x-axis and the proportion on the y-axis. The further to the right the curve is, the better the observed returns. For the eCDF plots of other CARL environments, see Appendix D.1.
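A minimal sketch of this sweep, assuming a `rollout` placeholder that runs the trained, context-oblivious agent in an environment with one modified context feature and returns the episodic returns:

```python
import numpy as np

def ecdf(returns):
    """Empirical CDF points of a list of returns, as plotted in Figure 6."""
    xs = np.sort(np.asarray(returns, dtype=float))
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

def sweep_feature(rollout, feature, default, episodes=10):
    """Evaluate the trained agent on A * default for A = 0.1, ..., 2.2."""
    per_magnitude = {}
    for a in np.arange(0.1, 2.3, 0.1):
        value = float(a * default)
        per_magnitude[round(float(a), 1)] = rollout(feature, value, episodes)
    return per_magnitude  # pooled returns per magnitude; feed into ecdf(...)
```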
First and foremost, we observe that some context features do not influence the generalization performance when varied, see Figure 6. Even in this zero-shot setting, the agent performs similarly well on all context variations of the initial_angle_max and initial_velocity_max features. Yet, the agent's performance is very brittle with respect to other context features, i.e., max_speed, the simulation timestep dt, gravity g, and length l. The trained agent cannot compensate for the effects of the context and thus performs poorly on several context variations. It is worth noting that the shape of the performance curve across the variations depends on the context feature. max_speed, for example, is hard for values below A = 0.7, but then the challenge for the agent decreases abruptly. This transition is smoother for l, where both very small and very large values are hard to solve. We conclude that it is not straightforward to estimate the impact of changing the environment on agent behavior, especially for physics simulations or similarly complex environments. We also clearly see that zero-shot generalization cannot compensate for variations in environment dynamics. Context variations introduce a significant challenge, even in a simple environment like Pendulum. The next step, therefore, is to train the agent on varying contexts and re-evaluate its generalization performance.
Does the Optimality Gap Exist in Practice?
In the previous section, we saw that context variation heavily influences the test performance of a standard agent. Here, we take a closer look and connect this to the Optimality Gap OG (Equation (1)). In order to demonstrate how significant this gap is in practice, we train a C51 agent (Bellemare et al., 2017) on our contextually extended CartPole environment (Brockman et al., 2016) as well as a SAC agent (Haarnoja et al., 2018) on Pendulum. As above, we use simple environments to demonstrate the difficulty induced by context variation.
To generate instances of both environments, we vary the pole length across a uniform distribution p_C = U(0.25, 0.75) around the standard pole length for CartPole, and across p_C = U(1, 2.2) for Pendulum. For training, we sample 64 contexts from this distribution and train a general agent that experiences all contexts during training in a round-robin fashion. However, we do not explicitly provide the context information to the agent. We approximate the optimal performance by training a separate specialized agent on each context. Afterwards, each agent is evaluated for 10 episodes on each context it was trained on.
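The instance generation and round-robin schedule can be sketched as follows; `train_episode` is a placeholder for running one training episode of the respective agent on a given context.

```python
import numpy as np

rng = np.random.default_rng(42)
cartpole_lengths = rng.uniform(0.25, 0.75, size=64)  # p_C for CartPole
pendulum_lengths = rng.uniform(1.0, 2.2, size=64)    # p_C for Pendulum

def train_general_agent(agent, contexts, n_episodes, train_episode):
    """Round-robin over contexts; the agent is not told which one is active."""
    for ep in range(n_episodes):
        context = contexts[ep % len(contexts)]
        train_episode(agent, context)

# The specialized baselines are instead trained with contexts=[c] for each
# single c, approximating the optimal per-context return used in OG.
```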
Comparing the general and specialized agents, we see a difference of at least 30 reward points in median, mean and estimated IQM performance (as proposed by Agarwal et al. (2021)) for CartPole and a smaller, but similar effect on Pendulum. While this is not a huge decrease in performance, looking at how these rewards are distributed across the evaluation instances shows that the general agent solves significantly fewer instances than the specialized agent with a decrease of around 40% of finished episodes on CartPole. This shows that while most instances can be solved by the agent one at a time, training an agent that solves all of them jointly is a significant challenge.
Does Access To Context Improve Training?
We have seen that agents without access to context information are not always able to entirely solve even simple contextual environments. Can these agents improve with access to the context information? We choose a very simple approach of adding either the whole context (concat all) or only the actively changing context feature (concat non-static) to the observation. This is obviously a simplistic approach that, in the case of concat all, significantly alters the size of the observation. Yet even with this simple idea, training performance improves by a large margin in some cases, see Figure 8. Here, the agent, this time on CARLDmcWalker with changing viscosity (∼ U[1, 2.5] times the default value), learns faster, is more stable, and reaches a higher final performance with the additional information added to the observation. In testing, we see significantly more failures in the hidden agent compared to the concat ones, with only the concat non-static agent learning to solve the contextual environment without a large decrease in overall test performance. Effective generalization seems to be a matter of a reasonable feature set, similar to supervised learning.
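This "concat" conditioning can be sketched as a standard observation wrapper. The `context` attribute used to read the active context is an assumption (CARL handles this internally), and updating `observation_space` is omitted for brevity.

```python
import gym
import numpy as np

class ConcatContextWrapper(gym.ObservationWrapper):
    """Append selected context feature values to the state observation."""

    def __init__(self, env, context_keys):
        super().__init__(env)
        # e.g. context_keys=["viscosity"] for concat non-static,
        # or all registered feature names for concat all.
        self.context_keys = list(context_keys)

    def observation(self, obs):
        # Assumes the wrapped env exposes the active context as a dict.
        ctx = np.array([self.env.context[k] for k in self.context_keys],
                       dtype=np.float32)
        return np.concatenate([np.asarray(obs, dtype=np.float32), ctx])
```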
On Pendulum, however, this is not the case; we see no meaningful difference in the mean performance of concat (non-static) and hidden when varying length (∼ U[1, 2.2] times the default value). Please note that the large confidence interval stems from runs where the algorithm did not reach meaningful performance (see Appendix Figure 11). The concat agents perform better on some unseen contexts in evaluation, though the hidden agent is far superior on a slice of the instance set.
In both cases, the generalization behavior of contextual agents is different from the hidden one. Most CARL environments show that additional context information is indeed beneficial for train and test performance as in Walker (see Appendix E for full results). Additionally, the performances of concat all and concat non-static agents provide no clear pattern as to how many context features should be provided in training.
We conclude that while context information is indeed useful for generalization performance, simply appending the context to the observation might not be the ideal way to communicate context features. Instead, context embeddings could be a more potent way of capturing the way a context feature changes an environment, as we have seen in some prior work on incorporating goals (Sukhbaatar et al., 2018;Liu et al., 2022) into training. Since our goal was to show the potential of context information, we leave it to future work to investigate better representations of context features.
How Far Can Agents Learn To Act Across Contexts?
As we saw different evaluation behaviors from hidden and visible agents in the last section, we want to further investigate their generalization capabilities in- and out-of-distribution. To this end, we follow a three-mode evaluation protocol for Contextual Reinforcement Learning that tests the agent's interpolation capabilities and out-of-distribution generalization under different training distribution shapes (Kirk et al., 2023). This is in contrast to PCG environments, where we cannot define evaluation protocols and instead have to rely on the given instance generation procedure.

Figure 9: In- and Out-of-Distribution Generalization on CartPole. We vary the pole length and update_interval. The blue polygon marks the train context area. Black dots mark gaps in the context space due to random sampling. First row: context-oblivious agent; second row: concat.
We define train and test distributions for each dimension of the context space individually, allowing us to study different relationships between train and test settings. If at least parts of the test context are within the train distribution, we speak of interpolation; if the whole context is outside, of extrapolation. By choosing two context features and defining uniform train distributions on both, we construct a convex training set in the context feature space (mode A). The context feature distributions can also be defined to allow only a small variation (mode B) or a single value per feature (mode C), creating non-convex train sets. Thus, the convex hull of such a non-convex set tests combinatorial interpolation.
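One way the three modes could be instantiated for a two-dimensional context space is sketched below; the bounds are illustrative (not those of Figure 9), and the shape of the non-convex sets follows our reading of the protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_train_contexts(mode: str, n: int = 1000) -> np.ndarray:
    """Sample (pole_length, update_interval) train contexts per protocol mode.

    Mode A: uniform box over both features (convex train set).
    Mode B: full range on one feature, small variation on the other, and
            vice versa (union of two thin bands, non-convex).
    Mode C: full range on one feature, a single default on the other
            (union of two axis-aligned lines, non-convex).
    """
    lo, hi = np.array([0.4, 0.01]), np.array([0.6, 0.03])  # illustrative bounds
    default = np.array([0.5, 0.02])
    if mode == "A":
        return rng.uniform(lo, hi, size=(n, 2))
    half = n // 2
    samples = np.tile(default, (n, 1))
    samples[:half, 0] = rng.uniform(lo[0], hi[0], half)      # vary feature 0
    samples[half:, 1] = rng.uniform(lo[1], hi[1], n - half)  # vary feature 1
    if mode == "B":                                          # widen the lines
        samples[:half, 1] += rng.uniform(-0.002, 0.002, half)
        samples[half:, 0] += rng.uniform(-0.02, 0.02, n - half)
    return samples
```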
To demonstrate this, we again choose contextual CartPole from CARL and the C51 agent (Bellemare et al., 2017), which is known to perform well on it. We train the agent for 100 000 timesteps and vary the update_interval and the pole length in the environment, once without access to the context (hidden) and once concatenating pole length and gravity to the observation (concat). We repeat this with 10 random seeds and 5 test episodes per context. We sample 1000 contexts each from the train and test distributions defined in the evaluation protocol (see Figure 9). The test performances are discretized and aggregated across seeds by the bootstrapped mean using rliable (Agarwal et al., 2021).
In Figure 9, we show that both hidden (context-oblivious) and visible (concatenate) agents perform fairly well within their training distribution for evaluation mode A and even generalize to fairly large areas of the test distribution, more so for concat. Large update intervals combined with extreme pole lengths proves to be the most challenging area.
Interestingly, the concat agent is able to solve more of the large update intervals than the hidden agent, which is most pronounced on train distribution C. This is counterintuitive: we would expect that the larger the train distribution, the better the out-of-distribution generalization. The hidden agent in general performs well for low update intervals. This resonates with the intuition that smaller update intervals are easier because there is more granularity (and time) to react to the current state. In addition, we provide results for varying the update interval and the gravity. Here, the results are similar but subdued (see Appendix Figure 12).
Varying the pole length together with the gravity paints a different picture (Appendix Figure 13). In this case, the hidden agent performs much better. We suspect that the effects of gravity and pole length cancel out, so that context information is not needed to learn a meaningful policy. These three variations again show that providing context by concatenation can be helpful, but not in every case, demanding further investigation of alternatives. Finally, neither agent shows reliable combinatorial interpolation performance, let alone out-of-distribution generalization. We see here a major open challenge for the RL community, in addressing which CARL will support the development and precise study of RL generalization capabilities.
Conclusion
Towards our goal of creating general and robust agents, we need to factor in possible changes in the environment. We propose modeling these changes with the framework of contextual Reinforcement Learning (cRL) in order to better reason about the demands it introduces to agents and the learning process, specifically regarding the suboptimal nature of conventional RL policies in cRL. With CARL, we provide a benchmark library which contextualizes popular benchmarks and is designed to study generalization in Contextual Reinforcement Learning. It allows us to empirically demonstrate that contextual changes disturb learning even in simple settings and that the final performance and the difficulty correlate with the magnitude of the variation. We also verify that context-oblivious policies are not able to fully solve even simple contextual environments. Furthermore, our results suggest that exposing the context to agents even in a naive manner impacts the generalization behavior, in some cases improving training and test performance compared to non-context-aware agents. We expect this to be a first step towards better solution mechanisms for contextual RL problems and therefore one step closer to general and robust agents.
Broader Impact Statement
We foresee no new direct societal and ethical implications other than the known concerns regarding autonomous agents and RL (e.g., in a military context).
Appendix
A Partial Observability in cMDPs
Discussing the visibility of context for an agent can be linked to the partial observability we see in POMDPs. We believe it is useful to differentiate between the visibility of context and that of state features or observations, as both serve different functions in a cMDP: the state features describe the current state, while the context describes the current MDP. Making either only partially observable should therefore influence the learning dynamics in different ways. We thus define a cMDP as a special case of a POMDP, analogous to Kirk et al. (2023).
B Pendulum's Dynamic Equations
Figure 10: CARLPendulumEnv.

Because we use CARLPendulumEnv, which embeds gym's Pendulum (Brockman et al., 2016), for our task variation experiment (see Section 6.1), we provide the dynamic equations to show the simplicity of the system. The state and observation consist of the angular position θ and the angular velocity of the pendulum. The discrete-time equations defining the behavior of the environment are as follows:
$$\dot{\theta}_{k+1} = \dot{\theta}_{k} + \left( -\frac{3g}{2l} \sin(\theta_{k} + \pi) + \frac{3}{m l^{2}} u_{k} \right) \Delta t$$
$$\theta_{k+1} = \theta_{k} + \dot{\theta}_{k+1} \, \Delta t$$
Here, k is the index of the iteration/step. The dynamic system is parametrized by the context, which consists of the gravity g, the length l and mass m of the pendulum, and the timestep ∆t; u_k is the control input. Figure 10 shows how Pendulum is embedded in CARL.
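The update rule translates directly into code. The defaults below match gym's Pendulum, and the keyword arguments are exactly the quantities varied as context (gym additionally clips the angular velocity to max_speed, which we omit here).

```python
import numpy as np

def pendulum_step(theta, theta_dot, u, g=10.0, l=1.0, m=1.0, dt=0.05):
    """One discrete step of the Pendulum dynamics under context (g, l, m, dt)."""
    theta_dot_next = theta_dot + (-3 * g / (2 * l) * np.sin(theta + np.pi)
                                  + 3 / (m * l ** 2) * u) * dt
    theta_next = theta + theta_dot_next * dt
    return theta_next, theta_dot_next

# Example: the same control input has a weaker effect under stronger gravity.
print(pendulum_step(theta=0.1, theta_dot=0.0, u=1.0))
print(pendulum_step(theta=0.1, theta_dot=0.0, u=1.0, g=15.0))
```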
C Hyperparameters and Hardware
Hyperparameters and Training Details We implemented our own agents using coax (Holsheimer et al., 2023) with hyperparameters specified in Table 1. All experiments can be reproduced using the scripts we provide with the benchmark library at https://github.com/automl/carl.
Hardware All experiments on all benchmarks were conducted on a slurm CPU and GPU cluster (see Table 2). On the CPU partition there are 1592 CPUs available across nodes.
D Additional Experimental Results
In this section, we provide additional information and results for our experiments section (section 6).
D.1 Task Variation Through Context
Following the experimental setup in Section 6.1, we conducted further experiments on representative CARL environments.
D.2 Adding Context to the State
When we concatenate all available context features to the observation for CARLPendulum, we often see that the algorithm fails to learn a meaningful policy on some seeds, see Figure 11.
Figure 11: CARLPendulum with different lengths and 20 seeds. Train performance.
D.3 Generalization Results
Here we provide two more combinations for CARLPendulum for the Kirk generalization protocol (Kirk et al., 2021) from Section 6.4 (same experimental setup).
E Baselines
Here we provide baselines for selected environments in CARL. For each environment, we conduct the following experiments. First, we train a default agent on the default environment, i.e., with no context variation. This agent is then evaluated on variations of single context features with magnitudes A ∈ {0.1, 0.2, . . . , 2.2}. This creates an initial performance profile and shows how sensitive the agent is to which context feature. We plot this as an eCDF; the further right the curves are, the more high returns the agent achieves. This plot features no aggregation. After that, we train an agent with context variation. For this, we select a context feature with a visible impact on performance and determine its range by including magnitudes for which the environment is still solvable, i.e., for which returns above failure level can be reached. We train the agent with varying context visibility: once with the varying context feature hidden, once with the complete context set concatenated to the state, and once with only the changing context feature concatenated. We plot the training curve. Finally, we test the agents trained on context variation on the same number of contexts sampled from the train distribution and report the return histogram and return statistics. If not specified otherwise, we perform each experiment with 10 seeds and 128 training contexts.
We report on classic control (CARLPendulumEnv: Figure 14, CARLMountainCarEnv: Figure 15, CARLCartPoleEnv: Figure 16 and CARLAcrobotEnv: Figure 17) and box2d (CARLLunarLanderEnv: Figure 18).

Figure 19: CARLMarioEnv: Evaluation of the PPO agent on 16 TOAD-GAN training levels and 16 different test levels. The episode completion indicates the average distance from the level start that the agent is able to reach.
E.1 Context-Conditioning on CARLMarioEnv
We train the Proximal Policy Optimization (PPO) agent on 16 distinct levels of the CARLMarioEnv environment and evaluate its performance on another 16 different levels for 10 seeds. In the context-aware variant of the agent, both the policy and value functions are conditioned on the provided context. Unlike the context-aware agent, the hidden agent only receives RGB frames from the environment as input. The context is encoded using a convolutional encoder and integrated into the hidden state representation, which is then fed into the policy and value heads. The context itself consists of the noise tensor from which TOAD-GAN (Awiszus et al., 2020; Schubert et al., 2021) generated the Super Mario Bros. levels. To ensure playability, the generated levels are filtered using static analysis.
As shown in Figure 19, the training performance of the context-aware agent and the agent without context is nearly identical. However, when evaluated, the context-aware agent outperforms the agent without context, demonstrating its ability to effectively incorporate the noise map context into its policy. This observation underscores the value of the CARL benchmark in driving research on context representation.
E.2 Test Performance on Context Variations
In this section, we show the test performance of an agent that was trained on the default context of additional selected environments and is oblivious to the context. For evaluation, we run 10 episodes on contexts with different magnitudes of variation. We vary each context feature by a magnitude A = 0.1, 0.2, . . . , 2.2 times the default value.
F Hyperparameter Optimization in cRL
We have observed significant differences in learning performance between hidden and fully visible context, and the same is true for hyperparameter tuning in both of these settings. To show this point, we use the same DQN and DDPG algorithms as in our other experiments with a narrow context distribution of 0.1 for the CARL Pendulum, Acrobot and LunarLander environments. To tune the hyperparameters, we use PB2 (Parker-Holder et al., 2020) for the learning rate, target update interval and discount factor.
As shown in Figure 24, the evaluation performances of the found hyperparameter schedules differ significantly in terms of learning speed, stability and results per environment. Providing the context sometimes seems to increase the difficulty of the problem (see Section 6.4), i.e., finding a good hyperparameter configuration happens more often and more reliably when the policy is not given the context. We can only speculate on the reasons why this happens, but it shows that context introduces complexities to the whole training process beyond the policy architecture alone.
G Open Challenges in cRL
We used the concept of Contextual Reinforcement Learning and its instantiation in CARL to demonstrate the usefulness of context information in theory and in practice. More specifically, we showed that making such information about the environment explicitly available to the agent enables faster training and transfer of agents (see Section 6). While this already provides valuable insights to a community that increasingly cares about learning agents capable of generalization (see Sections 1 & 3), Contextual Reinforcement Learning, and by extension CARL, enables the study of further open challenges for general RL.
G.1 Challenge I: Representation Learning
Our experiments demonstrated that an agent with access to context information can be capable of learning better than an agent that has to learn behaviors given an implicit context via state observations, but the naive method of including context information in the observation is not reliable. We theorize that disentangling the representation learning aspect from the policy learning problem reduces complexity. As CARL provides ground truth for representations of environment properties, we envision future work on principled studies of novel RL algorithms that, by design, disentangle representation learning and policy learning (see, e.g., Rakelly et al., 2019; Fu et al., 2021a; Zhang et al., 2021b, as first works along this line of research). The ground truth given by the context would allow us to measure the quality of learned representations and to relate it to the true physical properties of an environment.
Another direction of research under the umbrella of representation learning follows the work on environment probing policies (Zhou et al., 2019). There, exploratory policies are learned that allow one to identify which environment type an agent encounters. This is complementary to the prior approaches, as representations are not learned jointly with the behavior policies, as in the previously discussed approaches, but rather in a separate offline phase. Based on CARL, huge amounts of meta-data could be collected that will enable the
community to make use of classical meta-algorithmic approaches such as algorithm selection (Rice, 1976) for selecting previously learned policies or learning approaches.
G.2 Challenge II: Uncertainty of RL Agents
With access to context information, we are able to study the influence of noise on RL agents in a novel way. While prior environments enabled studies of the behavior of agents that could not be certain about their true state in a particular environment, the framework of Contextual Reinforcement Learning further allows studying agents' behaviors in scenarios with uncertainty about their current contextual environment, e.g., because of noise on the context features. In the practical deployment of RL, this is a reasonable concern, since context features have to be measured somehow by potentially noisy sensors. As this setting affects the overall transition dynamics, Contextual Reinforcement Learning provides a unique test-bed in which the influence of uncertainty, and how RL agents can deal with it, can be studied. This line of research can also be combined with the work on unsupervised RL (Laskin et al., 2021; Schubert et al., 2023), where the agent learns a good policy initialization during an unsupervised pretraining phase, followed by a finetuning phase where the agent optimizes an external reward. CARL enables researchers to either guide the pretraining process via uncertainty measures that are based on the context information or evaluate the robustness of finetuned policies under context noise.
G.3 Challenge III: Interpretable and Explainable Deep RL
Trust in the policy is a crucial factor, for which interpretability or explainability is often mandatory. With the ground truth provided through the explicit use of context features, Contextual Reinforcement Learning could form the basis for studying the interpretability and explainability of (deep) RL. By enabling AutoRL studies and different representation learning approaches, Contextual Reinforcement Learning will contribute to better interpreting the training procedures.
Contextual Reinforcement Learning further allows studying explainability on the level of learned policies. We propose to study the sensitivity of particular policies to different types of contexts. Thus, the value and variability of a context might serve as a proxy to explain the resulting learned behavior. Such insights might then be used to predict how policies might look or act (e.g., in terms of frequency of action usage) in novel environments, solely based on the provided context features.
G.4 Challenge IV: AutoRL
AutoRL (Parker-Holder et al., 2022) addresses the optimization of the RL learning process. To this end, hyperparameters, architectures or both of agents are adapted either on the fly (Jaderberg et al., 2017a;Franke et al., 2021) or once at the beginning of a run (Runge et al., 2019). However, as AutoRL typically requires large compute resources for this procedure, optimization is most often done only on a per-environment basis. It is reasonable to assume that such hyperparameters might not transfer well to unseen environments, as the learning procedures were not optimized to be robust or to facilitate generalization, but only to improve the reward on a particular instance.
As we have shown above in Appendix F, Contextual Reinforcement Learning provides an even greater challenge for AutoRL methods. On the other hand, as CARL provides easy-to-use contextual extensions of a diverse set of RL problems, it could be used to drive research in this open challenge of AutoRL. First of all, it enables a large-scale study of how static and dynamic configuration approaches complement each other and when one approach is to be preferred over another. Such a study will most likely also lead to novel default hyperparameter configurations that are more robust and tailored to fast learning and good generalization. In addition, it will open up the possibility to study whether it is reasonable to use a single hyperparameter configuration or whether a mix of configurations for different instances is required (Xu et al., 2010). Furthermore, with the flexibility of defining a broad variety of instance distributions for a large set of provided context features, experiments with CARL allow researchers to study which hyperparameters play a crucial role in learning general agents, similar to studies done for supervised machine learning (van Rijn & Hutter, 2018) or AI algorithms (Biedenkapp et al., 2018).
G.5 Challenge V: High Confidence Generalization
The availability of explicit context enables tackling another challenge in the field of safe RL. High Confidence Generalization Algorithms (HCGAs) (Kostas et al., 2021) provide safety guarantees for the generalization of agents in testing environments. Given a worst-case performance bound, the agent can be tasked to decide whether a policy is applicable in an out-of-distribution context or not. This setting is especially important for the deployment of RL algorithms in the real world where policy failures can be costly and the context of an environment is often prone to change. Contextual Reinforcement Learning has the potential to facilitate the development of HCGAs that base their confidence estimates on the context of an environment.
H Context Features for Each Environment
We list all registered context features with their defaults, bounds and types for each environment family in Table 3 (classic control), Table 4 (box2d), Table 5 (brax) and Table 7 (RNA and Mario).
Figure 3: A sample cMDP with three contexts: the original one (left), one changing the transition function (middle) and another changing the reward function (right).
Figure 5: Characteristics of each environment family, showing the action space size, state space size (log scale), number of context features (n_cf), the number of context features directly shaping the reward (n_cf,reward) and the ones changing the dynamics (n_cf,dynamics). All axes are scaled to the global extrema, and the state space size is additionally on a logarithmic scale.
Figure 6: CARLPendulumEnv: eCDF plot. A is the magnitude multiplied with the default value of each context feature; A = 1.0 refers to the standard environment.
Figure 7: Optimality Gap on CARLCartPole (left) and CARLPendulum (right).
Figure 8: Train (line plot) and test performance (histogram) of agents with visible and hidden context on CARLDmcWalker with different viscosity values and 5 seeds (left) and CARLPendulum with different lengths and 20 seeds (right). Shown is the mean performance with 95% confidence interval and testing across 200 test contexts (metrics are computed using stratified resampled bootstrapping (Agarwal et al., 2021)).
Figure 14: CARLPendulumEnv: Benchmark. Algorithm sac, 10 seeds. (a): Default agent (trained on default context and context-oblivious) evaluated on context variations (eCDF plot; A is the magnitude multiplied with the default value of each context feature). (b-d): Training with varying context visibility; evaluation on the train distribution (histogram and statistics). We vary the context feature(s) ['l'] (A ∼ U(0.5, 2.2)).

Figure 15: CARLMountainCarEnv: Benchmark. Algorithm c51, 10 seeds. (a): Default agent evaluated on context variations (eCDF plot). (b-d): Training with varying context visibility. We vary the context feature(s) ['gravity'].

Figure 16: CARLCartPoleEnv: Benchmark. Algorithm c51, 10 seeds. (a): Default agent evaluated on context variations (eCDF plot). (b-d): Training with varying context visibility. We vary the context feature(s) ['pole_length'].

Figure 17: CARLAcrobotEnv: Benchmark. Algorithm c51, 10 seeds. (a): Default agent evaluated on context variations (eCDF plot). (b-d): Training with varying context visibility. We vary the context feature(s) ['link_mass_2'] (A ∼ U(0.1, 2.2)).
Figure 18: CARLLunarLanderEnv: Benchmark. Algorithm c51, 10 seeds. (a): Default agent (trained on default context and context-oblivious) evaluated on context variations. (b-d): Training with varying context visibility. We vary the context feature(s) ['GRAVITY_Y'] (A ∼ U(0.1, 2.2)).
Figure 20: CARLBipedalWalkerEnv: eCDF plot. A is the magnitude multiplied with the default value of each context feature.

Figure 21: CARLDmcFishEnv: eCDF plot. A is the magnitude multiplied with the default value of each context feature.
Figure 22: CARLDmcQuadrupedEnv: eCDF plot. A is the magnitude multiplied with the default value of each context feature.
Figure 23: CARLDmcWalkerEnv: eCDF plot. A is the magnitude multiplied with the default value of each context feature.
Figure 24: Hyperparameter optimization with PB2 (Parker-Holder et al., 2020).
Most of our benchmarks have vector-based state spaces, allowing context information to be concatenated to the state. Their sizes range from only two state variables in the CARLMountainCar environments to 299 for the CARLHumanoid environment. The notable exceptions are CARLVehicleRacing and CARLToadGAN, which exclusively use pixel-based observations.
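Where the context is visible, a minimal sketch of how context information might be concatenated to a vector-based state observation follows; the function and variable names are illustrative assumptions, not part of the CARL API:

```python
import numpy as np

def concat_context(state: np.ndarray, context: dict) -> np.ndarray:
    """Append context feature values to a vector-based state observation.

    Sorting the keys keeps the feature ordering stable across steps and
    episodes, so the resulting observation dimension is fixed.
    """
    context_vec = np.array([context[k] for k in sorted(context)], dtype=np.float32)
    return np.concatenate([state.astype(np.float32), context_vec])

# Example: a 4-dimensional state plus two context features -> 6-dimensional input.
obs = concat_context(np.zeros(4), {"gravity": -9.81, "pole_length": 0.5})
assert obs.shape == (6,)
```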
[Figure 5 radar panels, one per environment family (classic control, box2d, brax, dmc, RNA + Mario), each with axes: state space size, action space size, n_cf, n_cf,reward, and n_cf,dynamics.]
X. Fu, G. Yang, P. Agrawal, and T. Jaakkola. Learning task informed abstractions. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pp. 3480-3491. PMLR, 2021b.

C. Gelada, S. Kumar, J. Buckman, O. Nachum, and M. Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 2170-2179. PMLR, 2019.

D. Ghosh, J. Rahme, A. Kumar, A. Zhang, R. P. Adams, and S. Levine. Why generalization in RL is difficult: Epistemic POMDPs and implicit partial observability. CoRR, abs/2107.06277, 2021. URL https://arxiv.org/abs/2107.06277.

C. Guestrin, D. Koller, and R. Parr. Multiagent planning with factored MDPs. In Advances in Neural Information Processing Systems 14 (NIPS), pp. 1523-1530. MIT Press, 2001.

T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pp. 1856-1865. PMLR, 2018.

A. Hallak, D. Di Castro, and S. Mannor. Contextual Markov decision processes. arXiv:1502.02259 [stat.ML], 2015.

P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger. Deep reinforcement learning that matters. In Proceedings of the Conference on Artificial Intelligence (AAAI'18). AAAI Press, 2018.

K. Holsheimer, F. Schubert, B. Beilharz, and L. Tsao. Coax: Plug-n-play reinforcement learning in Python with Gymnasium and JAX. https://github.com/coax-dev/coax, 2023.

M. Jaderberg, V. Dalibard, S. Osindero, W. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, C. Fernando, and K. Kavukcuoglu. Population based training of neural networks. arXiv:1711.09846 [cs.LG], 2017a.

M. Jaderberg, V. Mnih, W. Czarnecki, T. Schaul, J. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations (ICLR). OpenReview.net, 2017b.

V. Jayawardana, C. Tang, S. Li, D. Suo, and C. Wu. The impact of task underspecification in evaluating deep reinforcement learning. CoRR, abs/2210.08607, 2022.

A. Juliani, A. Khalifa, V. Berges, J. Harper, E. Teng, H. Henry, A. Crespi, J. Togelius, and D. Lange. Obstacle tower: A generalization challenge in vision, control, and planning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), pp. 2684-2691. ijcai.org, 2019.

R. Kirk, A. Zhang, E. Grefenstette, and T. Rocktäschel. A survey of generalisation in deep reinforcement learning. CoRR, abs/2111.09794, 2021.

R. Kirk, A. Zhang, E. Grefenstette, and T. Rocktäschel. A survey of zero-shot generalisation in deep reinforcement learning. J. Artif. Intell. Res., 76:201-264, 2023.

P. Klink, C. D'Eramo, J. Peters, and J. Pajarinen. Self-paced deep reinforcement learning. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.

J. Parker-Holder, V. Nguyen, and S. J. Roberts. Provably efficient online hyperparameter optimization with population-based bandits. In Proceedings of the 33rd International Conference on Advances in Neural Information Processing Systems (NeurIPS'20), volume 33, pp. 17200-17211, 2020.

A. Zhang, R. T. McAllister, R. Calandra, Y. Gal, and S. Levine. Learning invariant representations for reinforcement learning without reconstruction. In 9th International Conference on Learning Representations (ICLR). OpenReview.net, 2021a.

A. Zhang, S. Sodhani, K. Khetarpal, and J. Pineau. Learning robust state abstractions for hidden-parameter block MDPs. In 9th International Conference on Learning Representations (ICLR). OpenReview.net, 2021b.

B. Zhang, R. Rajan, L. Pineda, N. Lambert, A. Biedenkapp, K. Chua, F. Hutter, and R. Calandra. On the importance of hyperparameter optimization for model-based reinforcement learning. In The 24th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130 of Proceedings of Machine Learning Research, pp. 4015-4023. PMLR, 2021c.

H. Zhang, H. Chen, D. Boning, and C. Hsieh. Robust reinforcement learning on state observations with learned optimal adversary. In 9th International Conference on Learning Representations (ICLR). OpenReview.net, 2021d.

W. Zhou, L. Pinto, and A. Gupta. Environment probing interaction policies. In Proceedings of the International Conference on Learning Representations (ICLR'19), 2019. Published online: iclr.cc.

Z. Zhou, X. Li, and R. Zare. Optimizing chemical reactions with deep reinforcement learning. ACS Central Science, 3(12):1337-1344, 2017.

Z. Zhu, K. Lin, and J. Zhou. Transfer learning in deep reinforcement learning: A survey. CoRR, abs/2009.07888, 2020.
…, where we have an emission function ϕ : S × C → O_s × O_c mapping the state and context spaces to a state observation space O_s and a context observation space O_c. ϕ differentiates between state s and context c to allow different degrees of observability in state and context, e.g., hiding the context completely while exposing the whole state, in order to enable more flexible learning. It can also introduce the additional challenge of learning from imperfect or noisy context information.
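As a minimal illustration of such an emission function (the names, the key-based hiding, and the noise mechanism are our assumptions for exposition, not part of any benchmark API):

```python
import numpy as np

def emission(state: np.ndarray, context: dict, visible_keys: set, noise_std: float = 0.0):
    """Sketch of an emission function phi: S x C -> O_s x O_c.

    The full state is exposed, while the context observation contains only
    the features named in `visible_keys`, optionally perturbed by Gaussian
    noise to mimic imperfect context information.
    """
    state_obs = state  # full state observability
    context_obs = np.array(
        [context[k] for k in sorted(visible_keys)], dtype=np.float64
    )
    if noise_std > 0.0:
        context_obs = context_obs + np.random.normal(0.0, noise_std, context_obs.shape)
    return state_obs, context_obs

# Example: hide gravity, expose only the pole length, with noisy readings.
state = np.array([0.0, 0.1, -0.05, 0.2])
context = {"gravity": -9.81, "pole_length": 0.5}
s_obs, c_obs = emission(state, context, visible_keys={"pole_length"}, noise_std=0.01)
```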
Table 1: Hyperparameters for algorithm and environment combinations.

| hyperparameter | CartPole (c51) | Acrobot (c51) | Pendulum (sac) | MountainCar (c51) | LunarLander (c51) | DmcWalker (sac) | DmcQuadruped (sac) | Halfcheetah (sac) |
|---|---|---|---|---|---|---|---|---|
| n_step | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| gamma | 0.99 | 0.99 | 0.9 | 0.99 | 0.99 | 0.9 | 0.9 | 0.99 |
| alpha | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.1 |
| batch_size | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 256 |
| learning_rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.0001 |
| q_targ_tau | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.005 |
| warmup_num_frames | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 |
| pi_warmup_num_frames | 7500 | 7500 | 7500 | 7500 | 7500 | 7500 | 7500 | 7500 |
| pi_update_freq | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 |
| replay_capacity | 100000 | 100000 | 100000 | 100000 | 100000 | 100000 | 100000 | 1000000 |
| network | {'width': 256, 'num_atoms': 51} | {'width': 256, 'num_atoms': 51} | {'width': 256} | {'width': 32, 'num_atoms': 51} | {'width': 256, 'num_atoms': 51} | {'width': 256} | {'width': 256} | {'width': 1024} |
| pi_temperature | 0.1 | 0.1 | NaN | 0.1 | 0.1 | NaN | NaN | NaN |
| q_min_value | 0.0 | -50.0 | NaN | -100.0 | -100.0 | NaN | NaN | NaN |
| q_max_value | 110.0 | 0.0 | NaN | 100.0 | 100.0 | NaN | NaN | NaN |
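For illustration only (not the coax API), one column of Table 1 could be captured as a plain configuration dictionary, assuming hypothetical key names that mirror the table rows:

```python
# Hypothetical config mirroring the (sac, Halfcheetah) column of Table 1.
halfcheetah_sac_config = {
    "n_step": 5,
    "gamma": 0.99,
    "alpha": 0.1,
    "batch_size": 256,
    "learning_rate": 1e-4,
    "q_targ_tau": 0.005,
    "warmup_num_frames": 5000,
    "pi_warmup_num_frames": 7500,
    "pi_update_freq": 2,
    "replay_capacity": 1_000_000,
    "network": {"width": 1024},
}
```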
Table 2: GPU cluster used for training.

| Type | Model | Quantity | RAM CPU (G) |
|---|---|---|---|
| GPU | NVIDIA Quadro M5000 | 1 | 256 |
| GPU | NVIDIA RTX 2080 Ti | 56 | 384 |
| GPU | NVIDIA RTX 2080 Ti | 12 | 256 |
| GPU | NVIDIA RTX 1080 Ti | 6 | 512 |
| GPU | NVIDIA GTX Titan X | 4 | 128 |
| GPU | NVIDIA GT 640 | 1 | 32 |
Table 3: Context features: defaults, bounds, and types for OpenAI gym's classic control environments (Brockman et al., 2016).
Table 4: Context features: defaults, bounds, and types for OpenAI gym's Box2d environments (Brockman et al., 2016).

(a) CARLBipedalWalkerEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| FPS | 50.00 | (1, 500) | float |
| FRICTION | 2.50 | (0, 10) | float |
| GRAVITY_X | 0.00 | (-20, 20) | float |
| GRAVITY_Y | -10.00 | (-20, -0.01) | float |
| INITIAL_RANDOM | 5.00 | (0, 50) | float |
| LEG_DOWN | -0.27 | (-2, -0.25) | float |
| LEG_H | 1.13 | (0.25, 2) | float |
| LEG_W | 0.27 | (0.25, 0.5) | float |
| LIDAR_RANGE | 5.33 | (0.5, 20) | float |
| MOTORS_TORQUE | 80.00 | (0, 200) | float |
| SCALE | 30.00 | (1, 100) | float |
| SPEED_HIP | 4.00 | (1e-06, 15) | float |
| SPEED_KNEE | 6.00 | (1e-06, 15) | float |
| TERRAIN_GRASS | 10.00 | (5, 15) | int |
| TERRAIN_HEIGHT | 5.00 | (3, 10) | float |
| TERRAIN_LENGTH | 200.00 | (100, 500) | int |
| TERRAIN_STARTPAD | 20.00 | (10, 30) | int |
| TERRAIN_STEP | 0.47 | (0.25, 1) | float |
| VIEWPORT_H | 400.00 | (200, 800) | int |
| VIEWPORT_W | 600.00 | (400, 1000) | int |

(b) CARLLunarLanderEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| FPS | 50.00 | (1, 500) | float |
| GRAVITY_X | 0.00 | (-20, 20) | float |
| GRAVITY_Y | -10.00 | (-20, -0.01) | float |
| INITIAL_RANDOM | 1000.00 | (0, 2000) | float |
| LEG_AWAY | 20.00 | (0, 50) | float |
| LEG_DOWN | 18.00 | (0, 50) | float |
| LEG_H | 8.00 | (1, 20) | float |
| LEG_SPRING_TORQUE | 40.00 | (0, 100) | float |
| LEG_W | 2.00 | (1, 10) | float |
| MAIN_ENGINE_POWER | 13.00 | (0, 50) | float |
| SCALE | 30.00 | (1, 100) | float |
| SIDE_ENGINE_AWAY | 12.00 | (1, 20) | float |
| SIDE_ENGINE_HEIGHT | 14.00 | (1, 20) | float |
| SIDE_ENGINE_POWER | 0.60 | (0, 50) | float |
| VIEWPORT_H | 400.00 | (200, 800) | int |
| VIEWPORT_W | 600.00 | (400, 1000) | int |

(c) CARLVehicleRacingEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| VEHICLE | 0 | - | categorical |
Table 6: Context features: defaults, bounds, and types for Google DeepMind control environments (Tassa et al., 2018).

(a) CARLDmcWalkerEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| actuator_strength | 1.00 | (0, inf) | float |
| density | 0.00 | (0, inf) | float |
| friction_rolling | 1.00 | (0, inf) | float |
| friction_tangential | 1.00 | (0, inf) | float |
| friction_torsional | 1.00 | (0, inf) | float |
| geom_density | 1.00 | (0, inf) | float |
| gravity | -9.81 | (-inf, -0.1) | float |
| joint_damping | 1.00 | (0, inf) | float |
| joint_stiffness | 0.00 | (0, inf) | float |
| timestep | 0.00 | (0.001, 0.1) | float |
| viscosity | 0.00 | (0, inf) | float |
| wind_x | 0.00 | (-inf, inf) | float |
| wind_y | 0.00 | (-inf, inf) | float |
| wind_z | 0.00 | (-inf, inf) | float |

(b) CARLDmcQuadrupedEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| actuator_strength | 1.00 | (0, inf) | float |
| density | 0.00 | (0, inf) | float |
| friction_rolling | 1.00 | (0, inf) | float |
| friction_tangential | 1.00 | (0, inf) | float |
| friction_torsional | 1.00 | (0, inf) | float |
| geom_density | 1.00 | (0, inf) | float |
| gravity | -9.81 | (-inf, -0.1) | float |
| joint_damping | 1.00 | (0, inf) | float |
| joint_stiffness | 0.00 | (0, inf) | float |
| timestep | 0.01 | (0.001, 0.1) | float |
| viscosity | 0.00 | (0, inf) | float |
| wind_x | 0.00 | (-inf, inf) | float |
| wind_y | 0.00 | (-inf, inf) | float |
| wind_z | 0.00 | (-inf, inf) | float |

(c) CARLDmcFingerEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| actuator_strength | 1.00 | (0, inf) | float |
| density | 5000.00 | (0, inf) | float |
| friction_rolling | 1.00 | (0, inf) | float |
| friction_tangential | 1.00 | (0, inf) | float |
| friction_torsional | 1.00 | (0, inf) | float |
| geom_density | 1.00 | (0, inf) | float |
| gravity | -9.81 | (-inf, -0.1) | float |
| joint_damping | 1.00 | (0, inf) | float |
| joint_stiffness | 0.00 | (0, inf) | float |
| limb_length_0 | 0.17 | (0.01, 0.2) | float |
| limb_length_1 | 0.16 | (0.01, 0.2) | float |
| spinner_length | 0.18 | (0.01, 0.4) | float |
| spinner_radius | 0.04 | (0.01, 0.05) | float |
| timestep | 0.00 | (0.001, 0.1) | float |
| viscosity | 0.00 | (0, inf) | float |
| wind_x | 0.00 | (-inf, inf) | float |
| wind_y | 0.00 | (-inf, inf) | float |
| wind_z | 0.00 | (-inf, inf) | float |

(d) CARLDmcFishEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| actuator_strength | 1.00 | (0, inf) | float |
| density | 5000.00 | (0, inf) | float |
| friction_rolling | 1.00 | (0, inf) | float |
| friction_tangential | 1.00 | (0, inf) | float |
| friction_torsional | 1.00 | (0, inf) | float |
| geom_density | 1.00 | (0, inf) | float |
| gravity | -9.81 | (-inf, -0.1) | float |
| joint_damping | 1.00 | (0, inf) | float |
| joint_stiffness | 0.00 | (0, inf) | float |
| timestep | 0.00 | (0.001, 0.1) | float |
| viscosity | 0.00 | (0, inf) | float |
| wind_x | 0.00 | (-inf, inf) | float |
| wind_y | 0.00 | (-inf, inf) | float |
| wind_z | 0.00 | (-inf, inf) | float |
Table 7: Context features: defaults, bounds, and types for the RNA design (Runge et al., 2019) and Mario (Awiszus et al., 2020; Schubert et al., 2021) environments.

(a) CARLRnaDesignEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| mutation_threshold | 5 | (0.1, inf) | float |
| reward_exponent | 1 | (0.1, inf) | float |
| state_radius | 5 | (1, inf) | float |
| dataset | eterna | - | categorical, n = 3 |
| target_structure_ids | f(dataset) | (0, inf) | list of int |

(b) CARLMarioEnv

| Context Feature | Default | Bounds | Type |
|---|---|---|---|
| level_index | 0 | - | categorical, n = 15 |
| noise | f(level_index, width, height) | (-1, 1) | float |
| mario_state | 0 | - | categorical, n = 3 |
| mario_inertia | 0.89 | (0.5, 1.5) | float |
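To illustrate how such tables of defaults and bounds can drive context sampling, the sketch below scales each default by a magnitude A ~ U(low, high), matching the convention of the figure captions above; the function and dictionary names are assumptions for illustration, not the CARL API:

```python
import numpy as np

# Defaults and bounds transcribed from Table 7(a), CARLRnaDesignEnv.
CONTEXT_SPEC = {
    "mutation_threshold": {"default": 5.0, "bounds": (0.1, np.inf)},
    "reward_exponent": {"default": 1.0, "bounds": (0.1, np.inf)},
    "state_radius": {"default": 5.0, "bounds": (1.0, np.inf)},
}

def sample_context(spec, magnitude_low=0.5, magnitude_high=2.2, rng=None):
    """Scale each default by A ~ U(low, high) and clip to the feature bounds."""
    rng = rng or np.random.default_rng()
    context = {}
    for name, info in spec.items():
        a = rng.uniform(magnitude_low, magnitude_high)
        lo, hi = info["bounds"]
        context[name] = float(np.clip(a * info["default"], lo, hi))
    return context

print(sample_context(CONTEXT_SPEC))
```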
Alternatively, one can also define a policy as a probability distribution P(a|s) over the actions given a state. The following line of argument also holds for stochastic policies, but we outline it only for deterministic policies so as not to clutter the notation.
Acknowledgments

The work of Frederik Schubert, Sebastian Döhler and Bodo Rosenhahn was supported by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003) and the AI service center KISSKI (grant no. 01IS22093C), the Center for Digital Innovations (ZDIN) and the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122). André Biedenkapp and Frank Hutter acknowledge funding through the research network "Responsive and Scalable Learning for Robots Assisting Humans" (ReScaLe) of the University of Freiburg. The ReScaLe project is funded by the Carl Zeiss Foundation.
M. Abdolshah, H. Le, T. K. George, S. Gupta, S. Rana, and S. Venkatesh. A new representation of successor features for transfer across dissimilar environments. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pp. 1-9. PMLR, 2021.

S. Adriaensen, A. Biedenkapp, G. Shala, N. Awad, T. Eimer, M. Lindauer, and F. Hutter. Automated dynamic algorithm configuration. J. Artif. Intell. Res., 75:1633-1699, 2022.

R. Agarwal, M. Schwarzer, P. Castro, A. Courville, and M. Bellemare. Deep reinforcement learning at the edge of the statistical precipice. In Proceedings of the 35th International Conference on Advances in Neural Information Processing Systems (NeurIPS'21), pp. 29304-29320, 2021.

L. Alegre, A. Bazzan, and B. da Silva. Minimum-delay adaptation in non-stationary reinforcement learning via online high-confidence change-point detection. In AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, pp. 97-105. ACM, 2021.

I. Arel, C. Liu, T. Urbanik, and A. Kohls. Reinforcement learning-based multi-agent system for network traffic signal control. IET Intelligent Transport Systems, 4(2):128-135, 2010.

M. Awiszus, F. Schubert, and B. Rosenhahn. TOAD-GAN: Coherent style level generation from a single example. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, 2020.

A. Badia, B. Piot, S. Kapturowski, P. Sprechmann, A. Vitvitskyi, Z. Guo, and C. Blundell. Agent57: Outperforming the Atari human benchmark. In Proceedings of the 37th International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pp. 507-517. PMLR, 2020.

M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. In Proceedings of the 29th International Conference on Advances in Neural Information Processing Systems (NeurIPS'16), pp. 1471-1479, 2016.

M. G. Bellemare, S. Candido, P. S. Castro, J. Gong, M. C. Machado, S. Moitra, S. S. Ponda, and Z. Wang. Autonomous navigation of stratospheric balloons using reinforcement learning. Nature, 588(7836):77-82, 2020.

M. G. Bellemare, W. Dabney, and R. Munos. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pp. 449-458. PMLR, 2017. URL http://proceedings.mlr.press/v70/bellemare17a.html.

A. Biedenkapp, J. Marben, M. Lindauer, and F. Hutter. CAVE: Configuration assessment, visualization and evaluation. In Proceedings of the International Conference on Learning and Intelligent Optimization (LION), Lecture Notes in Computer Science. Springer, 2018.

C. Boutilier, T. L. Dean, and S. Hanks. Decision-theoretic planning: Structural assumptions and computational leverage. J. Artif. Intell. Res., 11:1-94, 1999. doi: 10.1613/jair.575.

G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016.

P. Castro, T. Kastner, P. Panangaden, and M. Rowland. MICo: Learning improved representations via sampling-based state similarity for Markov decision processes. CoRR, abs/2106.08229, 2021.

K. Cobbe, C. Hesse, J. Hilton, and J. Schulman. Leveraging procedural generation to benchmark reinforcement learning. In Proceedings of the 37th International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pp. 2048-2056. PMLR, 2020.

B. da Silva, E. Basso, A. Bazzan, and P. Engel. Dealing with non-stationary environments using context detection. In Proceedings of the Twenty-Third International Conference on Machine Learning (ICML), volume 148 of ACM International Conference Proceeding Series, pp. 217-224. ACM, 2006.

B. da Silva, G. Konidaris, and A. Barto. Learning parameterized skills. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.

J. Degrave, F. Felici, J. Buchli, M. Neunert, B. Tracey, F. Carpanese, T. Ewalds, R. Hafner, A. Abdolmaleki, D. de Las Casas, C. Donner, L. Fritz, C. Galperti, A. Huber, J. Keeling, M. Tsimpoukelli, J. Kay, A. Merle, J. Moret, S. Noury, F. Pesamosca, D. Pfau, O. Sauter, C. Sommariva, S. Coda, B. Duval, A. Fasoli, P. Kohli, K. Kavukcuoglu, D. Hassabis, and M. Riedmiller. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature, 602(7897):414-419, 2022.

F. Doshi-Velez and G. D. Konidaris. Hidden parameter Markov decision processes: A semiparametric regression approach for discovering latent task parametrizations. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), pp. 1432-1440. IJCAI/AAAI Press, 2016.

S. Du, A. Krishnamurthy, N. Jiang, A. Agarwal, M. Dudík, and J. Langford. Provably efficient RL with rich observations via latent state decoding. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 1665-1674. PMLR, 2019.

Y. Duan, J. Schulman, X. Chen, P. Bartlett, I. Sutskever, and P. Abbeel. RL²: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779, 2016.

T. Eimer, A. Biedenkapp, F. Hutter, and M. Lindauer. Self-paced context evaluation for contextual reinforcement learning. In Proceedings of the 38th International Conference on Machine Learning (ICML'21), volume 139 of Proceedings of Machine Learning Research, pp. 2948-2958. PMLR, 2021.

B. Eysenbach, R. Salakhutdinov, and S. Levine. Search on the replay buffer: Bridging planning and reinforcement learning. In Proceedings of the 32nd International Conference on Advances in Neural Information Processing Systems (NeurIPS'19), pp. 15220-15231, 2019.

C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML'17), volume 70 of Proceedings of Machine Learning Research, pp. 1126-1135. PMLR, 2017.

C. Florensa, D. Held, X. Geng, and P. Abbeel. Automatic goal generation for reinforcement learning agents. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pp. 1514-1523. PMLR, 2018.

J. Franke, G. Köhler, A. Biedenkapp, and F. Hutter. Sample-efficient automated deep reinforcement learning. In 9th International Conference on Learning Representations (ICLR). OpenReview.net, 2021.

C. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem. Brax -- a differentiable physics engine for large scale rigid body simulation. CoRR, abs/2106.13281, 2021.

H. Fu, H. Tang, J. Hao, C. Chen, X. Feng, D. Li, and W. Liu. Towards effective context for meta-reinforcement learning: An approach based on contrastive learning. In Proceedings of the Conference on Artificial Intelligence (AAAI'21), pp. 7457-7465. AAAI Press, 2021a.

J. Kober, A. Wilhelm, E. Öztop, and J. Peters. Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4):361-379, 2012.

R. Koppejan and S. Whiteson. Neuroevolutionary reinforcement learning for generalized helicopter control. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), pp. 145-152. Association for Computing Machinery, 2009.

J. Kostas, Y. Chandak, S. Jordan, G. Theocharous, and P. Thomas. High confidence generalization for reinforcement learning. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pp. 5764-5773. PMLR, 2021.

H. Kurniawati. Partially observable Markov decision processes and robotics. Annu. Rev. Control. Robotics Auton. Syst., 5:253-277, 2022. doi: 10.1146/annurev-control-042920-092451.

H. Küttler, N. Nardelli, A. Miller, R. Raileanu, M. Selvatici, E. Grefenstette, and T. Rocktäschel. The NetHack learning environment. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.

M. Laskin, D. Yarats, H. Liu, K. Lee, A. Zhan, K. Lu, C. Cang, L. Pinto, and P. Abbeel. URLB: Unsupervised reinforcement learning benchmark. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021), 2021.

J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5, 2020a.

K. Lee, Y. Seo, S. Lee, H. Lee, and J. Shin. Context-aware dynamics model for generalization in model-based reinforcement learning. In Proceedings of the 37th International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pp. 5757-5766. PMLR, 2020b.

X. Li, J. Zhang, J. Bian, Y. Tong, and T. Liu. A cooperative multi-agent reinforcement learning framework for resource balancing in complex logistics network. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pp. 980-988. International Foundation for Autonomous Agents and Multiagent Systems, 2019.

M. Liu, M. Zhu, and W. Zhang. Goal-conditioned reinforcement learning: Problems and solutions. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), pp. 5502-5511. ijcai.org, 2022.

M. Lu, Z. Shahn, D. Sow, F. Doshi-Velez, and L. H. Lehman. Is deep reinforcement learning ready for practical applications in healthcare? A sensitivity analysis of duel-DDQN for hemodynamic management in sepsis patients. In AMIA 2020, American Medical Informatics Association Annual Symposium. AMIA, 2020.

M. Machado, M. Bellemare, E. Talvitie, J. Veness, M. Hausknecht, and M. Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. J. Artif. Intell. Res., 61:523-562, 2018.

D. Malik, Y. Li, and P. Ravikumar. When is generalizable reinforcement learning tractable? In Advances in Neural Information Processing Systems 34 (NeurIPS), pp. 8032-8045, 2021.

T. Matiisen, A. Oliver, T. Cohen, and J. Schulman. Teacher-student curriculum learning. IEEE Trans. Neural Networks Learn. Syst., 31(9):3732-3740, 2020.

B. Mehta, M. Diaz, F. Golemo, C. Pal, and L. Paull. Active domain randomization. In 3rd Annual Conference on Robot Learning (CoRL), volume 100 of Proceedings of Machine Learning Research, pp. 1162-1176. PMLR, 2019.

T. Meng and M. Khushi. Reinforcement learning in financial markets. Data, 4(3):110, 2019.

A. Modi, N. Jiang, S. P. Singh, and A. Tewari. Markov decision processes with continuous side information. In Algorithmic Learning Theory (ALT'18), volume 83, pp. 597-618, 2018.

J. Morimoto and K. Doya. Robust reinforcement learning. In Advances in Neural Information Processing Systems 13 (NIPS 2000), pp. 1061-1067. MIT Press, 2000.

S. Nguyen, N. Duminy, A. Manoury, D. Duhaut, and C. Buche. Robots learn increasingly complex tasks with intrinsic motivation and automatic curriculum learning. Künstliche Intell., 35(1):81-90, 2021.

A. Nichol, J. Achiam, and J. Schulman. On first-order meta-learning algorithms. CoRR, abs/1803.02999, 2018. URL http://arxiv.org/abs/1803.02999.

J. Parker-Holder, R. Rajan, X. Song, A. Biedenkapp, Y. Miao, T. Eimer, B. Zhang, V. Nguyen, R. Calandra, A. Faust, F. Hutter, and M. Lindauer. Automated reinforcement learning (AutoRL): A survey and open problems. J. Artif. Intell. Res., 74:517-568, 2022.

L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta. Robust adversarial reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pp. 2817-2826. PMLR, 2017.

K. Ploeger, M. Lutter, and J. Peters. High acceleration reinforcement learning for real-world juggling with binary rewards. In 4th Conference on Robot Learning (CoRL 2020), volume 155 of Proceedings of Machine Learning Research, pp. 642-653. PMLR, 2020. URL https://proceedings.mlr.press/v155/ploeger21a.html.

M. Ponsen, M. Taylor, and K. Tuyls. Abstraction and generalization in reinforcement learning: A summary and framework. In Adaptive and Learning Agents, Second Workshop (ALA 2009), volume 5924 of Lecture Notes in Computer Science, pp. 1-32. Springer, 2009.

K. Rakelly, A. Zhou, C. Finn, S. Levine, and D. Quillen. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In Proceedings of the 36th International Conference on Machine Learning (ICML'19), volume 97 of Proceedings of Machine Learning Research, pp. 5331-5340. PMLR, 2019.

J. Rice. The algorithm selection problem. Advances in Computers, 15:65-118, 1976.

F. Runge, D. Stoll, S. Falkner, and F. Hutter. Learning to design RNA. In Proceedings of the International Conference on Learning Representations (ICLR'19), 2019. Published online: iclr.cc.

M. Samvelyan, R. Kirk, V. Kurin, J. Parker-Holder, M. Jiang, E. Hambro, F. Petroni, H. Küttler, E. Grefenstette, and T. Rocktäschel. MiniHack the planet: A sandbox for open-ended reinforcement learning research. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks), 2021.

T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), volume 37, pp. 1312-1320, 2015.

F. Schubert, M. Awiszus, and B. Rosenhahn. TOAD-GAN: A flexible framework for few-shot level generation in token-based games. IEEE Transactions on Games, pp. 1-1, 2021. doi: 10.1109/TG.2021.3069833.

F. Schubert, C. Benjamins, S. Döhler, B. Rosenhahn, and M. Lindauer. POLTER: Policy trajectory ensemble regularization for unsupervised reinforcement learning. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.

J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. In 4th International Conference on Learning Representations (ICLR 2016), Conference Track Proceedings, 2016.

D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

S. Sodhani, L. Denoyer, P. Kamienny, and O. Delalleau. MTEnv -- environment interface for multi-task reinforcement learning. GitHub, 2021a. URL https://github.com/facebookresearch/mtenv.

S. Sodhani, A. Zhang, and J. Pineau. Multi-task reinforcement learning with context-based representations. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pp. 9767-9779. PMLR, 2021b.

S. Sukhbaatar, E. Denton, A. Szlam, and R. Fergus. Learning goal embeddings via self-play for hierarchical reinforcement learning. CoRR, abs/1811.09083, 2018.

Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. de Las Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, T. Lillicrap, and M. Riedmiller. DeepMind control suite. CoRR, abs/1801.00690, 2018. URL http://arxiv.org/abs/1801.00690.

J. van Rijn and F. Hutter. Hyperparameter importance across datasets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 2367-2376. ACM Press, 2018.

J. Wang, Z. Kurth-Nelson, H. Soyer, J. Leibo, D. Tirumala, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society. cognitivesciencesociety.org, 2017.

J. Wang, M. King, N. Porcel, Z. Kurth-Nelson, T. Zhu, C. Deck, P. Choy, M. Cassin, M. Reynolds, H. Song, G. Buttimore, D. Reichert, N. Rabinowitz, L. Matthey, D. Hassabis, A. Lerchner, and M. Botvinick. Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks), 2021.

S. Whiteson, B. Tanner, M. Taylor, and P. Stone. Protecting against evaluation overfitting in empirical reinforcement learning. In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 120-127, 2011.

L. Xu, H. Hoos, and K. Leyton-Brown. Hydra: Automatically configuring algorithms for portfolio-based selection. In Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence (AAAI'10), pp. 210-216. AAAI Press, 2010.

D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Reinforcement learning with prototypical representations. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pp. 11920-11931. PMLR, 2021.

T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning (CoRL), 2019.
| [
"https://github.com/automl/CARL.",
"https://github.com/automl/carl.",
"https://github.com/coax-dev/coax,",
"https://github.com/facebookresearch/mtenv."
] |
[
"Exploratory Hidden Markov Factor Models for Longitudinal Mobile Health Data: Application to Adverse Posttraumatic Neuropsychiatric Sequelae",
"Exploratory Hidden Markov Factor Models for Longitudinal Mobile Health Data: Application to Adverse Posttraumatic Neuropsychiatric Sequelae"
] | [
"Lin Ge ",
"Xinming An ",
"Donglin Zeng [email protected] ",
"Samuel Mclean [email protected] ",
"Ronald Kessler [email protected] ",
"Rui Song ",
"\nDepartment of Biostatistics\nThe University of North Carolina at Chapel Hill\nChapel Hill27599NC\n",
"\nDepartment of Psy-chiatry\nDepartment of Health Care Policy\nHarvard Medical School\nSamuel McLean is Professor\nThe University of North Carolina at Chapel Hill\n02115BostonMA\n"
] | [
"Department of Biostatistics\nThe University of North Carolina at Chapel Hill\nChapel Hill27599NC",
"Department of Psy-chiatry\nDepartment of Health Care Policy\nHarvard Medical School\nSamuel McLean is Professor\nThe University of North Carolina at Chapel Hill\n02115BostonMA"
] | [] | Adverse posttraumatic neuropsychiatric sequelae (APNS) are common among veterans and millions of Americans after traumatic exposures, resulting in substantial burdens for trauma survivors and society. Despite numerous studies conducted on APNS over the past decades, there has been limited progress in understanding the underlying neurobiological mechanisms due to several unique challenges. One of these challenges is the reliance on subjective self-report measures to assess APNS, which can easily result in measurement errors and biases (e.g., recall bias). To mitigate this issue, in this paper, we investigate the potential of leveraging the objective longitudinal mobile device data to identify homogeneous APNS states and study the dynamic transitions and potential risk factors of APNS after trauma exposure. To handle specific challenges posed by longitudinal mobile device data, we developed exploratory hidden Markov factor models and designed a Stabilized Expectation-Maximization algorithm for parameter estimation. Simulation studies were conducted to evaluate the performance of parameter estimation and model selection. Finally, to demonstrate the practical utility of the method, we applied it to mobile device data collected from the Advancing Understanding of RecOvery afteR traumA (AURORA) study. | null | [
"https://export.arxiv.org/pdf/2202.12819v2.pdf"
] | 247,154,654 | 2202.12819 | ecfe2fce88c94af9ba1d41b6a862c0be72ea37c6 |
Exploratory Hidden Markov Factor Models for Longitudinal Mobile Health Data: Application to Adverse Posttraumatic Neuropsychiatric Sequelae
Lin Ge
Xinming An
Donglin Zeng [email protected]
Samuel Mclean [email protected]
Ronald Kessler [email protected]
Rui Song
Department of Biostatistics
The University of North Carolina at Chapel Hill
Chapel Hill, NC 27599
Department of Psychiatry
Department of Health Care Policy
Harvard Medical School
Samuel McLean is Professor
The University of North Carolina at Chapel Hill
Boston, MA 02115
Exploratory Hidden Markov Factor Models for Longitudinal Mobile Health Data: Application to Adverse Posttraumatic Neuropsychiatric Sequelae
* Lin Ge is graduate student (E-mail: [email protected]) and Rui Song is Professor ([email protected]), Department of Statistics, North Carolina State University, Raleigh, NC 27695. Xinming An (E-mail: Xinming [email protected]) is Research Assistant Professor, Department of Anesthesiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27514. Donglin Zeng is Professor, Chapel Hill, NC 27514. Ronald Kessler is Professor.

Keywords: Continuous-time hidden Markov model; Discrete-time hidden Markov model; Mental health; Multinomial logistic model; Multivariate longitudinal data
Adverse posttraumatic neuropsychiatric sequelae (APNS) are common among veterans and millions of Americans after traumatic exposures, resulting in substantial burdens for trauma survivors and society. Despite numerous studies conducted on APNS over the past decades, there has been limited progress in understanding the underlying neurobiological mechanisms due to several unique challenges. One of these challenges is the reliance on subjective self-report measures to assess APNS, which can easily result in measurement errors and biases (e.g., recall bias). To mitigate this issue, in this paper, we investigate the potential of leveraging the objective longitudinal mobile device data to identify homogeneous APNS states and study the dynamic transitions and potential risk factors of APNS after trauma exposure. To handle specific challenges posed by longitudinal mobile device data, we developed exploratory hidden Markov factor models and designed a Stabilized Expectation-Maximization algorithm for parameter estimation. Simulation studies were conducted to evaluate the performance of parameter estimation and model selection. Finally, to demonstrate the practical utility of the method, we applied it to mobile device data collected from the Advancing Understanding of RecOvery afteR traumA (AURORA) study.
Introduction
Adverse posttraumatic neuropsychiatric sequelae (APNS) (e.g., pain, depression, and PTSD) are frequently observed in civilians and military veterans who have experienced traumatic events, such as car accidents and sexual assault. These APNS increase the risk of chronic illnesses, including cancer and heart disease, and substantially contribute to drug abuse, suicide, and disability. Moreover, APNS impose enduring psychosocial and financial burdens not only on individuals with the disorder but also on their families, communities, and society as a whole.
However, little progress has been made in advancing APNS research over the past few decades due to several unique challenges. First, APNS have been evaluated through subjective self-reported measures, which lack objective reliability. Second, the heterogeneity among patients, as recognized in traditional classification and diagnoses, complicates the study of APNS. Lastly, these APNS disorders are often studied and treated independently, despite their frequent co-occurrence (McLean et al., 2020). These obstacles hinder the identification of objective markers, the advancement in understanding the neurobiological mechanisms of APNS, and the development of effective preventative/treatment strategies.
Identifying homogeneous states and exploring the dynamic prognosis of APNS in the immediate aftermath of trauma exposure holds promise for enhancing our understanding of APNS and identifying effective intervention options and appropriate timing at the individual level. Regrettably, due to the lack of appropriate data and effective statistical method, no large-scale studies have been conducted to investigate the onset, dynamic transitions (such as recovery and relapse), and associated risk factors of APNS. To help address the challenges, the National Institutes of Mental Health, joined by the US Army Medical Research and Material Command, several foundations, and corporate partners, developed the Advancing Understanding of RecOvery afteR traumA (AURORA) study (McLean et al., 2020). This study gathered extensive biobehavioral data from a large cohort of trauma survivors (n = 2,997) across the United States, including self-reported surveys, web-based neurocognitive tests, digital phenotyping data (i.e., wrist wearable and smartphone data), psychophysical tests, neuroimaging assessments, and genomics data. Data collection starts in the early aftermath of the traumatic event and continues for a year.
Leveraging this rich dataset, our work aims to mitigate the difficulties associated with APNS by i) identifying homogeneous latent states, ii) studying dynamic transition patterns over time, and iii) investigating potential risk factors of state transition. In contrast to previous studies that attempted to identify homogeneous subgroups for APNS relying on self-report survey data or neuroimaging data (Marquand et al., 2016), we focus on utilizing objective mobile device data, which tracks an individual's behavior, mood, and health in real-time, real-life environments. To achieve these goals, we develop both discrete-time and continuous-time exploratory hidden Markov factor models that can simultaneously identify homogeneous subtypes, investigate subtype-specific structure, and model individuals' progression and associated risk factors based on multivariate longitudinal data.
Hidden Markov Models (HMMs) (Baum and Petrie, 1966) have been widely used in various fields (Mor et al., 2021). However, mobile device data presents two unique challenges that standard HMMs cannot handle, including the interdependent variables with unknown interrelationship structures and unevenly spaced measurements.
Mobile device sensor data, such as accelerometer data and photoplethysmography (PPG) from smartwatches, are high-frequency time series. Typically, these raw data are preprocessed, and features are extracted using data processing pipelines over a larger time window (e.g., daily activity features derived from accelerometer data). These features are often technical summaries representing different characteristics of each time series variable and hence are often highly correlated. Because the number of free parameters in the covariance matrix grows quadratically as the number of features increases, assuming a fully free covariance matrix is infeasible. Therefore, under the HMM framework, features are commonly assumed to be independent given the latent state membership. However, this assumption is often violated in real-world applications.
To model the association between features, factor analysis models (FMs) (Kim and Mueller, 1978) provide an efficient and parsimonious approach and have been incorporated into HMMs in various ways. For example, the factor analyzed hidden Markov model (Rosti and Gales, 2002) combines an FM with a discrete-time HMM (DTHMM) (Vermunt et al., 1999), and has been extensively used in a variety of real-world applications, including speech recognition (Rosti and Gales, 2004), environmental protection (Maruotti et al., 2017), and seizure detection (Madadi, 2019). Similarly, Liu and Chen (2016) introduced the regime-switching factor model to handle high-dimensional financial market data. However, these approaches all assume homogeneous transition probability matrices, limiting their ability to account for the heterogeneity of transition probabilities over time and among different subjects, and to explore risk factors of state transition.
To simultaneously capture the interrelationships among observed features and account for the variability of transition probabilities, a joint framework incorporating HMM, FM, and a feature-based transition model was recently proposed (Song et al., 2017; Zhou et al., 2022). However, it is not directly applicable to mobile device data. Firstly, the framework employs a confirmatory factor model (CFM) with pre-specified structures for the factor loading matrices, which is not suitable for mobile device data that lacks such prior knowledge. Therefore, an exploratory factor model (EFM) is needed to explore the interrelationship structure among all observed features. Secondly, the framework assumes ordered states using the continuation-ratio logit model, which is inappropriate for analyzing AURORA data.
Another challenge posed by mobile device data is the irregular spacing of measurements.
For example, activity and heart rate variability (HRV) data were collected only when the participants wore the watches, resulting in non-uniformly spaced observations and significant variation in sampling schedules between individuals. While the aforementioned methods are all based on DTHMM, assuming evenly spaced measurements and neglecting the impact of time gaps between consecutive observations on transition rates, continuous-time discrete-state HMM (CTHMM) was developed to handle irregularly spaced measurements (Cox and Miller, 2017). CTHMM and its extensions that incorporate covariates to characterize transition rates are widely used in medical research that typically involves irregularly collected clinical measures (Liu et al., 2015;Lange et al., 2018;Amoros et al., 2019;Zhou et al., 2020). However, none of them focus on the interrelationships among features.
In this paper, to simultaneously address the two challenges and examine heterogeneous transition patterns, we propose an innovative model consisting of three components: a DTHMM/CTHMM, an EFM, and a multinomial logistic/log-linear transition model. Our contributions can be summarized as follows. First, we examine the utility of data collected in an open environment from consumer-grade mobile devices for mental health research. This contrasts with most existing studies, which rely on data collected in controlled lab environments.
Second, we propose two Exploratory Hidden Markov Factor Models (EHMFM) that address the unique challenges introduced by mobile device data and depict non-homogeneous state transition processes of multiple individuals. While the Discrete-Time EHMFM assumes consistent time intervals, the Continuous-Time EHMFM accommodates longitudinal data collected on either a regular or an irregular basis. Simulation studies using synthetic data demonstrate excellent parameter estimation and model selection performance. Finally, we analyze HRV and activity data from the AURORA study, followed by interpretations and discussions of biological findings that highlight the immense potential of mobile health data and our proposed method for mental health research.
AURORA Dataset
This study focuses on two subsets of the AURORA data, each representing a distinct data structure of research interest. The first subset includes observations systematically collected every ten days from 180 patients. The second subset consists of irregularly sampled observations from 258 patients, with each patient providing at least 50 observations. Both subsets include 23 features, with four derived from activity data and the remaining 19 derived from HRV data. See Appendix A for a detailed description of the variables and our data preprocessing approach.
Exploratory Hidden Markov Factor Model (EHMFM)
Motivated by the structures of the processed AURORA datasets, we consider data in the form of repeated measurements of $p$ features over $T_i$ occasions for each individual $i$ of $N$ subjects. The proposed models are in the framework of HMM. Let $w_{it}$ be the latent state of individual $i$ at occasion $t$, taking values in the finite discrete set $\{1, \cdots, J\}$. Here, $J$ is the total number of states, which is fixed and known. Let $W_i = (w_{i1}, \cdots, w_{iT_i})$ be the state sequence over the $T_i$ repeated measurements. Let the $J \times J$ matrix $P_{it}$ be the transition probability matrix for individual $i$ at occasion $t$, $t = 2, \cdots, T_i$, whose $(k, j)$ entry is $P_{it,kj} = P(w_{it} = j \mid w_{i,t-1} = k)$, with $P_{it,kk} = 1 - \sum_{j: j \neq k} P_{it,kj}$. At $t = 1$, we assume that the initial state follows a multinomial distribution with probabilities $\pi = (\pi_1, \cdots, \pi_J)'$ such that $\sum_{j=1}^{J} \pi_j = 1$. The objective of the HMM is to delineate the latent Markov processes given observations by estimating the transition probability matrix $P$ and the initial state distribution.
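To make the latent-state mechanics concrete, here is a minimal Python sketch (our own illustration; the function name and toy values are not from the paper) that samples a state sequence $W_i$ from $\pi$ and occasion-specific transition matrices:

```python
import numpy as np

def sample_state_sequence(pi, P_seq, rng=None):
    """Sample w_1, ..., w_T given initial probabilities pi and the
    per-occasion transition matrices P_seq[t-2][k, j] = P(w_t = j | w_{t-1} = k)."""
    rng = np.random.default_rng() if rng is None else rng
    states = [rng.choice(len(pi), p=pi)]          # initial state from pi
    for P in P_seq:                               # one transition per occasion
        states.append(rng.choice(P.shape[1], p=P[states[-1]]))
    return np.array(states)

# Toy example: J = 3 states, T = 5 occasions, one shared "sticky" transition matrix.
pi = np.array([1 / 3, 1 / 3, 1 / 3])
P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
print(sample_state_sequence(pi, np.repeat(P[None], 4, axis=0)))
```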
Unlike the conventional HMM, our model incorporates two additional components to address the unique challenges and achieve our goals. In the first component, discussed in Section 3.1, we posit a state-specific measurement model for the observations to learn interrelationship structures via EFM. In the second component, discussed in Section 3.2, we introduce transition models (TM) for learning heterogeneous transition patterns.
State-Specific Measurement Model
Let $y_{it}$ denote a $p \times 1$ vector of the observed values of the $p$ outcome variables for subject $i$ at time $t$, and let $z_{it}$ be a $K$-dimensional vector of latent factor scores, assumed to be independent of $w_{it}$ and to follow a standard multivariate normal distribution. While we assume that $K$ is constant across states, our model can easily be extended to accommodate varying $K_j$. For each individual $i$, $Y_i = (y_{i1}, \cdots, y_{iT_i})$ is a $p \times T_i$ matrix containing all measurements and $Z_i = (z_{i1}, \cdots, z_{iT_i})$ is a $K \times T_i$ matrix containing all latent factor scores.
The first component of our model is an FM, whose primary goal is to identify the interrelationship structures between the observed response variables and the underlying constructions of latent variables. For individual $i$ at time $t$, given $w_{it} = j$, the FM assumes the following state-specific measurement model:
$$[y_{it} \mid w_{it} = j] = \mu_j + \Lambda_j z_{it} + e_{it}; \quad z_{it} \overset{i.i.d.}{\sim} N(0, I_K), \quad e_{it} \overset{i.i.d.}{\sim} N(0, \Psi), \quad z_{it} \perp\!\!\!\perp e_{it}, \tag{1}$$
where $\mu_j$ is a $p \times 1$ vector of state-specific expected mean responses, $\Lambda_j$ is a $p \times K$ state-specific factor loading matrix, and $\Psi$ is a $p \times p$ diagonal covariance matrix for the error term $e_{it}$ with positive, nonconstant diagonal entries. Equivalently, model (1) can be expressed as $[y_{it} \mid w_{it} = j] \overset{i.i.d.}{\sim} N(\mu_j, \Lambda_j \Lambda_j' + \Psi)$.
It is crucial to emphasize that, unlike a CFM with pre-specified structures for the factor loading matrices, our approach does not impose any assumptions on $\Lambda_j$. Therefore, the structure of $\Lambda_j$ is completely data-driven, making the first component of our model (1) an EFM.
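As a concrete illustration of model (1), the following sketch (our own code, with made-up dimensions and parameters) draws observations from the state-specific factor model and checks the implied marginal covariance $\Lambda_j \Lambda_j' + \Psi$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, K = 23, 3                                  # features and factors

# Hypothetical parameters for one state j.
mu_j = np.zeros(p)
Lambda_j = 0.7 * rng.normal(size=(p, K))      # state-specific loading matrix
Psi = np.diag(rng.uniform(0.5, 1.5, p))       # diagonal error covariance

def sample_y(n):
    """Draw n observations from [y | w = j] = mu_j + Lambda_j z + e."""
    z = rng.standard_normal((n, K))                   # z ~ N(0, I_K)
    e = rng.standard_normal((n, p)) @ np.sqrt(Psi)    # e ~ N(0, Psi), Psi diagonal
    return mu_j + z @ Lambda_j.T + e

y = sample_y(100_000)
# The sample covariance should be close to Lambda_j Lambda_j' + Psi.
print(np.abs(np.cov(y.T) - (Lambda_j @ Lambda_j.T + Psi)).max())
```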
Transition Model
Considering the two data structures discussed in Section 2, the appropriate transition model varies based on the data at hand. To provide a basic understanding of the structure of the proposed integrated model, we first illustrate the transition model in the DT-EHMFM in Section 3.2.1, which ignores the effects of the time intervals between two consecutive observations. In other words, the DT-EHMFM assumes consistent time intervals between measurements, an assumption that is frequently violated in mobile health data. Therefore, we subsequently introduce the CT-EHMFM in Section 3.2.2, which relaxes the assumption of identical time intervals and allows longitudinal data to be collected irregularly.
DT-EHMFM
Given a state sequence $W_i$, the standard DTHMM assumptions are that 1) given the state $w_{it}$, the observations $y_{it}$ are independent, and 2) given the state $w_{it}$ and the subject's contextual features, the state at the subsequent occasion, $w_{i,t+1}$, is unrelated to any information from previous occasions. Utilizing the subjects' contextual information, we use a multinomial logistic regression model to characterize the transition probability matrix $P_{it}$ explicitly:
$$\log\left(\frac{P_{it,kj} \mid x_{it}}{P_{it,kk} \mid x_{it}}\right) = x_{it}' B_{kj}, \quad t = 2, \cdots, T_i, \tag{2}$$
where $x_{it}$ is a $d \times 1$ vector of covariates for individual $i$ at time $t$, and $B_{kj}$ is a state-specific $d \times 1$ vector of fixed-effect coefficients. Here, $x_{it}$ can be reduced to $x_i$, which contains only baseline features. $B_{kj}$ quantifies the effect of the covariates on the probability of transitioning from state $k$ to a different state $j$, providing an understanding of how covariates influence transition patterns and allowing the investigation of potential risk factors. Conventionally, $B_{kk} = 0$.
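The mapping from covariates to a row-stochastic $P_{it}$ implied by equation (2) is a softmax with reference category $B_{kk} = 0$; a small sketch (ours, with hypothetical values):

```python
import numpy as np

def transition_matrix(x, B):
    """Build P_it from covariates x (d,) and coefficients B (J, J, d),
    where B[k, j] plays the role of B_kj and B[k, k] is fixed at zero."""
    J = B.shape[0]
    logits = B @ x                                   # logits[k, j] = x' B_kj
    logits[np.arange(J), np.arange(J)] = 0.0         # enforce B_kk = 0
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)          # softmax over destination states

# Example with J = 3 states and d = 3 covariates (intercept, gender, age).
rng = np.random.default_rng(1)
B = 0.5 * rng.normal(size=(3, 3, 3))
P = transition_matrix(np.array([1.0, 0.0, 0.4]), B)
print(P, P.sum(axis=1))                              # each row sums to one
```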
However, extreme caution is required when interpreting the estimated transition model if the DT-EHMFM is applied directly to irregularly spaced datasets. When data are sampled irregularly, multiple complex transitions can occur between any two consecutive observations. Intuitively, given $w_{it}$, the distribution of $w_{i,t+1}$ tends to approach a uniform distribution as the interval between observations lengthens. Therefore, additional bias will be introduced when the DT-EHMFM is applied directly to a dataset with varying time intervals.
CT-EHMFM
In contrast to the DTHMM, which requires equal time intervals, the CTHMM takes the effects of the time interval into account. Thus, instead of depending directly on the transition probability matrix $P$, the continuous-time Markov process is characterized by a transition intensity matrix $Q$ (Albert, 1962), which is the limit of the transition probability matrix $P$ as the time interval approaches zero. Suppose that $\delta_{it}$ is the number of pre-specified time units between the $t$-th and $(t-1)$-th observations; then the transition intensity for subject $i$ from state $j$ to state $k$ at time $t$ is
$$q_{jk} = \lim_{\delta_{it} \to 0} \frac{P(w_{it} = k \mid w_{i,t-\delta_{it}} = j)}{\delta_{it}} > 0, \quad j \neq k, \tag{3}$$
and $q_{jj} = -\sum_{k \neq j} q_{jk}$. The corresponding transition probability matrix $P(\delta_{it})$ can be calculated as the matrix exponential of $\delta_{it} Q$. The time intervals are assumed to be independent.
To investigate the impact of covariates on the transition rates, the transition intensity matrix can be modeled through a log-linear model (Cook et al., 2002; Habtemichael et al., 2018), such that $\log(q_{jk} \mid x_{it}) = x_{it}' B_{jk}$. Although the CT-EHMFM is much more general than the DT-EHMFM, calculating the exponential of a matrix can be challenging. For simplicity, we approximate $\exp(Q)$ by $(I + Q/a)^a$ for some sufficiently large $a$ (Ross et al., 1996).
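A quick numerical check of this approximation against scipy's matrix exponential (our own sketch with a made-up intensity matrix):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state transition intensity matrix Q: rows sum to zero.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.05, -0.15,  0.10],
              [ 0.02,  0.08, -0.10]])
delta = 1.5                                     # time gap delta_it

P_exact = expm(delta * Q)                       # P(delta) = exp(delta * Q)
a = 1024
P_approx = np.linalg.matrix_power(np.eye(3) + delta * Q / a, a)
print(np.abs(P_exact - P_approx).max())         # small for large a
print(P_exact.sum(axis=1))                      # rows of P(delta) sum to one
```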
Stabilized Expectation-Maximization Algorithm (SEMA)
Let $\lambda = (\{\mu_j\}_{j=1}^J, \{\Lambda_j\}_{j=1}^J, \Psi, \{B_{kj}\}_{k,j=1}^J, \pi)$. Given the sequence of latent states $W_i$ and the latent factor scores $Z_i$ for each $i$, and using the Markov property of the state sequence and the conditional independence of $y_{it}$ given $w_{it}$, a joint probability distribution of the observations and all latent variables can be constructed as follows:
$$L_{ci}(\lambda) = P(w_{i1}) \times \prod_{t=2}^{T_i} P(w_{it} \mid w_{i,t-1}, x_{it}) \times \prod_{t=1}^{T_i} P(y_{it} \mid w_{it}, z_{it}) P(z_{it}), \tag{4}$$
which is also known as the complete likelihood function with full information for individual $i$. By the independence of $Y_i$, $W_i$, and $Z_i$ across $i$, the complete likelihood function $L_c$ for the whole sample is obtained by taking the product of equation (4) over $i$.

Our goal is to estimate $\lambda$ by maximizing the likelihood function $L_c$, or its logarithm $l_c$. Since both $W_i$ and $Z_i$ are unobserved, the expectation-maximization (EM) algorithm is commonly used to identify the maximum likelihood estimator (MLE). As its name suggests, the EM algorithm finds a local maximum of the marginal likelihood by iteratively applying the expectation and maximization steps discussed below.
Expectation Step (E-step)

The E-step computes the expectation of $l_c$ given the observations, with respect to the current conditional distribution of the unobserved variables and the current parameter estimates $\lambda^v$. Denote the target expectation $E_{\lambda^v}[l_c(\lambda) \mid Y, X]$ by $\Omega(\lambda, \lambda^v)$. While an explicit form of the probability density function of $z_{it}$ exists, the calculation of the conditional state probabilities can be computationally heavy. Therefore, we utilize a scaled version of the forward-backward algorithm (FBA) (Rabiner, 1989) to obtain the conditional state probabilities efficiently.
Specifically, we first define the forward probability $\alpha_{ij}(t) = P(w_{it} = j \mid y_{i1}, \cdots, y_{it})$. Denote by $P_j(y_{it})$ the probability density function of $y_{it}$ given $w_{it} = j$, and by $c_i(t)$ the conditional probability of observation $y_{it}$ given all past observations. For each individual $i$ and state $j$, using a recursion scheme, the forward probabilities at $t = 1, \cdots, T_i$ are calculated as:
$$\alpha_{ij}(1) = \frac{\pi_j P_j(y_{i1})}{\sum_{j=1}^{J} \pi_j P_j(y_{i1})} = \frac{\pi_j P_j(y_{i1})}{c_i(1)}; \tag{5}$$
$$\alpha_{ij}(t) = \frac{P_j(y_{it}) \sum_{k=1}^{J} \alpha_{ik}(t-1) P_{it,kj}}{\sum_{j=1}^{J} P_j(y_{it}) \sum_{k=1}^{J} \alpha_{ik}(t-1) P_{it,kj}} = \frac{P_j(y_{it}) \sum_{k=1}^{J} \alpha_{ik}(t-1) P_{it,kj}}{c_i(t)}. \tag{6}$$
Then, we define the backward probability $\beta_{ij}(t)$ as the scaled probability of the future observations $y_{i,t+1}, \cdots, y_{iT_i}$ given $w_{it} = j$, and we use the following recursion to update the backward probabilities at $t = T_i, \cdots, 1$:
$$\beta_{ij}(T_i) = 1, \qquad \beta_{ij}(t) = \sum_{k=1}^{J} \frac{P_{i,t+1,jk} \, P_k(y_{i,t+1}) \, \beta_{ik}(t+1)}{c_i(t+1)}. \tag{7}$$
After that, in the smoothing step, denote $\epsilon^v_{ikj}(t) = P(w_{it} = j, w_{i,t-1} = k \mid Y_i, \lambda^v)$ and $\gamma^v_{ij}(t) = P(w_{it} = j \mid Y_i, \lambda^v)$. The target conditional state probabilities are functions of the forward and backward probabilities as follows:
$$\gamma^v_{ij}(t) = \alpha_{ij}(t) \beta_{ij}(t), \qquad \epsilon^v_{ikj}(t) = \frac{\alpha^v_{ik}(t-1) \, P_{it,kj} \, P_j(y_{it}) \, \beta^v_{ij}(t)}{c_i(t)}. \tag{8}$$
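A compact implementation of the scaled recursions (5)-(8) might look as follows (our own sketch; `emis[t, j]` stands for $P_j(y_{it})$, and a per-occasion array of transition matrices stands in for $P_{it}$):

```python
import numpy as np

def forward_backward(pi, P, emis):
    """Scaled forward-backward algorithm for one subject.
    pi   : (J,) initial state probabilities.
    P    : (T, J, J) transition matrices; P[t] is the step into occasion t (P[0] unused).
    emis : (T, J) emission densities P_j(y_t).
    Returns gamma (T, J), eps (T, J, J), and the log-likelihood."""
    T, J = emis.shape
    alpha, beta, c = np.zeros((T, J)), np.ones((T, J)), np.zeros(T)

    # Forward pass, equations (5)-(6), with scaling constants c_t.
    a = pi * emis[0]
    c[0] = a.sum(); alpha[0] = a / c[0]
    for t in range(1, T):
        a = emis[t] * (alpha[t - 1] @ P[t])
        c[t] = a.sum(); alpha[t] = a / c[t]

    # Backward pass, equation (7), scaled by the same constants.
    for t in range(T - 2, -1, -1):
        beta[t] = (P[t + 1] * (emis[t + 1] * beta[t + 1])).sum(axis=1) / c[t + 1]

    # Smoothing, equation (8).
    gamma = alpha * beta
    eps = np.zeros((T, J, J))
    for t in range(1, T):
        eps[t] = alpha[t - 1][:, None] * P[t] * (emis[t] * beta[t])[None, :] / c[t]
    return gamma, eps, np.log(c).sum()
```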
With the probabilities defined above, $\Omega(\lambda, \lambda^v)$ can be written as the sum of three parts:
$$\Omega(\lambda, \lambda^v) = \text{constant} + h(\pi) + h(\{B_{kj}\}_{k,j=1}^J) - \frac{1}{2} h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J), \tag{9}$$
where $h(\pi)$ depends on the initial state distribution, $h(\{B_{kj}\}_{k,j=1}^J)$ depends on the transition probability matrix, and $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$ is a function of the parameters $\Psi$, $\Lambda_j$, and $\mu_j$. Explicit forms are provided in Appendix B. Note that the E-step is identical for the DT-EHMFM and the CT-EHMFM, except for the dependence of $P_{it,kj}$ on $\delta_{it}$ in the CT-EHMFM.
Maximization (M-step)
Within each M-step, since $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$, $h(\pi)$, and $h(\{B_{kj}\}_{k,j=1}^J)$ do not share parameters, we maximize each of them separately. The estimators of $\pi$, $\Lambda_j$, $\mu_j$, and $\Psi$ can be derived directly by setting the first derivatives of $h(\pi)$ and $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$ equal to 0 (see Appendix B for details). For $\{B_{kj}\}_{k,j=1}^J$, a one-step Newton-Raphson (NR) algorithm is implemented.
First, considering the DT-EHMFM, let $S_{kj}$ be the first-order partial derivative of $h(\{B_{kj}\}_{k,j=1}^J)$ with respect to $B_{kj}$, and let the $(j, j')$ block entry of $M_k$, denoted $M_k(j, j')$, be the negative second-order partial derivative with respect to $B_{kj}$ and $B_{kj'}$. Let $S_k$ and $B_k$ be defined by stacking over $j \neq k$, i.e., $S_k = (S'_{k1}, \cdots, S'_{k,k-1}, S'_{k,k+1}, \cdots, S'_{kJ})'$. Then $B_k$ is updated as $B_k^{v+1} = B_k^v + M_k^{-1} S_k$. Alternatively, to ensure the stability of the algorithm and control the distance between $B_k^{v+1}$ and $B_k^v$, we may update $B_k$ as $B_k^{v+1} = B_k^v + (M_k + S_k^T S_k)^{-1} S_k$.
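A sketch of the stabilized update (ours; `score` and `neg_hessian` stand for $S_k$ and $M_k$, assumed to be computed elsewhere, and the scalar $S_k^T S_k$ is read as multiplying the identity):

```python
import numpy as np

def stabilized_newton_step(B_k, score, neg_hessian):
    """One stabilized NR update for the stacked coefficient vector B_k.
    The added score' score term damps the step, Levenberg-Marquardt style,
    keeping B_k^{v+1} close to B_k^v when the score is large."""
    damping = float(score @ score) * np.eye(len(B_k))
    return B_k + np.linalg.solve(neg_hessian + damping, score)

# Toy example with a 4-dimensional stacked coefficient vector.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
M = A @ A.T + np.eye(4)                     # a positive definite stand-in for M_k
print(stabilized_newton_step(np.zeros(4), rng.normal(size=4), M))
```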
Second, considering the CT-EHMFM: although the corresponding likelihood function has the same form as that of the DT-EHMFM, the maximization step for the CT-EHMFM is more complicated, as it involves operations on the matrix exponential. Let $\theta$ be an ordered vector of all the transition model parameters, such that $\theta = \mathrm{vec}(\{B'_{kj}\}, k \neq j)$. Recalling the first derivative of the matrix exponential (Zhou et al., 2020) and using Theorem 1 in Van Loan (1978),
$$\frac{\partial}{\partial \theta_u} \exp(A(\theta_u)) = \int_0^1 \exp(uA(\theta_u)) \tilde{A}(\theta_u) \exp((1-u)A(\theta_u)) \, du = [\exp(H)]_{0:J,\, J:2J},$$
where $\tilde{A}(\theta_u) = (\tilde{A}_{ij}(\theta_u)) = \left(\frac{\partial A_{ij}(\theta_u)}{\partial \theta_u}\right)$ and
$$H = \begin{pmatrix} A(\theta_u) & \tilde{A}(\theta_u) \\ 0 & A(\theta_u) \end{pmatrix}.$$
Denote by $\frac{\partial P_{kj}(\delta_{it})}{\partial \theta_u}$ the $(k, j)$ entry of the first derivative of $P(\delta_{it})$ with respect to $\theta_u$ (i.e., the $u$-th entry of $\theta$). Having calculated the first derivative of $\exp(\delta_{it} Q) = P(\delta_{it})$ with respect to each component of $\theta$ accordingly, a variant of NR, the Fisher scoring (FS) algorithm (Kalbfleisch and Lawless, 1985), can be implemented directly to update the parameter vector $\theta$, avoiding the calculation of the second derivative of the matrix exponential. Specifically, let $S^*$ denote the score function and $S^*_u$ its $u$-th entry. Then,
$$S^*_u(\theta) = \frac{\partial h(\{B_{kj}\}_{k,j=1}^J)}{\partial \theta_u} = \sum_{i=1}^{N} \sum_{t=2}^{T_i} \sum_{j=1}^{J} \sum_{k=1}^{J} \frac{\epsilon^v_{ikj}(t)}{P_{kj}(\delta_{it})} \frac{\partial P_{kj}(\delta_{it})}{\partial \theta_u}. \tag{10}$$
Let $M^*$ be the Fisher information matrix (the negative expected Hessian). Its $(u, v)$ entry $M^*_{uv}$ has the form:
$$M^*_{uv}(\theta) = \sum_{i=1}^{N} \sum_{t=2}^{T_i} \sum_{j=1}^{J} \sum_{k=1}^{J} \frac{\gamma^v_{ik}(t-1)}{P_{kj}(\delta_{it})} \frac{\partial P_{kj}(\delta_{it})}{\partial \theta_u} \frac{\partial P_{kj}(\delta_{it})}{\partial \theta_v}. \tag{11}$$
After obtaining both the score function and the Fisher information matrix, the parameters $\theta$ can be updated as $\theta^{v+1} = \theta^v + M^*(\theta^v)^{-1} S^*(\theta^v)$. As with the DT-EHMFM, a stabilized version is employed in practice, with $\theta^{v+1} = \theta^v + \{M^*(\theta^v) + S^*(\theta^v)^T S^*(\theta^v)\}^{-1} S^*(\theta^v)$.
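The block-matrix trick for the derivative of the matrix exponential is easy to implement and verify numerically; the following sketch (our own code, with a made-up one-parameter intensity matrix) checks it against finite differences:

```python
import numpy as np
from scipy.linalg import expm

def dexpm(A, dA):
    """Directional derivative of expm(A) along dA via Van Loan's block trick:
    expm([[A, dA], [0, A]]) carries the derivative in its upper-right block."""
    n = A.shape[0]
    H = np.block([[A, dA], [np.zeros((n, n)), A]])
    return expm(H)[:n, n:]

# Hypothetical 2-state intensity matrix with one parameter: q_12 = exp(theta).
def Q(theta):
    q12 = np.exp(theta)
    return np.array([[-q12, q12], [0.3, -0.3]])

theta, delta, h = 0.2, 1.0, 1e-6
dQ = (Q(theta + h) - Q(theta - h)) / (2 * h)          # dQ/dtheta
analytic = dexpm(delta * Q(theta), delta * dQ)        # d P(delta) / d theta
numeric = (expm(delta * Q(theta + h)) - expm(delta * Q(theta - h))) / (2 * h)
print(np.abs(analytic - numeric).max())               # agreement to ~1e-8
```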
The complete iterative algorithm is summarized in Appendix C. Note that the algorithm requires the specification of (K, J), which are typically unknown in practice. In this study, we propose to determine (K, J) using information criteria, the efficacy of which is evaluated in Section 5.3.
Simulation Study
This section conducts a simulation study to evaluate the proposed methods using synthetic data designed to resemble the AURORA data. The simulations use settings similar to the AURORA data with respect to the sample size ($N$), the number of observations per subject ($T_i$), the number of response variables ($p$), and the number of covariates ($d$) in the transition model. Specifically, the synthetic data are generated randomly with $N = 200$, $p = 23$, and $d = 3$; $T_i = 10$ for the discrete-time (DT) setting, while $T_i \in [50, 100]$ for the continuous-time (CT) setting. Furthermore, we assume that $J = 3$ and $K = 3$. See Appendix D for the complete data generation process and the true values of the parameters. In the following, Subsection 5.1 evaluates the reliability of the proposed model by comparing the empirical results of parameter estimates with their respective true values. Comparing the performance of the proposed method to that of baseline methods, Subsection 5.2 demonstrates the benefit of integrating the EFM and the feature-based TM with the standard HMMs. Finally, Subsection 5.3 explores the performance of information criteria in model selection.
Simulation 1
To validate the estimation procedure, we implement the SEMA under the assumption that both J and K are known a priori. Initial values for parameters are determined by first fitting Gaussian Mixture Models (GMM) and then fitting EFM for each estimated group.
Guided by insights from a pilot study, we set the maximum number of iterations for each replication at 100. The reliability and precision of the proposed methods are then evaluated from two perspectives: i) the accuracy of each individual parameter estimate and ii) the misclassification rate ($C_{mis}$), which quantifies the proportion of estimated states that diverge from the actual states.
The accuracy of $\pi$, $\mu$, $\Lambda$, and $\Psi$ is determined by calculating the average absolute difference (AAD) between the parameter estimates and their true values. Mathematically, the AAD of a parameter matrix $o$ is expressed as $\mathrm{AAD}(o) = \frac{1}{r}\sum_{i=1}^{r} |\hat{o}_i - o_i|$, where $o_i$ is a single entry in the matrix $o$ and $r$ denotes the total number of free parameters in the parameter matrix $o$. The means of the AADs (standard errors in parentheses) aggregated over 100 random seeds are presented in Table 1. For both the CT-EHMFM and the DT-EHMFM, the mean AADs of all parameter matrices are sufficiently close to zero with small standard errors, suggesting good parameter recovery. In Table 2, we present the mean bias (standard error in parentheses) of each parameter in the transition model. The mean bias of each parameter in the transition model is close to zero for both the CT-EHMFM and the DT-EHMFM. Nonetheless, the standard errors of the parameter estimates in the transition model of the CT-EHMFM are considerably smaller than those of the DT-EHMFM, which is primarily attributable to the longer panel lengths $T_i$. In the DT-EHMFM setting, each subject has only ten observations, whereas each subject has at least 50 observations in the CT-EHMFM setting. Additional simulations revealed that $T_i$ is a critical factor influencing parameter estimation, as will be illustrated later.
Moreover, we present the mean (standard deviation) of $C_{mis}$ in the last column of Table 1. On average, only 0.24% (0.0005) and 0.22% (0.0010) of observations are misclassified under the CT-EHMFM and DT-EHMFM settings, respectively, demonstrating the outstanding performance of the SEMA algorithm in estimating the latent states.

Table 1: The mean (standard error) AADs of π, µ, Λ, and Ψ, and the C_mis of the estimates.

Parameter     π            µ            Λ            Ψ            C_mis
DT-EHMFM      .027(.014)   .040(.005)   .037(.002)   .030(.005)   .0023(.0010)
CT-EHMFM      .026(.013)   .015(.002)   .014(.001)   .011(.002)   .0024(.0005)

Table 2: Mean bias (standard error) of each parameter in the transition model.

          DT-EHMFM                                     CT-EHMFM
B_kj      B_kj0        B_kj1        B_kj2             B_kj0        B_kj1        B_kj2
B_12      -.050(.495)  -.067(.475)  -.006(.845)       .014(.142)   .017(.150)  -.080(.219)
B_13      -.024(.460)   .032(.408)  -.065(.700)      -.021(.126)   .010(.125)   .016(.182)
B_21      -.113(.483)  -.028(.409)   .104(.686)      -.018(.147)  -.004(.139)   .014(.214)
B_23      -.016(.453)  -.020(.426)   .050(.719)      -.003(.112)   .004(.097)   .005(.200)
B_31      -.037(.404)  -.055(.478)  -.007(.614)       .003(.112)   .002(.107)  -.006(.192)
B_32      -.013(.372)  -.016(.282)   .003(.561)       .006(.131)  -.032(.091)   .028(.232)
Intuitively, various factors, including the sample size ($N$), the number of measurements per individual ($T_i$), the sizes of $J$ and $K$, the size of the common variance $\Psi$, the differences in $\mu_j$ and $\Lambda_j$ between states, and the frequency of state transitions, can affect the performance of parameter estimation. Additional simulations for both the DT-EHMFM and the CT-EHMFM in Appendix E.2 reveal that the estimation performance for $\mu$, $\Lambda$, $\Psi$, and $B$, and the rate of correct classification, improve when (i) the common variances decrease, (ii) the differences in $\mu_j$ and $\Lambda_j$ between states increase, (iii) $J$ decreases, or (iv) the sample size ($N$) or panel length ($T_i$) increases.
Increasing the size of $K$ or using a $B$ that induces infrequent transitions has little effect on the estimation of the majority of parameters, but it enhances the precision of the transition probability estimation, thereby reducing the misclassification rate. The estimation of $\pi$ is improved solely by increasing the sample size ($N$) or the state-to-state differences in $\mu_j$.
Simulation 2
This section compares the performance of the proposed methods and the baseline approaches in correctly identifying the latent states. Three benchmark methods are considered: i) TM+independent HMM, which assumes independence among the observed features given the states; ii) CFM+TM+HMM, which addresses the interrelationships but inaccurately pre-specifies the latent factor structure by setting certain loading-matrix entries to zero; and iii) EFM+HMM, which assumes a homogeneous transition probability matrix for all subjects. We first repeat the data generation process of Simulation 1. Then, we consider three additional scenarios by adjusting the state-to-state differences in $\mu_j$ to be closer ($\mu$: medium diff), increasing the similarity of the $\Lambda_j$ across states ($\Lambda$: medium diff), and increasing the magnitude of the covariance matrix $\Psi$ ($\Psi = 2I$), respectively.
As depicted in Figure 1, our proposed methods (CT-EHMFM and DT-EHMFM) consistently outperform the benchmark methodologies in both the CT and DT settings. Regardless of sample size, our methods consistently achieve the lowest misclassification rate, nearly approximating zero, thereby emphasizing the importance of each component of our proposed models. Specifically, the comparison with TM+independent HMM shows the importance of accounting for the interrelationships between observed features; the comparison with CFM+TM+HMM reveals the risk of incorrectly specifying the interrelationship structure; and the comparison with EFM+HMM demonstrates the inadequacy of assuming homogeneous transition probabilities.

Figure 1: $C_{mis}$ of various methods. The error bars represent the 95% CI. For the DT setting, $T = 10$; for the CT setting, $50 \le T \le 100$. The first column shows the results under the settings used in Simulation 1. The last three columns summarize the results under settings obtained by varying the true values of $\mu$, $\Lambda$, and $\Psi$, respectively. The true values of $\mu$ and $\Lambda$ with a medium state-to-state difference can be found in Appendix E.2.
Simulation 3
Information criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) have been widely used in model selection (Preacher et al., 2013; Song et al., 2017). Within this simulation study, we investigate whether the AIC or BIC is reliable for determining $J$ and $K$ simultaneously. We repeat the data generation process of Simulation 1, but implement the proposed methods with a different set of $(J, K)$ for each replicate when fitting the generated data. Let $J \in \{2, 3, 4\}$ and $K \in \{2, 3, 4\}$. We consider all possible combinations of $J$ and $K$, yielding a total of nine fitted candidate models for each replicate.

Table 3 presents the results of 100 replications, suggesting that both BIC and AIC performed well in model selection. In the simulation study for the DT-EHMFM, AIC recommends a model with the correct $J$ and $K$ in 94% of replications, while BIC yields the accurate recommendation in 100% of replications. Notably, as the total number of observations increases, AIC's performance improves (see related results in Appendix E.3). In the case of the CT-EHMFM, both AIC and BIC consistently recommend the model with accurate $J$ and $K$. Therefore, we believe that the sample size and the number of observations per individual in the processed AURORA data will yield reliable information-criteria-based model selection results and, consequently, reliable parameter estimation.

Table 3: The percentage of (J, K) pairs selected based on AIC/BIC.

        J  K    Percentage (DTE)    Percentage (CTE)
AIC     3  3    94%                 100%
        4  3    6%                  -
BIC     3  3    100%                100%
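For reference, a minimal sketch of how such a grid of fitted models can be scored, given each fit's maximized log-likelihood and free-parameter count (our own illustration with hypothetical numbers):

```python
import numpy as np

def aic_bic(loglik, n_params, n_obs):
    """Standard information criteria; smaller is better."""
    return -2 * loglik + 2 * n_params, -2 * loglik + n_params * np.log(n_obs)

# Hypothetical fits over a (J, K) grid: (log-likelihood, number of free parameters).
fits = {(2, 2): (-51200.0, 160), (3, 3): (-50210.0, 240), (4, 4): (-50180.0, 340)}
n_obs = 200 * 75          # subjects times average panel length, for illustration
for (J, K), (ll, npar) in fits.items():
    aic, bic = aic_bic(ll, npar, n_obs)
    print((J, K), round(aic, 1), round(bic, 1))
```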
Analysis of the AURORA Data
Due to the inherent irregularity in the collection of mobile device data, we apply the more general CT-EHMFM to the smartwatch data from the AURORA study. We consider a collection of 54 candidate models ($J \in \{1, 2, \cdots, 6\}$; $K \in \{1, 2, \cdots, 9\}$). For each candidate model, the SEMA algorithm is implemented with multiple random seeds, and the seed yielding the highest estimated likelihood is selected. Then, we use the AIC and BIC to compare all fitted candidate models with different $J$ and $K$. Finally, a model with three states ($J = 3$) and eight factors per state ($K = 8$) is suggested. In the following subsections, we focus on the interpretation of the parameter estimates and the biological findings from four perspectives: i) the interpretation of the three estimated states; ii) the co-occurring patterns of symptoms; iii) the relationship between the transition probabilities and demographic factors such as age and gender; and iv) the common structure of the loading matrix.
Interpretation of Hidden States
To investigate the biological differences between the states, we first focus on the selected features. Figure 2 depicts the scaled sample means of each feature across the different states, along with the corresponding 99% confidence intervals (CIs). Further pairwise Tukey tests indicate significant differences between states for almost all features, with the exception of states 1 and 2 for amplitude, SWCK, L5, and NNskew.q3.

Figure 2: Relative sample mean for the features in each estimated state. The error bars represent the 99% CIs, which are small and hence hard to distinguish.

Overall, features related to the average heart rate (NNmean-related features), heart rate variability (SDNN-related features), and heart deceleration capacity (dc-related features) vary significantly between the three latent states. The values of these features show a sequential decrease from state 1 to state 2, and then to state 3. According to previous research, lower heart rate variability and deceleration capacity are associated with a higher mortality rate (Kleiger et al., 2005; Kantelhardt et al., 2007). Therefore, it is reasonable to conclude that latent states 1 to 3 represent participants' health in descending order, with state 1 being the healthiest and state 3 being the least healthy. Moreover, regarding the activity features, states 1 and 2 have similar but higher means compared to state 3, indicating that participants in states 1 and 2 have higher levels of daily activity and are thus in better health than those in state 3.

Among the features related to heart rate randomness or unpredictability (lfhf, ApEn, and SD1SD2), state 3 demonstrates significantly higher values for the SD1SD2-related features but lower values for the lfhf-related features compared to states 1 and 2, suggesting a different interpretation of the estimated states than our previous one. However, it is important to note that previous studies have suggested that the relationship between these features and the psychological or physiological state is neither straightforward nor unique (von Rosenberg et al., 2017).

To confirm the validity of the three states, we further compare their differences regarding the self-reported symptoms collected from a flash survey (details are provided in Appendix G). Based on the RDoC framework, ten latent constructs associated with APNS were developed using flash survey items selected by domain experts: Pain, Loss, Sleep Discontinuity, Nightmare, Somatic Symptoms, Mental Fatigue, Avoidance, Re-experience, and Anxious. Retaining only the observations for each individual whose estimated states are known on the same day they submitted survey responses, we summarize the flash survey data with means and 95% CIs in Figure 3, where 0 represents the least severity and 1 the greatest severity.

Figure 3: Sample mean for each symptom in each estimated state. The error bars represent the 95% CI.
Overall, state 1 exhibits the lowest severity level for all ten symptoms, while state 3 has the highest severity level. Based on the Tukey tests, while states 1 and 2 are not statistically different in hyperarousal, re-experience, anxiety, and somatic symptoms, they diverge significantly from state 3 in these constructs. While the differences in nightmare and sleep discontinuity between states 3 and 2 are not significant, they are statistically more severe than in state 1. For mental fatigue and depression, only the difference between state 1 and state 3 is statistically significant.
In summary, both the flash survey data and the AURORA data (HRV, activity) support our interpretation of the three latent states: state 1 is the healthiest, while state 3 indicates the most severe APNS symptoms.
Co-occurring Pattern of Symptoms
When studying the pattern of co-occurring symptoms within each hidden state, we limit our attention to observations collected during the first week. For each estimated state, the correlations between all ten symptoms are calculated. In state 1 (the relatively healthy state), there is a high degree of correlation between hyperarousal and anxiety, which implies that if a patient in state 1 experiences severe hyperarousal symptoms, he or she is highly likely to also suffer from severe anxiety symptoms. In other words, there is a high likelihood of the concurrent manifestation of hyperarousal and anxiety in patients in state 1.
In state 2, symptoms typically do not co-occur due to the lack of a high correlation between any pair of symptoms. In state 3 (the state with more severe disorders), symptoms such as depression, hyperarousal, anxiety, and re-experience are more likely to co-occur.
Transition Probability
This section investigates the heterogeneity of the one-day transition probabilities among subjects by analyzing the transition probabilities with a time interval $\delta_{it} = 1$. We estimated the transition probabilities for males and females across the sample age range, as depicted in Figure 4. Lines embellished with circles illustrate the probability of remaining in the same state, lines adorned with stars indicate the likelihood of transitioning to a more severe state, and lines marked with 'x' reflect the chance of improvement in psychological condition.

Figure 4: Estimated transition probabilities with $\delta_{it} = 1$; (a, b) indicates a transition from state a to state b.
Overall, both males and females tend to remain in their current state, with infrequent state transitions, aligning with most of the existing literature. For the male group, the probability of staying in states 3 and 2 increases with age, while the probability of staying in state 1 decreases with age. Moreover, while the likelihood of psychological deterioration increases with age, the chance of improvement in psychological state decreases as age increases.
Specifically, while the probability of transitioning from the most severe state (state 3) to the healthiest state (state 1) approaches zero as age increases, the likelihood of the reverse transition approaches zero as age decreases, with the direct transition between state 1 and state 3 being the least likely. The female group exhibits a similar trend to the male group, with the notable exception that females have a greater likelihood of remaining in the most severe state (state 3) compared to males.
In summary, our analysis of the AURORA data suggests that older patients are more likely to transition to a more severe psychological state. Moreover, achieving psychological improvement becomes increasingly challenging as one ages.
Factor Loading Structure
Finally, we are interested in the structure of the factor loading matrix, which explains the interrelationships of the observed features within each state. To facilitate interpretation, the estimated loading matrices presented in Appendix F are rotated by the promax rotation (Hendrickson and White, 1964; Browne, 2001) and then standardized by the estimated standard deviation of each variable (derived from $\Lambda_j \Lambda_j' + \Psi$). Factor loadings with absolute values greater than .4 are considered to indicate moderate to high correlations between a feature and a factor (Peterson, 2000) and are bolded.
Overall, the factor loading matrices for the three states share some similarities but also have distinct differences, implying heterogeneous interrelationship structures between the states. For all states, factor 0 is defined by features related to heart rate variability and irregularity. While the structure of factor 0 in $\Lambda_2$ and $\Lambda_3$ is identical, being defined by SDNN-, dc-, and ApEn-related features, factor 0 in $\Lambda_1$ is defined solely by SDNN and ApEn, with the dc-related features contributing to factor 7 in $\Lambda_1$. Similarly, the components of factor 2 are consistent in states 2 and 3, consisting of features related to lfhf and SD1SD2. However, factor 2 in state 1 includes two additional SD1SD2-related statistics, which define factor 5 in both states 2 and 3.
Factor 1 is defined by ApEn-related statistics for all states, but the weighting of each statistic varies across states. Factor 3 is positively correlated with the mean heart rate (NNmean) and negatively correlated with the skewness of the heart rate (NNskew). Factor 4 summarizes the activity features and shows a negative correlation between average activity (i.e., meanAcc, amplitude, and L5) and the number of transitions between wake and sleep (SWCK), suggesting that individuals who engage in more daytime activity tend to have better sleep quality. Factor 6 is defined solely by the summary statistics of SDNN.
In summary, while the factor loading matrices do not differ dramatically between states, the distinction in the state means contributes the most to distinguishing between states in this case study.
Conclusions and Discussion
This paper investigates the unique challenges of analyzing longitudinal mobile health data, including interdependent variables with unknown interrelationship structures, heterogeneous transition probabilities, and irregular measurements. To address these issues, we
propose two HMM-based models, the DT-EHMFM and the CT-EHMFM, for multivariate longitudinal data collected regularly and irregularly, respectively. Furthermore, the performance of the corresponding Stabilized Expectation-Maximization algorithm for maximum likelihood estimation is supported by extensive simulation studies. Finally, we analyzed the AURORA data and drew biological findings comparable with previous research, implying that mobile health data sourced from consumer-grade devices, together with the proposed methods, have immense potential to facilitate mental health diagnostics and to help understand the dynamic transition mechanism.
The proposed methods can be extended in several ways. First, most entries in the estimated factor loading matrix are close to zero, indicating sparse factor loading matrices in real analysis. Although various methods (e.g., factor rotations and setting factor loadings below specific cutoffs to 0) are frequently used to simplify interpretation, the choice of these methods is subjective. The sparse exploratory factor loading analysis (Xie et al., 2010;Chen and Huang, 2012) provides an automated approach to set the loading entries of redundant variables to 0, thereby enhancing the interpretability of loading matrices without reliance on subjective factors. Therefore, incorporating sparse regularization into the factor loading matrix is an important extension of our current work worth studying.
Second, a large number of baseline covariates are typically available in real data. However, we have no prior knowledge about the significance of each covariate in determining the transition probability. Hence, integrating regularization into the transition model to assist with variable selection can be extremely useful.
Third, mental health is, according to domain knowledge, exceptionally heterogeneous. A key assumption of the HMM is the independence between the $y_{it}$'s given the hidden states. Intuitively, this assumption is easily violated in real applications, especially given the likelihood of autocorrelation between observations collected from the same subject. Therefore, adding a random effect to the current model to account for inter-patient heterogeneity is a natural extension (Altman, 2007; Song et al., 2017).
Finally, previous HRV-related studies have often been conducted in well-controlled laboratory environments. Thus, all existing HRV feature extraction tools rely on resting-state heart rate data. However, heart rate data collected in open environments will inevitably contain additional noise. For example, it is reasonable to expect that HRV features corresponding to different activity states (e.g., exercising and resting) would differ significantly.
Therefore, recognizing the lack of tools to extract HRV features corresponding to different activity states, we believe it would be advantageous to develop a preprocessing pipeline that concurrently processes heart rate and activity data to derive appropriate HRV features.

Appendix A: Description of AURORA Data

In the AURORA study (McLean et al., 2020), trauma survivors aged 18-75 presenting to participating EDs within 72 hours of trauma exposure were screened for enrollment eligibility. Motor vehicle collisions (MVC), physical assault, sexual assault, falls >10 feet, or mass casualty incidents automatically qualified for enrollment. Major exclusion criteria included administration of general anesthesia, long bone fractures, laceration with significant hemorrhage, visual or auditory impairment precluding completion of web-based neurocognitive evaluations and/or telephone follow-ups, prisoners, pregnancy or breastfeeding, and ongoing domestic violence. Proficiency in written and spoken English and owning an internet-accessible iOS/Android smartphone were also prerequisites. Participants used in this study are from the third data freeze, which includes those who were enrolled at least up to day 67 of the study. Participants who became pregnant, were incarcerated, or died during the study are excluded.

Prior research has suggested that heart rate variability (HRV) and activity features are associated with APNS (Hartmann et al., 2019; Jung et al., 2019). Therefore, our focus is primarily on the HRV and activity features extracted from the PPG and accelerometer data collected from Verily's smartwatches during the first 100 days post-enrollment. Activity features are extracted over a 24-hour window to evaluate the participants' daily activity patterns. After converting the accelerometer data to activity counts, the mean and standard deviation of the activity counts for each 24-hour interval are calculated. Additionally, cosinor rhythmometry features were derived to capture the circadian rhythm. HRV features were derived by first calculating the beat-to-beat (BB) interval (Shaffer and Ginsberg, 2017) time series from the PPG data. After identifying and removing noise from the BB interval time series, normal-to-normal (NN) interval time series are derived. Finally, the NN interval time series was analyzed using a 5-minute window with a 30-second sliding step to derive the HRV features. In the following subsections, we discuss the activity data and HRV data in more detail before concluding with a summary of the final dataset of interest.
Appendix A.1: Activity Features
There are four activity features considered in this study. The meanAcc is the mean of the activity counts calculated by the approach described in Borazio et al. (2014), serving as a descriptive statistic of the level of activity in the given time period. The amplitude is a feature derived from Cosinor Rhythmometry Analysis (Cornelissen, 2014) to quantify the circadian rhythm. By applying the Cole-Kripke algorithm (Cole et al., 1992) to the accelerometry epochs, each epoch is classified as either wake or sleep. The SWCK is the number of transitions between wake and sleep epochs divided by the length of the data. Based on the raw accelerometer data (Van Someren et al., 1999), the average activity over the five least active hours (L5) is calculated, indicating nighttime activity.

Appendix A.2: Heart Rate Features

Technically, HRV features can be grouped into three categories: time-domain measures, frequency-domain measures, and nonlinear measures (Shaffer and Ginsberg, 2017). For this study, seven heart rate characteristics were chosen to assess the mean, variability, and complexity of the heart rate time series. NNmean is the average heart rate, while NNskew and SDNN are the skewness and standard deviation of the NN interval (Shaffer and Ginsberg, 2017), respectively. In particular, higher skewness indicates rapid accelerations or decelerations. The ratio of low-frequency power to high-frequency power is denoted by lfhf. A low lfhf ratio suggests parasympathetic dominance (i.e., engaging in tend-and-befriend behaviors), whereas a high lfhf ratio indicates sympathetic dominance (i.e., engaging in fight-or-flight behaviors) (Shaffer and Ginsberg, 2017). According to Kantelhardt et al. (2007), dc is a predictor of mortality in heart attack survivors: the lower the dc index, the greater the mortality risk. The remaining two variables characterize the time between successive heartbeats (the R-R interval). While SD1SD2 assesses the unpredictability of an R-R interval time series, ApEn measures its regularity and complexity. A large ApEn indicates R-R interval volatility, whereas a small ApEn indicates a steady and predictable temporal sequence of R-R intervals (Shaffer and Ginsberg, 2017). To align with the activity data, daily statistical summaries of each HRV feature, such as the mean, median, minimum, maximum, kurtosis, skewness, interquartile range, and standard deviation, are used.
Appendix A.3: Data Pre-Processing
We consider a subset of the AURORA data by selecting patients involved in an MVC before presenting to the ED, to investigate the dynamic change in patients' mental health conditions in the 100 days following MVC exposure. We retain only observations with no missing activity data and a positive wake percentage. Regarding the heart rate data, an individual ideally has 2880 records per day; we keep only observations for days with at least 30% (equivalent to 2880 × 0.3) of the recordings in order to derive representative daily summary statistics. Before fitting our model, we further apply the Box-Cox transformation (Osborne, 2010) to each variable to reduce the skewness of the original data and eliminate outliers, and we standardize the data by dividing each variable by its sample standard deviation.
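As an illustration of this preprocessing step (a sketch using scipy; the column names are hypothetical):

```python
import numpy as np
import pandas as pd
from scipy import stats

def preprocess(df):
    """Box-Cox transform each strictly positive column, then scale to unit SD."""
    out = {}
    for col in df.columns:
        x, _lmbda = stats.boxcox(df[col].to_numpy(dtype=float))
        out[col] = x / x.std(ddof=1)       # divide by the sample standard deviation
    return pd.DataFrame(out, index=df.index)

# Toy example with two skewed, positive features.
rng = np.random.default_rng(3)
df = pd.DataFrame({"SDNN.mean": rng.lognormal(3.0, 0.5, 500),
                   "meanAcc": rng.gamma(2.0, 1.5, 500)})
print(preprocess(df).describe().loc[["mean", "std"]])
```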
The final dataset consists of observations from 258 patients, with each patient's total number of records ranging from 17 to 99. In total, there are 23 variables of interest, of which four are derived from the activity data and 19 from the HRV data. In addition, the transition model considers age (ranging from 18 to 74) and gender (0 = male, 1 = female) to examine their influence on the transition probabilities. Note that the observations in this dataset are irregularly sampled and hence suitable for analysis using the CT-EHMFM. Alternatively, researchers can employ the proposed DT-EHMFM by selecting a subset of equally spaced observations; the largest aligned subset of the AURORA dataset consists of 180 patients, each with observations collected on days {2, 12, ..., 82}.
Appendix B: Technical Details
In this section, we show the explicit forms of the components in the Ω(λ, λ v ) and the explicit forms of the MLE of π, Λ, and Ψ.
Appendix B.1: Supplement for E-step
Denote $\tilde{\Lambda}_j = (\Lambda_j, \mu_j) \in \mathbb{R}^{p \times (K+1)}$ and $\tilde{z}_{it} = (z_{it}^T, 1)^T \in \mathbb{R}^{K+1}$. Each of the three parts has an explicit form as follows:
$$h(\pi) = \sum_{i=1}^{N} \sum_{j=1}^{J} \gamma^v_{ij}(1) \log(\pi_j), \tag{A.1}$$
$$h(\{B_{kj}\}_{k,j=1}^J) = \sum_{i=1}^{N} \sum_{t=2}^{T_i} \sum_{j,k=1}^{J} \epsilon^v_{ikj}(t) \log(P_{it,kj}), \tag{A.2}$$
$$h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J) = \sum_{i=1}^{N} \sum_{t=1}^{T_i} \sum_{j=1}^{J} \Big[ \gamma^v_{ij}(t) \log|\Psi| + \gamma^v_{ij}(t)\, y_{it}' \Psi^{-1} y_{it} - 2 \gamma^v_{ij}(t)\, y_{it}' \Psi^{-1} \tilde{\Lambda}_j E_{\lambda^v}(\tilde{z}_{it} \mid y_{it}, w_{it}) + \gamma^v_{ij}(t)\, \mathrm{tr}\big(\tilde{\Lambda}_j' \Psi^{-1} \tilde{\Lambda}_j E_{\lambda^v}(\tilde{z}_{it} \tilde{z}_{it}' \mid y_{it}, w_{it})\big) \Big]. \tag{A.3}$$
Note that, in the discrete-time case, the $\log(P_{it,kj})$ in equation (A.2) can be further expressed as $x_{it}^T B_{kj} - \log(\sum_{l=1}^{J} e^{x_{it}^T B_{kl}})$. Using the Woodbury matrix identity (Rasmussen, 2003) and letting $M^v_j = (I + \Lambda_j^{v\prime} \Psi^{v-1} \Lambda_j^v)^{-1}$, the two expectation terms in $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$ have the explicit forms:
$$E_{\lambda^v}(\tilde{z}_{it} \mid y_{it}, w_{it}) = \begin{pmatrix} M^v_j \Lambda_j^{v\prime} \Psi^{v-1} (y_{it} - \mu^v_j) \\ 1 \end{pmatrix}, \tag{A.4}$$
$$E_{\lambda^v}(\tilde{z}_{it} \tilde{z}_{it}' \mid y_{it}, w_{it}) = \begin{pmatrix} M^v_j + E_{\lambda^v}(z_{it} \mid \cdot) E_{\lambda^v}(z_{it}' \mid \cdot) & E_{\lambda^v}(z_{it} \mid \cdot) \\ E_{\lambda^v}(z_{it}' \mid \cdot) & 1 \end{pmatrix}, \tag{A.5}$$
where $E_{\lambda^v}(z_{it} \mid \cdot)$ abbreviates $E_{\lambda^v}(z_{it} \mid y_{it}, w_{it})$.
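A small numerical sketch of (A.4)-(A.5) (ours), computing the posterior moments of the factor scores for one state:

```python
import numpy as np

rng = np.random.default_rng(4)
p, K = 6, 2
Lam = rng.normal(size=(p, K))                  # Lambda_j
Psi = np.diag(rng.uniform(0.5, 1.5, p))        # diagonal Psi
mu = rng.normal(size=p)                        # mu_j
Psi_inv = np.diag(1.0 / np.diag(Psi))          # cheap inverse since Psi is diagonal

# M_j = (I + Lam' Psi^{-1} Lam)^{-1}: posterior covariance of z given y and w = j.
M = np.linalg.inv(np.eye(K) + Lam.T @ Psi_inv @ Lam)

y = rng.normal(size=p)
Ez = M @ Lam.T @ Psi_inv @ (y - mu)            # E(z | y, w = j), equation (A.4)
Ezz = M + np.outer(Ez, Ez)                     # E(z z' | y, w = j), top-left block of (A.5)
print(Ez, Ezz, sep="\n")
```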
Appendix B.2: Supplement for M-step

Within each M-step, since $h(\pi)$, $h(\{B_{kj}\}_{k,j=1}^J)$, and $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$ do not share parameters, we maximize each of them separately. By setting the first derivative of $h(\pi)$ equal to 0, the parameters related to the initial state distribution are estimated as follows:
$$\pi_j^{new} = \frac{\sum_{i=1}^{N} \gamma^v_{ij}(1)}{\sum_{i=1}^{N} \sum_{k=1}^{J} \gamma^v_{ik}(1)}. \tag{A.6}$$
Similarly, the parameters used to characterize the conditional distribution of $y_{it}$ given $w_{it}$ are estimated by setting the first derivative of $h(\Psi, \{\Lambda_j\}_{j=1}^J, \{\mu_j\}_{j=1}^J)$ equal to 0. $\tilde{\Lambda}_j$ is updated as follows:
$$\tilde{\Lambda}_j^{new} = \left[\sum_{i=1}^{N} \sum_{t=1}^{T_i} \gamma^v_{ij}(t)\, y_{it}\, E_{\lambda^v}(\tilde{z}_{it} \mid y_{it}, w_{it})'\right] \left[\sum_{i=1}^{N} \sum_{t=1}^{T_i} \gamma^v_{ij}(t)\, E_{\lambda^v}(\tilde{z}_{it} \tilde{z}_{it}' \mid y_{it}, w_{it})\right]^{-1}. \tag{A.7}$$
Simultaneously, we obtain the updated estimate of $\Psi$ as follows:
$$\Psi^{new} = \frac{1}{\sum_{i=1}^{N} T_i} \,\mathrm{diag}\left\{\sum_{i=1}^{N} \sum_{t=1}^{T_i} \sum_{j=1}^{J} \gamma^v_{ij}(t)\, \big(y_{it} - \tilde{\Lambda}_j^{new} E_{\lambda^v}(\tilde{z}_{it})\big)\, y_{it}'\right\}. \tag{A.8}$$
Here, since we assume that $\Psi$ is a diagonal matrix, we restrict all off-diagonal entries of the estimator of $\Psi$ to be 0.
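A vectorized sketch of the closed-form updates (A.7)-(A.8) for one pass over the data (our own illustration; `gamma`, `Ez`, and `Ezz` are assumed to come from the E-step, with the augmented $\tilde{z} = (z', 1)'$):

```python
import numpy as np

def m_step_measurement(Y, gamma, Ez, Ezz):
    """Closed-form updates for Lambda_tilde_j and Psi.
    Y     : (n, p) observations stacked over all (i, t), n = sum_i T_i.
    gamma : (n, J) smoothed state probabilities gamma_ij(t).
    Ez    : (n, J, K+1) posterior means of z_tilde.
    Ezz   : (n, J, K+1, K+1) posterior second moments of z_tilde."""
    n, p = Y.shape
    J = gamma.shape[1]
    K1 = Ez.shape[2]
    Lam = np.zeros((J, p, K1))
    resid = np.zeros((p, p))
    for j in range(J):
        g = gamma[:, j]
        num = (g[:, None, None] * Y[:, :, None] * Ez[:, j, None, :]).sum(0)  # (A.7) numerator
        den = (g[:, None, None] * Ezz[:, j]).sum(0)                          # (A.7) denominator
        Lam[j] = num @ np.linalg.inv(den)
        resid += (g[:, None] * (Y - Ez[:, j] @ Lam[j].T)).T @ Y              # (A.8) inner sum
    Psi = np.diag(np.diag(resid)) / n      # keep only the diagonal, per the model
    return Lam, Psi
```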
Appendix C: SEMA

Algorithm 1: Stabilized Expectation-Maximization Algorithm (SEMA)

1: procedure SEMA({Y_i}, {X_i}, K, J, δ_1, δ_2, λ_0)
2:   λ^v ← λ_0; Δ_1 ← δ_1 + 1; Δ_2 ← δ_2 + 1
3:   while Δ_1 > δ_1 and Δ_2 > δ_2 do
4:     compute γ^v_{ij}(t), ε^v_{ikj}(t)                              ▷ E-step
5:     update h(π), h({B_kj}), and h(Ψ, {Λ_j}, {µ_j})
6:     update {µ_j}, {Λ_j}, Ψ, π by optimizing h(π), h(Ψ, {Λ_j}, {µ_j})   ▷ M-step
7:     update {B_kj} based on h({B_kj}) using the stabilized NR/FS
8:     Δ_1 ← |log P({Y_i} | λ) − log P({Y_i} | λ^v)|                  ▷ change in log-likelihood
9:     Δ_2 ← ∥λ − λ^v∥                                               ▷ change in parameters
10:    λ^v ← λ
11:  end while
12:  return λ̂ and {ŵ_it} = {argmax_j(γ_ij(t))}
13: end procedure
Appendix D: Synthetic Data Generation
To resemble the processed AURORA data with irregular measurements (i.e., the CT-EHMFM setting), we uniformly sampled $T_i$ from the interval [50, 100] for each subject. Then, we randomly selected $T_i$ integers from $\{1, \cdots, 100\}$ without replacement to obtain the sequence of occasions $t$ at which measurements are collected. The resulting time intervals $\{\delta_{it}\}$ are then calculated accordingly. To closely replicate the processed AURORA data with only regular measurements (i.e., the DT-EHMFM setting), $T_i$ is set to 10 for all subjects.
With the number of states $J$ and the number of factors $K$ both fixed at three, we first generated the data related to the latent states. The initial state of each individual is independently sampled from a multinomial distribution with probability $\pi = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. Given the initial states, each individual's latent state trajectory is then sampled according to the transition probabilities $P_{it}$ or $P_{it}(\delta_{it})$, with $\{B_{kj}\}$ and $x_{it}^T = (x_{it1}, x_{it2}, x_{it3})$. Mimicking the AURORA data, we assume that all three covariates in the transition model are baseline features that are static over time. While $x_{it1} = 1$ is an intercept, $x_{it2}$ is a binary variable uniformly and independently sampled from $\{0, 1\}$, and $x_{it3}$ follows a uniform distribution between 0 and 1. Given the dynamic state trajectories, supposing that individual $i$ is in state $j$ at time $t$, the observation vector $y_{it}$ is randomly drawn from a normal distribution with mean $\mu_j$ and covariance $\Lambda_j \Lambda_j' + \Psi$, where $\Psi = I$. The true values of the unknown parameters are summarized in Table A.1.

Table A.1: True values of µ, Λ, and B_kj used in the simulation studies (c·1_m denotes a block of m consecutive entries equal to c; loading matrices are given block-row-wise as (column 1, column 2, column 3)).

µ_1 = (15·1_2, 10·1_3, 5·1_6, 0·1_12)'
µ_2 = (17·1_2, 12·1_3, 7·1_6, 2·1_12)'
µ_3 = (19·1_2, 14·1_3, 9·1_6, 4·1_12)'

Λ_1: rows 1-2: (1, 1, 1); rows 3-7: (.7, 0, 0); rows 8-15: (0, .7, 0); rows 16-23: (0, 0, .7)
Λ_2: rows 1-2: (0, .7, 0); rows 3-4: (1, 1, 1); rows 5-7: (0, .7, 0); rows 8-15: (.7, 0, 0); rows 16-23: (0, 0, .7)
Λ_3: rows 1-4: (0, 0, .7); rows 5-6: (1, 1, 1); row 7: (0, 0, .7); rows 8-15: (0, .7, 0); rows 16-23: (.7, 0, 0)

DT setting: B_12 = (−2.95, −1, .5)', B_13 = (−2.95, −.5, .5)', B_21 = (−2.95, −.5, .5)', B_23 = (−2.95, −.5, .5)', B_31 = (−2.95, −.5, .5)', B_32 = (−2.95, 1, .5)'
CT setting: B_12 = (−2.5, 1, −1)', B_13 = (−2.5, 1, −1)', B_21 = (−3, 1, −1)', B_23 = (−2.5, 1, −1)', B_31 = (−3, 1, −1)', B_32 = (−3, 1, −1)'

Appendix E.2: Additional Results for Simulation 1 Under Various Settings

This section presents additional simulation results that investigate the impact of various factors on estimation performance. Using the simulation settings provided in Appendix D as the baseline setup, we conducted eight additional sets of simulation studies by systematically varying individual components. Specifically, for each test, we maintain the baseline setup except for the component under examination. These components include: i) the sample size ($N$); ii) the number of measurements for each individual ($T_i$); iii) $J$; iv) $K$; v) the size of the common variance $\Psi$; vi) the state-to-state difference in $\mu_j$; vii) the state-to-state difference in $\Lambda_j$; and viii) the transition frequency. Descriptions of the parameter specifications under the various circumstances follow, together with the results.

When evaluating the effect of the common variance $\Psi$, we assess scenarios in which the common variance equals $.1I$, $.5I$, and $1I$ (baseline), respectively. Regarding the effect of the $\mu$ distinction, we consider two additional settings of $\mu$ by adjusting the state-to-state differences in $\mu_j$, as specified in Table A.3.

Table A.3: True values of µ with different levels of state-to-state difference.

µ: large diff (baseline) — µ_1 = (15·1_2, 10·1_3, 5·1_6, 0·1_12)', µ_2 = (17·1_2, 12·1_3, 7·1_6, 2·1_12)', µ_3 = (19·1_2, 14·1_3, 9·1_6, 4·1_12)'
µ: medium diff — µ_1 = (15·1_2, 10·1_3, 5·1_6, 0·1_12)', µ_2 = (17·1_2, 12·1_3, 7·1_6, 0·1_12)', µ_3 = (19·1_2, 14·1_3, 9·1_6, 0·1_12)'
µ: minor diff — µ_1 = (15·1_2, 10·1_3, 5·1_6, 0·1_12)', µ_2 = (15.75·1_2, 10.75·1_3, 5·1_6, 0·1_12)', µ_3 = (16.5·1_2, 11.5·1_3, 5·1_6, 0·1_12)'

When investigating the effect of the $\Lambda$ distinction, we consider two additional settings of $\Lambda$ by adjusting the state-to-state differences in $\Lambda_j$, as outlined in Table A.4.

Table A.4: True values of Λ with different levels of state-to-state difference ('Λ: large diff' is the baseline Λ of Table A.1).

Λ: medium diff — Λ_1: row 1: (.3, .3, .3); rows 2-7: (.7, 0, 0); rows 8-15: (0, .7, 0); rows 16-23: (0, 0, .7). Λ_2: rows 1-3: (.7, 0, 0); row 4: (.3, .3, .3); rows 5-7: (.7, 0, 0); rows 8-15: (0, .7, 0); rows 16-23: (0, 0, .7). Λ_3: rows 1-5: (.7, 0, 0); row 6: (.3, .3, .3); row 7: (.7, 0, 0); rows 8-15: (0, .7, 0); rows 16-23: (0, 0, .7).
Λ: minor diff — identical to 'Λ: medium diff' with .3 replaced by .05.

For the test evaluating the effect of the transition frequency, a frequent transition is defined as the probability of remaining in the same state being less than 0.70. The $B$ corresponding to the frequent transition is specified in Table A.5, while the $B$ in the baseline setting corresponds to infrequent transitions. Note that the AAD of the estimated transition probability matrix with frequent transitions is .033(.012), while that with infrequent transitions is .022(.009). Since each individual has only 10 observations, frequent transitions help in estimating the transition model by providing more varied observations; however, the estimated probability matrix may be affected differently.
Appendix E.3: Additional Results for Simulation 3
This section investigates the performance of BIC/AIC in model selection for DT-EHMFM with varying sample sizes. We employ the same data generation process as in Simulation 3 discussed in the main paper, with the only difference being the number of observations per individual. Table A.8 summarizes the percentage of instances where a model with accurate J and K was selected. While BIC consistently recommended the model with accurate J and K, AIC increased the likelihood of recommending a model with accurate J and K as the sample size increased.
Appendix F: Additional Real Analysis Results
This section presents all of the parameter estimates for the fitted model with $J = 3$ and $K = 8$. Table A.9 contains the estimated $\mu_j$ for each state. The estimated loading matrix for state 1 is provided in Table A.10, while those for states 2 and 3 are provided in Tables A.11 and A.12, respectively. Finally, the parameter estimates for the transition model are displayed in Table A.13.
Appendix G: Flash Survey Data
Installed on the participants' smartphones, the Mindstrong Discovery™ application was used to deliver brief questionnaires (flash surveys). Participants were asked to respond daily for the first week of the study, then every other day until week 12, after which they were asked to respond weekly.
Based on the RDoC framework, ten latent constructs associated with APNS were developed using flash survey items selected by domain experts: Pain, Loss, Sleep Discontinuity, Nightmare, Somatic Symptoms, Mental Fatigue, Avoidance, Re-experience, and Anxious.
Participant scores for each construct were calculated based on their responses to the flash survey questions, with higher scores indicating more severe symptoms. While most construct scores range from −1 to 5, the score range for mental fatigue and somatic symptoms is (0, 12), and the range of the pain construct score is (−1, 10). For analytical purposes, we scaled each construct to fall within the range of [0, 1]. Moreover, we only consider observations for each individual whose estimated states are known on the same day they submitted survey responses.
References

Kim, J.-O. and Mueller, C. W. (1978). Introduction to Factor Analysis: What It Is and How to Do It. Sage.

Kleiger, R. E., Stein, P. K., and Bigger Jr, J. T. (2005). Heart rate variability: measurement and clinical utility. Annals of Noninvasive Electrocardiology, 10(1):88-101.

Lange, J. M., Gulati, R., Leonardson, A. S., Lin, D. W., Newcomb, L. F., Trock, B. J., Carter, H. B., Cooperberg, M. R., Cowan, J. E., Klotz, L. H., et al. (2018). Estimating and comparing cancer progression risks under varying surveillance protocols. The Annals of Applied Statistics, 12(3):1773.

Liu, X. and Chen, R. (2016). Regime-switching factor models for high-dimensional time series. Statistica Sinica, 26(4):1427-1451.

Liu, Y.-Y., Li, S., Li, F., Song, L., and Rehg, J. M. (2015). Efficient learning of continuous-time hidden Markov models for disease progression. Advances in Neural Information Processing Systems, 28:3599.

Madadi, M. (2019). A hidden Markov factor analysis framework for seizure detection in epilepsy patients.

Marquand, A. F., Wolfers, T., Mennes, M., Buitelaar, J., and Beckmann, C. F. (2016). Beyond lumping and splitting: a review of computational approaches for stratifying psychiatric disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1(5):433-447.

Maruotti, A., Bulla, J., Lagona, F., Picone, M., and Martella, F. (2017). Dynamic mixtures of factor analyzers to characterize multivariate air pollutant exposures. The Annals of Applied Statistics, 11(3):1617-1648.
Table 1 :
1The Mean (standard error) AADs of π, µ, Λ, and Ψ, and C mis of the estimations.
Parameter
π
µ
Λ
Ψ
C mis
DT-EHMFM .027(.014) .040(.005) .037(.002) .030(.005) .0023(.0010)
CT-EHMFM .026(.013) .015(.002) .014(.001) .011(.002) .0024(.0005)
Table 2
2presents the results of 100 replications, suggesting that both BIC and AIC performed well in model selection. In the simulation study for the DT-EHMFM, AIC recommends a model with the correct J and K in 94% of replications, while BIC yields the accurate recommendation in 100% of replications. Notably, as the total number of observations increases, AIC's performance will improve (see related results in Appendix E.3). In the case of the CT-EHMFM, both AIC and BIC consistently recommend the model with accurate J and K. Therefore, we believe that the sample size and the number
Table 2 :
2The percentage of (J, K) pairs selected based on AIC/BIC.
J K Percentage (DTE) Percentage (CTE)
AIC 3 3
94%
100%
4 3
6%
-
BIC 3 3
100%
100%
Figure 2: Relative sample mean for features in each estimated state. The error bars represent the 99% CI, which are small and hence hard to distinguish.

Figure 2 displays the relative sample mean for features in each of the estimated states, along with the corresponding 99% confidence interval (CI). Further pairwise Tukey tests indicate significant differences between states concerning almost all features, with the exception of state 1 and state 2 when concerning amplitude, SWCK, L5, and NNskew.q3.

Overall, features related to average heart rate (NNmean-related features), heart rate variability (SDNN-related features), and heart deceleration capacity (dc-related features) vary significantly between the three latent states. The values of these features show a sequential decrease from state 1 to state 2, and then to state 3. According to previous research, lower heart rate variability and deceleration capacity are associated with a higher mortality rate (Kleiger et al., 2005; Kantelhardt et al., 2007). Therefore, it is reasonable to conclude that latent states 1 to 3 represent participants' health in descending order, with state 1 being the healthiest and state 3 being the least healthy. Moreover, regarding activity features, states 1 and 2 have similar but higher means compared to state 3, indicating that participants in states 1 and 2 have higher levels of daily activity and thus are in better health than those in state 3.

Among all the features related to heart rate randomness or unpredictability (lfhf, ApEn, and SD1SD2), state 3 demonstrates significantly higher values for SD1SD2-related features but lower values for lfhf-related features compared to states 1 and 2, suggesting a different interpretation of the estimated states than our previous interpretation. However, it is important to note that previous studies have suggested that the relationship between these features and the psychological or physiological state is neither straightforward nor unique (von Rosenberg et al., 2017).

To confirm the validity of the three states, we further compare their differences regarding self-report symptoms collected from a flash survey (details are provided in Appendix G). Based on the RDoC framework, ten latent constructs associated with APNS were developed using flash survey items selected by domain experts: Pain, Loss, Sleep Discontinuity, Nightmare, Somatic Symptoms, Mental Fatigue, Avoidance, Re-experience, and Anxious.

Appendix A: Description of AURORA Data

In the AURORA study (McLean et al., 2020), trauma survivors aged 18-75 presenting to participating EDs within 72 hours of trauma exposure were screened for enrollment eligibility. Motor vehicle collisions (MVC), physical assault, sexual assault, falls >10 feet, or mass casualty incidents automatically qualified for enrollment. Major exclusion criteria include administration of general anesthesia, long bone fractures, laceration with significant hemorrhage, visual or auditory impairment precluding completion of web-based neurocognitive evaluations and/or telephone follow-ups, prisoners, pregnant or breastfeeding, and ongoing domestic violence. Proficiency in written and spoken English and owning an internet-accessible iOS/Android smartphone were also prerequisites. Participants used in this study are from the third data freeze, which includes those who were enrolled at least up to day 67 of the study. Participants who became pregnant, were incarcerated, or died during the duration of the study are excluded.

… on accelerometry epochs, each epoch is classified as either wake or sleep. The SWCK is the number of transitions between wake and sleep epochs divided by the length of the data. Based on the raw accelerometer data (Van Someren et al., 1999), the average activity over the five least active hours (L5) is calculated, indicating nighttime activity.

Appendix A.2: Heart Rate Features

Technically, HRV features can be grouped into three categories: time-domain measures, frequency-domain measures, and nonlinear measures.
Table A.1: True values of µ, Λ, and B_kj used in simulation studies. (Only the µ_j entries were recovered; 1_d denotes the d-dimensional vector of ones.)

| j | 1 | 2 | 3 |
| µ_j | (15·1_2, 10·1_3, 5·1_6, 0·1_12) | (17·1_2, 12·1_3, 7·1_6, 2·1_12) | (19·1_2, 14·1_3, 9·1_6, 4·1_12) |
Table A.3: True values of µ with different levels of state-to-state difference (j = 1, 2, 3; entries not recovered).
Table A.4: True values of Λ with different levels of state-to-state difference (j = 1, 2, 3; only the row label "Λ: large diff (baseline)" was recovered).
Table A.5: True values of B_kj with frequent transition.

|  | j = 1 | j = 2 | j = 3 |
| B^T_1j (DT): freq transit | / | (−1.5, −2, .75) | (−1.5, .75, .5) |
| B^T_2j (DT): freq transit | (−1.5, .75, 1.25) | / | (−1.5, −1.5, .75) |
| B^T_3j (DT): freq transit | (−1.5, .75, .75) | (−1.5, −2, .75) | / |
| B^T_1j (CT): freq transit | / | (.5, 1, −.5) | (.5, 1, −.5) |
| B^T_2j (CT): freq transit | (−1, .5, 1) | / | (−.25, 1, −.5) |
| B^T_3j (CT): freq transit | (.5, .5, 1) | (−.5, .5, 1) | / |

For tests evaluating the effects of J, K, N, and T_i, the baseline setups are modified as indicated in the following summary tables. Table A.6 presents the results under the DT settings, while Table A.7 displays the results under the CT settings.
Table A.6: The mean (standard error) AADs of π, µ, Λ, Ψ, and B, and C_mis of estimations under different DT settings.

| Setting | π | µ | Λ | Ψ | B | C_mis |
| Ψ = 1·I | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| Ψ = .5·I | .027(.014) | .033(.005) | .027(.002) | .014(.002) | .407(.095) | .0003(.0004) |
| Ψ = .1·I | .027(.014) | .026(.006) | .018(.002) | .003(.001) | .407(.095) | .0000(.0000) |
| µ: large diff | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| µ: medium diff | .027(.014) | .040(.005) | .038(.002) | .030(.005) | .426(.103) | .007(.002) |
| µ: minor diff | .034(.022) | .089(.056) | .098(.048) | .032(.008) | .714(.338) | .336(.290) |
| Λ: large diff | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| Λ: medium diff | .027(.014) | .038(.004) | .037(.002) | .029(.005) | .422(.101) | .005(.002) |
| Λ: minor diff | .027(.014) | .038(.004) | .046(.002) | .030(.005) | .414(.098) | .004(.001) |
| B: infreq transit | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| B: freq transit | .027(.014) | .041(.005) | .039(.002) | .031(.005) | .286^a(.080) | .005(.002) |
| J = 2 | .031(.023) | .032(.005) | .031(.002) | .029(.005) | .340(.132) | .001(.001) |
| J = 3 | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| J = 4 | .025(.010) | .046(.004) | .043(.002) | .030(.004) | .512(.094) | .003(.001) |
| K = 2 | .027(.014) | .040(.006) | .037(.002) | .029(.005) | .421(.098) | .006(.002) |
| K = 3 | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .002(.001) |
| K = 5 | .027(.014) | .041(.005) | .040(.002) | .034(.005) | .408(.095) | .0004(.0004) |
| N = 50; T = 10 | .057(.027) | .081(.012) | .077(.005) | .061(.010) | 1.199(.496) | .0029(.0022) |
| N = 100; T = 10 | .038(.020) | .057(.008) | .053(.003) | .043(.006) | .648(.226) | .0025(.0014) |
| N = 200; T = 10 | .027(.014) | .040(.005) | .037(.002) | .030(.005) | .408(.099) | .0023(.0010) |
| N = 300; T = 10 | .022(.011) | .033(.004) | .031(.002) | .024(.004) | .340(.083) | .0023(.0009) |
| N = 700; T = 10 | .014(.008) | .021(.003) | .020(.001) | .016(.003) | .206(.043) | .0021(.0005) |
| N = 200; T = 3 | .027(.014) | .074(.009) | .069(.004) | .056(.008) | 1.422(.607) | .0044(.0029) |
| N = 200; T = 5 | .027(.014) | .058(.007) | .053(.003) | .043(.006) | .707(.261) | .0028(.0017) |
| N = 200; T = 20 | .026(.014) | .029(.004) | .027(.002) | .021(.003) | .280(.073) | .0018(.0007) |
| N = 200; T = 200 | .027(.014) | .009(.001) | .009(.0005) | .006(.001) | – | – |
Table A.7: The mean (standard error) AADs of π, µ, Λ, Ψ, and B, and C_mis of estimations under different CT settings.

| Setting | π | µ | Λ | Ψ | B | C_mis |
| Ψ = 1·I | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .0024(.0005) |
| Ψ = .5·I | .026(.013) | .012(.002) | .011(.001) | .005(.001) | .119(.027) | .0003(.0001) |
| Ψ = .1·I | .026(.013) | .010(.003) | .007(.001) | .001(.000) | .119(.027) | .0000(.0000) |
| µ: large diff | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .002(.0005) |
| µ: medium diff | .027(.013) | .015(.001) | .015(.001) | .011(.002) | .124(.028) | .007(.001) |
| µ: minor diff | .034(.017) | .017(.003) | .478(.057) | .011(.002) | .161(.040) | .085(.004) |
| Λ: large diff | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .002(.0005) |
| Λ: medium diff | .026(.013) | .014(.002) | .014(.001) | .011(.002) | .122(.029) | .006(.001) |
| Λ: minor diff | .026(.013) | .014(.002) | .014(.001) | .011(.002) | .122(.028) | .004(.001) |
| B: infreq transit | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .002(.0005) |
| B: freq transit | .027(.013) | .015(.002) | .014(.001) | .011(.002) | .137(.043) | .007(.001) |
| J = 2 | .027(.021) | .012(.002) | .012(.001) | .011(.002) | .087(.026) | .0015(.0003) |
| J = 3 | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .0024(.0005) |
| J = 4 | .024(.011) | .017(.002) | .016(.001) | .011(.001) | .150(.028) | .0032(.0005) |
| K = 2 | .026(.013) | .015(.002) | .014(.001) | .010(.002) | .124(.027) | .0069(.0008) |
| K = 3 | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .0024(.0005) |
| K = 5 | .026(.013) | .015(.002) | .015(.001) | .012(.002) | .119(.027) | .0005(.0002) |
| N = 50; T_i ∈ [50, 100] | .052(.029) | .030(.004) | .028(.002) | .022(.003) | .254(.058) | .0027(.0008) |
| N = 100; T_i ∈ [50, 100] | .042(.019) | .021(.003) | .020(.001) | .015(.002) | .175(.038) | .0025(.0006) |
| N = 200; T_i ∈ [50, 100] | .026(.013) | .015(.002) | .014(.001) | .011(.002) | .120(.027) | .0024(.0005) |
| N = 500; T_i ∈ [50, 100] | .018(.010) | .009(.001) | .009(.001) | .007(.001) | .076(.017) | .0024(.0003) |
| N = 200; T_i ∈ [10, 30] | .029(.014) | .029(.004) | .027(.002) | .021(.004) | .251(.061) | .0028(.0009) |
| N = 200; T_i ∈ [30, 50] | .027(.014) | .020(.003) | .019(.001) | .015(.003) | .180(.043) | .0025(.0005) |
| N = 200; T_i ∈ [100, 150] | .028(.015) | .012(.001) | .011(.001) | .008(.001) | .098(.021) | .0023(.0003) |
Table A.8: Percentage of correct model selection of (J, K) based on BIC/AIC.

| (N, T) | AIC | BIC |
| (200, 10) | 94% | 100% |
| (200, 40) | 99% | 100% |
| (200, 200) | 100% | 100% |
Table A.9: Estimated µ_j.

| Feature | j = 1 | j = 2 | j = 3 |
| meanAcc | 7.146 | 7.231 | 6.860 |
| amplitude | 3.906 | 3.913 | 3.621 |
| SWCK | 3.269 | 3.220 | 3.151 |
| L5 | 3.765 | 3.800 | 3.555 |
| NNmean.min | 7.663 | 7.496 | 6.881 |
| NNmean.mean | 8.406 | 7.923 | 6.960 |
| NNskew.q3 | 0.998 | 0.956 | 1.305 |
| SDNN.max | 7.284 | 6.571 | 6.246 |
| SDNN.min | 28.705 | 27.628 | 26.944 |
| SDNN.mean | 12.673 | 11.584 | 10.616 |
| SDNN.var | 20.014 | 19.333 | 18.512 |
| lfhf.min | -4.393 | -4.428 | -4.829 |
| lfhf.mean | -0.125 | -0.046 | -0.642 |
| dc.min | 3.888 | 3.168 | 2.706 |
| dc.mean | 7.423 | 6.453 | 5.418 |
| SD1SD2.max | 101.371 | 101.620 | 102.439 |
| SD1SD2.min | 26.001 | 25.885 | 26.745 |
| SD1SD2.mean | 1,703.646 | 1,703.711 | 1,704.842 |
| SD1SD2.var | 23.038 | 23.197 | 23.943 |
| ApEn.max | 4.320 | 3.813 | 3.975 |
| ApEn.min | -2.694 | -1.915 | -2.280 |
| ApEn.mean | 1.901 | 1.596 | 1.146 |
| ApEn.var | -4.546 | -5.396 | -4.782 |

Table A.10: Estimated Standardized Promax Factor Loading Matrix: State = 1 (Λ_1). (Factor columns 0-7; only the rows below were recovered.)

| Feature | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| meanAcc | -0.014 | 0.022 | -0.013 | -0.032 | -0.989 | -0.049 | 0.032 | 0.001 |
| amplitude | 0.011 | 0.023 | 0.004 | 0.046 | -0.808 | -0.017 | -0.090 | -0.066 |
| NNskew.q3 | 0.066 | 0.043 | -0.122 | -0.388 | -0.009 | -0.379 | -0.354 | 0.062 |
| ApEn.min | -0.991 | 0.024 | 0.012 | -0.086 | -0.030 | 0.163 | 0.103 | -0.020 |
| ApEn.mean | -0.472 | -0.696 | 0.038 | -0.160 | 0.011 | 0.117 | 0.053 | -0.116 |
| ApEn.var | 1.063 | -0.246 | -0.024 | -0.090 | -0.030 | -0.119 | 0.117 | -0.017 |
Table A.11: Estimated Standardized Promax Factor Loading Matrix: State = 2 (Λ_2). (Factor columns 0-7; only the rows below were recovered.)

| Feature | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| NNmean.mean | -0… (row truncated) |
| NNskew.q3 | -0.050 | -0.044 | -0.007 | -0.533 | -0.018 | -0.061 | 0.327 | -0.218 |
| ApEn.max | -0.837 | 0.490 | 0.018 | -0.150 | 0.047 | 0.081 | -0.086 | -0.013 |
| ApEn.min | -0.801 | -0.357 | -0.038 | -0.013 | -0.002 | 0.003 | -0.157 | -0.090 |
| ApEn.mean | -1.009 | 0.041 | -0.079 | 0.029 | 0.003 | 0.077 | -0.021 | -0.117 |
| ApEn.var | 0.256 | 0.971 | 0.021 | -0.027 | 0.028 | -0.024 | 0.127 | 0.152 |

Table A.12: Estimated Standardized Promax Factor Loading Matrix: State = 3 (Λ_3). (Factor columns 0-7; entries not recovered.)
Table A.13: Estimated Transition Model (B_kj).

| j | B_1j0 | B_1j1 | B_1j2 | B_2j0 | B_2j1 | B_2j2 | B_3j0 | B_3j1 | B_3j2 |
| 1 | - | - | - | -0.462 | -0.243 | -0.019 | -2.430 | -0.233 | -0.018 |
| 2 | -1.736 | -0.091 | 0.017 | - | - | - | 0.334 | -0.759 | -0.021 |
| 3 | -5.046 | 1.426 | 0.015 | -1.723 | 0.053 | 0.005 | - | - | - |
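The layout of Table A.13 (dashes on the diagonal) suggests a multinomial-logit parameterization with the current state as the reference category, with coefficients (B_kj0, B_kj1, B_kj2) multiplying the covariates (1, x_it2, x_it3). Under that assumption (an illustration, not necessarily the paper's exact specification), the fitted coefficients translate into transition probabilities as sketched below.

```python
import numpy as np

def transition_probs(B, x):
    """Multinomial-logit transition probabilities from each state k.

    B : (J, J, 3) array of coefficients B_kj = (B_kj0, B_kj1, B_kj2);
        the diagonal (j == k) is the reference category with logit 0.
    x : covariate vector (1, x2, x3), where the leading 1 is the intercept.
    """
    J = B.shape[0]
    P = np.zeros((J, J))
    for k in range(J):
        logits = np.array([0.0 if j == k else B[k, j] @ x for j in range(J)])
        P[k] = np.exp(logits) / np.exp(logits).sum()
    return P
```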
… P(y_{i,t+1}, …, y_{i,T_i} | w_{it} = j, λ) c_i(t+1). Similarly, we …
Albert, A. (1962). Estimating the infinitesimal generator of a continuous time, finite state Markov process. The Annals of Mathematical Statistics, pages 727-753.

Altman, R. M. (2007). Mixed hidden Markov models: an extension of the hidden Markov model to the longitudinal data setting. Journal of the American Statistical Association, 102(477):201-210.

Amoros, R., King, R., Toyoda, H., Kumada, T., Johnson, P. J., and Bird, T. G. (2019). A continuous-time hidden Markov model for cancer surveillance using serum biomarkers with application to hepatocellular carcinoma. Metron, 77(2):67-86.

Baum, L. E. and Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics, 37(6):1554-1563.

Borazio, M., Berlin, E., Kücükyildiz, N., Scholl, P., and Van Laerhoven, K. (2014). Towards benchmarked sleep detection with wrist-worn sensing units. In 2014 IEEE International Conference on Healthcare Informatics, pages 125-134. IEEE.

Browne, M. W. (2001). An overview of analytic rotation in exploratory factor analysis. Multivariate Behavioral Research, 36(1):111-150.

Chen, L. and Huang, J. Z. (2012). Sparse reduced-rank regression for simultaneous dimension reduction and variable selection. Journal of the American Statistical Association, 107(500):1533-1545.

Cole, R. J., Kripke, D. F., Gruen, W., Mullaney, D. J., and Gillin, J. C. (1992). Automatic sleep/wake identification from wrist activity. Sleep, 15(5):461-469.

Cook, R. J., Kalbfleisch, J. D., and Yi, G. Y. (2002). A generalized mover-stayer model for panel data. Biostatistics, 3(3):407-420.

Cornelissen, G. (2014). Cosinor-based rhythmometry. Theoretical Biology and Medical Modelling, 11(1):1-24.

Cox, D. R. and Miller, H. D. (2017). The theory of stochastic processes. Routledge.

Habtemichael, T. G., Goshu, A. T., and Buta, G. B. (2018). Misclassification of HIV disease stages with continuous time hidden Markov models. Journal of Advances in Medicine and Medical Research, pages 1-15.

Hartmann, R., Schmidt, F. M., Sander, C., and Hegerl, U. (2019). Heart rate variability as indicator of clinical state in depression. Frontiers in Psychiatry, 9:735.

Hendrickson, A. E. and White, P. O. (1964). Promax: A quick method for rotation to oblique simple structure. British Journal of Statistical Psychology, 17(1):65-70.

Jung, W., Jang, K.-I., and Lee, S.-H. (2019). Heart and brain interaction of psychiatric illness: a review focused on heart rate variability, cognitive function, and quantitative electroencephalography. Clinical Psychopharmacology and Neuroscience, 17(4):459.

Kalbfleisch, J. and Lawless, J. F. (1985). The analysis of panel data under a Markov assumption. Journal of the American Statistical Association, 80(392):863-871.

Kantelhardt, J. W., Bauer, A., Schumann, A. Y., Barthel, P., Schneider, R., Malik, M., and Schmidt, G. (2007). Phase-rectified signal averaging for the detection of quasi-periodicities and the prediction of cardiovascular risk. Chaos: An Interdisciplinary Journal of Nonlinear Science, 17(1):015112.

McLean, S. A., Ressler, K., Koenen, K. C., Neylan, T., Germine, L., Jovanovic, T., Clifford, G. D., Zeng, D., An, X., Linnstaedt, S., et al. (2020). The AURORA study: a longitudinal, multimodal library of brain biology and function after traumatic stress exposure. Molecular Psychiatry, 25(2):283-296.

Mor, B., Garhwal, S., and Kumar, A. (2021). A systematic review of hidden Markov models and their applications. Archives of Computational Methods in Engineering, 28(3):1429-1448.

Osborne, J. (2010). Improving your data transformations: Applying the Box-Cox transformation. Practical Assessment, Research, and Evaluation, 15(1):12.

Peterson, R. A. (2000). A meta-analysis of variance accounted for and factor loadings in exploratory factor analysis. Marketing Letters, 11:261-275.

Preacher, K. J., Zhang, G., Kim, C., and Mels, G. (2013). Choosing the optimal number of factors in exploratory factor analysis: A model selection perspective. Multivariate Behavioral Research, 48(1):28-56.

Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286.

Rasmussen, C. E. (2003). Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63-71. Springer.

Ross, S. M., Kelly, J. J., Sullivan, R. J., Perry, W. J., Mercer, D., Davis, R. M., Washburn, T. D., Sager, E. V., Boyce, J. B., and Bristow, V. L. (1996). Stochastic processes, volume 2. Wiley, New York.

Rosti, A. I. and Gales, M. (2004). Factor analysed hidden Markov models for speech recognition. Computer Speech & Language, 18(2):181-200.

Rosti, A. I. and Gales, M. J. (2002). Factor analysed hidden Markov models. In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages I-949. IEEE.

Shaffer, F. and Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5:258.

Song, X., Xia, Y., and Zhu, H. (2017). Hidden Markov latent variable models with multivariate longitudinal data. Biometrics, 73(1):313-323.

Van Loan, C. (1978). Computing integrals involving the matrix exponential. IEEE Transactions on Automatic Control, 23(3):395-404.

Van Someren, E. J., Swaab, D. F., Colenda, C. C., Cohen, W., McCall, W. V., and Rosenquist, P. B. (1999). Bright light therapy: improved sensitivity to its effects on rest-activity rhythms in Alzheimer patients by application of nonparametric methods. Chronobiology International, 16(4):505-518.

Vermunt, J. K., Langeheine, R., and Bockenholt, U. (1999). Discrete-time discrete-state latent Markov models with time-constant and time-varying covariates. Journal of Educational and Behavioral Statistics, 24(2):179-207.

von Rosenberg, W., Chanwimalueang, T., Adjei, T., Jaffer, U., Goverdovsky, V., and Mandic, D. P. (2017). Resolving ambiguities in the LF/HF ratio: LF-HF scatter plots for the categorization of mental and physical stress from HRV. Frontiers in Physiology, 8:360.

Xie, B., Pan, W., and Shen, X. (2010). Penalized mixtures of factor analyzers with application to clustering high-dimensional microarray data. Bioinformatics, 26(4):501-508.

Zhou, J., Song, X., and Sun, L. (2020). Continuous time hidden Markov model for longitudinal data. Journal of Multivariate Analysis, 179:104646.

Zhou, X., Kang, K., Kwok, T., and Song, X. (2022). Joint hidden Markov model for longitudinal and time-to-event data with latent variables. Multivariate Behavioral Research, 57(2-3):441-457.
| [] |
[
"Towards IID representation learning and its application on biomedical data",
"Towards IID representation learning and its application on biomedical data"
] | [
"Jiqing Wu [email protected] \nDepartment of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland\n",
"Inti Zlobec [email protected] \nDepartment of Pathology\nUniversity of Bern\nSwitzerland\n",
"Maxime Lafarge [email protected] \nDepartment of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland\n",
"Yukun He [email protected] \nDepartment of Mathematics\nCity University of Hong Kong\nChina\n",
"Viktor H Koelzer \nDepartment of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland\n\nDepartment of Oncology and Nuffield Department of Medicine\nUniversity of Oxford\nUK\n",
"Viktor Koelzer@usz Ch "
] | [
"Department of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland",
"Department of Pathology\nUniversity of Bern\nSwitzerland",
"Department of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland",
"Department of Mathematics\nCity University of Hong Kong\nChina",
"Department of Pathology and Molecular Pathology\nUniversity Hospital\nUniversity of Zurich\nSwitzerland",
"Department of Oncology and Nuffield Department of Medicine\nUniversity of Oxford\nUK"
] | [] | Due to the heterogeneity of real-world data, the widely accepted independent and identically distributed (IID) assumption has been criticized in recent studies on causality. In this paper, we argue that instead of being a questionable assumption, IID is a fundamental task-relevant property that needs to be learned. Consider k independent random vectors X i=1,...,k , we elaborate on how a variety of different causal questions can be reformulated to learning a task-relevant function φ that induces IID among Z i . .= φ • X i , which we term IID representation learning.For proof of concept, we examine the IID representation learning on Out-of-Distribution (OOD) generalization tasks. Concretely, by utilizing the representation obtained via the learned function that induces IID, we conduct prediction of molecular characteristics (molecular prediction) on two biomedical datasets with real-world distribution shifts introduced by a) preanalytical variation and b) sampling protocol. To enable reproducibility and for comparison to the state-of-the-art (SOTA) methods, this is done by following the OOD benchmarking guidelines recommended from WILDS. Compared to the SOTA baselines supported in WILDS, the results confirm the superior performance of IID representation learning on OOD tasks. The code is publicly accessible via https://github.com/CTPLab/IID_representation_learning. | 10.48550/arxiv.2203.00332 | [
"https://arxiv.org/pdf/2203.00332v1.pdf"
] | 247,187,734 | 2203.00332 | cd2475909eaa614d22302fa60044887050366cd7 |
Towards IID representation learning and its application on biomedical data
Jiqing Wu [email protected]
Department of Pathology and Molecular Pathology
University Hospital
University of Zurich
Switzerland
Inti Zlobec [email protected]
Department of Pathology
University of Bern
Switzerland
Maxime Lafarge [email protected]
Department of Pathology and Molecular Pathology
University Hospital
University of Zurich
Switzerland
Yukun He [email protected]
Department of Mathematics
City University of Hong Kong
China
Viktor H Koelzer
Department of Pathology and Molecular Pathology
University Hospital
University of Zurich
Switzerland
Department of Oncology and Nuffield Department of Medicine
University of Oxford
UK
[email protected]
Towards IID representation learning and its application on biomedical data
IID, IID representation learning, OOD generalization, causality, biomedical
Due to the heterogeneity of real-world data, the widely accepted independent and identically distributed (IID) assumption has been criticized in recent studies on causality. In this paper, we argue that instead of being a questionable assumption, IID is a fundamental task-relevant property that needs to be learned. Consider k independent random vectors X i=1,...,k , we elaborate on how a variety of different causal questions can be reformulated to learning a task-relevant function φ that induces IID among Z i . .= φ • X i , which we term IID representation learning.For proof of concept, we examine the IID representation learning on Out-of-Distribution (OOD) generalization tasks. Concretely, by utilizing the representation obtained via the learned function that induces IID, we conduct prediction of molecular characteristics (molecular prediction) on two biomedical datasets with real-world distribution shifts introduced by a) preanalytical variation and b) sampling protocol. To enable reproducibility and for comparison to the state-of-the-art (SOTA) methods, this is done by following the OOD benchmarking guidelines recommended from WILDS. Compared to the SOTA baselines supported in WILDS, the results confirm the superior performance of IID representation learning on OOD tasks. The code is publicly accessible via https://github.com/CTPLab/IID_representation_learning.
Introduction
In machine learning (Vapnik, 1999), we commonly assume that data entries (y_i, x_i)_{i=1,…,n} are independently drawn from the same probability distribution P_{(Y,X)} of a random vector (Y, X). This is referred to as the independent and identically distributed (IID) assumption. However, real-world data is usually characterized by significant heterogeneity (Bareinboim, 2014; Peters et al., 2017; Arjovsky et al., 2019; Rosenfeld et al., 2021). Controlling data heterogeneity is particularly critical in the application of data-driven methods to the medical domain (Cios and Moore, 2002), as medical algorithms that suffer from prediction degradation on heterogeneous cohorts can have severe consequences in medical practice. Consequently, the IID assumption needs to be critically questioned.
The task of learning a robust model that is resistant to a heterogeneous data distribution is formally denoted as Out-of-Distribution generalization (OOD) (Arjovsky et al., 2019;Koh et al., 2021).
For a thorough overview we refer interested readers to (Shen et al., 2021). A large number of studies with diverse methodologies (Peters et al., 2016; Ganin et al., 2016; Rojas-Carulla et al., 2018; Arjovsky et al., 2019; Sagawa et al., 2020; Rosenfeld et al., 2021) have been proposed to address this issue. From the viewpoint of domain adaptation (Pan et al., 2010), the root causes of OOD failure come from domain or task shift (Wang and Deng, 2018). Many studies have been dedicated to resolving this challenge. One line of work proposed to align the second-order statistics of the source and target distributions. In the case of simultaneous domain and task shift, (Gong et al., 2016) suggested pinpointing conditional transferable components. Further, (Long et al., 2018) reduced the shifts in the data distributions across domains via adversarial learning (Goodfellow et al., 2014). Building on the invariant property reflected in causality (Pearl et al., 2000), (Peters et al., 2016) first proposed the seminal invariant causal prediction (ICP) framework. Later, (Rojas-Carulla et al., 2018) investigated the invariant set and extended ICP to transfer learning (Pan and Yang, 2009; Muandet et al., 2013; Zhuang et al., 2020). Motivated by ICP, invariant risk minimization (IRM) (Arjovsky et al., 2019) was subsequently proposed to learn an invariant predictor that is optimal for all environments. Recently, (Schölkopf et al., 2021) pointed out the essential role of causal representation learning in OOD generalization. In a nutshell, (Schölkopf et al., 2021) argued that cause-effect relations are critical components of reasoning chains that remain robust in situations beyond training tasks. However, causal variables are usually not given in machine learning tasks. Thus, (Schölkopf et al., 2021) suggested learning causal representations to resolve the limitation of current approaches for OOD generalization.
Inspired by impactful studies centered on the investigation of statistical invariance:
• We introduce a novel pair of definitions: IID symmetry and its generalization. These definitions reflect the core message delivered in this work, i.e., instead of being a questionable assumption, IID is a fundamental task-relevant property that needs to be learned.
• We then systematically discuss how IID and causality are two sides of the same coin. Considering k independent random vectors X^{i=1,…,k}, we elaborate concrete examples of reformulating diverse causal problems as learning a task-relevant function φ that induces IID among Z^i := φ ∘ X^i, which we term IID representation learning.
• For proof of concept, we examine IID representation learning on Out-of-Distribution (OOD) generalization tasks. Concretely, utilizing the representation obtained via the learned function that induces IID, we conduct molecular prediction experiments on two comprehensive biomedical datasets (RxRx1 (Taylor et al., 2019) and Swiss Colorectal Cancer (SCRC) (Nguyen et al., 2021)). Following the OOD benchmarking guidelines recommended by WILDS (Koh et al., 2021), we demonstrate that IID representation learning can improve molecular predictions compared to the SOTA baselines supported in WILDS.
Proposed Definition
As elaborated above, the common ground of causal studies usually starts with exploring statistical invariance. Thus, we introduce the definitions of IID symmetry and its generalization as follows. Consider k + n independent random vectors X^1, …, X^k, X^{k+1}, …, X^{k+n} and a Lebesgue-integrable φ : R^{l+1} → R^{m+1}. For i = 1, …, k + n, let Q_{X^i} be a query distribution of X^i = (x^i_0, x^i_1, …, x^i_l), and let Z^i = (z^i_0, z^i_1, …, z^i_m) := φ ∘ X^i and Q_{Z^i} := Q_{X^i} ∘ φ^{-1}.

Definition 1. We say that X^1, …, X^k have a (φ-)IID symmetry if the φ-induced Q_{Z^1}, …, Q_{Z^k} are identical distributions, i.e., Q_{Z^1} = … = Q_{Z^k}. Further, we say that the (φ-)IID symmetry is generalizable to X^{k+1}, …, X^{k+n} if Q_{Z^1}, …, Q_{Z^k}, Q_{Z^{k+1}}, …, Q_{Z^{k+n}} are identical distributions.
Remark 1. It is not difficult to see that Z^1, …, Z^{k+n} are independent, since w.l.o.g. we can reduce the proof to the simpler case of two random vectors Z^1, Z^2 and φ being continuous. Let f : R^{m+1} → R be bounded and continuous; then f ∘ φ : R^{l+1} → R is also bounded and continuous. We have

$$\mathbb{E}[f(Z^1) f(Z^2)] = \mathbb{E}[(f \circ \varphi)(X^1)\,(f \circ \varphi)(X^2)] = \mathbb{E}[(f \circ \varphi)(X^1)]\,\mathbb{E}[(f \circ \varphi)(X^2)] = \mathbb{E}[f(Z^1)]\,\mathbb{E}[f(Z^2)], \tag{1}$$

where the second equality comes from the independence of X^1 and X^2. As a large class of functions, including piecewise-continuous functions (neural networks), satisfies the Lebesgue-integrability condition, we claim that the map φ discussed in this paper always induces independence. Since for i = 1, …, k + n, Z^i is independent and identically distributed w.r.t. Q_{φ∘X^i}, we call Z^i a (φ-)IID representation. It is worth mentioning that the entries of Z^i are not required to be independent.
Remark 2. For i = 1, …, k + n, if Q_{X^i} = P_{X^i} is the probability distribution of X^i, then Z^1, …, Z^{k+n} are IID in the canonical sense according to Rem. 1. Besides, the trivial IID symmetry and its generalization always exist; for instance, we can define a trivial φ such that φ ∘ X^i = const.
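To make Definition 1 concrete, the following minimal sketch builds two random vectors with different laws and a non-trivial φ whose induced distributions coincide. The distributions are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 100_000

# X^i = (x0, x1) with x0 ~ N(mu_i, 1) and x1 = x0 + eps, eps ~ N(0, 1).
def sample_X(mu):
    x0 = rng.normal(mu, 1.0, n)
    return np.stack([x0, x0 + rng.normal(0.0, 1.0, n)], axis=1)

X1, X2 = sample_X(0.0), sample_X(5.0)   # different marginal laws

phi = lambda X: X[:, 1] - X[:, 0]       # phi(x0, x1) = x1 - x0 = eps
Z1, Z2 = phi(X1), phi(X2)

# Z1, Z2 are (approximately) identically distributed although X1, X2 are not.
print(ks_2samp(X1[:, 0], X2[:, 0]).pvalue)  # ~0: the marginals differ
print(ks_2samp(Z1, Z2).pvalue)              # large: the induced laws agree
```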
From Causality to IID
Causal inference is a fundamental research domain that reflects the zeitgeist in machine learning (Luo et al., 2020). Broadly speaking, prior studies on causal inference can be categorized into two areas of research: causal identification (Pearl et al., 2009; Peters et al., 2017; Hernán and Robins, 2020) and causal transportation (Balke and Pearl, 1995; Bareinboim and Pearl, 2014; Bareinboim, 2014). The former aims to either identify the underlying Structural Causal Models (SCM) (Peters et al., 2017) or quantify the Average Causal Effect (ACE) (Hernán and Robins, 2020), whereas the latter is often meant for licensing the transport of causal knowledge from one population to another (Bareinboim and Pearl, 2014; Bareinboim, 2014). In a recent study (Schölkopf et al., 2021), the authors propose causal representation learning to resolve OOD generalization. To link causal inference and IID, we first introduce two prerequisite concepts.

Structural Causal Model. Following the specification in (Peters et al., 2016, 2017), consider a Structural Causal Model (SCM), i.e., there exists a random vector X = (x_0, …, x_l) and a directed acyclic graph (DAG) consisting of vertices x_0, …, x_l and δ_0, …, δ_l such that for j = 0, …, l we have

$$x_j = f_j(X_{PA_j}, \delta_j), \qquad \delta_j \perp\!\!\!\perp X_{PA_j}, \tag{2}$$

where X_{PA_j} ⊂ {x_0, x_1, …, x_l} is the set of known parents of x_j and δ_j is the unknown (parent) noise. By drawing arrow(s) from X_{PA_j}, δ_j to x_j as defined in Eq. 2, we obtain the edges of the DAG (see Fig. 1 for a graph visualization). The SCM bears much practical interest for analyzing complex medical datasets, e.g., given the patient overall survival x_0 := x_os, we want to identify the key prognostic variables among x_age, x_gender, x_BMI, etc. that directly impact x_os (Shapiro and Msaouel, 2021).

Do-Intervention. As discussed in (Pearl and Mackenzie, 2018), one of the most prominent building blocks of causal inference is intervention. Formally, we denote the (hard) do-intervention, i.e., the replacement of Eq. 2 with x_j := const, by do(x_j = const). Note that intervening on x_j breaks the arrow(s) between X_{PA_j}, δ_j and x_j. Accordingly, we denote the interventional distribution of x_0 conditioned on x_1, …, do(x_j = const), …, x_l by P(x_0 | x_1, …, x^e_j, …, x_l), the random vector by X^e = (x_0, x_1, …, x^e_j, …, x_l), and the set of known parents of x_j by X^e_{PA_j}. In the clinical domain, it should be noted that the implementation of do-interventions is expensive owing to regulatory scrutiny and ethically challenging, as illustrated by recent publications critically discussing such interventions as placebo surgery (Angelos, 2013) and the involvement of vulnerable patient groups (Caldwell et al., 2004; Farrell et al., 2020).
In real-world applications, randomized clinical trials (RCT) are considered to be the gold standard for interventional clinical studies (Nout et al., 2010; de Boer et al., 2019). Given the patient outcome x_0, we are keen on understanding the distribution of x_0 conditioned on the (intervened) treatment x_1 and prognostic variables x_2, …, x_l in the presence of unknown noises. Thus, we discuss how various related causal problems can be reformulated as learning a function inducing IID.
Figure 1: Left: the graphical visualization for causal variable identification. The black arrows indicate identical distributions P(x_0 | X^{e_1}_{PA_0}) = … = P(x_0 | X^{e_k}_{PA_0}); the dotted arrows connect the unknown noises. The black hammers indicate the do-interventions e_1, …, e_k implemented in the form of RCTs. Right: the graphical visualization for causal effect transportation. The black arrow indicates that the distribution P′(x_0 | x^{e_{k+1}}_1) is transported from the identical P(x_0 | x^{e_k}_1), where the gray dotted hammer indicates the do-intervention e_{k+1} that leads to P′(x_0 | x^{e_{k+1}}_1) and cannot be implemented in the setting of an RCT due to ethical reasons.
Causal Variable Identification → IID Symmetry

Let us assume k SCMs underlying a medical dataset collected from clinical trials, i.e., for i = 1, …, k there exists an X^{e_i} = (x_0, x_1, …, x^{e_i}_{j_1}, …, x^{e_i}_{j_{e_i}}, …, x_l) and its corresponding DAG with unknown noises δ_0, …, δ_l, where x_0 is the patient outcome and e_i represents the do-intervention(s) imposed on a subset of the variables {x_1, …, x_l} in X (see Fig. 1 (left)). Due to the NP-hard challenge of learning an entire DAG (Chickering, 1996; Luo et al., 2020), invariant causal prediction (ICP) (Peters et al., 2016) was proposed to identify plausible causal variables given the outcome of interest (here the patient outcome x_0). Since for i = 1, …, k, X^{e_i}_{PA_0} is the set of plausible causal variables of x_0 (Peters et al., 2016), under the assumption of identical interventional distributions P(x_0 | X^{e_i}_{PA_0}) brought by k different do-interventions, we propose:

Question. Consider the k independent random vectors X^{e_1}, …, X^{e_k} specified above. For i = 1, …, k, let Q_{X^{e_i}} = P(x_0 | x_1, …, x^{e_i}_{j_1}, …, x^{e_i}_{j_{e_i}}, …, x_l). Can we find a φ in Def. 1 such that Q_{φ∘X^{e_1}}, …, Q_{φ∘X^{e_k}} are identical distributions and φ satisfies φ(x_0, x_1, …, x_l) = (x_0, …)?

Discussion. The map φ ∘ X^{e_i} = (x_0, X^{e_i}_{PA_0}) that projects (x_0, x_1, …, x^{e_i}_{j_1}, …, x^{e_i}_{j_{e_i}}, …, x_l) to (x_0, X^{e_i}_{PA_0}) induces the identical Q_{φ∘X^{e_i}} = P(x_0 | X^{e_i}_{PA_0}). This is a consequence of Eq. 2, since for i = 1, …, k the assignment f_0 between x_0 and X^{e_i}_{PA_0}, δ_0 remains unchanged and δ_0 is independent of X^{e_i}_{PA_0}. In the toy experiments (App. A), we demonstrate the robustness of learning a projection map inducing identical interventional distributions, where the map is parametrized with a simple neural network.
Causal Effect Transportation → IID Generalization
Consider that for i = 1, …, k, we know the assignment f_0 between x_0 and X^{e_i}_{PA_0} (Eq. 2) w.r.t. the identical P(x_0 | X^{e_i}_{PA_0}). Since it is unethical and infeasible to re-run the clinical trial on many patient cohorts, we often want to transport the causal knowledge to a new observational cohort (Bareinboim, 2014). Let X^{k+1} = (x_0, x_1, x_2, …, x_l) be a random vector representing the observational cohort. Based on the causal knowledge learned from X^{e_1}, …, X^{e_k}, we aim to compute P′(x_0 | x^{e_{k+1}}_1) of X^{e_{k+1}} = (x_0, x^{e_{k+1}}_1, x_2, …, x_l) (Bareinboim, 2014), i.e., the distribution of the patient outcome x_0 conditioned on the intervened treatment x^{e_{k+1}}_1. Under the assumption of identical interventional distributions brought by k + 1 different do-interventions, we propose:

Question. Consider the k independent random vectors X^{e_1}, …, X^{e_k} specified in Sec. 3.1. For i = 1, …, k, let Q_{X^{e_i}} = P(x_0 | X^{e_i}_{PA_0}); we further assume an X^{e_{k+1}} = (x_0, x^{e_{k+1}}_1, x_2, …, x_l) independent of X^{e_1}, …, X^{e_k} and Q_{X^{e_{k+1}}} = P′(x_0 | X^{e_{k+1}}_{PA_0}). Can we find a φ in Def. 1 such that Q_{φ∘X^{e_1}}, …, Q_{φ∘X^{e_k}}, Q_{φ∘X^{e_{k+1}}} are identical distributions and φ satisfies φ(x_0, X_{PA_0}) = (x_0, x_1, …)?

Discussion. If the patient outcome conditioned on the intervened treatment remains invariant across different cohorts, by determining φ ∘ X^{e_{k+1}}_{PA_0} = (x_0, x^{e_{k+1}}_1) we have Q_{φ∘X^{e_1}} = … = Q_{φ∘X^{e_{k+1}}} = P′(x_0 | x^{e_{k+1}}_1) (see Fig. 1 (right)). Otherwise, if the patient outcome conditioned on the intervened treatment in the same age group (x_2 := x_age) remains invariant, then we need to derive φ ∘ X^{e_{k+1}}_{PA_0} = (x_0, x^{e_{k+1}}_1, x_2) and obtain Q_{φ∘X^{e_1}} = … = Q_{φ∘X^{e_{k+1}}} = P′(x_0 | x^{e_{k+1}}_1, x_age); thus we conclude P′(x_0 | x^{e_{k+1}}_1) = ∫ P′(x_0 | x^{e_{k+1}}_1, x_age) dP′(x_age), where P′(x_age) is the marginal distribution of x_age.
Causal Feature Representation → IID Representation
One of the open questions raised in (Schölkopf et al., 2021) is how to learn a reusable feature representation of X = (x_1, …, x_l). This question becomes essential when x_1, …, x_l do not correspond to well-studied treatment and prognostic variables but to pixels of medical imaging data that bear critical information of possibly unknown variables. Based on the Independent Causal Mechanism (ICM) (Peters et al., 2017) and Sparse Mechanism Shift (SMS), (Schölkopf et al., 2021) hypothesize that learning a causal-aware representation in an auto-encoder fashion is promising for its reusability in downstream tasks. In alignment with this keen insight and the assumption that the latent representations of training, validation, and test datasets have identical probability distributions:

Question. Consider k + n + p independent random vectors X^1, …, X^k, X^{k+1}, …, X^{k+n}, X^{k+n+1}, …, X^{k+n+p}. For i = 1, …, k + n + p, let Q_{X^i} = P_{X^i} be the probability distribution of X^i. Can we find a φ in Def. 1 such that Q_{φ∘X^1}, …, Q_{φ∘X^{k+n+p}} are identical distributions and there exists a φ′ : R^{m+1} → R^{l+1} satisfying φ′ ∘ φ = id?

Discussion. According to Rem. 1 and 2, we aim to learn an IID representation Z^i = φ ∘ X^i = (z^i_1, …, z^i_m) for i = 1, …, k + n + p, as if the images in the training (X^1, …, X^k), validation (X^{k+1}, …, X^{k+n}), and test (X^{k+n+1}, …, X^{k+n+p}) datasets could be faithfully reconstructed from the identical distribution P_{Z^i}. In the following experiments, we demonstrate the reusability of the learned IID representation for downstream prediction tasks.
OOD Experiment
As discussed above, one of the biggest challenges in application of machine learning methodologies to the medical domain lies in data heterogeneity that violates the conventional IID assumption.
There are many factors contributing to the heterogeneity such as preanalytical variation (Taylor et al., 2019), sampling protocol (Karamitopoulou et al., 2011), etc. As the goal of OOD generalization is to resolve the challenge of heterogeneous training and test data (Shen et al., 2021), we examine the IID representation learning under the OOD setting and conduct prediction of molecular characteristics (molecular prediction) on two comprehensive biomedical datasets-RxRx1 (Taylor et al., 2019) and Swiss Colorectal Cancer (SCRC) (Nguyen et al., 2021). The former aims to predict genetic perturbations given fluorescence microscopy images of cancer cells contaminated with preanalytical batch effects, while the latter study aims to classify the consensus molecular subtypes (imCMS1-4 (Sirinukunwattana et al., 2020)) of colorectal cancer (CRC) based on tissue microarray (TMA) images, where the TMAs are heterogeneously sampled from different tumor regions.
To enable reproducibility and for comparison to the SOTA methods, we run molecular prediction experiments by following the guidelines of WILDS (Koh et al., 2021). Accordingly, we split RxRx1 to training (40612 images), validation (9854), in-distribution (ID) (40612) and OOD test (34432) data. Since SCRC contains TMAs sampled from tumor front (3333), micro-environment (micro) (2819) and center (3914) regions, we take images from two out of the three tumor regions to form the training data. By excluding 2 TMAs/patient from the held-back region as validation, we have the remaining TMAs as OOD test data. This leads to three variants of experiments: SCRC0 (front and micro for training), 1 (micro and center for training) and 2 (center and front for training). We then compare the IID representation learning to the SOTA baselines supported in WILDS: Empirical risk minimization (ERM) that minimizes the average classification loss on training sample (Vapnik, 1992;Shen et al., 2021), invariant risk minimization (IRM) (Arjovsky et al., 2019) with ERM + gradient regularization, correlation alignment (CORAL) with ERM + covariance regularization, group distributed robust optimization (GroupDRO) (Sagawa et al., 2020) with ERM + worst-case group regularization. For the IID representation learning, we first learn an IID representation in an auto-encoder fashion and then combine the learned IID representation with ERM for downstream molecular predictions, i.e., ERM + IID representation (See Fig. 2). According to WILDS' experiment and metric design, all molecular prediction experiments are run at least 3 times (4 times in our case) and we report average prediction results with standard deviation (SD). ) and its downstream molecular predictor (ERM + IID representation). Right: The visual comparison and average PSNR with SD achieved by the IID representation learning. Here, we normalize the RxRx1 images along each channel and zoom in on a small region of ground-truth (red bounding box) and reconstructed images for better visualization.
Learning the Approximate IID Representation. Despite being conceptually simple, learning an IID representation that can faithfully reconstruct a given input image is non-trivial. To approximate the IID property and to achieve good reconstruction quality, we propose to utilize instance normalization (IN) (Ulyanov et al., 2016) in the encoder for proof of concept. Concretely, we apply two kinds of blocks containing IN operations, morphology (morph) and stain, to obtain the representations Z^i_{m,0}, …, Z^i_{m,4} and Z^i_{s,0}, …, Z^i_{s,13}. Based on the recent development in image inversion, we instantiate the φ, φ′ in Sec. 3.3 with the Restyle encoder and the StyleGAN decoder (Karras et al., 2020). As shown in Fig. 2 (left), we couple the morph and stain blocks with the noise (A) and style (B) modules of StyleGAN, respectively. This is meant for learning a semantic-aware representation for the follow-up interpretation (see Fig. 3). The objective is then to reconstruct the input image at 256 × 256 resolution and is defined as L = λ_0 L_2 + λ_1 L_lpips + λ_2 L_sim, where L_2 is the pixel-wise loss, L_lpips is the perceptual loss (Tov et al., 2021), L_sim is the loss measuring cosine similarity, and λ_{0,1,2} are the coefficients weighing the losses. Fig. 2 (right) shows that the approximate IID representation Z^i induced by φ (Restyle encoder with IN) achieves robust image reconstruction for RxRx1 and SCRC. See App. B, C for more discussion of hyper-parameters and results.

The Learned IID Representation in ERM. After freezing the learned Restyle encoder φ described above, we integrate the φ-induced IID representation Z^i into two standard (ResNet (He et al., 2016), DenseNet (Huang et al., 2017)) and two light-weight (MobileNet (Sandler et al., 2018), MnasNet (Tan et al., 2019)) backbones (see Fig. 2 (left)) that are widely used under the ERM framework. Owing to the dimensional compatibility between Z^i_{m,0}, …, Z^i_{m,4}, Z^i_{s,0}, …, Z^i_{s,13} and the layer outputs of the compared backbones, this is implemented by adding the scaled 2-dim outputs (z̃^i_{m,j} = λ_{m,j} z^i_{m,j} for j = 0, …, 4) of the morph blocks to the blocks of the backbones, and by processing the 1-dim outputs (z^i_s = Conv1d(Cat(z^i_{s,0}, …, z^i_{s,13}))) of the stain blocks for latent-vector concatenation (see also Fig. 2 (left bottom)), where λ_{m,j} is a learnable scalar coefficient. Accordingly, the objective is to predict the class of genetic perturbation (RxRx1) or imCMS (SCRC) and is defined as L = λ L_crs + (1 − λ) L_arc, where L_crs is the cross-entropy loss, L_arc is the ArcFace loss (Deng et al., 2019), and λ is the coefficient balancing the losses. See App. E for more hyper-parameter discussion of the SOTA baselines supported in WILDS and the proposed method.
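A minimal sketch of such a combined cross-entropy/ArcFace objective is given below; the head dimensions, scale s, and margin m are illustrative assumptions rather than the paper's settings (only λ = 0.8 is reported in App. E).

```python
import torch
import torch.nn.functional as F

class ArcFaceHead(torch.nn.Module):
    """ArcFace-style additive angular margin on normalized features/weights."""
    def __init__(self, dim, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(n_classes, dim))
        self.s, self.m = s, m

    def forward(self, feat, target):
        # cosine similarity between L2-normalized embeddings and class weights
        cos = F.linear(F.normalize(feat), F.normalize(self.w))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(target, cos.size(1)).bool()
        # add the angular margin m only to the target-class angle, then scale
        logits = torch.where(one_hot, torch.cos(theta + self.m), cos) * self.s
        return F.cross_entropy(logits, target)

def total_loss(logits, feat, target, arc_head, lam=0.8):
    """L = lam * L_crs + (1 - lam) * L_arc."""
    return lam * F.cross_entropy(logits, target) + (1 - lam) * arc_head(feat, target)
```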
Molecular Prediction Results. Surprisingly, the ERM method outperforms the SOTA IRM (ERM + gradient), CORAL (ERM + covariance), and GroupDRO (ERM + worst-case group) in the experiments (see Tab. 1 and 2). More importantly, our proposed method (Prop: ERM + IID representation) achieves top classification accuracies compared to these optimally tuned baselines supported in WILDS for both ID (RxRx1) and OOD test data (SCRC, RxRx1). The consistent improvements under various backbones (Tab. 1 (right) and Tab. 2 (left)) confirm the reusability of the learned IID representation. Further stratifying the results by cell types (Tab. 1 (bottom)) and imCMS classes (Tab. 2 (right)), we conclude that the proposed IID representation learning achieves superior results on OOD generalization tasks for RxRx1 and SCRC.

Discussion. When examining stain and morph blocks individually (see Fig. 3), the takeaways are mixed. For RxRx1, the stand-alone stain blocks clearly contribute to the prediction improvement. This may be explained by the preanalytical variation in the form of batch-wise staining shift embedded in the validation and test images. For SCRC, neither stain nor morph blocks bring clear quantitative improvements individually. Only by utilizing both of them can we robustify the OOD generalization.
Conclusion
In this paper, we propose IID representation learning and discuss its essential connection to causality. Experimental results on two biomedical datasets show that reusing the learned IID representation can improve downstream molecular predictions in terms of OOD generalization. In future work, follow-up investigations from theoretical and biological viewpoints need to be conducted to better understand the theoretical guarantees and underlying biological drivers of the IID representation.
Appendix A. Toy Experiments for Causal Variable Identification
Complementary to Sec. 4, we conduct toy experiments on the causal variable identification task (Sec. 3.1) to validate the proposed IID representation learning. This is done by following the experimental design of AICP (Gamella and Heinze-Deml, 2020) (please see also https://github.com/juangamella/aicp). Specifically, we start the data simulation by creating a directed acyclic graph (DAG) endowed with vertices, edges, and Gaussian noises, where the vertices of the DAG correspond to the variables x_0, x_1, …, x_l in Sec. 3.1. These specifications form a linear Gaussian SCM.
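The following sketch illustrates what such a linear Gaussian SCM with do-interventions can look like. The graph, weights, and intervention value are hypothetical choices for illustration, not the paper's (which uses randomly generated DAGs).

```python
import numpy as np

rng = np.random.default_rng(0)
l = 4
# Hypothetical SCM over (x0, ..., x4): x1, x2 are exogenous,
# x0 = 2*x1 - x2 + noise (so X_PA0 = {x1, x2}), x3 = x0 + noise, x4 = x3 + noise.
W = np.zeros((l + 1, l + 1))            # W[a, b]: weight of edge x_a -> x_b
W[1, 0], W[2, 0], W[0, 3], W[3, 4] = 2.0, -1.0, 1.0, 1.0

def sample(n, do=None, c=0.0):
    """Ancestral sampling; do(x_j = c) breaks all edges pointing into x_j."""
    order = [1, 2, 0, 3, 4]             # a topological order of the DAG
    X = np.zeros((n, l + 1))
    for j in order:
        X[:, j] = c if do == j else X @ W[:, j] + rng.normal(size=n)
    return X

# One batch of samples per intervention e_j (never intervening on x0):
data = {j: sample(1000, do=j, c=2.0) for j in range(1, l + 1)}
```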
W.l.o.g., consider x_0 to be the outcome variable and X_{PA_0} the set of x_0's parent variables. Under the assumption of not intervening on the outcome x_0, we implement do-interventions e_{j=1,…,l} independently by breaking the edges pointing to x_j and letting x_j := c, which simulates the RCT setting described in Sec. 3.1. As specified in https://github.com/juangamella/aicp, we then collect l batches of data samples that are randomly drawn from the SCMs intervened with e_{j=1,…,l}, respectively. In the same manner as AICP (Gamella and Heinze-Deml, 2020), given such a dataset, our goal is to identify the set of parent variables of the outcome x_0. Instead of the sophisticated auto-encoder proposed in Sec. 4, here we utilize a simple neural network φ′ ∘ φ, where φ′ is a standard MLP layer and φ(x) = w ⊙ x is the element-wise multiplication of the input x with binary penalty weights w (initialized with 1). We propose to learn the projection map φ inducing identical interventional distributions among (X^{e_j}_{PA_0}, x_0) for j = 1, …, l, where φ should project {x_1, …, x_l} to X_{PA_0}. Note that φ also induces independence among (X^{e_j}_{PA_0}, x_0) for j = 1, …, l due to the independently intervened SCMs. Concretely, we train φ′ ∘ φ for l epochs with the ℓ_2 norm and iteratively penalize per epoch; if x_j ∈ X_{PA_0} holds true, w_j of φ remains 1, for j = 1, …, l. Such a penalty is conditioned on max_{j=1,…,l} FID(µ_j, µ_j^c), where FID is the Fréchet inception distance (Heusel et al., 2017) and µ_j, µ_j^c are the interventional distributions of ‖φ′ ∘ φ(x_1, …, x_l) − x_0‖_2 w.r.t. the data sampled from the {e_j}- and {e_1, …, e_l} \ {e_j}-intervened SCM(s), respectively. Then we compare the proposed method with ICP (Peters et al., 2016), NICP (Heinze-Deml et al., 2018), and AICP, all of which are developed upon the idea of identifying X_{PA_0} via the intersection of sets of plausible causal variables. To better examine the robustness of the compared methods, we not only randomly choose 50 DAGs to re-run the experiments but additionally introduce 1 and 2 hidden confounder(s) for each DAG, respectively. As shown in Tab. 3, our proposed IID representation learning outperforms the ICPs, especially with the inclusion of hidden confounder(s), in terms of better Jaccard similarity (JS) and family-wise error rate (FWER) (Gamella and Heinze-Deml, 2020) averaged over 50 DAGs,
$$\mathrm{JS}(Z, X_{PA_0}) = \frac{|Z \cap X_{PA_0}|}{|Z \cup X_{PA_0}|}, \qquad \mathrm{FWER} = P(Z \not\subseteq X_{PA_0}), \quad \text{where } Z = \varphi(x_1, \dots, x_l) = (w_1 x_1, \dots, w_l x_l). \tag{3}$$
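Computing these two metrics over repeated runs is straightforward; a minimal sketch (treating the estimated and true parent sets as index sets) follows.

```python
import numpy as np

def jaccard(est, truth):
    """JS(Z, X_PA0) = |Z ∩ X_PA0| / |Z ∪ X_PA0|."""
    est, truth = set(est), set(truth)
    union = est | truth
    return len(est & truth) / len(union) if union else 1.0

def fwer(estimates, truth):
    """Empirical P(Z not a subset of X_PA0) over repeated runs."""
    truth = set(truth)
    return np.mean([not set(z) <= truth for z in estimates])

# e.g., averaged over 50 random DAGs:
# js_mean = np.mean([jaccard(z, pa0) for z, pa0 in runs])
```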
Appendix B. Unsupervised Training of StyleGAN Decoder
Since there are no pre-trained StyleGAN (Karras et al., 2020) decoders available for IID representation learning on RxRx1 and SCRC, we start the experiments by training StyleGAN in an unsupervised manner. Concretely, we take the widely used PyTorch implementation https://github.com/rosinality/stylegan2-pytorch for training StyleGAN. Following the suggestions from WILDS, we only utilize the training data of RxRx1 and SCRC0,1,2 to learn four different StyleGAN models that can synthesize visually plausible microscopy images, while the validation and test data are held back during training. Due to the moderate amount of training data, we follow the default configurations of StyleGAN training suggested in the repository, except that we customize the training iterations to be 100k for all experiments and the batch size to be 32 for RxRx1 and 16 for SCRC. We then take advantage of the Distributed Data Parallel (DDP) mechanism provided in PyTorch and train the StyleGAN models on 4 A100 GPUs and 2 A100 GPUs for RxRx1 and SCRC, respectively. We report the average Fréchet inception distance (FID) (Heusel et al., 2017) scores with SD obtained with four different random seeds for all experiments in Tab. 4 and show non-cherry-picked synthesized images in Fig. 4, 5, 6, 7. Note that the large FID score for RxRx1 results from comparing the total statistical difference on an ensemble of fluorescence medical images with more than 1000 classes of genetic perturbation, which differs from the common FID computation for single-class natural image generation (Karras et al., 2019, 2020).
Appendix C. Learning the Approximate IID Representation
To achieve faithful microscopy image reconstruction, we utilize the pre-trained StyleGAN decoder discussed in App. B and the Restyle encoder (Alaluf et al., 2021) for learning the approximate IID representation. For the perceptual loss $L_{\mathrm{lpips}}$ (Zhang et al., 2018; Tov et al., 2021) and the cosine-similarity loss $L_{\mathrm{sim}}$ (Chen et al., 2020) of the reconstruction objective, we follow the default configuration introduced with the Restyle encoder, i.e., $L_{\mathrm{lpips}}$ and $L_{\mathrm{sim}}$ are computed based on features extracted from the linear layer of the pre-trained AlexNet (Krizhevsky et al., 2012) and the MoCoV2 (Chen et al., 2020) pre-trained ResNet50 (He et al., 2016), respectively; see also https://github.com/yuval-alaluf/restyle-encoder for more implementation details. Besides, by tuning on the validation data, it suffices to execute one step of iterative refinement, and all experiments are trained for 90k iterations. Lastly, the hyper-parameters $\lambda_{0,1,2}$ in the reconstruction objective are determined to be 1.5, 0.5, 0.5 for SCRC0,1,2 and 5, 0.2, 0.2 for RxRx1. By computing batch-wise statistics, batch normalization (BN; Ioffe and Szegedy, 2015) introduces unnecessary batch dependence between training data. Because of the element-wise affine operation applied to each image by default, layer normalization (LN; Ba et al., 2016) cannot guarantee the requirement of learning a function that induces identical distributions. In combination of these observations and the independent, approximately identically distributed $Z_i$ (Sec. 3.3) obtained via instance normalization (IN; Ulyanov et al., 2016), we impose IN on the Restyle encoder (including the ResNet backbone). Under the same Restyle architecture, we run experiments and compare the reconstruction performance achieved with IN, BN (used in the default Restyle encoder), LN, and group normalization (GN; Wu and He, 2018). As a result, we experimentally justify the superiority of IN in terms of robust PSNR scores (see Tab. 5) and better visual quality (see Fig. 8, 9, 10, 11).
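Putting these pieces together, the reconstruction objective is a weighted sum of an $\ell_2$ pixel term, the LPIPS perceptual term, and the feature cosine-similarity term. The sketch below is our own schematic reading of this objective (function names, and the assignment of $\lambda_0, \lambda_1, \lambda_2$ to the individual terms, are assumptions rather than the authors' code):

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(x, x_hat, lpips_fn, moco_encoder, lam0, lam1, lam2):
    """L = lam0 * ||x - x_hat||^2 + lam1 * L_lpips + lam2 * L_sim (our assumed weighting)."""
    l2 = F.mse_loss(x_hat, x)
    l_lpips = lpips_fn(x_hat, x).mean()
    # Cosine-similarity loss between features of a frozen, self-supervised encoder
    with torch.no_grad():
        f_x = moco_encoder(x)
    f_hat = moco_encoder(x_hat)
    l_sim = 1.0 - F.cosine_similarity(f_hat, f_x, dim=-1).mean()
    return lam0 * l2 + lam1 * l_lpips + lam2 * l_sim
```

Here `lpips_fn` stands in for an LPIPS module and `moco_encoder` for the frozen MoCoV2-pretrained ResNet50; both are supplied by the caller.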
Appendix E. The learned IID Representation in ERM
To enable reproducibility and for comparison to the SOTA methods, we utilize the WILDS repository https://github.com/p-lambda/wilds.git to run the experiments. Precisely, we call the data loader functions for RxRx1 implemented in WILDS and write the corresponding data loader functions for SCRC following the WILDS coding style. Except for introducing CutMix (Yun et al., 2019) as a complement to the standard augmentation methods supported in WILDS, we do not use additional techniques, such as fusing the outputs from several rotated inputs or from multiple models, to boost the performance of the compared methods. During training, we do not feed the validation and test data to the model; the validation data is only used for hyper-parameter tuning. Accordingly, all compared methods are well tuned on their hyper-parameters, with careful selection of augmentations, backbones, etc.
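A minimal sketch of the WILDS data access used for RxRx1, mirroring the repository's documented loader API (the transform is a generic placeholder):

```python
import torchvision.transforms as T
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Download/prepare the RxRx1 benchmark and fetch the official train split
dataset = get_dataset(dataset="rxrx1", download=True)
train_data = dataset.get_subset("train", transform=T.Compose([T.ToTensor()]))

# Standard (ERM-style) training loader; WILDS also offers group-aware loaders
train_loader = get_train_loader("standard", train_data, batch_size=32)

for x, y, metadata in train_loader:
    # forward/backward pass would go here
    break
```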
For ERM, we determine the optimal $\lambda$ to be 0.8 and $\lambda_{\mathrm{CutMix}}$ to be 1 for RxRx1 and SCRC, with ResNet50/DenseNet121 as backbones for RxRx1 and MobileNetV2 for SCRC. For IRM and CORAL, we determine the optimal $\lambda$ to be 1, the backbone to be MobileNetV2, and $\lambda_{\mathrm{CutMix}} = 0$ for both RxRx1 and SCRC. In terms of GroupDRO, the configurations are the same as for IRM and CORAL, except that DenseNet121 achieves results competitive with MobileNetV2 in the SCRC experiments. As for the proposed method (Prop), the optimal results are obtained with $\lambda = 0.8$ and $\lambda_{\mathrm{CutMix}} = 1$ for both RxRx1 and SCRC, as well as ResNet50 for RxRx1 and MobileNetV2 for SCRC. Complementary to Tab. 1 and 2 in the main manuscript, we present detailed results for all compared methods with respect to the same backbones in Tab. 6 and 7.
Figure 1: Left: The graphical visualization for causal variable identification. The black arrows indicate ...

Figure 2: Left: The model illustrations of the proposed IID representation learning (Restyle encoder and StyleGAN decoder) ...

... Sec. 3.3 (see Fig. 2). Compared to other normalization strategies (Ioffe and Szegedy, 2015; Ba et al., 2016), IN allows imposing an identical mean and standard deviation on the entries of $Z^i_{m,0}, \dots, Z^i_{m,4}, Z^i_{s,0}, \dots, Z^i_{s,13}$ without violating the independence of $Z^i$ (see App. C for more normalization studies). This suggests that the learned representation $Z^i$ is independent and approximately identically distributed.

Table 1: The main results of RxRx1. Top: The average classification accuracies with SD for optimally tuned (Optimal) compared methods (left) and for ERM and the proposed method (Prop) under the same backbones (right). Bottom: The overall stratified accuracies with SD for ERM and Prop on 4 cell types: HEPG2, HUVEC, RPE, U2OS (Taylor et al., 2019).

Figure 3: Left: The ablation studies of utilizing stain or morph blocks individually. Right: The visualization of interpolating the outputs of stain and morph blocks simultaneously, interpolating stain outputs while freezing morph ones, and vice versa (see App. D for more enlarged interpolation visual results).

Figure 8: The RxRx1 visual comparison between ground-truth (red bounding box) and reconstructed images for Batch (BN), Layer (LN), Group (GN) and Instance (IN) normalization. Here, we normalize the ground-truth and reconstructed images along each channel for a clearer comparison. Please zoom in on the image details for better visualization.

Figure 9: The SCRC0 visual comparison between ground-truth (red bounding box) and reconstructed images for Batch (BN), Layer (LN), Group (GN) and Instance (IN) normalization. Please zoom in on the image details for better visualization.

Figure 11: The SCRC2 visual comparison between ground-truth (red bounding box) and reconstructed images for Batch (BN), Layer (LN), Group (GN) and Instance (IN) normalization. Please zoom in on the image details for better visualization.

Table 2: The main results of SCRC. Left: The average classification accuracies with SD for optimally tuned (Optimal) compared methods and for ERM and the proposed method (Prop) under the same backbones (right). Right: The overall stratified accuracies with SD for ERM and Prop on imCMS1, 2, 3, 4 (Nguyen et al., 2021).

Table 3: Top: The results of causal variable identification for the toy experiments between ICPs and the proposed IID representation learning. Here, we report Jaccard Similarity (JS) and family-wise error rate (FWER) (Gamella and Heinze-Deml, 2020) for quantitative comparison. Bottom: The visual illustration of SCMs with and without hidden confounder.

Table 4: The average FID scores with SD achieved by StyleGAN on RxRx1 and SCRC0, 1, 2, obtained with four random seeds.

Table 5: The average PSNR with SD achieved by four compared normalization methods under the same architecture of Restyle encoder and StyleGAN decoder.

Table 6: The average classification accuracies with SD of RxRx1, obtained with four different backbones for all compared methods.

Table 7: The average classification accuracies with SD of SCRC, obtained with four different backbones for all compared methods.
Acknowledgments

We would like to thank the Colorectal Cancer Research Group and gratefully acknowledge all members of the Translational Research Unit at the Institute of Pathology, University of Bern for excellent collaboration and provision of the CRC image dataset. We gratefully acknowledge the S:CORT consortium, a Medical Research Council stratified medicine consortium led by Prof. Tim Maughan at the University of Oxford, jointly funded by the MRC and CRUK; the current implementation of imCMS is a joint development of the S:CORT consortium at the University of Oxford, in particular Prof. Jens Rittscher and Dr. Korsuk Sirinukunwattana at the Department of Engineering Science, Prof. Tim Maughan at the CRUK/MRC Oxford Institute for Radiation Oncology, and Dr. Enric Domingo at the Department of Oncology, University of Oxford, with the Computational and Translational Pathology Group at the University of Zurich (Dr. Maxime Lafarge, Prof. Viktor Koelzer). The authors thank Anja Frei for data processing, and Sonali Andani and Dr. Marta Nowak for insightful discussion. We gratefully acknowledge funding by the Promedica Foundation F-87701-41-01.

Appendix D. More Interpolation Visualization

Figure 12: The RxRx1 and SCRC0,1,2 visualization of interpolating the outputs of stain and morph blocks simultaneously, interpolating stain outputs while freezing morph ones, and vice versa. Please zoom in on the image details for better visualization.
Yuval Alaluf, Or Patashnik, and Daniel Cohen-Or. Restyle: A residual-based StyleGAN encoder via iterative refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6711-6720, 2021.
Peter Angelos. Ethical issues of participant recruitment in surgical clinical trials. Annals of Surgical Oncology, 20(10):3184-3187, 2013.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Alexander Balke and Judea Pearl. Counterfactuals and policy analysis in structural models. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 11-18, 1995.
Elias Bareinboim. Generalizability in causal inference: Theory and algorithms. PhD thesis, UCLA, 2014.
Elias Bareinboim and Judea Pearl. Transportability from multiple environments with limited experiments: Completeness results. In Advances in Neural Information Processing Systems, pages 280-288, 2014.
Patrina H. Y. Caldwell, Sharon B. Murphy, Phyllis N. Butow, and Jonathan C. Craig. Clinical trials in children. The Lancet, 364(9436):803-811, 2004.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from Data, pages 121-130. Springer, 1996.
Krzysztof J. Cios and G. William Moore. Uniqueness of medical data mining. Artificial Intelligence in Medicine, 26(1-2):1-24, 2002.
Stephanie M. de Boer, Melanie E. Powell, Linda Mileshkin, Dionyssios Katsaros, Paul Bessette, Christine Haie-Meder, Petronella B. Ottevanger, Jonathan A. Ledermann, Pearly Khaw, Romerai D'Amico, et al. Adjuvant chemoradiotherapy versus radiotherapy alone in women with high-risk endometrial cancer (PORTEC-3): patterns of recurrence and post-hoc survival analysis of a randomised phase 3 trial. The Lancet Oncology, 20(9):1273-1285, 2019.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019.
Ruth Farrell, Marsha Michie, and Rachel Pope. Pregnant women in trials of COVID-19: a critical time to consider ethical frameworks of inclusion in clinical trials. Ethics & Human Research, 42(4):17-23, 2020.
Juan L. Gamella and Christina Heinze-Deml. Active invariant causal prediction: Experiment selection through stability. arXiv preprint arXiv:2006.05690, 2020.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016.
Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Schölkopf. Domain adaptation with conditional transferable components. In International Conference on Machine Learning, pages 2839-2848. PMLR, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
Christina Heinze-Deml, Jonas Peters, and Nicolai Meinshausen. Invariant causal prediction for nonlinear models. Journal of Causal Inference, 6(2), 2018.
M. A. Hernán and J. M. Robins. Causal Inference: What If. Boca Raton: Chapman & Hall/CRC, 2020.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626-6637, 2017.
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR, 2015.
Eva Karamitopoulou, Inti Zlobec, Ioannis Panayiotides, Efstratios S. Patsouris, George Peros, George Rallis, Christos Lapas, Petros Karakitsos, Luigi M. Terracciano, and Alessandro Lugli. Systematic analysis of proteins from different signaling pathways in the tumor center and the invasive front of colorectal cancer. Human Pathology, 42(12):1888-1896, 2011.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.
Pang Wei Koh, Shiori Sagawa, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, et al. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning, pages 5637-5664. PMLR, 2021.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097-1105, 2012.
Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In NeurIPS, 2018.
Yunan Luo, Jian Peng, and Jianzhu Ma. When causal inference meets deep learning. Nature Machine Intelligence, 2(8):426-427, 2020.
Linda Martin, Melissa Hutchens, Conrad Hawkins, and Alaina Radnov. How much do clinical trials cost? Nature Reviews Drug Discovery, 16(6):381-382, 2017.
Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pages 10-18. PMLR, 2013.
Huu-Giao Nguyen, Oxana Lundström, Annika Blank, Heather Dawson, Alessandro Lugli, Maria Anisimova, and Inti Zlobec. Image-based assessment of extracellular mucin-to-tumor area predicts consensus molecular subtypes (CMS) in colorectal cancer. Modern Pathology, pages 1-9, 2021.
Remi Abubakar Nout, VTHBM Smit, Hein Putter, Ina M. Juergenliemk-Schulz, Jan J. Jobsen, LCHW Lutgens, Elzbieta M. van der Steen-Banasik, Jan Willem M. Mens, Annerie Slot, MC Stenfert Kroese, et al. Vaginal brachytherapy versus pelvic external beam radiotherapy for patients with endometrial cancer of high-intermediate risk (PORTEC-2): an open-label, non-inferiority, randomised trial. The Lancet, 375(9717):816-823, 2010.
Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2009.
Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2010.
Judea Pearl and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.
Judea Pearl. Models, Reasoning and Inference. Cambridge, UK: Cambridge University Press, 2000.
Judea Pearl. Causal inference in statistics: An overview. Statistics Surveys, 3:96-146, 2009.
Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society, Series B (Statistical Methodology), pages 947-1012, 2016.
Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference. The MIT Press, 2017.
Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. The Journal of Machine Learning Research, 19(1):1309-1342, 2018.
Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. The risks of invariant risk minimization. In International Conference on Learning Representations, volume 9, 2021.
Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In International Conference on Learning Representations, 2020.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018.
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612-634, 2021.
Daniel D. Shapiro and Pavlos Msaouel. Causal diagram techniques for urologic oncology research. Clinical Genitourinary Cancer, 19(3):271-e1, 2021.
Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, and Peng Cui. Towards out-of-distribution generalization: A survey. arXiv preprint arXiv:2108.13624, 2021.
K. Sirinukunwattana, E. Domingo, S. Richman, K. Redmond, A. Blake, C. Verrill, S. Leedham, A. Chatzipli, C. Hardy, C. Whalley, C. Wu, A. Beggs, U. McDermott, P. Dunne, A. Meade, S. Walker, G. Murray, L. Samuel, M. Seymour, I. Tomlinson, P. Quirke, T. Maughan, J. Rittscher, and V. H. Koelzer. Image-based consensus molecular subtype classification (imCMS) of colorectal cancer using deep learning. Gut, 2020.
Baochen Sun and Kate Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. In European Conference on Computer Vision, pages 443-450. Springer, 2016.
Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2820-2828, 2019.
J. Taylor, B. Earnshaw, B. Mabey, M. Victors, and J. Yosinski. RxRx1: An image set for cellular morphological variation across many experimental batches. In International Conference on Learning Representations, 2019.
Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for StyleGAN image manipulation. ACM Transactions on Graphics (TOG), 40(4):1-14, 2021.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems, pages 831-838, 1992.
Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 1999.
Mei Wang and Weihong Deng. Deep visual domain adaptation: A survey. Neurocomputing, 312:135-153, 2018.
Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 3-19, 2018.
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023-6032, 2019.
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018.
Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76, 2020.
Figure 10: The SCRC1 visual comparison between ground-truth (red bounding box) and reconstructed images for Batch (BN), Layer (LN), Group (GN) and Instance (IN) normalization. Please zoom in on the image details for better visualization.
| [
"https://github.com/CTPLab/IID_representation_learning.",
"https://github.com/juangamella/aicp,",
"https://github.com/p-lambda/wilds.git"
] |
[
"Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays",
"Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays"
] | [
"Benjamin Bergner [email protected] \nDigital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany\n",
"Csaba Rohrer [email protected] \nDepartment of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany\n",
"Aiham Taleb [email protected] \nDigital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany\n",
"Martha Duchrau [email protected] \nDepartment of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany\n",
"Guilherme De Leon [email protected] \nContraste Radiologia Odontológica\nBlumenauBrasil\n",
"Jonas Almeida Rodrigues [email protected] \nSchool of Dentistry\nDepartment of Surgery and Orthopedics\nUniversidade Federal do Rio Grande do Sul -UFRGS\nPorto AlegreRSBrazil\n",
"Falk Schwendicke [email protected] \nDepartment of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany\n",
"Joachim Krois [email protected] \nDepartment of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany\n",
"Christoph Lippert \nDigital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany\n\nIcahn School of Medicine at Mount Sinai\nHasso Plattner Institute for Digital Health at Mount Sinai\nNYCUSA\n",
"Christoph Lippert@hpi De "
] | [
"Digital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany",
"Department of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany",
"Digital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany",
"Department of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany",
"Contraste Radiologia Odontológica\nBlumenauBrasil",
"School of Dentistry\nDepartment of Surgery and Orthopedics\nUniversidade Federal do Rio Grande do Sul -UFRGS\nPorto AlegreRSBrazil",
"Department of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany",
"Department of Oral Diagnostics\nDigital Health and Health Services Research\nCharité -Univer-sitätsmedizin Berlin\nGermany",
"Digital Health & Machine Learning\nHasso Plattner Institute\nUniversity of Potsdam\nGermany",
"Icahn School of Medicine at Mount Sinai\nHasso Plattner Institute for Digital Health at Mount Sinai\nNYCUSA"
] | [
"Proceedings of Machine Learning Research -Under Review"
] | We propose a simple and efficient image classification architecture based on deep multiple instance learning, and apply it to the challenging task of caries detection in dental radiographs. Technically, our approach contributes in two ways: First, it outputs a heatmap of local patch classification probabilities despite being trained with weak image-level labels. Second, it is amenable to learning from segmentation labels to guide training. In contrast to existing methods, the human user can faithfully interpret predictions and interact with the model to decide which regions to attend to. Experiments are conducted on a large clinical dataset of ∼38k bitewings (∼316k teeth), where we achieve competitive performance compared to various baselines. When guided by an external caries segmentation model, a significant improvement in classification and localization performance is observed. | null | [
"https://arxiv.org/pdf/2112.09694v1.pdf"
] | 245,334,559 | 2112.09694 | 173d15cb47a13795e9b9a5953ecb7e0f5f4bcc23 |
Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays
1-19, 2022
Benjamin Bergner [email protected]
Digital Health & Machine Learning
Hasso Plattner Institute
University of Potsdam
Germany
Csaba Rohrer [email protected]
Department of Oral Diagnostics
Digital Health and Health Services Research
Charité - Universitätsmedizin Berlin
Germany
Aiham Taleb [email protected]
Digital Health & Machine Learning
Hasso Plattner Institute
University of Potsdam
Germany
Martha Duchrau [email protected]
Department of Oral Diagnostics
Digital Health and Health Services Research
Charité - Universitätsmedizin Berlin
Germany
Guilherme De Leon [email protected]
Contraste Radiologia Odontológica
Blumenau, Brasil
Jonas Almeida Rodrigues [email protected]
School of Dentistry
Department of Surgery and Orthopedics
Universidade Federal do Rio Grande do Sul - UFRGS
Porto Alegre, RS, Brazil
Falk Schwendicke [email protected]
Department of Oral Diagnostics
Digital Health and Health Services Research
Charité - Universitätsmedizin Berlin
Germany
Joachim Krois [email protected]
Department of Oral Diagnostics
Digital Health and Health Services Research
Charité - Universitätsmedizin Berlin
Germany
Christoph Lippert
Digital Health & Machine Learning
Hasso Plattner Institute
University of Potsdam
Germany
Icahn School of Medicine at Mount Sinai
Hasso Plattner Institute for Digital Health at Mount Sinai
NYC, USA
christoph.lippert@hpi.de
Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays
Proceedings of Machine Learning Research -Under Review
1-19, 2022. Editors: Under Review for MIDL 2022. Full Paper - MIDL 2022 submission. Keywords: dental deep learning, MIL, interpretability, interactive learning.
We propose a simple and efficient image classification architecture based on deep multiple instance learning, and apply it to the challenging task of caries detection in dental radiographs. Technically, our approach contributes in two ways: First, it outputs a heatmap of local patch classification probabilities despite being trained with weak image-level labels. Second, it is amenable to learning from segmentation labels to guide training. In contrast to existing methods, the human user can faithfully interpret predictions and interact with the model to decide which regions to attend to. Experiments are conducted on a large clinical dataset of ∼38k bitewings (∼316k teeth), where we achieve competitive performance compared to various baselines. When guided by an external caries segmentation model, a significant improvement in classification and localization performance is observed.
Introduction
Dental caries is the most prevalent disease worldwide, affecting more than three billion people (Kassebaum et al., 2017). For diagnosis, clinicians commonly analyze bitewing radiographs (BWRs), which show the maxillary and mandibular teeth of one side of the jaw. However, the assessment of caries in bitewings is associated with low detection rates. For example, Schwendicke et al. (2015) reported domain expert-level sensitivity of only 24% (21%-26%, 95% CI) for the detection of both initial and advanced carious lesions.
The challenging nature of caries detection and the growing quantity of dental data motivate the use of deep learning techniques for this task. In order to support dentists, such models must overcome various technical challenges, as follows: (1) Diagnosing caries is a low signal-to-noise ratio problem. That is, lesions may occupy only a few pixels in the image. Standard convolutional neural networks (CNNs) have been shown to struggle in this setting (Pawlowski et al., 2020). In contrast, models using attention are designed to focus on important regions while ignoring the prevalent background (Katharopoulos and Fleuret, 2019).
(2) Caries classification is a multiple instance learning (MIL) problem (Dietterich et al., 1997). That is, an image is considered positive if at least one carious lesion is present and negative if and only if no lesion is present. In this context, an image is described as a bag of image region features called instances; see Carbonneau et al. (2018) for an introduction.
(3) BWRs contain multiple teeth, and each may be affected by caries. However, classification outputs are restricted to a single probability score and thus lack interpretability (Zhang and Zhu, 2018). A supporting model should indicate where each lesion is located so that its correctness can be verified. (4) Optimal decision support is receptive to feedback (Holzinger, 2016). Beyond only outputting information about the occurrence of caries (learned from weak labels), a dentist or teacher (Hinton et al., 2015) could interact with the model by providing strong labels (such as segmentation masks) to improve performance.
We present Embedding Multiple Instance Learning (EMIL), which is an interpretable and interactive method that fulfills above considerations. EMIL extracts 3D patches from a spatial embedding resulting from any CNN. Each patch may show caries and is classified individually, and all predictions together form a heatmap of local probabilities, notably without access to patch labels. An attention mechanism weighs local predictions and aggregates them into a global image-level prediction. Besides standard classification, the method enables (but does not rely on) the inclusion of dense labels. Although EMIL adds important capabilities for the present use case, classification of dental caries, it is a simple adaptation to common CNNs with low computational cost that translates to other diagnosis tasks. We evaluate performance and interpretability using a large clinical bitewing dataset for imageand tooth-level classification, and show the positive impact of including strong tooth and caries labels. Our code is available at: https://github.com/benbergner/emil.
Related Work
Caries prediction models
Recently, several caries prediction models have been published. Tripathi et al. (2019) used a genetic algorithm on 800 BWRs and reported an accuracy of 95.4%. Srivastava et al. (2017) trained a 100+ layer CNN on 2,500 BWRs and reported an F-score of 70%. Megalan Leo and Kalpalatha Reddy (2020) trained a CNN on 418 cropped teeth from 120 BWRs and achieved an accuracy of 87.6%. Kumar and Srivastava (2018) proposed an incremental learning approach and trained a U-Net on 6,000 BWRs, which yielded an F-score of 61%. Cantu et al. (2020) trained a U-Net on 3,686 BWRs and reported tooth-level accuracy and F-score of 80% and 73%, respectively. Bayraktar and Ayan (2021) trained YOLO on 800 bitewings and reported an AUC score of 87%.
Deep Multiple Instance Learning
MIL is commonly used for the classification of microscopic images in which, e.g., a single cancer cell positively labels a bag (Kraus et al., 2016; Sudharshan et al., 2019). MIL has also recently been applied in radiology, e.g. Han et al. (2020) screened chest CTs for COVID-19 and Zhou et al. (2018) detected diabetic retinopathy in retinal images. To the best of our knowledge, this is the first application of MIL to the field of dental radiology.
Instance representations are commonly created from patches extracted from the input image (Xu et al., 2014), but can also be extracted from a CNN embedding (Pawlowski et al., 2020; Dosovitskiy et al., 2021). Furthermore, one can distinguish between approaches predicting at the instance (Wu et al., 2015; Campanella et al., 2018) or bag level (Wang et al., 2018; Ilse et al., 2018). Our approach combines the extraction of instances as overlapping patches from a CNN embedding with the classification of individual instances that are aggregated with a constrained form of attention-based MIL pooling.
Method
Below, we describe our proposed model, the creation of a local prediction heatmap, and a method to incorporate strong labels. A schematic of the architecture is shown in Figure 1.
Patch extraction and classification
We consider an image $\mathbf{X} \in \mathbb{R}^{H_X \times W_X \times C_X}$ as input, with $H_X$, $W_X$, $C_X$ being height, width and number of channels, and assign a binary label $y \in \{0, 1\}$. First, a(ny) convolutional backbone computes a feature map $\mathbf{U} \in \mathbb{R}^{H_U \times W_U \times C_U}$:
$$\mathbf{U} = f_{\mathrm{Enc}}(\mathbf{X}). \quad (1)$$
Then, $K$ patches $\mathbf{P} \in \mathbb{R}^{K \times H_P \times W_P \times C_U}$ are extracted from $\mathbf{U}$, with $H_P \leq H_U$, $W_P \leq W_U$. For this purpose, a sliding window is used with kernel size $(H_P, W_P)$ and stride $(H_S, W_S)$. Each patch is spatially pooled, resulting in a feature matrix $\bar{\mathbf{P}} \in \mathbb{R}^{K \times C_U}$. We use average pooling, which is most prevalent in image classification (Lin et al., 2014):
$$\bar{P}_k = \frac{1}{H_P W_P} \sum_{h=1}^{H_P} \sum_{w=1}^{W_P} P_{k,h,w}. \quad (2)$$
Both patch extraction and pooling are implemented by an ordinary local pooling operation. Each patch may show a carious tooth region and is thus classified independently by a shared fully-connected layer parametrized by $\mathbf{o} \in \mathbb{R}^{C_U}$. This is followed by a sigmoid operator, which outputs classifications $\tilde{\mathbf{y}} \in \mathbb{R}^K$ holding class probabilities for each patch:
$$\tilde{y}_k = \sigma\big((\bar{\mathbf{P}} \mathbf{o})_k\big), \quad k = 1 \dots K. \quad (3)$$
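In PyTorch, Eqs. (1)-(3) reduce to a local average pooling over the backbone feature map followed by a shared single-output linear layer. The sketch below is our own minimal reading of this step (the class name and the kernel/stride values are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Extract overlapping patches from a CNN embedding and classify each one."""
    def __init__(self, c_u, kernel=3, stride=1):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel, stride)  # Eq. (2): patch extraction + mean pooling
        self.fc = nn.Linear(c_u, 1)               # shared classifier o

    def forward(self, u):                          # u: (B, C_U, H_U, W_U), output of Eq. (1)
        p = self.pool(u)                           # (B, C_U, H', W')
        b, c, h, w = p.shape
        p = p.flatten(2).transpose(1, 2)           # (B, K, C_U) with K = H' * W'
        y_tilde = torch.sigmoid(self.fc(p)).squeeze(-1)  # Eq. (3): per-patch probabilities
        return y_tilde, p
```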
Patch weighting and aggregation
We use a patch weight vector $\mathbf{w} \in \mathbb{R}^{K \times 1}$ to focus on carious lesions while neglecting background and non-caries tooth regions. The image-level prediction $\hat{y}$ is computed as:
$$\hat{y} = \frac{\sum_{k}^{K} w_k \tilde{y}_k}{\max\big(\sum_{k}^{K} w_k,\, K_{\min}\big)}. \quad (4)$$
The denominator ensures that at least $K_{\min}$ patches are attended to, and provides a way to include prior knowledge about the target's size. For caries classification, a single positive patch should lead to a positive prediction, so we set $K_{\min} = 1$ (see Appendix D for more details). The weight of each patch is determined by its own local representation. We use a variant of the gated attention mechanism (Ilse et al., 2018), which is a two-branch multilayer perceptron parametrized by $\mathbf{A} \in \mathbb{R}^{C_U \times D}$, $\mathbf{B} \in \mathbb{R}^{C_U \times D}$ and $\mathbf{c} \in \mathbb{R}^{D \times 1}$, with $D$ hidden nodes:
$$\mathbf{w} = \sigma\big(\big(\tanh(\bar{\mathbf{P}} \mathbf{A}) \odot \sigma(\bar{\mathbf{P}} \mathbf{B})\big)\, \mathbf{c}\big). \quad (5)$$
Compared to the original formulation using softmax, we employ sigmoid as the outer function and normalize Eq. 4 accordingly. This makes the weights independent of each other and allows ignoring all patches, which is useful for classifying the negative class.
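Continuing the PatchClassifier sketch above, Eqs. (4)-(5) can be written as a small pooling module; this layout is our own reading of the equations, with $D$ and $K_{\min}$ left as free parameters:

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Sigmoid-gated attention weights (Eq. 5) and K_min-constrained pooling (Eq. 4)."""
    def __init__(self, c_u, d=128, k_min=1.0):
        super().__init__()
        self.att_a = nn.Linear(c_u, d)   # tanh branch (A)
        self.att_b = nn.Linear(c_u, d)   # sigmoid gate (B)
        self.att_c = nn.Linear(d, 1)     # projection c
        self.k_min = k_min

    def forward(self, p, y_tilde):       # p: (B, K, C_U), y_tilde: (B, K)
        gate = torch.tanh(self.att_a(p)) * torch.sigmoid(self.att_b(p))
        w = torch.sigmoid(self.att_c(gate)).squeeze(-1)    # (B, K), independent weights
        denom = torch.clamp(w.sum(dim=1), min=self.k_min)  # max(sum_k w_k, K_min)
        y_hat = (w * y_tilde).sum(dim=1) / denom           # Eq. (4)
        return y_hat, w
```

The sigmoid outer function (instead of softmax) is what lets all weights go to zero simultaneously for a negative image, while the clamp keeps the denominator from collapsing below $K_{\min}$.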
Interpretability
A heatmap $M_{\tilde{y}}$ is constructed with each element corresponding to a local prediction $\tilde{y}_k$. For visualization, the heatmap is interpolated and superimposed on the input image. Similarly, another heatmap $M_w$ is built from the patch weights $\mathbf{w}$. While $M_{\tilde{y}}$ shows local predictions, $M_w$ indicates which areas are considered for global classification. Note that the locations in $M_{\tilde{y}}$ and $M_w$ can be interpreted as probabilities, a property that attribution methods lack. The probability for any group of patches (e.g., a tooth) can be calculated with Eq. 4 by updating the patch indices in both sums. Furthermore, note that EMIL is optimized for faithfulness (Alvarez Melis and Jaakkola, 2018). That is, one can tell exactly by how much $\hat{y}$ changes when removing any patch $i$, which is $-\frac{w_i \tilde{y}_i}{K_{\min}}$ if $\sum_k w_k - w_i < K_{\min}$ and $0$ otherwise. For example, if $K_{\min} = 1$ and 2 caries patches are present, $\hat{y}$ shouldn't change by removing one caries patch, which aligns with the standard MIL assumption (Foulds and Frank, 2010).
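Because Eq. (4) is a ratio of weighted sums, restricting both sums to a subset of patch indices directly yields the probability for a region of interest such as a single tooth. A small helper sketch (names are ours):

```python
import torch

def group_probability(w, y_tilde, idx, k_min=1.0):
    """Caries probability for a patch subset (Eq. 4 restricted to indices idx)."""
    w_g, y_g = w[idx], y_tilde[idx]
    return (w_g * y_g).sum() / torch.clamp(w_g.sum(), min=k_min)
```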
Interactivity
Optionally, learning can be guided by providing additional labels, such as segmentation masks. For example, a dentist could interactively correct errors/biases, while a data scientist might want to incorporate dense (and expensive) labels for a subset of the data. To create patch-wise labels $\bar{\mathbf{y}} \in \mathbb{R}^K$, a downscaled binary annotation mask is max-pooled with the kernel size/stride from Sect. 3.1, and vectorized. Then, to compute the compound loss $L$ for an image, both the patch- and image-level cross-entropy losses $\boldsymbol{\ell} = [L_{\mathrm{image}}, L_{\mathrm{patch}}]$ are weighted and added:
$$L_{\mathrm{image}} = -\big(y \log(\hat{y}) + (1 - y) \log(1 - \hat{y})\big), \quad (6)$$
$$L_{\mathrm{patch}} = -\frac{1}{K} \sum_{k=1}^{K} \big(\bar{y}_k \log(\tilde{y}_k) + (1 - \bar{y}_k) \log(1 - \tilde{y}_k)\big), \quad (7)$$
$$L = \sum_i \alpha_i \ell_i, \qquad \alpha_i = \mathrm{const}\Big(\frac{\max_j \ell_j}{\ell_i}\Big). \quad (8)$$
Due to class imbalance in caries masks, the network easily fits the background class, and we observe that $L_{\mathrm{patch}} \ll L_{\mathrm{image}}$. Thus, $L_{\mathrm{image}}$ dominates the compound loss and diminishes the benefit of strong labels. To mitigate this problem, the coefficient $\alpha$ is introduced to dynamically scale each partial loss to the magnitude of the largest one. Note that $\alpha$ is transformed into a constant so that the partial losses are detached from the computational graph.
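The dynamic coefficient $\alpha$ of Eq. (8) can be realized by detaching the ratio to the largest partial loss from the computational graph; a sketch (the small epsilon guarding against division by zero is our addition):

```python
import torch

def compound_loss(losses):
    """Scale each partial loss to the magnitude of the largest one (Eq. 8).

    `losses` is a list of scalar tensors, e.g. [L_image, L_patch].
    """
    with torch.no_grad():  # alpha is treated as a constant w.r.t. the graph
        max_loss = torch.stack([l.detach() for l in losses]).max()
        alphas = [max_loss / (l.detach() + 1e-12) for l in losses]
    return sum(a * l for a, l in zip(alphas, losses))
```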
Experiments
Below, we describe the experiments and answer the following research questions: (1) How well can EMIL predict caries in BWRs and cropped tooth images? (2) Can it highlight caries and provide clinical insight? (3) To what extent do strong labels improve performance?
Dataset
The dataset stems from three dental clinics in Brazil specialized in radiographic and tomographic examinations. The dataset consists of 38,174 BWRs (corresponding to 316,388 cropped tooth images) taken between 2018 and 2021 from 9,780 patients with a mean (sd) age of 34 (14) years. Tooth-level caries labels were extracted from electronic health records (EHRs) that summarize a patient's dental status. Next to these EHR-based ground truth labels, which are associated with uncertainties and biases (Gianfrancesco et al., 2018), a random sample of 355 BWRs was drawn, and annotated with caries masks by 4 experienced dentists, yielding 254 positive and 101 negative cases. These annotations were reviewed by a senior radiologist (+13 years of experience) to resolve conflicts and establish a test set.
Experimental setup
We consider caries classification on BWR and tooth level and use stratified 5-fold cross-validation with non-overlapping patients for training and hyperparameter tuning. Due to class imbalance, the balanced accuracy is used as stopping criterion. In the tooth-level task, both class terms in $L_{\mathrm{image}}$ are weighted by the inverse class frequency to account for class imbalance. Results are reported on the hold-out test set as the average of the 5 resulting models with 95% CI. Binary masks from two teacher models are used to simulate interactivity: (1) a tooth instance-segmentation model (unpublished) pointing at affected teeth, and (2) a caries segmentation model (Cantu et al., 2020). Note that these models are subject to errors and do not replace class labels, but only guide training; if a segmentation contradicts the classification label, it is discarded. More training details are described in Appendix C.
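Patient-disjoint, class-stratified folds of this kind can be produced, for example, with scikit-learn's StratifiedGroupKFold (available from scikit-learn 1.0 onward). The snippet below is an illustrative sketch with toy placeholder labels and patient IDs, not the authors' split code:

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

# Toy placeholders: image-level caries labels and per-image patient IDs
y = np.array([0, 1, 1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 1, 1, 2, 2, 3, 3])

cv = StratifiedGroupKFold(n_splits=2, shuffle=True, random_state=0)
for train_idx, val_idx in cv.split(np.zeros(len(y)), y, groups):
    # Patients (groups) never overlap between train and validation folds
    print(train_idx, val_idx)
```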
Baselines
Several baselines are used to show competitive performance. ResNet-18 (He et al., 2016) serves as backbone for all methods. EMIL makes only a few changes to the default CNN, so we also employ ResNet-18 as a baseline. In order to study the effect of embedding-based patch extraction, we compare to DeepMIL (Ilse et al., 2018), which operates on patches cropped from the input image. As patch sizes, 32 and 128 px with 50% overlap are used.
To show the effect of our patch weighting and aggregation approach, the attention mechanism is replaced by the max operator, which is common in instance-based MIL (Amores, 2013). However, this did not fit the training data. A more powerful baseline is the hybrid version of the Vision Transformer (ViT) (Dosovitskiy et al., 2021) with a single encoding block. As in EMIL, patches are extracted from the output of the last conv layer, and we found large overlapping patches to be beneficial. The hybrid base and pure attention versions were not included because they performed worse or did not fit the training data.
For the evaluation of the interactive settings, a simple baseline consists in attaching a 1x1 conv layer (Lin et al., 2014) with a single output channel, kernel and stride of 1 and zero padding, to the last ResNet-18 encoder block to output a segmentation map. As a stronger baseline, we adapt Y-Net (Mehta et al., 2018), which consists of a U-Net (with a ResNet-18 encoder) and a standard output layer attached to the last encoding block for classification. In all interactive settings, the same loss functions are used (Eq. 6-8). See also Appendix B.

Classification results

Table 1 and Fig. 2 summarize the results. In bitewings, EMIL shows the highest mean accuracy, F-score and sensitivity across settings (in bold). In contrast, increasing capacity (ResNet-50) and more complex self-attention based aggregation functions (ViT) do not improve performance. Furthermore, DeepMIL-32/128 exhibit lower accuracy, indicating that context beyond patch borders is crucial in classifying caries. The use of tooth/caries masks increases mean performance. In particular, the mask-guided EMIL model exhibits higher scores across metrics compared to a non-guided model and has a significantly higher F-score and AUROC than ResNet-18. These trends continue in the tooth-level data, although the improvements of guided models are smaller. This is expected because the signal-to-noise ratio is higher than in bitewings.
We also report results for the clinical labels (EHR GT) on which all models are based. The summary metrics (Bal. Acc., F-score) are higher for bitewings, possibly because errors at the tooth level may still lead to true positives. Intriguingly, all tooth-level models show a higher accuracy/F-score than the EHR GT. This suggests that salient patterns are learned that clinicians missed (or did not report), and may result from the fact that mislabeled false positives have less weight in the loss function. Table 1 also reports average runtimes per iteration and peak memory usage for a realistic batch size of 16. EMIL is nearly as efficient as its underlying backbone, and up to 6.6× faster than Y-Net, while consuming up to 9.5× less memory at similar/better mean performance.

Evaluation of interpretability

The post-hoc attribution baselines, as well as DeepMIL, are sensitive to positive cases (rows 1-3) but not precise. Moreover, these methods do not ignore the negative class (row 4), and false negatives are accompanied by false positive visualizations (row 5). This is resolved in Y-Net and EMIL ($M_{\tilde{y}}$), and caries may be highlighted although the activation is too low to cross the classification threshold (e.g., row 5, column 8). For the quantitative comparison in Table 2, we threshold each map by setting its top-valued pixels to 1, so that the total area equals the respective ground truth. When considering all confidences (IoU@0), Saliency and EMIL localize best. In the interactive settings, the scores improve significantly for Y-Net and EMIL. We also conduct an experiment where only confident predictions ($\hat{y} \geq 0.95$) are retained. Average localization performance increases most in the EMIL models, by 12.4 and 30.9 percentage points, respectively.
Discussion
We presented two caries classifiers for bitewing and tooth images. The former indicates the general presence of caries in the dentition (with high PR AUC), while the latter may support diagnosis. One limitation is that training is performed with EHRs, which makes the labels error-prone. However, our dataset is much larger than related work, and relabeling at scale is impractical. Yet, the tooth-level model shows higher sensitivity than clinicians (62.68±4.37 vs. 44.27), suggesting that more lesions can be found and treated in practice. The heatmaps are a useful tool to see on what grounds a prediction is made, and to estimate caries severity. Furthermore, we showed that strong labels pointing at relevant regions improve classification and localization, opening up ways to integrate the user into the training process. Technically, our approach may serve further computer-aided diagnosis applications in radiology, where trust and the ability to integrate human knowledge are critical.

Appendix A. Tooth-level classification results

Table 3 shows classification results and prevalence for different tooth and caries types for the guided EMIL model. In terms of tooth types, the model performs significantly worse for canines, which can be explained by the low prevalence in the data. A higher average F-score is observed for premolars compared to molars. The former may be easier to detect because they appear centrally in the bitewing and are unlikely to be partially cut out of the image. In secondary/recurrent caries, the average sensitivity is higher than in primary/initial caries. One possible reason for this result is that such lesions are adjacent to restorations, which are radiopaque, sharply demarcated and easy to detect. In addition, secondary caries can spread more quickly because it no longer has to penetrate the hard enamel, but can quickly reach the softer interior of the tooth.
Appendix B. Conceptual architecture comparison
In Table 4, we revisit the baselines used in the main paper and make a conceptual comparison. EMIL and ViT (hybrid) extract patches from a CNN embedding, while DeepMIL extracts patches from the input image. The embedding approach has the advantage that the context is large due to the growing receptive field resulting from the sequence of convolutional layers. The MIL literature (Carbonneau et al., 2018;Amores, 2013) distinguishes between two different types of inputs for the classification function. The first type is the bag representation, which is calculated by aggregating instances using a MIL pooling operation (such as mean or attention). The second type used by EMIL is individual instances that are classified before aggregation. Standard CNNs instead use a global embedding, without distinguishing between instances and bags. There are also different assumptions about when a bag is considered positive (Foulds and Frank, 2010). The most common is the standard assumption (any positive instance → bag positive; no positive instance → bag negative).
In the weighted collective assumption (used by DeepMIL), all instances are considered in a weighted manner to infer the class of the bag. EMIL uses both the weighted collective and the threshold-based assumption, where a minimum number of patches (K_min) must be positive for the bag to be classified as positive. Standard CNNs are black boxes that require post-hoc attribution methods to give insights about their predictions (in ViT, attention maps can be visualized as well). The drawback of such methods is that they are not optimized for faithfulness (Adebayo et al., 2018; Rudin, 2019) and cannot explain the negative class (see Fig. 4, row 4). Y-Net is interpretable through its decoder but requires segmentation labels and does not use the decoder output for classification. DeepMIL uses attention, but the weights need to sum to 1, which is unintuitive for the negative class. EMIL uses both attention weights and patch probabilities to create faithful explanations for a prediction. Regarding interactive learning, standard CNNs are trained with classification labels but cannot learn from dense labels. Y-Net and EMIL are both able to learn from dense labels, but EMIL does so efficiently.

Appendix C. Implementation and training details

C.1. Baseline implementations

The ResNet-18 backbone is based on the original Pytorch implementation¹. For DeepMIL, the original implementation of Ilse et al. (2018)² is used. For ViT, we adapt the vit-pytorch repository³ and found that a minimal hybrid version using a single transformer encoder block, with 8 heads (each 64-dimensional), works best. Both patch representations and inner MLP layers are 128-dimensional. No dropout is used, and all patch representations are averaged before the MLP head. We employ the same approach as in EMIL and create overlapping patches with a large kernel size of 5 and a stride of 1. For Y-Net, we adapt the residual U-Net implementation of the ResUnet repository⁴, where we add a fully-connected output layer to the bottleneck and learn the upsampling in the decoder. For saliency maps, occlusion sensitivity and Grad-CAM, we use the Captum library (Kokhlikyan et al., 2020).
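To illustrate the overlapping patch extraction mentioned above (kernel size 5, stride 1 on the CNN embedding), here is a small PyTorch sketch based on nn.Unfold. The padding choice, tensor shapes and the projection layer are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

# Suppose the encoder produced a feature map of shape (B, C, H, W).
B, C, H, W = 2, 128, 16, 21
feats = torch.randn(B, C, H, W)

# Overlapping patches: kernel 5, stride 1; padding 2 keeps the spatial grid size.
unfold = nn.Unfold(kernel_size=5, stride=1, padding=2)
patches = unfold(feats)                    # (B, C*5*5, H*W)
patches = patches.transpose(1, 2)          # (B, H*W, C*25): one token per location

# Each row can now be treated as an instance/token, e.g. projected to 128 dims.
proj = nn.Linear(C * 25, 128)
tokens = proj(patches)                     # (B, H*W, 128)
print(patches.shape, tokens.shape)
```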
C.2. Preprocessing
The dataset consists of 38,174 bitewings, which corresponds to 316,388 teeth. To prepare the data, we make use of the following exclusion criteria. BWRs are excluded if a single tooth is located on the wrong jaw side or if all carious lesions occur in incisors. Similarly, a tooth image is excluded if it shows an incisor or if the tooth is located on the wrong jaw side. We remove all images from the test set, as well as other images from test set patients. After applying these filters, 36,676 bitewings (26,393 with caries) and 274,877 teeth (59,859 with caries) remain for training. The test set consists of 355 BWRs, 254 positive, 101 negative. This corresponds to 2,938 tooth images, 879 positive, 2,059 negative. Bitewing images are resized to 512x672 pixels, which preserves the prevalent height-to-width ratio. Tooth images are cropped with a 50-pixel padding on each side, and then resized to 384x384 pixels. Intensities are normalized in the range [-1,1]. No other augmentations (such as rotations, translations, contrast enhancements, AutoAugment) are used, as no improvement was observed.
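A hedged sketch of the stated resizing and normalization steps, assuming torchvision and float images in [0, 1]; the tooth-cropping logic is simplified and the helper names are ours.

```python
import torch
import torchvision.transforms.functional as TF

def preprocess_bitewing(img: torch.Tensor) -> torch.Tensor:
    """Resize a bitewing to 512x672 and normalize intensities to [-1, 1].

    `img` is assumed to be a float tensor in [0, 1] with shape (C, H, W).
    """
    img = TF.resize(img, [512, 672], antialias=True)
    return img * 2.0 - 1.0

def preprocess_tooth(img: torch.Tensor, box: tuple) -> torch.Tensor:
    """Crop a tooth with 50-pixel padding around its box, resize to 384x384, normalize."""
    top, left, h, w = box
    img = TF.crop(img, max(top - 50, 0), max(left - 50, 0), h + 100, w + 100)
    img = TF.resize(img, [384, 384], antialias=True)
    return img * 2.0 - 1.0

x = torch.rand(1, 1000, 1400)
print(preprocess_bitewing(x).shape, preprocess_tooth(x, (200, 300, 250, 180)).shape)
```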
C.3. Optimization
We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001, β₁ = 0.9, β₂ = 0.999, and no weight decay. In each fold, we train for 20-30 epochs depending on the training progress of the respective methods. We observed that training in the interactive settings is faster, which is expected as the segmentation masks guide the model to the salient patterns. A batch size of 32 is used for bitewings, and 128 for the tooth data. For Y-Net, we had to reduce the batch size to 16 and 32, respectively, due to memory constraints.
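The stated settings translate directly into PyTorch; the linear model below is only a stand-in for the actual network.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the actual network
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=0.0,  # no weight decay, as stated above
)
```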
C.4. Segmentation masks
Segmentation masks originally have the same dimension as the input image. In contrast, EMIL uses downscaled masks. To prevent small carious lesions from disappearing due to downscaling, EMIL performs bilinear upsampling of the encoder output by a factor of 4 before extracting patches, resulting in spatial feature map resolutions of 64x84 for bitewing images and 48x48 for tooth images. Note that the primary task is classification, i.e., segmentation masks are used to guide training, but do not replace the classification label. A positive mask is only used if it corresponds to the class label; if the class label is negative, all elements of the mask are set to 0. Due to label noise, we do not use negative masks for the tooth-level task. Considering these filters, 35,683 masks (∼97%) remain for the bitewing data, and 43,178 masks (∼72% of all positive instances) remain for the tooth-level data. For more details on the performance of the caries segmentation model, see Cantu et al. (2020).
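A sketch of this mask handling under our own assumptions about tensor shapes: the encoder output is bilinearly upsampled by a factor of 4, the mask is downscaled to that resolution, and a positive mask is used only when it agrees with the class label.

```python
import torch
import torch.nn.functional as F

def prepare_guidance(feats, mask, label):
    """Match a full-resolution mask to the (upsampled) feature grid.

    feats: (B, C, h, w) encoder output; mask: (B, 1, H, W) in {0, 1};
    label: (B,) image-level class labels.
    """
    # Bilinear upsampling of the encoder output by a factor of 4 (as described above).
    feats_up = F.interpolate(feats, scale_factor=4, mode="bilinear", align_corners=False)
    # Downscale the mask to the upsampled feature resolution.
    mask_small = F.interpolate(mask, size=feats_up.shape[-2:], mode="bilinear",
                               align_corners=False)
    # A positive mask is only used if it agrees with the class label.
    mask_small = mask_small * label.view(-1, 1, 1, 1).float()
    return feats_up, mask_small

feats = torch.randn(2, 128, 16, 21)
mask = (torch.rand(2, 1, 512, 672) > 0.99).float()
label = torch.tensor([1, 0])
f_up, m = prepare_guidance(feats, mask, label)
print(f_up.shape, m.shape)  # (2, 128, 64, 84), (2, 1, 64, 84)
```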
Appendix D. EMIL hyperparameters
EMIL has two interesting hyperparameters, which we want to explain in more detail: K_min and the patch size. The hyperparameter K_min represents the minimum collective weight that must be assigned to the set of patches to be able to obtain a confident positive classification (i.e., ŷ = 1). For simplicity, consider the case where attention weights can only take on values in {0, 1}. Then K_min can be thought of as the minimum number of patches that must be attended to. If this constraint is violated, the denominator of Eq. 4 turns into a constant, and the network is incentivized (for the positive class) to attend to more patches by increasing w through the numerator. Note that the value of ŷ also depends on ỹ, i.e., attended patches must be classified positively to obtain a high positive class score. Fig. 5 shows the effect of K_min on the patch weight map. For increasing values of K_min, sensitivity increases but precision decreases. If the value is too high, performance decreases because healthy tooth regions will be attended, which erroneously reduces the disease probability (see, e.g., the first row of Fig. 5). When K_min = 0, little attention is assigned to any patch because all possible class scores can be obtained independent of w. According to the standard MIL assumption (Dietterich et al., 1997; Foulds and Frank, 2010), a single positive instance is sufficient to positively label a bag; therefore K_min is set to 1 and must not be searched.

The second hyperparameter is the patch size, which we set equal for both dimensions, H_P = W_P; we use H_P in the following to denote both width and height. The patch size controls the individual regions that are classified. If H_P = H_U, then a single global patch is considered and the training behavior is similar to a standard CNN. Fig. 6 shows the effect of H_P on the heatmap for H_S = 1. Attention weights of overlapping patches are summed and clipped at 1 to improve visualizations. One can observe that sensitivity increases while precision decreases. In our experiments, the patch size had little impact on classification performance, but we prefer a small value for precise localization. Note that a small patch in the embedding space has a large receptive field and thus sufficient context to detect both small and larger lesions (Luo et al., 2016). For the main experiments, we set K_min = 1, H_P = W_P = 1 and H_S = W_S = 1.

Figure 5: Weight maps for increasing K_min. Sensitivity increases, precision decreases.

Figure 6: Unnormalized weight maps for increasing patch sizes, with H_P = W_P. Sensitivity increases, precision decreases.
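Eq. 4 itself is not reproduced in this excerpt, so the following PyTorch sketch encodes one plausible reading of the description above: an attention-weighted average of patch probabilities whose denominator is floored at K_min, so that a confident positive prediction requires at least K_min units of collective attention. Names and shapes are ours.

```python
import torch

def bag_score(w, y_patch, k_min=1.0):
    """Aggregate patch probabilities with attention weights under a K_min constraint.

    One plausible reading of the description of Eq. 4 (not the authors' exact formula):
    w:        (B, N) non-negative attention weights
    y_patch:  (B, N) per-patch class probabilities (the "y tilde" values)
    """
    num = (w * y_patch).sum(dim=1)
    # Flooring the denominator at k_min: with too little collective attention,
    # the denominator is constant, so more attention must be paid to raise the score.
    den = torch.clamp(w.sum(dim=1), min=k_min)
    return num / den

w = torch.tensor([[0.9, 0.1, 0.0], [0.2, 0.1, 0.0]])
y_patch = torch.tensor([[0.95, 0.8, 0.1], [0.9, 0.2, 0.1]])
print(bag_score(w, y_patch, k_min=1.0))  # second bag is capped by the K_min floor
```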
Figure 1: EMIL classification architecture.

Figure 2: ROC and PR curves with AUC values and CIs for bitewing and tooth datasets.

Figure 3: Visualization on a test image (panels: Bitewing, GT, Weights, Prediction). EMIL is trained without expert segmentations.

Figure 4: Tooth visualizations of different methods.

Figures 8-10: False negative bitewings and EMIL heatmaps; true and false negative teeth.
Table 1: Caries classification results with 95% CI and computational comparison.

Bitewing (512x672)
Method        Bal. Acc.      F-score        Sens.           Spec.           Time [ms]  RAM [GB]
ResNet-18     73.31 ± 2.65   75.03 ± 0.87   64.25 ± 2.25    82.38 ± 7.08    44         2.48
ResNet-50     71.15 ± 3.68   75.90 ± 0.85   67.24 ± 3.12    75.05 ± 10.19   143        9.19
DeepMIL-32    67.28 ± 1.07   75.17 ± 3.11   68.43 ± 5.89    66.14 ± 6.64    256        9.43
DeepMIL-128   70.68 ± 2.92   73.08 ± 5.85   62.76 ± 9.50    78.61 ± 7.95    133        7.51
ViT           72.92 ± 3.67   74.35 ± 3.90   63.46 ± 7.18    82.38 ± 11.51   50         3.40
EMIL          73.64 ± 1.77   77.88 ± 2.11   69.45 ± 4.43    77.82 ± 6.65    45         2.48
ResNet-18 +   74.10 ± 4.43   73.19 ± 9.51   61.26 ± 12.34   86.93 ± 5.32    47         2.48
ResNet-18 +   75.40 ± 2.17   77.32 ± 3.18   67.24 ± 5.95    83.56 ± 7.41    47         2.48
Y-Net +       75.78 ± 1.41   77.13 ± 3.07   66.61 ± 5.27    84.95 ± 4.38    318        23.54
Y-Net +       75.90 ± 2.47   77.56 ± 1.74   67.24 ± 2.14    84.55 ± 4.31    318        23.54
EMIL +        74.69 ± 2.12   77.79 ± 3.07   68.58 ± 4.82    80.79 ± 3.64    48         2.48
EMIL +        76.64 ± 1.50   79.52 ± 3.48   70.71 ± 6.76    82.57 ± 7.66    48         2.48
EHR GT        80.90          83.11          74.80           87.00           -          -

Tooth (384x384)
Method        Bal. Acc.      F-score        Sens.           Spec.           Time [ms]  RAM [GB]
ResNet-18     75.13 ± 0.72   65.40 ± 0.97   62.39 ± 2.66    87.88 ± 1.85    24         1.17
ResNet-50     74.72 ± 1.05   64.88 ± 1.34   60.57 ± 4.41    88.88 ± 2.44    68         4.14
DeepMIL-32    70.04 ± 0.80   58.03 ± 1.13   59.75 ± 4.94    80.33 ± 4.04    105        4.03
DeepMIL-128   74.52 ± 0.72   64.60 ± 0.98   60.16 ± 2.54    88.88 ± 1.45    51         2.92
ViT           74.52 ± 1.01   64.73 ± 1.38   58.54 ± 4.12    90.49 ± 2.53    29         1.56
EMIL          75.67 ± 1.74   66.02 ± 2.25   64.16 ± 5.26    87.17 ± 2.14    25         1.18
ResNet-18 +   74.99 ± 1.32   65.53 ± 1.90   58.57 ± 3.93    91.41 ± 2.06    26         1.17
Y-Net +       76.40 ± 1.13   67.50 ± 1.49   62.21 ± 3.83    90.59 ± 2.10    148        10.25
EMIL +        76.14 ± 1.29   67.01 ± 1.86   62.68 ± 4.37    89.59 ± 3.12    26         1.18
EHR GT        70.03          57.44          44.27           95.79           -          -
Table 2: Localization comparison.

Method     IoU@0        IoU@95
Saliency   17.8 ± 0.8   23.2 ± 0.7
Grad-CAM   10.6 ± 2.6   21.1 ± 4.5
Occlusion  10.9 ± 1.1   25.5 ± 2.2
DeepMIL    14.2 ± 0.2   20.7 ± 2.0
EMIL       15.9 ± 2.9   28.3 ± 6.0
Y-Net +    44.9 ± 0.8   63.6 ± 1.2
EMIL +     38.1 ± 0.7   69.0 ± 3.3
Table 3: Classification results for different tooth and caries types for EMIL +.

Type       Bal. Acc.      F-score        Sens.           Spec.          Preval.
Canine     67.48 ± 4.80   45.47 ± 7.14   42.55 ± 12.25   92.42 ± 4.14   11.0
Premolar   76.88 ± 1.21   69.08 ± 1.69   64.11 ± 4.52    89.66 ± 3.25   41.5
Molar      76.18 ± 1.49   67.50 ± 2.15   63.74 ± 4.17    88.63 ± 2.86   47.5
Primary    76.63 ± 0.98   61.99 ± 2.17   63.68 ± 3.58    89.59 ± 3.12   17.4
Secondary  77.67 ± 2.11   62.25 ± 3.21   65.74 ± 5.70    89.59 ± 3.12   16.0
Table 4: Conceptual comparison.

Method   Patch extraction  Classifier input         MIL assumption                         Interpretability                      Interactivity
ResNet   -                 Global embedding         -                                      Post-hoc                              -
ViT      Embedding         Bag Representation       -                                      Post-hoc                              -
Y-Net    -                 Global embedding         -                                      Decoder                               yes
DeepMIL  Input image       Bag Representation       Weighted collective                    Patch weights                         -
EMIL     Embedding         Instance Representation  Threshold-based + Weighted collective  Patch weights + Patch probabilities   yes
Acknowledgments

This project has received funding by the German Ministry of Research and Education (BMBF) in the projects SyReal (project number 01|S21069A) and KI-LAB-ITSE (project number 01|S19066).

Appendix E. Further visualizations

When correct, both models detect lesions with few false positive visualizations. One reason for misclassification is low attention weights. For example, consider the first row of Fig. 8, where the patch prediction heatmap weakly highlights both lesions, yet little attention is assigned to them. Nevertheless, a dentist may use these maps to detect caries and mark lesions so that the network can learn to locate them explicitly.
Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Alvarez Melis, D. and Jaakkola, T. Towards robust interpretability with self-explaining neural networks. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Amores, J. Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201:81-105, 2013.
Bayraktar, Y. and Ayan, E. Diagnosis of interproximal caries lesions with deep convolutional neural network in digital bitewing radiographs. Clinical Oral Investigations, pages 1-10, 2021.
Campanella, G., Werneck Krauss Silva, V., and Fuchs, T. J. Terabyte-scale deep multiple instance learning for classification and localization in pathology. arXiv preprint arXiv:1805.06983, 2018.
Cantu, A. G., Gehrung, S., Krois, J., Chaurasia, A., Gomez Rossi, J., Gaudin, R., Elhennawy, K., and Schwendicke, F. Detecting caries lesions of different radiographic extension on bitewings using deep learning. Journal of Dentistry, 100:103425, 2020.
Carbonneau, M.-A., Cheplygina, V., Granger, E., and Gagnon, G. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77:329-353, 2018.
Dietterich, T. G., Lathrop, R. H., and Lozano-Pérez, T. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1):31-71, 1997.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
Foulds, J. and Frank, E. A review of multi-instance learning assumptions. The Knowledge Engineering Review, 25(1):1-25, 2010.
Gianfrancesco, M. A., Tamang, S., Yazdany, J., and Schmajuk, G. Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178(11):1544-1547, 2018.
Han, Z., Wei, B., Hong, Y., Li, T., Cong, J., Zhu, X., Wei, H., and Zhang, W. Accurate screening of COVID-19 using attention-based deep 3D multiple instance learning. IEEE Transactions on Medical Imaging, 39(8):2584-2594, 2020.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. arXiv:1503.02531, NIPS 2014 Deep Learning Workshop, 2015.
Holzinger, A. Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Informatics, 3(2):119-131, 2016.
Ilse, M., Tomczak, J., and Welling, M. Attention-based deep multiple instance learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2127-2136. PMLR, 2018.
Kassebaum, N. J., Smith, A. G. C., Bernabé, E., Fleming, T. D., Reynolds, A. E., Vos, T., Murray, C. J. L., Marcenes, W., and GBD 2015 Oral Health Collaborators. Global, regional, and national prevalence, incidence, and disability-adjusted life years for oral conditions for 195 countries, 1990-2015: a systematic analysis for the global burden of diseases, injuries, and risk factors. Journal of Dental Research, 96(4):380-387, 2017.
Katharopoulos, A. and Fleuret, F. Processing megapixel images with deep attention-sampling models. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3282-3291. PMLR, 2019.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 2015.
Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., and Reblitz-Richardson, O. Captum: A unified and generic model interpretability library for PyTorch, 2020.
Kraus, O. Z., Ba, J. L., and Frey, B. J. Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics, 32(12):i52-i59, 2016.
Kumar, P. and Srivastava, M. M. Example mining for incremental learning in medical imaging. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI), pages 48-51. IEEE, 2018.
Lin, M., Chen, Q., and Yan, S. Network in network. CoRR, abs/1312.4400, 2014.
Luo, W., Li, Y., Urtasun, R., and Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16), pages 4905-4913. Curran Associates Inc., 2016.
Megalan Leo, L. and Kalpalatha Reddy, T. Dental caries classification system using deep learning based convolutional neural network. Journal of Computational and Theoretical Nanoscience, 17(9-10):4660-4665, 2020.
Mehta, S., Mercan, E., Bartlett, J., Weaver, D., Elmore, J. G., and Shapiro, L. Y-Net: joint segmentation and classification for diagnosis of breast biopsy images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 893-901. Springer, 2018.
Pawlowski, N., Bhooshan, S., Ballas, N., Ciompi, F., Glocker, B., and Drozdzal, M. Needles in haystacks: On classifying tiny objects in large images. 2020.
Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019.
Schwendicke, F., Tzschoppe, M., and Paris, S. Radiographic caries detection: a systematic review and meta-analysis. Journal of Dentistry, 43(8):924-933, 2015.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618-626, 2017.
Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
Srivastava, M. M., Kumar, P., Pradhan, L., and Varadarajan, S. Detection of tooth caries in bitewing radiographs using deep learning. arXiv preprint arXiv:1711.07312, 2017.
Sudharshan, P. J., Petitjean, C., Spanhol, F., Oliveira, L. E., Heutte, L., and Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Systems with Applications, 117:103-111, 2019.
Tripathi, P., Malathy, C., and Prabhakaran, M. Genetic algorithms-based approach for dental caries detection using back propagation neural network. International Journal of Recent Technology and Engineering, 8(1S2):316-319, 2019.
Viviano, J. D., Simpson, B., Dutil, F., Bengio, Y., and Cohen, J. P. Saliency is a possible red herring when diagnosing poor generalization. In International Conference on Learning Representations, 2021.
Wang, X., Yan, Y., Tang, P., Bai, X., and Liu, W. Revisiting multiple instance neural networks. Pattern Recognition, 74:15-24, 2018.
Wu, J., Yu, Y., Huang, C., and Yu, K. Deep multiple instance learning for image classification and auto-annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3460-3469, 2015.
Xu, Y., Mo, T., Feng, Q., Zhong, P., Lai, M., and Chang, E. I.-C. Deep learning of feature representation with multiple instance learning for medical image analysis. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1626-1630. IEEE, 2014.
Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818-833. Springer, 2014.
Zhang, Q. and Zhu, S.-C. Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering, 19:27-39, 2018.
Zhou, L., Zhao, Y., Yang, J., Yu, Q., and Xu, X. Deep multiple instance learning for automatic detection of diabetic retinopathy in retinal images. IET Image Processing, 12(4):563-571, 2018.
| [
"https://github.com/benbergner/emil."
] |
[
"LEARNING OPERATORS WITH COUPLED ATTENTION A PREPRINT",
"LEARNING OPERATORS WITH COUPLED ATTENTION A PREPRINT"
] | [
"Georgios Kissas \nDepartment of Mechanical Engineering and Applied Mechanics\nUniversity of Pennsylvania\n19104PhiladelphiaPA\n",
"Jacob Seidman \nGraduate Group in Applied Mathematics and Computational Science\nUniversity of Pennsylvania\n19104PhiladelphiaPA\n",
"Leonardo Ferreira Guilhoto \nGraduate Group in Applied Mathematics and Computational Science\nUniversity of Pennsylvania\n19104PhiladelphiaPA\n",
"Victor M Preciado \nDepartment of Electrical and Systems Engineering\nUniversity of Pennsylvania\n19104 January 5, 2022PhiladelphiaPA\n",
"George J Pappas \nDepartment of Electrical and Systems Engineering\nUniversity of Pennsylvania\n19104 January 5, 2022PhiladelphiaPA\n",
"Paris Perdikaris \nDepartment of Mechanical Engineering and Applied Mechanics\nUniversity of Pennsylvania\n19104PhiladelphiaPA\n"
] | [
"Department of Mechanical Engineering and Applied Mechanics\nUniversity of Pennsylvania\n19104PhiladelphiaPA",
"Graduate Group in Applied Mathematics and Computational Science\nUniversity of Pennsylvania\n19104PhiladelphiaPA",
"Graduate Group in Applied Mathematics and Computational Science\nUniversity of Pennsylvania\n19104PhiladelphiaPA",
"Department of Electrical and Systems Engineering\nUniversity of Pennsylvania\n19104 January 5, 2022PhiladelphiaPA",
"Department of Electrical and Systems Engineering\nUniversity of Pennsylvania\n19104 January 5, 2022PhiladelphiaPA",
"Department of Mechanical Engineering and Applied Mechanics\nUniversity of Pennsylvania\n19104PhiladelphiaPA"
] | [] | Supervised operator learning is an emerging machine learning paradigm with applications to modeling the evolution of spatio-temporal dynamical systems and approximating general black-box relationships between functional data. We propose a novel operator learning method, LOCA (Learning Operators with Coupled Attention), motivated from the recent success of the attention mechanism. In our architecture, the input functions are mapped to a finite set of features which are then averaged with attention weights that depend on the output query locations. By coupling these attention weights together with an integral transform, LOCA is able to explicitly learn correlations in the target output functions, enabling us to approximate nonlinear operators even when the number of output function in the training set measurements is very small. Our formulation is accompanied by rigorous approximation theoretic guarantees on the universal expressiveness of the proposed model. Empirically, we evaluate the performance of LOCA on several operator learning scenarios involving systems governed by ordinary and partial differential equations, as well as a black-box climate prediction problem. Through these scenarios we demonstrate state of the art accuracy, robustness with respect to noisy input data, and a consistently small spread of errors over testing data sets, even for out-of-distribution prediction tasks. | null | [
"https://arxiv.org/pdf/2201.01032v1.pdf"
] | 245,668,802 | 2201.01032 | d97d68f0d3e06c4ab34356a3a1655b2f71f9cfd6 |
LEARNING OPERATORS WITH COUPLED ATTENTION A PREPRINT
Georgios Kissas
Department of Mechanical Engineering and Applied Mechanics
University of Pennsylvania
19104PhiladelphiaPA
Jacob Seidman
Graduate Group in Applied Mathematics and Computational Science
University of Pennsylvania
19104PhiladelphiaPA
Leonardo Ferreira Guilhoto
Graduate Group in Applied Mathematics and Computational Science
University of Pennsylvania
19104PhiladelphiaPA
Victor M Preciado
Department of Electrical and Systems Engineering
University of Pennsylvania
19104 January 5, 2022PhiladelphiaPA
George J Pappas
Department of Electrical and Systems Engineering
University of Pennsylvania
19104 January 5, 2022PhiladelphiaPA
Paris Perdikaris
Department of Mechanical Engineering and Applied Mechanics
University of Pennsylvania
19104PhiladelphiaPA
Keywords: Deep Learning · Reproducing Kernel Hilbert Spaces · Wavelet Scattering Network · Functional Data Analysis · Universal Approximation
Supervised operator learning is an emerging machine learning paradigm with applications to modeling the evolution of spatio-temporal dynamical systems and approximating general black-box relationships between functional data. We propose a novel operator learning method, LOCA (Learning Operators with Coupled Attention), motivated from the recent success of the attention mechanism. In our architecture, the input functions are mapped to a finite set of features which are then averaged with attention weights that depend on the output query locations. By coupling these attention weights together with an integral transform, LOCA is able to explicitly learn correlations in the target output functions, enabling us to approximate nonlinear operators even when the number of output function in the training set measurements is very small. Our formulation is accompanied by rigorous approximation theoretic guarantees on the universal expressiveness of the proposed model. Empirically, we evaluate the performance of LOCA on several operator learning scenarios involving systems governed by ordinary and partial differential equations, as well as a black-box climate prediction problem. Through these scenarios we demonstrate state of the art accuracy, robustness with respect to noisy input data, and a consistently small spread of errors over testing data sets, even for out-of-distribution prediction tasks.
Introduction
The great success of modern deep learning lies in its ability to approximate maps between finite-dimensional vector spaces, as in computer vision [1], natural language processing [2], precision medicine [3], bio-engineering [4], and other data driven applications. A particularly successful class of such models are those built with the attention mechanism [5]. For example, the Transformer is an attention-based architecture that has recently produced state of the art performance in natural language processing [2], computer vision [6,7], and audio signal analysis [8,9].
Another active area of research is applying machine learning techniques to approximate operators between spaces of functions. These methods are particularly attractive for many problems in computational physics and engineering where the goal is to learn the functional response of a system from a functional input, such as an initial/boundary condition or forcing term. In the context of learning the response of systems governed by differential equations, these learned models can function as fast surrogates of traditional numerical solvers.
For example, in climate modelling one might wish to predict the pressure field over the earth from measurements of the surface air temperature field. The goal is then to learn an operator, F, between the space of temperature functions and the space of pressure functions (see Figure 1). An initial attempt at solving this problem might be to take a regular grid of measurements over the earth for the input and output fields and formulate the problem as a (finite-dimensional) image-to-image regression task. While architectures such as convolutional neural networks may perform well in this setting, the approach can be somewhat limited. For instance, if we desired the value of the output at a query location outside of the training grid, an entirely new model would need to be built and tuned from scratch. This is a consequence of choosing to discretize the regression problem before building a model to solve it. If instead we formulate the problem and model at the level of the (infinite-dimensional) input and output function spaces, and then make a choice of discretization, we can obtain methods that are more flexible with respect to the locations of the point-wise measurements.

Figure 1: An example sketch of operator learning for climate modeling: By solving an operator learning problem, we can approximate an infinite-dimensional map between two functions of interest, and then predict one function using the other. For example, by providing the model with an input function, e.g. a surface air temperature field, we can predict an output function, e.g. the corresponding surface air pressure field.
Formulating models with functional data is the topic of Functional Data Analysis (FDA) [10,11], where parametric, semi-parametric or non-parametric methods operate on functions in infinite-dimensional vector spaces. A useful class of non-parametric approaches are Operator-Valued Kernel methods. These methods generalize the use of scalar-valued kernels for learning functions in a Reproducing Kernel Hilbert Space (RKHS) [12] to RKHSs of operators. Kernel methods have been thoroughly studied in the past [13,14] and have been successfully applied in nonlinear and high-dimensional problem settings [15,16]. Previous work has successfully extended this framework to learning operators between more general vector spaces as well [17,18,19,20,21]. This framework is particularly powerful as the inputs can be continuous or discrete, and the underlying vector spaces are typically only required to be normed and separable.
A parametric approach to operator learning was introduced by Chen et al. [22], who proposed a method for learning non-linear operators based on a one-layer feed-forward neural network architecture. Moreover, the authors presented a universal approximation theorem which ensures that their architecture can approximate any continuous operator with arbitrary accuracy. Lu et al. [23] gave an extension of this architecture, called DeepONet, built with multi-layer feed-forward neural networks, and demonstrated its effectiveness in approximating the solution operators of various differential equations. In follow-up work, error estimates were derived for some specific problem scenarios [24], and several applications have been pursued [25,26,27]. An extension of the DeepONet was proposed by Wang et al. [28,29,30], where a regularization term is added to the loss function to enforce known physical constraints, enabling one to predict solutions of parametric differential equations even in the absence of paired input-output training data.
Another parametric approach to operator learning is the Graph Neural Operator proposed by Li et al. [31], motivated by the solution form of linear partial differential equations (PDEs) and their Green's functions. As an extension of this work, the authors also proposed a Graph Neural Operator architecture where a multipole method is used to sample the spatial grid [32], allowing the kernel to learn in a non-local manner. In later published work, this framework has been extended to the case where the integral kernel is stationary, enabling one to efficiently compute the integral operator in the Fourier domain [33].
Both the Fourier Neural Operator and the DeepONet come with theoretical guarantees of universal approximation, meaning that under some assumptions these classes of models can approximate any continuous operator to arbitrary accuracy. Other parametric models include a deep learning approach for directly approximating the Green's function of differential equations [34], a multi-wavelet approach for learning projections of an integral kernel operator to approximate the true operator, and a random feature approach for learning the solution map of PDEs [35], but no theoretical guarantees of the approximation power of these approaches have been presented.
While some of the previously described operator learning methods can be seen as generalizations of deep learning architectures such as feed-forward and convolutional neural networks, in this paper we are motivated by the success of the attention mechanism to propose a new operator learning framework. Specifically, we draw inspiration from the Bahdanau attention mechanism [5], which first constructs a feature representation of the input and then averages these features with a distribution that depends on the argument of the output function to obtain its value. We will also use the connection between the attention mechanism and kernel methods [36] to couple these distributions together in what we call a Kernel-Coupled Attention mechanism. This will allow our framework to explicitly model correlations within the output functions of the operator. Moreover, we prove that under certain assumptions the model satisfies a universal approximation property.
The main contributions of this work can be summarized in the following points:
• Novel Architecture: We propose an operator learning framework inspired by the attention mechanism, operator approximation theory, and the Reproducing Kernel Hilbert Space (RKHS) literature. To this end, we introduce a novel Kernel-Coupled Attention mechanism to explicitly model correlations between the output functions' query locations.
• Theoretical Guarantees: We prove that the proposed framework satisfies a universal approximation property; that is, it can approximate any continuous operator with arbitrary accuracy.
• Data Efficiency: By modelling correlations between output queries, our model can achieve high performance when trained with only a small fraction (6-12%) of the total available labeled data compared to competing methods.
• Robustness: Compared to existing methods, our model demonstrates superior robustness with respect to noise corruption in the training and testing inputs, as well as randomness in the model initialization. Our model's performance is stable in that the errors on the test data set are consistently concentrated around the median with significantly fewer outliers compared to other methods.
• Generalization: On a real data set of Earth surface air temperature and pressure measurements, our model is able to learn the functional relation between the two fields with high accuracy and extrapolate beyond the training data. On synthetic data we demonstrate that our model is able to generalize better than competing methods over increasingly out-of-distribution examples.
The paper is structured as follows. In Section 2 we introduce the supervised operator learning problem. In Section 3 we introduce the general form of the model and, in the following subsections, present the construction of its different components. In Section 4 we prove theoretical results on the approximation power of this class of models. In Section 5 we present the specific architecture choices made for implementing our method in practice. Section 6 discusses the similarities and differences of our model with related operator learning approaches. In Section 7 we demonstrate the performance of the proposed methodology across different benchmarks in comparison to other state-of-the-art methods.
In Section 8, we discuss our main findings, outline potential drawbacks of the proposed method, and highlight future directions emerging from this study.
Problem Formulation
We now provide a formal definition of the operator learning problem. Given X ⊂ R^{d_x} and Y ⊂ R^{d_y}, we will refer to a point x ∈ X as an input location and a point y ∈ Y as a query location. Denote by C(X, R^{d_u}) and C(Y, R^{d_s}) the spaces of continuous functions from X → R^{d_u} and Y → R^{d_s}, respectively. We will refer to C(X, R^{d_u}) as the space of input functions and C(Y, R^{d_s}) as the space of output functions. For example, in Figure 1, if we aim to learn the correspondence between a temperature field over the earth and the corresponding pressure field, u ∈ C(X, R) would represent the temperature field and s ∈ C(Y, R) would be a pressure field, where X = Y represents the surface of the earth. With a data set of input/output function pairs, we formulate the supervised operator learning problem as follows.

Problem 1. Given N pairs of input and output functions {u_ℓ(x), s_ℓ(y)}_{ℓ=1}^N generated by some possibly unknown ground truth operator G : C(X, R^{d_u}) → C(Y, R^{d_s}) with u_ℓ ∈ C(X, R^{d_u}) and s_ℓ ∈ C(Y, R^{d_s}), learn an operator F : C(X, R^{d_u}) → C(Y, R^{d_s}) such that

F(u_ℓ) = s_ℓ, for ℓ = 1, . . . , N.
This problem also encompasses scenarios where more structure is known about the input/output functional relation. For example, u could represent the initial condition of a partial differential equation and s the corresponding solution. In this case, G would correspond to the true solution operator and F would be an approximate surrogate model. Similarly, u could represent a forcing term in a dynamical system described by an ordinary differential equation, and s the resulting integrated trajectory. In these two scenarios there exists a suite of alternative methods to obtain the solution function s from the input u, but with an appropriate choice of architecture for F the approximate model can result in significant computational speedups and the ability to efficiently compute sensitivities with respect to the inputs using tools like automatic differentiation.
Note that while the domains X and Y need not be discrete sets, in practice we may only have access to the functions u and s evaluated at finitely many locations. However, we take the perspective that it is beneficial to formulate the model with continuously sampled input data, and consider the consequences of discretization at implementation time. As we shall see, this approach will allow us to construct a model that is able to learn operators over multiple output resolutions simultaneously.
Proposed Model: Learning Operators with Coupled Attention (LOCA)
We will construct our model through the following two steps. Inspired by the attention mechanism [5], we will first define a class of models where the input functions u are lifted to a feature vector v(u) ∈ R^{n×d_s}. Each output location y ∈ Y will define d_s probability distributions ϕ(y) ∈ ∏_{i=1}^{d_s} Δ_n, where Δ_n is the n-simplex. The forward pass of the model is then computed by averaging the rows of v(u) over the probability distributions ϕ(y).

Next, we augment this model by coupling the probability distributions ϕ(y) across different query points y ∈ Y. This is done by acting on a proposal score function g : Y → R^{n×d_s} with a kernel integral operator. The form of the kernel determines the similarities between the resulting distributions. We empirically demonstrate that the coupled version of our model is more accurate than the uncoupled version when the number of output function evaluations per example is small.
The Attention Mechanism
The attention mechanism was first formulated in Bahdanau et al. [5] for use in language translation. The goal of their work was to translate an input sentence in a given language, {u_1, . . . , u_{T_u}}, to a sentence in another language, {s_1, . . . , s_{T_s}}. A context vector c_i was associated to each index of the output sentence, i ∈ {1, . . . , T_s}, and used to construct a probability distribution of the i-th word in the translated sentence, s_i. The attention mechanism is a way to construct these context vectors by averaging over features associated with the input in a way that depends on the output index i.

More concretely, the input sentence is first mapped to a collection of features {v_1, . . . , v_{T_u}}. Next, depending on the input sentence and the location/index i in the output (translated) sentence, a discrete probability distribution {ϕ_{i1}, . . . , ϕ_{iT_u}} is formed over the input indices such that

ϕ_{ij} ≥ 0,  Σ_{j=1}^{T_u} ϕ_{ij} = 1.

The context vector at index i is then computed as

c_i = Σ_{j=1}^{T_u} ϕ_{ij} v_j.
If the words in the input sentence are represented by vectors in R^d, and the associated features and context vector are in R^l, the attention mechanism can be represented by the following (flattened) diagram:

Attn : [T_s] × R^{T_u×d} --(ϕ, v)--> Δ_{T_u} × R^{T_u×l} --E--> R^l.
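A small NumPy sketch of this averaging step, with the score network replaced by random stand-ins (names are ours): given input features v_1, . . . , v_{T_u} and an output-position-dependent distribution ϕ_i, the context vector c_i is the ϕ_i-weighted average of the features.

```python
import numpy as np

T_u, d_feat = 6, 4
rng = np.random.default_rng(0)
v = rng.normal(size=(T_u, d_feat))        # features of the input sequence

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Scores for output position i (in Bahdanau attention these come from a small
# network over the decoder state and each v_j; here they are random stand-ins).
scores_i = rng.normal(size=T_u)
phi_i = softmax(scores_i)                 # a distribution over input indices
c_i = phi_i @ v                           # context vector: weighted feature average
print(phi_i.sum(), c_i.shape)             # 1.0, (4,)
```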
We will apply this attention mechanism to learn operators between function spaces by mapping an input function u to a finite set of features v(u) ∈ R^{n×d_s}, and taking an average over these features with respect to d_s distributions ϕ(y) ∈ ∏_{k=1}^{d_s} Δ_n that depend on the query location y ∈ Y for the output function. That is,

F(u)(y) := E_{ϕ(y)}[v(u)],

where v(u) ∈ R^{n×d_s}, ϕ is a function from y ∈ Y to d_s copies of the n-dimensional simplex Δ_n, and E : ∏_{k=1}^{d_s} Δ_n × R^{n×d_s} → R^{d_s} is an expectation operator that takes (ϕ, v) → Σ_i ϕ_i ⊙ v_i, where ⊙ denotes an element-wise product. This can be represented by the following (flattened) diagram:

F : Y × C(X, R^{d_u}) --(ϕ, v)--> ∏_{k=1}^{d_s} Δ_n × R^{n×d_s} --E--> R^{d_s}.
In the next section, we will construct the function ϕ and provide a mechanism for enabling the coupling of its values across varying query locations y ∈ Y. Later on, we will see that this allows the model to perform well even when trained on small numbers of output function measurements per input function.
Kernel-Coupled Attention Weights
In order to model correlations among the points of the output function, we couple the probability distributions ϕ(y) across the different query locations y ∈ Y. We first consider a proposal score function g : Y → R^{n×d_s}. If we were to compose this function with a map into d_s copies of the probability simplex Δ_n, such as the softmax function σ : R^n → Δ_n applied to the rows of g(y), we would obtain the probability distributions ϕ(y) = σ(g(y)).

The disadvantage of this formulation is that it relies solely on the form of the function g to capture relations between the distributions ϕ(y) across different y ∈ Y. Instead, we introduce the Kernel-Coupled Attention (KCA) mechanism to model these relations by integrating the function g against a coupling kernel κ : Y × Y → R. This results in the score function

g̃(y) = ∫_Y κ(y, y′) g(y′) dy′,   (1)

which can be normalized across its rows to form the probability distributions

ϕ(y) = σ( ∫_Y κ(y, y′) g(y′) dy′ ).   (2)

The form of the kernel κ determines how these distributions are coupled across y ∈ Y. For example, given a fixed y, the locations y′ where κ(y, y′) is large will enforce similarity between the corresponding score functions g̃(y) and g̃(y′). If κ is a local kernel with a small bandwidth, then points y and y′ will only be forced to have similar score functions if they are very close together.
Formulation of the Coupling Kernel
In this section we construct the coupling kernel κ that will be used to relate the query distributions as in (2). We first lift the points y ∈ Y via a nonlinear parameterized mapping q_θ : Y → R^l. We then apply a universal kernel k : R^l × R^l → R [37] over the lifted space, such as the Gaussian RBF kernel

k(z, z′) = γ exp(−β ‖z − z′‖²),  γ, β > 0.   (3)

Finally, we apply a normalization to the output of this kernel on the lifted points to create a similarity measure. The effect of the normalization is to maintain the relative scale of the proposal score function g. Overall, our kernel is defined as

κ(y, y′) := k(q_θ(y), q_θ(y′)) / ( [∫_Y k(q_θ(y), q_θ(z)) dz]^{1/2} [∫_Y k(q_θ(y′), q_θ(z)) dz]^{1/2} ).   (4)
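The following NumPy sketch assembles Eqs. (1)-(4) on a finite set of query points, with the integrals over Y replaced by Monte Carlo averages and the learned lifting q_θ replaced by a random stand-in; it sketches the computation, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
P, d_y, l, n = 32, 1, 8, 5            # query points, query dim, lifted dim, features
y = rng.uniform(size=(P, d_y))        # quadrature nodes over Y (uniform weights)
W = rng.normal(size=(d_y, l))         # stand-in for the learned lifting q_theta
g = rng.normal(size=(P, n))           # proposal scores g(y) at the nodes

gamma, beta = 1.0, 10.0
z = np.tanh(y @ W)                                       # q_theta(y), illustrative
sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
k = gamma * np.exp(-beta * sq)                           # RBF kernel, Eq. (3)

# Normalization of Eq. (4), with integrals replaced by Monte Carlo averages over y.
row_int = k.mean(axis=1)                                 # ~ integral of k(q(y), q(z)) dz
kappa = k / np.sqrt(row_int[:, None] * row_int[None, :])

# Kernel-coupled scores, Eq. (1): g_tilde(y) = integral of kappa(y, y') g(y') dy'.
g_tilde = (kappa @ g) / P

# Row-wise softmax gives the attention weights phi(y) of Eq. (2).
phi = np.exp(g_tilde - g_tilde.max(axis=1, keepdims=True))
phi /= phi.sum(axis=1, keepdims=True)
print(phi.shape, phi.sum(axis=1)[:3])                    # (32, 5), rows sum to 1
```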
By tuning the parameters θ, β and γ in the functions q_θ and k, the kernel κ is able to learn the appropriate measures of similarity between the points in the output function domain Y.

Figure 2: The LOCA method builds a feature representation v(u) of the input function and averages it with respect to ϕ(y). The transform D is first applied to the input function to produce a list of features, illustrated by disks in this case, and then a fully-connected network is applied to construct v(u). The score function g is applied to the output query locations y_i together with the softmax function to produce the score vector ϕ_i. The v_i and ϕ_i vectors are combined to evaluate the solution at each query location by computing E_{ϕ(y)}[v(u)] at the last step.
Input Function Feature Encoding
The last architecture choice to be made concerns the functional form of the feature embedding v(u). Here, we construct the map v as a composition of two mappings. The first is a function
D : C(X , R du ) → R d ,(5)
that maps an input function u to a finite-dimensional vector $D(u) \in \mathbb{R}^d$. After creating the d-dimensional representation of the input function D(u), we pass this vector through a function f from a class of universal function approximators, such as fully connected neural networks. The composition of these two operations forms our feature representation of the input function,
$$v(u) = f \circ D(u). \tag{6}$$
One example for the operator D is the image of the input function under d linear functionals on $C(\mathcal{X}; \mathbb{R}^{d_u})$. For example, D could return the point-wise evaluation of the input function at d fixed points. This would correspond to the action of d translated δ-functionals. The drawback of such an approach is that the model would not be able to accept measurements of the input function at any other locations. As a consequence, the input resolution could never vary across forward passes of the model.
Alternatively, if we consider an orthonormal basis for $L^2(\mathcal{X}; \mathbb{R}^{d_u})$, we could also have D be the projection onto the first d basis vectors. For example, if we use the basis of trigonometric polynomials, the Fast Fourier Transform (FFT) [38] allows for efficient computation of these values and can be performed across varying grid resolutions. We could also consider the projection onto an orthogonal wavelet basis [39]. In the case of complex-valued coefficients for these basis functions, the range space dimension of D would be doubled to account for the real and imaginary parts of these measurements.
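As a concrete illustration of such a spectral encoder, the sketch below implements D as the projection onto the first d Fourier modes of a one-dimensional input sampled on a uniform grid. The 1/len(u) scaling is one possible convention (an assumption, not necessarily the one used in the experiments) that makes the features comparable across grid resolutions.

```python
import numpy as np

def spectral_encoder(u_samples, d):
    # D(u): project a uniformly sampled input function onto its first d
    # Fourier modes; real and imaginary parts are stacked, so the encoding
    # has dimension 2d regardless of the sampling resolution.
    coeffs = np.fft.rfft(u_samples) / len(u_samples)  # resolution-independent scaling
    return np.concatenate([coeffs[:d].real, coeffs[:d].imag])

# The same function encoded from grids of different resolutions
# yields (numerically) the same features:
u = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
x_coarse = np.linspace(0.0, 1.0, 128, endpoint=False)
x_fine = np.linspace(0.0, 1.0, 512, endpoint=False)
print(np.allclose(spectral_encoder(u(x_coarse), 8),
                  spectral_encoder(u(x_fine), 8), atol=1e-9))  # True
```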
Model Summary
Overall, the forward pass of the proposed model is written as follows; see Figure 2 for a visual representation.
$$F(u)(y) = \mathbb{E}_{\phi(y)}[v(u)] = \sum_{i=1}^{n} \sigma\left( \int_{\mathcal{Y}} \kappa(y, y')\, g(y')\, dy' \right)_i v_i(u), \tag{7}$$
where $\kappa : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ is the kernel of equation (1), σ is the softmax function, v is the input feature encoder, and g is the proposal score function. Practical aspects related to the parametrization of κ, v and g, as well as the model evaluation and training, will be discussed in Section 5.
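The following sketch evaluates equation (7) for a single input function, assuming the feature encoding v(u), the proposal scores g at the integration nodes, and the coupling kernel values have already been computed (for instance with the kernel sketch above); all shapes and names are illustrative.

```python
import numpy as np

def softmax(a, axis):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def loca_forward(v_u, g_z, kappa, n, ds):
    # Equation (7) at P query points for one input function.
    #   v_u   : (n, ds)    feature encoding v(u)
    #   g_z   : (Q, n*ds)  proposal scores g at the Q integration nodes
    #   kappa : (P, Q)     coupling kernel between queries and nodes
    g_tilde = (kappa @ g_z) / g_z.shape[0]             # KCA of eq. (1), Monte-Carlo
    phi = softmax(g_tilde.reshape(-1, n, ds), axis=1)  # rows of phi(y) on the simplex
    return np.einsum('pnd,nd->pd', phi, v_u)           # E_{phi(y)}[v(u)], shape (P, ds)

# Toy shapes: P = 32 queries, Q = 64 nodes, n = 16 latent features, ds = 1.
rng = np.random.default_rng(1)
out = loca_forward(rng.normal(size=(16, 1)), rng.normal(size=(64, 16)),
                   rng.uniform(size=(32, 64)), n=16, ds=1)
print(out.shape)  # (32, 1)
```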
In the next sections, we will perform analysis on this model. We will show that under certain architecture choices other models in the literature can be recovered and theoretical guarantees of universal approximation can be proven.
Theoretical Guarantees of Universality
In this section we give conditions under which the LOCA model is universal. There are multiple definitions of universality in the literature; see, for example, Sriperumbudur et. al. [40]. To be clear, we formally state the definition we use below.
Definition 4.1 (Universality). Given compact sets $\mathcal{X} \subset \mathbb{R}^{d_x}$, $\mathcal{Y} \subset \mathbb{R}^{d_y}$ and a compact set $\mathcal{U} \subset C(\mathcal{X}, \mathbb{R}^{d_u})$, we say a class $\mathcal{A}$ of operators $F : C(\mathcal{X}, \mathbb{R}^{d_u}) \to C(\mathcal{Y}, \mathbb{R}^{d_s})$ is universal if it is dense in the space of operators equipped with the supremum norm. In other words, for any continuous operator $G : C(\mathcal{X}, \mathbb{R}^{d_u}) \to C(\mathcal{Y}, \mathbb{R}^{d_s})$ and any $\epsilon > 0$, there exists $F \in \mathcal{A}$ such that
$$\sup_{u \in \mathcal{U}} \sup_{y \in \mathcal{Y}} \left\| G(u)(y) - F(u)(y) \right\|^2_{\mathbb{R}^{d_s}} < \epsilon.$$
To explore the universality properties of our model, we note that if we remove the softmax normalization and the kernel coupling, the evaluation of the model can be written as
$$F(u)(y) = \sum_{i=1}^{n} g_i(y)\, v_i(u).$$
The universality of this class of models has been proven in Chen et. al. [22] (when $d_s = 1$) and extended to deep architectures in Lu et. al. [23]. We will show that our model with the softmax normalization and kernel coupling is universal by adding these components back one at a time. First, the following theorem shows that the normalization constraint $\phi(y) \in \prod_{k=1}^{d_s} \Delta^n$ does not reduce the approximation power of this class of operators.

Theorem 4.1 (Normalization Preserves Universality). If $\mathcal{U} \subset C(\mathcal{X}, \mathbb{R}^{d_u})$ is a compact set of functions and $G : \mathcal{U} \to C(\mathcal{Y}, \mathbb{R}^{d_s})$ is a continuous operator with $\mathcal{X}$ and $\mathcal{Y}$ compact, then for every $\epsilon > 0$ there exist $n \in \mathbb{N}$, functionals $v_{j,k} : \mathcal{U} \to \mathbb{R}$, for $j \in [n]$, $k \in [d_s]$, and functions $\phi_j : \mathcal{Y} \to \mathbb{R}^{d_s}$ with $\phi_j(y) \in [0, 1]^{d_s}$ and $\sum_{j=1}^{n} \phi_j(y) = 1_{d_s}$ for all $y \in \mathcal{Y}$ such that
$$\sup_{u \in \mathcal{U}} \sup_{y \in \mathcal{Y}} \left\| G(u)(y) - \mathbb{E}_{\phi(y)}[v(u)] \right\|^2_{\mathbb{R}^{d_s}} < \epsilon.$$
Proof. The proof is given in Appendix B.
It remains to show that the addition of the kernel coupling step for the functions ϕ also does not reduce the approximation power of this class of operators. By drawing a connection to the theory of Reproducing Kernel Hilbert Spaces (RKHS), we are able to state sufficient conditions for this to be the case. The key insight is that, under appropriate conditions on the kernel κ, the image of the integral operator in (1) is dense in an RKHS $\mathcal{H}_\kappa$ which itself is dense in $C(\mathcal{Y}, \mathbb{R}^n)$. This allows (2) to approximate any continuous function $\phi : \mathcal{Y} \to \prod_{i=1}^{d_s} \Delta^n$ and thus maintains the universality guarantee of Theorem 4.1.

Proposition 4.1 (Kernel Coupling Preserves Universality). Let $\kappa : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ be a symmetric, positive definite, and universal kernel, and define the integral operator
$$T_\kappa : C(\mathcal{Y}, \mathbb{R}^n) \to C(\mathcal{Y}, \mathbb{R}^n), \quad f \mapsto \int_{\mathcal{Y}} \kappa(y, z)\, f(z)\, dz.$$
If $A \subseteq C(\mathcal{Y}, \mathbb{R}^n)$ is dense, then $T_\kappa(A) \subset C(\mathcal{Y}, \mathbb{R}^n)$ is also dense. The proof is given in Appendix C.
The statement of Proposition 4.1 requires that the kernel κ be symmetric, positive definite, and universal. We next show that by construction it will always be symmetric and positive definite, and under an assumption on the feature map q it will additionally be universal.

Proposition 4.2 (Properties of the Coupling Kernel). The kernel κ defined in (4) is symmetric and positive semi-definite. If, in addition, the map $q_\theta$ is chosen so that the overall feature map of κ is injective, then κ is universal.

Proof. The proof is provided in Appendix D.
Lastly, we present a result showing that a particular architecture choice for the input feature encoder v also preserves universality. We show that if there is uniform convergence of spectral representations of the input, projections onto these representations can be used to construct a universal class of functionals on $C(\mathcal{X}, \mathbb{R}^{d_u})$.
Proposition 4.3 (Spectral Encoding Preserves Universality). Let $A_d \subset C(\mathbb{R}^d, \mathbb{R}^n)$ be a set of functions dense in $C(\mathbb{R}^d, \mathbb{R}^n)$, and $\{e_i\}_{i=1}^{\infty}$ a set of basis functions such that for some compact set $\mathcal{U} \subseteq C(\mathcal{X}, \mathbb{R}^{d_u})$, $\sum_{i=1}^{\infty} \langle u, e_i \rangle_{L^2}\, e_i$ converges to u uniformly over $\mathcal{U}$. Let $D_d : \mathcal{U} \to \mathbb{R}^d$ denote the projection onto $\{e_1, \ldots, e_d\}$. Then for any continuous functional $h : \mathcal{U} \to \mathbb{R}^n$ and any $\epsilon > 0$, there exist d and $f \in A_d$ such that
$$\sup_{u \in \mathcal{U}} \left\| h(u) - f \circ D_d(u) \right\| < \epsilon.$$
Proof. The proof is provided in Appendix E.
For example, if our compact space of input functions $\mathcal{U}$ is contained in $C^1(\mathcal{X}, \mathbb{R}^{d_u})$, and D is a projection onto a finite number of Fourier modes, the architecture proposed in equation (6) is expressive enough to approximate any functional from $\mathcal{U} \to \mathbb{R}$, including those produced by the universality result stated in Theorem 4.1.
Implementation Aspects
To implement our method, it remains to make a choice of discretization for computing the integrals required for updating the KCA weights ϕ(y), as well as a choice for the input function feature encoding v(u). Here we address these architecture choices, and provide an overview of the proposed model's forward evaluation.
Computation of the Kernel Integrals
To compute the kernel-coupled attention weights ϕ(y), we are required to evaluate integrals over the domain $\mathcal{Y}$ in (1) and (4). Adopting an unbiased Monte-Carlo estimator using P points $y_1, \ldots, y_P \in \mathcal{Y}$, we can use the approximations
$$\int_{\mathcal{Y}} \kappa(y, y')\, g(y')\, dy' \approx \frac{\mathrm{vol}(\mathcal{Y})}{P} \sum_{i=1}^{P} \kappa(y, y_i)\, g(y_i),$$
for equation (1), and
$$\int_{\mathcal{Y}} k(q(y), q(z))\, dz \approx \frac{\mathrm{vol}(\mathcal{Y})}{P} \sum_{i=1}^{P} k(q(y), q(y_i)),$$
for use in equation (4). Note that due to the normalization in κ, the $\mathrm{vol}(\mathcal{Y})$ term cancels out. In practice, we allow the query point y to be one of the points $y_1, \ldots, y_P$ used for the Monte-Carlo approximation.
When the domain $\mathcal{Y}$ is low dimensional, as in many physical problems, a Gauss-Legendre quadrature rule with weights $w_i$ can provide an accurate and efficient alternative to Monte-Carlo approximation. Using Q Gauss-Legendre nodes and weights, we can approximate the required integrals as
$$\int_{\mathcal{Y}} \kappa(y, y')\, g(y')\, dy' \approx \sum_{i=1}^{Q} w_i\, \kappa(y, y_i)\, g(y_i),$$
for equation (1), and
$$\int_{\mathcal{Y}} k(q(y), q(z))\, dz \approx \sum_{i=1}^{Q} w_i\, k(q(y), q(z_i)),$$
for use in equation (4).
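A minimal sketch of the quadrature route for a one-dimensional query domain Y = [0, 1], using NumPy's Gauss-Legendre rule, is given below; kappa_fn and g_fn are placeholder callables standing in for the trained kernel and score function.

```python
import numpy as np

def gauss_legendre_nodes(Q, a=0.0, b=1.0):
    # Q-node Gauss-Legendre nodes/weights rescaled from [-1, 1] to [a, b].
    x, w = np.polynomial.legendre.leggauss(Q)
    return 0.5 * (b - a) * (x + 1.0) + a, 0.5 * (b - a) * w

def kca_quadrature(kappa_fn, g_fn, y, Q=32):
    # Approximate g_tilde(y) = int_Y kappa(y, y') g(y') dy' on Y = [0, 1].
    nodes, weights = gauss_legendre_nodes(Q)
    return sum(w * kappa_fn(y, yq) * g_fn(yq) for yq, w in zip(nodes, weights))

# Sanity check with kappa = 1 and g(y') = y'^2: the exact integral is 1/3.
print(kca_quadrature(lambda y, yq: 1.0, lambda yq: yq**2, y=0.5))  # ~0.333333
```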
If we restrict the kernel κ to be translation invariant, there is another option for computing these integrals. As in Li et. al. [33], we could take the Fourier transform of both κ and g, perform a point-wise multiplication in the frequency domain, followed by an inverse Fourier transform. However, while in theory the discrete Fourier transformation could be performed on arbitrarily spaced grids, the most available and computationally efficient implementations rely on equally spaced grids. We prefer to retain the flexibility of arbitrary sets of query points y and will therefore not pursue this alternate approach. In Section 7, we will switch between the Monte-Carlo and quadrature strategies depending on the problem at hand.
Positional Encoding of Output Query Locations
We additionally adopt the use of positional encodings, as they have been shown to improve the performance of attention mechanisms. For encoding the output query locations, we are motivated by the positional encoding in Vaswani et. al. [2], the harmonic feature expansion in Di et. al. [26], and the work of Wang et. al. [41] for implementing the encoding to more than one dimensions. The positional encoding for a one dimensional query space is given by
$$e(y_i,\, 2j + (i-1)H) = \cos(2^j \pi y_i), \qquad e(y_i,\, 2j + 1 + (i-1)H) = \sin(2^j \pi y_i), \tag{8}$$
where H is the number of encoding coefficients, $j = 1, \ldots, H/2$, $y_i$ are the query coordinates in the different spatial dimensions, and $i = 1, \ldots, d_y$. In contrast to Vaswani et. al. [2], we consider the physical position of the elements of the set y as the position to encode instead of their index position in a given list, as the index position in general does not have a physically meaningful interpretation.
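A direct NumPy transcription of equation (8) might look as follows; the reshaping convention for stacking the $d_y$ coordinate encodings is an assumption.

```python
import numpy as np

def positional_encoding(y, H):
    # Equation (8): each coordinate y_i of a query is expanded into the H
    # features cos(2^j pi y_i), sin(2^j pi y_i) for j = 1, ..., H/2, so a
    # d_y-dimensional query becomes a (d_y * H)-dimensional vector.
    y = np.atleast_2d(y)                                   # (P, d_y)
    freqs = 2.0 ** np.arange(1, H // 2 + 1)                # 2^1, ..., 2^{H/2}
    angles = freqs[None, None, :] * np.pi * y[:, :, None]  # (P, d_y, H/2)
    enc = np.concatenate([np.cos(angles), np.sin(angles)], axis=-1)
    return enc.reshape(y.shape[0], -1)                     # (P, d_y * H)

queries = np.array([[0.25, 0.75], [0.5, 0.5]])  # two 2-D query locations
print(positional_encoding(queries, H=8).shape)  # (2, 16)
```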
Wavelet Scattering Networks as a Spectral Encoder
While projections onto an orthogonal basis allows us to derive a universality guarantee for the architecture, there can be some computational drawbacks. For example, it is known that the Fourier transform is not always robust to small deformations of the input [42]. More worrisome is the lack of robustness to noise corrupting the input function. In real world applications it will often be the case that our inputs are noisy, hence, in practice we are motivated to find an operator D with stronger continuity with respect to these small perturbations.
To address the aforementioned issues, we make use of the scattering transform [43] as an alternate form for the operator D. The scattering transform maps an input function to a sequence of values by alternating wavelet convolutions and complex modulus operations [43]. To be precise, given a mother wavelet ψ and a finite discrete rotation group G, we denote the wavelet filter with parameter $\lambda = (r, j) \in G \times \mathbb{Z}$ as
$$\psi_\lambda(x) = 2^{d_x j}\, \psi(2^j r^{-1} x).$$
Given a path of parameters $p = (\lambda_1, \ldots, \lambda_m)$, the scattering transform is defined by the operator
$$S[p]u = \left|\left|\left| u * \psi_{\lambda_1} \right| * \psi_{\lambda_2} \right| \cdots * \psi_{\lambda_m} \right| * \phi(x), \tag{9}$$
where φ(x) is a low-pass filter. We allow the empty path ∅ as a valid argument of S, with $S[\emptyset]u = u * \phi$. As shown in Bruna et. al. [43], this transform is Lipschitz continuous with respect to small deformations, while the modulus of the Fourier transform is not. This transform can be interpreted as a deep convolutional network with fixed filters and has been successfully applied in multiple machine learning contexts [44, 45]. Computationally, the transform returns functions of the form (9) sampled at points in their domain, which we denote by $\hat{S}[p](u)$.
By choosing d paths $p_1, \ldots, p_d$, we may define the operator D as
$$D(u) = \left( \hat{S}[p_1](u), \ldots, \hat{S}[p_d](u) \right).$$
In practice, the number of paths used is determined by three parameters: J, the maximum scale over which we take a wavelet transform; L, the number of elements of the finite rotation group G; and $m_0$, the maximum length of the paths p. While Proposition 4.3 does not necessarily apply to this form of D, we find that empirically this input encoding gives the best performance.
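In practice the transform can be computed with an off-the-shelf implementation. The sketch below uses the Kymatio library [58], assuming its numpy frontend and Scattering2D API, whose J, L and max_order arguments play the roles of the parameters J, L and $m_0$ above.

```python
import numpy as np
from kymatio.numpy import Scattering2D  # Kymatio library, reference [58]

# Scattering transform as the encoder D(u) for an input function sampled
# on a 32 x 32 grid; J, L and max_order play the roles of J, L and m_0.
scattering = Scattering2D(J=2, shape=(32, 32), L=8, max_order=2)

u = np.random.default_rng(0).normal(size=(32, 32)).astype(np.float32)
S_u = scattering(u)     # one channel per scattering path p
D_u = S_u.reshape(-1)   # flattened finite-dimensional encoding D(u)
print(S_u.shape, D_u.shape)
```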
Loss Function and Training
The proposed model is trained by minimizing the empirical risk loss over the available training data pairs,
$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \sum_{\ell=1}^{P} \left( s^i(y^i_\ell) - F_\theta(u^i)(y^i_\ell) \right)^2, \tag{10}$$
where $\theta = (\theta_q, \theta_f, \theta_g)$ denotes all trainable model parameters. This is the simplest choice that can be made for training the model. Other choices may include weighting the mean squared error loss using the $L^1$ norm of the ground truth output [26, 30], or employing a relative $L^2$ error loss [33]. The minimization is performed via stochastic gradient descent updates, where the required gradients of the loss with respect to all the trainable model parameters can be conveniently computed via reverse-mode automatic differentiation.
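Equation (10) amounts to a few lines of array code; the sketch below assumes (purely for illustration) that predictions and targets are stacked into (N, P, d_s) arrays.

```python
import numpy as np

def empirical_risk(s_true, s_pred):
    # Equation (10): squared error summed over the P query locations (and
    # output channels), averaged over the N training pairs.
    # s_true, s_pred : arrays of shape (N, P, d_s)
    return np.mean(np.sum((s_true - s_pred) ** 2, axis=(1, 2)))

# Example with N = 10 pairs, P = 100 queries, d_s = 1:
rng = np.random.default_rng(0)
s_true, s_pred = rng.normal(size=(10, 100, 1)), rng.normal(size=(10, 100, 1))
print(empirical_risk(s_true, s_pred))
```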
Implementation Overview
Algorithm 1 provides an overview of the steps required for implementing the LOCA method. The training data set is first processed by passing the input functions through a wavelet scattering network [43], followed by applying a positional encoding to the query locations and the quadrature/Monte-Carlo integration points. The forward pass of the model is evaluated and gradients are computed for use with a stochastic gradient descent optimizer. After training, we make one-shot predictions for super-resolution grids, and we compute the relative L 2 error between the ground truth output and the prediction.
Algorithm 1: Implementation summary of the LOCA method

Require:
• Input/output function pairs $\{u^i, s^i\}_{i=1}^{N}$.
• Query locations $y^i$ for evaluating $s^i$.
• Quadrature points $z_i$.

Pre-processing:
• Apply transformation (6) to the input function to get û, the input features.
• Apply positional encoding (8) to the query coordinates y, z, to get ŷ, ẑ.
• Choose the network architectures for the functions $q_{\theta_q}$, $f_{\theta_f}$, and $g_{\theta_g}$.
• Initialize the trainable parameters $\theta = (\theta_q, \theta_f, \theta_g)$, and choose a learning rate η.

Training:
for i = 0 to I do
  Randomly select a mini-batch of (û, ŷ, ẑ, s).
  Evaluate $g_{\theta_g}(q_{\theta_q}(\hat{z}))$.
  Compute the coupling kernel $\kappa(q_{\theta_q}(\hat{y}), q_{\theta_q}(\hat{z}))$ via (4).
  Numerically approximate the KCA (1) and compute ϕ(y).
  Evaluate $f_{\theta_f}(\hat{u})$, as in equation (6).
  Evaluate the expectation (7) to obtain s*, the model prediction.
  Evaluate the training loss (10) and compute its gradients $\nabla_\theta \mathcal{L}(\theta_i)$.
  Update the trainable parameters via stochastic gradient descent: $\theta_{i+1} \leftarrow \theta_i - \eta \nabla_\theta \mathcal{L}(\theta_i)$.
end for
Connections to Existing Operator Learning Methods
In this section, we provide some insight on the connections between our method and similar operator learning methods.
DeepONets
Note that if we identify our input feature map, v(u), with the DeepONet's branch network, and the location dependent probability distribution, ϕ(y), with the DeepONet's trunk network, then the last step of both models is computed the same way. We can recover the DeepONet architecture from our model under three changes to the architecture in the forward evaluation. First, we would remove the normalization step in the construction of ϕ. Next, we remove the KCA mechanism that is applied to the candidate score function g (equivalently we may fix the kernel κ to be δ-distributions along the diagonal). Finally, in the construction of the input feature map v(u), instead of the scattering transform we would act on the input with a collection of δ distributions at the fixed sensor locations. These differences between DeepONets and LOCA result in increased performance of our model, as we will see in Section 7.
Neural Operators
The connection between Neural Operators and DeepONets has been presented in Kovachki et. al. [46], where it is shown that a particular choice of neural operator architecture produces a DeepONet with an arbitrary trunk network and a branch network of a certain form. In particular, a Neural Operator layer has the form
$$v^{(\ell+1)}(z) = \sigma\left( W^{(\ell)} v^{(\ell)}(z) + \int_{\mathcal{Z}} k^{(\ell)}(s, z)\, v^{(\ell)}(s)\, ds \right), \tag{11}$$
where here σ is a point-wise nonlinearity. It is shown in Kovachki et. al. [46] that this architecture can be made to resemble a DeepONet under the following choices. First, set $W^{(\ell)} = 0$. Next, lift the input data to n tiled copies of itself and choose a kernel k that is separable in s and z. If the output of the layer is then projected back to the original dimension by summing the coordinates, the architecture resembles a DeepONet.
The correspondence between our model and DeepONets described above allows us to transitively connect our model to Neural Operators as well. We additionally note that the scattering transform component of our architecture can be viewed as a collection of multiple-layer Neural Operators with fixed weights. Returning to (11), when $W^{(\ell)} = 0$ for all ℓ, the forward pass of the architecture is a sequence of integral transforms interleaved with point-wise nonlinearities. Setting σ to be the complex modulus function and $k^{(\ell)}$ to be a wavelet filter $\psi_\lambda$, we may write $v^{(\ell+1)} = |v^{(\ell)} * \psi_\lambda|$.
When we compose L of these layers together, we recover (9) up to the application of the final low-pass filter (again a linear convolution):
$$v^{(L)} = \left|\left|\left| u * \psi_{\lambda_1} \right| * \psi_{\lambda_2} \right| \cdots * \psi_{\lambda_L} \right|.$$
Thus, we may interpret the scattering transform as samples from a collection of Neural Operators with fixed weights. This connection between the scattering transform and convolutional neural architectures with fixed weights was noticed during the original formulation of the wavelet scattering transform by Bruna et. al. [43], and thus also extends to Neural Operators via the correspondence between Neural Operators and (finite-dimensional) convolutional neural networks [46].
A key difference between our model and Neural Operators is how the kernel integral transform is applied. In the Neural Operator, it is applied directly to the input and the output functions of the internal layers, while in LOCA the kernel acts only on a score function of the output domain Y, as in (1).
Other Attention-Based Architectures
Here we compare our method with two other recently proposed attention-based operator learning architectures. The first is the Galerkin/Fourier Transformer [47]. This method operates on a fixed input and output grid, and most similarly represents the original sequence-to-sequence Transformer architecture [2] with different choices of normalization. As in the original sequence-to-sequence architecture, the attention weights are applied across the indices (sensor locations) of the input sequence. By contrast, in our model the attention mechanism is applied to a finite-dimensional feature representation of the input that is not indexed by the input function domain. Additionally, our attention weights are themselves coupled over the domain Y via the KCA mechanism (2) as opposed to being defined over the input function domain in an uncoupled manner.
A continuous attention mechanism for operator learning was also proposed as a special case of Neural Operators in Kovachki et. al. [46]. There, it was noted that if the kernel in the Neural Operator was (up to a linear transformation) of the form
$$k(v(x), v(y)) = \left( \int \exp\left( \frac{\langle A v(s),\, B v(y) \rangle}{\sqrt{m}} \right) ds \right)^{-1} \exp\left( \frac{\langle A v(x),\, B v(y) \rangle}{\sqrt{m}} \right),$$
with $A, B \in \mathbb{R}^{m \times n}$, then the corresponding Neural Operator layer can be interpreted as the continuous generalization of a transformer block. Further, upon discretization of the integral this recovers exactly the sequence-to-sequence discrete Transformer model.
The main difference of this kind of continuous transformer with our approach is again how the attention mechanism is applied to the inputs. The Neural Operator Transformer is similar to the Galerkin/Fourier Transformer in the sense that the attention mechanism is applied over the points of the input function itself, whereas our model first creates a different finite dimensional feature representation of the input function which the attention is applied to. We note that our model does make use of attention weights defined over a continuous domain, but it is the domain of the output functions Y as opposed to X . The coupling of the attention weights as a function of the output query in (2) with the kernel in (4) can be interpreted as a kind of un-normalized continuous self-attention mechanism where we view the query space Y as its own input space to generate the attention weights ϕ(y).
Experimental Results
In this section we provide a comprehensive collection of experimental comparisons designed to assess the performance of the proposed LOCA model against two state of the art operator learning methods, the Fourier Neural Operator (FNO) [33] and the DeepONet (DON) [23]. We will show that our method requires less labeled data than competing methods, is robust against noisy data and randomness in the model initialization, has a smaller spread of errors over testing data sets, and is able to successfully generalize in out-of-distribution testing scenarios. Evidence is provided for the following numerical experiments, see Figure 3 for a visual description.
• Antiderivative: Learning the antiderivative operator given multi-scale source terms.
• Darcy Flow: Learning the solution operator of the Darcy partial differential equation, which models the pressure of a fluid flowing through a porous medium with random permeability.
• Mechanical MNIST: Learning the mapping between the initial and final displacement of heterogeneous block materials undergoing equibiaxial extension.
• Shallow Water Equations: Learning the solution operator for a partial differential equation describing the flow below a pressure surface in a fluid with reflecting boundary conditions.
• Climate modeling: Learning the mapping from the air temperature field over the Earth's surface to the surface air pressure field, given sparse measurements.
For all experiments the training data sets will take the following form. For each of the N input/output function pairs $(u^i, s^i)$, we will consider m discrete measurements of each input function at fixed locations, $(u^i(x^i_1), \ldots, u^i(x^i_m))$, and M available discrete measurements of each output function, $(s^i(y^i_1), \ldots, s^i(y^i_M))$, with the query locations $\{y^i_\ell\}_{\ell=1}^{M}$ potentially varying over the data set. Out of the M available measurement points $\{y^i_\ell\}_{\ell=1}^{M}$ for each output function $s^i$, we consider the effect of taking only P of these points for each input/output pair. For example, if we use 10% of the labeled data, we set P = M/10 and build a training data set where each example is of the form $(\{u^i(x^i_j)\}_{j=1}^{m}, \{s^i(y^i_\ell)\}_{\ell=1}^{P})$. We round the percentages to the nearest integer or half-integer for clarity. We present details on the input and output data construction, as well as on the different problem formulations, in Section F.5 of the Appendix.
In each scenario the errors are computed between both the models output and ground truth at full resolution. Throughout all benchmarks, we employ Gaussian Error Linear unit activation functions (GELU) [48], and initialize all networks using the Glorot normal scheme [49]. All networks are trained via mini-batch stochastic gradient descent using the Adam optimizer with default settings [50]. The detailed hyper-parameter settings, the associated number of parameters for all examples, the computational cost, and other training details are provided in Appendix F.2. All code and data accompanying this manuscript will be made publicly available at https://github.com/PredictiveIntelligenceLab/LOCA.
Data Efficiency
In this section we investigate the performance of our model when the number of labeled output function points is small. In many applications labeled output function data can be scarce or costly to obtain. Therefore, it is desirable that an operator learning model is able to be successfully trained even without a large number of output function measurements. We investigate this property in the Darcy flow experiment by gradually increasing the percentage of labeled output function measurements used per input function example. Next, we compare the performance of all models for the Shallow Water benchmark in the small data regime. Lastly, we demonstrate that the proposed KCA weights provide additional training stability specifically in the small data regime. One important aspect of learning in the small data regime is the presence of outliers in the error statistics, which quantify the worst-case-scenario predictions. In each benchmark we present the following error statistics across the testing data set: the error spread around the median, and outliers outside the third quantile.

Figure 3: A schematic visualization of the operator learning benchmarks considered in this work. Shown are the input/output function and a description of their physical meaning, as well as the operator that we learn for each example. In the Mechanical MNIST example, for visual clarity we do not present the map that the model is actually learning, which is the displacement in the vertical and the horizontal directions, but the position of each pixel under a specified displacement. See Appendix Section F.5 for more details on the data set generation.

Figure 4 shows the effect of varying the percentage of labeled output points used per training example in the Darcy flow prediction example. The box plot shows the distribution of errors over the test data set for each model. We see that the proposed LOCA model is able to achieve low prediction errors even with 1.5% of the available output function measurements per example. It also has a consistently smaller spread of errors with fewer outliers across the test data set in all scenarios. Moreover, when our model has access to 6% of the available output function measurements it achieves lower errors than both the DON and FNO trained with any percentage (up to 100%) of the total available labeled data. Figure 5 shows the spread of errors across the test data set for the Shallow Water benchmark when the LOCA model is trained on 2.5% of the available labeled data per input-output function pair. We observe that our model outperforms DON and FNO in predicting the wave height, ρ, and provides similar errors to the FNO for the two velocity components, $v_1$ and $v_2$. Despite the fact that the two methods perform in a similar manner for the median error, LOCA consistently provides a much smaller standard deviation of errors across the test data set, as well as far fewer outliers.
We hypothesize that the ability of our model to successfully learn from fewer output function measurements stems from the KCA mechanism used in constructing ϕ(y). By coupling the values of the output function in this way, the model is able to learn the global behavior of the output functions with fewer example points. To demonstrate this, we use a low percentage of output function measurements and train the LOCA model with and without the KCA step. Table 1 shows the results for the case where we use the KCA weights and the case where we do not. We consider 1.5% of the available labeled data and lower the number of samples from N = 1,000 to N = 200. When KCA is removed, the training becomes unstable and results in a high testing error. With the KCA step for ϕ included, we see that the model still performs well in this small data regime.

Table 1: (Data Efficiency) Performance of LOCA with and without KCA: We present the mean, the standard deviation, the minimum and the maximum relative $L^2$ errors for the Darcy equation with and without KCA when using 1.5% of available output function measurements per training example. We see that the presence of KCA in the model guarantees stability in the training and results in small testing error.
Robustness
Operator learning can be a powerful tool for cases where we have access to clean simulation data for training, but wish to deploy the model on noisy experimental data. Alternatively, we may have access to noisy data for training and want to make predictions on noisy data as well. We quantify the ability of our model to handle noise in the data by measuring the percentage increase in mean error from the clean to the noisy data scenarios. For all experiments in this section, we consider 7% of the available labeled data.
We use the Mechanical MNIST benchmark to investigate the robustness of our model with respect to noise in the training and testing data. We consider three scenarios: one where the training and the testing data sets are clean, one where the training data set is clean but the output data set is corrupted by Gaussian noise sampled from N(0, 0.15 I), and one where both the input and the output data sets are corrupted by Gaussian noise sampled from N(0, 0.15 I). In Figure 6 we present the distribution of errors across the test data set for each noise scenario. We observe that for the case where both the training and the testing data are clean, the FNO achieves the best performance. In the scenario where the training data set is clean but the testing data set is noisy, we observe a percentage increase in the approximation error of all methods.

Figure 5: Relative $L^2$ errors for the Shallow Water benchmark. On the left, we present the ρ quantity, which is the height of the water, and $v_1$ and $v_2$, which are the two components of the fluid velocity vector. We observe that LOCA achieves higher accuracy, and presents fewer outliers and a more concentrated error spread compared to its competitors.
For the Clean to Noisy scenario the approximation error of the FNO method is increased by 1,930% and 2,238% for the displacement in the horizontal and vertical directions, respectively. For the DON method, the percentage increase is 112% and 96% for the displacement in the horizontal and vertical directions (labeled as $v_1$ and $v_2$), respectively. For the LOCA method the percentage increase is 80% and 85% for the displacement in the horizontal and vertical directions, respectively. For the Noisy to Noisy scenario the approximation error of the FNO method is increased by 280% and 347% for the displacement in the horizontal and vertical directions, respectively. For the DON method, the percentage increase is 128% and 120%, and for LOCA it is only 26% and 25% for each displacement component, respectively. We present the mean prediction error for each scenario and the corresponding percentage error increase in Table 2.
We observe that even though the FNO is very accurate for the case where both training and test data sets are clean, a random perturbation of the test data set can cause a huge decrease in accuracy. On the other hand, even though the DON method presents similar accuracy to our model in the clean to clean case, the standard deviation of the error is greater and its robustness to noise is inferior. LOCA is clearly superior in the case where the testing data are corrupted with Gaussian noise. We again emphasize that the metric by which we assess performance is not which method has the lowest relative prediction error, but which method presents the smallest percentage increase in error when noise exists in the testing (and, in the Noisy to Noisy case, training) data compared to the case where no noise exists.
Next, we examine the variability of the models' performance with respect to the random initialization of the network parameters. We consider the Mechanical MNIST benchmark where the input data is clean but the output data contain noise. We train each model 10 times with different random seeds for initialization and record the maximum error in each case. In Figure 7 we present the distribution of maximum prediction errors under different random seeds for the displacement in the horizontal and vertical directions, respectively. We observe that LOCA displays a smaller spread of error for the case of displacement in the horizontal direction, $v_1$, and similar performance to the FNO for the case of displacement in the vertical direction, $v_2$.

Table 2: Mean prediction errors for the Mechanical MNIST noise scenarios. The last two rows show the percentage increase in mean error from the noiseless case to the scenario where the testing input data is corrupted by noise and to the scenario where both the training and testing input data sets are corrupted by noise. For each case we consider 7% of the total data set as labeled data for training. We observe that our method shows the least percentage increase for each noise scenario.

Figure 7: The left and right subplots show the distribution of maximum errors over the testing data set for the horizontal and vertical displacements, respectively. We consider 7% of the available output function measurements for training and run the model for 10 different random initializations. We observe that our method shows better performance than the other methods for both parameters $v_1$ and $v_2$.
Generalization
The ultimate goal of data-driven methods is to perform well outside of the data set they are trained on. This ability to generalize is essential for these models to be practically useful. In this section we investigate the ability of our model to generalize in three scenarios. We first consider an extrapolation problem where we predict the daily Earth surface air pressure from the daily surface air temperature. Our training data set consists of temperature and pressure measurements from 2000 to 2005 and our testing data set consists of measurements from 2005 to 2010. In Figure 8, we present the results for the extrapolation problem when considering 4% of the available pressure measurements each day for training. We observe that our method achieves the lowest error rates while also maintaining a small spread of these errors across the testing data set. While the DON method achieves a competitive performance with respect to the median error, the error spread is larger than both LOCA and FNO with many outliers.
Next, we examine the performance of our model under a distribution shift of the testing data. The goal of the experiment is to learn the antiderivative operator where the training and testing data sets are sampled from a Gaussian process. We fix the length-scale of the testing distribution at 0.1 and examine the effect of training over 9 different data sets with length-scales ranging from 0.1 to 0.9. In Figure 9, we present the error on the testing data set after training on each different training data set. The error for each testing input is averaged over 10 random network initializations. We observe that while the LOCA and FNO methods present a similar error for the first two cases, the FNO error increases rapidly. On the other hand, the DON method, while presenting a larger error at first, eventually performs better than the FNO as the training length-scale increases. We find that LOCA outperforms its competitors in all cases.
Lastly, we examine the performance of the three models when the training and testing data sets both contain a wide range of scale and frequency behaviors. We consider this set-up as a toy model for a multi-task learning scenario and we want to explore the generalization capabilities of our model for this case. We construct a training and testing data set by sampling inputs from a Gaussian process where the length-scale and amplitude are chosen over ranges of 2 and 4 orders of magnitude, respectively. In Figure 10, we present samples from the input distribution, the corresponding output functions, and the distribution of errors on the testing data set. We observe that our method is more accurate and the error spread is smaller than for DON and the Fourier Neural Operator. While the FNO method shows a median that is close to the LOCA model, there exist many outliers that reach very high error values.

Figure 8: (Generalization) Relative $L^2$ error boxplots for the climate modeling experiment: We present the errors for the temperature prediction task on the testing data set. We consider 4% of the whole data set as labeled data used for training. We observe that our method performs better than the other methods both with respect to the median error and the error spread.
Discussion
This work proposes a novel operator learning framework with approximation theoretic guarantees. Drawing inspiration from the Bahdanau attention mechanism [5], the model is constructed by averaging a feature embedding of an input function over probability distributions that depend on the corresponding output function's query locations.
A key novelty of our approach is the coupling of these probability distributions through a variation of the classic attention mechanism called Kernel-Coupled Attention (KCA). Instead of normalizing a single proposal score function g defined over the query domain Y, the KCA mechanism couples the score function across points in Y by integrating against a similarity kernel. Thus, the KCA mechanism is able to model correlations between different query scores explicitly instead of relying on the score function g to learn these relations alone. We hypothesize, and support with experiments, that this property allows the model to learn very efficiently using a small fraction of labeled data. In order to have a feature encoder that is robust to small deformations and noise in the input, we employ a multi-resolution feature extraction method based on the wavelet scattering transform of Bruna et. al. [43]. We empirically show that this is indeed a property of our model. Our experiments additionally show that the model is able to generalize across varying distributions of functional inputs, and is able to extrapolate on a functional regression task with global climate data.
A potential drawback of the proposed method is the computational cost needed for numerically approximating the integrals in the KCA mechanism. When using Monte-Carlo with P query points there is a complexity of $O(P^2)$. Instead, if a quadrature approach is taken with P queries and Q nodes, there is a complexity of $O(PQ + Q^2)$. The relative efficiency of these two approaches in general will depend on the number of quadrature points necessary for a good approximation, and thus on the dimension of the query domain. In general, integrals over high dimensional domains will become increasingly costly to compute.

Figure 9: (Generalization) Antiderivative relative $L^2$ error boxplots for out-of-distribution testing sets: We show the performance of all models when trained on increasingly out of distribution data sets from the testing set. We use all available output function measurements for training.
Therefore, an immediate future research direction is to use further approximations to allow the kernel integral computations to scale to larger numbers of points and dimensions. A first approach is to parallelize this integral computation by partitioning the domain into pieces and summing the integral contributions from each piece. To lower the computational complexity of the kernel computations between the query and integration points we can also use approximations of the kernel matrix. For example, in the seminal paper of Rahimi and Recht [51] the authors propose an approximation of the kernel using a random feature strategy. More recently, in the context of transformer architectures, a number of approximations have been proposed to reduce the complexity of such computations to be linear in the number of kernel points, O(N). A non-exhaustive list of references includes Linformers [52], Performers [53], Nyströmformers [54] and Fast Transformers [55].
Another potential extension of our framework is to take the output of our model as the input function of another LOCA module and thus make a layered version of the architecture. While in our experiments we did not see that this modification significantly increased the performance of the model, it is possible that other variants of this modular architecture could give performance improvements. Lastly, recall that the output of our model corresponds to the context vector generated in the Bahdanau attention. In the align and translate model of Bahdanau et. al. [5] this context vector is used to construct a distribution over possible values at the output location. By using the output of our model as a context vector in a similar architecture, we can create a probabilistic model for the potential values of the output function, therefore providing a way to quantify the uncertainty associated with the predictions of our model.
A main application of operator learning methods is for PDEs, where they are used as surrogates for traditional numerical solvers. Since the forward pass of an operator learning model is significantly faster than classical numerical methods, the solution of a PDE under many different initial conditions can be expediently obtained. This can be a key enabler in design and optimal control problems, where many inputs must be tested in pursuit of identifying an optimal system configuration. A key advantage of operator learning techniques in this context is that they also allow the quick evaluation of sensitivities with respect to inputs (via automatic differentiation), thus enabling the use of gradient-based optimization. Conventional methods for computing sensitivities typically rely on solving an associated adjoint system with a numerical solver. In contrast, a well-trained operator learning architecture can compute these sensitivities at a fraction of the time. Therefore, we expect that successful application of operator learning methods to predict the output of physical systems from control inputs can have a significant impact in the design of optimal inputs and controls. Some preliminary work in this direction has been explored in Wang et. al. [56].
A Nomenclature

Table 3 summarizes the main symbols and notation used in this work.

y | Output function arguments (queries).
u | Input function in $C(\mathcal{X}, \mathbb{R}^{d_u})$.
s | Output function in $C(\mathcal{Y}, \mathbb{R}^{d_s})$.
F | Operator mapping input functions u to output functions s.
g(y) | Proposal score function.
$\tilde{g}(y)$ | Kernel-Coupled score function.
ϕ(y) | Attention weights at query y.
v(u) | Feature encoder.
κ(y, y′) | Coupling kernel.
k(y, y′) | Base similarity kernel.

Table 3: (Nomenclature) A summary of the main symbols and notation used in this work.
B Proof of Theorem 4.1
Proof. The starting point of the proof is the following lemma, which gives justification for approximating operators on compact sets with finite dimensional subspaces as in [22, 24, 46]. The lemma follows immediately from the fact that for any compact subset $\mathcal{U}$ of a Banach space E and any $\epsilon > 0$, there exists a finite dimensional subspace $E_n \subset E$ such that $d(\mathcal{U}, E_n) < \epsilon$.
Lemma B.1. Let $\mathcal{U} \subset E$ be a compact subset of a Banach space E. Then for any $\epsilon > 0$, there exist $n \in \mathbb{N}$, $\phi_1, \ldots, \phi_n \in E$, and functionals $c_1, \ldots, c_n$ with $c_i : E \to \mathbb{R}$ such that
$$\sup_{u \in \mathcal{U}} \left\| u - \sum_{i=1}^{n} c_i(u)\, \phi_i \right\|_E < \epsilon.$$
Returning to the problem of learning a continuous operator $G : \mathcal{U} \to C(\mathcal{Y}, \mathbb{R}^{d_s})$, since $\mathcal{U}$ is assumed to be compact and G is continuous, the image $G(\mathcal{U})$ is compact in the co-domain. Thus, we may apply Lemma B.1 to the set $G(\mathcal{U})$. This shows that for any $\epsilon > 0$, there exist $c_1, \ldots, c_n$ with each $c_i : \mathcal{U} \to \mathbb{R}$ linear and continuous and functions $\phi_1, \ldots, \phi_n$ with $\phi_i : \mathcal{Y} \to \mathbb{R}^{d_s}$ such that
$$\sup_{u \in \mathcal{U}} \sup_{y \in \mathcal{Y}} \left\| G(u)(y) - \sum_{i=1}^{n} c_i(u)\, \phi_i(y) \right\| < \epsilon. \tag{12}$$
Next, we show that the approximation of G(u)(y) given in (12) can be expressed equivalently as a vector of averages of a modified collection of functionals $c_i$. These functionals will form the coordinates of our input feature vector v(u). First, for each $\phi_i : \mathcal{Y} \to \mathbb{R}^{d_s}$ we may form the positive and negative parts, whose coordinates are defined by
$$(\phi_i^+)_q = \max\{(\phi_i)_q, 0\}, \qquad (\phi_i^-)_q = -\min\{(\phi_i)_q, 0\}.$$
Note that $\phi_i^+$ and $\phi_i^-$ are continuous, non-negative, and that $\phi_i = \phi_i^+ - \phi_i^-$. For $j = 1, \ldots, 2n$ define a new collection of functions $\varphi_j : \mathcal{Y} \to \mathbb{R}^{d_s}$ by
$$\varphi_j = \begin{cases} \dfrac{1}{2n \|\phi_i^+\|_\infty}\, \phi_i^+ & \text{if } j = 2i, \\[4pt] \dfrac{1}{2n \|\phi_i^-\|_\infty}\, \phi_i^- & \text{if } j = 2i - 1, \end{cases}$$
and define $\varphi_{2n+1} := 1_{d_s} - \sum_{j=1}^{2n} \varphi_j$.
By construction, for all y we have that $\varphi_j(y) \in [0, 1]^{d_s}$,
$$\mathrm{span}\{\phi_i\}_{i=1}^{n} \subseteq \mathrm{span}\{\varphi_j\}_{j=1}^{2n+1}, \quad \text{and} \quad \sum_{j=1}^{2n+1} \varphi_j(y) = 1_{d_s}.$$
In order to allow each output dimension of each $\varphi_j$ to have its own coordinate function (and thus have $v(u) \in \mathbb{R}^{n \times d_s}$), for each $\varphi_j$ we create $d_s$ new functions,
$$\varphi_{j,k}(y) := e_k e_k^\top \varphi_j(y),$$
where $e_k \in \mathbb{R}^{d_s}$ is the k-th standard basis vector in $\mathbb{R}^{d_s}$. Thus, we have constructed a collection of vectors $\varphi_{j,k}$ such that $\langle \varphi_{j,k}, e_m \rangle = 0$ whenever $k \neq m$,
$$\sum_{j=1}^{2n+1} \sum_{k=1}^{d_s} \varphi_{j,k}(y) = 1_{d_s}, \quad \forall y \in \mathcal{Y},$$
and
$$\mathrm{span}\{\phi_i\}_{i=1}^{n} \subseteq \mathrm{span}\left\{ \varphi_{j,k} \right\}_{j \in [2n+1],\, k \in [d_s]}.$$
Since from Lemma B.1 we know $d\left( \mathrm{span}\{\phi_i\}_{i=1}^{n},\, G(\mathcal{U}) \right) < \epsilon$, we conclude that $d\left( \mathrm{span}\{\varphi_{j,k}\}_{j \in [2n+1],\, k \in [d_s]},\, G(\mathcal{U}) \right) < \epsilon$, and can conclude the statement of the theorem.
C Proof of Proposition 4.1
Proof. Note that $\mathrm{im}(T_\kappa^{1/2}) = \mathcal{H}_\kappa$ [62]. Since κ is universal, $\mathrm{im}(T_\kappa^{1/2}) = \mathcal{H}_\kappa \subset C(\mathcal{Y}, \mathbb{R}^n)$ is dense. Thus, it suffices to show that $T_\kappa(A)$ is dense in $\mathrm{im}(T_\kappa^{1/2})$.
We will make use of the following fact, which we state as a lemma.
Lemma C.1. If f : X → Y is a continuous map and A ⊂ X is dense, then f (A) is dense in im(f ).
By the above lemma, we have that $T_\kappa(A)$ is dense in $\mathrm{im}(T_\kappa)$. Now we must show that $\mathrm{im}(T_\kappa) \subset \mathrm{im}(T_\kappa^{1/2})$ is dense as well. This again follows from the above lemma by noting that $T_\kappa = T_\kappa^{1/2} \circ T_\kappa^{1/2}$, so that $\mathrm{im}(T_\kappa) = T_\kappa^{1/2}\left(\mathrm{im}(T_\kappa^{1/2})\right)$ is the image of the dense set $\mathrm{im}(T_\kappa^{1/2})$ under $T_\kappa^{1/2}$, and hence dense in $\mathrm{im}(T_\kappa^{1/2})$.
D Proof of Proposition 4.2
In this section, we show that the coupling kernel is symmetric and positive semi-definite. These two conditions are necessary to obtain theoretical guarantees of universality. The symmetry of the kernel κ follows immediately from the symmetry of the base kernel k in (3) and the form of κ in (4). To prove κ is positive semi-definite we must show that for any $v_1, \ldots, v_n \in \mathbb{R}$ and $y_1, \ldots, y_n \in \mathcal{Y}$,
$$\sum_{i,j=1}^{n} v_i v_j\, \kappa(y_i, y_j) \geq 0.$$
For ease of notation define
$$Z_i := \left( \int_{\mathcal{Y}} k(q(y_i), q(z))\, dz \right)^{1/2}.$$
Using the definition of κ from (4),
$$\sum_{i,j=1}^{n} v_i v_j\, \kappa(y_i, y_j) = \sum_{i,j=1}^{n} \frac{v_i v_j\, k(q(y_i), q(y_j))}{Z_i Z_j} = \sum_{i,j=1}^{n} \frac{v_i}{Z_i} \frac{v_j}{Z_j}\, k(q(y_i), q(y_j)) \geq 0,$$
where in the last line we have used the positive semi-definiteness of k.
Finally, the injectivity of the map $q_\theta$ would imply that the overall feature map of κ is injective, which gives that the kernel is universal [63].
E Proof of Proposition 4.3
Proof. Since $\mathcal{U}$ is compact, h is uniformly continuous. Hence, there exists $\delta > 0$ such that for any $\|u - v\| < \delta$, $\|h(u) - h(v)\| < \epsilon/2$. Define $u_d := \sum_{i=1}^{d} \langle u, e_i \rangle e_i$. By the uniform convergence of $u_d \to u$ over $u \in \mathcal{U}$, there exists d such that for all $u \in \mathcal{U}$, $\|u - u_d\| < \delta$. Thus, for all $u \in \mathcal{U}$,
$$\|h(u) - h(u_d)\| < \frac{\epsilon}{2}.$$
If we define $r : \mathbb{R}^d \to C(\mathcal{X}, \mathbb{R}^{d_u})$ as $r(\alpha) := \sum_{i=1}^{d} \alpha_i e_i$, we may write $h(u_d) = (h \circ r)(D_d(u))$. Now, note that $h \circ r \in C(\mathbb{R}^d, \mathbb{R}^n)$, and recall that, by assumption, the function class $A_d$ is dense in $C(\mathbb{R}^d, \mathbb{R}^n)$. This means there exists $f \in A_d$ such that $\|f - h \circ r\| < \epsilon/2$. Putting everything together, we see that
$$\|h(u) - f \circ D_d(u)\| \leq \|h(u) - h(u_d)\| + \|(h \circ r)(D_d(u)) - f \circ D_d(u)\| < \epsilon.$$
F Supplementary Information for Experiments
In this section, we present supplementary information on the experiments presented in Section 7.
F.1 Computational Complexity
In LOCA the most expensive operations are the integral computations in the KCA mechanism. Let $z_1, \ldots, z_Q$ be the integration nodes with $z = [z_1, \ldots, z_Q]^\top$, and let the associated weights be $w_1, \ldots, w_Q$, with $w = [w_1, \ldots, w_Q]^\top$. For evaluating the KCA mechanism at a single query location $y_0$ with Q integration nodes we are required to compute the matrices $k(y_0, z) \in \mathbb{R}^{1 \times Q}$, with $[k(y_0, z)]_j = k(y_0, z_j)$, and $k(z, z) \in \mathbb{R}^{Q \times Q}$, with $[k(z, z)]_{ij} = k(z_i, z_j)$. These are combined to compute the kernel κ as
$$\kappa(y_0, z) \approx \frac{1}{\left( k(y_0, z)\, w \right)^{1/2}}\, k(y_0, z)\, \left( k(z, z)\, w \right)^{-1/2},$$
where the exponent of −1/2 in the last factor is applied coordinate-wise. Carrying out this computation requires Q steps to compute $k(y_0, z)$ and $Q^2$ steps to compute $k(z, z)$, giving an overall complexity of $Q + Q^2$. When considering P > 1 queries, the complexity for computing $k(y, z)$ becomes PQ and the overall complexity becomes $PQ + Q^2$, because we need to compute $k(z, z)$ only once. For the Monte Carlo case, Q = P and y = z, so we only need to make one computation of $k(y, y)$. Therefore, in this case we have a complexity of $Q^2$.
Both methods have their benefits and disadvantages: in the Monte Carlo case, we need to perform $P^2$ computations once, but the cost scales quadratically with P. On the other hand, Gauss-Legendre quadrature requires $PQ + Q^2$ evaluations, but if Q is small the overall computational cost is less than that of Monte-Carlo integration.
F.2 Architecture Choices and Hyper-parameter Settings
In this section we present the neural network architecture choices, the training details, the training wall-clock time, as well as the number of training parameters for each model compared in the experiments. Specifically, for the DON and FNO models, we have performed an extensive number of simulations to identify settings for which these competing methods achieve their best performance.
For LOCA and DON we set the batch size to 100 and use exponential learning rate decay with a decay rate of 0.99 every 100 training iterations. For the FNO training, we set the batch size to 100 and consider a learning rate $l_r = 0.001$, which we reduce by a factor of 0.5 every 100 epochs, together with a weight decay of 0.0001. Moreover, for the FNO method we use the ReLU activation function.
F.2.1 LOCA
For the LOCA model, we present the structure of the functions g, f , and q in Table 4. In Table 5 we present the number of samples considered for the train and test data sets, the number of points where the input and the output functions are evaluated, the dimensionality of positional encoding, the dimensionality of the latent space where we evaluate the expectation E(u)(y), the batch size used for training, and the number of training iterations. We present the parameters of the wavelet scattering network in Table 6. The method used for computing the kernel integral for each example is presented in Table 7.
F.2.2 DON
For the DON model, we present the structure of b and t, the branch and the trunk functions, in Table 8. In Table 9 we present the number of samples considered for the train and test data sets, the number of points where the input and the output functions are evaluated, the dimensionality of the positional encoding, the dimensionality of the latent space, the batch size used for training, and the number of training iterations. In order to achieve competitive performance, we also adopted some of the improvements proposed in Lu et. al. [64], including the application of harmonic feature expansions to both input and outputs, as well as normalization of the output functions.
F.2.3 FNO
For the FNO model, we present the architecture choice in Table 10. In Table 11 we present the number of samples considered for the train and test data sets, the number of points where the input and the output functions are evaluated, the batch size used for training and the number of training epochs.
F.3 Computational Cost
We present the wall clock time, in minutes, needed for training each model for each different example presented in the manuscript in Table 13. For the case of the Darcy flow, the computational time is calculated for the case of P = 1024, meaning we use all available labeled output function measurements per training example. We choose this number of query points to show that even when the number of labeled data is large, the computational cost is still reasonable, despite the KCA computation bottleneck. We observe that the wall clock times for all methods lie in the same order of magnitude. All the models are trained on a single NVIDIA RTX A6000 GPU.

Example | g depth | g width | f depth | f width | q depth | q width
Antiderivative | 2 | 100 | 1 | 500 | 2 | 100
Darcy Flow | 2 | 100 | 2 | 100 | 2 | 100
Mechanical MNIST | 2 | 256 | 2 | 256 | 2 | 256
Shallow Water Eq. | 1 | 1024 | 1 | 1024 | 1 | 1024
Climate Modeling | 2 | 100 | 2 | 100 | 2 | 100

Table 4: LOCA architectural choices for each benchmark considered in this work: We present the chosen architectures for g and q, the functions that construct ϕ(y), and the function v, which together build up the architecture of the LOCA model.

Table 5: LOCA model parameters for each benchmark considered in this work: We present the numbers of training and testing data $N_{\text{train}}$ and $N_{\text{test}}$, respectively, the number of input coordinate points m where the input function is evaluated, the number of coordinates P where the output function is evaluated, the dimension of the latent space n over which we evaluate the expectation, the number of positional encoding features H for the positional encoding, the dimensionality of the encoder l, the size of the batch B, and the number of iterations for which we train the model.

Table 6: Chosen parameters for the wavelet scattering network: J represents the log-2 scattering scales, L the angles used for the wavelet transform, and $m_0$ the maximum order of scattering coefficients to compute. The wavelet scattering network is implemented using the Kymatio library [58].

Table 7: Integration method used for computing the kernel integrals in each example.
F.4 Comparison Metrics
Throughout this work, we employ the relative $L^2$ error as a metric to assess the test accuracy of each model, namely:
$$\text{Test error metric} = \frac{\left\| s^i(y) - \hat{s}^i(y) \right\|_2}{\left\| s^i(y) \right\|_2},$$
where $\hat{s}^i$ denotes the model prediction for the i-th test example.
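A per-example implementation of this metric might read as follows (a sketch; the aggregation of the per-example errors into medians and boxplots follows Section 7):

```python
import numpy as np

def relative_l2(s_true, s_pred):
    # Relative L2 test error: ||s - s_hat||_2 / ||s||_2, per test example.
    return np.linalg.norm(s_true - s_pred) / np.linalg.norm(s_true)

# Errors are computed per test example and then summarized with boxplots:
rng = np.random.default_rng(0)
errors = [relative_l2(s, s + 0.01 * rng.normal(size=s.shape))
          for s in rng.normal(size=(5, 64))]   # 5 toy test outputs
print(np.median(errors))
```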
F.5 Experiments
In this section, we present additional details about the experimental scenarios discussed in Section 7.
F.5.1 Antiderivative
We approximate the antiderivative operator to demonstrate the generalization capabilities of LOCA in two inference scenarios. The antiderivative operator is defined as
$$\frac{ds(x)}{dx} = u(x), \qquad s(x) = s_0 + \int_0^x u(\tau)\, d\tau,$$
where we consider $x \in \mathcal{X} = [0, 1]$ and the initial condition s(0) = 0. For a given forcing term u, the solution operator G of the above system returns the antiderivative s(x). In the notation of our model, the input and output function domains coincide, $\mathcal{X} = \mathcal{Y}$ with $d_x = d_y = 1$. Since the solution operator is a map between scalar functions, we also have $d_u = d_s = 1$. Under this setup, our goal is to learn the solution operator $G : C(\mathcal{X}, \mathbb{R}) \to C(\mathcal{X}, \mathbb{R})$.
To construct the data sets we sample the forcing function u(x) from a Gaussian process prior and measure these functions at 500 points. We numerically integrate them to obtain 100 measurements of each output function to use for training different operator learning models.
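A sketch of this data-generation pipeline is given below; the RBF form of the Gaussian process covariance and the trapezoidal integration rule are assumptions for illustration, not necessarily the exact choices used to produce the paper's data sets.

```python
import numpy as np

def gp_sample(x, length_scale, output_scale=1.0, rng=None):
    # Draw u(x) from a zero-mean Gaussian process; an RBF covariance is
    # assumed here for illustration.
    rng = np.random.default_rng() if rng is None else rng
    cov = output_scale * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale**2)
    return rng.multivariate_normal(np.zeros(len(x)), cov + 1e-10 * np.eye(len(x)))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)               # 500 input measurement locations
u = gp_sample(x, length_scale=0.2, rng=rng)  # forcing term u(x)
# Antiderivative with s(0) = 0 via the trapezoidal rule:
s = np.concatenate([[0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(x))])
keep = rng.choice(len(x), size=100, replace=False)  # 100 labeled output points
y_queries, s_labels = x[keep], s[keep]
print(y_queries.shape, s_labels.shape)  # (100,) (100,)
```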
For investigating the performance of LOCA on out-of-distribution prediction tasks, we create training data sets by choosing $l_{\text{train}} \in [0.1, 0.9]$, and consider 9 cases of increasing $l_{\text{train}}$ spaced by 0.1 each. The training and testing data sets each contain N = 1,000 solutions of the above system, and we use 100% of all available output function evaluation points, both for training and testing.
For the case where we train and test on multiple length and output scales, we construct each example in the data set as follows. To construct each input sample, we first sample a uniform random variable $\delta \sim \mathcal{U}(-2, 1)$ and set the corresponding input sample length-scale to $l = 10^\delta$. Similarly, we construct a random amplitude scale by sampling $\zeta \sim \mathcal{U}(-2, 2)$ and setting $o = 10^\zeta$. Then we sample u(x) from a Gaussian process prior $u(x) \sim \mathcal{GP}(0, \mathrm{Cov}(x, x'))$, where $\mathrm{Cov}(x, x') = o \exp\left( -\frac{(x - x')^2}{2 l^2} \right)$. The length and output scales are different for each realization; therefore we have 1,000 different length and output scales in the problem.
F.5.2 Darcy Flow
Fluid flow through porous media is governed by Darcy's law [65], which can be mathematically expressed by the following partial differential equation system,
$$\nabla \cdot \left( u(x) \nabla s(x) \right) = f(x), \quad x \in \mathcal{X}, \tag{13}$$
subject to appropriate boundary conditions
$$s = 0 \ \text{on}\ \Gamma_{\mathcal{X}}, \qquad \left( u(x) \nabla s(x) \right) \cdot n = g \ \text{on}\ \Gamma_N, \qquad \Gamma_N = \{(x, 0) \cup (x, 1) \mid x \in [0, 1]\} \subset \partial \mathcal{X}.$$
For a given forcing term f and set of boundary conditions, the solution operator G of system (13) maps the permeability function u(x) to the fluid pressure function s(x). In the notation of our model, the input and output function domains coincide, $\mathcal{X} = \mathcal{Y}$ with $d_x = d_y = 2$. Since in this case the solution operator is a map between scalar functions, we also have $d_u = d_s = 1$. Under this setup, our goal is to learn the solution operator $G : C(\mathcal{X}, \mathbb{R}) \to C(\mathcal{X}, \mathbb{R})$.

Figure 11: Comparison between the full resolution prediction and ground truth for the flow through porous medium data set: We present the input sample, the prediction, the ground truth, and the absolute error for three realizations of the Darcy flow system.
We set the Neumann boundary condition to be $g(x) = \sin(5x)$, the forcing term $f(x) = 5 \exp\left(-\left((x_1 - 0.5)^2 + (x_2 - 0.5)^2\right)\right)$, and sample the permeability function u(x) from a Gaussian measure, as $u(x) = \exp(u_0 \cos(x))$ with $u_0 \sim \mathcal{N}\left(0,\, 7^{3/2} (-\Delta + 49 I)^{-1.5}\right)$. The training and testing data sets are constructed by sampling the initial condition along a 32 × 32 grid and solving the forward problem with the finite element library FEniCS [66]. This gives us access to 32 × 32 solution values to use for training different operator learning models. Sub-sampling these solution values in the manner described in Section 7 allows us to create training data sets to examine the effect of using only a certain percentage of the available data. Figure 11 gives a visual comparison of the outputs of our trained model against the ground truth for three randomly chosen initial conditions, along with a plot of the point-wise error. We see that our model performs well across random initial conditions that were not present in the training data set.
F.5.3 Mechanical MNIST
For this example, our goal is to learn the operator that maps initial deformations to later-time deformations in the equibiaxial extension benchmark from the Mechanical MNIST database [67]. The data set is constructed from the results of 70,000 finite-element simulations of a heterogeneous material subject to large deformations. MNIST images are considered to define a heterogeneous block of material described by a compressible Neo-Hookean model [67].
In our case, we are interested in learning displacement fields at later times, given some initial displacement. The material constitutive law is described by Lejeune [67] as
ψ = (1/2)µ (F : F − 3 − 2 ln(det F)) + (1/2)λ ((1/2)((det F)² − 1) − ln(det F)),    (14)
where ψ is the strain energy, F is the deformation gradient, and µ and λ are the Lamé constants, which can be computed from the Young's modulus and the Poisson ratio via
E = µ(3λ + 2µ)/(λ + µ),    ν = λ/(2(λ + µ)).
The Young's modulus is chosen based on the bitmap values, converting the image to a material as

E = (b/255)(100 − 1) + 1,
where b is the bitmap value. Here, the Poisson ratio is fixed to ν = 0.3 for all block materials. This means that the pixels inside the digits are block materials that are much stiffer than the pixels outside the digits. For the equi-biaxial extension experiments, Dirichlet boundary conditions are applied by considering different displacement values d = [0.0, 0.001, 0.01, 0.1, 0.5, 1, 2, 4, 6, 8, 10, 12, 14] for the right and top of the domain, and −d for the left and bottom of the domain. In this benchmark the input and the output function domains coincide, X = Y with d_x = d_y = 2, while the solution operator G is a map between vector fields with d_u = d_s = 2. Consequently, our goal here is to learn the solution operator G : C(X, R²) → C(X, R²). Even though we learn a map between displacement vectors, we present the magnitude of the displacement, s = √(v₁² + v₂²), for visual clarity of our plots.
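The conversion from a bitmap to material parameters can be sketched as follows (our illustration; inverting the stated relations for E and ν to obtain µ and λ uses the standard elasticity expressions):

```python
import numpy as np

def lame_constants_from_bitmap(b, nu=0.3):
    """Map MNIST bitmap values b in [0, 255] to the Lame constants (mu, lambda)."""
    E = b / 255.0 * (100.0 - 1.0) + 1.0             # background E = 1, digit pixels near E = 100
    mu = E / (2.0 * (1.0 + nu))                     # inverts E = mu(3 lam + 2 mu)/(lam + mu)
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # and nu = lam/(2(lam + mu))
    return mu, lam

rng = np.random.default_rng(0)
bitmap = rng.integers(0, 256, size=(28, 28))        # stand-in for an MNIST digit
mu, lam = lame_constants_from_bitmap(bitmap)
```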
The data set is constructed by sampling MNIST digits on a 28 × 28 grid and solving equation (14) using the finite element library FEniCS [66]. Out of the 70,000 realizations that the MNIST data set contains, 60,000 are used for training and 10,000 for testing, therefore N_train = 60,000 and N_test = 10,000. We randomly sub-sample the number of measurement points per output function, as explained in Section 7, to create a training data set and to demonstrate that our model only needs a small amount of labeled data to provide accurate predictions.
We present a visual comparison of the outputs of the trained model against the ground truth solution for three randomly chosen initial conditions from the test data set in Figure 12. Figure 13 presents the same comparison for one initial condition to show the change in the pixel position due to the applied displacement, which is not visible in the case where we present multiple solutions at the same time. The error reported in Figure 13 illustrates the discrepancy (shown in magenta) between the ground truth and the predicted pixel positions.
F.5.4 Shallow Water Equations
Currents in Earth science are often modeled by the Shallow Water equations, which describe the flow below a pressure surface when the horizontal length-scales are much larger than the vertical ones. The system of equations is defined as:
∂ρ/∂t + ∂(ρv₁)/∂x₁ + ∂(ρv₂)/∂x₂ = 0,
∂(ρv₁)/∂t + ∂/∂x₁(ρv₁² + (1/2)gρ²) + ∂(ρv₁v₂)/∂x₂ = 0,
∂(ρv₂)/∂t + ∂(ρv₁v₂)/∂x₁ + ∂/∂x₂(ρv₂² + (1/2)gρ²) = 0,    t ∈ (0, 1], x ∈ (0, 1)²,
where ρ is the total fluid column height, v₁ the velocity in the x₁-direction, v₂ the velocity in the x₂-direction, both averaged across the vertical column, and g the acceleration due to gravity. The above equations can also be written in conservation form:

∂U/∂t + ∂F/∂x₁ + ∂G/∂x₂ = 0.

For a given set of initial conditions, the solution operator G of F.5.4 maps the initial fluid column height and velocity fields to the fluid column height and velocity fields at later times. Again in this problem the input and the output function domains coincide, therefore X = Y with d_x = d_y = 3 and d_u = d_s = 3. The goal is to learn the operator G : C(X, R³) → C(X, R³).
We set the boundary conditions by considering a solid, impermeable wall with reflective boundaries: v₁ · n_{x₁} + v₂ · n_{x₂} = 0, where n = n_{x₁} î + n_{x₂} ĵ is the unit outward normal of the boundary. We sample the initial conditions by considering a droplet of random width, falling from a random height onto a random spatial location, with zero initial velocities:
ρ = 1 + h exp(−((x₁ − ξ)² + (x₂ − ζ)²)/w),    v₁ = v₂ = 0,
where h corresponds to the altitude that the droplet falls from, w to the width of the droplet, and ξ and ζ to the coordinates where the droplet falls at time t = 0 s. Because the velocities v₁, v₂ are equal to zero at the initial time t₀ = 0 s for all realizations, we choose time t₀ = dt = 0.002 s as the initial time to make the problem more interesting. Therefore, the input functions become
ρ = 1 + h exp(−((x₁ − ξ)² + (x₂ − ζ)²)/w),    v₁ = v₁(dt, y₁, y₂),    v₂ = v₂(dt, y₁, y₂).
We set the random variables h, w, ξ, and ζ to be distributed according to the uniform distributions h ∼ U(1.5, 2.5), w ∼ U(0.002, 0.008), ξ ∼ U(0.4, 0.6), ζ ∼ U(0.4, 0.6).
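A short sketch of this initial-condition sampler (ours, for illustration):

```python
import numpy as np

def sample_initial_condition(rng, n=32):
    """Random falling-droplet initial condition on an n x n grid over (0, 1)^2."""
    h = rng.uniform(1.5, 2.5)       # height the droplet falls from
    w = rng.uniform(0.002, 0.008)   # droplet width
    xi = rng.uniform(0.4, 0.6)      # droplet center, x1
    zeta = rng.uniform(0.4, 0.6)    # droplet center, x2
    x1, x2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    rho = 1.0 + h * np.exp(-((x1 - xi) ** 2 + (x2 - zeta) ** 2) / w)
    v1 = np.zeros_like(rho)         # zero initial velocities
    v2 = np.zeros_like(rho)
    return rho, v1, v2

rng = np.random.default_rng(0)
rho0, v10, v20 = sample_initial_condition(rng)
```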
The data set is constructed by sampling the initial conditions along a 32 × 32 grid and solving the forward problem using a Lax-Friedrichs scheme. This provides us with solutions on a 32 × 32 grid which we can use for the different operator learning models. Sub-sampling the solution to create the training data set allows us to predict the solution using only a percentage of the available spatial data.
In Figures 14, 15, 16, 17 and 18 we provide a visual comparison of the outputs of the trained model for 5 time steps, t = [0.11, 0.16, 0.21, 0.26, 0.31] s, for a randomly chosen initial condition, along with the point-wise absolute error plots. We see that our model provides favorable solutions for all time steps for an initial condition not in the training data set.
F.5.5 Climate Modeling
For this example, our aim is to approximate the map between the surface air temperature and the surface air pressure. In contrast to the previous examples, here we do not assume a known relation between these two fields, for example a partial differential equation or a constitutive law. We therefore aim to learn a black-box operator which we then use for making predictions of the pressure using the temperature as an input. We thus consider the map

T(x) → P(y),

where x, y ∈ [−90, 90] × [0, 360] for the latitude and longitude. For a given day of the year, the solution operator maps the surface air temperature to the surface air pressure. For this set-up, the input and output function domains coincide, which means X = Y with d_x = d_y = 2, and d_u = d_s = 1 because the input and output functions are scalar fields. We can write the map as G : C(X, R) → C(X, R).
For constructing the training data set, we consider the Physical Sciences Laboratory meteorological data [68] (https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.surface.html) from the year 2000 to 2005. We take the different model realizations to be the values of the daily temperature and pressure for these 5 years, meaning N_train = 1825 (excluding the days of leap years). We sub-sample the spatial coverage from the 2.5 degree latitude × 2.5 degree longitude global grid (144 × 73) to 72 × 72, creating a regular grid for both quantities. We consider a test data set consisting of the daily surface air temperature and pressure data from the years 2005 to 2010, meaning N_test = 1825 (excluding leap years), also on a 72 × 72 grid.
We present the prediction and the ground truth together with the respective input and error in Figure 19. The prediction, the ground truth solution and the absolute error are all presented on a 72 × 72 grid.

Figure 18: Comparison between the predicted and ground truth solution of the Shallow Water Equations benchmark: We present the inputs to the model (initial conditions), the ground truth and the predicted parameters, as well as the absolute error for time instance t = 0.31 s.

Figure 19: Comparison between the full resolution prediction and baseline for the climate modeling benchmark: We present the input temperature field, the output prediction and ground truth, as well as the absolute error between our model's prediction and the ground truth solution.
Figure 2: Schematic illustration of LOCA.

Proposition 4.1 (Preserves Universality). Let κ : Y × Y → R be a positive definite and symmetric universal kernel with associated RKHS H_κ and define the integral operator

Proposition 4.2 (Universality of the Kernel κ). The kernel defined in (4) is positive definite and symmetric. Further, if q is injective, it defines a universal RKHS.

Figure 4: (Data Efficiency) Relative L² error boxplots for the solution of Darcy flow: We present the error statistics for the case of the Darcy flow in the form of boxplots for the case where we train on [1.5, ..., 100]% of the available output function measurements per example. We observe that our model presents fast convergence to a smaller median error than the other methods, and the error spread is more concentrated around the median with fewer outliers.

Figure 5: (Data Efficiency) Relative L² error boxplots for the solution of the Shallow Water equations: We present the errors for each different predicted quantity of the Shallow Water equations.

Figure 6: (Robustness) Relative L² error boxplots for the Mechanical MNIST benchmark with noisy data: The left figure gives the distribution of errors for the displacement in the horizontal axis, v₁, and the right figure gives the displacement in the vertical axis, v₂. For all cases we consider 7% of the whole training data set as labeled data used during training.

Figure 7: (Robustness) Maximum relative L² error boxplots for Mechanical MNIST over random model initializations.

Figure 10: (Generalization) Antiderivative relative L² error boxplots given input functions with multiple lengthscales and amplitudes: We present samples of the input and output functions from the testing data set in the top left and right figures, respectively, as well as the test error boxplots for each method, bottom figure.

Error metric: with s̃(y) the model predicted solution, s(y) the ground truth solution and i the realization index, the relative L² error is computed across all examples in the testing data set, and different statistics of the error vector are computed: the median, quantiles, and outliers. For all examples the errors are computed between the full resolution reconstruction and the full resolution ground truth solution.

Figure 12: Comparison between the predicted and the ground truth displacement magnitudes of the Mechanical MNIST case: We present the results of our model for 3 different MNIST digits under final displacement d = 14. Despite our model having displacement vector fields as inputs and outputs, we present our inputs and results in the form of positions. For this purpose, we add the displacements in the horizontal and vertical directions to the undeformed positions of the MNIST digit pixels, which we assume lie on a regular grid. The normalized absolute error is computed with respect to the position and not the displacement in each direction.

Figure 13: Schematic comparison between the predicted and the ground truth final positions for the Mechanical MNIST benchmark: We present the ground truth final and the predicted final positions of the block material together with the point-wise error between them (shown in magenta), as well as the initial position. We present this result in a schematic manner, i.e. without an indication of the error magnitude, in order to provide a sense of the deformation of the MNIST pixels under the final displacement.

Figure 14: Comparison between the predicted and ground truth solution of the Shallow Water Equations benchmark: We present the inputs to the model (initial conditions), the ground truth and the predicted parameters, as well as the absolute error for time instance t = 0.11 s.

Figure 15: Comparison between the predicted and ground truth solution of the Shallow Water Equations benchmark: We present the inputs to the model (initial conditions), the ground truth and the predicted parameters, as well as the absolute error for time instance t = 0.16 s.

Figure 16: Comparison between the predicted and ground truth solution of the Shallow Water Equations benchmark: We present the inputs to the model (initial conditions), the ground truth and the predicted parameters, as well as the absolute error for time instance t = 0.21 s.

Figure 17: Comparison between the predicted and ground truth solution of the Shallow Water Equations benchmark: We present the inputs to the model (initial conditions), the ground truth and the predicted parameters, as well as the absolute error for time instance t = 0.26 s.
Table 2: (Robustness) Mechanical MNIST prediction error with noisy data: The first three rows present the mean relative L² errors of the vertical and horizontal displacements [Error(v₁), Error(v₂)].

Integral computation method for each benchmark considered in this work: We present the method that is used to compute the required kernel integrals for the LOCA method.

Example              Method
Antiderivative       Quadrature
Darcy Flow           Quadrature
Mechanical MNIST     Quadrature
Shallow Water Eq.    Monte Carlo
Climate Modeling     Monte Carlo
Table 7: DON architecture choices for each benchmark considered in this work.

Example              b_depth   b_width   t_depth   t_width
Antiderivative       2         512       2         512
Darcy Flow           6         100       6         100
Mechanical MNIST     4         100       4         100
Shallow Water Eq.    11        100       11        100
Climate Modeling     4         100       4         100
Table 8:

Example              N_train   N_test   m      P     n     H    l     Batch   # Train iterations
Antiderivative       1000      1000     1000   100   100   2    100   100     50000
Darcy Flow           1000      1000     1024   -     100   6    100   100     20000
Mechanical MNIST     60000     10000    784    56    500   10   100   100     100000
Shallow Water Eq.    1000      1000     1024   128   480   2    100   100     80000
Climate Modeling     1825      1825     5184   144   100   10   100   100     100000

Table 9: DON model parameter for each benchmark considered in this work: We present the numbers of training and testing data N_train and N_test, respectively, the number of input coordinate points m where the input function is evaluated, the number of coordinates P where the output function is evaluated, the dimension of the latent space n over which we evaluate the inner product of the branch and the trunk networks, the number of positional encoding features H, the size of the batch B, and the number of iterations for which we train the model.

Table 10: FNO architecture choices for each benchmark considered in this work.
Example              # of modes   width   # of FNO layers
Antiderivative       32           100     4
Darcy Flow           8            32      4
Mechanical MNIST     12           32      4
Shallow Water Eq.    8            25      4
Climate Modeling     12           32      4
Table 11: FNO model parameter for each benchmark considered in this work: We present the numbers of training and testing data N_train and N_test, respectively, the number of input coordinate points m where the input function is evaluated, the number of coordinates P where the output function is evaluated, the size of the batch B, and the number of epochs for which we train the model.

Example              N_train   N_test   m      P     Batch   # Train Epochs
Antiderivative       1000      1000     1000   100   100     500
Darcy Flow           1000      1000     1024   -     100     500
Mechanical MNIST     60000     10000    784    56    100     200
Shallow Water Eq.    1000      1000     1024   128   100     400
Climate Modeling     1825      1825     5184   144   73      250
Table 12: Total number of trainable parameters for each model, and for each benchmark considered in this work.

Example              LOCA        DON         FNO
Antiderivative       1,677,300   2,186,672   1,333,757
Darcy Flow           381,000     449,400     532,993
Mechanical MNIST     2,475,060   3,050,300   1,188,514
Shallow Water Eq.    5,528,484   5,565,660   5,126,690
Climate Modeling     1,239,500   5,805,800   1,188,353
Table 13: Computational cost for training each model across all benchmarks considered in this work: We present the wall clock time in minutes that is needed to train each model on a single NVIDIA RTX A6000 GPU.

Example                  LOCA    DON     FNO
Antiderivative           2.23    2.08    2.06
Darcy Flow (P = 1024)    5.51    3.50    1.50
Mechanical MNIST         21.70   16.61   22.87
Shallow Water Eq.        12.10   15.39   13.95
Climate Modeling         4.52    7.51    10.49
Acknowledgements. We thank the developers of the software that enabled our research, including JAX [57], Kymatio [58], Matplotlib [59], PyTorch [60] and NumPy [61]. We would also like to thank Andreas Kalogeropoulos and Alp Aydinoglu for their useful feedback on the manuscript.
References

[1] Venkataraman Santhanam, Vlad I. Morariu, and Larry S. Davis. Generalized deep image to image regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5609-5619, 2017.
[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
[3] Alvin Rajkomar, Jeffrey Dean, and Isaac Kohane. Machine learning in medicine. New England Journal of Medicine, 380(14):1347-1358, 2019.
[4] Georgios Kissas, Yibo Yang, Eileen Hwuang, Walter R. Witschey, John A. Detre, and Paris Perdikaris. Machine learning in cardiovascular flows modeling: Predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 358:112623, 2020.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 2015.
[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[7] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International Conference on Machine Learning, pages 4055-4064. PMLR, 2018.
[8] Yuan Gong, Yu-An Chung, and James Glass. AST: Audio spectrogram Transformer. arXiv preprint arXiv:2104.01778, 2021.
[9] Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck. Music transformer. arXiv preprint arXiv:1809.04281, 2018.
[10] James O. Ramsay. When the data are functions. Psychometrika, 47(4):379-396, 1982.
[11] James O. Ramsay and C. J. Dalzell. Some tools for functional data analysis. Journal of the Royal Statistical Society: Series B (Methodological), 53(3):539-561, 1991.
[12] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media, 2009.
[13] Thomas Hofmann, Bernhard Schölkopf, and Alexander J. Smola. Kernel methods in machine learning. The Annals of Statistics, pages 1171-1220, 2008.
[14] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[15] Hiroyuki Takeda, Sina Farsiu, and Peyman Milanfar. Kernel regression for image processing and reconstruction. IEEE Transactions on Image Processing, 16(2):349-366, 2007.
[16] Xialiang Dou and Tengyuan Liang. Training neural networks as learning data-adaptive kernels: Provable representation and approximation benefits. Journal of the American Statistical Association, pages 1-14, 2020.
[17] Charles A. Micchelli and Massimiliano Pontil. On learning vector-valued functions. Neural Computation, 17(1):177-204, 2005.
[18] Andrea Caponnetto, Charles A. Micchelli, Massimiliano Pontil, and Yiming Ying. Universal multi-task kernels. The Journal of Machine Learning Research, 9:1615-1646, 2008.
[19] Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, and Manuel Davy. Nonlinear functional regression: a functional RKHS approach. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 374-380. JMLR Workshop and Conference Proceedings, 2010.
[20] Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, and Julien Audiffren. Operator-valued kernels for learning from functional response data. The Journal of Machine Learning Research, 17(1):613-666, 2016.
[21] Houman Owhadi. Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks. arXiv preprint arXiv:2008.03920, 2020.
[22] Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911-917, 1995.
[23] Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
[24] Samuel Lanthaler, Siddhartha Mishra, and George Em Karniadakis. Error estimates for DeepONets: A deep learning framework in infinite dimensions. arXiv preprint arXiv:2102.09618, 2021.
[25] Shengze Cai, Zhicheng Wang, Lu Lu, Tamer A. Zaki, and George Em Karniadakis. DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. arXiv preprint arXiv:2009.12935, 2020.
[26] P. Clark Di Leoni, Lu Lu, Charles Meneveau, George Karniadakis, and Tamer A. Zaki. DeepONet prediction of linear instability waves in high-speed boundary layers. arXiv preprint arXiv:2105.08697, 2021.
[27] Chensen Lin, Zhen Li, Lu Lu, Shengze Cai, Martin Maxey, and George Em Karniadakis. Operator learning for predicting multiscale bubble growth dynamics. The Journal of Chemical Physics, 154(10):104118, 2021.
[28] Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40):eabi8605, 2021.
[29] Sifan Wang and Paris Perdikaris. Long-time integration of parametric evolution equations with physics-informed DeepONets. arXiv preprint arXiv:2106.05384, 2021.
[30] Sifan Wang, Hanwen Wang, and Paris Perdikaris. Improved architectures and training algorithms for deep operator networks. arXiv preprint arXiv:2110.01654, 2021.
[31] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020.
[32] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. arXiv preprint arXiv:2006.09535, 2020.
[33] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
[34] Craig R. Gin, Daniel E. Shea, Steven L. Brunton, and J. Nathan Kutz. DeepGreen: Deep learning of Green's functions for nonlinear boundary value problems. Scientific Reports, 11(1):1-14, 2021.
[35] Nicholas H. Nelsen and Andrew M. Stuart. The random feature model for input-output maps between Banach spaces. arXiv preprint arXiv:2005.10224, 2020.
[36] Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Transformer dissection: An unified understanding for Transformer's attention via the lens of kernel. arXiv preprint arXiv:1908.11775, 2019.
[37] Charles A. Micchelli and Massimiliano Pontil. Kernels for multi-task learning. In NIPS, volume 86, page 89. Citeseer, 2004.
[38] James W. Cooley and John W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297-301, 1965.
[39] Ingrid Daubechies. Orthogonal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 1988.
[40] Bharath K. Sriperumbudur, Kenji Fukumizu, and Gert R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12(70):2389-2410, 2011.
[41] Zelun Wang and Jyh-Charn Liu. Translating math formula images to LaTeX sequences using deep neural networks with sequence-level training, 2019.
[42] Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331-1398, 2012.
[43] Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872-1886, 2013.
[44] Edouard Oyallon, Eugene Belilovsky, and Sergey Zagoruyko. Scaling the scattering transform: Deep hybrid networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5618-5627, 2017.
[45] Kuang-Yu Chang and Chu-Song Chen. A learning framework for age rank estimation based on face images with scattering transform. IEEE Transactions on Image Processing, 24(3):785-798, 2015.
[46] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. arXiv preprint arXiv:2108.08481, 2021.
[47] Shuhao Cao. Choose a Transformer: Fourier or Galerkin. arXiv preprint arXiv:2105.14995, 2021.
[48] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[49] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249-256. JMLR Workshop and Conference Proceedings, 2010.
[50] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[51] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, volume 3, page 5. Citeseer, 2007.
[52] Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.
[53] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with Performers. arXiv preprint arXiv:2009.14794, 2020.
[54] Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. Nyströmformer: A Nyström-based algorithm for approximating self-attention. arXiv preprint arXiv:2102.03902, 2021.
[55] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156-5165. PMLR, 2020.
[56] Sifan Wang, Mohamed Aziz Bhouri, and Paris Perdikaris. Fast PDE-constrained optimization via self-supervised operator learning. arXiv preprint arXiv:2110.13297, 2021.
[57] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
[58] Mathieu Andreux, Tomás Angles, Georgios Exarchakis, Roberto Leonarduzzi, Gaspar Rochette, Louis Thiry, John Zarka, Stéphane Mallat, Joakim Andén, Eugene Belilovsky, et al. Kymatio: Scattering transforms in Python. Journal of Machine Learning Research, 21(60):1-6, 2020.
[59] John D. Hunter. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90-95, 2007.
[60] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026-8037, 2019.
[61] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, et al. Array programming with NumPy. Nature, 585(7825):357-362, 2020.
[62] Vern I. Paulsen and Mrinal Raghupathi. An Introduction to the Theory of Reproducing Kernel Hilbert Spaces, volume 152. Cambridge University Press, 2016.
[63] Andreas Christmann and Ingo Steinwart. Universal kernels on non-standard input spaces. In Advances in Neural Information Processing Systems, pages 406-414. Citeseer, 2010.
[64] Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data. arXiv preprint arXiv:2111.05512, 2021.
[65] Jacob Bear. Dynamics of Fluids in Porous Media. Courier Corporation, 2013.
[66] Martin Alnaes, Jan Blechta, Johan Hake, August Johansson, Benjamin Kehlet, Anders Logg, Chris Richardson, Johannes Ring, Marie E. Rognes, and Garth N. Wells. The FEniCS project version 1.5. Archive of Numerical Software, 3(100), 2015.
[67] Emma Lejeune. Mechanical MNIST: A benchmark dataset for mechanical metamodels. Extreme Mechanics Letters, 36:100659, 2020.
[68] Eugenia Kalnay, Masao Kanamitsu, Robert Kistler, William Collins, Dennis Deaven, Lev Gandin, Mark Iredell, Suranjana Saha, Glenn White, John Woollen, et al. The NCEP/NCAR 40-year reanalysis project. Bulletin of the American Meteorological Society, 77(3):437-472, 1996.
Code and data: https://github.com/PredictiveIntelligenceLab/LOCA
COMPARISON BETWEEN BAYESIAN AND FREQUENTIST TAIL PROBABILITY ESTIMATES

Nan Shen, Bárbara González-Arévalo, Luis Raúl Pericchi
"https://export.arxiv.org/pdf/1905.03426v2.pdf"
] | 148,574,175 | 1905.03426 | 49ae815fbc5e7350d4e276f654db5ebcb8ccda24 |
COMPARISON BETWEEN BAYESIAN AND FREQUENTIST TAIL PROBABILITY ESTIMATES
Nan Shen
Bárbara González-Arévalo
Luis Raúl Pericchi
COMPARISON BETWEEN BAYESIAN AND FREQUENTIST TAIL PROBABILITY ESTIMATES
Tail ProbabilityTaylor SeriesJensen's InequalityConvexity
Tail probability plays an important part in the extreme value theory. Sometimes the conclusions from two approaches for estimating the tail probability of extreme events, the Bayesian and the frequentist methods, can differ a lot. In 1999, a rainfall that caused more than 30,000 deaths in Venezuela was not captured by the simple frequentist extreme value techniques. However, this catastrophic rainfall was not surprising if the Bayesian inference was used to allow for parameter uncertainty and the full available data was exploited[2].In this paper, we investigate the reasons that the Bayesian estimator of the tail probability is always higher than the frequentist estimator. Sufficient conditions for this phenomenon are established both by using Jensen's Inequality and by looking at Taylor series approximations, both of which point to the convexity of the distribution function.
Introduction
Tragedies like the 9/11 attacks, earthquakes or volcanic eruptions are rare events, but they are always followed by catastrophic consequences. Estimating the probabilities of extreme events has become more important and urgent in recent decades [7]. Both large deviations theory [11] and extreme value theory, which is widely used in disciplines like actuarial science, environmental sciences and physics [6], investigate both the theoretical and practical problems arising from rare events [4] [5].
With the popularization of Bayesian statistics, we now have two approaches for evaluating the probability of the tail: Bayesian and frequentist [3] [12] [10]. However, these two methods sometimes reach very different conclusions. As the case in [2] shows, before 1999 simple frequentist extreme value techniques were used to predict future levels of extreme rainfall in Venezuela. In December 1999, a daily precipitation of more than 410 mm, almost three times the previously recorded daily maximum, was not captured by these techniques, and the event caused an estimated 30,000 deaths. Figure 1 in [2] shows that the precipitation of 1999 is exceptional even relative to the better fitting model under the frequentist MLE method. However, Figure 2 shows that the 1999 event could be anticipated if we use Bayesian inference to fully account for the uncertainties due to parameter estimation and exploit all available data. Table 1, taken from [1], gives the return level estimates of 410.4 mm, the 1999 annual maximum, using different models; from the table we can also see that the Bayesian models anticipate the 1999 event far better than the frequentist fits do. The lesson we learned from [1] and [2] is the motivation for our research. The reasons why these two methods give hugely different results, and especially why the Bayesian model usually gives a larger probability of the tail than the classic frequentist method, need to be investigated here. The Bayesian estimation of the probability of tails is well founded on probability theory: it is a marginal computation that integrates out the parameters of the tail. On the other hand, the "plug-in" insertion of a point estimate of the parameters is obviously an "ad-hoc" procedure not based on probability calculus but on an approximation, which may be close to the correct calculation at the center of the range but deteriorates spectacularly as we move away to the interesting tails, where extreme values occur.
Figure legend: including the 1999 data; excluding the 1999 data; •, empirical estimates based on the 49 available annual maxima.

1.1. Definitions and Goal. Let X be a random variable. Suppose X indicates the magnitude and intensity of an earthquake; then the tail probability P(X > a) identifies the probability of an earthquake occurrence when a is some value much greater than the mean. The Bayesian estimator of this probability is defined as
P_B(X > a) = ∫_Θ [1 − F(a|θ)] π(θ|x) dθ,
which is the expectation of the tail probability 1 − F(a|θ) under the posterior distribution π(θ|x) given the data x, where θ denotes the parameters of the distribution function and can be one-dimensional or generalized to a high-dimensional vector. The frequentist estimator is also called the plug-in estimator, defined as
P_F(X > a) = 1 − F(a|θ̂),

where θ̂ is the maximum likelihood estimator (MLE) of θ.
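A generic Monte Carlo sketch (ours, with hypothetical helper names) makes the contrast concrete: the Bayesian estimator averages the tail probability over posterior draws of θ, while the frequentist estimator plugs a single point estimate into the same survival function.

```python
import numpy as np

def tail_prob_bayes(a, posterior_draws, survival):
    """P_B(X > a): average of 1 - F(a | theta) over posterior samples of theta."""
    return np.mean([survival(a, theta) for theta in posterior_draws])

def tail_prob_plugin(a, theta_mle, survival):
    """P_F(X > a): plug the MLE into 1 - F(a | theta)."""
    return survival(a, theta_mle)
```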
Numerical experimental results point to the fact that the asymptotic behavior of the Bayesian estimator of the tail probability is usually higher than that of the frequentist estimator. We will use a very simple hierarchical model as an example to illustrate this phenomenon. Suppose we have a sequence of random variables x = (x₁, ..., xₙ) that follows the exponential distribution. The density of the exponential distribution is given by

f(x|λ) = (1/λ) e^{−x/λ},
where x ≥ 0 and λ > 0. So the tail probability is

φ(λ) = 1 − F(a|λ) = ∫_a^∞ f(x|λ) dx = e^{−a/λ}.
The marginal distribution can be calculated as

m(x) = ∫_0^∞ f(x|λ) π_J(λ) dλ = Γ(n) (Σ_{i=1}^n x_i)^{−n},

where we use the Jeffreys prior π_J(λ) ∝ 1/λ. From this, the posterior distribution is obtained as

π(λ|x) = (Σ_{i=1}^n x_i)^n / (Γ(n) λ^{n+1}) · exp(−Σ_{i=1}^n x_i / λ).

Figure 3: Plots of P_B(X > a)/P_F(X > a) for different a: (a) the ratio of P_B and P_F when a is moderate; (b) the ratio of P_B and P_F when a is large.
Then the Bayesian estimator of the tail probability is

P_B(X > a) = E[φ(λ)|x] = ∫_0^∞ φ(λ) π(λ|x) dλ = (1 + a/(n x̄))^{−n},

and the frequentist estimator of the tail probability is

P_F(X > a) = φ(λ̂ = x̄) = e^{−a/x̄}.
Note that P_B goes to zero at a polynomial rate, whereas P_F goes to zero at an exponential rate. So they are not even asymptotically equivalent, and thus it should be the case that

P_B(X > a)/P_F(X > a) → ∞ as a → ∞.
We conducted numerical experiments and plotted the ratio of P_B and P_F for different ranges of a. When a is a moderate number between 50 and 100, the ratio is greater than 1 and increases slowly. However, when a varies from 500 to 1000, the ratio grows extremely quickly. In other words, P_B(X > a)/P_F(X > a) → ∞ as a → ∞, which implies that the Bayesian estimate is asymptotically higher than the frequentist estimate.
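The closed-form comparison above is easy to reproduce; the following sketch (with an assumed sample size n and sample mean x̄) plots the ratio and shows the polynomial-versus-exponential decay directly:

```python
import numpy as np
import matplotlib.pyplot as plt

n, xbar = 20, 1.0                  # assumed sample size and sample mean
a = np.linspace(1.0, 700.0, 2000)  # keeps exp(-a/xbar) above double-precision underflow
p_bayes = (1.0 + a / (n * xbar)) ** (-n)   # polynomial decay
p_freq = np.exp(-a / xbar)                 # exponential decay

plt.semilogy(a, p_bayes / p_freq)
plt.xlabel("a")
plt.ylabel("P_B / P_F")
plt.show()
```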
We investigate the reasons for this behavior and the conditions under which it happens. We first use Jensen's inequality [13] [8], which states that if φ(θ) = 1 − F (a|θ) is a convex function of the random variable θ, then
φ(E[θ]) ≤ E[φ(θ)].
We then verify that the convexity conditions are met for certain distributions that are widely used in extreme value analysis, using a Taylor series approximation of the tail probability 1 − F(a|θ) around the MLE of θ, and plugging it into the difference between the Bayesian and the frequentist estimators, defined as

D(a) = P_B(X > a) − P_F(X > a).
In conclusion, we can show that the convexity of φ(θ) = 1 − F (a|θ) is a sufficient condition for P B > P F . We also verify this convexity condition for specific distributions which are widely used in extreme value theory.
Methodology
In this section, we investigate why the Bayesian estimator of the tail probability is usually asymptotically higher than the frequentist one. The method is to prove that if φ(θ) = 1 − F(a|θ) is convex, then we can apply Jensen's inequality directly. We then use the Taylor expansion of the tail probability 1 − F(a|θ) to verify the results we obtain, which leads to the same conditions on the distribution function F(a|θ).
2.1. Convexity Investigation Using Jensen's Inequality. For tail probability estimation, the Bayesian method gives P_B(X > a) = E_{π(θ|x)}[1 − F(a|θ)], while the frequentist method uses P_F(X > a) = 1 − F(a|θ̂).
To investigate the relation between P_B and P_F, Jensen's inequality tells us something similar, which we state formally as:

Theorem 2.1. Let (Ω, A, µ) be a probability space such that µ(Ω) = 1. If g : Ω → R^d is µ-integrable and φ is a convex function, then

φ(∫_Ω g dµ) ≤ ∫_Ω (φ ∘ g) dµ.
Note that here the measurable function g is our parameter θ, so Jensen's inequality gives φ(E[θ]) ≤ E[φ(θ)] [8]. The inequality we want to prove, however, is φ(θ̂) ≤ E[φ(θ)]. The following theorem and proof show that, as a → ∞, φ(θ̂) and φ(E(θ)) are quite close to each other, which implies that φ(θ̂) ≤ E[φ(θ)].
Theorem 2.2. Let X be a continuous random variable supported on a semi-infinite interval, usually [c, ∞) for some c, or on the whole real line. Let F(a|θ) be the cumulative distribution function (CDF) of X, where a is some extremely large number in the support, and suppose φ(θ) = 1 − F(a|θ) is a convex function. If θ̂ is the maximum likelihood estimate of the parameter θ, then φ(θ̂) ≤ E[φ(θ)].
Proof. We have

φ[E(θ)] − φ[θ̂] = (1 − F[a|E(θ)]) − (1 − F[a|θ̂])
             = ∫_a^∞ f(x|E(θ)) dx − ∫_a^∞ f(x|θ̂) dx
             ≤ ∫_a^∞ |f(x|θ̂) − f(x|E(θ))| dx.

Let us define g(x) = f(x|θ̂) − f(x|E(θ)). Then we can see that

∫_{−∞}^∞ |g(x)| dx ≤ ∫_{−∞}^∞ f(x|θ̂) dx + ∫_{−∞}^∞ f(x|E(θ)) dx = 1 + 1 = 2 < ∞,

which means that g(x) is an integrable function. This implies lim_{a→∞} ∫_a^∞ |g(x)| dx = 0, i.e.

lim_{a→∞} ∫_a^∞ |f(x|θ̂) − f(x|E(θ))| dx = 0.

Thus, for every ε > 0 there exists a such that ∫_a^∞ |f(x|θ̂) − f(x|E(θ))| dx < ε, which implies that there exists a such that |φ[θ̂] − φ[E(θ)]| < ε. Thus −ε < φ[θ̂] − φ[E(θ)] < ε, or equivalently φ[E(θ)] − ε < φ[θ̂] < φ[E(θ)] + ε.
Equality in Jensen's inequality holds only if our function φ is essentially constant. Supposing our function φ(θ) is strictly convex, which is true for most of the cases that we encounter, Jensen's inequality is strict as well, i.e. φ[E(θ)] < E[φ(θ)], which implies that there exists ε > 0 such that E[φ(θ)] ≥ φ[E(θ)] + ε. Hence, for this ε, as a → ∞ we have

E[φ(θ)] ≥ φ[E(θ)] + ε > φ[θ̂]. □
2.2. Taylor Expansion Examination.
In this section, we use the Taylor series of the tail probability to check the results obtained in the previous section. Let

D(a) = P_B(X > a) − P_F(X > a) = ∫_Θ [1 − F(a|θ)] π(θ|x) dθ − (1 − F(a|θ̂)),

which is the difference of the tail probabilities between the Bayesian and the frequentist estimators. The Taylor series of 1 − F(a|θ) at the MLE θ̂ of θ is given as
φ(θ) = 1 − F(a|θ)    (2.1)
     = 1 − F(a|θ̂) − ∇_θF(a|θ)|_{θ=θ̂} (θ − θ̂) − (1/2) H_θ(F(a|θ))|_{θ=θ̂} (θ − θ̂)² − R(θ),    (2.2)

R(θ) = (1/6) D³_θ(F(a|θ))|_{θ_L} (θ − θ̂)³,  where θ_L is between θ and θ̂,    (2.3)
where ∇_θF(a|θ) is the gradient of F(a|θ), with

(∇_θF(a|θ))_i = ∂F(a|θ)/∂θ_i,

H_θ is the Hessian matrix of dimension |θ| × |θ|, with entries

H_{ml} = ∂²F(a|θ)/∂θ_m ∂θ_l,

and D³_θ(F(a|θ)) is the third partial derivative of F(a|θ) with respect to θ, defined in a similar manner. Then D(a) can be rewritten as
D(a) = ∫_Θ [1 − F(a|θ)] π(θ|x) dθ − (1 − F(a|θ̂))
     = −∇_θF(a|θ)|_{θ̂} E_{π(θ|x)}(θ − θ̂) − (1/2) H_θ(F(a|θ))|_{θ̂} E_{π(θ|x)}(θ − θ̂)² − R*(θ).
Here we simplify the notation by writing dF(a|θ)/dθ|_{θ=θ̂} = F′(a|θ̂), d²F(a|θ)/dθ²|_{θ=θ̂} = F″(a|θ̂), and

R*(θ) = (1/6) D³_θ(F(a|θ))|_{θ_L} E_{π(θ|x)}(θ − θ̂)³.
In order for φ(θ) to be convex and D(a) to be positive, we would need the first term ∇_θF(a|θ)|_{θ̂} E_{π(θ|x)}(θ − θ̂) and the third term R*(θ) to go to zero asymptotically, and the second term H_θ(F(a|θ))|_{θ̂} E_{π(θ|x)}(θ − θ̂)² to be negative as a → ∞. In what follows, we show some examples with specific distributions widely used in extreme value analysis where this happens. We conjecture that this is true for a broad set of cumulative distribution functions, since it worked in all the examples we tried. This would be an interesting open problem to solve in the future.
Example 1. Exponential distribution
The density of the exponential distribution is given by

f(x|λ) = (1/λ) e^{−x/λ},

where x ≥ 0 and λ > 0. So the tail probability is

1 − F(a|λ) = ∫_a^∞ f(x|λ) dx = e^{−a/λ}.
Taking derivatives with respect to λ on both sides, we have

−dF(a|λ)/dλ = (a/λ²) e^{−a/λ};
−d²F(a|λ)/dλ² = (−2a/λ³ + a²/λ⁴) e^{−a/λ};
−d³F(a|λ)/dλ³ = (6a/λ⁴ − 6a²/λ⁵ + a³/λ⁶) e^{−a/λ}.
Suppose we have an i.i.d. sample x = (x₁, ..., xₙ) from f(x|λ); then the marginal distribution can be calculated as

m(x) = ∫_0^∞ f(x|λ) π_J(λ) dλ = Γ(n) (Σ_{i=1}^n x_i)^{−n},

where we use the Jeffreys prior π_J(λ) ∝ 1/λ, from which the posterior distribution is obtained as

π(λ|x) = (Σ_{i=1}^n x_i)^n / (Γ(n) λ^{n+1}) · exp(−Σ_{i=1}^n x_i / λ).

After some arithmetic manipulation, and using λ̂ = x̄, we obtain

E_{π(λ|x)}(λ − λ̂) = ∫_0^∞ (λ − λ̂) π(λ|x) dλ = x̄/(n − 1),
E_{π(λ|x)}(λ − λ̂)² = ∫_0^∞ (λ − λ̂)² π(λ|x) dλ = x̄² (n + 2)/((n − 1)(n − 2)),
E_{π(λ|x)}(λ − λ̂)³ = ∫_0^∞ (λ − λ̂)³ π(λ|x) dλ = x̄³ (7n + 6)/((n − 1)(n − 2)(n − 3)).
Plugging these terms into D(a), we have

D(a) = −dF(a|λ)/dλ|_{λ̂} E_{π(λ|x)}(λ − λ̂) − (1/2) d²F(a|λ)/dλ²|_{λ̂} E_{π(λ|x)}(λ − λ̂)² − (1/6) d³F(a|λ)/dλ³|_{λ_L} E_{π(λ|x)}(λ − λ̂)³
     = e^{−a/x̄} [ (a/x̄) · (−4)/((n − 1)(n − 2)) + (a²/(2x̄²)) · (n + 2)/((n − 1)(n − 2)) ]
       + e^{−a/λ_L} · ((a³ − 6λ_L a² + 6λ_L² a)/(6λ_L⁶)) · x̄³ (7n + 6)/((n − 1)(n − 2)(n − 3)).
Here we have a ≫ 0 and x̄ ≥ 0; λ_L is some number between λ and λ̂ = x̄, so λ_L ≥ 0. Since the term in a² dominates for large a, all of this implies D(a) ≥ 0. Then, to show that lim_{a→∞} R*(λ) = 0, it is sufficient to show that

lim_{a→∞} d³F(a|λ)/dλ³ |_{λ_L} = 0,

which we obtain simply by using L'Hospital's rule. In conclusion, the second derivative for the exponential distribution exp(λ) with respect to λ is

d²F(a|λ)/dλ² = (2 − a/λ)(a/λ³) e^{−a/λ}.

Since a is assumed to be some extreme number, this implies d²F(a|λ)/dλ² ≤ 0, i.e. the tail probability φ(λ) = 1 − F(a|λ) is convex.
Example 2. Pareto Distribution
Given the scale parameter β = 1 and the shape parameter α unknown, the pdf of the Pareto distribution is given by

f(x|α) = α x^{−α−1},

where x ≥ 1 and 0 < α < 1, and the cumulative distribution function is

F(x|α) = ∫_1^x f(t|α) dt = 1 − x^{−α},  x ≥ 1.

By setting the derivative of the log-likelihood equal to zero we get the MLE of α as α̂ = n/Σ_{i=1}^n log x_i. We are interested in calculating the tail probability when b is extremely large. Note that

φ(α) = 1 − F(b|α) = b^{−α}.

Taking derivatives of φ(α) with respect to α, we obtain

−dF(b|α)/dα = −b^{−α} ln b;  −d²F(b|α)/dα² = b^{−α} (ln b)²;  −d³F(b|α)/dα³ = −b^{−α} (ln b)³.
Using Jeffreys's prior π J (α) ∝ 1/α, we have
m(x) = 1 0 f (x|α)π J (α)dα = Γ(n, 0) − Γ(n, n i=1 ln x i ) n i=1 x i [ n i=1 ln x i ] n ,
where the upper incomplete gamma function is defined as Γ(s, x) = ∫_x^∞ t^{s−1} e^{−t} dt. Then the posterior distribution is given by
π(α|x) = L(α|x) π_J(α) / m(x) = α^{n−1} (∏_{i=1}^n x_i)^{−α} [Σ_{i=1}^n ln x_i]^n / [Γ(n, 0) − Γ(n, Σ_{i=1}^n ln x_i)]
Using the properties of the incomplete gamma function and integration by parts, we find the recurrence relation Γ(s + 1, x) = s Γ(s, x) + x^s e^{−x}. We obtain
E_π(α|x)(α − α̂) = ∫_0^1 (α − α̂) π(α|x) dα = − (Σ_{i=1}^n ln x_i)^{n−1} / ( ∏_{i=1}^n x_i [Γ(n, 0) − Γ(n, Σ_{i=1}^n ln x_i)] ),
E_π(α|x)(α − α̂)² = n / (Σ_{i=1}^n ln x_i)² + [ (n − 1)(Σ_{i=1}^n ln x_i)^{n−2} − (Σ_{i=1}^n ln x_i)^{n−1} ] / ( ∏_{i=1}^n x_i [Γ(n, 0) − Γ(n, Σ_{i=1}^n ln x_i)] ),
E_π(α|x)(α − α̂)³ = 2n / (Σ_{i=1}^n ln x_i)³ − (Σ_{i=1}^n ln x_i)^{n−3} [ n² + 2 + (2 − 2n)(Σ_{i=1}^n ln x_i) + (Σ_{i=1}^n ln x_i)² ] / ( ∏_{i=1}^n x_i [Γ(n, 0) − Γ(n, Σ_{i=1}^n ln x_i)] ).
To show D(b) ≥ 0, it is equivalent to show that the first term in the expression of D(b), after plugging in the Taylor expansion of 1 − F(b|α), goes to zero as b → ∞, which can be obtained using L'Hospital's rule. We also need to show that the second term d²F(b|α)/dα²|_α̂ E_π(α|x)(α − α̂)² is asymptotically negative. We can see this from the fact that
d²F(b|α)/dα²|_α̂ = −b^{−α̂} (ln b)² ≤ 0 and E_π(α|x)(α − α̂)² ≥ 0.
Then, to show that lim_{b→∞} R*(α) = 0, it is sufficient to show that
lim_{b→∞} d³F(b|α)/dα³|_{α_L} = 0.
This follows from L'Hospital's rule. In conclusion, the second derivative for the Pareto distribution is
d²F(b|α)/dα² = −b^{−α} (ln b)²
Since b is assumed to be extreme, this implies d²F(b|α)/dα² ≤ 0, i.e., the tail probability φ(α) = 1 − F(b|α) is convex.
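The same comparison can be checked numerically for this example too (again only an illustrative sketch): the posterior is proportional to α^{n−1} e^{−αs} on (0, 1) with s = Σ ln x_i, so the Bayesian tail estimate ∫ b^{−α} π(α|x) dα can be evaluated by quadrature and compared with the plug-in b^{−α̂}:

import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
n, alpha_true = 40, 0.7
x = (1.0 - rng.random(n)) ** (-1.0 / alpha_true)  # Pareto(alpha, scale 1) via inverse CDF

s = np.log(x).sum()
alpha_hat = n / s                                  # MLE of the shape parameter
b = 1e4                                            # extreme threshold

w = lambda a: a ** (n - 1) * np.exp(-a * s)        # unnormalized posterior on (0, 1)
norm, _ = quad(w, 0.0, 1.0)
num, _ = quad(lambda a: b ** (-a) * w(a), 0.0, 1.0)

bayes_tail = num / norm                            # E_pi(alpha|x)[b^(-alpha)]
freq_tail = b ** (-alpha_hat)                      # plug-in 1 - F(b | alpha_hat)
print(bayes_tail, freq_tail)                       # the Bayesian value comes out larger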
Example 3. Normal Distribution
The normal distribution with unknown standard deviation σ and expectation µ is a case where the parameter is a two-dimensional vector, i.e., θ = (µ, σ). Since x|µ, σ ∼ N(µ, σ²), the CDF at x = a is
F(a|µ, σ) = (1/2) [1 + erf((a − µ)/(σ√2))] = 1/2 + (1/√π) ∫_0^{(a−µ)/(σ√2)} e^{−t²} dt
where erf(x) is the error function, defined as erf(x) = (2/√π) ∫_0^x e^{−t²} dt. Looking at the Hessian matrix, we have
H = [ ∂²F(a|µ,σ)/∂µ²   ∂²F(a|µ,σ)/∂σ∂µ ; ∂²F(a|µ,σ)/∂µ∂σ   ∂²F(a|µ,σ)/∂σ² ],  with
∂²F/∂µ² = −((a − µ)/(√(2π) σ³)) e^{−((a−µ)/(σ√2))²},
∂²F/∂µ∂σ = ∂²F/∂σ∂µ = −(1/(√(2π) σ²)) e^{−((a−µ)/(σ√2))²} ((a − µ)²/σ² − 1),
∂²F/∂σ² = −((a − µ)/(√(2π) σ³)) e^{−((a−µ)/(σ√2))²} ((a − µ)²/σ² − 2).
To show that H is negative definite for large a, we need to show that v^T H v < 0 for all v^T = (v₁, v₂) ≠ 0.
By tedious calculation we have
v^T H v = −(1/(√(2π) σ²)) e^{−((a−µ)/(σ√2))²} [ (v₁² − 2v₂²)((a − µ)/σ) + 2v₁v₂((a − µ)/σ)² + v₂²((a − µ)/σ)³ − 2v₁v₂ ]
Since a is assumed to be an extremely large number, a − µ > 0 and the leading term in the bracket is v₂²((a − µ)/σ)³, which is positive. Hence v^T H v < 0, i.e., H is negative definite for large a, as expected. In other words, φ(µ, σ) = 1 − F(a|µ, σ) is a convex function.
(Figure 4 panels: (a) Normal Distribution; (b) Exponential Distribution; (c) Pareto Distribution.)
Conclusions
Bayesian and frequentist estimations of the tail probability sometimes differ hugely, which can have serious consequences. Thus in practice we have to take both results into consideration. Specifically, the Bayesian method always estimates the tail probability higher than the frequentist model does. The Bayesian estimation of tail probabilities is well founded on probability theory: it is a marginal computation that integrates out the parameters of the tail. The frequentist estimation, on the other hand, is an approximation.
By looking at the Taylor expansion of the tail and investigating the convexity of the distribution function, we claim that whether the Bayesian estimator for the tail probability is higher than the frequentist estimator depends on how φ(θ) = 1 − F(a|θ) is shaped. The condition that φ(θ) is strictly convex is equivalent to H_θ F(a|θ) < 0. Other examples (only continuous cases with infinite support), such as the Cauchy, Logistic, Log-normal, Double Exponential, and Weibull distributions, also satisfy our convexity conditions.
However, convexity is in general a much stronger requirement than Jensen's inequality needs: φ(θ) = 1 − F(a|θ) being convex, or equivalently H_θ F(a|θ) < 0, is only a sufficient condition for Jensen's inequality to hold, not a necessary one. There are distributions with H_θ F(a|θ) ≥ 0 for which the Bayesian estimator of the tail probability is still higher than the frequentist approximation. In [9], conditions on the random variable that make this other direction work were found; we will discuss them in future work.
Acknowledgment
We gratefully thank the Royal Statistical Society for permission to reuse Figures 1 and 2 of Coles and Pericchi [2], Applied Statistics (2003), Volume 52, Issue 4, pages 405-416, published by Wiley.
Figure (1) Return level plots for models fitted to the annual maximum Venezuelan rainfall data( , GEV model, maximum likelihood estimates; , GEV model, limits of the 95% confidence intervals; , Gumbel model, maximum likelihood estimates; , Gumbel model, limits of the 95% confidence intervals; •, empirical estimates based on the complete 49-year data set): (a) excluding the 1999 data; (b) including the 1999 data.
Figure (2) Predictive distributions for seasonal models fitted to the Venezuelan rainfall data.
Figure (4) Plots of distribution functions when parameters are variables. Note that in (a) the parameter θ is the mean of the normal distribution, i.e., in this case σ is given, and we can see that F(a|θ) is concave down when a > θ.
Table (1) Return level estimates of 410.4 mm, the 1999 annual maximum, using different models and modes of inference. For the MLE analysis the values correspond to the maximum likelihood estimates of the return period; for the Bayesian analysis the values are the predictive return periods. The Bayesian method gives a much smaller return period than frequentist Maximum Likelihood Estimation (MLE) under different models. From [1] (with permission).

Return period of 410.4 mm
Mode of Inference | Model | 1999 data excluded | 1999 datum included
MLE | Gumbel | 17,600,000 | 737,000
MLE | GEV | 4280 | 302
Bayes | Gumbel | 2,170,000 | 233,000
Bayes | GEV | 660 | 177
[1] Stuart Coles, Luis Raúl Pericchi, and Scott Sisson (2003) A fully probabilistic approach to extreme rainfall modeling. Journal of Hydrology.
[2] Stuart Coles and Luis Pericchi (2003) Anticipating catastrophes through extreme value modelling. Journal of the Royal Statistical Society: Series C (Applied Statistics).
[3] R. L. Smith (1999) Bayesian and frequentist approaches to parametric predictive inference (with discussion). In Bayesian Statistics 6 (eds J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith). Oxford: Oxford University Press.
[4] R. D. Reiss and M. Thomas (1997) Statistical Analysis of Extreme Values. Birkhäuser, Basel.
[5] S. Coles (2001) An Introduction to Statistical Modeling of Extreme Values. Springer Series in Statistics. Springer, London.
[6] B. Finkenstadt and H. Rootzén (2003) Extreme Values in Finance, Telecommunications, and the Environment.
[7] Aaron Clauset and Ryan Woodard (2013) Estimating the historical and future probabilities of large terrorist events.
[8] George Casella and Roger L. Berger (2001) Statistical Inference, 2nd edition.
[9] A. Guessab and G. Schmeisser (2013) Necessary and sufficient conditions for the validity of Jensen's inequality.
[10] Cibele N. Behrens, Hedibert F. Lopes, and Dani Gamerman (2004) Bayesian analysis of extreme events with threshold estimation.
[11] Firas Rassoul-Agha and Timo Seppäläinen (2014) A Course on Large Deviations with an Introduction to Gibbs Measures.
[12] D. A. S. Fraser, N. Reid, and J. Wu. A simple general formula for tail probabilities for frequentist and Bayesian inference.
[13] Z. Cvetkovski (2012) Convexity, Jensen's Inequality. In: Inequalities. Springer, Berlin, Heidelberg.
| [] |
[
"Optimizing Non-Autoregressive Transformers with Contrastive Learning",
"Optimizing Non-Autoregressive Transformers with Contrastive Learning"
] | [
"Chenxin An \nThe University of Hong Kong\n\n\nFudan University\n\n",
"Jiangtao Feng ",
"Fei Huang [email protected] \nThe CoAI group\nTsinghua University\n\n",
"Xipeng Qiu [email protected]@gmail.com \nFudan University\n\n",
"Lingpeng Kong \nThe University of Hong Kong\n\n"
] | [
"The University of Hong Kong\n",
"Fudan University\n",
"The CoAI group\nTsinghua University\n",
"Fudan University\n",
"The University of Hong Kong\n"
] | [] | Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order. They have achieved remarkable progress in machine translation as well as many other applications. However, a long-standing challenge for NATs is the learning of multi-modality data distribution, which is the main cause of the performance gap between NATs and ATs. In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution. We derive contrastive constraints to stabilize the training process and integrate this resulting objective with the state-of-the-art NAT architecture DA-Transformer [18]. Our model CODAT is examined on 3 different tasks, including machine translation, text summarization, and paraphrasing with 5 benchmarks. Results show that our approach outperforms previous non-autoregressive baselines by a significant margin and establishes new state-of-the-art results for non-autoregressive transformers on all the benchmarks.1We use the word multi-modality in the context of machine translation literature, where it refers to to the phenomenon that there are multiple acceptable translations in the ground-truth data distribution and all of them are used as training labels for the same input.Preprint. Under review. | 10.48550/arxiv.2305.13667 | [
"https://export.arxiv.org/pdf/2305.13667v2.pdf"
] | 258,841,342 | 2305.13667 | 229eedf42b820123ef6eeecb0f8d0481d3d1039d |
Optimizing Non-Autoregressive Transformers with Contrastive Learning
Chenxin An
The University of Hong Kong
Fudan University
Jiangtao Feng
Fei Huang [email protected]
The CoAI group
Tsinghua University
Xipeng Qiu [email protected]@gmail.com
Fudan University
Lingpeng Kong
The University of Hong Kong
Optimizing Non-Autoregressive Transformers with Contrastive Learning
Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order. They have achieved remarkable progress in machine translation as well as many other applications. However, a long-standing challenge for NATs is the learning of multi-modality data distribution, which is the main cause of the performance gap between NATs and ATs. In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution. We derive contrastive constraints to stabilize the training process and integrate this resulting objective with the state-of-the-art NAT architecture DA-Transformer [18]. Our model CODAT is examined on 3 different tasks, including machine translation, text summarization, and paraphrasing with 5 benchmarks. Results show that our approach outperforms previous non-autoregressive baselines by a significant margin and establishes new state-of-the-art results for non-autoregressive transformers on all the benchmarks.1We use the word multi-modality in the context of machine translation literature, where it refers to to the phenomenon that there are multiple acceptable translations in the ground-truth data distribution and all of them are used as training labels for the same input.Preprint. Under review.
Introduction
Autoregressive Transformers (ATs) have become the dominant architecture for text generation, but token-by-token decoding usually leads to inefficiency in the inference stage. Non-autoregressive Transformers (NATs) [9,12,13,18] significantly reduce decoding latency by removing the dependency between target tokens and generating the whole sequence in parallel.
Despite the fast decoding speed, the main challenge of NATs lies in the learning of the ground-truth data distribution, which often has a large number of modalities [13]. 1 ATs mitigate this problem by treating sequence generation as a gradual modality collapsing process [13,37]. As the generation of later tokens is conditioned on the previous ones, it is unlikely for the model to flip around different modalities. NATs, on the other hand, generate all the tokens all at once, hence prone to generate tokens from mixed modalities in one sequence, which strongly hurts their performance.
A common fix for this issue is to directly reduce the number of modalities of the original data distribution by knowledge distillation through an autoregressive model [13]. Intuitively, this step regenerates the training set using an autoregressive model learned from the original data distribution, making it more manageable for the NATs but also introducing a redundant pipeline.
Figure 1: (a) shows a bimodal data distribution (blue contours). Orange contours in (b) show a single-modal model distribution q(·|x) optimized to fit the data distribution with samples from the bimodal data distribution, y ∼ p(·|x). (c) is similar to (b), but the model distribution is optimized to a different local minimum with samples from the captured modality of the model distribution, y ∼ q(·|x).
Recent efforts start to scrutinize NATs' conditional independence assumption and address the multi-modality challenge mainly by learning target alignments [6,9,18,27] and enhancing the input [8,37].
In this work, we propose to tackle this problem from a new perspective, with the goal of bypassing the modalities that are difficult for the model to capture in the learning procedure. Our method starts from reverse Kullback-Leibler (KL) divergence optimization, which effectively converges to a single local modality given a complex multi-modal target distribution (Bishop and Nasrabadi [3]; Figure 1). The connection between modality learning and model distribution optimization is further explained in §3.1. To stabilize the training and prevent collapsing, we derive a set of constraints from the data distribution and impose them on the model distribution, which in theory leads to a contrastive learning objective (§3.2). Finally, we show how to integrate this objective with the state-of-the-art NAT architecture DA-Transformer [18] (§3.3).
We test the performance of our model, CODAT, on three generation tasks with five benchmarks. Experiments on two major WMT machine translation benchmarks demonstrate that our approach substantially improves the performance over the strong baseline DA-Transformer and establishes new state-of-the-art results when trained directly on the raw datasets (§4.1). Similar to machine translation, we also achieve the best NAT result on paraphrasing (§4.1), and impressive results on non-autoregressive summarization: CODAT exceeds the autoregressive model on the two widely used summarization benchmarks XSum [32] and Gigaword [38].
Background
Non-autoregressive Generation Consider predicting a target sequence y = {y 1 , y 2 , . . . , y n } with a source sequence x = {x 1 , x 2 , . . . , x m }, where n and m are target and source sequence length respectively. Typical autoregressive transformers model the conditional probability p(y|x) via autoregressive decomposition as:
p(y|x) = ∏_{i=1}^n p(y_i | y_{<i}, x).  (1)
Non-autoregressive Transformers, in contrast, factorize p(y|x) under the conditional independence assumption:
p(y|x) = ∏_{i=1}^n p(y_i | x).  (2)
With this assumption, NATs are able to drop the left-to-right dependencies and decode the target sequence in parallel.
DP Training for NATs The strict position alignment between predicted and target tokens [9,12] in vanilla NAT poorly captures the multi-modal data distribution, which typically results in generated tokens from mixed modalities and repeated words [13]. Dynamic Programming (DP) training [6,9,27] greatly alleviates this problem by introducing a long decoder length and alignment-based training objectives that marginalize all the possible alignments that yield the ground truth.
Take the latest work DAT (DA-Transformer) [18] as an example: given the ground-truth sequence y = {y₁, y₂, ..., y_n} of length n and a decoder length L ≥ n, log q(y|x) is defined as
log q(y|x) = log Σ_{a∈Γ} q(y, a|x) = log Σ_{a∈Γ} q(y|a, x) · q(a|x),
q(a|x) = ∏_{i=1}^{n−1} E_{a_i, a_{i+1}},  q(y|a, x) = ∏_{i=1}^{n} softmax(W_vocab h_{a_i}),  (3)
where a is a set of decoder position indexes sorted in ascending order with size |a| = n, and Γ contains all possible a, of which there are (L choose n). For example, target length n = 3 and L = 6 mean Γ contains (6 choose 3) = 20 possible a, with a ∈ {0, 1, 2}, {0, 1, 3}, ..., {3, 4, 5}. h_{a_i} denotes the a_i-th decoder hidden state. q(y|a, x) is the token prediction probability of the generation model. q(a|x) is given by the transition matrix E ∈ R^{L×L} modeling the first-order dependency between decoder position indexes, where E_{a_i, a_{i+1}} is the transition probability from index a_i to index a_{i+1}, predicted from the decoder hidden states. Enumerating all a ∈ Γ would suffer from a combinatorial explosion, but E can luckily be trained with a dynamic programming algorithm; details can be found in Huang et al. [18]. In Figure 4 (Appendix), we present an example that highlights 1) the difference between a Dynamic Programming (DP) model and a vanilla model, and 2) the calculation of q(a|x) and q(y|a, x). For the decoding procedure, please refer to Appendix E. Training with dynamic programming is essential for NATs on raw data; otherwise, their performance lags notably behind that of ATs.
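To make the marginalization in Eq. 3 concrete, it can be computed with a standard forward recursion over decoder positions instead of enumerating Γ. Below is a minimal numpy sketch with toy sizes (our own illustration; it omits batching, the constraint that paths start at position 0, and the numerical details of the official implementation):

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
L, V, n = 8, 10, 3                 # decoder length, vocab size, target length
y = [2, 5, 7]                      # toy target token ids

token_logp = np.log(rng.dirichlet(np.ones(V), size=L))  # log softmax(W_vocab h_a) per position
E = np.log(rng.dirichlet(np.ones(L), size=L))           # log transition matrix E[a_i, a_{i+1}]

# f[i, a] = log-prob of emitting y_1..y_i along some ascending index path ending at a
f = np.full((n, L), -np.inf)
f[0, :] = token_logp[:, y[0]]
for i in range(1, n):
    for a in range(i, L):
        f[i, a] = logsumexp(f[i - 1, :a] + E[:a, a]) + token_logp[a, y[i]]

print(logsumexp(f[n - 1]))         # log q(y|x), marginalized over all alignments in O(n L^2)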
Method
Learning on Model Distribution
Unlike previous efforts that mitigate multi-modality problems via architecture design or decoding algorithm, we attribute the multi-modality problem to the learning objective, maximum likelihood estimation (MLE). MLE minimizes the KL divergence between the data distribution p(·|x) and the model distribution q(·|x), which can be formulated as follows,
KL(p‖q) = E_{y∼p(·|x)}[log(p(y|x)/q(y|x))] = −H(p) + E_{y∼p(·|x)}[−log q(y|x)]  (4)
where H(p) denotes the entropy of the data distribution and is a constant. Note that MLE requires sampling from the data distribution p(·|x), which forces the NAT model to assign a reasonable probability to every sampled y. We argue that this objective can be too difficult for NAT models given their limited capacity, and may result in low-quality predictions with mixed modalities.
In this work, we suggest that the above problem can be alleviated by replacing the samples from p(·|x) with samples from q(·|x), thereby dropping some modalities that are too difficult for NATs to capture. Specifically, we propose a generalized divergence D(q‖p) as our objective:
D(q‖p) = E_{y′∼q(·|x)}[M_{p,q}(y′|x)],  (5)
where M_{p,q}(y′|x) measures the discrepancy between the model and data distributions given a sample y′ from the simpler model distribution.
To better explain our intuition, we illustrate an example in Figure 1. In the previous objective, E_{y∼p(·|x)}[·] optimizes samples from all modalities in the data distribution p(·|x) by assigning them importance rates p(y|x), thereby leading to a mixed-modal distribution (see Figure 1b); in contrast, E_{y′∼q(·|x)}[·] focuses on samples within a captured modality of the model distribution q(·|x), assigning low importance to uncaptured modalities and continuing to optimize the captured ones (see Figure 1c). The final training objective is the sum of KL(p‖q) and D(q‖p); KL(p‖q) acts as a regularization to prevent collapse when the samples from the model distribution are of low quality.
Our idea is connected to previous work that optimizes the reverse KL based on reinforcement learning² [2,48]. Note that the reverse KL(q‖p) is a special case of Eq. 5, adopting log(q(y′|x)/p(y′|x)) as the discrepancy measure. However, reinforcement learning necessitates a well-structured parameter space for initialization [5], and in our preliminary experiments the original KL loss optimized by DP training was so easily perturbed by the RL loss that performance improvements were ultimately impeded (see Figure 2a). In the next section, we bypass the difficulty of directly optimizing the absolute reward distribution by deriving more flexible necessary conditions and optimizing the proposed objective contrastively.
A Contrastive Learning Objective
Having established the connection between NATs' multi-modality challenge and model distribution optimization in the previous discussion, we now focus on how to concretize a discrepancy measure M_{p,q}(y′|x) for NAT models in a contrastive way. The general methodology is to investigate constraints that are sufficient and necessary for the data distribution, and then impose these constraints on the model distribution to guide the learning process.
With y′ sampled from the model distribution, it is usually unobserved in the enormous target space: with high probability it does not appear in a dataset generated by an unknown distribution. Thus it is intractable to quantify the likelihood p(y′|x). A practical estimation is to introduce a reward-based distribution p_R(·|x, y), i.e.,
p_R(y′|x, y) = (1/Z_R) exp(R(y, y′)),  (x, y) ∼ D,  (6)
where R(·, ·) is a reward function measuring (y, y )'s divergence, and Z R denotes the normalizer.
We here use BLEU [34] as a lexical measure for optimization.
We then seek a series of contrastive conditions that are sufficient and necessary to generate the data distribution p R (·|x, y):
∀y⁺, y⁻:  log(p_R(y⁺|x, y)/p_R(y⁻|x, y)) = Δ(y⁺, y⁻|y),  Δ(y⁺, y⁻|y) = R(y⁺, y) − R(y⁻, y),  (7)
where y⁺ and y⁻ are two samples in the target space satisfying R(y⁺, y) > R(y⁻, y), without loss of generality, and Δ(y⁺, y⁻|y) represents the reward gap of (y⁺, y⁻). A detailed proof of necessity and sufficiency is given in Appendix B. We here focus on the necessity, which will be treated as a constraint on the model distribution. Consider a bundle of generated sequences {y^k ∼ q(·|x)}_{k=1}^K satisfying ∀i > j: R(y^i, y) > R(y^j, y), where K is the number of samples. Assuming a small positive lower bound Δ_LB, namely ∀i > j: Δ(y^i, y^j|y) ≥ Δ_LB, we have looser pairwise necessary conditions:
∀i > j:  log(p_R(y^i|x, y)/p_R(y^j|x, y)) ≥ Δ_LB,  (8)
where Δ_LB is treated as a hyper-parameter. We neglect the event R(y^i, y) = R(y^j, y), whose probability is negligible. By taking {y^{j+1}, ..., y^{i−1}} as intermediate states, we derive stronger necessary conditions,
∀i > j:  log(p_R(y^i|x, y)/p_R(y^j|x, y)) ≥ (i − j) Δ_LB,  (9)
which is also a ranking-based condition. Finally, we impose the conditions in Eq. 9 as constraints on the model distribution q(·|x) and penalize q(·|x) for violating them:
∀i > j:  L_{i,j} = max{0, −log q(y^i|x) + log q(y^j|x) + (i − j) Δ_LB}.  (10)
In typical decomposition-based sequence modeling, q(·|x) is heavily biased by sequence length: shorter sequences obtain higher probability and are favored [48]. To eliminate this length bias, we normalize q(·|x) by the target length ‖y′‖. In experiments, we implement Eq. 10 by 1) sampling K hypotheses from the model distribution, 2) constructing the K(K−1)/2 pairs, and 3) calculating L_{i,j} for each pair. Eq. 10 is a contrastive objective that realizes M_{p,q}(·|x) in Eq. 5 by penalizing violated constraints; we show in Appendix C that Eq. 10 can be written in a form similar to Eq. 5, yet it is crucial to note that they are not strictly equivalent. When optimizing with the contrastive constraints, the original regularization (DP) loss is much less likely to increase noticeably (Figure 2b).
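A minimal PyTorch sketch of the resulting pairwise objective in Eq. 10 (our own illustrative code, not the official implementation), assuming the K sampled hypotheses are already sorted by descending reward, e.g., BLEU against the reference:

import torch

def contrastive_loss(logp, lengths, margin=1e-3):
    # logp[k]: sequence log-probability of the k-th hypothesis; hypotheses are
    # sorted so that reward decreases with k. lengths removes the length bias.
    scores = logp / lengths                       # length-normalized log q(y'|x)
    K = scores.size(0)
    i, j = torch.triu_indices(K, K, offset=1)     # all pairs where i outranks j
    gap = (j - i).to(scores.dtype)                # rank gap scales the margin
    return torch.relu(scores[j] - scores[i] + gap * margin).mean()

# toy usage: 4 hypotheses, already sorted by descending BLEU
logp = torch.tensor([-8.0, -7.5, -9.0, -9.2], requires_grad=True)
lengths = torch.tensor([10.0, 9.0, 11.0, 10.0])
loss = contrastive_loss(logp, lengths)
loss.backward()                                   # gradients push better-ranked scores up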
Implementation on NATs
Practical implementation of the contrastive loss requires sampling multiple hypotheses from the model distribution and ranking the likelihood of the generated sequences according to an evaluation metric such as BLEU. On vanilla NATs, the sampling algorithm can be Noisy Parallel Decoding [4,13], which randomly adds Gaussian noise to the word embedding layer to generate diverse output, but this involves sampling_size additional forward passes and consequently increases training cost. Results on vanilla NATs are shown in Table 2. For DP-based models, sampling positive-negative examples adds no obvious training cost, since the sampling process amounts to combining different decoder predictions (see Figure 4 in the Appendix) and can typically be implemented by beam search or nucleus sampling [17].
We select the previous best DP-based model, DA-Transformer, and its pretrained version (for summarization tasks) as our base models in the main experiments, for two reasons. i) Accuracy: samples used to minimize the divergence in Eq. 5 are expected to be high-quality and single-modal; if all training samples are of low quality, the model will be tuned in the wrong direction. Building on the state of the art provides a high-quality sampling space that stabilizes the optimization process and benefits performance. ii) Efficiency: DA-Transformer is a DP-based model, so sampling multiple hypotheses usually does not incur more cost as the sample size increases. As mentioned before, DA-Transformer and most DP-based models have an extended decoder length to relax the strict alignment between decoder predictions and target words. Sampling hypotheses from DP-based models is in fact equivalent to combining the predicted words from different decoder positions: a generated sequence (of length < L) is a subset of all L predicted words (see the example in Figure 4, Appendix), requiring no extra floating-point operations or forward passes (see Appendix E).
In experiments, we find that contrasting low-likelihood sequences has an adverse effect on the original DP training loss, resulting in instability and unsatisfactory results during training. To avoid training on such samples, we adopt two filtering tricks: 1) hypothesis-level filtering: sample a larger number of hypotheses first and keep only the top 25% for training; 2) sample-level filtering: some challenging training samples exhibit significantly low sequence likelihood on targets from the data distribution and usually have a low-quality sampling space, and persisting in optimizing on them with model-generated sequences ultimately hurts performance (in both the reinforcement learning and the contrastive learning settings). We therefore exclude training samples whose DP training loss exceeds α when optimizing the discrepancy M_{p,q}(y′|x) with target sequences from the model distribution. In practice, α is set to the value of E_{y∼V}[−log q(y|x)], where V denotes the validation set.
Experiments
We verify CODAT on three different text generation tasks (machine translation, summarization, and paraphrasing) with five benchmarks. We show that the results of non-autoregressive Transformers can be boosted to a new level with the contrastive learning framework. In particular, we exceed the autoregressive Transformer on the two summarization datasets and achieve an average improvement of 0.83 BLEU on the translation and paraphrase datasets. Training details and hyperparameters can be found in Appendix D.
Quantitative Results
This section shows the results of CODAT on several mainstream text generation benchmarks. To demonstrate the stronger modality learning ability of our method, we use the raw datasets directly for machine translation and paraphrasing; for summarization we use distillation data, following previous work. We use the most standard evaluation metrics for each task in our main results.
Machine Translation For machine translation, we evaluate CODAT on the WMT14 English↔German translation task (4M samples) and the WMT17 English↔Chinese translation task (20M samples), and follow the default test and validation splits of FairSeq [33]. We evaluate our model on all datasets with tokenized BLEU except WMT17 En→Zh, which is assessed with SacreBLEU [35]. We also reproduce the autoregressive Transformer and DA-Transformer (DAT) in the same running environment as CODAT. Since previously reported results were trained with 100k steps and a batch size of 32k tokens, our implementation achieves higher performance. The main results are shown in Table 1. Among all iterative NATs and fully NATs, our method CODAT achieves the best result and outperforms the previous state of the art by a remarkable margin. With our contrastive learning method, the average performance gap between the autoregressive and non-autoregressive Transformer is reduced to 0.4. We also show that directly optimizing the reward brings improvements over the original DAT; however, due to its negative impact on the original training loss, optimizing the constraints usually yields higher performance. All hypotheses are sampled from the model distribution in parallel, without normalizing and selecting at each step as in beam search, so CODAT still maintains a 10.2× speedup during inference with only native PyTorch operations.
Paraphrase For paraphrasing, we adopt the Quora Question Pairs (QQP)³ dataset, collected from the Quora community question-answering forum, with 147k training pairs. We evaluate generated results with BLEU and ROUGE-L. The performance gap between CODAT and the autoregressive baseline is reduced to 0.02 BLEU, and we even obtain a 0.6 improvement over the autoregressive model in terms of ROUGE-L.
Summarization For summarization, we achieve a new state of the art for NATs on two widely used news summarization datasets, XSum [32] and Gigaword [38]. Gigaword has 4M article-headline pairs and XSum has 227k articles with human-written summaries from BBC News. We use ROUGE [28] as the evaluation metric. For a fair comparison with previous work, we also perform knowledge distillation before training. Table 4 shows the ROUGE scores on the summarization datasets. As for machine translation, we also implement an autoregressive Transformer in the same running environment, e.g., the same knowledge distillation dataset and training steps. CODAT surpasses the autoregressive baselines on both XSum and Gigaword. Overall, CODAT improves DAT by more than 0.5 ROUGE-1 and improves the pretrained version of DAT by 1.07 ROUGE-1. On XSum, CODAT without pretraining obtains on-par performance with BANG [36], which is pretrained on a large-scale unlabeled corpus, and exceeds BANG by 2.44 ROUGE-2. CODAT achieves a speedup ratio of ∼8.4× on XSum and ∼6.0× on Gigaword compared with the autoregressive baseline. In addition, we validate our method on the strong pretrained version of DAT [19]; CODAT (pretrained) even exceeds many strong pretrained AT baselines on XSum.
Discussion
Reducing Data Modality Normalized Corpus-level Multimodality (NCM) [44] is a metric for measuring corpus-level multimodality for NATs (the lower, the better). Given a translation dataset D, the NCM is
NCM = E_{(x,y)∼D}[−log p(y|x)] / E_{y∼D}[|y|]
. A non-autoregressive model concurrently considering numerous potential translations, it tends to distribute excess probability across different sequences. As a result, it ends up creating translations that are less likely. We validate the NCM of CODAT and DAT on WMT14 En→De and De→En translation task. Results show that we reduce the NCM of DAT from 0.86 to 0.72 on En→De translation and from 0.91 to 0.79 on De→En.
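For reference, NCM is straightforward to compute from per-sentence negative log-likelihoods; a small sketch (our own helper, with hypothetical inputs):

import numpy as np

def ncm(sentence_nll, target_lengths):
    # mean total -log p(y|x) over the corpus divided by the mean target length;
    # lower values mean less probability mass spread across competing modalities
    return np.mean(sentence_nll) / np.mean(target_lengths)

print(ncm([35.2, 41.0, 28.7], [42, 51, 33]))  # toy corpus of three sentences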
But there is no widely accepted metric for modality learning of NATs, we also provide a case study from WMT14 De→En in Table 5 to showcase that the first-order dependency DP loss can not guarantee generated texts are single modal. The top-3 hypotheses sampled from DAT in Table 5 are all mixed modal translations. For example, the first hypothesis from DAT is a mixed translation Ground truth:
According to the proposal, parking fees are to be increased by 50 percent.
Outputs from DAT:
1. There is an increase in parking charges should be increased by 50 %.
2. The proposed increase in parking charges should therefore be increased by 50 %.
3. Accordingly, the increase in parking charges are to be increased by 50 %.
Outputs from CoDAT:
1. The proposal is that parking charges should therefore be increased by 50 %.
2. According to it, the parking charges should be increased by 50 %.
3. According to it, there will be increase in parking charges should be increased by 50 %.
The first DAT hypothesis above mixes "There is a 50% increase in parking charges" and "parking charges should be increased by 50%": "charges" and "should" are very likely to appear together in the training corpus, so this pair is assigned a high transition score. But mixing words and phrases from two modalities always results in a bad translation at evaluation time. We show that the problem can be alleviated via training with contrastive constraints, which improves the ranking of high-quality, single-modal hypotheses (only one of the CoDAT translations mixes words from different modalities).
Scaling with Decoder Length and Sampling Size
We further examine how the decoder length of DAT affects our method. Notice that a longer decoder length often provides an enlarged sampling space. We vary the decoder length from 128 to 1,024 and measure the performance gap between CODAT and DAT in test-set BLEU on the WMT14 En→De translation task. Results are shown in Figure 3b: with a decoder length longer than 384, CODAT generally improves by more than 0.5 BLEU, and CODAT with decoder length 384 realizes on-par performance with DAT at decoder length 1,024. However, when the decoder length is set to the smaller value of 256, the improvement is marginal. Recall that CODAT advocates modality learning by optimizing D(q‖p) with samples from the model distribution y′ ∼ q(·|x); we attribute the low performance of CODAT to the reduced sampling space of a short decoder. We measure the sampling space with a heuristic: the BLEU of the best sample. We calculate the best result among 64 decoded samples (called Oracle BLEU) with decoder lengths 256 and 512: the longer length achieves 34.12 Oracle BLEU while the shorter achieves only 30.28. This performance gap indicates that the size of the sampling space plays an important role in our method. We further examine the performance of CODAT as the sampling size grows: with a larger sampling size, single-modal translations have a higher probability of appearing in the sampling space, their ranking among the sampled outputs is elevated by our training objective, and the model's performance improves.
Figure 3: (a) Test BLEU score versus sample size on WMT14 De→En translation; the X-axis is the number of generated samples and the Y-axis is the BLEU score. (b) Test BLEU score of CODAT and DAT versus decoder length on WMT14 En→De translation; the X-axis is the decoder length and the Y-axis is the BLEU score.
Related work
Non-autoregressive Text Generation Non-autoregressive text generation was originally developed in machine translation [12,16]. Gu et al. [12] design the first NAT model, which decodes the whole sequence in parallel. Fully NATs predict the sequence with only one forward pass. Compared with fully NATs, iterative NATs [7,15,16,24] repeatedly refine the prediction by iterative decoding; they usually achieve better performance but trade efficiency for accuracy. As introduced in the background, non-iterative NATs are greatly improved by DP training [9,18,27], which abandons the position-wise alignment of vanilla NAT and allows flexible length. Based on DAT, the latest work FDAT [31] introduces fuzzy alignment for DAT and greatly improves performance on translation tasks, but sacrifices generation diversity. Non-autoregressive models have also been introduced in other areas: Wiseman et al. [47] propose a hidden semi-Markov model for data-to-text generation; BANG [36] and MIST [21] are pretraining frameworks for NAT models; Su et al. [42] and Sun et al. [45] successfully apply a pre-trained BERT encoder and a conditional random field (CRF) to non-autoregressive summarization and machine translation. Most NATs depend strongly on sentence-level knowledge distillation [13] datasets; this requires pretraining an autoregressive model to regenerate the training set, but is very useful for reducing data modality [50].
Contrastive Learning for Text Generation The triplet loss used in this work was originally proposed in FaceNet [40], aiming to learn good representations by contrasting positives with negatives. Contrastive learning has also been widely used in autoregressive text generation [1,25,26] and summarization [29,43]; to the best of our knowledge, ours is the first contrastive learning framework for non-autoregressive generation. In text summarization, Zhong et al. [49] propose contrastive learning to model the idea that a document and its summary should be close in embedding space. Previous work on autoregressive generation [1,25,30] shares a common motivation of mitigating the exposure bias problem. The contrastive loss we use has a format similar to the training objectives of the autoregressive models BRIO [30] and CoNT [1]. Our work differs from theirs in two aspects. 1) Different settings: our method is built on DP-based non-autoregressive models, whose input space and training process differ markedly from autoregressive models. 2) Different motivations: we aim to ease the modality learning of NATs and derive our contrastive objective from contrastive constraints, while theirs stems from exposure bias and learning fine-grained representations.
Conclusion
In this paper, we develop the first contrastive learning framework for non-autoregressive generation. The goal of our contrastive learning objective is to ease the modality learning of NATs by using training samples from the model distribution instead of the complex data distribution. To stabilize training, we derive contrastive constraints, which lead to a contrastive learning loss. Experiments on three text generation tasks demonstrate that our method achieves the best results among non-autoregressive models on all of them. Our work has several limitations; please refer to Appendix F for details.
A Reverse KL and Reinforcement Learning
Throughout, we treat p(y|x) as defined in Eq. 6.
KL(q‖p) = E_{y′∼q(·|x)}[log(q(y′|x)/p(y′|x))]
= E_{y′∼q(·|x)}[log q(y′|x) − R(y, y′) + log Z_R]
= −H(q) − E_{y′∼q(·|x)}[R(y, y′) − log Z_R]
where H(q) is the entropy of the model distribution, and E_{y′∼q(·|x)}[R(y, y′) − log Z_R] is reinforcement learning with a normalized reward. Thus minimizing KL(q‖p) implicitly maximizes rewards under the model distribution.
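The identity above is easy to verify on a toy discrete example (a numerical sketch for intuition only): with p_R ∝ exp(R), KL(q‖p_R) equals −H(q) − E_q[R] + log Z_R exactly:

import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=6)                 # toy rewards for 6 candidate sequences
q = rng.dirichlet(np.ones(6))          # toy model distribution over the candidates

p = np.exp(R); Z = p.sum(); p = p / Z  # reward distribution p_R = exp(R) / Z_R

kl = np.sum(q * np.log(q / p))         # KL(q || p_R) computed directly
H_q = -np.sum(q * np.log(q))
rhs = -H_q - np.sum(q * R) + np.log(Z) # -H(q) - E_q[R] + log Z_R
print(kl, rhs)                         # identical up to floating point error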
B Sufficiency & Necessity
Proposition B.1. For any sequence pair (y¹, y²), p_R(y′|x, y) = (1/Z_R) exp(R(y, y′)) is sufficient and necessary for log(p_R(y¹|x, y)/p_R(y²|x, y)) = R(y¹, y) − R(y², y).
Proof. We prove the sufficiency and necessity individually as follows:
(Sufficiency) Suppose
∀y¹, y²:  log(p_R(y¹|x, y)/p_R(y²|x, y)) = R(y¹, y) − R(y², y),
i.e., ∀y¹, y²:  log p_R(y¹|x, y) − R(y¹, y) = log p_R(y²|x, y) − R(y², y).
Without loss of generality, we set the common difference on both sides to a constant C:
∀y′:  log p_R(y′|x, y) − R(y′, y) = C,
thus ∀y′:  p_R(y′|x, y) = exp(C) exp(R(y, y′)).
Rewriting this in normalized form, we have ∀y′:  p_R(y′|x, y) = (1/Z_R) exp(R(y, y′)), where Z_R = 1/exp(C).
(Necessity) The necessity is straightforward: directly substitute p_R(y¹|x, y) = (1/Z_R) exp(R(y, y¹)) and p_R(y²|x, y) = (1/Z_R) exp(R(y, y²)) into log(p_R(y¹|x, y)/p_R(y²|x, y)).
C Contrastive Discrepancy
The original Eq. 5 can be rewritten to:
D(q‖p) = E_{x,y∼D, y′∼q(·|x)}[M_{p,q}(y′|x)] = E_{x,y∼D, {y^k}_{k=1}^K ∼ q(·|x)}[(1/K) Σ_k M_{p,q}(y^k|x)].
We can also understand the contrastive loss as a relative version of the reverse KL in the original Eq. 5. Let [y^i, y^j] denote the difference between a sequence pair from the model distribution, with y from the data distribution. If there is a significant disparity between the two samples under the data distribution, we should also expect a commensurate divergence under the model distribution. M_{p,q}([y^i, y^j]|x) measures the discrepancy between model and data distribution given the difference of the pair, instead of directly optimizing with only one sample as in the original Eq. 5. We can estimate p([y^i, y^j]|x) with a reward distribution (1/Z₁) · e^{R(y^i, y)}/e^{R(y^j, y)} and q([y^i, y^j]|x) with (1/Z₂) · q(y^i|x)/q(y^j|x), where Z₁ and Z₂ are normalizing constants. By optimizing M_{p,q}([y^i, y^j]|x), Eq. 10 takes a form similar to Eq. 5:
E_{x,y∼D, {y^k}_{k=1}^K ∼ q(·|x)}[ (2/(K(K−1))) Σ_{i>j} M_{p,q}([y^i, y^j]|x) ]
= E_{x,y∼D, {y^k}_{k=1}^K ∼ q(·|x)}[ (2/(K(K−1))) Σ_{i>j} log( p([y^i, y^j]|x) / q([y^i, y^j]|x) ) ]
≈ E_{x,y∼D, {y^k}_{k=1}^K ∼ q(·|x)}[ (2/(K(K−1))) Σ_{i>j} max{0, −log q(y^i|x) + log q(y^j|x) + (i − j) Δ_LB} ]
D Experimental Setup
Implementation In order to sample meaningful positive and negative examples, we first warm up DA-Transformer with the DP loss shown in Eq. 3. In the warmup stage, we follow the default settings of the open-source implementation⁴ of DA-Transformer. The learning rate for the machine translation task is set to 5 × 10⁻⁴; for summarization and paraphrasing we use a smaller learning rate of 1 × 10⁻⁴. Besides contrastive learning, we also apply DP training and glancing to help reduce the multi-modality of the data distribution. For glancing, we linearly decrease the unmasking ratio from 0.5 to 0.1 until training step 200k. For DP training, we extend the decoder length to 8× the encoder length. We set the margin value Δ_LB to 0.001. The sampling size in both training and inference is 128, and we keep only the top 25% for training. We set the sampling temperature τ to 0.1 during training and 0.05 during inference. The contrastive loss and the DP loss are optimized with the same Adam optimizer [23]. We use a batch size of 256k tokens by reducing the frequency of gradient updates, to stabilize the training process. After pretraining the model for 120 epochs with the DP loss, we further train it for 5 epochs with the sum of the DP loss and the contrastive loss. Our codebase is built on the fairseq [33] toolkit. Contrastive optimization takes about 1 hour on 4 NVIDIA Tesla A100 GPUs for the XSum summarization dataset and 4 hours for the large-scale WMT14 En-De translation task. We assess BLEU on the validation set every 40 steps; the top 5 checkpoints are saved and averaged to produce the test-set result. We use the base version of the Transformer [46], with 6 encoder and 6 decoder layers, as the autoregressive baseline. Decoding speedup is tested with batch size 1 on a single A100 GPU, following previous work, and we report the average speedup of three runs. We calculate ROUGE with the pyrouge library⁵. Both tokenized BLEU and SacreBLEU are computed with fairseq. During decoding, we derive multiple hypotheses from the model with the sampling procedure of Algorithm 2. We decompose the sequence probability into a transition score q(a|x) and a prediction score q(y|x, a); the sequence score is given by β · q(y|x, a) + (1 − β) · q(a|x), where β is a hyperparameter tuned on the validation set over [0.3, 0.5, 0.7]. The validation loss of the checkpoint used for WMT14 En-De translation is 2.134, so we set α to 2.1, meaning training samples with a DP loss larger than 2.1 are not optimized with samples from the model distribution, for stability.
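The decoding-time ranking score above is a simple convex combination of the two DAT score components; a small sketch of one plausible (log-space) reading, with our own variable names:

import torch

def hypothesis_score(token_logp, transition_logp, beta=0.5):
    # combine prediction score log q(y|x, a) with transition score log q(a|x);
    # beta is tuned on the validation set over {0.3, 0.5, 0.7}
    return beta * token_logp + (1.0 - beta) * transition_logp

scores = hypothesis_score(torch.tensor([-7.1, -6.8]), torch.tensor([-3.2, -4.0]))
print(scores.argmax().item())          # index of the best-ranked sampled hypothesis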
E Details of DAT Inference
The latest works CTC [27] and DA-Transformer [18] adopt a Dynamic Programming (DP) training framework, significantly improving the WMT14 En-De translation benchmark by 4∼5 BLEU.
Figure 4: Comparison of vanilla NATs and DP-based NATs. The input/output length of vanilla NATs is usually predicted by the model, and the prediction from each position is directly taken as model output, while the decoder length of DP-based models such as CTC [27] and DAT [18] is a hyperparameter (usually set to 2×∼8× the encoder length). For DAT, the decoder first predicts a word at each position, and the hypothesis is a subset of all predicted words constructed via the transition matrix. Given a hypothesis y¹ from positions a¹ = {0, 1, 7, 9}, q(a¹|x) is calculated as E_{0,1} · E_{1,7} · E_{7,9} and q(y¹|a¹, x) is the product of the corresponding token probabilities p('AI'|0, x) · p('deserves'|1, x) · p('more'|7, x) · p('attention'|9, x).
In this part, we present a concrete demonstration of sampling from DA-Transformer. Given the decoder length L and decoder hidden states H = {h₀, h₁, ..., h_L}, we obtain the token prediction distribution by softmax(W_vocab H) and the L predicted tokens via the argmax function; Figure 4 shows a specific case with L = 10. Notably, these L words cannot be viewed as the final output of DAT. Compared with vanilla NAT, a hypothesis in DA-Transformer is represented as a small subset of the L predicted tokens, which means sampling hypotheses is equivalent to combining different tokens from the model's prediction, and this can be done efficiently without any additional forward passes. Selecting a subset from these predictions relies on the transition matrix E ∈ R^{L×L} trained with the DP loss in Eq. 3, where E_{ij} is the transition probability between positions i and j. Every hypothesis from DAT starts at position 0; the next position is chosen as arg max(E_{[0,:]}), and the generation process stops when the '<eos>' token is decoded. This is the simplest decoding procedure for DAT; the commonly used decoding algorithm in DAT is called LOOKAHEAD (Algorithm 1). By replacing the arg max function with multinomial sampling, we can obtain diverse output in parallel (see Algorithm 2).

Algorithm 1 Given decoder length L, transition matrix E ∈ R^{L×L} and the token probability vector t ∈ R^L; return decoder indexes of the output
1: procedure LOOKAHEAD
2:   P = E + t.UNSQUEEZE(dim=0)   ▷ consider both transition probability and prediction probability
3:   i := 1, output := ZEROS[max_step]
4:   repeat
5:     dist := P[output[i − 1]]   ▷ distribution of the current step given the previous step's output
6:     output[i] := ARGMAX(dist)
7:   until i = max_step
8:   return output
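A torch transcription of Algorithm 1 as a minimal sketch (our own code; the official implementation additionally handles batching, masking, and stopping at '<eos>'):

import torch

def lookahead_decode(E, t, max_step):
    # E: (L, L) transition log-probs; t: (L,) per-position token log-probs.
    # Greedy DAT decoding: jointly weigh transition and prediction scores.
    P = E + t.unsqueeze(0)                 # (L, L): score of each next position
    output = torch.zeros(max_step, dtype=torch.long)  # path starts at position 0
    for i in range(1, max_step):
        output[i] = P[output[i - 1]].argmax()
    return output                          # decoder position indexes of the hypothesis

L, max_step = 10, 6
E = torch.log_softmax(torch.randn(L, L), dim=1)
t = torch.log_softmax(torch.randn(L), dim=0)
print(lookahead_decode(E, t, max_step))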
F Limitations
We summarize our limitations from three aspects. Firstly, the performance increase brought by contrastive-learning-based training is closely tied to the ability of the backbone model. The base model should generate a set of diverse hypotheses containing high-quality, single-modal samples, which can be measured with Oracle BLEU. This prevents our contrastive training procedure from starting directly from a randomly initialized model distribution: the model's parameters must already lie in a good region of parameter space that provides high-quality samples. Since the training samples of the contrastive loss come from the model distribution instead of the data distribution, the improvements over strong baselines (especially the pretrained model) are usually more remarkable.
Algorithm 2 Given decoder length L, temperature τ, transition matrix E ∈ R^{L×L} and the token probability vector t ∈ R^L; return decoder indexes of the output
1: procedure SAMPLING
2:   t = t / SUM(t)
3:   P = E + t.UNSQUEEZE(dim=0)
4:   P = TOPP_FILTER(P, p = 0.5)   ▷ skip positions with low probability
5:   P = SOFTMAX(P / τ, dim=1)   ▷ re-normalize after filtering
6:   i := 1, output := ZEROS[max_step]
7:   repeat
8:     dist := P[output[i − 1]]   ▷ distribution of the current step given the previous step's output
9:     output[i] := MULTINOMIAL(dist, num=1)
10:  until i = max_step
11:  return output

Secondly, CODAT also has an impact on training and inference efficiency. CODAT needs further training for 5 epochs, and our approach has a minor impact on inference speed. Since the loss helps hypotheses with higher BLEU obtain a higher ranking, using the sampling algorithm described in Algorithm 2 during inference, instead of decoding only one hypothesis, usually boosts performance. During inference we use a sampling size of 128 and implement the sampling algorithm with native torch operations for simplicity.
In practice, using the sampling described in Algorithm 2 directly is not feasible: calling torch.MULTINOMIAL at every step is significantly slower than torch.ARGMAX, resulting in only a 7.2× speedup over the AT counterpart on the WMT14 En-De translation task. To avoid calling torch.MULTINOMIAL in the loop, we change the algorithm to sample max_step tokens at each position in parallel and then directly read off the output token at each step instead of sampling step by step; see the updated sampling procedure in Algorithm 3. Although we have optimized the sampling algorithm numerous times, using only native torch operations still cannot match the decoding speed of the original DAT, which decodes only a single hypothesis.
Algorithm 3 Given decoder length L, temperature τ, transition matrix E ∈ R^{L×L} and the token probability vector t ∈ R^L; return decoder indexes of the output
1: procedure SAMPLING
2:   t = t / SUM(t)
3:   P = E + t.UNSQUEEZE(dim=0)
4:   P = TOPP_FILTER(P, p = 0.5)   ▷ skip positions with low probability
5:   P = SOFTMAX(P / τ, dim=1)   ▷ re-normalize after filtering
6:   i := 1, output := ZEROS[max_step]
7:   sampled_tokens = MULTINOMIAL(P, max_step)   ▷ shape L × max_step: sample max_step tokens at each decoder position in parallel
8:   repeat
9:     output[i] := sampled_tokens[output[i − 1]][i]
10:  until i = max_step
11:  return output
Table 1: BLEU scores on WMT14 En↔De and WMT17 Zh↔En translation tasks. The results of previous NATs are quoted from their papers and Huang et al. [18]. +Reward means we optimize the reward distribution derived from Eq. 5; +Constraints means we optimize the contrastive objective. The best results for NATs are in bold.

Model | Iter. | Speedup | WMT14 En-De | WMT14 De-En | WMT17 En-Zh | WMT17 Zh-En
Transformer [46] | N | 1.0x | 27.6 | 31.4 | 34.3 | 23.7
Transformer (Ours) | N | 1.0x | 27.85 | 31.64 | 34.89 | 23.72
CMLM [7] | 10 | 2.2x | 24.61 | 29.40 | - | -
SMART [10] | 10 | 2.2x | 25.10 | 29.58 | - | -
DisCo [22] | ≈ 4 | 3.5x | 25.64 | - | - | -
Imputer [39] | 8 | 2.7x | 25.0 | - | - | -
CMLMC [20] | 10 | 1.7x | 26.40 | 30.92 | - | -
Vanilla NAT [12] | 1 | 15.3x | 11.79 | 16.27 | 18.92 | 8.69
AXE [9] | 1 | 14.2x | 20.40 | 24.90 | - | -
OaXE [6] | 1 | 14.2x | 22.4 | 26.8 | - | -
CTC [27] | 1 | 14.6x | 18.42 | 23.65 | 26.84 | 12.23
GLAT [37] | 1 | 15.3x | 19.42 | 26.51 | 29.79 | 18.88
CTC + GLAT [37] | 1 | 14.2x | 25.02 | 29.14 | 30.65 | 19.92
DAT [18] | 1 | 13.9x | 26.57 | 30.68 | 33.83 | 22.82
DAT+Viterbi [41] | 1 | 13.2x | 26.89 | 31.10 | 33.65 | 23.24
DAT (Ours) | 1 | 13.9x | 26.63 | 30.73 | 33.56 | 22.68
+ Reward | 1 | 10.2x | 27.06 | 31.22 | 33.59 | 23.16
+ Constraints | 1 | 10.2x | 27.40 | 31.64 | 34.23 | 23.71
Table 2: Results on the WMT14 En-De benchmark for the vanilla model GLAT and the DP-based model DAT with different training objectives: MLE, Reward, and Constraints. For GLAT, directly optimizing the reward distribution also yields satisfactory results, but for DAT, optimizing with constraints usually leads to better performance. We use NPD size = 7.

Model | Dataset | Objective | BLEU
GLAT | Raw | D(p‖q) | 19.42
GLAT | KD | D(p‖q) | 25.28
GLAT | KD | D(q‖p) (Reward) | 25.71
GLAT | KD | D(q‖p) (Constraints) | 25.66
DAT | Raw | D(p‖q) | 26.63
DAT | Raw | D(q‖p) (Reward) | 27.06
DAT | Raw | D(q‖p) (Constraints) | 27.40
Table 3: Results on the test set of QQP in terms of BLEU and ROUGE-L. Results with * are from Gong et al. [11]. The best results of NATs are in bold. CODAT has a ∼6.8× speedup compared with the autoregressive model.

Model | BLEU | ROUGE-L
GRU-attention * | 18.94 | 51.29
Transformer | 27.80 | 57.94
LevT [14] * | 22.68 | 57.95
GLAT [37] | 25.16 | 58.28
DiffuSeq [11] * | 24.13 | 58.80
DAT [18] | 27.00 | 58.31
CODAT | 27.78 | 58.54
Table 4: ROUGE scores on summarization datasets. Avg is the average of ROUGE-1, ROUGE-2, and ROUGE-L. Results with † are taken from Su et al. [42] and results with * are from Qi et al. [36]. Results that exceed the autoregressive model are underlined (w/o pretraining), and the best results of NATs are in bold. CODAT accelerates the autoregressive model by ∼8.4× on XSum and ∼6.0× on Gigaword. The third block contains results of pretrained baselines.

Model | XSum R-1 | XSum R-2 | XSum R-L | XSum Avg | Gigaword R-1 | Gigaword R-2 | Gigaword R-L | Gigaword Avg
Transformer [46] | 30.66 | 10.80 | 24.48 | 22.0 | 35.74 | 16.97 | 33.43 | 28.7
Transformer (Ours) | 32.07 | 11.31 | 25.96 | 23.1 | 36.99 | 18.22 | 34.48 | 29.9
Vanilla NAT [12] * | 24.04 | 3.88 | 20.32 | 16.1 | 27.20 | 8.96 | 25.58 | 20.6
LevT [14] | 24.75 | 4.18 | 20.87 | 16.6 | 30.40 | 11.89 | 29.56 | 23.9
CMLM [20] * | 23.82 | 3.60 | 20.15 | 15.8 | - | - | - | -
NART-CRF [45] † | - | - | - | - | 30.29 | 12.61 | 28.71 | 23.9
BERT-NAG [42] † | - | - | - | - | 35.05 | 16.48 | 33.28 | 28.3
DAT [18] | 31.87 | 11.02 | 25.93 | 22.9 | 36.52 | 17.94 | 34.30 | 29.6
BANG [36] * | 32.59 | 8.98 | 27.41 | 23.0 | - | - | - | -
MIST [21] | 34.63 | 11.29 | 28.70 | 24.9 | - | - | - | -
DAT (Pretrained) [19] | 38.80 | 16.07 | 31.78 | 28.9 | 36.15 | 17.49 | 33.84 | 29.2
CODAT | 32.45 | 11.42 | 26.30 | 23.4 | 37.01 | 18.68 | 34.63 | 30.1
CODAT (Pretrained) | 39.87 | 17.38 | 32.61 | 30.0 | 36.92 | 18.23 | 34.46 | 29.9
Table 5: An example from WMT14 De→En translation. We adopt a sampling size of 128 and present the top-3 hypotheses from DAT and CoDAT, given the input German x = "Demnach sollen die Parkgebühren um 50 Prozent erhöht werden." Although the first-order dependency introduced by DAT effectively reduces repeated words, it still cannot prevent the generated sequence from mixing words from several possible translations, while the rank of single-modal translations is higher in the sampling results of CODAT.
Figure 2: (a) Reinforcement Learning (RL) loss and DP loss on the validation set of the WMT14 En-De benchmark; the left Y-axis gives the value of the RL loss and the right Y-axis the value of the DP loss. (b) Contrastive Learning (CL) loss and DP loss on the validation set of the WMT14 En-De benchmark; the left Y-axis gives the value of the CL loss and the right Y-axis the value of the DP loss.
A detailed discussion of reverse KL and reinforcement learning is presented in Appendix A.
https://www.kaggle.com/c/quora-question-pairs
4 https://github.com/thu-coai/DA-Transformer
5 https://github.com/bheinzerling/pyrouge
[1] Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, and Xuanjing Huang. CoNT: Contrastive neural text generation. In Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=mjVZw5ADSbX.
[2] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJDaqqveg.
[3] Christopher M. Bishop and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning, volume 4. Springer, 2006.
[4] Kyunghyun Cho. Noisy parallel approximate decoding for conditional recurrent language model. arXiv preprint arXiv:1605.03835, 2016.
[5] Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforcement learning for neural machine translation. arXiv preprint arXiv:1907.01752, 2019.
[6] Cunxiao Du, Zhaopeng Tu, and Jing Jiang. Order-agnostic cross entropy for non-autoregressive machine translation. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), volume 139 of Proceedings of Machine Learning Research, pages 2849-2859. PMLR, 2021. URL http://proceedings.mlr.press/v139/du21c.html.
[7] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of EMNLP-IJCNLP 2019, pages 6111-6120. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1633.
[8] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of EMNLP-IJCNLP 2019, pages 6112-6121. Association for Computational Linguistics, 2019. URL https://aclanthology.org/D19-1633.
[9] Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. Aligned cross entropy for non-autoregressive machine translation. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), volume 119 of Proceedings of Machine Learning Research, pages 3515-3523. PMLR, 2020.
[10] Marjan Ghazvininejad, Omer Levy, and Luke Zettlemoyer. Semi-autoregressive training improves mask-predict decoding. CoRR, abs/2001.08785, 2020. URL https://arxiv.org/abs/2001.08785.
[11] Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. DiffuSeq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933, 2022.
[12] Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations (ICLR 2018), 2018. URL https://openreview.net/forum?id=B1l8BtlCb.
[13] Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1l8BtlCb.
[14] Jiatao Gu, Changhan Wang, and Jake Zhao. Levenshtein transformer. CoRR, abs/1905.11006, 2019. URL http://arxiv.org/abs/1905.11006.
[15] Jiatao Gu, Changhan Wang, and Junbo Zhao. Levenshtein transformer. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 11179-11189, 2019.
[16] Junliang Guo, Linli Xu, and Enhong Chen. Jointly masked sequence-to-sequence model for non-autoregressive neural machine translation. In Proceedings of ACL 2020, pages 376-385. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.acl-main.36.
[17] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
[18] Fei Huang, Hao Zhou, Yang Liu, Hang Li, and Minlie Huang. Directed acyclic transformer for non-autoregressive machine translation. arXiv preprint arXiv:2205.07459, 2022.
[19] Fei Huang, Pei Ke, and Minlie Huang. Directed acyclic transformer pre-training for high-quality non-autoregressive text generation. arXiv preprint arXiv:2304.11791, 2023.
[20] Xiao Shi Huang, Felipe Perez, and Maksims Volkovs. Improving non-autoregressive translation models without distillation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=I2Hw58KHp8O.
[21] Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, and Qi Zhang. Improving non-autoregressive generation with mixup training. arXiv preprint arXiv:2110.11115, 2021.
[22] Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. Non-autoregressive machine translation with disentangled context transformer. In Proceedings of ICML 2020, volume 119 of Proceedings of Machine Learning Research, pages 5144-5155. PMLR, 2020.
[23] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[24] Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of EMNLP 2018, pages 1173-1182. Association for Computational Linguistics, 2018. doi: 10.18653/v1/d18-1149.
[25] Seanie Lee, Dong Bok Lee, and Sung Ju Hwang. Contrastive learning with adversarial perturbations for conditional text generation. In International Conference on Learning Representations, 2020.
[26] Mingzhe Li, Xiexiong Lin, Xiuying Chen, Jinxiong Chang, Qishen Zhang, Feng Wang, Taifeng Wang, Zhongyi Liu, Wei Chu, Dongyan Zhao, et al. Keywords and instances: A hierarchical contrastive learning framework unifying hybrid granularities for text generation. arXiv preprint arXiv:2205.13346, 2022.
[27] Jindrich Libovický and Jindrich Helcl. End-to-end non-autoregressive neural machine translation with connectionist temporal classification. In Proceedings of EMNLP 2018, pages 3016-3021. Association for Computational Linguistics, 2018. doi: 10.18653/v1/d18-1336.
[28] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, 2004.
[29] Yixin Liu and Pengfei Liu. SimCLS: A simple framework for contrastive learning of abstractive summarization. arXiv preprint arXiv:2106.01890, 2021.
[30] Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. BRIO: Bringing order to abstractive summarization. arXiv preprint arXiv:2203.16804, 2022.
[31] Zhengrui Ma, Chenze Shao, Shangtong Gui, Min Zhang, and Yang Feng. Fuzzy alignments in directed acyclic graph for non-autoregressive machine translation. arXiv preprint arXiv:2303.06662, 2023.
[32] Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of EMNLP 2018, pages 1797-1807, 2018.
[33] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL 2019 (Demonstrations), pages 48-53. Association for Computational Linguistics, 2019. doi: 10.18653/v1/N19-4009.
[34] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318. Association for Computational Linguistics, 2002. doi: 10.3115/1073083.1073135.
[35] Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191. Association for Computational Linguistics, 2018. URL https://www.aclweb.org/anthology/W18-6319.
[36] Weizhen Qi, Yeyun Gong, Jian Jiao, Yu Yan, Weizhu Chen, Dayiheng Liu, Kewen Tang, Houqiang Li, Jiusheng Chen, Ruofei Zhang, et al. BANG: Bridging autoregressive and non-autoregressive generation with large scale pretraining. In International Conference on Machine Learning, pages 8630-8639. PMLR, 2021.
[37] Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. Glancing transformer for non-autoregressive neural machine translation. In Proceedings of ACL-IJCNLP 2021 (Volume 1: Long Papers), pages 1993-2003. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.acl-long.155.
[38] Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
[39] Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. Non-autoregressive machine translation with latent alignments. In Proceedings of EMNLP 2020, pages 1098-1108. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.83.
[40] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815-823, 2015.
[41] Chenze Shao, Jinchao Zhang, Jie Zhou, and Yang Feng. Rephrasing the reference for non-autoregressive machine translation. arXiv preprint arXiv:2211.16863, 2022.
[42] Yixuan Su, Deng Cai, Yan Wang, David Vandyke, Simon Baker, Piji Li, and Nigel Collier. Non-autoregressive text generation with pre-trained language models. arXiv preprint arXiv:2102.08220, 2021.
[43] Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. A contrastive framework for neural text generation. arXiv preprint arXiv:2202.06417, 2022.
[44] Zhiqing Sun and Yiming Yang. An EM approach to non-autoregressive conditional sequence generation. In International Conference on Machine Learning, pages 9249-9258. PMLR, 2020.
[45] Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32, 2019.
[46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5998-6008, 2017.
[47] Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. Learning neural templates for text generation. arXiv preprint arXiv:1808.10122, 2018.
[48] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.
[49] Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. Extractive summarization as text matching. arXiv preprint arXiv:2004.08795, 2020.
[50] Chunting Zhou, Jiatao Gu, and Graham Neubig. Understanding knowledge distillation in non-autoregressive machine translation. In 8th International Conference on Learning Representations (ICLR 2020), 2020. URL https://openreview.net/forum?id=BygFVAEKDH.
| [
"https://github.com/thu-coai/DA-Transformer",
"https://github.com/bheinzerling/pyrouge"
] |
[
"CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption",
"CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption"
] | [
"Mengxin Zheng ",
"Fan Chen ",
"Mengxin Zheng ",
"Qian Lou ",
"Fan Chen ",
"Lei Jiang ",
"Yongxin Zhu ",
"\nIndiana University of Bloomington\nUSA\n",
"\nQIAN LOU\nUniversity of Central Florida\nUSA\n",
"\nIndiana University of Bloomington\nUSA\n",
"\nLEI JIANG\nIndiana University of Bloomington\nUSA\n",
"\nShanghai Advanced Research Institute\nYONGXIN ZHU\nChinese Academy of Sciences\nChina\n"
] | [
"Indiana University of Bloomington\nUSA",
"QIAN LOU\nUniversity of Central Florida\nUSA",
"Indiana University of Bloomington\nUSA",
"LEI JIANG\nIndiana University of Bloomington\nUSA",
"Shanghai Advanced Research Institute\nYONGXIN ZHU\nChinese Academy of Sciences\nChina"
] | [] | Fully homomorphic encryption (FHE) protects data privacy in cloud computing by enabling computations to directly occur on ciphertexts. To improve the time-consuming FHE operations, we present an electro-optical (EO) FHE accelerator, CryptoLight. Compared to prior FHE accelerators, on average, CryptoLight reduces the latency of various FHE applications by >94.4% and the energy consumption by >95%.CCS Concepts: • Security and privacy → Privacy protections; • Hardware → Emerging optical and photonic technologies.ACM Reference Format: | 10.1145/3565478.3572544 | [
"https://export.arxiv.org/pdf/2211.13780v2.pdf"
] | 254,017,678 | 2211.13780 | fe98e86b7284db2867ce228f2091365a00247eeb |
CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption
Mengxin Zheng
Fan Chen
Mengxin Zheng
Qian Lou
Fan Chen
Lei Jiang
Yongxin Zhu
Indiana University of Bloomington
USA
QIAN LOU
University of Central Florida
USA
Indiana University of Bloomington
USA
LEI JIANG
Indiana University of Bloomington
USA
Shanghai Advanced Research Institute
YONGXIN ZHU
Chinese Academy of Sciences
China
CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption
10.1145/3565478.3572544
Fully homomorphic encryption (FHE) protects data privacy in cloud computing by enabling computations to directly occur on ciphertexts. To improve the time-consuming FHE operations, we present an electro-optical (EO) FHE accelerator, CryptoLight. Compared to prior FHE accelerators, on average, CryptoLight reduces the latency of various FHE applications by >94.4% and the energy consumption by >95%.CCS Concepts: • Security and privacy → Privacy protections; • Hardware → Emerging optical and photonic technologies.ACM Reference Format:
INTRODUCTION
Data privacy is gaining tremendous importance around the world. This leads to a surge in demand for privacy-preserving computing solutions that protect data confidentiality while data is in transit, at rest, and in use. Fully homomorphic encryption (FHE) [3] emerges as one of the most promising solutions for guaranteeing data privacy by allowing computations to happen directly on ciphertexts. FHE operations are extremely time-consuming, e.g., one FHE bootstrapping costs several seconds on a CPU. Prior work proposes GPU [3]-, FPGA [4]-, and ASIC [2,5]-based accelerators to process FHE operations. Among all, the ASIC-based FHE accelerators, CraterLake [5] and BTS [2], obtain the state-of-the-art performance.
However, their performance is seriously limited by their narrow datapaths and intensive matrix transpositions. An FHE ciphertext consists of two polynomials of large degree (e.g., several thousand) with large integer coefficients (e.g., several hundred bits). To compute efficiently with polynomials, FHE schemes (e.g., CKKS [5]) adopt the Residue Number System (RNS) [5] and the Number Theoretic Transform (NTT). First, to compute with large integer coefficients, RNS divides each coefficient into multiple smaller bit-width (e.g., 60-bit) residues, each of which can be processed by the datapath of prior FHE accelerators. The latency of an FHE multiplication, rotation, or bootstrapping is dominated by expensive key-switching (KS) primitives [3], which make output ciphertexts encrypted by the same secret key as the input ciphertext(s). The computational overhead of KS greatly increases with an enlarging number of residues. Second, for two large-degree polynomials, the NTT and inverse NTT (iNTT) reduce the time complexity of their multiplication to O(n log n). But it is difficult to perform an (i)NTT, i.e., an NTT or iNTT, on a large-degree polynomial directly. Prior FHE accelerators [2,5] place it as an m × m matrix, where n = m², perform an (i)NTT on each row, multiply the matrix with some constants, transpose the matrix, and perform an (i)NTT on each row again. As a result, frequent matrix transpositions greatly prolong the latency of KS in various FHE operations by introducing huge volumes of on-chip memory traffic.
We propose an electro-optical (EO) FHE accelerator, CryptoLight, to support a large bit-width datapath and to free its computing units from matrix transpositions. Our contributions are summarized as follows:
• A 512-bit EO CU. We propose a 512-bit EO Computing Unit (CU) built upon ultra-fast EO integer adders and multipliers to process polynomials with large coefficients.
• An in-SPM TU. We build a low-power eDRAM-based on-chip scratchpad (SPM) system with in-SPM transpose units (TUs).
CRYPTOLIGHT
2.1 A 512-bit EO CU
We build a 512-bit EO CU featuring an NTT unit, a modular add/mult unit, an automorphism unit, and a true random number generation (TRNG) unit. Its most important component is the 512-bit EO NTT unit, which has an arithmetic and inversion unit, an address generation unit, and two butterfly units. The 512-bit EO NTT unit also supports the iNTT kernel, which works in a different data flow. We also use EO adders and multipliers to construct the other CU components. An EO NTT Unit. We present an EO NTT unit for the CU. Matrix transpositions are done by TUs in SPM banks, but the other three steps of the NTT on a large polynomial are computed by an NTT unit. CryptoLight aims to support 64K-element NTTs, so an NTT unit supports 256-element NTT operations. The details are summarized as follows.
• A butterfly unit: We propose an EO butterfly unit (BU) to accelerate radix-2 NTT butterflies, as shown in Figure 1(a). A modular reduction unit performs modular reduction on the multiplication result. Using an EO adder and a comparator, the EO modular adder performs modular additions and subtractions. Two EO modular adders can generate the radix-2 butterfly outputs concurrently.
• A pipelined EO array multiplier: Using EO ripple-carry adders [6], we propose an EO pipelined integer array multiplier. We show the example of a 4-bit × 4-bit pipelined array multiplier in Figure 1(b). An n-bit × n-bit pipelined array multiplier consists of n stages, each of which is an n-bit EO ripple-carry adder. Between two stages, there is an n-bit register file to buffer the intermediate result. The inputs of the first pipeline stage of the multiplier are the results of AND operations between the corresponding bits of the multiplier inputs.
• A Montgomery modular reduction unit: As Figure 1(c) shows, we build an EO Montgomery modular reduction unit (MMRU) in a BU to perform modular reduction operations. The MMRU implements the modular reduction algorithm [4] shown in Figure 1(d).
Besides some logic operations and 2's complement conversions, the most intensive operation in a modular reduction is the multiply-add operation (i.e., computing x1 + (q · x2) + c for operands x1 and x2, a multiplier q, and a carry term c), which can be computed by EO adders and an EO multiplier. The output of each iteration of the loop can be cached in a register file and used as the input for the next iteration. A MUX selects between the outputs of the register file and an EO adder as the modular reduction output.
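For reference, the following is a minimal software sketch of the Montgomery reduction (REDC) that an MMRU of this kind evaluates; the variable names and word size below are our own illustration, not the paper's notation.

```python
def montgomery_reduce(T, N, N_prime, R_bits):
    """REDC: return T * R^{-1} mod N, assuming T < R * N,
    R = 2**R_bits, gcd(R, N) = 1, and N_prime = -N^{-1} mod R."""
    R_mask = (1 << R_bits) - 1
    m = ((T & R_mask) * N_prime) & R_mask  # m = (T mod R) * N' mod R
    t = (T + m * N) >> R_bits              # T + m*N is divisible by R
    return t - N if t >= N else t

# Tiny check with N = 17, R = 2**8, N' = -17^{-1} mod 256 = 15:
# 3840 = 15 * 256, so REDC(3840) should give 15.
assert montgomery_reduce(3840, 17, 15, 8) == 15
```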
2.2 An eDRAM-based SPM System with TUs
Transposing a Matrix. Matrix transpositions are heavily used in the (i)NTT and automorphism kernels of FHE operations. As Figure 1(e) shows, we implement the recursive algorithm [5] to transpose a large matrix. For the transposition of a large N × N matrix, we divide the matrix into four N/2 × N/2 sub-matrices, i.e., A, B, C, and D, at the top level. Instead of transposing the matrix directly, we transpose each sub-matrix and swap the off-diagonal blocks, so that [[A, B], [C, D]]^T = [[A^T, C^T], [B^T, D^T]]. By repeating this process recursively, the N × N matrix can be transposed.
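A minimal NumPy sketch of this recursion follows (our own illustration; it assumes N is a power of two, and in CryptoLight the base case is handled by a TU attached to a sub-array rather than in software).

```python
import numpy as np

def transpose_recursive(M, base=2):
    # Split into four quadrants, transpose each recursively,
    # and swap the off-diagonal blocks:
    # [[A, B], [C, D]]^T = [[A^T, C^T], [B^T, D^T]].
    n = M.shape[0]
    if n <= base:
        return M.T.copy()
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    top = np.hstack([transpose_recursive(A, base), transpose_recursive(C, base)])
    bot = np.hstack([transpose_recursive(B, base), transpose_recursive(D, base)])
    return np.vstack([top, bot])

M = np.arange(16).reshape(4, 4)
assert (transpose_recursive(M) == M.T).all()
```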
In-SPM Transpose Unit. We create an in-SPM transpose unit (TU) to transpose a matrix inside SPM banks without sending it to CUs via NoCs. As Figure 1(f) shows, we hierarchically deploy TUs along the H-tree of all SPM banks. We assume an SPM bank has four sub-arrays, and each sub-array has a TU. Each bank has a level-2 TU to enable data movements inside the bank, and all banks share a level-3 TU for inter-bank communication. The level-3 and level-2 TUs perform the recursive matrix transposition algorithm. When the recursive process reaches the 2 × 2 matrix level, a TU attached to a sub-array swaps each element into its new position. All TUs share a similar structure, shown in Figure 1(g), with different numbers of FIFOs.
Fig. 1. An NTT unit in an EO CU, and a TU in a SPM.
Fig. 2. The comparison of three accelerators (norm. to Lake).
EXPERIMENTAL METHODOLOGY AND RESULTS

We modeled CryptoLight with a cycle-accurate FHE accelerator simulator, Sapphire-Sim [1], which is validated against several crypto-processor chips. We compared CryptoLight against the state-of-the-art ASIC-based FHE hardware accelerators, CraterLake (Lake) [5] and BTS [2]. We studied the performance and energy of CKKS FMUL, FROT, and FBOT operations on ciphertexts.

FMUL/FROT/FBOT latency. The latency comparison between CryptoLight and the accelerator baselines is shown in Figure 2(a). Compared to Lake, BTS decreases the latency of FMUL, FROT, and FBOT by 69% on average, due to its larger bit-width datapath and larger SPM system. Because of the 512-bit EO datapath and the TUs in the SPM, CryptoLight reduces the latency of FMUL, FROT, and FBOT by 96% on average over BTS.

FMUL/FROT/FBOT energy. The energy comparison between CryptoLight and the accelerator baselines is shown in Figure 2(b). BTS consumes slightly more power than Lake. Compared to BTS, CryptoLight reduces the energy consumption of FMUL, FROT, and FBOT by 98.8% on average.
[1] U. Banerjee et al. Sapphire: A Configurable Crypto-Processor for Post-Quantum Lattice-based Protocols. IACR Transactions on Cryptographic Hardware and Embedded Systems, (4):17-61, Aug. 2019.
[2] S. Kim et al. BTS: An Accelerator for Bootstrappable Fully Homomorphic Encryption. In ACM International Symposium on Computer Architecture, 2022.
[3] Palisade. PALISADE Operations using CUDA. 2020.
[4] M. S. Riazi et al. HEAX: An Architecture for Computing on Encrypted Data. In Architectural Support for Programming Languages and Operating Systems, 2020.
[5] N. Samardzic et al. CraterLake: A Hardware Accelerator for Efficient Unbounded Computation on Encrypted Data. In IEEE/ACM International Symposium on Computer Architecture, 2022.
[6] Z. Ying et al. Electro-Optic Ripple-Carry Adder in Integrated Silicon Photonics for Optical Computing. Journal of Selected Topics in Quantum Electronics, 2018.
| [] |
[
"Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages",
"Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages"
] | [
"Rahul Tangsali \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Aabha Pingle \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Aditya Vyawahare \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Isha Joshi \nSCTR's Pune Institute of Computer Technology\nPune\n",
"Raviraj Joshi \nIndian Institute of Technology Madras\nChennai 3 L3Cube, Pune\n"
] | [
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"SCTR's Pune Institute of Computer Technology\nPune",
"Indian Institute of Technology Madras\nChennai 3 L3Cube, Pune"
] | [] | The research on text summarization for low-resource Indian languages has been limited due to the availability of relevant datasets. This paper presents a summary of various deep-learning approaches used for the ILSUM 2022 Indic language summarization datasets. The ISUM 2022 dataset consists of news articles written in Indian English, Hindi, and Gujarati respectively, and their ground-truth summarizations. In our work, we explore different pre-trained seq2seq models and fine-tune those with the ILSUM 2022 datasets. In our case, the fine-tuned SoTA PEGASUS model worked the best for English, the fine-tuned IndicBART model with augmented data for Hindi, and again fine-tuned PEGASUS model along with a translation mapping-based approach for Gujarati. Our scores on the obtained inferences were evaluated using ROUGE-1, ROUGE-2, and ROUGE-4 as the evaluation metrics. | 10.48550/arxiv.2212.05702 | [
"https://export.arxiv.org/pdf/2212.05702v1.pdf"
] | 254,563,804 | 2212.05702 | 5122b1239af8c259190ff2725a7289ed52c5e879 |
Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages
Rahul Tangsali
SCTR's Pune Institute of Computer Technology
Pune
Aabha Pingle
SCTR's Pune Institute of Computer Technology
Pune
Aditya Vyawahare
SCTR's Pune Institute of Computer Technology
Pune
Isha Joshi
SCTR's Pune Institute of Computer Technology
Pune
Raviraj Joshi
Indian Institute of Technology Madras
Chennai 3 L3Cube, Pune
Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages
Abstractive text summarizationIndian LanguagesNLPPretrained models
Research on text summarization for low-resource Indian languages has been limited due to the scarcity of relevant datasets. This paper presents a summary of various deep-learning approaches used for the ILSUM 2022 Indic language summarization datasets. The ILSUM 2022 dataset consists of news articles written in Indian English, Hindi, and Gujarati, together with their ground-truth summaries. In our work, we explore different pre-trained seq2seq models and fine-tune them on the ILSUM 2022 datasets. In our case, the fine-tuned SoTA PEGASUS model worked best for English, the fine-tuned IndicBART model with augmented data for Hindi, and the fine-tuned PEGASUS model combined with a translation-and-mapping-based approach for Gujarati. The obtained inferences were evaluated using ROUGE-1, ROUGE-2, and ROUGE-4 as the evaluation metrics.
Introduction
Text summarization is a trending research domain that has gained popularity with a plethora of emerging use cases [1,2]. The last few decades have witnessed tremendous growth in NLP research, especially in text summarization. Text summarization has applications in a wide range of domains, including medicine, politics, and news. With the massive influx of news data in the form of newspaper articles, digital media, social media platforms, and so on, a need exists to automate the news summarization process so that useful insights can be obtained much faster than if human workers were employed for the same task. Effective summarization approaches investigated recently have hastened the process and made their mark in the NLP research community by achieving state-of-the-art (SoTA) accuracies.
There are three distinct types of text summarization techniques: extractive, abstractive, and hybrid. In extractive text summarization, key sentences and phrases are picked from the original document and integrated to generate a final summary [3]. This technique is easier to perform, but it may overlook the text's overall context or omit essential information; this type of summary is helpful for taking notes. Abstractive summarization analyses the full text and generates a summary based on the fundamental concepts of the text [4]. Such a summary is composed with an entirely different wording style than the original text; unlike in the extractive approach, sentences from the original text are not picked up directly. Abstractive summarization produces an intelligently curated summary using phrases that are not native to the input text. However, preparing abstractive summaries can be difficult and time-consuming even with deep learning methodologies, and their evaluation often requires human judgment. The hybrid approach utilizes both extractive and abstractive methods to generate the final summary [5,6].
With the emergence of NLP research worldwide, research on text summarization has been conducted in high-resource languages such as English as well as in languages of the Indian subcontinent. Hindi and Gujarati are two of the most widely spoken Indian languages. Hindi is the most spoken language in India; it is the official language in 9 states and 3 union territories, an additional official language in 3 other states, and one of the 22 scheduled languages of the Republic of India. Hindi is spoken by approximately 615 million people worldwide and was recorded as the third most spoken language in the world as of 2019. Gujarati is an Indo-Aryan language spoken predominantly by the Gujarati people in the Indian state of Gujarat. It is the sixth most spoken language in India and is spoken by around 55 million people worldwide. Although Hindi and Gujarati are spoken by a considerable share of the world's population, NLP research in these languages has lagged behind that in high-resource languages.
Text summarization research stretches back to 1958 when the first paper on the subject was published [7]. Since then, various methodologies have been presented for both abstractive and extractive text summarization in English. These include statistical-based, clustering-based, graph-based, semantic-based, machine learning, and deep learning-based approaches. Deep learning-based approaches, which focus on training neural nets, include work done by Mohsen et al. [8], Xu [9], Alami et al. [10], and Anand and Wagh [11]. In addition, encoder-decoder models have been proposed, with attention mechanisms incorporated in several proposed methodologies.
In comparison to English, less research has been done on text summarization in Hindi and Gujarati. There is a significant shortage of dataset resources, preprocessing methodologies, and other research for many Indian languages, especially Gujarati. This motivated us to develop system pipelines that perform efficient extractive summarization for articles written in Hindi and Gujarati and achieve decent accuracy on the generated summaries. Many organizations are extending their services to Indian-language speakers, and we aim to solve a small part of this challenge by performing summarization research in two of the most widely spoken languages in India.
We implement pre-trained models [12] and tweak the conventional pipelines, along with fine-tuning on new data, to obtain better results than previously implemented systems. For English, we implement the PEGASUS [13], BRIO [14], and T5 [15] models, and we also leverage the SentenceBERT model for extractive summarization purposes [16,17]. For Hindi, we implement fine-tuning of IndicBART [18] with a right-shift operation (augmenting the original dataset by shifting the last sentence of each article to the top), as well as the XL-Sum [19] and mBART [20] models. For Gujarati, we implement extractive summarization by translating each sentence of the Gujarati article to English, creating a corresponding mapping between the Gujarati and translated English sentences, and applying the fine-tuned English PEGASUS model to the resulting English article to generate an English summary. The generated extractive summary in English is then translated back to Gujarati through a back-mapping mechanism to obtain the final Gujarati summary. We also fine-tuned the XL-Sum and mBART models for Gujarati article summarization.
Related Work
Text summarization research dates back to 1958 when the first article on the topic [7] was published. Since then, numerous rule-based and deep learning-based techniques have been presented. Rule-based approaches include work done by Baxendale [21], which selects sentences for a summary based on word position and heading of the article, and that by Oliviera in 2016 [22], which used scoring criteria such as lexical similarity, sentence centrality, text rank, and so on for text summarization.
Research on deep-learning approaches for text summarization picked up the pace when encoder-decoder [23] and attention-based architectures [24] were proposed. Yu [25] suggested methods for creating one-sentence summaries of news stories that use recurrent neural network models like LSTM [26] and GRU [27], as well as with/without attention. In recent years, fine-tuning pre-trained models using domain-specific datasets has been the dominant paradigm in text summarization research. Pre-trained models which implement the BART [28], T5 [15], etc. architectures have been proposed, which are available in the Hugging Face library. Recent research includes the implementation of an importance-based ordering approach implemented by Zhao et al., a cascade approach to abstractive summarization with content selection and fusion proposed by Lebanoff et al [29]., and usage of prompt-based models such as GPT-3 [30], PaLM [31], T0 [32], etc. Many times, articles considered for summarization can be multidocument in nature. Wang et al. [33] suggested a task-specific architecture for multi-document summarization by combining numerous texts into a single graph. Zhong et al. [34] implemented a semantic-based framework for the same.
In the case of Hindi and Gujarati, there has been relatively little research on text summarization. K. Vimal Kumar et al. [35] suggested a graph-based method for summarising text in Hindi. Gulati et al. [36] developed a unique fuzzy inference method for summarising multi-source Hindi literature. Gupta et al. [37] suggested a rule-based method for Hindi that included dead phrase and deadwood reduction strategies. Jain et al. [38] presented a real coded genetic algorithm for Hindi text summarization. For Gujarati, Shah and Patel suggested Gujarati Text Summarizer, which uses Textblob 4 and Gensim 5 to construct summaries from Gujarati text. Patel examines the preprocessing phase for text summarization of Gujarati texts, emphasizing related issues and appropriate solutions [39].
Dataset Description
The ILSUM 2022 datasets, as provided, were organized in a CSV format, with multiple columns describing each record in the file. These datasets were built using article and headline pairs from several leading newspapers across India. The columns in the CSV files were: "id", denoting the unique identifier of the article; "Link", the hyperlink from which the article was extracted; "Heading", the heading/title of the article; "Article", the actual content of the article; and "Summary", the gold extractive summary of the article. Each article consisted, on average, of about 9 to 10 sentences, and the extractive summaries were, on average, a single sentence long. The validation and test CSV files contained only two columns, "id" and "Article", and a summary had to be produced for the text in the "Article" column. The dataset content was raw, with unnecessary punctuation and delimiters hindering the proposed pipeline, hence the need for efficient data cleaning. Table 1 summarizes the contents of the training, validation, and test datasets in terms of the number of records in each set.
Data Preparation
The datasets were quite raw, with redundant punctuation and delimiters in the content. It was therefore necessary to remove those so that the cleaned data could be tokenized and passed to the model. In addition, we remove stopwords present in the text [40] to keep the model from attending to uninformative tokens, and we convert the text to lowercase to generalize the model's perception of the text. Of the five columns present in the CSV file, the "id", "Link", and "Heading" columns were seemingly redundant, so we filtered them out and trained the model only on the articles and their corresponding extractive summaries.
We use the SentencePiece tokenizer for tokenizing the English, Hindi, and Gujarati article texts. SentencePiece is an unsupervised text tokenizer and detokenizer intended specifically for neural-network-based text generation systems, where the vocabulary size is predetermined before neural model training. It extends direct training from raw sentences to incorporate subword units (e.g., byte-pair encoding (BPE) [41]) and the unigram language model. This tokenizer can be invoked implicitly through the Hugging Face API during model fine-tuning, so that both tokenization and detokenization are carried out without explicit code. First, a vocabulary of the common words across all articles is created and then used to map the text to a vectorized format. Additionally, we pad to the maximum sequence length in the batch so that only sequences of uniform length are passed on to the model [42].
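As an illustration, a minimal Hugging Face sketch of this tokenization step; the checkpoint name and the length limit are illustrative rather than the exact values used in our runs.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")  # SentencePiece-based

batch = tokenizer(
    ["first article text ...", "second article text ..."],
    padding="longest",      # pad to the longest sequence in the batch
    truncation=True,
    max_length=1024,        # illustrative limit
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```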
For the translation-and-mapping-based approach that we implement for Gujarati, we first split the article into sentences using the full stop as a delimiter. Each sentence is then translated to English using the Google Translate API, and a mapping is created between each original Gujarati sentence and its translated English sentence. Finally, the English sentences are concatenated to obtain the full translated article in English.
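A simplified sketch of this mapping step is shown below. The `translate` argument stands for any machine-translation client (we used the Google Translate API); its signature here is our own simplification.

```python
def translate_with_mapping(gujarati_article, translate):
    # translate(text, src, dest) -> str is an assumed MT client wrapper.
    sentences = [s.strip() for s in gujarati_article.split(".") if s.strip()]
    en_to_gu, english_sentences = {}, []
    for gu in sentences:
        en = translate(gu, src="gu", dest="en")
        en_to_gu[en] = gu
        english_sentences.append(en)
    # Concatenated English article fed to the English summarizer,
    # plus the mapping used later to back-map the summary.
    return " ".join(english_sentences), en_to_gu

def back_map_summary(english_summary_sentences, en_to_gu):
    # Replace each selected English sentence with its Gujarati original.
    return " ".join(en_to_gu[en] for en in english_summary_sentences)
```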
Systems implemented
For English
Fine-tuning PEGASUS
PEGASUS stands for "Pre-training with Extracted Gap-sentences for Abstractive Summarization"; the paper was presented at the 2020 International Conference on Machine Learning by Zhang et al. [43]. By masking entire sentences from the text and then appending the gap sentences, the PEGASUS model yields a pseudo-summary of the input text. The model picks sentences that are essential to the document and removes or masks them from the input. It is then tasked with recovering those vital sentences, which it accomplishes by generating them as an output sequence using only the document's non-essential parts. The advantage of this technique is its self-supervision: the model may generate as many training instances as there are documents, without the need for human annotation, which is often a bottleneck in fully supervised systems.
We fine-tuned the "pegasus-large" model available on Hugging Face with the training dataset for English. This model is pre-trained on 350 million web pages and 1.5 billion news articles, making its accuracy state-of-the-art in text summarization research. The Hugging Face transformers library was used for fine-tuning, which simplified the implementation. Since the training data was large enough, we fine-tuned the model for 1 epoch with a weight decay of 0.01, which took about 3.5 hours. The resulting inferences yielded a significant increase in ROUGE scores compared to those obtained with the pre-trained-only version of the model.
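A condensed sketch of this fine-tuning setup with the transformers Trainer API is shown below; `train_ds` and `val_ds` stand for the tokenized ILSUM splits, and the batch size is illustrative.

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large")

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-ilsum-en",
    num_train_epochs=1,             # a single pass, as described above
    weight_decay=0.01,
    per_device_train_batch_size=2,  # illustrative
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,         # tokenized ILSUM training split (assumed)
    eval_dataset=val_ds,            # tokenized validation split (assumed)
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```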
To further increase the ROUGE scores, we experimented with the max-tokens parameter of the model during inference, which bounds the length of the generated summary. The organizers had specified 75 as the standard value. We experimented with a range of max-tokens values around that figure and found max-tokens = 65 to give the highest ROUGE scores.
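This inference-time experiment amounts to sweeping the `max_length` (max-tokens) argument of `generate`; `article` stands for one test document, and the beam size is illustrative.

```python
inputs = tokenizer(article, truncation=True, return_tensors="pt")
for max_tokens in (55, 60, 65, 70, 75):
    ids = model.generate(**inputs, max_length=max_tokens, num_beams=4)
    print(max_tokens, tokenizer.decode(ids[0], skip_special_tokens=True))
```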
We also experimented with augmenting the dataset by adding noise to each record so that the model could predict well despite noisy text. The resulting ROUGE scores, however, fell short of the highest score we had obtained.
Fine-tuning BRIO
BRIO stands for "Bringing Order to Abstractive Summarization", the paper presented in 2022 by Liu et al. [14]. Maximum Likelihood Estimation (MLE) [44] is often used to train summarization models. MLE presupposes that an ideal model would allocate full probability mass to the reference summary, which may result in poor performance when a model must compare numerous candidates that vary from the reference. Instead of relying on MLE training, BRIO has a contrastive learning component, enabling abstractive models to more precisely assess the likelihood of system-generated summaries.
We fine-tune the "Yale-LILY/brio-cnndm-uncased" version of the BRIO model available on Hugging Face on the English dataset. Since BRIO is an extension of the BART model, we apply BART-based tokenization to the input text, which uses SentencePiece internally. We fine-tuned the model on the English dataset for 1 epoch with a weight decay of 0.01, and further experimented with adding noisy text to each training record. The model's performance, however, was not as good as that of the fine-tuned PEGASUS model mentioned earlier.
Leveraging SentenceBERT for extractive summarization
The approach was a tweaked implementation derived from the paper "Fine-tune BERT for Extractive Summarization" presented by Liu in 2019 [45]. Here, extractive summarization is approached as a classification problem: a score between 0 and 1 is predicted for each sentence in a text, i.e., whether or not it belongs to the summary. The algorithm then creates a summary from these scores by picking the highest-scoring sentences according to certain relevant parameters.
We extracted sentences using the SpaCy 11 library for each article in the training dataset. For every sentence in each training example, we assigned a label of 1 if it belongs to the final extractive summary, else 0. The original dataset was unbalanced, as most sentences are unlikely to be in the summary, so we augmented it with new examples to balance positive and negative instances. This annotated data, along with the labels, constitutes the input to our BERT model. We fine-tuned the "sentence-transformers/all-mpnet-base-v2" model 12, since it proved to be the fastest among all models available in the sentence-transformers library. We set the batch size for training to 4, the maximum sequence length of the generated summary to 512, and the learning rate to 0.00001. We fine-tuned the SentenceBERT model for 3 epochs, which took approximately 4.5 hours. The original pre-trained BERT model is modified by adding dropout and a dense layer on top to produce the final output label. Finally, we obtain inferences from the model by taking the two sentences with the highest scores, which gave an average summary length of around 70; we add only those sentences whose length is more than 25 characters to the summary.
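The sentence extraction, labelling and class-balancing step can be sketched as below. The membership test (an exact match of the sentence inside the reference summary) and the train_pairs variable, assumed to hold (article, summary) string pairs, are simplifying assumptions.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def label_sentences(article, reference_summary):
    # label 1 if the sentence appears in the extractive reference summary
    return [(s.text.strip(), 1 if s.text.strip() in reference_summary else 0)
            for s in nlp(article).sents if s.text.strip()]

examples = []
for article, summary in train_pairs:
    examples.extend(label_sentences(article, summary))

# balance the classes: most sentences do not belong to the summary
positives = [e for e in examples if e[1] == 1]
negatives = [e for e in examples if e[1] == 0]
balanced = positives + random.sample(negatives, k=min(len(positives), len(negatives)))
random.shuffle(balanced)
```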
Fine-tuning T5
The Text-to-Text Transfer Transformer (T5) paradigm recasts all NLP tasks into a single text-to-text format with text strings as input and output. During T5 pre-training, the original text of the input-output pairs is corrupted by introducing noise.
We fine-tuned the 'mrm8488/t5-base-finetuned-summarize-news' version 13 of the T5 model, which is pre-trained on 4515 English news articles, on our dataset. We applied T5 tokenization to our dataset and fine-tuned the model for 20 epochs; the maximum length of the summary during inference was set to 75.
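Inference with the T5 checkpoint can be sketched as follows. The "summarize: " task prefix follows the usual T5 convention, and the beam-search settings are illustrative; only max_length=75 is taken from the text above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

t5_name = "mrm8488/t5-base-finetuned-summarize-news"
t5_tokenizer = AutoTokenizer.from_pretrained(t5_name)
t5_model = AutoModelForSeq2SeqLM.from_pretrained(t5_name)

def t5_summarize(article: str) -> str:
    inputs = t5_tokenizer("summarize: " + article, max_length=512,
                          truncation=True, return_tensors="pt")
    ids = t5_model.generate(**inputs, max_length=75,  # as set in the text
                            num_beams=4, early_stopping=True)
    return t5_tokenizer.decode(ids[0], skip_special_tokens=True)
```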
For Hindi
Fine-tuning IndicBART
We used IndicBART [18], a multilingual, sequence-to-sequence pre-trained model. The model focuses primarily on Indic languages, along with English. IndicBART is based on the mBART architecture, provides support for 11 Indian languages, and can be used to build various natural language generation applications for tasks like machine translation and summarization.
We fine-tuned the 'ai4bharat/IndicBART' version 14 of IndicBART available on Hugging Face on the training dataset for Hindi. The training data was augmented by adding noise to each record of the dataset, and the model gave better results after training on the augmented data. The model was fine-tuned for 2 epochs. We also experimented with the maximum length parameter while generating the inferences; inferences obtained with 'max length' set to 60 gave the best ROUGE scores.
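Generation with IndicBART can be sketched as below; the tokenizer options and the <2hi> language tag follow the usage documented on the model card, hindi_article is an assumed input string, and max_length=60 matches the setting reported above. Supplying the language tag as decoder_start_token_id performs the decoder-side right shift mentioned in the results.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ib_tokenizer = AutoTokenizer.from_pretrained(
    "ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True)
ib_model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")

bos_id = ib_tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = ib_tokenizer._convert_token_to_id_with_added_voc("</s>")
hin_id = ib_tokenizer._convert_token_to_id_with_added_voc("<2hi>")  # Hindi tag

inputs = ib_tokenizer(hindi_article + " </s> <2hi>",
                      add_special_tokens=False, return_tensors="pt")
out = ib_model.generate(inputs.input_ids, max_length=60, num_beams=4,
                        pad_token_id=ib_tokenizer.pad_token_id,
                        bos_token_id=bos_id, eos_token_id=eos_id,
                        decoder_start_token_id=hin_id)
summary = ib_tokenizer.decode(out[0], skip_special_tokens=True,
                              clean_up_tokenization_spaces=False)
```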
Fine-tuning XL-Sum
The paper titled 'XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages' [19] presents a multilingual dataset as well as an mT5 [46] checkpoint fine-tuned on it, and reports experiments on multilingual and low-resource summarization tasks; the released checkpoint was fine-tuned on the 45 language subsets of the XL-Sum dataset.
We used the 'csebuetnlp/mT5_multilingual_XLSum' checkpoint 15 available on Hugging Face for our summarization task. To get the best results, we fine-tuned this checkpoint on the given Hindi training dataset for 2 epochs. This method gave ROUGE scores comparable to the IndicBART scores.
Fine-tuning mBART
Pre-trained on multilingual corpora containing 25 languages, mBART (Multilingual Denoising Pre-training for Neural Machine Translation) [20] can be used for a wide range of tasks, including machine translation and summarization. We tried the "facebook/mbart-large-cc25" 16, "GiordanoB/mbart-large-50-finetuned-summarization-V2" 17 and "ARTeLab/mbart-summarization-mlsum" 18 pre-trained models on the dataset.
The results obtained differed only slightly; the "facebook/mbart-large-cc25" model gave us the best ROUGE scores, hence we fine-tuned that model on the dataset for 1 epoch.
For Gujarati
Translation+Mapping+PEGASUS
We implemented the PEGASUS model for Gujarati by fine-tuning the "pegasus-large" model available on Hugging Face. As this model was not originally trained for the Gujarati language, we implemented translation and mapping steps to use it for generating inferences on our Gujarati dataset.
First, we translated the Gujarati validation dataset to English and simultaneously stored the mapping between the English-translated sentence and the Gujarati sentence for each article in a dictionary. For translation, we used the GoogleTranslator module provided by deep-translator 19 library. Then, we generated the inferences on the English-translated validation dataset using the PEGASUS model fine-tuned for English, the max-tokens parameter for which was set to 75 initially. Finally, the generated inferences were back-mapped to give the original Gujarati sentences. As the dataset provided was extractive, we performed the mapping and back-mapping steps mainly to keep the summaries extractive in nature. It should be noted that the translation process was only used once, and the original Gujarati text was retrieved using the mapping developed during the Gujarati to English translation process.
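A minimal sketch of the translate/map/back-map procedure with the deep-translator library is shown below. The sentence split on the full stop follows the description above; the back-mapping simply looks up every translated sentence that the (extractive-leaning) English summary reproduces, which is a simplification of the actual procedure.

```python
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="gu", target="en")

def translate_with_mapping(gujarati_article: str):
    sentences = [s.strip() for s in gujarati_article.split(".") if s.strip()]
    mapping = {}  # English sentence -> original Gujarati sentence
    for gu_sentence in sentences:
        en_sentence = translator.translate(gu_sentence)
        mapping[en_sentence] = gu_sentence
    return " ".join(mapping.keys()), mapping

def back_map(english_summary: str, mapping: dict) -> str:
    picked = [gu for en, gu in mapping.items() if en in english_summary]
    return " ".join(picked)
```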
To further increase the ROUGE scores, we experimented with the max-tokens parameter of the model. We observed that the English-translated sentences were longer than the original Gujarati sentences. Therefore, we tested by increasing the max-tokens parameter, and we inferred that max-tokens set to 85 provided the highest ROUGE scores.
Fine-tuning mBART
For this approach, we used the "facebook/mbart-large-cc25" 20 model. After applying the mBART tokenizer to the given Gujarati dataset, we fine-tuned the model for one epoch. This methodology gave us competent ROUGE scores, which we further improved by augmenting the dataset: noise was added to each record to create a new record, so that the model could generalize better.
The ROUGE scores obtained after fine-tuning the mBART model on this dataset were comparable to the Translation+Mapping+PEGASUS model.
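A hedged sketch of preparing mbart-large-cc25 for Gujarati-to-Gujarati fine-tuning is given below; gu_IN is one of the 25 language codes the tokenizer supports, and gujarati_articles/gujarati_summaries are assumed lists of strings. In practice this batch preparation would sit inside a full training loop or Trainer.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

mbart_name = "facebook/mbart-large-cc25"
mbart_tokenizer = MBartTokenizer.from_pretrained(mbart_name,
                                                 src_lang="gu_IN",
                                                 tgt_lang="gu_IN")
mbart_model = MBartForConditionalGeneration.from_pretrained(mbart_name)

batch = mbart_tokenizer(gujarati_articles, text_target=gujarati_summaries,
                        max_length=512, truncation=True, padding=True,
                        return_tensors="pt")
loss = mbart_model(**batch).loss  # one training step's loss
loss.backward()
```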
Fine-tuning XL-Sum
We used the XL-Sum model, an mT5 model fine-tuned on the multilingual XL-Sum dataset, via the checkpoint 'csebuetnlp/mT5_multilingual_XLSum' available on Hugging Face, to generate inferences on the Gujarati dataset. The model was fine-tuned for 5 epochs with the max-tokens parameter set to 75.
Evaluation Metrics
In our study, the ROUGE score, which stands for Recall-Oriented Understudy for Gisting Evaluation, was chosen as the evaluation metric [47]. We report ROUGE-1, ROUGE-2, and ROUGE-4 scores: ROUGE-1 measures the unigram overlap between the candidate and reference summaries, ROUGE-2 the bigram overlap, and ROUGE-4 the 4-gram overlap. All ROUGE scores lie between 0 and 1, with scores closer to 1 indicating closer agreement with the gold summaries.
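A sketch of the scoring with the rouge-score package is shown below; the package accepts "rougeN" for arbitrary n-gram orders, so ROUGE-4 can be requested directly. Averaging F-measures over the corpus is an assumption about the aggregation used.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rouge4"],
                                  use_stemmer=True)

def average_rouge(predictions, references):
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rouge4": 0.0}
    for pred, ref in zip(predictions, references):
        scores = scorer.score(ref, pred)
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {key: total / len(predictions) for key, total in totals.items()}
```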
Results
Tables 2, 3 and 4 report the results obtained by our approaches on the validation data, and Table 5 reports the test-set results of the best validation models. We evaluated the performance of the models using ROUGE scores as the evaluation metric. The best-performing approaches were the fine-tuned PEGASUS model with max-tokens set to 65 for English, the IndicBART model with the right-shift operation for Hindi, and the Translation+Mapping+PEGASUS approach for Gujarati; these approaches achieved the best scores on both the validation and test datasets.
Conclusion and Future Work
Thus, we have presented the findings of our research on the ILSUM 2022 datasets. We experimented with text summarization of news articles written in English, Hindi, and Gujarati, implementing pre-trained models and, for some systems, data-augmentation operations. Finally, we evaluated ROUGE scores on the inferences obtained from each system we trained, and our best-performing models achieved scores close to the state of the art. We conclude from this analysis that there is considerable scope for improvement in research on low-resource Indian languages such as Gujarati compared to English: the research foundation for English text summarization is robust, with many pre-trained models and attention-based mechanisms to leverage, whereas this foundation will have to be scaled up drastically in the coming years for Hindi and Gujarati.
In the future, we plan to extend our work to larger datasets, especially for Hindi and Gujarati, as we believe that the scarcity of clean, well-formatted datasets is one of the main barriers behind the gap between text summarization research in English and in low-resource Indian languages. Furthermore, we plan to run our approaches on high-end GPUs and to use better preprocessing and tokenization techniques to narrow this research gap.
Table 1: Details of ILSUM 2022 datasets

Split        English   Hindi   Gujarati
Train        12565     7958    8731
Validation   898       569     606
Test         4487      2842    3020
Table 2: Results obtained on the validation set (English)

Approach Implemented                        ROUGE-1   ROUGE-2   ROUGE-4
Fine-tuned PEGASUS                          0.5618    0.4509    0.4218
Fine-tuned BRIO                             0.4878    0.3723    0.3383
SentenceBERT leveraged for summarization    0.4639    0.3421    0.3156
Fine-tuned T5                               0.4851    0.3588    0.3226
Table 3: Results obtained on the validation set (Hindi)

Approach Implemented    ROUGE-1   ROUGE-2   ROUGE-4
Fine-tuned IndicBART    0.5536    0.4572    0.4162
Fine-tuned XL-Sum       0.5281    0.4098    0.3370
Fine-tuned mBART        0.5269    0.4271    0.3806
Table 4: Results obtained on the validation set (Gujarati)

Approach Implemented           ROUGE-1   ROUGE-2   ROUGE-4
Translation+Mapping+PEGASUS    0.2028    0.1155    0.0835
Fine-tuned mBART               0.1924    0.1095    0.0723
Fine-tuned XL-Sum              0.1718    0.0718    0.0361
Table 5: Test set results for the best models

Language   Model description              ROUGE-1   ROUGE-2   ROUGE-4
English    Fine-tuned PEGASUS             0.5568    0.4430    0.4123
Hindi      Fine-tuned IndicBART           0.5559    0.4547    0.4136
Gujarati   Translation+Mapping+PEGASUS    0.2087    0.1192    0.0838
1 https://huggingface.co/l3cube-pune/english-pegasus-summary
2 https://huggingface.co/l3cube-pune/hindi-bart-summary
3 https://huggingface.co/l3cube-pune/gujarati-bart-summary
4 https://textblob.readthedocs.io/en/dev/
5 https://radimrehurek.com/gensim/
6 https://github.com/google/sentencepiece
7 https://huggingface.co/
8 https://cloud.google.com/translate/
9 https://huggingface.co/google/pegasus-large
10 https://huggingface.co/Yale-LILY/brio-cnndm-uncased
11 https://spacy.io/
12 https://huggingface.co/sentence-transformers/all-mpnet-base-v2
13 https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news
14 https://huggingface.co/ai4bharat/IndicBART
15 https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum
16 https://huggingface.co/facebook/mbart-large-cc25
17 https://huggingface.co/GiordanoB/mbart-large-50-finetuned-summarization-V2
18 https://huggingface.co/ARTeLab/mbart-summarization-mlsum
19 https://github.com/nidhaloff/deep-translator
20 https://huggingface.co/facebook/mbart-large-cc25
Acknowledgements

This research was accomplished as part of the L3Cube Pune mentoring program. We convey our gratitude to our L3Cube mentors for their continuous assistance and encouragement.

References

[1] A. Vhatkar, P. Bhattacharyya, K. Arya, Survey on text summarization, 2020.
N. Moratanch, C. Gopalan, A survey on abstractive text summarization, 2016, pp. 1-7. doi:10.1109/ICCPCT.2016.7530193.
M. Kirmani, N. Hakak, M. Mohd, M. Mohd, Hybrid text summarization: A survey, in: Proceedings of SoCTA 2017, 2019, pp. 63-73. doi:10.1007/978-981-13-0589-4_7.
D. Sahoo, A. Bhoi, R. C. Balabantaray, Hybrid approach to abstractive summarization, Procedia Computer Science 132 (2018) 1228-1237. doi:10.1016/j.procs.2018.05.038.
H. P. Luhn, The automatic creation of literature abstracts, IBM Journal of Research and Development 2 (1958) 159-165. doi:10.1147/rd.22.0159.
F. Mohsen, J. Wang, K. Al-Sabahi, A hierarchical self-attentive neural extractive summarizer via reinforcement learning (HSASRL), Applied Intelligence 50 (2020) 2633-2646.
J. Xu, G. Durrett, Neural extractive text summarization with syntactic compression, arXiv preprint arXiv:1902.00863 (2019).
N. Alami, M. Meknassi, N. En-nahnahi, Enhancing unsupervised neural networks based text summarization with word embedding and ensemble learning, Expert Systems with Applications 123 (2019) 195-211.
D. Anand, R. Wagh, Effective deep learning approaches for summarization of legal texts, Journal of King Saud University-Computer and Information Sciences (2019).
X. Han, Z. Zhang, N. Ding, Y. Gu, X. Liu, Y. Huo, J. Qiu, Y. Yao, A. Zhang, L. Zhang, W. Han, M. Huang, Q. Jin, Y. Lan, Y. Liu, Z. Liu, Z. Lu, X. Qiu, R. Song, J. Tang, J.-R. Wen, J. Yuan, W. X. Zhao, J. Zhu, Pre-trained models: Past, present and future, AI Open 2 (2021) 225-250. doi:10.1016/j.aiopen.2021.08.002.
J. Zhang, Y. Zhao, M. Saleh, P. J. Liu, PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization, in: Proceedings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020.
Y. Liu, P. Liu, D. Radev, G. Neubig, BRIO: Bringing order to abstractive summarization, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 2022, pp. 2890-2903. doi:10.18653/v1/2022.acl-long.207.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, Exploring the limits of transfer learning with a unified text-to-text transformer, Journal of Machine Learning Research 21 (2020) 1-67.
N. Reimers, I. Gurevych, Sentence-BERT: Sentence embeddings using siamese BERT-networks, arXiv preprint arXiv:1908.10084 (2019).
Y. Liu, Fine-tune BERT for extractive summarization, arXiv preprint arXiv:1903.10318 (2019).
R. Dabre, H. Shrotriya, A. Kunchukuttan, R. Puduppully, M. Khapra, P. Kumar, IndicBART: A pre-trained model for indic natural language generation, in: Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, 2022, pp. 1849-1863. doi:10.18653/v1/2022.findings-acl.145.
T. Hasan, A. Bhattacharjee, M. S. Islam, K. Mubasshir, Y.-F. Li, Y.-B. Kang, M. S. Rahman, R. Shahriyar, XL-Sum: Large-scale multilingual abstractive summarization for 44 languages, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, 2021, pp. 4693-4703. doi:10.18653/v1/2021.findings-acl.413.
Y. Liu, J. Gu, N. Goyal, X. Li, S. Edunov, M. Ghazvininejad, M. Lewis, L. Zettlemoyer, Multilingual denoising pre-training for neural machine translation, Transactions of the Association for Computational Linguistics 8 (2020) 726-742.
P. B. Baxendale, Machine-made index for technical literature: an experiment, IBM Journal of Research and Development 2 (1958) 354-361. doi:10.1147/rd.24.0354.
H. Oliveira, R. Ferreira, R. Lima, R. D. Lins, F. Freitas, M. Riss, S. J. Simske, Assessing shallow sentence scoring techniques and combinations for single and multi-document summarization, Expert Systems with Applications 65 (2016) 68-86. doi:10.1016/j.eswa.2016.08.030.
K. Aitken, V. V. Ramasesh, Y. Cao, N. Maheswaranathan, Understanding how encoder-decoder architectures attend, 2021. arXiv:2110.15253. doi:10.48550/ARXIV.2110.15253.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017.
H. Yu, Summarization with attention-based deep recurrent neural networks, 2017.
S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (1997) 1735-1780. doi:10.1162/neco.1997.9.8.1735.
J. Chung, C. Gulcehre, K. Cho, Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling, 2014. arXiv:1412.3555. doi:10.48550/ARXIV.1412.3555.
M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, L. Zettlemoyer, BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 7871-7880. doi:10.18653/v1/2020.acl-main.703.
L. Lebanoff, F. Dernoncourt, D. S. Kim, W. Chang, F. Liu, A cascade approach to neural abstractive summarization with content selection and fusion, in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Suzhou, China, 2020, pp. 529-535.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, et al., Language models are few-shot learners, in: Advances in Neural Information Processing Systems, volume 33, Curran Associates, Inc., 2020, pp. 1877-1901.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, et al., PaLM: Scaling language modeling with pathways, 2022. arXiv:2204.02311. doi:10.48550/ARXIV.2204.02311.
V. Sanh, A. Webson, C. Raffel, S. Bach, L. Sutawika, Z. Alyafeai, et al., Multitask prompted training enables zero-shot task generalization, in: International Conference on Learning Representations, 2022.
D. Wang, P. Liu, Y. Zheng, X. Qiu, X. Huang, Heterogeneous graph neural networks for extractive document summarization, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 6209-6219. doi:10.18653/v1/2020.acl-main.553.
M. Zhong, P. Liu, Y. Chen, D. Wang, X. Qiu, X. Huang, Extractive summarization as text matching, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 2020, pp. 6197-6208. doi:10.18653/v1/2020.acl-main.552.
K. V. Kumar, D. Yadav, A. Sharma, Graph based technique for hindi text summarization, in: J. K. Mandal, S. C. Satapathy, M. Kumar Sanyal, P. P. Sarkar, A. Mukhopadhyay (Eds.), Information Systems Design and Intelligent Applications, Springer India, New Delhi, 2015, pp. 301-310.
A. N. Gulati, S. D. Sawarkar, A novel technique for multidocument hindi text summarization, in: 2017 International Conference on Nascent Technologies in Engineering (ICNTE), 2017, pp. 1-6.
M. Gupta, N. K. Garg, Text summarization of hindi documents using rule based approach, in: 2016 International Conference on Micro-Electronics and Telecommunication Engineering (ICMETE), 2016, pp. 366-370.
A. Jain, A. Arora, J. Morato, D. Yadav, V. Kumar K., Automatic text summarization for hindi using real coded genetic algorithm, Applied Sciences 12 (2022). doi:10.3390/app12136584.
P. Patel, Pre-processing phase of text summarization based on gujarati language, International Journal of Innovative Research in Computer Science and Technology (2014).
S. Sarica, J. Luo, Stopwords in technical language processing, PLOS ONE 16 (2021) e0254937. doi:10.1371/journal.pone.0254937.
R. Sennrich, B. Haddow, A. Birch, Neural machine translation of rare words with subword units, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, 2016, pp. 1715-1725. doi:10.18653/v1/P16-1162.
M. Dwarampudi, N. V. S. Reddy, Effects of padding on LSTMs and CNNs, 2019. arXiv:1903.07288. doi:10.48550/ARXIV.1903.07288.
J. Zhang, Y. Zhao, M. Saleh, P. J. Liu, PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization, 2019. arXiv:1912.08777. doi:10.48550/ARXIV.1912.08777.
X. Wang, W. Chen, M. Saxon, W. Y. Wang, Counterfactual maximum likelihood estimation for training deep networks, in: Advances in Neural Information Processing Systems, volume 34, Curran Associates, Inc., 2021, pp. 25072-25085.
Y. Liu, Fine-tune BERT for extractive summarization, ArXiv abs/1903.10318 (2019).
L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, C. Raffel, mT5: A massively multilingual pre-trained text-to-text transformer, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 2021, pp. 483-498. doi:10.18653/v1/2021.naacl-main.41.
C.-Y. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization Branches Out, Barcelona, Spain, 2004, pp. 74-81.
Writing user personas with Large Language Models Testing phase 6 of a Thematic Analysis of semi-structured interviews
Stefano De Paoli [email protected]
Abertay University
Sociology Division
Keywords: Large Language Models, Thematic Analysis, User Personas, Themes, Writing
Abstract

The goal of this paper is establishing if we can satisfactorily perform a Thematic Analysis (TA) of semi-structured interviews using a Large Language Model (more precisely GPT3.5-Turbo). Building on previous work by the author, which established an embryonal process for conducting a TA with the model, this paper will perform a further analysis and then cover the last phase of a TA (phase 6), which entails the writing up of the results. This phase was not covered by the previous work. In particular, the focus will be on using the results of a TA done with the LLM on a dataset of user interviews for writing user personas, with the model building on the TA to produce the personas narratives. User personas are models of real users, usually built from a data analysis such as interviews with a sample of users, and are tools often used in User Centered Design processes. The paper shows that the model can build basic user personas of acceptable quality derived from themes, and that the model can serve for the generation of ideas for user personas.
Introduction
The problem of this paper is to understand if we can use a Large Language Model (specifically GPT3.5-Turbo) to perform a Thematic Analysis (TA) of semi-structured interviews, also focusing on the last phase of a TA, which entails the writing up of the results. This work is conducted within a process of Human-AI collaboration, a concept in the field of Artificial Intelligence (AI) that assumes that humans and AI systems can collaborate to achieve goals and tasks (see e.g. Vössing et al., 2022, Siemon, 2021 and Jiang et al. 2021, specifically for qualitative analysis reflections on this). Elsewhere, I presented an embryonal, and perhaps still crude, process for conducting a TA with the same LLM, following some of the key phases of a TA proposed by Braun & Clarke (2006) in their seminal work. These 6 phases include the human analyst(s): familiarisation with the data (phase 1), generation of initial codes, i.e. relevant features in the data (phase 2, done inductively in my case), the generation of themes, or patterns in the data, based on sorting the codes into themes (phase 3), confirming the validity of themes by reviewing them (phase 4), the renaming and summarising of themes to confirm their validity (phase 5), and the writing up of the results, as an integral part of TA (phase 6). In my previous work I argued that the LLM can only reasonably perform phases 2-5 of a TA. Phase 1 and phase 6 were not tackled directly, either because the memory and token limits of the LLMs do not allow the LLM to perform the work (phase 1, familiarisation) or because of potential ethical issues (phase 6, writing the report). In this paper, I would like to tackle some aspects associated with phase 6 of a TA and the process of writing up the results.
The focus of this paper is methodological: establishing the validity of the use of LLMs for the conduction of qualitative analysis of data (specifically TA), with a focus on semi-structured interviews and largely in the context of social sciences. As I discussed in my previous paper, one of the epistemological challenges is that qualitative analysis is normally done by humans through interpretation of meaning, and this is something that LLMs are not necessarily capable of, as they operate on language from a computational, logical and structural perspective (Floridi, 2023). Nonetheless, it has been possible to show that an LLM can perform something looking like a basic inductive TA with at least some degree of validity, through a qualitative comparison with the work of human analysts (De Paoli, 2023). Other authors have used Cohen's kappa to confirm inter-rater reliability between an LLM and human coders, based on deductive coding processes (Xiao et al., 2023; Gao et al., 2023). TA can indeed be done with a deductive approach (where the grid of analysis is decided beforehand) or an inductive one, where codes and themes are generated bottom-up from the data. I am interested in performing an inductive approach to TA with LLMs. It therefore remains to be assessed whether we can also cover phase 6 of the TA. For Clarke & Braun (2013): "Writing is an integral element of the analytic process in TA (and most qualitative research). Writing-up involves weaving together the analytic narrative and (vivid) data extracts to tell the reader a coherent and persuasive story about the data and contextualising it in relation to existing literature." As such, writing cannot be detached from the process of doing a TA. Therefore, to assess if an LLM can satisfactorily conduct a TA, we need to attempt phase 6 with the model as well.
There is discussion about using LLMs to write research work, and there clearly are important ethical implications associated with this (see Lund et al. 2023 for an overview), whilst we have also seen authors citing these models as co-authors (e.g. King & chatGPT). On the other hand, journals and scholarly publishers have begun to create policies to clarify what is the acceptable use of these models in academic publishing. These, correctly in my view, start to be very stringent. For example, the editor-in-chief of the journal Science stated that they decided to update their "license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author." (Thorp, 2023, p. 313). There also have been cases where publishers have removed chatGPT from the list of authors of already published papers (e.g. O'Connor, 2022).
We can agree with Thorp when he points out that AI manipulated images or text which is not produced by an author should not be included in academic publications and may largely amount to academic misconduct. However, we should consider that using LLMs to write research results may also entail working on intermediate phases of the writing process and within a Human-AI collaboration approach, or on scientific products which may have other applied use for other research activities. For Braun & Clarke (2006), phase 6 of TA, is an integral part of the analysis method, and therefore we should explore if also this phase can be performed with the support of an LLM. This paper will attempt at tackling this problem.
In this way the focus of the manuscript is not on the ethics discourses around the use of LLMs for writing scholarly work, but it is rather methodological. I will not propose a process to use the LLM to write a full paper (or part of it), rather I will focus on using the LLM to write intermediate narratives/models, which can be used to support other research activities. I will concentrate on using the LLM to write user personas, based on the results of a TA (of semi-structured interviews) conducted by the LLM in cooperation with the author.
User personas are an ideal candidate for the goal of this work. The Norman&Nielsen Group (NNG) defines a user persona as "a fictional, yet realistic, description of a typical or target user of the product. It is used to promote empathy, increase awareness and memorability of target users, prioritize features, and inform design decisions." (Harley, 2015). User personas are an important and established tool in user-centered design (UCD) and contribute significantly toward the creation of e.g. tools or services and help the designers to focus on people and specific aspects of them, during the design activities. A persona is normally a narrative with a variety of additional details (such as a picture, or specific traits of the user). They are built with a focus on identifying key aspects of the target users for a new design (or re-design), which include for instance the identification of the user needs, of their pain points, objectives, attitudes, behaviours, skills and so on, all of which should contribute to inform design decisions about e.g. a digital service. A useful overview of the advantages of user personas is provided by Miaskiewicz & Kozar (2011), who reviewed how several key authors have formulated these advantages. For a better understanding of the perception and importance of personas in UCD readers can consult for example the papers by Nielsen & Storgaard (2014) or Matthews & Whittaker (2012).
There may also be different types of personas used in UCD, which can support the design work in different ways. NNG, for example, writes that there can be broad and narrow personas (Salazar, 2020). The first are based on shallow data and can serve high-level decisions; the narrow ones instead are based on more granular data and can support more specific decisions. Likewise, NNG stipulates that personas can be lightweight (or proto-personas), qualitative or statistical (Laubheimer, 2020). The last two in particular are based on solid empirical evidence: the statistical personas are based on large quantities of data about the target users (but are very costly to produce), while the qualitative personas (the most widely used in UCD) are based on qualitative research data, like interviews conducted with a sample of target users or ethnographic observations of e.g. the target users at work. It is the qualitative and narrow personas I would like to focus on. For building qualitative personas, a common way to gather data is semi-structured interviews with a sample of the potential target users; an analysis is then done to identify patterns across the interviews. These patterns are the basis for building the personas narrative, including e.g. their background, goals, preferences or challenges. This is because recurring patterns signal potential common user needs or pain points which can and should then be the focus of the UCD work. TA is one of the analysis methods that can be adopted for the identification of these patterns across the interviews (see for example Turner et al., 2013 or Rosala, 2019).
We can therefore understand that (qualitative) user personas are an intermediate product of the user research in UCD, which can support the work of designers, and that they are the result of collecting empirical data and of qualitative analysis. If, as I suggested previously, we assume that an LLM can perform at least in an embryonal form an inductive TA of semi-structured interviews (phases 2-5 according to Braun & Clarke), we can explore whether the LLM can satisfactorily produce personas narratives based on this same analysis, or at least produce something which has some semblance of a user persona. Personas can therefore potentially be a good example of a textual result with which we can attempt to cover phase 6 of a TA as proposed by Braun & Clarke. This will be the focus of the following pages.
Literature review
There is limited research on the use of LLMs for qualitative analysis. For example, a working paper by Mesec (2023) provided a comparative study between chatGPT and a human analyst based on one interview. However, this approach appears rather simplistic, as it just leverages the web chat application (https://chat.openai.com/) rather than using the model via an API to process data at some scale. This mirrors some tutorials that are available online 1, which propose basic elements of qualitative analysis leveraging the web chat version of the model, and require manually pasting interview transcripts into chatGPT and copying the results into e.g. text processing files. More solid existing research largely focusses on the coding process (covering mostly the equivalent of phase 2 of a TA) and, as far as the author understands, these works adopted a deductive approach to analysis (Xiao et al., 2023; Gao et al., 2023). The approach I am exploring is instead entirely inductive, where the model operates without a prior coding framework. Moreover, in my opinion neither these works nor the online tutorials propose any significant methodological discussion or reflection: they seem to assume we can do a qualitative analysis with an LLM, with limited engagement with what this means for the methodological rigour of social research. They do not much reflect on the implications of doing social sciences qualitative analysis with the support of an LLM, or on whether we should attempt qualitative analysis with an LLM at all. My approach is more cautious, and I propose that methodological issues should be put into focus alongside our attempts to establish whether we can indeed perform a qualitative analysis with LLMs. For example, we still do not know if academic journals will accept a paper which adopts an LLM-performed TA, and we still need to find agreement on whether this has sufficient scientific rigour. Nonetheless, the work by Xiao et al. (2023) and Gao et al. (2023) in particular is important because it raises several issues, which include the crucial role of the prompt to the model (see later), human-AI collaboration in qualitative analysis with LLMs, and the use of LLMs for intermediate steps of a full analysis (like e.g. initial coding or team discussion). The use of LLMs for qualitative analysis does indeed seem a nascent area of inquiry; the literature is therefore limited, and there is a need to develop processes which are methodologically appropriate and accepted by the scientific community.
Some words need also to be spent in relation to personas, and specifically in relation to the use of LLMs for the generation of personas. As LLMs can easily produce textual outputs from a prompt, we have seen propositions to use chatGPT (again the web chat version) for producing user personas. There is material such as online videos or blog posts which explain how to generate a persona with chatGPT, with prompts such as: "Generate a user persona about a busy mum who lives in a city that wants to make sure her kids eat healthy without compromising time away from her hobbies and career" 2. Prompts such as these can be used within the web chat to generate a user persona, with some believable details including a narrative and demographics. Further, it is possible to ask the model to write the output received "in the format of a user persona" and the model will propose lists of e.g. pain points, needs, attitudes and so on. As has been noted, there is something powerful in the model since it knows what a user persona is 3 without needing any contextual explanation. This is a relevant observation as later, in the analysis and the generation of personas, we will assume the LLM knows what a user persona is. However, these personas generated with the LLM without being based on any empirical material are entirely fictional, and do not have the realistic component that the definition of a persona seen earlier implies ('fictional, yet realistic'). Whilst they look believable, they seem unlikely to be a good and realistic representation of the potential target users for UCD. Kocaballi (2023) has proposed a more structured approach, rather than simply asking the LLM to generate a persona, to emulate an entire UCD process with an LLM, including the generation of personas. However, even if more structured, this work is entirely fictionally created by the model. So much so that there have been invitations to avoid using LLMs for generating personas 4, since these cannot be a true representation of target users for a product or service, and LLMs cannot replace the work of user research, which requires inquiring with real users (e.g. with interviews or ethnographies). This debate is at the moment taking place largely on websites and other online fora, rather than in academic publications, and it is symptomatic of larger processes of automation of human work and of the oversimplifications which are possible with LLMs. It is not my intent to enter this specific debate, especially in the field of user research.
However, I believe it is rather different to use the LLM to generate user personas fictionally, without any underlying empirical research, and have instead personas generated based on some form of TA and on real qualitative data, within a Human-AI research collaboration mode. If the LLM (with the support of the human researcher) can produce at least satisfactorily some forms (or at least ideas) of user personas based on a data analysis, we may also be able to make a step forward toward covering phase 6 of a TA.
Research design
This paper proposes a research design which mirrors the one I proposed in my previous paper (De Paoli, 2023). I will, in particular, cover phases 2-4 of a TA which will include an initial generation of codes, the grouping of the codes in themes, and a validation of themes. Phase 5, which covers the renaming of themes, will not be covered here (largely for keeping the text compact), but it certainly is possible also to operate this phase if needed. Readers can consult the specificities of the proposed (albeit embryonal) approach to a TA with an LLM in De Paoli (2023). Here I will concentrate on a shorter summary of the method-process for performing a TA, and then focus on covering phase 6. In the previous publication I also reflected on the limits of the LLM I am using (GPT3.5-Turbo), including on the limits to the number of tokens and the lack of memory, which impact the processing of large amounts of text. Again, the reader can see these reflections in the previous paper.
First, we need to provide some definitions. We have already mentioned the 'prompt': this is essentially what we ask the model to produce. The output/response is what the model produces based on its understanding of the prompt instructions. Figure 1 shows a simple prompt and the response, as generated by the chat version of GPT. Second, to perform this research I used the model via the OpenAI API (https://openai.com/blog/openaiapi), and not the web chat. The API allows calling the model inside python scripts. The advantage of this (over the use of the chat version) is that we can work with data and do manipulation and parsing of the data. It is because of this that we can have the model process real textual data with some degree of scale, on which we can then attempt to perform a data analysis (like a TA). In this paper the specificities of the python scripts will not be discussed unless necessary, whilst more space will be given to a discussion of the prompts.
As my goal is to build personas from a TA of semi-structured interviews, I selected an open dataset (covered by a Creative Commons license) available from Zenodo. This ensures prior anonymisation of the data and the possibility to reuse it. The selected dataset is the farmers' interviews (n=14) conducted by the project EUREKA, a subset of a larger dataset (Vágó & Spanoghe, 2023). EUREKA was an H2020 project which aimed "to strengthen and improve the flow of agricultural and rural-related knowledge and innovation at European, national and regional level." 5. EUREKA has produced a "first working version of the so-called EU FarmBook" digital platform. Interviews were conducted with several user groups including farmers, forestry workers or various advisors, in different broad European regions (including e.g. the Mediterranean or the North Sea regions). These interviews were conducted to produce user personas and user journeys, and therefore the research was planned with a focus on UCD. For the farmers' group, six personas were prepared and are available on the project website 6. Therefore, we can consider this a potentially good dataset for the task of using an LLM to perform the writing up of a TA in the form of user personas.
Due to the maximum limit of the tokens in the model (4097), interviews had to be broken down into smaller chunks. From the initial 14 interviews, 31 chunks were created, ranging between 700 and 1600 tokens (except for one short interview, which was already around 400 tokens). Data cleaning and preparation was necessary before the analysis, like removing special characters (e.g. text in Cyrillic, which had an English translation in the interview transcript) and saving all the interviews in plain text files.
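A minimal sketch of the chunking step is given below, using tiktoken to count GPT-3.5-Turbo tokens; splitting on blank lines and the 1600-token ceiling mirror the chunk sizes reported above, but the exact splitting rule used is an assumption.

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

def chunk_interview(text: str, max_tokens: int = 1600):
    chunks, current = [], []
    for paragraph in text.split("\n\n"):
        candidate = "\n\n".join(current + [paragraph])
        if len(encoding.encode(candidate)) > max_tokens and current:
            chunks.append("\n\n".join(current))
            current = [paragraph]
        else:
            current.append(paragraph)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```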
The design I propose here is based on performing three steps: (1) performing a TA with the LLM on the interview chunks, replicating and enriching the process already proposed in my previous work; (2) using the LLM to create personas based on the results of the TA, with appropriate prompting; (3) offering an evaluation of the results, by looking at which themes and codes were used by the LLM to create one persona.
Performing the TA with the LLM

Generating codes (phase 2)
Phase 2 of a TA involves inductively generating the codes from the data. Since the goal here is to try to develop user personas, I asked the model to identify separately codes for the users' a) needs and b) challenges. For this I used the prompt presented in Figure 2. This prompt is similar to the one I proposed in my previous work; however, I added to the prompt a definition of what a challenge is. This definition was in part taken from a report of the EUREKA project (Van der Cruyssen, 2021), where a challenge was defined in relation to "access to data and knowledge", and, to specify this better, I added a simpler definition of a challenge. For the separate generation of user needs, I used in the prompt a definition from the UK Government (2017), which I consider simple and clear, and which defines user needs as "the needs that a user has of a service, and which that service must satisfy for the user to get the right outcome for them". Considering the model's max-token limit, I asked the model to generate up to 2 challenges and up to 3 needs from each interview chunk. Each interview chunk is passed into the prompt via a for-loop in the variable 'text'.
Figure 2 - Prompt for initial coding of challenges
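The loop that passes each chunk into the coding prompt can be sketched as follows. The prompt wording below is abridged and illustrative; the full instructions, including the definitions of 'challenge' and 'user need' discussed above, are those of Figure 2.

```python
# Sketch of the phase-2 loop: each interview chunk is interpolated into
# the coding prompt through the 'text' variable. The wording below is an
# abridged stand-in for the Figure 2 prompt.
CODING_PROMPT = """You are a researcher performing a Thematic Analysis.
From the interview text below, identify up to 2 challenges and up to 3 user
needs. For each, provide a short code name, a one-sentence description and
a verbatim supporting quote.

Interview text:
{text}
"""

codebook = []
for chunk in all_chunks:  # all_chunks built as in the chunking sketch
    codebook.append(ask_model(CODING_PROMPT.format(text=chunk), temperature=0.0))
```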
The prompt in Figure 2 allowed the identification of n=62 code-challenges and n=93 code-needs from the interviews. Due to possible duplications (i.e. the model goes through each interview chunk separately, and can thus duplicate codes), I used a previous strategy of reducing each codebook (see De Paoli, 2023) by merging very similar codes. After the reduction, the codebook contained n=39 code-challenges and n=75 code-needs. Examples of codes generated with this prompt can be seen in Tables 5-8, in the section 'Evaluation' of the paper.
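The mechanics of the reduction are detailed in De Paoli (2023) rather than here; the sketch below shows one plausible way to implement it, by asking the model itself to merge near-duplicate codes. This is an assumption about the implementation, not the exact procedure used in the study.

```python
# One plausible (assumed) implementation of the codebook reduction:
# asking the model to merge codes that are very similar.
REDUCTION_PROMPT = """Below is a list of codes (name, description, quote)
generated from different interview chunks. Merge codes that are very
similar into a single code, keeping one name, one description and one
representative quote. Return the reduced list, one code per line.

Codes:
{codes}
"""

reduced_codebook = ask_model(
    REDUCTION_PROMPT.format(codes="\n".join(codebook)), temperature=0.0
).splitlines()
```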
Finding Themes (phase 3)
The subsequent phase of TA (following Braun & Clarke's process) entails sorting and grouping the codes (from phase 2) into themes. In essence, codes are features in the data that the analyst identifies, and themes are patterns in the data which encapsulate multiple codes. For this purpose, I used the prompt shown in Figure 3, which is very similar to the one I used in my previous work (again see De Paoli, 2023). In this case I asked the LLM (for the lists of code-challenges and code-needs, separately) to generate 12 groups (or themes). This number (12) is, to an extent, arbitrary; however, I estimated it as follows: the EUREKA project had produced 6 personas from the farmers' dataset, and I estimated that building one persona would require 2 theme-challenges and 2 theme-needs (so 6 × 2 for each theme book).
Figure 3 - Prompt used to sort codes into themes
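The phase-3 call can be sketched as a single prompt that receives the whole reduced codebook and asks for 12 groups. Again, the wording is an illustrative stand-in for Figure 3.

```python
# Sketch of the phase-3 call: the reduced codebook is passed in one prompt
# (run separately for code-challenges and code-needs) asking for 12 themes.
THEME_PROMPT = """Below is a list of codes from a Thematic Analysis, each
with a short description. Sort these codes into 12 groups of codes with
similar meaning. Give each group a name and a one-sentence description,
and list the codes it contains.

Codes:
{codes}
"""

themes = ask_model(
    THEME_PROMPT.format(codes="\n".join(reduced_codebook)), temperature=0.0
)
```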
Four full examples of themes (for challenges and needs) are presented in Tables 5-8 in the section 'Evaluation' of the paper, including the name of the theme, its description (as provided by the model) and the list of codes, descriptions and related quotes. Table 1 and Table 2 instead present the full list of theme-challenges and theme-needs, with name, description and the number of underlying codes, generated by the model in this phase (with the prompt from Figure 3).
1. Limited Access to Information (8 codes): This group contains topics related to the challenges faced by farmers in accessing information, either due to limited network availability, poorly maintained websites, language barriers, or a lack of reliable sources.
2. Trustworthiness of Online Information (7 codes): This group contains topics related to the challenge of verifying the trustworthiness of online information. It includes topics such as filtering through online information, identifying misleading information, and finding reliable sources.
3. Digital Tools and Technologies (9 codes): This group contains topics related to the use and challenge of digital tools and technologies, including finding effective tools, verifying their effectiveness, and ensuring user-friendliness.
4. Education and Training (3 codes): This group contains topics related to education and training in agriculture, including language barriers, a lack of formal education, and the importance of learning by doing.
5. Regulations and Compliance (4 codes): This group contains topics related to regulations and compliance in agriculture, including difficulties faced by farmers in calculating fodder/feed rations and managing data related to animal fertility.
6. Navigating Online Information (3 codes): This group contains topics related to the challenge of navigating through vast amounts of online information to find relevant and accurate sources.
7. Innovation and Change (2 codes): This group contains topics related to the use of digital tools to innovate and optimize work in the agricultural sector.
8. Business Planning (1 code): This group contains topics related to the challenge of finding information for business planning, particularly for innovative crops.
9. Personal Connection (1 code): This group contains topics related to the concern about the lack of personal connection and the superficiality of digital communication in the agricultural world.
10. Advocacy and Sharing Information (1 code): This group contains topics related to the challenge of connecting advocacy and professional organizations to share information.
11. Language Barriers (1 code): This group contains topics related to the challenge of language barriers faced by farmers.
12. Access to Agricultural Machinery (1 code): This group contains topics related to the challenge of finding reliable information on agricultural machinery.

Table 1 - Theme-challenges generated by the model in phase 3

1. Access to Information (8 codes): This group includes topics related to the need for quick and easy access to information, easy access to relevant and up-to-date information, efficient filtering of scientific papers and keyword searches, and a central point of access to relevant information.
2. Digital Tools and Applications (17 codes): This group includes topics related to the need for digital tools and sources to have practical relevance, practical digital tools to increase productivity, digital tools to aid in agricultural tasks and processes, and affordable digital tools and equipment.
3. Professional Exchange and Networking: This group includes topics related to the need for personal and professional exchange with colleagues and representatives, a platform to share experiences and knowledge, and connection and community within the agricultural industry.
4. Animal Health and Farming Issues: This group includes topics related to finding solutions to animal health and farming issues, optimizing work to prevent waste in the company, and monitoring and tracking poultry health and mortality rates.
5. Expert Advice and Consultation: This group includes topics related to the need for expert advice and consultation for problem-solving and informed decision-making.
6. Language and Communication (1 code): This group includes topics related to the need for communication and gathering information in a language that is understood, particularly when it comes to technical terms related to agriculture.
7. Beekeeping (4 codes): This group includes topics related to the need for current and updated information on the beekeeping industry and policies, efficient data management tools for beekeeping, and digital tools to make beekeeping easier and more efficient.
8. Organic Agriculture and Ancient Seed Varieties (1 code): This group includes topics related to staying updated with information related to organic agriculture and ancient seed varieties.
9. Mapping and Documentation (1 code): This group includes topics related to the need to use digital applications for mapping and documentation purposes.
10. Learning and Education (2 codes): This group includes topics related to flexible learning options, remote exams, and expert consultancy and advice.
11. Grain Growing and Livestock Feed (1 code): This group includes topics related to practical information on grain growing, direct sowing, and grain as feed for livestock.
12. Agricultural Machinery and Equipment (3 codes): This group includes topics related to reliable information sources on agricultural machinery and equipment, objective comparison or testing of agricultural products, and finding sensors specific to agriculture.

Table 2 - Theme-needs generated by the model in phase 3
Evaluating Themes (phase 4)
In this phase of TA, the analyst re-evaluates the themes, identifying which ones are indeed solid themes and which may just be sub-themes or only codes. This is a phase of re-evaluation of the previous phase and consolidation of the TA. In De Paoli (2023), I suggested using the model's Temperature (T) parameter for this evaluation. This parameter (alongside the similar parameter top_p) influences the response of the model with a degree of randomness or 'creativity'. It accepts values between 0 (no randomness) and 1 (maximum randomness). All the previous analysis (phases 2 and 3) was done with T at 0 (zero). The approach I have experimented with for phase 4, thus far, is to raise the T parameter to see whether some themes appear consistently and whether others emerge. This time I also introduced a further aspect for the potential review: the number of codes in each theme as presented in phase 3. Indeed, a few themes generated in phase 3 are composed of just one code (see Tables 1 and 2). This may signal that the theme is weak and just a code (rather than a theme), and that operating the model with a higher T in phase 4 may suggest new candidate themes. Four theme-challenges (Nr. 9, 10, 11, 12 - Table 1) and the same number of theme-needs (Nr. 6, 8, 9, 11 - Table 2) are composed of only one code. We also need to remember that the list of code-challenges is smaller than that of code-needs, and therefore the model had a smaller set of codes from which to build theme-challenges. Moreover, we need to remember that the themes were built from a reduction of the codebook; therefore, even if a theme only has one code, that code might have appeared in multiple interviews. Table 3 shows three sets of themes generated at T=0.5 for the theme-challenges (and the associated number of codes). Table 4 shows the same for the theme-needs. We are looking in these tests for consistency, but potentially also for new themes replacing previous themes with just one code.
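In practice, the phase-4 re-runs amount to reissuing the same grouping prompt with a higher Temperature and keeping the outputs for manual comparison, roughly as sketched below.

```python
# Sketch of the phase-4 tests: the grouping prompt is rerun three times at
# T=0.5 (Test_1, Test_2, Test_3 in Tables 3 and 4) and compared by hand
# against the T=0 theme book.
test_runs = [
    ask_model(THEME_PROMPT.format(codes="\n".join(reduced_codebook)),
              temperature=0.5)
    for _ in range(3)
]
```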
Table 3 - List of theme-challenges generated with T at 0.5 (3 tests) and number of codes
We can now compare Table 3 with Table 1 and look for consistent (and thus solid) themes, and for potential candidate themes to replace those with only one code. What we can observe is that the themes in Table 1 from Nr. 1 to 9 also appear consistently in Table 3. We can preliminarily assume these are all valid themes to keep. The theme Nr. 11 in Table 1 ('Language Barriers'), despite having only one code, is consistent across Table 3 and should probably be kept. In fact, if we look at Table 6 in the 'Evaluation' section of the paper (which reports the full theme-challenge 'Language Barriers'), we can see these barriers emerged across multiple interviews. The remaining three themes with just one code in Table 1 ('Personal Connection', 'Advocacy and Sharing Information' and 'Access to Agricultural Machinery') do not seem to appear in Table 3 and are candidates for replacement. In Table 3, the theme 'Superficiality of digital communication' appears twice (also with just one code); upon reading its description, this new theme seemed to overlap with the theme 'Personal Connection' (or the lack thereof) from Table 1. The original theme is therefore kept. The theme 'Reliability' appears in both Test_2 and Test_3 in Table 3 with three codes and is a potential replacement for the theme on agricultural machinery, since its description refers to reliability (which also appears in the theme 'Access to Agricultural Machinery' from Table 1). The theme 'Advocacy' from Table 1 also appears in Test_1 in Table 3; this may therefore be a good candidate to keep (despite the single code). Thus, the new list of theme-challenges from Table 1 will see the replacement of theme Nr. 12 ('Access to Agricultural Machinery') with the theme 'Reliability' (from Test_3, Nr. 11, Table 3).
We can now operate along similar lines with the theme-needs. From Table 2, themes 1 to 7 will be kept as valid, since they reappear consistently in various shapes in Table 4. Also in this case, the theme 'Language and Communication', despite having only one code, is kept due to its consistency across the three tests at T=0.5. Themes 10 and 12 from Table 2 will also be kept, as they reappear in Table 4 and have more than one code. A decision thus needs to be made about the remaining themes with just one code from Table 2 (respectively 'Organic Agriculture and Ancient Seed Varieties', 'Mapping and Documentation' and 'Grain Growing and Livestock Feed'). In Table 4, unlike Table 2, we see the issue of 'Personalization' appearing twice; this is a good candidate theme, also directly related to specific needs for digital tools (we will use Nr. 9 from Test_2 in Table 4 for our final list of themes). Also, the theme on 'Community' appears in all three tests and not in Table 2; this too is a good candidate theme (we will use Nr. 8 from Test_2, Table 4). Lastly, the theme of 'Waste Reduction' also appears in Test_1 and Test_2 but not in Table 2, hence it will be kept as a theme (we will reuse Nr. 10 from Test_2, Table 4). The theme-needs from phase 4 adopted for the final theme-needs book (replacing three weak themes from Table 2) are marked with an asterisk in Table 4.

Nr. | Theme (Test_1) | Codes | Theme (Test_2) | Codes | Theme (Test_3) | Codes
1 | Access to Information for Agriculture and Farming | 9 | Access to Information Platforms | 7 | Access to Information | 7
2 | Digital Tools for Agriculture | 14 | Digital Tools for Farm Management | 15 | Digital Tools and Applications | 12
3 | Personal and Professional Exchange | 4 | Expert Knowledge and Consultation | 5 | Expert Advice and Consultation | 3
4 | Language and Communication in Agriculture | 1 | Language and Communication | 1 | Animal Health and Farming Issues | 4
5 | Animal Health and Husbandry | 6 | Animal Health and Husbandry | 5 | Professional Exchange and Networking | 5
6 | Sustainability and Environmental Impact | 2 | Sustainable Agriculture | 2 | Local Information Sources | 1
7 | Digital Competence | 1 | Equipment and Machinery | 2 | Regulatory Compliance | 1
8 | Beekeeping | 3 | Knowledge Sharing and Community* | 3 | Access to Trustworthy Information | 2
9 | Waste Reduction and Optimization | 3 | Personalization and User-Friendliness* | 2 | Innovative Solutions | 1
10 | Community and Connection in Agriculture | 5 | Efficiency and Waste Reduction* | 3 | Language and Learning | 2
11 | Expert Consultancy and Advice | 3 | Remote Learning and Support | 2 | Community and Connection | 2
12 | Personalization and User Experience | 2 | Specific Industry Knowledge | 4 | Grain Growing | 1

Table 4 - List of theme-needs generated with T at 0.5 (3 tests) and number of codes; themes marked with * were adopted for the final theme-needs book
The final theme books (for both challenges and needs) thus encompass the changes to the themes as presented above, with one change for the theme-challenges, and three changes for the theme-needs. Moreover, the final theme books include all the previous codes, descriptions and quotes (identified in phase 2, before the reduction of the codebook). Again, for examples see Tables 5-8 in the section 'Evaluation' of the paper.
The process of conducting a TA also entails a phase 5, related to renaming and summarising themes. As said earlier, this phase is not performed here, and readers can consult De Paoli (2023) for an approach to operating that phase. I will now focus on using the two theme books (challenges and needs) to conduct phase 6 of a TA: the writing up of the results.
Writing up user personas
We have a list of 12 theme-needs and 12 theme-challenges from the TA. We will attempt to build user personas from this analysis. In phase 3 (when estimating the number of themes to be inferred) I had assumed that one persona would be built using 2 theme-challenges and 2 theme-needs. Therefore, since not all the themes will be used (and they are not normally all used by human writers of personas), we need to operate a selection. There may be different ways of deciding which themes should compose a single persona. For example, we could have the human analyst decide which theme-needs and theme-challenges to use, or we could use a randomised process of selection, or a mix of the two. For this simple experiment, I wrote Python code to select at random two sets of themes to be used (i.e. 2 theme-challenges and 2 theme-needs), as described in Figure 4. This was done by creating a list of random tuples for each theme book (6 tuples of 2 themes each) and then selecting one tuple. Every time the script is run, the list of tuples is randomised again. This randomised process entails in fact 78 combinations for each theme book (thus potentially allowing 6,084 combinations across all themes). The selected theme-challenges and theme-needs are then passed to the model via the prompt to generate a user persona. To increase the variability of the personas I also operated the model with Temperature set at 1, but this is just a choice to obtain some more 'creativity' from the model in its response.
Figure 4 - Themes (tuples) selection and personas writing
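The selection logic of Figure 4 can be sketched as follows; the variable names are illustrative, and the theme books are assumed to be lists of 12 theme records. The selected tuples are then passed to the persona prompt sketched further below.

```python
# Sketch of the Figure 4 selection: each theme book (12 themes) is
# shuffled, paired into 6 tuples of 2 themes, and one tuple is drawn.
# The shuffle is repeated on every run, as described above.
import random

def pick_tuple(theme_book: list[dict]) -> tuple[dict, dict]:
    themes = theme_book[:]
    random.shuffle(themes)  # re-randomised every time the script runs
    tuples = [tuple(themes[i:i + 2]) for i in range(0, len(themes), 2)]  # 6 tuples
    return random.choice(tuples)

selected_challenges = pick_tuple(theme_challenges)  # 2 theme-challenges
selected_needs = pick_tuple(theme_needs)            # 2 theme-needs
```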
Note also that when building personas in UCD there are normally more dimensions included, such as attitudes, behaviour, interests and so on. Here we are working with only two theme-dimensions (as we are attempting to establish a process); however, it would be possible to have more themes related to other dimensions of personas. It is sufficient to reproduce the previous steps of the analysis to generate these additional themes. However, several theme-dimensions may run into the model's max-token limit. Figure 5 presents the prompt used for generating personas. This deserves some comments. First, the randomly selected tuples of themes (2 theme-needs and 2 theme-challenges) are passed to the model as lists ('List of needs', 'List of challenges'); each element of a tuple includes the theme name, the theme description (from phases 3-4) and all the underlying codes and associated quotes (phases 2-4). This is the material (i.e. the analysis results) that we ask the model to use for building the persona. In the prompt, we ask the model to include the persona's goal, background description, a vague indication of age, three needs, two challenges, a quote illustrating the goal, and whether the persona is tech savvy and open to innovation.
An important aspect of this prompt is that it does not tell the model anything about what kind of persona to build, i.e. who the persona is (what the persona does, her job, her goal, her motives, etc.). It just informs the model of which formal elements to include (e.g. a goal, a quote, three needs). This is significantly different from the examples of prompts we can find online for asking chatGPT to generate user personas, where details about the persona need to be provided to the model. For instance, in the literature review we mentioned one such prompt (taken from a YouTube video tutorial), where it was specified that the persona to be generated was a busy mum, living in a city, with kids, concerned about healthy food and her career. Another example taken from an online tutorial (this time of an audience persona) is "Create an audience persona for a business that sells trendy fashion wear for young adults, Colorful clothing and jewelry. The business sells at reasonable prices so all young adults can purchase from this brand" (footnote 7). This level of detail is not needed in the prompting when the persona is generated using the results of a TA; the prompt only needs to provide an indication of the formal structure or components of the persona narrative. I only added one detail to the prompt: that the persona should be 'from a specific EU country'. Indeed, without this specific instruction the prompt would sometimes provide the country in which the persona works and sometimes not, and this I believe depends on whether countries are named in the data that the model uses (i.e. the tuples).
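An abridged, illustrative version of the Figure 5 prompt and of the generation call is sketched below; it fixes only the formal structure of the persona and injects the selected tuples, exactly as discussed above. The wording is a stand-in, not the verbatim prompt.

```python
# Abridged stand-in for the Figure 5 prompt: only formal elements are
# specified, plus the 'specific EU country' detail; nothing about who the
# persona is. The selected tuples come from the Figure 4 sketch.
import json

PERSONA_PROMPT = """Using the following results of a Thematic Analysis of
interviews with farmers, write a user persona from a specific EU country.
Include: a name, a vague indication of age, a main goal, a background
description, three needs, two challenges, IT skills, the attitude toward
digital innovation, and a quote illustrating the goal.

List of needs (themes, descriptions, codes and quotes):
{needs}

List of challenges (themes, descriptions, codes and quotes):
{challenges}
"""

persona = ask_model(
    PERSONA_PROMPT.format(
        needs=json.dumps(selected_needs, indent=2),
        challenges=json.dumps(selected_challenges, indent=2),
    ),
    temperature=1.0,  # T set at 1 for more 'creative' narratives
)
print(persona)
```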
What follows are two examples of personas generated with the above prompt. The persona Katharina was generated using (with the random selection) the following tuple of theme-challenges: (1) Navigating Online Information and (2) Language Barriers; and the following tuple of theme-needs: (1) Personalization and User-Friendliness and (2) Animal Health and Farming Issues. The persona Gisela was generated using the following tuple of theme-challenges: (1) Language Barriers and (2) Personal Connection; and the following tuple of theme-needs: (1) Agricultural Machinery and Equipment and (2) Knowledge Sharing and Community. I structured the personas in a table and included a picture which I generated with Dall-e (https://openai.com/product/dall-e-2), mostly to attempt to reproduce some of the empathy that user personas seek to achieve. However, what we are interested in considering is the textual narrative and whether the textual descriptions of these personas appear credible (i.e. 'fictional, yet realistic' - Harley, 2015). This will be discussed in the next section of the paper. Please note that the appendix contains some further examples of personas generated from the analysis using the prompt of Figure 5. Also note that the items of each persona are presented in the order in which they were produced by the model.
Name: Katarina
Age: Middle-aged
Country: Poland
Background: Katarina is a middle-aged farmer based in Poland. She inherited her family farm and has been managing it for over a decade. She is passionate about farming and aims to optimize her productivity while following ethical farming practices. Since there are limited resources and solutions available to her in her area, Katarina has to rely mostly on online information to find solutions to her problems related to animal health and farming practices. However, she faces language barriers and struggles to maneuver through the vast amount of information available online. Katarina values user-friendly digital tools and prioritizes the need for problem-solving and optimization of her farming practices.
Main Goal: To find innovative and effective solutions to improve animal health and optimize her farming practices.
Main Needs:
- Filtering information sources to ensure the accuracy of the information
- Finding solutions to problems related to animal health and farming practices
- Optimizing work in the company and avoiding waste

Main Challenges:
- Difficulty accessing information due to language barriers
- Navigating through vast amounts of online information
- Filtering out misleading information from the internet
IT Skills: Medium
Attitude Toward Digital Innovation: High
Quote Representative of the Goal: "My opinion is that digital sources can help optimize work on the farm. If you have tools that allow you to work better, the entrepreneur works better, the employees and the company work better, and so on. Maybe you can also avoid a lot of waste."
Name: Gisela Schmidt
Age: Middle-aged (45)
Country: Germany
Main Goal: To find reliable information sources on agricultural machinery and equipment specific to agriculture.
Background: Gisela Schmidt is a middle-aged farmer from Germany who runs a small farm with her family that specializes in crop production. She has been in the agricultural industry for over 20 years and has seen its evolution with the introduction of technology and digitalization. While Gisela has a basic understanding of digital tools, she faces challenges in finding reliable information and sources specific to agriculture. Her main concern is finding reliable information sources on agricultural machinery and equipment, including sensors specific to agriculture. Gisela believes that she can improve crop production and save time and money if she can find the right sensors and equipment. She often spends a lot of time searching for information online but struggles to find trustworthy sources.
Main Needs:
- Reliable information sources on agricultural machinery and equipment
- Finding sensors specific to agriculture
- Objective comparison or testing of agricultural products

Main Challenges:
- Language and education barriers
- Difficulty accessing reliable information
- Lack of personal connection and superficiality of digital communication
IT Skills: Medium
Attitude toward digital innovation: Medium
Quote: "Then I used the keywords of those research papers to find other producers, not the salesmen, of the sensors but producers. And then I got in contact with the dual M guys, but also with Medusa, with all kinds of software and hardware and software suppliers that are building nice sensors, but not specifically for agriculture."
Evaluation
Overall, the background of each of the two example personas (Katharina, Gisela) offers important clues about their story, such as the kind of farm they operate, what the farm produces (e.g. crops), and what the personas' intentions are (e.g. efficiency, saving time with sensors). It also tells something about the relation of the persona to the farm (a farm inherited from the family or a farm run with the family), or about the issues related to keeping up with digital technologies in the sector and with online information. To evaluate the validity of these narratives, we will look at how the underlying themes appear in and compose the personas. We will do so for the persona Katharina. Tables 5 and 6 show the theme-challenges used for building 'Katharina', and Tables 7 and 8 the theme-needs.
Theme-challenge: Navigating Online Information

Description: This group contains topics related to the challenge of navigating through vast amounts of online information to find relevant and accurate sources.

Codes:
- Filtering Information Online: The interviewee faces the challenge of filtering through the vast amount of information available online to find relevant and useful information. They have developed a strategy of using specific keywords and ruling out advertisements to find what they need. Quote: "Usually I find a specific keyword and then add another keyword to that and add another keyword to that. But I don't type in all the keywords at once. If you do it step by step, you get more information for every goal and you easily get the rubbish out."
- Filtering Information Online: The interviewee faces the challenge of filtering information sources to ensure the accuracy of the information. Quote: "So I'm looking informations. And when I get many results, I'm looking at who got this information. How did they get there? If it's still trials, if it's a trial site or if it's only guesses and maybe just a reflection of it could be like that."
- Filtering Information Online: The interviewee faces the challenge of filtering out misleading information from the internet, which can be both digital and non-digital. Quote: "Misleading information can be digital or nondigital in this there is no difference. Because there is more digital information, there will be more misleading ones."
- Navigating Information: The interviewee struggles with navigating through the vast amount of information available online to find reliable sources. Quote: "Reliability seems at first glance, and also when someone talks about something they've already tried. Once you have the routine, you can see what advertising is. Good infographics help with comprehension, but they are worth nothing if they are not grounded, they are just advertising."
- Information Overload: The challenge of sifting through a large amount of search results to find the right information. Quote: "Of course, if you go, for example, to Google and you search something, you will have a lot of outputs. So, it's very difficult to spot the right ones, so sometimes it will take a lot of time searching."

Table 5 - Theme-challenge 'Navigating Online Information'
Theme-challenge: Language Barriers

Description: This group contains topics related to the challenge of language barriers faced by farmers.

Codes:
- Language and Education Barrier: Difficulty accessing information due to language and education barriers. Quote: "The content must be available in the respective national languages and also for users with different levels of education - not in all countries is agricultural education as good as here in Germany."
- Language and Education Barrier: The challenge of language barriers for farmers who may not be proficient in foreign languages and need access to information in their local language. Quote: "Of course a farmer is also a firm manager and he's has his own company and he needs to to manage his company. So he needs to know about numbers, about accountability, needs to know about politics. But it shouldn't take him out off core business, which is agronomy. So as a tool, must must help the farmer and the user in that direction. So learning a different language, I don't think it's it's the best way to try to be helpful."
- Language and Education Barrier: The interviewee faces a challenge of accessing information in languages other than Slovakian or Hungarian. Quote: "We mostly use Slovak or Hungarian, not everyone understands English. Perhaps the main information can be understood in German. Access to information is limited to these languages."
- Language and Education Barrier: Difficulty in understanding technical terms in English while searching for cultivation problems online. Quote: "If I have to go and look for cultivation problems, in English there is no way I can do it."
- Language and Education Barrier: The interviewee only looks for information in Polish, limiting access to potentially valuable information in other languages. Quote: "On our websites we have a lot of information but time to time we are checking abroad website and translating it because some websites are better than polish. For example some Belgium webpages have better information than polish."

Table 6 - Theme-challenge 'Language Barriers'
Theme-need: Personalization and User-Friendliness

Description: This group includes topics related to the need for a certain degree of personalization based on what the user does or would like to have access to in terms of information. It also includes the need for digital tools that are easy to use and accessible.

Codes:
- Personalization: The need for a certain degree of personalization based on what the user does or would like to have access to in terms of information. Quote: "Yes. But I also immediately understand why this isn't possible for everyone. Some people may not know how to configure such an app themselves."
- User-Friendly Digital Tools: The need for digital tools that are easy to use and accessible. Quote: "It should be user friendly... They don't have good digital skills. And so everything should be very intuitive. Yes, very easy tool to use. This is the main argument."

Table 7 - Theme-need 'Personalization and User-Friendliness'
Theme-need: Animal Health and Farming Issues

Description: This group includes topics related to finding solutions to animal health and farming issues, optimizing work to prevent waste in the company, and monitoring and tracking poultry health and mortality rates.

Codes:
- Information Discovery: The need to discover new information and techniques related to farming and animal husbandry. Quote: "Steal with your eyes to learn how someone else does it, and what you can do better yourself."
- Problem Solving: The need to find solutions to problems related to animal health and farming practices. Quote: "For example, let's take the (lack of) fertility of the animals. We talked about it with our vet. He drew blood and sent a blood sample to the lab. They examined it and found that the cows had a deficit of a certain mineral."
- Problem Solving: The need to effectively solve problems using digital tools and platforms. Quote: "I do had a situation when the field was..."
- Problem Solving: The need to optimize work in the company and avoid waste. Quote: "My opinion is that digital sources can help optimize work in the company. If you have tools that allows you to work better, the entrepreneur works better, the employees and the company works better and so on. Maybe you also avoid a lot of waste."
- Fauna Management: The need to manage fauna in a way that is effective and humane. Quote: "Roe Deer? They seem strange things. How to drive away the Roe Deer because objectively there are no resources for you... they only sell you products. But maybe, for example, this is something that if there were answers in Europe on how to manage the fauna, it would be interesting because everyone have an experience."
- Optimization: The need to optimize work in the company and avoid waste. Quote: "My opinion is that digital sources can help optimize work in the company. If you have tools that allows you to work better, the entrepreneur works better, the employees and the company works better and so on. Maybe you also avoid a lot of waste."
- Monitoring and Tracking: The need to monitor and track poultry health and mortality rates to identify potential problems and make informed decisions. Quote: "For mortality and temperature, I made a Google Sheets. I monitor these and then I make the charts, I make them myself."

Table 8 - Theme-need 'Animal Health and Farming Issues'

If we look back at Katharina, her goal and the quote come from the code 'Optimization' in the theme-need 'Animal Health and Farming Issues' (Table 8). Information in the background narrative of Katharina mentions aspects such as the lack of language skills and the need for user-friendly tools, which reflect both one of the challenges (Table 6) and one of the needs (Table 7). Specific main needs and challenges of Katharina are also identifiable in the tables. For example, Katharina does seem to have a strong focus on the navigation and identification of information. One of her challenges, related to filtering out misleading information from the Internet, is clearly reflected in one of the codes in Table 5. One of her needs, related to finding information about animal health, is clearly reflected in one of the 'Problem Solving' codes in Table 8. Language barriers are another challenge that she faces, and although all the codes in Table 6 may have contributed to this, it is likely that the model used the one related to information in Polish.
Discussion
The main goal of this paper was to conduct a TA with an LLM (GPT3.5-Turbo), covering also phase 6 of the approach proposed by Braun & Clarke (2006), the writing of the results. In practical terms the goal was to use a TA of semi-structured interviews to create user personas. In this discussion we will reflect on the results of this research.
First, this research has been an opportunity to test again the embryonal and crude process for conducting an inductive TA with an LLM that I proposed in my previous paper (De Paoli, 2023). Overall, this process can be reproduced on different datasets, and it is clear that it can deliver some meaningful (but still basic) codes and themes. It will certainly need more work and refinement, but I consider it a good initial basis for reflecting on how to conduct a TA with an LLM. In this paper I also covered phase 4 in more detail: an evaluation of the themes was done, and weak themes were replaced with other themes which appeared more valid and sound for the analysis. This clearly signals that a TA with an LLM is done within the frame of a Human-AI collaboration.
Second, using the themes produced by the process of analysis, it does seem that the LLM can indeed write up the results, in this case in the form of user personas. With a formal prompt (which gives no indication about the content, only the structure) and a set of themes, the model can write up a persona narrative. We have also seen (in one example) how specific codes are reflected in such a narrative, including in the persona's goal (and related quote), background, needs and challenges. At this stage, it is difficult to tell in what ways the model prioritises certain codes over others when building the narrative. However, this is a data-driven process of writing up personas, and the personas are a reasonable reflection of the data analysis; they are 'fictional, yet realistic' (Harley, 2015). I consider, therefore, that we can provide (at least initially) a positive answer to the problem of covering phase 6 of a TA, as this phase too can be attempted with an LLM. Some words are also needed to reflect on personas, the field of UCD and the use of LLMs. The process I proposed here is clearly and significantly different from the generation of entirely fictional personas, not based on empirical data, which we often see in online tutorials. Those entirely fictional personas cannot be a good ('yet realistic') representation of target users, as they are not derived from user research.
I would also argue that the personas generated by the LLM using a TA of semi-structured interviews are probably to be considered not finished personas, but more like initial prototypes which would require further refinement by the human analyst. Indeed, the model can generate multiple ideas for personas by mixing themes (in my example I mixed 2 theme-needs and 2 theme-challenges, which would allow more than 6,000 combinations), and the analyst can then decide which personas seem more representative of the target user group, and eventually enrich them with further details in line with the proposed design work. In essence, I see the process more as idea generation for personas than as a replacement for the work of the user researcher. The important aspect is that these ideas are generated from real data and from a structured qualitative analysis, and are not just entirely fictional.
Recommendations
In this section I offer some recommendations based on the results of this work and connect, where relevant, some of these recommendations with previous literature.
Human-AI Collaboration. Conducting a qualitative analysis with an LLM is a process of collaboration between the model and the human analyst. This confirms previous observations by Gao et al. (2023) and Jiang et al. (2021). However, we need to better connect Human-AI collaboration to its methodological implications for the social sciences, especially for the reporting of the methods underpinning research. It is clearly not enough to state that an analysis was done with an LLM; all the steps taken by the model and by the human analyst in collaboration will also need to be reported and documented in any methodology to ensure the validity of the results.
Prompting. Prompting is clearly the key element of using LLMs successfully (Xiao et al., 2023), and the reporting of prompts will need to become standard practice in the methods sections of the social sciences when (and if) the analysis is done with an LLM. Moreover, practices for improving prompts will need to be shared across the social science research community to improve the capacity to conduct qualitative analysis, such as TA, and to produce intermediate write-ups of the results.
Coding. In phase 4 of the TA proposed in the previous pages, I evaluated the themes and replaced weak themes with stronger ones. A similar evaluation, however, should also be conducted on the codes generated in phase 2. Upon scrolling the list of codes, for example, I found one code with a truncated quote (this can be seen in Table 8, in one of the 'Problem Solving' codes) and one code with a quote that was not very representative. There is, in other words, room for improving the initial coding, and it may be possible to envision a process which relies on the randomness of the model for doing this. However, this operation is clearly costly, as it requires multiple passes of the dataset through the model.
Publishing. There is debate about the use of LLMs in academic publishing (see e.g. Thorp, 2023; Lund et al., 2023). Whilst the use of these models in place of the human author (or for manipulating results) should be rejected, it may be acceptable to use text written in a Human-AI collaboration in the form of intermediate products. For example, if user personas generated with the previous process (and then enriched by the human analyst) are included in a UCD publication, where the process is fully detailed at the methodological level, it may be acceptable for these types of intermediate textual products to be part of a manuscript. However, entering this debate is beyond the scope of my work.
Chat vs API. The few existing online tutorials, and the few other research contributions focusing on doing qualitative/TA analysis with an LLM, use the web chat version of the model, which requires manually copy-pasting content from a textual file into the chat and then copying the response back into other documents, such as another text file. This procedure is not functional for conducting, e.g., a TA, since it does not easily allow further data manipulation or the reuse of the results of the analysis at scale. For example, a TA done with the web chat would end up in, e.g., a text-processor document, but this analysis cannot easily be reused by the model to write up results, such as user personas, at scale.

API vs Integration. While LLMs are starting to be integrated into Qualitative Data Analysis Software (footnote 8), which will probably limit the use of the web chat in the future, it remains unclear whether these integrations will offer transparency about what is done by the model. The use of the API (although requiring some basic programming skills) offers, at least for now, a reasonable level of transparency, since all the steps conducted for an analysis can be traced and the outputs can be reconstructed. This is important for methodological reasons and for reporting.
Limits of this work
Potential Bias. For some reason, the large majority of the personas generated with the prompt (Figure 5) turned out to be middle-aged. I am not sure of the reason for this, but it may be that the model has a bias and assumes that people working in farming are all middle-aged. The prompting I proposed may therefore need to be adjusted to specify whether the persona should be young, middle-aged or else. Moreover, many of the personas appeared to be called Maria and to come from Italy. In part this may be a bias of the model (which I cannot prove), and in part it may be a reflection of the data seen by the model.

Two sets of themes. I built the personas by deriving them from two sets of themes (needs and challenges). Clearly, personas in UCD have more dimensions than two (e.g. behaviour, interests). However, my focus was only demonstrative: to see if we could write up the results of a TA in a way that is at least satisfactory. For writing more articulated personas (with more dimensions), more themes will need to be produced from the analysis.

Phase 1. This phase of a TA would require familiarisation with the data. At present this cannot be done by the LLM due to its limitations in memory and tokens. We will have to assess whether more powerful models can also perform this phase in the future.
Conclusion
This paper has shown that it is possible to perform phase 6 (writing up) of a Thematic Analysis with an LLM, based on the results of a TA conducted on semi-structured interviews. A dataset of 14 user interviews was analysed with an LLM to derive a set of needs and challenges as themes. These themes were then used in a prompt to the model for building user personas. Two examples of personas were presented, and one was evaluated against the underpinning themes. These personas, I would say, appear 'fictional, yet realistic'. While the process for conducting a TA with an LLM is still crude, there clearly is an initial basis for further work.
Figure 1 - Prompt and response of chatGPT (web chat)

Figure 5 - Prompt used for the generation of a user persona
Footnotes

3. See https://www.persona-institut.de/en/chat-gpt-3-und-personas/
4. See e.g. https://uxdesign.cc/please-do-not-use-chatgpt-to-generate-personas-85ffeaa6690b
5. See https://h2020eureka.eu/da/about
6. See https://h2020eureka.eu/personas-and-user-journeys
7. See https://www.youtube.com/watch?v=db3zlVeAMhE
8. See e.g. https://atlasti.com/
Appendix - Other examples of personas generated by the model based on the TA

User Persona
Name: Giuseppe Rossi
Age: Middle-aged
Goal: To gather reliable information and connect with colleagues in the agricultural industry to optimize work in his company and avoid always cultivating the same crops.
Background: Giuseppe Rossi is a farmer from Italy who has been in the agricultural industry for over 25 years. He owns a medium-sized farm and has a team of employees who work for him. Giuseppe is interested in learning more about the latest developments in agriculture and how digital tools can help optimize his work. However, he faces challenges in accessing reliable information due to language barriers and limited network availability. He also struggles with finding a central platform for accessing information and knowledge related to agriculture. Giuseppe values personal and professional exchange with colleagues and representatives in order to gain trustworthy information and community connection within the agricultural industry.
Main Needs:
- Need to communicate and gather information in a language that is understood
- Need for personal and professional exchange with colleagues and representatives in order to gain trustworthy information
- Need for a platform to share knowledge and experiences in agriculture
Main Challenges:
- Language barriers for accessing information in foreign languages
- Limited network availability and poorly maintained websites
- Finding reliable sources of information in the digital age
IT Skills: Medium
Attitude toward digital innovation: Medium
Quote: "Digital sources can help optimize work in the company. If you have tools that allow you to work better, the entrepreneur works better, the employees and the company work better and so on. Maybe you also avoid a lot of waste."

Persona: Anna, from Poland
Age: Middle age (between 40-60 years old)
Goal: To increase productivity and efficiency on her family-owned farm
Background: Anna is a middle-aged farmer from a rural town in Poland. She works on a family-owned farm that produces a variety of crops and livestock. With the increasing use of technology in agriculture, Anna has recognized the need to incorporate digital tools into her farming practices to increase productivity and efficiency. She has some basic computer skills, but would like to learn more about how digital tools can be implemented on her farm. Anna is also fluent in Polish but struggles to find information in English or other languages.
Needs:
- Access to affordable digital tools and equipment
- Practical and efficient digital tools that aid in agricultural tasks and processes
- User-friendly and easily accessible digital tools that provide efficient information retrieval

Background: Maria Rossi is a small-scale farmer from Italy who has been working in agriculture for over 20 years. She has a small farm where she grows vegetables and raises cattle. Maria is looking for a platform where she can connect with other farmers and share her experiences to gain insight into how she can improve her crop yield and livestock production. She is tired of the limited sources of information available in her community and wants to learn from other farmers who are facing similar challenges.
Main Needs:
- Personal and professional exchange with colleagues and representatives in the agricultural industry.
- Access to a platform for knowledge sharing in agriculture.
- Connection and community within the agricultural industry.
Main Challenges:
- Navigating through the vast amount of online information available to find reliable sources and avoid misleading information.
- Identifying crop issues using digital tools and validating them in the field.
- Limited availability of economically efficient digital tools.
IT Skills: Medium
Attitude toward Digital Innovation: Medium
Quote: "I would like to have much more information in one platform. I use a lot of Facebook. I use Facebook more for work than to socialize. It is an easy way to follow the news from all the Italian newspapers and also the international newspapers."

User Persona
Name: Marta
Age: Late 30s
Country: Italy
Background: Marta inherited her family's farm, located in a small village in Southern Italy. Despite having a degree in agriculture, she struggled to manage the farm and keep it up to date with the latest technologies. She is now motivated to bring her farm back to its prime and generate more revenue, but she has limited knowledge of digital tools and faces language barriers while searching for technical information online.
Goal: To access personalized and user-friendly digital tools that can help her manage her farm efficiently and access technical information in Italian.
Main Needs:
- Personalized information
- User-friendly digital tools
- Information in Italian
Main Challenges:
- Limited digital skills
- Language barriers while searching for technical information online
IT Skills: Low
Attitude towards digital innovation: Medium
Quote: "It should be user-friendly...They don't have good digital skills. And so everything should be very intuitive. Yes, a very easy tool to use. This is the main argument."
References

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Clarke, V., & Braun, V. (2013). Teaching thematic analysis: Overcoming challenges and developing strategies for effective learning. The Psychologist, 26(2), 120-123.
De Paoli, S. (2023). Can Large Language Models emulate an inductive Thematic Analysis of semi-structured interviews? An exploration and provocation on the limits of the approach and the model. Available from: https://arxiv.org/abs/2305.13014
Floridi, L. (2023). AI as Agency without Intelligence: On ChatGPT, large language models, and other generative models. Philosophy & Technology, 36(1), 15. https://doi.org/10.1007/s13347-023-00621-y
Gao, J., Guo, Y., Lim, G., Zhan, T., Zhang, Z., Li, T. J. J., & Perrault, S. T. (2023). CollabCoder: A GPT-Powered Workflow for Collaborative Qualitative Analysis. arXiv preprint arXiv:2304.07366. https://doi.org/10.48550/arXiv.2304.07366
Harley, A. (2015). Personas make users memorable for product team members. Nielsen Norman Group. Available from: https://www.nngroup.com/articles/persona/
Jiang, J. A., Wade, K., Fiesler, C., & Brubaker, J. R. (2021). Supporting serendipity: Opportunities and challenges for Human-AI Collaboration in qualitative analysis. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-23. https://doi.org/10.1145/3449168
King, M. R., & ChatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1-2. https://doi.org/10.1007/s12195-022-00754-8
Kocaballi, A. B. (2023). Conversational AI-powered design: ChatGPT as designer, user, and product. Available from: https://arxiv.org/abs/2302.07406
Laubheimer, P. (2020). Persona Types: Lightweight, Qualitative, and Statistical. Nielsen Norman Group. Available from: https://www.nngroup.com/articles/persona-scope/
Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570-581. https://doi.org/10.1002/asi.24750
Matthews, T., Judge, T., & Whittaker, S. (2012). How do designers and user experience professionals actually perceive and use personas? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1219-1228).
Miaskiewicz, T., & Kozar, K. A. (2011). Personas and user-centered design: How can personas benefit product design processes? Design Studies, 32(5), 417-430. https://doi.org/10.1016/j.destud.2011.03.003
Nielsen, L., & Storgaard Hansen, K. (2014). Personas is applicable: A study on the use of personas in Denmark. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1665-1674).
O'Connor, S. (2022). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, 103537.
Rosala, M. (2019). How to analyze qualitative data from UX research: Thematic analysis. Nielsen Norman Group. Available from: https://www.nngroup.com/articles/thematic-analysis/
Salazar, K. (2020). Just-right personas: How to choose the scope of your personas. Nielsen Norman Group. Available from: https://www.nngroup.com/articles/persona-scope/
Siemon, D. (2022). Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decision and Negotiation, 31(5), 871-912. https://doi.org/10.1007/s10726-022-09792-z
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Turner, A. M., Reeder, B., & Ramey, J. (2013). Scenarios, personas and user stories: User-centered evidence-based design representations of communicable disease investigations. Journal of Biomedical Informatics, 46(4), 575-584. https://doi.org/10.1016/j.jbi.2013.04.006
UK Government (2017). Learning about users and their needs. Available from: https://www.gov.uk/service-manual/user-research/start-by-learning-user-needs
Vágó, S., & Spanoghe, P. (2023). EUREKA WP2: Regional Interview Transcripts [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7571028
Van der Cruyssen, L. (2021). D2.3. Report on end-user archetypes, end-user journeys and validation workshops. Available from: https://eureknos-eureka.fra1.digitaloceanspaces.com/production/deliverables/D2.3_Enduser_archetypes_journeys_WP2.pdf
Vössing, M., Kühl, N., Lind, M., & Satzger, G. (2022). Designing transparency for effective human-AI collaboration. Information Systems Frontiers, 24(3), 877-895. https://doi.org/10.1007/s10796-022-10284-3
Xiao, Z., Yuan, X., Liao, Q. V., Abdelghani, R., & Oudeyer, P. Y. (2023). Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding. In Companion Proceedings of the 28th International Conference on Intelligent User Interfaces (pp. 75-78).
| [] |
[
"Text-Only Image Captioning with Multi-Context Data Generation",
"Text-Only Image Captioning with Multi-Context Data Generation"
] | [
"Feipeng Ma \nUniversity of Science\nTechnology of China\n\nWeChat\nTencent Inc\n",
"Yizhou Zhou [email protected] \nWeChat\nTencent Inc\n",
"Fengyun Rao [email protected] \nWeChat\nTencent Inc\n",
"Yueyi Zhang \nUniversity of Science\nTechnology of China\n",
"Xiaoyan Sun [email protected] \nUniversity of Science\nTechnology of China\n"
] | [
"University of Science\nTechnology of China",
"WeChat\nTencent Inc",
"WeChat\nTencent Inc",
"WeChat\nTencent Inc",
"University of Science\nTechnology of China",
"University of Science\nTechnology of China"
] | [] | Text-only Image Captioning (TIC) is an approach that aims to construct a model solely based on text that can accurately describe images. Recently, diffusion models have demonstrated remarkable capabilities in generating high-quality images that are semantically coherent with given texts. This presents an opportunity to generate synthetic training images for TIC. However, we have identified a challenge: the images generated from simple descriptions typically exhibit a single perspective with one or limited contexts, which is not aligned with the complexity of real-world scenes in the image domain. In this paper, we propose a novel framework that addresses this issue by introducing multi-context data generation. Starting with an initial text corpus, our framework employs a large language model to select multiple sentences that describe the same scene from various perspectives. These sentences are then summarized into a single sentence with multiple contexts. We generate simple images using the straightforward sentences and complex images using the summarized sentences through diffusion models. Finally, we train the model exclusively using the synthetic image-text pairs obtained from this process. Experimental results demonstrate that our proposed framework effectively tackles the central challenge we have identified, achieving state-of-the-art performance on popular datasets such as MSCOCO, Flickr30k, and SS1M. | 10.48550/arxiv.2305.18072 | [
"https://export.arxiv.org/pdf/2305.18072v1.pdf"
] | 258,960,145 | 2305.18072 | 6d589d44957e8e4ce4544f11e2a5ac589c7a4f00 |
Text-Only Image Captioning with Multi-Context Data Generation
Feipeng Ma
University of Science
Technology of China
WeChat
Tencent Inc
Yizhou Zhou [email protected]
WeChat
Tencent Inc
Fengyun Rao [email protected]
WeChat
Tencent Inc
Yueyi Zhang
University of Science
Technology of China
Xiaoyan Sun [email protected]
University of Science
Technology of China
Text-Only Image Captioning with Multi-Context Data Generation
Text-only Image Captioning (TIC) is an approach that aims to construct a model solely based on text that can accurately describe images. Recently, diffusion models have demonstrated remarkable capabilities in generating high-quality images that are semantically coherent with given texts. This presents an opportunity to generate synthetic training images for TIC. However, we have identified a challenge that the images generated from simple descriptions typically exhibit a single perspective with one or limited contexts, which is not aligned with the complexity of real-world scenes in the image domain. In this paper, we propose a novel framework that addresses this issue by introducing multi-context data generation. Starting with an initial text corpus, our framework employs a large language model to select multiple sentences that describe the same scene from various perspectives. These sentences are then summarized into a single sentence with multiple contexts. We generate simple images using the straightforward sentences and complex images using the summarized sentences through diffusion models. Finally, we train the model exclusively using the synthetic image-text pairs obtained from this process. Experimental results demonstrate that our proposed framework effectively tackles the central challenge we have identified, achieving the state-of-the-art performance on popular datasets such as MSCOCO, Flickr30k, and SS1M. * Corresponding author Preprint. Under review.arXiv:2305.18072v1 [cs.CV] 29 May 2023(1) The natural image and the caption with four sentences A,B,C,D.(2) Uni-context images generated by single caption A,B,C,D, respectively.(3) The multi-context image generated by Summarized caption. A: A large airplane that is parked on a runway. B: A cart pulling luggage near a plane at an airport. C: A parked plane is being directed by a man on the ground. D: A large jetliner sitting on top of an airport tarmac. Summary: A man directs a parked airplane on the runway while a luggage cart approaches it at the airport. Caption: A large airplane that is parked on a runway. A cart pulling luggage near a plane at an airport. A parked plane is being directed by a man on the ground. A large jetliner sitting on top of an airport tarmac.
Introduction
Significant advancements have been made in the field of image captioning, which focuses on generating descriptive text for given images. The main challenge in image captioning lies in learning the correlation between images and text, which heavily relies on the available datasets. Current research efforts [10] primarily concentrate on training models using human-annotated data, resulting in notable improvements. However, these models exhibit limited scalability and lack domain generality due to the relatively small size of the available data. To address this limitation, some researchers [13,22,18] have explored the approach of pretraining models on image-text pairs collected through web crawling, followed by finetuning on downstream human-annotated data using a pretraining-finetuning paradigm. Nonetheless, the finetuning stage remains crucial for achieving optimal performance, as models trained solely on web-crawled image-text pairs often struggle to generalize effectively to downstream datasets without additional finetuning [19].
Unsupervised image captioning has become an important research topic due to the high cost of human-annotated data and the low quality of web-crawled data. Unpaired image captioning, which trains models on unpaired images and texts, is a mainstream research direction in unsupervised image captioning. Since unpaired image captioning can utilize widespread images and texts without requiring paired information, it has the potential to overcome the limitations of supervised and web-crawled methods. However, existing methods [14,21,26] have heavily relied on object detectors to initialize image-text pairs, which often leads to errors because the relationships and attributes of objects are ignored. Recent methods [23,29], leveraging the cross-domain ability of CLIP [31], perform text-only training to train a text decoder to reconstruct text from CLIP's text features and decode the image features extracted by CLIP during inference. While text-only training methods eliminate the dependence on object detectors for training, they rely heavily on the cross-domain capability of CLIP during inference.
In this paper, we delve deeper into the Text-only Image Captioning (TIC) method. We have observed that the advancement of large models, such as diffusion models and large language models (LLMs), has made it feasible to utilize synthetic data for image captioning. However, training the image captioning model directly on synthetic image-text pairs has limited performance since natural images can be described from multiple perspectives, whereas synthetic images generated with easily accessible captions typically have limited contexts and less description capacity. This is because easily available captions are often simple sentences describing a single scene. Images generated with such sentences lack the ability to be described from multiple perspectives, which are called uni-context images. Figure 1 provides several examples of uni-context and multi-context images. The key to solving this problem is to find a set of captions that can describe the same image, and then generate an image corresponding to the set so that the generated image can be described by this set of captions from multiple perspectives. We discovered that LLMs are fully capable of selecting captions describing the same image from a candidate set and summarizing the captions for generation.
We propose a framework for TIC with Multi-Context Data Generation (MCDG), which utilizes the diffusion model and the LLM. Our framework consists of two stages: generation and training. In the generation stage, we address the core challenge by (1) selecting captions that describe the same image from multiple perspectives and forming groups, and (2) summarizing the captions within each group into one sentence with multiple contexts. LLMs are employed to accomplish these tasks simultaneously. Using each caption as a query, we initially group captions based on text feature similarity and then instruct LLMs to select captions describing the same image from multiple perspectives and summarize them. With the summary sentences, complex scene images are generated using Stable Diffusion. In the training stage, both types of synthetic data are used: uni-context images generated from single sentences and multi-context images generated from summarized sentences. Uni-context images are paired with their corresponding single captions, while multi-context images are paired with the corresponding sentence groups, providing multiple paired texts for each multi-context image. The model is trained entirely on the synthetic data.
The main contributions are summarized as follows:
(1) Our study represents the pioneering attempt, to the best of our knowledge, to address the TIC task by harnessing the power of diffusion models and LLMs.
(2) We have observed that a uni-context description alone falls short in generating comprehensive image captions. To overcome this limitation, we propose a multi-context data generation method specifically tailored for TIC.
(3) We conduct experiments on prominent datasets, including MSCOCO, Flickr30k, and SS1M. The experimental results demonstrate that our method achieves state-of-the-art performance.
Related Work
Supervised Image Captioning. Classical methods model image captioning as a translation task, training models to translate images to text on human-annotated paired image-text data. Models [43,20] for image captioning usually consist of a CNN-based encoder to encode the input image and an RNN-based decoder to generate captions. Recent works [44,5] adopt transformers for both the encoder and decoder and achieve promising results. Besides, some works try to introduce objects [2,36], segmentation [46], gazing patterns [1] or attributes [13] to improve traditional image captioning methods. Due to the limitation of human-annotated data, recent works [22,18,45] explore a pretraining-finetuning paradigm, pretraining models on large-scale web-crawled datasets and then finetuning on downstream human-annotated datasets. Although these methods obtain improvements from the pretraining stage, their performance still relies heavily on the finetuning stage, which requires human-annotated data.
Unsupervised Image Captioning. Unsupervised image captioning aims to train captioning models without relying on human-annotated data. Previous research uses independent image sources and text corpora to train models, with object detectors playing an important role in building the initial connection between the two modalities. Feng et al. [14] make the first attempt and introduce policy gradients to reward generated captions with correct visual concepts. Later, Laina et al. [21] propose to align image and text in a shared multimodal space constructed by visual concepts. Meng et al. [26] propose to harvest objects corresponding to the given sentences instead of finding a candidate image. However, these methods all rely on an object detector to establish the initial correlation between images and texts, which ignores the attributes of objects and the relationships among objects, and they are constrained by the generalization ability of the object detector used. Recently, text-only training methods have been proposed to train a text decoder that can reconstruct text from text features extracted by the CLIP text encoder. During inference, they use the image encoder of CLIP to extract image features, which align well with the text features in the shared feature space. Li et al. [23] propose a training-free mechanism utilizing the training text features to project visual embeddings to the text embedding space during inference. Nukrai et al. [29] propose a noise-injection training method to reduce the modality gap during inference. However, these methods rely heavily on the cross-modality ability of CLIP and are difficult to transfer to unseen domains without finetuning CLIP.
Applications of Diffusion Models. Diffusion models have demonstrated powerful generative capabilities and have been applied to image generation [16,12], video generation [16,42] and text generation [7]. Notably, conditional generation models offer enhanced controllability and produce high-quality outputs, thus expanding the range of potential applications. In particular, text-to-image generation models, such as DALL-E 2 [33], Imagen [35] and Stable Diffusion [34], have been applied to many downstream tasks. For image classification, He et al. [15] demonstrated the utility of synthetic data derived from GLIDE [28] for zero-shot and few-shot classification, showcasing substantial potential for pretraining. Azizi et al. [3] show that further finetuning a text-to-image model on ImageNet can yield great improvements. For image segmentation, Zhao et al. [50] combine Stable Diffusion and CLIP to obtain large-scale images with accurate categories to solve the scalability problem of previous Copy-Paste methods. These tasks require high-quality synthetic images but have low requirements for semantic consistency: only the object corresponding to the given single label needs to be present in the synthetic image, without considering the whole scene. Text-to-image diffusion models can handle this simple generation condition very well. Unlike these tasks, image captioning requires complex scenes in synthetic images that can be described from different perspectives, which places demands on both image quality and semantic consistency. At the same time, since diffusion models cannot generate multi-view images from simple text, it is also a great challenge to construct a suitable corpus.
Figure 2: Overview of our proposed MCDG framework. The framework comprises two stages: the generation stage and the training stage. In the generation stage, we commence by performing an initial grouping of captions within the corpus. Next, LLMs are employed to select captions that depict an image from multiple perspectives, which are extracted from the obtained candidate sets. These selected captions are then condensed into a single sentence through summarization. These succinct sentences play a pivotal role in generating multi-context images using Stable Diffusion. Finally, in the training stage, we exclusively train the image captioning model on the synthetic images and single captions. The incorporation of summarized captions during training is optional and can be included based on preference.
Method
Overview
Our framework, presented in Figure 2, comprises two stages: generation and training. The generation stage consists of three steps: initial caption grouping, LLM-based selection and summarization, and image generation via stable diffusion. We train the image captioning model solely on the synthetic data obtained from the generation stage, including multi-context and uni-context images generated through summarized and single captions, respectively. Crucially, we do not use real images or rely on pre-trained visual-language models like CLIP during training. This decision ensures that our model develops cross-modal capabilities by learning to associate images with text exclusively through synthetic data.
Generation Stage
Given a text corpus T = {t_1, t_2, ..., t_N} with N captions, we first group the captions by semantic similarity. Each group G_i = {t_i, t_j, ..., t_m}_n contains n captions that describe a multi-context image from multiple perspectives, but they may not be consistent or complete. Next, we use LLMs to select the most representative captions in each group and summarize them into a single sentence s_i. Finally, we use the original captions and the summarized captions to generate uni-context and multi-context images, respectively, using Stable Diffusion.
Initial Grouping of Captions. To address the impracticality of LLMs selecting captions that can describe the same image from multiple perspectives within a text corpus, we begin by conducting a preliminary grouping of captions based on their semantic similarity. Captions that describe the same image often exhibit a high degree of semantic similarity, even when viewed from different perspectives. Consequently, employing simple similarity-based retrieval or clustering methods can facilitate appropriate grouping of the corpus. To this end, we employ the text encoder of CLIP to extract caption features and calculate the cosine similarity of each caption to other captions in the corpus. The top n sentences are selected to form a group around each caption, with some captions overlapping between groups. To further reduce redundancy, we employ a greedy algorithm to sample the minimum number of groups from the constructed groups while ensuring that the captions in the groups cover the entire corpus.
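As an illustration, a minimal sketch of this grouping step is given below, using the CLIP text encoder from the Hugging Face transformers library; the checkpoint, batch size, and the exact form of the greedy-cover loop are our assumptions rather than the authors' reported implementation.

```python
# Hypothetical sketch of the initial caption grouping: CLIP text features
# -> top-n nearest captions per anchor -> greedy cover so the sampled
# groups still span the whole corpus.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_captions(captions, batch_size=256):
    feats = []
    for i in range(0, len(captions), batch_size):
        inputs = processor(text=captions[i:i + batch_size],
                           return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            f = model.get_text_features(**inputs)
        feats.append(torch.nn.functional.normalize(f, dim=-1))
    return torch.cat(feats)

def initial_grouping(captions, n=30):
    feats = embed_captions(captions)                # (N, d), L2-normalized
    sim = feats @ feats.T                           # cosine similarity matrix
    groups = sim.topk(n, dim=-1).indices.tolist()   # top-n per anchor (incl. itself)

    # Greedy cover: keep the fewest groups whose union covers every caption.
    uncovered, kept = set(range(len(captions))), []
    while uncovered:
        best = max(groups, key=lambda g: len(uncovered & set(g)))
        kept.append(best)
        uncovered -= set(best)
    return kept
```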
Selection and Summarization via LLMs. The task of selection and summarization poses significant challenges in our scenario. The absence of an anchor image to guide the selection of captions that can describe it presents a significant obstacle. Relying on text alone to determine whether a set of texts can describe an image from different perspectives is also challenging, given the limitations of existing metrics. While similarity-based metrics can measure the similarity between texts, they cannot discern whether different texts are describing different perspectives of the same image. Additionally, traditional text summarization approaches are not suitable for our scenario since they extract key information from long texts, whereas our objective is to combine information from several independent short captions to produce a suitable description of a given image. However, the power of LLMs in open domains allows us to address these difficult-to-define problems. We employ LLMs to simultaneously tackle both tasks. For each group of captions, we consider them as a candidate set for describing a specific image. We then design a prompt that enables us to accomplish both selection and summarization simultaneously. To achieve both selection and summarization, we use the following Instruction I.
Select and summary sentences in the given sentences set. You should find 3 to 8 sentences that form a description from the same or different views of the same image. The meanings of the selected sentences in a group should not conflict with each other. Summarize the sentences as one not exceed 50 words to describe the scene in objective style. The summary sentence must be objective and concise, without ambiguity and uncertainty. Return the selected index and the summarized sentence in json format like 'index': list,'summary': str. Return directly the json format results without explanation. The given sentences set are:
Then, we add the index before each sentence in the group and concatenate them after the instruction in order, as input to the LLMs. For group G_i = {t_i, t_j, ..., t_m}_n, the final input to the LLMs is I + '1. t_i 2. t_j ... n. t_m'. The LLMs return the indices of the selected captions and the summarized sentence for each input prompt in JSON format, e.g., {'index': [i, j, m], 'summary': summarized sentence}; a sketch of this call is given at the end of this subsection.

Image Generation with Stable Diffusion. In this crucial step, we utilize the power of Stable Diffusion to generate synthetic images based on the initial corpus T and the summarized sentences S. It is important to note that we refrain from performing any prompt engineering on the captions fed to Stable Diffusion, and from filtering the synthetic image-text pairs using a pre-trained vision-language model. By doing so, we can truly assess the usefulness of the synthetic data and the effectiveness of our proposed method in generating accurate and diverse images. As a result of this process, we obtain two types of images, namely uni-context images and multi-context images. Uni-context images correspond one-to-one to the captions in the initial corpus, while multi-context images are generated from the summarized captions and paired with the multiple captions selected within each group, forming multiple image-text pairs. This approach not only allows us to capture a range of perspectives and details related to the same image, but also enhances the diversity and variability of the resulting synthetic data. Overall, this step is crucial in demonstrating the effectiveness and applicability of our proposed method in generating high-quality synthetic data that can be utilized for a wide range of downstream tasks.
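As a concrete illustration of the selection-and-summarization call, the following minimal sketch assumes the OpenAI Python client (v1 interface) and that the model returns the JSON object requested by the instruction; the function name, temperature setting, and absence of retry logic are our assumptions, not details reported in the paper.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# INSTRUCTION holds the full prompt quoted in the text above.
INSTRUCTION = "Select and summary sentences in the given sentences set. ..."

def select_and_summarize(group):
    """Ask GPT-3.5-turbo to pick captions describing one image and summarize them."""
    numbered = " ".join(f"{k + 1}. {caption}" for k, caption in enumerate(group))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": INSTRUCTION + " " + numbered}],
        temperature=0,
    )
    result = json.loads(response.choices[0].message.content)  # {'index': [...], 'summary': str}
    selected = [group[k - 1] for k in result["index"]]        # returned indices are 1-based
    return selected, result["summary"]
```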
Training Stage
The encoder-decoder architecture based on the Transformer [40] has been widely used in various natural language processing tasks, and we follow the approach of previous works [22,13] by adopting this architecture in our model. In order to train our model, we utilize the synthetic data generated during the generation stage, which includes both uni-context image-text pairs and multi-context image-text pairs. We believe that by using both types of image-text pairs in training, our model will learn to handle different scenarios better. To achieve this, we treat the two types of image-text pairs differently during the training process, taking into account the different characteristics of each type. This approach allows our model to learn from a diverse range of examples and better generalize to new data.

Uni-context image-text pairs. Uni-context images, which stem from single captions, inherently lack the ability to encompass various perspectives, limiting their descriptive capacity. Consequently, each uni-context image corresponds specifically to a given caption, capturing its semantic meaning faithfully. Nevertheless, as previously highlighted, training solely on the generated image-text pairs has proven to be somewhat restrictive in terms of effectiveness. To address this limitation, we propose an approach that leverages the outcomes of the initial caption grouping phase during the generation stage. In the initial grouping of captions, we pair each anchor caption with the top n−1 most similar captions from the extensive text corpus, forming cohesive and informative groups. Given the substantial size of the corpus, the most similar captions within each group often exhibit semantic equivalence while manifesting diverse linguistic expressions. Although these captions may not provide the desired breadth of perspectives required for comprehensive summarization, they serve an invaluable purpose as paired text for generating uni-context images aligned with the anchor caption.
Multi-context image-text pairs. Multi-context images, generated through summarized captions, possess the unique capability of being paired with multiple captions. In our approach, we leverage the captions selected by LLMs as the accompanying text, enabling each multi-context image to be described from diverse perspectives provided by these chosen captions. Notably, the quantity of captions associated with each multi-context image is not predetermined, but rather relies on the attributes of the initial grouping and LLMs. In the design of our prompts, we explicitly permit LLMs to select a specific number of sentences within a specified range.
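To make the pairing scheme concrete, the following hypothetical helper assembles the training set from the two kinds of generated images; the record layout (`uni_records`, `multi_records`) and the top-k of 4 neighbor captions are illustrative assumptions, not the authors' exact data format.

```python
# Hypothetical assembly of the synthetic training set from the generation
# stage. Each uni-context image is paired with its anchor caption plus its
# most similar captions from the initial grouping; each multi-context image
# is paired with every caption the LLM selected for its group.
def build_training_pairs(uni_records, multi_records, top_k=4):
    pairs = []
    for rec in uni_records:
        # rec: {"image": path, "anchor": str, "neighbors": [str, ...]}
        pairs.append((rec["image"], rec["anchor"]))
        for caption in rec["neighbors"][:top_k]:
            pairs.append((rec["image"], caption))
    for rec in multi_records:
        # rec: {"image": path, "selected": [str, ...]} — one pair per selected
        # caption, so the image is described from several perspectives.
        for caption in rec["selected"]:
            pairs.append((rec["image"], caption))
    return pairs
```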
Experiments
To investigate the capabilities of our MCDG approach in diverse captioning scenarios, we conducted experiments considering two distinct settings: in-domain unpaired image captioning and zero-shot image captioning. In the context of in-domain unpaired image captioning, both the training and test data are sourced from the same dataset. However, it is important to note that during the training process, the image and text data are not paired together. On the other hand, in zero-shot image captioning, the training and test data are sampled from separate datasets, introducing a greater degree of challenge and requiring the model to generalize effectively across different data sources.
Settings
Datasets. For in-domain unpaired image captioning, we use the MSCOCO [8] and Flickr30k [47] datasets. MSCOCO contains 123,287 images, and each image is annotated with at least 5 captions. We follow Karpathy [20] to split MSCOCO into 118,287, 4,000 and 1,000 images for training, validation and testing, respectively. Flickr30k contains 31,783 images, each of which is also tagged with 5 sentences. For zero-shot image captioning, we use the SS1M [14] dataset for training and evaluate on MSCOCO. SS1M is a web-crawled text corpus specifically curated for MSCOCO captions: Feng et al. [14] crawled 2,322,628 image descriptions from Shutterstock by using the names of the eighty object classes in MSCOCO as keywords. We follow [23] to remove sentences with more than fifteen words. Note that for all the datasets, we only use the text of the dataset during training and do not obtain any real images.

Evaluation Metrics. To evaluate the quality of the generated captions, we follow previous works [9, 26, 29], using BLEU [30], METEOR [4], ROUGE [24] and CIDEr-D [41] as our metrics.

Implementation Details. In the generation stage, we adopt varying initial group sizes for different datasets. Specifically, for MSCOCO, Flickr30k, and SS1M, the group sizes (n) are set to 30, 20, and 10, respectively. In the process of selecting and summarizing captions, we utilize the GPT-3.5-turbo model, leveraging its capabilities through API access. For image generation, we employ Stable Diffusion v1-4 with a resolution of 512 × 512, utilizing 20 sampling steps. To expedite the sampling process of the diffusion model, we incorporate the DPM-Solver [25] in our sampling methodology. During the training stage, we follow the approach outlined in BLIP [22]. The encoder transformer is initialized from ViT-B, which is pretrained on ImageNet [39]. The decoder transformer is initialized from BERT-base [11]. Our model is trained using the Adam optimizer with a weight decay of 0.05 on synthetic data for 30 epochs with a batch size of 36. The learning rate is set to 1e-5, and a warm-up strategy is employed during training. Additionally, the input synthetic images are resized to 384 × 384. All experiments are conducted on 8 NVIDIA A100 GPUs.
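For the image-generation settings just listed, a hedged sketch using the Hugging Face diffusers library is shown below; the checkpoint name matches Stable Diffusion v1-4, while the output directory and file naming are our own choices.

```python
import os
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# Swap in the fast DPM-Solver scheduler [25] so that 20 sampling steps suffice.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

def generate_images(captions, out_dir="synthetic"):
    os.makedirs(out_dir, exist_ok=True)
    for i, caption in enumerate(captions):  # raw captions, no prompt engineering
        image = pipe(caption, height=512, width=512,
                     num_inference_steps=20).images[0]
        image.save(os.path.join(out_dir, f"{i:07d}.png"))
```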
Quantitative Results
In-domain Unpaired Image Captioning. We conduct in-domain unpaired image captioning on the MSCOCO and Flickr30k datasets. We compare our MCDG with both supervised and unsupervised methods. Supervised methods include BUTD [2], CLIPCap [27] and the method proposed by Barraco et al. [5]. BUTD proposes a combined bottom-up and top-down mechanism that leverages Faster R-CNN to extract region features. CLIPCap and [5] are recent approaches that utilize CLIP as the visual encoder and achieve great improvements. Unsupervised methods include the following: Feng et al. [14] and Laina et al. [21] train models on unpaired images and text and utilize visual concepts to construct the connection between images and text. ZeroCap [38], Magic [37] and ESPER-Style [48] introduce GPT-2 [32] as the language decoder. CLIPRe [37] is a CLIP-based method that retrieves captions. CapDec [29] and DeCap [23] perform text-only training with the powerful cross-modality capability of CLIP.

Figure 3: Comparisons for captions generated by CLIPCap [27], CapDec [29], and our proposed MCDG for exemplary images from the MSCOCO dataset.
Zero-shot Image Captioning. The captions of MSCOCO and Flickr30k can naturally be grouped, as each image has at least 5 descriptions in these datasets. To verify the effectiveness of our MCDG, we train on the web-crawled SS1M captions, which are not grouped in nature, and perform zero-shot image captioning on MSCOCO. Since there is no in-domain text, we use the summarized captions for training as well. In this experimental setting, we compared our method against several other approaches. (1) ZeroCap [38] and ConZIC [49] directly employ pre-trained vision-language models without finetuning. (2) CLIPRe and DeCap utilize training on the large CC3M-text corpus [6].
(3) Both DeCap and Feng et al. [14] also employed SS1M for training their models. The results, shown in Table 2, demonstrate the effectiveness of our method, which surpasses previous methods across all evaluation metrics. Notably, even without vision-language pretrained weights and support-set features, our method outperforms DeCap by 52.8% in terms of the B@4 metric. Therefore, we can conclude that the effectiveness of our method does not depend on the data having a grouping nature, and it still performs well on web-crawled data.
Qualitative Results
We provide examples of the generated captions on the MSCOCO dataset in Figure 3. To assess the qualitative results, we compare them with those from previous methods, namely CLIPCap [27] and CapDec [29]. The incorrect parts in the captions are highlighted in red, while the improvements made by our method are highlighted in green. In the case of the first image, our method provides an accurate description of the location and the number of people, which other methods fail to achieve due to errors. When it comes to the second and third images, our method excels by capturing more detailed descriptions, such as the inclusion of "a ball" in one case and accurately specifying the colors "brown and white" in the other.
Ablation Study
The effect of components of MCDG. Table 3 presents an ablation study conducted to evaluate the different modules within our framework on the MSCOCO dataset. Initially, we establish a baseline approach, referred to as the Baseline, which utilizes uni-context images generated directly from single captions. Subsequently, we incorporate the Initial Grouping module to establish a mapping between uni-context images and captions. In this module, we select the top four similar captions as target captions for the synthetic image generated by the query caption. Building upon the Baseline combined with Initial Grouping, we examine three variations of selection and summarization: (1) using selection without summarization (Sel. w.o. Sum.), where we solely employ LLMs to select additional captions for training; (2) using summarization without selection (Sum. w.o. Sel.), where LLMs directly summarize captions in the Initial Grouping.
(3) using both selection and summarization (Sel. & Sum.), which is our complete framework. To further validate the importance of a high-quality caption set for summarization, we directly employ the ground-truth grouping (w. GTG) from the MSCOCO dataset to construct a summary, which is then used to generate multi-context images. The results of the Baseline show that the performance of directly using the synthetic images generated by single captions is limited. Except for B@4, all other metrics are inferior to the previous state-of-the-art method, and on CIDEr it is even lower by 2.9. After adopting the most similar captions of the Initial Grouping as target captions, we find that the performance is comparable with other methods. In particular, the CIDEr metric improves considerably, from 88.9 to 94.7. However, we find that both Sel. w.o. Sum. and Sum. w.o. Sel. perform worse than Initial Grouping on the CIDEr metric. This result shows that just using LLMs to select captions to generate images for training, without summarizing them to obtain multi-context images, is ineffective. Similarly, solely summarizing the results of the Initial Grouping is also ineffective, because directly summarizing captions in the Initial Grouping does not yield sentences that have the ability to generate multi-context images. The performance with Sel. & Sum., which is our proposed MCDG, is much improved. The results show that the multi-context images generated by our MCDG are helpful for unpaired image captioning.
Ideally, LLMs would select and summarize captions from the entire corpus, but due to resource constraints, we only perform Selection & Summarization on the small candidate sets from the Initial Grouping, which does not achieve optimal performance. To further validate the upper bound of our method, we directly use the ground-truth grouping of MSCOCO and summarize its captions. This approach significantly improves performance on all metrics. This result proves that our proposed MCDG is effective and still has room for improvement.
The effect of finetuning on MSCOCO. In Table 4, we aim to explore the transferability of the model trained through our MCDG. We follow the pretraining-finetuning paradigm to pretrain the model on 1M synthetic data obtained by our MCDG, then use different proportions of the MSCOCO training image-text pairs to finetune the model and evaluate on MSCOCO. We found that with only 1% of the MSCOCO training data for finetuning, the performance of our model improves very significantly. This is because we did not consider the domain gap between synthetic and real images in our MCDG, so with a small amount of finetuning on real images, our model improves significantly when evaluated on real data. When using 50% of the training data for finetuning, the performance is comparable with ViLT-CAP [13], which is pretrained on 10M image-text pairs and finetuned on the full MSCOCO training data. However, compared to the more advanced ViT-CAP [13] method, there is still a gap. The results demonstrate that the model trained with our proposed MCDG framework has strong generalization ability, and the performance of pretraining on 1M synthetic data is comparable with that of pretraining on 10M real data. To further explore the upper bound of pretraining on synthetic data generated by our MCDG, we expand the number of synthetic data used for pretraining in the Supplementary Material.
The Impact of the Number of Multi-context Images. To investigate the impact of multi-context images on the performance of our captioning model, we conducted experiments using different numbers of multi-context images for training and evaluated the results on the MSCOCO Karpathy-test split. Table 5 presents the findings, where we varied the number of multi-context images from 10,000 to 150,000. The results demonstrate that incorporating more multi-context images during training improves the performance of image captioning. In particular, we observed significant gains in the B@4 and CIDEr metrics when increasing the number of multi-context images from 50,000 to 100,000. The best performance so far is obtained when using 150,000 multi-context images. However, this is not the upper limit, as we were unable to explore experiments with a larger number of multi-context images due to resource constraints.
The Impact of Vision-Language Pretraining Models. In previous experiments, we intentionally did not utilize weights of vision-language pretraining models trained on real image-text pairs. However, in this section, we follow previous methods [23,29,27] to employ the image encoder of CLIP [31] as the vision backbone. Specifically, we use the ViT-B/32 and ViT-B/16 of CLIP and finetune CLIP using the LoRA method [17]. The experiment is conducted on MSCOCO. The results, as shown in Table 6, indicate a significant improvement across all metrics when using CLIP as the vision backbone. This experiment demonstrates that our approach can achieve substantial gains by leveraging vision-language pretraining models.
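As an aside, LoRA finetuning of the CLIP image encoder could look roughly like the sketch below, using the Hugging Face peft library; the rank, alpha, and target modules are illustrative assumptions rather than the configuration reported in the paper.

```python
# Hedged sketch of LoRA [17] finetuning applied to the CLIP image encoder.
from peft import LoraConfig, get_peft_model
from transformers import CLIPVisionModel

vision = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16")
config = LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # CLIP attention projections
    lora_dropout=0.05,
)
vision = get_peft_model(vision, config)  # only the LoRA adapters stay trainable
vision.print_trainable_parameters()
```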
Conclusion and Limitations
Conclusion. We observed that synthetic images generated from single captions lack the ability to be described from multiple perspectives, unlike natural images. To address this issue, we propose a framework called MCDG that generates multi-context training data by combining LLMs and diffusion models for TIC. The framework has two stages: generation and training. In the generation stage, we group captions in the corpus and select diverse perspectives using LLMs. These perspectives are summarized into a single sentence, which is then used to generate multi-context images through diffusion models. This results in high-quality synthetic image-text pairs where each image can be described from various perspectives. In the training stage, we train image captioning models using the synthetic data generated in the previous stage. Extensive experiments on unpaired and zero-shot image captioning demonstrate the effectiveness of our MCDG framework.
Limitations. Although we present promising results in this paper, our approach still has a few limitations. Firstly, from the perspective of Large Language Models (LLMs), the following limitations arise: (1) We solely employ LLMs for selection and summarization, without harnessing their powerful generative capabilities to expand the textual data. (2) Due to the constraints imposed by the length of the input sequence, LLMs cannot effectively select captions from the entire corpus, and our approach of constructing a candidate set through initial grouping may not be optimal. From the standpoint of diffusion models, our framework encounters the following limitations: (1) A domain gap still exists between the synthetic images and the natural images, resulting in a lack of realism regarding specific details.
(2) Given that our primary objective in this paper is to explore the utility of synthetic data for the image captioning task, we do not utilize the weights of the vision-language pretraining model or employ data filtering methods. These methods, however, hold significant potential and warrant further investigation.
Supplementary Material
Visualization of Selection and Summarization. We present some cases of selection and summarization in Figure 4, with each row representing a specific instance. The columns in the figure are: (1) Selected Captions: the captions selected from the initial grouping by LLMs. (2) Natural Images: the real images that correspond to a specific selected caption. (3) Summarized Captions: the summarized captions of the selected captions produced by LLMs. (4) Synthetic Images: the synthetic images generated with the summarized captions through Stable Diffusion. Among the selected captions, multiple captions correspond to the same natural image, and we choose this natural image to represent the ground-truth scene. Overall, we observe that the synthetic images are very close to the corresponding natural images in terms of scene and can be described by multiple selected captions. This observation verifies our hypothesis that by combining LLMs and Stable Diffusion, we are able to obtain synthetic images that can be described from multiple perspectives, and such synthetic images are much closer to natural images.
More Qualitative Results. In the main text, we provided several examples of comparisons with CLIPCap [27] and CapDec [29]; in this section, we show more comparison results in Figure 5. The incorrect parts in the captions are highlighted in red, while the improvements made by our method are highlighted in green.
Figure 1: Examples of the natural image, the uni-context images, and the multi-context image. (1) The natural image and the caption with four sentences A, B, C, D. (2) Uni-context images generated by the single captions A, B, C, D, respectively. (3) The multi-context image generated by the summarized caption.

Figure 4: Visualization of selection and summarization with the corresponding natural images and synthetic images.
Table 1: In-domain unpaired image captioning results on MSCOCO and Flickr30K. "I." and "T." denote image data and text data, respectively. "P." denotes models that use vision-language pretraining weights. "S." denotes methods that use support-set features during inference. B@4: BLEU@4; M: METEOR; R: ROUGE; C: CIDEr.

| Method | I. | T. | P. | S. | MSCOCO B@4/M/R/C | Flickr30K B@4/M/R/C |
|---|---|---|---|---|---|---|
| *Supervised Methods* | | | | | | |
| BUTD [2] | ✓ | ✓ | | | 36.2 / 27.0 / 56.4 / 113.5 | 27.3 / 21.7 / - / 56.5 |
| CLIPCap [27] | ✓ | ✓ | ✓ | | 33.5 / 27.5 / - / 113.1 | 21.7 / 22.1 / 47.3 / 53.5 |
| Barraco et al. [5] | ✓ | ✓ | ✓ | | 36.0 / 27.8 / 56.5 / 114.9 | - / - / - / - |
| *Unsupervised Methods* | | | | | | |
| Feng et al. [14] | ✓ | ✓ | | | 18.6 / 17.9 / 43.1 / 54.9 | - / - / - / - |
| Laina et al. [21] | ✓ | ✓ | | | 19.3 / 20.2 / 45.0 / 61.8 | - / - / - / - |
| ESPER-Style [48] | ✓ | ✓ | | | 21.9 / 21.9 / - / 78.2 | - / - / - / - |
| ZeroCap [38] | | ✓ | ✓ | | 7.0 / 15.4 / 31.8 / 34.5 | 5.4 / 11.8 / 27.3 / 16.8 |
| Magic [37] | | ✓ | ✓ | | 12.9 / 17.4 / 39.9 / 49.3 | 6.4 / 13.1 / 31.6 / 20.4 |
| CLIPRe [37] | | ✓ | ✓ | | 4.9 / 11.4 / 29.0 / 13.6 | 5.2 / 11.6 / 27.6 / 10.0 |
| CapDec [29] | | ✓ | ✓ | | 26.4 / 25.1 / 51.8 / 91.8 | 17.7 / 20.0 / 43.9 / 39.1 |
| DeCap [23] | | ✓ | ✓ | ✓ | 24.7 / 25.0 / - / 91.2 | 21.2 / 21.8 / - / 56.7 |
| MCDG | | ✓ | | | 29.7 / 24.8 / 52.2 / 95.5 | 24.6 / 20.0 / 46.0 / 50.5 |
Table 2: Zero-shot image captioning on the MSCOCO Karpathy-test split.

| Methods | Dataset | P. | S. | B@4 | M | R | C |
|---|---|---|---|---|---|---|---|
| ZeroCap [38] | - | ✓ | | 2.6 | 11.5 | - | 14.6 |
| ConZIC [49] | - | ✓ | | 1.3 | 11.5 | - | 12.8 |
| CLIPRe [37] | CC3M-text | ✓ | | 4.6 | 13.3 | - | 25.6 |
| DeCap [23] | CC3M-text | ✓ | ✓ | 8.8 | 16.0 | - | 42.1 |
| DeCap [23] | SS1M | ✓ | ✓ | 8.9 | 17.5 | - | 50.6 |
| MCDG | SS1M | | | 13.6 | 17.9 | 38.3 | 53.1 |
Table 3: The effect of the components of MCDG.

| Method | B@4 | M | R | C |
|---|---|---|---|---|
| Baseline | 27.2 | 24.0 | 51.1 | 88.9 |
| +Initial Grouping | 28.5 | 24.5 | 51.7 | 94.7 |
| +Sel. w.o. Sum. | 28.5 | 24.4 | 51.6 | 93.2 |
| +Sum. w.o. Sel. | 28.5 | 24.5 | 51.7 | 93.5 |
| +Sel. & Sum. | 29.7 | 24.8 | 52.2 | 95.5 |
| MCDG w. GTG | 30.2 | 25.9 | 53.0 | 99.0 |
Table 4: The effect of finetuning on MSCOCO.

| Proportion | B@4 | M | R | C |
|---|---|---|---|---|
| ViLT-CAP | 33.7 | 27.7 | 56.1 | 113.5 |
| ViT-CAP | 36.3 | 29.3 | 58.1 | 125.2 |
| 1% | 31.9 | 26.7 | 54.4 | 104.2 |
| 10% | 33.5 | 27.5 | 55.6 | 109.7 |
| 50% | 34.2 | 28.3 | 56.3 | 113.3 |
| full | 34.2 | 28.4 | 56.3 | 114.8 |
Table 5: The impact of the number of multi-context images on MSCOCO.

| Number | B@4 | M | R | C |
|---|---|---|---|---|
| 10,000 | 28.6 | 24.7 | 51.7 | 93.8 |
| 50,000 | 28.8 | 24.6 | 52.1 | 94.8 |
| 100,000 | 29.5 | 24.7 | 52.2 | 95.3 |
| 150,000 | 29.7 | 24.8 | 52.2 | 95.5 |
Table 6: The impact of vision-language pretraining models.

| Backbone | B@4 | M | R | C |
|---|---|---|---|---|
| w.o. CLIP | 29.7 | 24.8 | 52.2 | 95.5 |
| ViT-B/32 | 30.3 | 25.5 | 53.2 | 99.3 |
| ViT-B/16 | 31.4 | 26.1 | 53.9 | 103.4 |
Figure 5: Comparisons for captions generated by CLIPCap [27], CapDec [29], and our proposed MCDG for exemplary images from the MSCOCO dataset.
References

[1] Rehab Alahmadi and James Hahn. Improve image captioning by estimating the gazing patterns from the caption. In CVPR, pages 1025-1034, 2022.
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086, 2018.
[3] Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J Fleet. Synthetic data from diffusion models improves imagenet classification. arXiv preprint arXiv:2304.08466, 2023.
[4] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65-72, 2005.
[5] Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. The unreasonable effectiveness of clip features for image captioning: an experimental analysis. In CVPR, pages 4662-4670, 2022.
[6] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In CVPR, pages 3558-3568, 2021.
[7] Ting Chen, Ruixiang Zhang, and Geoffrey Hinton. Analog bits: Generating discrete data using diffusion models with self-conditioning. In ICLR, 2023.
[8] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
[9] Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. Towards diverse and natural image descriptions via a conditional gan. In ICCV, 2017.
[10] Bo Dai and Dahua Lin. Contrastive learning for image captioning. In NeurIPS, volume 30, 2017.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171-4186, 2019.
[12] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NeurIPS, 34:8780-8794, 2021.
[13] Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lin Liang, Zhe Gan, Lijuan Wang, Yezhou Yang, and Zicheng Liu. Injecting semantic concepts into end-to-end image captioning. In CVPR, pages 18009-18019, 2022.
[14] Yang Feng, Lin Ma, Wei Liu, and Jiebo Luo. Unsupervised image captioning. In CVPR, pages 4125-4134, 2019.
[15] Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In ICLR, 2023.
[16] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 33:6840-6851, 2020.
[17] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In ICLR, 2022.
[18] Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. Scaling up vision-language pre-training for image captioning. In CVPR, pages 17980-17989, 2022.
[19] Wooyoung Kang, Jonghwan Mun, Sungjun Lee, and Byungseok Roh. Noise-aware learning from web-crawled image-text data for image captioning. arXiv preprint arXiv:2212.13563, 2022.
[20] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
[21] Iro Laina, Christian Rupprecht, and Nassir Navab. Towards unsupervised image captioning with shared multimodal embeddings. In ICCV, pages 7414-7424, 2019.
[22] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, pages 12888-12900. PMLR, 2022.
[23] Wei Li, Linchao Zhu, Longyin Wen, and Yi Yang. Decap: Decoding clip latents for zero-shot captioning via text-only training. In ICLR, 2023.
[24] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81, 2004.
[25] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In NeurIPS, 2022.
[26] Zihang Meng, David Yang, Xuefei Cao, Ashish Shah, and Ser-Nam Lim. Object-centric unsupervised image captioning. In ECCV, pages 219-235. Springer, 2022.
[27] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021.
[28] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In ICML, pages 16784-16804. PMLR, 2022.
[29] David Nukrai, Ron Mokady, and Amir Globerson. Text-only training for image captioning using noise-injected CLIP. In EMNLP, pages 4055-4063, 2022.
[30] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318, 2002.
[31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
[32] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
[33] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
[34] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022.
[35] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479-36494, 2022.
Direction relation transformer for image captioning. Zeliang Song, Xiaofei Zhou, Linhua Dong, Jianlong Tan, Li Guo, ACM MM. Zeliang Song, Xiaofei Zhou, Linhua Dong, Jianlong Tan, and Li Guo. Direction relation transformer for image captioning. In ACM MM, pages 5056-5064, 2021.
Language models can see: plugging visual controls in text generation. Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, Nigel Collier, arXiv:2205.02655arXiv preprintYixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier. Language models can see: plugging visual controls in text generation. arXiv preprint arXiv:2205.02655, 2022.
Zerocap: Zero-shot image-to-text generation for visual-semantic arithmetic. Yoad Tewel, Yoav Shalev, Idan Schwartz, Lior Wolf, CVPR. Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. Zerocap: Zero-shot image-to-text generation for visual-semantic arithmetic. In CVPR, pages 17918-17928, 2022.
Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou, ICML. PMLRHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, pages 10347-10357. PMLR, 2021.
. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Attention is all you need. NeurIPSAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 2017.
Cider: Consensus-based image description evaluation. Ramakrishna Vedantam, Lawrence Zitnick, Devi Parikh, CVPR. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, pages 4566-4575, 2015.
Phenaki: Variable length video generation from open domain textual description. Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, Dumitru Erhan, arXiv:2210.02399arXiv preprintRuben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Moham- mad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual description. arXiv preprint arXiv:2210.02399, 2022.
Show and tell: A neural image caption generator. Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, CVPR. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR, 2015.
End-to-end transformer based model for image captioning. Yiyu Wang, Jungang Xu, Yingfei Sun, AAAI. 36Yiyu Wang, Jungang Xu, and Yingfei Sun. End-to-end transformer based model for image captioning. In AAAI, volume 36, pages 2585-2594, 2022.
Simvlm: Simple visual language model pretraining with weak supervision. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao, ICLR. 2022Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. In ICLR, 2022.
Difnet: Boosting visual information flow for image captioning. Mingrui Wu, Xuying Zhang, Xiaoshuai Sun, Yiyi Zhou, Chao Chen, Jiaxin Gu, Xing Sun, Rongrong Ji, CVPR. Mingrui Wu, Xuying Zhang, Xiaoshuai Sun, Yiyi Zhou, Chao Chen, Jiaxin Gu, Xing Sun, and Rongrong Ji. Difnet: Boosting visual information flow for image captioning. In CVPR, pages 18020-18029, 2022.
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Peter Young, Alice Lai, Micah Hodosh, Julia Hockenmaier, Transactions of the Association for Computational Linguistics. 2Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78, 2014.
Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jaesung Park, Ximing Lu, Prithviraj Ammanabrolu, Rowan Zellers, Gunhee Ronan Le Bras, Kim, arXiv:2205.12630Multimodal knowledge alignment with reinforcement learning. arXiv preprintYoungjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, JaeSung Park, Ximing Lu, Prithviraj Am- manabrolu, Rowan Zellers, Ronan Le Bras, Gunhee Kim, et al. Multimodal knowledge alignment with reinforcement learning. arXiv preprint arXiv:2205.12630, 2022.
Conzic: Controllable zero-shot image captioning by sampling-based polishing. Zequn Zeng, Hao Zhang, Zhengjue Wang, Ruiying Lu, Dongsheng Wang, Bo Chen, CVPR. Zequn Zeng, Hao Zhang, Zhengjue Wang, Ruiying Lu, Dongsheng Wang, and Bo Chen. Conzic: Control- lable zero-shot image captioning by sampling-based polishing. In CVPR, 2023.
X-paste: Revisit copy-paste at scale with clip and stablediffusion. Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, ICML. Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, et al. X-paste: Revisit copy-paste at scale with clip and stablediffusion. In ICML, 2023.
| [] |
[
"Integrated Sensing and Communication Complex CNN CSI Enhancer for 6G Networks",
"Integrated Sensing and Communication Complex CNN CSI Enhancer for 6G Networks"
] | [
"Member, IEEEXu Chen ",
"Senior Member, IEEEZhiyong Feng ",
"Senior Member, IEEEJ Andrew Zhang ",
"Member, IEEEXin Yuan ",
"Fellow, IEEEPing Zhang "
] | [] | [] | In this paper, we propose a novel integrated sensing and communication (ISAC) complex convolution neural network (CNN) CSI enhancer for 6G networks, which exploits the correlation between the sensing parameters, such as angle-ofarrival (AoA) and range, and the channel state information (CSI) to significantly improve the CSI estimation accuracy and further enhance the sensing accuracy. The ISAC complex CNN CSI enhancer uses the complex-value computation layers to form the CNN to better maintain the phase information of CSI. Furthermore, we incorporate the ISAC transform modules into the CNN enhancer to transform the CSI into the sparse angledelay domain, which can be treated as images with prominent peaks and are suitable to be processed by CNN. Then, we further propose a novel biased FFT-based sensing scheme, where we actively add known phase bias terms to the original CSI to generate multiple estimation results using a simple FFT-based sensing method, and we finally calculate the average of all the debiased sensing results to obtain more accurate range estimates. The extensive simulation results show that the ISAC complex CNN CSI enhancer can converge within 30 training epochs. Its CSI estimation normalized mean square error (NMSE) is about 17 dB lower than the MMSE method, and the bit error rate (BER) of demodulation using the enhanced CSI approaches the perfect CSI. Finally, the range estimation MSE of the proposed biased FFT-based sensing method can approach the subspacebased method with much lower complexity. Index Terms-integrated sensing and communication, joint communication and sensing, 6G, CNN. arXiv:2305.17938v1 [cs.IT] 29 May 2023 * and (·)T denote Hermitian transpose, complex conjugate and transpose, respectively; M 1 ∈ C M ×N and M 2 ∈ R M ×N are M × N complex-value and real-value matrices, respectively; for two given matrices S1 and S2, [v p,q ]| (p,q)∈S1×S2 denotes the vector stacked by values v p,q satisfying p ∈ S1 and | 10.48550/arxiv.2305.17938 | [
"https://export.arxiv.org/pdf/2305.17938v1.pdf"
] | 258,960,169 | 2305.17938 | 970616e6baafd191d3b8a990a20a46580faf95a8 |
Integrated Sensing and Communication Complex CNN CSI Enhancer for 6G Networks
Xu Chen, Member, IEEE
Zhiyong Feng, Senior Member, IEEE
J. Andrew Zhang, Senior Member, IEEE
Xin Yuan, Member, IEEE
Ping Zhang, Fellow, IEEE
Integrated Sensing and Communication Complex CNN CSI Enhancer for 6G Networks
In this paper, we propose a novel integrated sensing and communication (ISAC) complex convolution neural network (CNN) CSI enhancer for 6G networks, which exploits the correlation between the sensing parameters, such as angle-of-arrival (AoA) and range, and the channel state information (CSI) to significantly improve the CSI estimation accuracy and further enhance the sensing accuracy. The ISAC complex CNN CSI enhancer uses the complex-value computation layers to form the CNN to better maintain the phase information of CSI. Furthermore, we incorporate the ISAC transform modules into the CNN enhancer to transform the CSI into the sparse angle-delay domain, which can be treated as images with prominent peaks and are suitable to be processed by CNN. Then, we further propose a novel biased FFT-based sensing scheme, where we actively add known phase bias terms to the original CSI to generate multiple estimation results using a simple FFT-based sensing method, and we finally calculate the average of all the debiased sensing results to obtain more accurate range estimates. The extensive simulation results show that the ISAC complex CNN CSI enhancer can converge within 30 training epochs. Its CSI estimation normalized mean square error (NMSE) is about 17 dB lower than the MMSE method, and the bit error rate (BER) of demodulation using the enhanced CSI approaches the perfect CSI. Finally, the range estimation MSE of the proposed biased FFT-based sensing method can approach the subspace-based method with much lower complexity. Index Terms-integrated sensing and communication, joint communication and sensing, 6G, CNN.
I. INTRODUCTION
A. Background and Motivations
Integrated sensing and communication (ISAC), also known as joint communication and sensing (JCAS), has been regarded as one of the most promising techniques for improving the spectrum efficiency of future 6G networks [1]-[3]. ISAC achieves wireless communication and sensing using the same transmit signals [4], [5], and it can estimate sensing parameters such as the angle-of-arrival (AoA), delay (or range), and Doppler frequency from the channel state information (CSI) between the base station (BS) and the user equipment (UE) [6], [7]. Therefore, CSI estimation is crucial for both the reliability of communication demodulation and the sensing accuracy, and there is a strong correlation between the sensing parameters and the CSI. The least-square (LS) method has been widely used for CSI estimation due to its low complexity [8]. However, its low CSI estimation accuracy, especially in the low signal-to-noise ratio (SNR) regime, causes severe deterioration in communication reliability. The minimum mean square error (MMSE) method was proposed to improve the CSI estimation accuracy; however, its high complexity makes it challenging to use in practical applications [9]. In this paper, by treating the CSI as image tensors, we utilize a complex-value convolutional neural network (CNN) combined with ISAC transforms to enhance the CSI estimation, improving both the communication reliability and the sensing accuracy of the ISAC system.
B. Related Works
In [10], the authors proposed a Kalman filter-based CSI enhancer for the ISAC system. It exploited the sensing parameters estimated by ISAC sensing schemes as prior information for constructing the state transfer of the channel, and a Kalman filter used the reconstructed state transfer to suppress the noise terms in the initially estimated CSI. Since the Kalman filter has a recurrent optimization structure, it can be regarded as a simple neural network (NN). Furthermore, there have been many studies using more complicated deep neural networks (DNN) to improve the CSI estimation accuracy [11]-[13]. In [11], the authors showed that end-to-end deep fully connected networks can outperform the conventional MMSE estimator under insufficient pilots. In [12], the authors proposed to use a DNN to enhance decision-directed channel estimation for time-varying channels. Moreover, the DNN channel estimator can be jointly designed with beamforming and precoding for massive multiple-input and multiple-output (MIMO) channels [13].
Recently, the potential of CNNs to exploit the correlation between the CSI elements and the time, frequency, and spatial domain features has attracted attention [14]-[16]. In [14], the authors showed that a CNN-based CSI denoiser outperformed the traditional MMSE method. In [15], the authors proposed to exploit the spatial-frequency domain CSI to obtain better CSI estimation performance than the MMSE method but with lower complexity. Considering that the sparsity of the CSI in the transformed domain is a useful feature for improving CSI estimation [17], Jiang proposed a dual CNN structure to further improve the CSI estimation performance [16].
We notice that the existing CNN channel enhancers are designed by treating the real and imaginary parts of the complex-value CSI as two isolated real-value channels to learn the complex-value operations of channel and signal transfer, which makes it hard to preserve the phase information of the complex-value CSI. However, the communication channel models and the receive signals of communication systems all involve complex-value multiplication and addition operations. Therefore, using real-value artificial neurons to learn a deterministic complex-value operation wastes computation resources to some extent, and may cause unnecessary deviation compared with using a complex-value operation DNN. On the other hand, according to existing ISAC works [17]-[19], the channels in the transformed angle-delay domain are usually much more sparse in expression, and the sensing parameters, including angles, delays, etc., are correlated with the CSI. As a classic CSI enhancement method, the transform-domain channel estimation technique [20] truncates the CSI in the transform domain around the peak values and pads zero values to the truncated transform-domain CSI to suppress the normalized mean squared error (NMSE) of CSI estimation. However, this method roughly cuts out all the tails of the CSI transform and thus disrupts the phase relations between CSI elements, which is destructive for sensing based on the CSI phase and also causes a larger deviation of CSI estimation compared with the MMSE method. By contrast, we regard the CSI transformed into the angle-delay domain as a radar image, and use a complex-value CNN, which is naturally suitable for image feature extraction, to ensure the integrity of the CSI's phase information while minimizing the CSI estimation NMSE.
C. Our Contributions
In this paper, we propose a novel ISAC complex CNN CSI enhancer to efficiently suppress the noise in the CSI estimates with relatively low complexity, which provides highly accurate CSI for both reliable communication demodulation and accurate sensing estimation. We also propose a novel biased fast Fourier transform (FFT)-based sensing method to further improve the range estimation accuracy with complexity close to the FFT-based sensing method.
Since the radio channel response is complex-valued, instead of training a CSI denoiser with a two-real-channel DNN that treats the imaginary and real parts of the complex CSI as two isolated real values, we use complex-value computation layers to construct a complex CNN CSI enhancer, which well maintains the phase-shift pattern of the complex multiplication operations in the channel response. Moreover, we add FFT-based ISAC transform modules into the CNN structure to transform the CSI from the original domain to the sparse delay-angle domain, which can be better processed by the CNN, since the CSIs in the delay-angle domain can be regarded as radar images [21], and CNNs are naturally powerful at extracting image information [16]. Since the parameter sensing is based on the CSI, we set minimizing the NMSE of CSI estimation as the objective for training the ISAC complex CNN, which unifies both the communication and sensing optimization. The simulation results show that the training loss of the complex CNN CSI enhancer descends quickly, and the NMSE loss of the CSI enhancer is low enough to make the bit error rate (BER) of communication demodulation using the enhanced CSI approach that with perfect CSI estimation.
The main contributions of this paper are summarized as follows.
1. We propose an ISAC complex CNN CSI enhancer that uses complex-value computation layers to maintain the phase shift of complex-valued signal transfer, instead of treating the imaginary and real parts of the CSI as two isolated real values. Furthermore, we integrate the FFT-based ISAC transform modules into the complex CNN structure, which can transform the CSI data from the original domain into the sparse delay-angle domain. The CSI in the delay-angle domain is analogous to a radar image, which can be well processed by the CNN to extract useful feature images. The loss descending speed and generalization performance of the CNN benefit from these settings.
2. We propose to use the CSI estimates enhanced by the ISAC complex CNN CSI enhancer for both communication demodulation and sensing. The BER of communication demodulation using the enhanced CSI approaches that with the perfect CSI, and the sensing estimation MSEs are significantly lower than those based on the initially estimated CSI.
3. We propose a biased FFT-based sensing method for range estimation, which first actively adds known bias phase-shift terms to the CSI estimates to generate multiple biased sensing results obtained by a simple FFT-based sensing method, and finally takes the mean value of all the debiased sensing results as the ultimate sensing result. We prove that the error of the average of all the debiased sensing results is always lower than that of the FFT-based sensing result obtained from the original CSI estimate.
The remaining parts of this paper are organized as follows. In Section II, we describe the system model and basic methods of the ISAC system. Section III introduces the ISAC complex CNN CSI enhancer in detail. Section IV introduces the AoA and range estimation methods based on the enhanced CSI estimates. In Section V, the simulation results are presented. Section VI concludes this paper.
Notations: Bold uppercase letters denote matrices (e.g., M); bold lowercase letters denote column vectors (e.g., v); scalars are denoted by normal font (e.g., γ); the entries of vectors or matrices are referred to with square brackets; (·)^H, (·)^* and (·)^T denote Hermitian transpose, complex conjugate, and transpose, respectively; M_1 ∈ C^{M×N} and M_2 ∈ R^{M×N} are M × N complex-value and real-value matrices, respectively; for two given sets S_1 and S_2, [v_{p,q}]|_{(p,q)∈S_1×S_2} denotes the vector stacked by values v_{p,q} satisfying p ∈ S_1 and q ∈ S_2; and v ∼ CN(m, σ²) means v follows a circular symmetric complex Gaussian (CSCG) distribution with mean m and variance σ².
II. SYSTEM MODEL
This section presents the UL ISAC system setup, MIMO channel model, brief introduction of conventional channel estimators, and FFT-based ISAC transform to provide fundamentals for demonstrating the complex CNN CSI enhancer.
A. ISAC System Setup
Due to the reciprocity of the time-division-duplex (TDD) system, we consider the UL channel estimation for the ISAC system, where the BS is equipped with a uniform linear array (ULA) and the user equipment (UE) has a single antenna, as shown in Fig. 1. In the UL preamble (ULP) period, the BS uses the received training sequences in the preamble signals transmitted by the UE for CSI estimation, and the estimated CSI is further used to estimate sensing parameters, such as the AoA and range of the UE. In the UL data (ULD) period, the BS demodulates the UL data signals of the UE using the estimated CSI. The orthogonal frequency division multiplexing (OFDM) signal is adopted as the transmit signal. For simplicity of presentation, we assume that there is one ULP in each packet for CSI estimation, and the ULPs are transmitted at an equal interval, denoted by T_s^p. Moreover, synchronization between the BS and UEs is achieved via a global clock, such as a GPS disciplined oscillator (GPSDO). The clock between them is assumed to be locked, as discussed in [4]. Thus, the timing and carrier frequency residual offsets are neglected in the signal model.
The key parameters of the OFDM signal are denoted as follows. P_t^U is the transmit power; N_c is the number of subcarriers occupied by the UE; M_s is the number of OFDM packets used for each sensing parameter estimation; d_{n,m} is the transmit OFDM baseband symbol at the nth subcarrier of the mth OFDM symbol; f_c is the carrier frequency; ∆f is the subcarrier interval; T_s is the time duration of each OFDM symbol; and T_s^p = P_s T_s, where P_s is the number of OFDM symbols in each packet. The sensing parameter estimates are updated every M_s packets.
B. Channel Model
This subsection presents the ray-tracing channel model for the MIMO-OFDM system. The uniform interval between neighboring antenna elements is denoted by d_a. The size of the ULA is P × 1. The AoA for receiving, or the angle-of-departure (AoD) for transmitting, the kth far-field signal is θ_k. The steering vector collecting the phase differences between the P antenna elements and the reference antenna element is [22]

a(θ_k) = [1, e^{j (2π/λ) d_a sin θ_k}, ..., e^{j (2π/λ)(P−1) d_a sin θ_k}]^T ∈ C^{P×1},  (1)

where λ = c/f_c is the wavelength of the carrier, and c is the speed of light.
Further, the ray-tracing channel model for the nth subcarrier of the mth packet can be expressed as
h_{n,m} = \sum_{l=0}^{L−1} b_{C,l} e^{j 2π m T_s^p f_{d,l}} e^{−j 2π n ∆f τ_l} a(θ_l) ∈ C^{P×1},  (2)
where L is the number of propagation paths; l = 0 corresponds to the channel response of the line-of-sight (LoS) path, and l ∈ {1, ..., L−1} to the paths involving the lth scatterer; a(θ_l) is the steering vector for UL receiving and transmission, and θ_l is the corresponding AoA; f_{d,0} = v_0/λ and τ_0 = r_{0,1}/c are the Doppler shift and time delay between UE and BS of the LoS path, respectively, with v_0 and r_{0,1} being the corresponding radial relative velocity and distance; f_{d,l} = f_{d,l,1} + f_{d,l,2} and τ_l = τ_{l,1} + τ_{l,2} are the aggregate Doppler shift and time delay of the lth non-line-of-sight (NLoS) path, respectively; f_{d,l,1} = v_{l,1}/λ and f_{d,l,2} = v_{l,2}/λ are the Doppler shifts between UE and the lth scatterer, and between the lth scatterer and BS, respectively, with v_{l,1} and v_{l,2} being the corresponding radial velocities; τ_{l,1} = r_{l,1}/c and τ_{l,2} = r_{l,2}/c are the time delays between UE and the lth scatterer, and between BS and the lth scatterer, respectively, with r_{l,1} and r_{l,2} being the corresponding distances. Moreover, b_{C,0} = λ²/(4π r_{0,1})² and b_{C,l} = λ²/((4π)³ r_{l,1}² r_{l,2}²) β_{C,l} are the attenuations of the LoS and NLoS paths, respectively; β_{C,l} is the reflecting factor of the lth scatterer, following CN(0, σ²_{Cβ,l}) [9].
C. Channel Estimator
The training sequences of the nth subcarrier of the mth packet transmitted by UE are received by BS, and the receive signal is expressed as
Y_{n,m} = √(P_t) h_{n,m} ⊗ (s^u_{n,m})^H + Z_{n,m},  (3)
where P_t is the transmit power, ⊗ is the Kronecker product, Z_{n,m} is the Gaussian noise matrix with each element following i.i.d. CN(0, σ²_n), and s^u_{n,m} ∈ C^{U×1} is a codeword of the orthogonal codebook, which satisfies
(s^{u_1}_{n,m})^H s^{u_2}_{n,m} = U if u_1 = u_2, and 0 if u_1 ≠ u_2.  (4)
LS is widely used due to its low complexity; the LS estimate of the CSI is given by

ĥ^LS_{n,m} = Y_{n,m} s^u_{n,m} / (√(P_t) U) = h_{n,m} + z′_{n,m},  (5)

where z′_{n,m} is the equivalent estimation noise. The LS estimates of all subcarriers are stacked into Ĥ^LS_m ∈ C^{P×N_c}, where [Ĥ^LS_m]_{:,n} = ĥ^LS_{n,m}. The corresponding true value of Ĥ^LS_m is H_m ∈ C^{P×N_c}, where [H_m]_{:,n} = h_{n,m}.

D. FFT-based ISAC Transform

Denote F_N to be the N-point discrete Fourier transform (DFT) matrix; we have [F_N]_{n_1,n_2} = e^{−j (2π/N) n_1 n_2}, where n_1, n_2 = 0, 1, ..., N−1. Then, the N-point inverse discrete Fourier transform (IDFT) matrix is F^H_N. The FFT-based ISAC transform of Ĥ^LS_m can be expressed as [23]

H̃^LS_m = T(Ĥ^LS_m) = (F^H_{N_c} (F_P Ĥ^LS_m)^T)^T,  (6)

where F_P and F_{N_c} are the P-point and N_c-point DFT matrices, respectively. Then, we explain the meaning of the FFT-based ISAC transform. The aforementioned CSI estimates are expressed in the frequency and antenna domain. Based on (2), we can obtain
[H_m]_{p,n} = \sum_{l=0}^{L−1} α_{m,l} e^{−j 2π n ∆f τ_l} e^{j (2π/λ)(p−1) d_a sin θ_l},  (7)
where α_{m,l} = b_{C,l} e^{j 2π m T_s^p f_{d,l}}. The kth element of the FFT of the nth column of Ĥ^LS_m is
[h̃^LS_{n,m}]_k = [F_P ĥ^LS_{n,m}]_k = \sum_{l=0}^{L−1} \sum_{p=1}^{P} α_{n,m,l} e^{j (2π/λ)(p−1) d_a sin θ_l} e^{−j (2π/P)(p−1)k} + z̃_k,  (8)
where α_{n,m,l} = α_{m,l} e^{−j 2π n ∆f τ_l}, and z̃_k is the transformed Gaussian noise. It is easily obtained that the modulus of [h̃^LS_{n,m}]_k is largest when k = P d_a sin θ_l / λ in the high SNR regime. Since the index can only be an integer, k_{θ_l} = ⌊P d_a sin θ_l / λ⌋ should be the maximal point, where ⌊·⌋ is the rounding-off operator. Similarly, by applying IFFT to each row of Ĥ^LS_m, the maximal point of each row of the IFFT of Ĥ^LS_m is k_{τ_l} = ⌊N_c ∆f τ_l⌋. Therefore, the indices of the FFT-based ISAC transform are changed into the angle-delay domain. Since the FFT and IFFT are linear transforms, the inverse of the FFT-based ISAC transform is
Ĥ^LS_m = T^{−1}(H̃^LS_m) = F^H_P (F_{N_c} (H̃^LS_m)^T)^T.  (9)
Since the channel expression is usually more sparse in the transformed domain, i.e., the angle-delay domain, the FFT-based ISAC transform is very useful for feature extraction in the design of the ISAC complex CNN CSI enhancer.
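To make the transform concrete, below is a minimal NumPy sketch of T(·) in (6) and T^{−1}(·) in (9); this is an illustrative implementation rather than the authors' code, and it relies on numpy's convention that ifft carries the 1/N factor, so the pair is exactly invertible even though F^H_N above is unnormalized:

    import numpy as np

    def isac_transform(H):
        # T(.) of Eq. (6): DFT along the antenna dimension (rows) and IDFT along
        # the subcarrier dimension (columns), mapping H (P x Nc) into the
        # angle-delay domain.
        return np.fft.ifft(np.fft.fft(H, axis=0), axis=1)

    def isac_inverse_transform(Ht):
        # T^{-1}(.) of Eq. (9): undoes isac_transform exactly under numpy's
        # fft/ifft normalization.
        return np.fft.fft(np.fft.ifft(Ht, axis=0), axis=1)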
Fig. 3: The illustration of complex linear and convolution layers.
III. ISAC COMPLEX CNN CSI ENHANCER

The signal processing diagram of the entire ISAC system is shown in Fig. 2. The CSI initially estimated by the LS estimator is first refined by the ISAC complex CNN CSI enhancer. Then, the refined CSI estimates are used for demodulating communication data and for sensing processing simultaneously. Since the sensing parameters are hidden inside the CSI, if we can reduce the error of the refined CSI to an extremely low level, then the estimation accuracy of AoA and range can be greatly improved. Therefore, our optimization objective for the ISAC complex CNN CSI enhancer is to minimize the error of the refined CSI, which unifies the communication and sensing objectives.
In this section, we focus on the design of the ISAC complex CNN CSI enhancer, and the communication demodulation and sensing processing will be presented in the next section. First, we introduce the CNN structure and training processes in detail and then analyze the complexity of the ISAC complex CNN CSI enhancer.
A. Complex-valued Computation Layers for CNN
Three kinds of layers are used in the ISAC complex CNN: complex linear layer, complex activation layer, and complex convolution layer. The prominent feature of the complex-value neural network is that the weights and inputs are both complex values, and the key to achieving the complex neural network is to apply the complex multiplication operation to all the network layers.
1) Complex Linear Layer: The weights and bias parameters of the complex linear layer are W = W r + jW i and b = b r +jb i , respectively. With the input formed as x = x r +jx i , we can express the output of the complex linear layer as:
y_L = L(x; W, b) = Wx + b.  (10)
Based on the complex multiplication operation, we obtain
y_L = W_r x_r − W_i x_i + b_r + j(W_r x_i + W_i x_r + b_i).  (11)
We can see that the complex-layer output can be deterministically calculated by real-value linear layers, i.e.,

y_L = L(x_r; W_r, 0) + L(x_i; −W_i, 0) + b_r + j(L(x_i; W_r, 0) + L(x_r; W_i, 0) + b_i).  (12)
Based on (12), we can construct the complex linear layer operation with two real-value linear layers.
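As an illustration, the following is a minimal PyTorch sketch of such a complex linear layer, directly implementing (11) with two pairs of real-value weights and biases; the module name and the initialization scale are our own assumptions, not part of the paper:

    import torch
    import torch.nn as nn

    class ComplexLinear(nn.Module):
        # Complex linear layer of Eqs. (11)-(12), realized with real-value parameters.
        def __init__(self, in_features, out_features):
            super().__init__()
            self.W_r = nn.Parameter(0.01 * torch.randn(out_features, in_features))
            self.W_i = nn.Parameter(0.01 * torch.randn(out_features, in_features))
            self.b_r = nn.Parameter(torch.zeros(out_features))
            self.b_i = nn.Parameter(torch.zeros(out_features))

        def forward(self, x_r, x_i):
            # Eq. (11): y = (W_r x_r - W_i x_i + b_r) + j (W_r x_i + W_i x_r + b_i)
            y_r = x_r @ self.W_r.t() - x_i @ self.W_i.t() + self.b_r
            y_i = x_i @ self.W_r.t() + x_r @ self.W_i.t() + self.b_i
            return y_r, y_i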
2) Complex Convolution Layer: Denote K = K r + jK i to be the convolution kernel. Since the convolution operation is also a linear transform, the output of complex convolution can be given by
f_C(x; K) = K * x = K_r * x_r − K_i * x_i + j(K_r * x_i + K_i * x_r),  (13)
where * is the real-value convolution operator. Based on (13), we can use two real-value convolution layers to deterministically obtain the complex convolution results.
The aforementioned complex-valued linear transform layers are shown in Fig. 3; the computation resembles a butterfly structure.
3) Complex Activation Layer: The activation function is used to add non-linearity into the network forward computation. According to [24], we apply the leaky Relu function to both the real and imaginary parts of the input to construct the activation layer, which can be expressed as

CLRelu(y) = LeakyRelu(y_r) + j LeakyRelu(y_i),  (14)

where LeakyRelu(y) = y for y ≥ 0 and LeakyRelu(y) = ay for y < 0 is the real-value leaky Relu function, with a being a small value.
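A minimal PyTorch sketch of the complex convolution of (13) and the complex leaky Relu of (14) follows; the kernel size, padding, and slope a = 0.01 are illustrative assumptions:

    import torch.nn as nn
    import torch.nn.functional as F

    class ComplexConv2d(nn.Module):
        # Complex convolution of Eq. (13):
        # K*x = (K_r*x_r - K_i*x_i) + j(K_r*x_i + K_i*x_r).
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
            self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)

        def forward(self, x_r, x_i):
            y_r = self.conv_r(x_r) - self.conv_i(x_i)
            y_i = self.conv_r(x_i) + self.conv_i(x_r)
            return y_r, y_i

    def c_leaky_relu(x_r, x_i, a=0.01):
        # Complex leaky Relu of Eq. (14), applied to real and imaginary parts separately.
        return F.leaky_relu(x_r, negative_slope=a), F.leaky_relu(x_i, negative_slope=a)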
B. Structure of ISAC Complex CNN
The diagram of the ISAC complex CNN is shown in Fig. 4. We use two Resnet-like CNN structures to construct the backbone of the complex CNN. The general formulation of the output of a CNN is

Ĥ_out = f_CNN(Ĥ_in; Θ),  (15)
where Θ is the set of all the weight and bias parameters of the CNN; Ĥ_in and Ĥ_out are the input and output tensors, respectively. In this paper, we use 3 convolution layers for each CNN block, and each convolution layer is the concatenation of complex convolution and complex leaky Relu functions, i.e.,

f^1_CNN(Ĥ_in; Θ_1) = CLRelu(f_C(Ĥ_in; Θ_1)).  (16)
Moreover, the shortcut path is a single-layer complex convolution layer, of which the output is expressed as
H_Res = f^1_CNN(Ĥ_in; Θ_Res).  (17)
CNN 1 is used to pre-process the input by extracting the superimposed main features of the initial CSI estimates. The output of CNN 1 is expressed aŝ where Θ 1 and Θ ′ 1 are the weights parameters of CNN 1 and its shortcut, respectively. Moreover,H LS m is the normalization of the initial CSI estimate,Ĥ LS m , and the normalization method will be presented in the next subsection.
Ĥ_F = f_CNN(H̄^LS_m; Θ_1) + f^1_CNN(H̄^LS_m; Θ′_1),  (18)

where Θ_1 and Θ′_1 are the parameter sets of CNN 1 and its shortcut, respectively, and H̄^LS_m is the normalization of the initial CSI estimate Ĥ^LS_m; the normalization method will be presented in the next subsection.

Fig. 4: The diagram of the ISAC complex CNN: CNN 1 and CNN 2 with shortcut convolutions, connected by the ISAC transform and inverse ISAC transform modules.
The ISAC transform module transforms the feature map output of CNN 1 into the sparse angle-delay domain so that CNN 2 can learn the sparse features in the transformed domain. Finally, the inverse ISAC transform module restores the CSI estimates to their original domain. The output of CNN 2 after the inverse ISAC transform is expressed as

Ĥ^CNN_m = T^{−1}( f_CNN(T(Ĥ_F); Θ_2) + f^1_CNN(T(Ĥ_F); Θ′_2) ),  (19)
where Θ 2 and Θ ′ 2 are the parameter sets of CNN 2 and its shortcut, respectively, and T (·) and T −1 (·) are FFT-based ISAC transform and inverse transform, respectively.
The dimension of the tensors is denoted as [C × P × N_c],
where C is the channel number of input tensor, P is the number of antennas, N c is the number of subcarriers. The output of each complex CNN has the same channel number as the input, while the intermediate tensor output can have different channel numbers. The channel numbers of the intermediate output of CNN 1 and 2 are C 1 and C 2 , respectively. In this paper, the input and output CSI tensors are all with 1 channel. The dimension of the convolution kernel is 3×3 for both CNN 1 and 2.
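Putting the pieces together, the following PyTorch sketch assembles the enhancer of (18)-(19), reusing the ComplexConv2d and c_leaky_relu sketches above; the block depth and channel numbers follow the text, while the tensor layout and initialization are our own assumptions:

    import torch
    import torch.nn as nn

    class ComplexConvBlock(nn.Module):
        # One CNN block: three complex conv layers with complex leaky Relu,
        # Eq. (16), plus a single-layer complex-conv shortcut, Eq. (17).
        def __init__(self, mid_ch):
            super().__init__()
            self.layers = nn.ModuleList([ComplexConv2d(1, mid_ch),
                                         ComplexConv2d(mid_ch, mid_ch),
                                         ComplexConv2d(mid_ch, 1)])
            self.shortcut = ComplexConv2d(1, 1)

        def forward(self, x_r, x_i):
            s_r, s_i = c_leaky_relu(*self.shortcut(x_r, x_i))   # shortcut path
            for layer in self.layers:                           # main path
                x_r, x_i = c_leaky_relu(*layer(x_r, x_i))
            return x_r + s_r, x_i + s_i

    class ISACComplexCNNEnhancer(nn.Module):
        # CNN 1 -> ISAC transform -> CNN 2 -> inverse ISAC transform, Eqs. (18)-(19).
        def __init__(self, c1=4, c2=4):
            super().__init__()
            self.cnn1 = ComplexConvBlock(c1)
            self.cnn2 = ComplexConvBlock(c2)

        def forward(self, H):                        # H: complex tensor [B, 1, P, Nc]
            x = torch.complex(*self.cnn1(H.real, H.imag))
            x = torch.fft.ifft(torch.fft.fft(x, dim=-2), dim=-1)      # T(.)
            x = torch.complex(*self.cnn2(x.real, x.imag))
            return torch.fft.fft(torch.fft.ifft(x, dim=-2), dim=-1)   # T^{-1}(.)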
C. Training Methods of ISAC complex CNN Channel Enhancer
Then, we introduce the training method, including the input normalization and loss function for backpropagation (BP).
1) The Normalization of Input: Since the raw CSI data usually has a small amplitude, we need to normalize the initial CSI estimate, Ĥ^LS_m, to avoid training stagnation due to vanishing gradients in the training process. We aim to normalize the power of the useful signal, i.e., the true CSI, H_m.
We can exploit the eigenvalues of the autocorrelation of Ĥ^LS_m to estimate the power of the useful signal and the noise variance. The autocorrelation of Ĥ^LS_m is denoted by R^LS_m = (1/N_c) Ĥ^LS_m (Ĥ^LS_m)^H.
Denote v_Σ to be the vector composed of the eigenvalues of R^LS_m in descending order. According to the feature of eigenvalue decomposition [18], we have

[v_Σ]_i = ρ_i + σ²_N for i ≤ L, and [v_Σ]_i = σ²_N for i > L,  (20)
where L is the number of strong paths, ρ_i is the power gain of the ith path, and σ²_N = σ²_n/P_t is the variance of the CSI estimation error. The estimate of L, denoted by L̂, can be obtained following the procedures in Appendix A. After obtaining L̂, the estimate of σ²_N can be expressed as

σ̂²_N = ( \sum_{i=L̂+1}^{P} [v_Σ]_i ) / (P − L̂).  (21)
Then, the power of the useful signal in Ĥ^LS_m can be estimated as

ρ̂²_h = \sum_{i=1}^{L̂} ( [v_Σ]_i − σ̂²_N ).  (22)
Finally, the normalization of Ĥ^LS_m is given by

H̄^LS_m = Ĥ^LS_m / √(ρ̂²_h) = H_m / √(ρ̂²_h) + Z̄′_m,  (23)

where Z̄′_m is the normalized noise matrix. Note that, due to the normalization procedure, when generating the training data, the true target should be generated as H_m / √(ρ̂²_h).

2) Loss Function: Since the input and output of the CNN are both complex values, we define the mean square error function for complex values as

J(H_m, Ĥ^CNN_m) = (1 / (P N_c)) Tr{ (H_m/√(ρ̂²_h) − Ĥ^CNN_m)(H_m/√(ρ̂²_h) − Ĥ^CNN_m)^H },  (24)
where Tr(·) is the operation to derive the trace of a matrix. Using BP to find the optimal parameters by minimizing J(H_m, Ĥ^CNN_m), we can finally obtain the ISAC complex CNN CSI enhancer, which is expressed as

{Θ̂_1, Θ̂_2, Θ̂′_1, Θ̂′_2} = arg min_{Θ_1, Θ_2, Θ′_1, Θ′_2} J(H_m, Ĥ^CNN_m).  (25)
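For illustration, a minimal NumPy sketch of the normalization of (21)-(23) and the complex MSE of (24) is given below; it assumes L̂ is obtained as in Appendix A, and in actual training the loss would be computed with differentiable tensor operations:

    import numpy as np

    def normalize_csi(H_ls, L_hat):
        # Eqs. (21)-(23): estimate the noise power and useful-signal power from
        # the eigenvalues of the autocorrelation, then scale by 1/sqrt(rho2_h).
        P, Nc = H_ls.shape
        eig = np.sort(np.linalg.eigvalsh(H_ls @ H_ls.conj().T / Nc))[::-1]
        sigma2_N = eig[L_hat:].mean()              # Eq. (21)
        rho2_h = np.sum(eig[:L_hat] - sigma2_N)    # Eq. (22)
        return H_ls / np.sqrt(rho2_h)

    def complex_mse(H_true_norm, H_cnn):
        # Eq. (24): trace of the error autocorrelation, averaged over P*Nc entries.
        E = H_true_norm - H_cnn
        return np.real(np.trace(E @ E.conj().T)) / E.size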
D. Complexity Analysis and Comparison
In this subsection, we analyze the complexity of the above ISAC complex CNN CSI enhancer and compare it with the LS and linear minimum mean square error (LMMSE) CSI estimation methods.
1) Complexity of the ISAC complex CNN CSI enhancer: The computation complexity of the ISAC complex CNN CSI enhancer mainly comes from the 2 CNNs and the 2 FFT-based ISAC transforms. The aggregate complexity of the 2 CNNs is O{3² × P N_c × [2(C_1 + C_2) + C_1² + C_2²]}, and the aggregate complexity of the 2 FFT-based transforms is O{2 × P N_c log(P N_c)}. Therefore, the comprehensive complexity of the ISAC complex CNN CSI enhancer is O{P N_c [9(C_1² + C_2²) + 2 log_2(P N_c)]}.

2) Complexity of the LS method: The complexity of the LS method comes from the complex-value division, which is O(P N_c).
3) Complexity of the LMMSE method: The CSI estimate of the LMMSE method at the nth subcarrier of the mth packet can be expressed as [8]

ĥ^MMSE_{n,m} = R_hh (R_hh + σ²_N I)^{−1} ĥ^LS_{n,m},  (26)
where R_hh = E[h_{n,m} h^H_{n,m}] is the autocorrelation matrix of h_{n,m}. The LMMSE method adds matrix inverse and multiplication operations on top of the LS method. Therefore, the complexity of the LMMSE method is O(2(P² + P³)N_c).
Since C_1 and C_2 are usually smaller than 3, the complexity of the ISAC complex CNN CSI enhancer is actually between those of the LS and LMMSE methods.
IV. ISAC SIGNAL PROCESSING
In this section, we will introduce the ISAC signal processing methods. We first estimate the AoA based on the CSI estimation enhanced by the ISAC complex CNN,Ĥ CN N m , and then the estimated AoA serves as the prior information for a baseband spatial filter (SF) based on beamforming (BF). We use the SF to filter the CSI, and then the filtered CSI can be used for communication demodulation and sensing the range of targets in different directions. Finally, we propose a novel biased FFT-based sensing method for range sensing.
A. Sensing Processing
The enhanced CSI estimate, Ĥ^CNN_m, can be expressed as

Ĥ^CNN_m = H_m / √(ρ̂²_h) + Z̄′_m,  (27)

where Z̄′_m is the noise matrix suppressed by the ISAC CNN CSI enhancer, and H_m is the true CSI that contains the AoAs and ranges of the UE and scatterers. We first use the multiple signal classification (MUSIC) method to estimate the AoA based on Ĥ^CNN_m.

1) AoA Estimation: By applying eigenvalue decomposition to the autocorrelation, we obtain
[U_a, Σ_a] = eig( (1/N_c) Ĥ^CNN_m (Ĥ^CNN_m)^H ),  (28)
where Σ_a is the eigenvalue matrix with diagonal elements being the eigenvalues in descending order, and U_a is the eigenmatrix with each column being the eigenvector corresponding to the eigenvalue. Since the number of strong paths has been estimated as L̂ in Subsection III-C1, the noise subspace can be obtained as U_N = [U_a]_{:,L̂:P}. According to [18], the angle spectrum function is given by
f_a(θ; U_N) = a^H(θ) U_N (U_N)^H a(θ).  (29)
The minimal points of f a (θ; U N ) are the AoA estimates.
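As an illustration, the following NumPy sketch evaluates the MUSIC pseudo-spectrum of (28)-(29) on a dense angle grid and returns the deepest local minima; the grid search stands in for the two-step Newton descent used in the paper, and d_a is expressed in wavelengths (an assumption):

    import numpy as np

    def music_aoa(H_cnn, L_hat, d_a=0.5):
        # MUSIC AoA estimation based on the enhanced CSI, Eqs. (28)-(29).
        P, Nc = H_cnn.shape
        _, U = np.linalg.eigh(H_cnn @ H_cnn.conj().T / Nc)  # eigenvalues ascending
        U_N = U[:, :P - L_hat]                              # noise subspace
        theta = np.deg2rad(np.linspace(-90, 90, 3601))
        A = np.exp(1j * 2 * np.pi * d_a * np.outer(np.arange(P), np.sin(theta)))
        f_a = np.sum(np.abs(U_N.conj().T @ A) ** 2, axis=0) # Eq. (29) per grid angle
        # Deepest L_hat local minima of the spectrum function.
        is_min = (f_a[1:-1] < f_a[:-2]) & (f_a[1:-1] < f_a[2:])
        cand = np.nonzero(is_min)[0] + 1
        best = cand[np.argsort(f_a[cand])][:L_hat]
        return np.rad2deg(theta[best])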
Using the two-step Newton descending method in [18], we can obtain the minimal points of f_a(θ; U_N). The set of AoA estimates is denoted by Θ_A = {θ̂_l}, where the points in Θ_A are sorted by the ascending order of f_a(θ; U_N).

2) Spatial Filtering: By stacking the steering vectors of all the estimated AoAs in Θ_A, we obtain the steering matrix as

A(Θ_A) = [a(θ̂_0), a(θ̂_1), ..., a(θ̂_{L̂−1})],  (30)
where a(θ̂_l) is given in (1). Using the low-complexity LS beamforming method, we can obtain the receive beamforming vector for the signal in θ̂_l as

w_{R,l} = [A^H(Θ_A)]^† i_l ∈ C^{P×1},  (31)
where [·]^† is the pseudo-inverse operation of a matrix, and i_l is the one-hot vector with only the lth element being 1. Using w_{R,l} to spatially filter Ĥ^CNN_m, we obtain the filtered CSI as

ĥ_{R,l} = (w_{R,l})^H Ĥ^CNN_m ∈ C^{1×N_c}.  (32)
Combining (27) and (32), the nth element of ĥ_{R,l} can be expressed as

[ĥ_{R,l}]_n = ᾱ_{n,m,l} e^{−j 2π n ∆f τ_l} + z̄_{n,m,l},  (33)

where ᾱ_{n,m,l} = b_{C,l} e^{j 2π m T_s^p f_{d,l}} w^H_{R,l} a(θ_l) is the useful power gain, and z̄_{n,m,l} is the noise-plus-interference term. Since w_{R,l} is the directional receive BF vector, E{∥z̄_{n,m,l}∥²_2} should be much smaller than E{∥ᾱ_{n,m,l}∥²_2} in the high SNR regime.
3) Biased FFT-based Range Sensing: Based on (33), we can see that ĥ_{R,l} contains the delay of the lth path, and we can therefore estimate the range based on ĥ_{R,l}. As aforementioned, the FFT-based ISAC transform in Section II-D can be used to transform ĥ_{R,l} into the delay domain. However, it can only give the discrete spectrum with interval ∆r = c/(N_c ∆f), and the range estimation accuracy is thus restricted by ∆r, especially when the true range is at the midpoint of an interval.
To resolve this problem, we propose a novel biased FFT-based sensing method by actively introducing multiple sets of phase offsets to the CSI and averaging the biased sensing results estimated by a simple FFT-based sensing method to obtain a more accurate estimate of the sensing parameter.
Then, we present the derivation of the biased FFT-based sensing method. We use r t to denote the true value of range, and r 1 < r t < r 2 , r 2 = r 1 + ∆r. The discrete spectrum obtained by applying FFT to the raw sequence is the uniform sampling of the DTFT spectrum, and the amplitude spectrum of DTFT should be symmetric with respect to the line r = r t in the high SNR regime [25]. Therefore, the range estimate of the FFT-based sensing method should be the grid point that is closer to r t , which can be expressed as
r_est = arg min_{r∈{r_1,r_2}} |r_t − r|,  (34)

and the error should be

r_e = min_{r∈{r_1,r_2}} |r_t − r|.  (35)
According to (33), by applying e^{−j 2π n ∆f k r_δ / c} to the nth element of ĥ_{R,l}, we obtain ĥ_{R,l,k} = ĥ_{R,l} ⊙ h_k, where [h_k]_n = e^{−j 2π n ∆f k r_δ / c}, and ⊙ is the Hadamard product operator. The nth element of ĥ_{R,l,k} is expressed as

[ĥ_{R,l,k}]_n = ᾱ_{n,m,l} e^{−j 2π n ∆f (τ_l + k r_δ / c)} + z̄_{n,m,l} e^{−j 2π n ∆f k r_δ / c}.  (36)

We can see that the true value of the range contained in ĥ_{R,l,k} is biased by k r_δ. Therefore, the range estimates based on ĥ_{R,l,k} should be debiased by k r_δ, i.e., r̂_{est,k} = r_est − k r_δ.
Split the interval [r_t − ∆r/2, r_t + ∆r/2] into N_r even pieces with grid spacing r_δ = ∆r/N_r. Using (36) to add biases to the true value r_t, we obtain a set of biased ranges, denoted by r_k = {r_t + k r_δ}|_{k∈[−N_r/2, N_r/2]}, where k is an integer. Assume r_est = r_1 for k ∈ [−N_r/2, k̄], and r_est = r_2 for k ∈ [k̄ + 1, N_r/2]. The average of all the debiased range estimates can be expressed as

r̄_est = [ \sum_{k=−N_r/2}^{k̄} (r_1 − k r_δ) + \sum_{k=k̄+1}^{N_r/2} (r_2 − k r_δ) ] / (N_r + 1)
      = [ \sum_{k=−N_r/2}^{k̄} (r_1 − k r_δ) + \sum_{k=k̄+1}^{N_r/2} (r_1 + N_r r_δ − k r_δ) ] / (N_r + 1)
      = r_1 + r_δ N_r (N_r/2 − k̄) / (N_r + 1).  (37)
We prove in Appendix B that the error of the biased FFT-based sensing, i.e., |r̄_est − r_t|, is always smaller than |r_est − r_t|. Finally, based on the above illustration, we propose the procedures of biased FFT-based sensing in Algorithm 1. The biased FFT-based sensing method can improve the sensing accuracy with a mechanism similar to the diversity gain, which will be shown in Section V-C.
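A minimal NumPy sketch of our reading of Algorithm 1 is given below; the delay-domain peak search and the default N_r are illustrative assumptions:

    import numpy as np

    def biased_fft_range(h_R, delta_f, Nr=100, c=3e8):
        # Biased FFT-based range sensing: form Nr+1 biased copies of the filtered
        # CSI, take the FFT-grid peak range of each, debias, and average, Eq. (37).
        Nc = h_R.size
        delta_r = c / (Nc * delta_f)     # FFT range-grid interval
        r_delta = delta_r / Nr           # bias spacing
        n = np.arange(Nc)
        ests = []
        for k in range(-Nr // 2, Nr // 2 + 1):
            h_k = h_R * np.exp(-1j * 2 * np.pi * n * delta_f * k * r_delta / c)
            peak = np.argmax(np.abs(np.fft.ifft(h_k)))   # delay-domain peak index
            ests.append(peak * delta_r - k * r_delta)    # debiased range estimate
        return np.mean(ests)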
B. Complexity of Biased FFT-based Sensing Method
The complexity of the proposed biased FFT-based sensing method mainly comes from the L rounds of FFT-based sensing, one for each range bias situation. Therefore, the complexity of the biased FFT-based sensing method is L times that of the FFT-based sensing method, which is O{L N_c log(N_c)}. This is still much lower than that of subspace-based sensing methods, such as the MUSIC method, whose typical complexity is about O(N_c³ + N_c²).
C. Communication Processing
The receive signals of the nth subcarrier of the mth OFDM symbol can be expressed as

y^C_{n,m} = √(P_t) d_{n,m} h_{n,m} + z_{n,m},  (38)
where d_{n,m} is the transmitted data at the nth subcarrier of the mth OFDM symbol, and z_{n,m} is the Gaussian noise vector with each element following CN(0, σ²_n/P_t). We use the path with the largest power gain to receive the data; according to Section IV-A2, w_{R,0} is adopted as the receive BF vector. Then, the received signal is given by
ȳ^C_{n,m} = (w_{R,0})^H y^C_{n,m} = √(P_t) d_{n,m} h̄_{n,m} + z̄_{n,m},  (39)

where h̄_{n,m} = (w_{R,0})^H h_{n,m} is the transformed channel response, and z̄_{n,m} = (w_{R,0})^H z_{n,m} is the transformed noise.
The corresponding channel estimate is given by

ĥ_{n,m} = (w_{R,0})^H [Ĥ^CNN_m]_{:,n}.  (40)
Finally, the maximum likelihood criterion is used to estimate the communication data as

d̂_{n,m} = arg min_{d_Θ ∈ Θ_QAM} | ȳ^C_{n,m} / (√(P_t) ĥ_{n,m}) − d_Θ |²,  (41)
where Θ QAM is the used quadrature amplitude modulation (QAM) constellation.
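For illustration, a short NumPy sketch of the decision rule (41); the constellation array is an assumed input, e.g., a unit-power 4-QAM alphabet:

    import numpy as np

    def ml_demod(y, h_hat, constellation, Pt=1.0):
        # ML decision of Eq. (41): equalize by sqrt(Pt)*h_hat and pick the
        # nearest constellation point.
        z = y / (np.sqrt(Pt) * h_hat)
        idx = np.argmin(np.abs(z[..., None] - constellation), axis=-1)
        return constellation[idx]

    # Example: 4-QAM alphabet with unit average power.
    qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)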
V. SIMULATION RESULTS
In this section, we present the normalized MSE (NMSE) of the CSI enhanced by the ISAC complex CNN CSI enhancer, the BER performance of demodulation using the ISAC complex CNN enhanced CSI, and the AoA and range estimation MSE using the enhanced CSI estimation. The global parameter setting is listed as follows.
The carrier frequency is set to 28 GHz; the antenna interval, d_a, is half the wavelength; the array sizes of the user and the BS are 1 × 1 and 8 × 1, respectively; the number of paths is L = 2; and the reflection factor for the NLoS path, β_{C,1}, follows CN(0, 1). The subcarrier interval is ∆f = 480 kHz, and the bandwidth is B = N_c ∆f = 122.88 MHz. The numbers of neurons for the hidden layers are C_1 = C_2 = 4. The number of OFDM symbols in each packet is P_s = 14. The variance of the Gaussian noise is σ²_n = 4.9177 × 10^{−12} W. According to (2) and (3), the SNR at each antenna element of the BS can be expressed as
γ_c = P_t \sum_{l=0}^{L−1} |b_{C,l}|² / σ²_n.  (42)
The transmit power is determined according to the given SNR and σ 2 n . Then, we first introduce the procedures of generating training data for the ISAC complex CNN CSI enhancer.
A. Generation of Training, Evaluation, and Test Data
Training and evaluation CSI data are used to train the ISAC complex CNN and evaluate its generalization performance in the training process, respectively. Moreover, the test CSI data is used to obtain the final performance of the trained ISAC complex CNN.
Overall, we adopt two sorts of channels, namely, the static and dynamic channels, to test the performance of the ISAC complex CNN CSI enhancer. The static channel is suitable for the situation where the UE and scatterers are all static, and the true CSI of the static channel does not change. On the contrary, the parameters of the dynamic channel are time-varying. Then, we introduce the procedures for generating the static and dynamic channel CSI data, respectively. 1) Static Channel CSI Generation: The AoAs for the LoS and NLoS paths are θ_0 = 30° and θ_1 = 59.5°, respectively; the range between BS and UE is r_{0,1} = 91.26 m; the ranges between BS and scatterer, and between scatterer and UE, are r_{1,1} = 28.7 m and r_{1,2} = 71.6 m, respectively; the relative velocity between BS and UE is v_{0,1} = 0 m/s; and the velocities between BS and scatterer, and between scatterer and UE, are v_{1,1} = v_{1,2} = 0 m/s, since the UE and scatterers are all static. Using the above parameters, we can generate the true CSI for the static channel according to Section II-B.
For each given SNR, γ_c, we generate M_s = 2000 items of true CSI, and the transmit power is determined according to (42). The initial CSIs estimated using the LS method according to (5) are stored as the training input data, and the corresponding true CSIs are stored as the targets for the training input. We set the SNR range to [0, 1, ..., 15] dB. After generating the training CSI data for all 16 SNRs, we shuffle the CSI data set and choose the first 3/4 of it as the training data set and the rest as the evaluation data set. The test data are generated using the above parameters independently, and we set M_s = 1000 for generating the test data.
2) Dynamic Channel CSI Generation: The procedures of generating the training, evaluation, and test CSI data for the dynamic channel are mostly the same as those for the static channel, except that the range and velocities of the UE are set to be random for each packet. The range between UE and BS is uniformly distributed from 5 m to 150 m, and the relative velocity between UE and BS is uniformly distributed from -10 m/s to 10 m/s.
We can see that the static channel is actually a special case of dynamic channel where the AoA and range parameters are fixed, and the velocities of UE and all scatterer targets are 0 m/s. Therefore, we use the CSI data of dynamic channel to train the ISAC complex CNN to avoid the possible overfitting problem caused by the monotonous data. Here, we set N c = 256 to generate the training CSI data. After preparing all the data for the training of ISAC complex CNN, we can construct and train the CNN as shown in Section III.
B. Communication Performance using ISAC complex CNN
In this subsection, we present the training and evaluation NMSE loss in the training process of the ISAC complex CNN and the BER performance of communication demodulation using the CSI enhanced by the ISAC complex CNN. Fig. 6 presents the training and evaluation NMSE loss in the training process of the ISAC complex CNN. It can be seen that the loss of the ISAC complex CNN decreases fast in the first 20 epochs, and hereafter it converges to a stable level. We can conclude that the evaluation loss of the ISAC complex CNN on the static channel CSI is slightly lower than that on the dynamic channel CSI. This is because the channel parameters of the dynamic channel are random, and thus there is a higher test loss for the dynamic channel compared with the static channel. Overall, we can see that the evaluation loss is comparable to the training loss. This shows that the ISAC complex CNN has satisfactory generalization performance. Fig. 7 shows the CSI estimation NMSE of the ISAC complex CNN applied to the test CSI data. We choose the LS, LMMSE, DFT-based transform-domain, and the state-of-the-art dualCNN [16] CSI estimators as comparisons, where dualCNN treats the real and imaginary parts of the CSI as two isolated real-value channels. We choose dualCNN since it shares similar complexity with the proposed ISAC complex CNN. The CSI estimation NMSEs for LMMSE and dualCNN are referenced from [16]. The dualCNN, trained on CSI data at 5 dB SNR, achieves comparable CSI estimation NMSE performance to the LMMSE method with lower complexity. We can see that the CSI estimation NMSE of the proposed ISAC complex CNN CSI enhancer is much lower than those of the above two CSI estimators, by around 17 dB at the same SNR. This is because the proposed ISAC complex CNN can maintain the complex-valued signal transfer without the possible deviation caused by treating the real and imaginary parts of the complex CSI as isolated data, and the ISAC transform modules make the ISAC complex CNN able to extract the sparse transform-domain CSI tensors more efficiently. The DFT-based transform-domain method also exploits the sparse CSI expression in the transform domain by padding zeros to all the noise-like transform-domain CSI elements. However, it cannot maintain the phase of the transform-domain CSI. Therefore, the CSI estimation NMSE of the DFT-based transform-domain method is significantly larger than that of the ISAC complex CNN CSI enhancer. Moreover, the CSI estimation NMSE of the ISAC complex CNN applied to static CSI is slightly lower than that applied to the dynamic CSI, since there is an inevitable regression error due to the randomness of the dynamic channel. Fig. 8 further shows the generalization performance of the ISAC complex CNN by using the CNN trained on dynamic channel CSI data with N_c = 256 to enhance both the dynamic and static channel CSI data with various N_c. We can see that for the same type of channel (dynamic or static), the higher N_c is, the lower the CSI estimation NMSE is. This means that the generalization performance of the ISAC complex CNN increases as the dimensions of the CSI increase. This is because the CSI data with higher dimensions generate radar images with higher resolution. Moreover, we can see that given the same N_c, the CSI estimation NMSE of the ISAC complex CNN applied to the static channel is slightly lower than that applied to the dynamic channel, which is consistent with the results in Fig. 7. Fig. 9 and Fig.
10 show the BER performance of communication demodulation using the static channel CSI enhanced by the ISAC complex CNN under 4-QAM and 16-QAM modulation, respectively. It can be seen that when using 4-QAM, the BER performance using ISAC complex CNN enhanced CSI is similar to that using the perfect CSI estimation, which requires about 0.5 dB and 2.5 dB lower SNRs compared with the dualCNN and LS CSI estimators, respectively. When 16-QAM is used, ISAC complex CNN can achieve comparable BER performance with the perfect CSI estimation in the low SNR regime. In the high SNR regime, the BER using ISAC complex CNN enhanced CSI is slightly higher than that using the perfect CSI estimation, but is still lower than those using dualCNN and LMMSE methods. This is because the CSI estimation NMSE of the proposed ISAC complex CNN is lower than the compared CSI estimators as shown in Fig. 7. Fig. 11 and Fig. 12 show the BER of communication demodulation using the dynamic channel CSI enhanced by ISAC complex CNN under 4-QAM and 16-QAM modulation, respectively. It can be seen that when using 4-QAM, the BER using ISAC complex CNN is slightly higher than that with perfect CSI estimation, which requires about 0.3 dB and 2.3 dB lower SNRs compared with the dualCNN and LS CSI estimators, respectively. When 16-QAM is used, ISAC complex CNN can achieve comparable BER performance to the perfect CSI estimation in the low SNR regime. In the high SNR regime, the BER using ISAC complex CNN is slightly higher than that using the perfect CSI estimation, but is lower than those using dualCNN and LMMSE methods.
Overall, based on the above simulation results, we can see that the ISAC complex CNN can improve the CSI estimation accuracy and BER performance for both static and dynamic channel situations.

C. Sensing Performance

Fig. 13 illustrates the AoA estimation MSE based on the ISAC complex CNN enhanced CSI using the AoA estimation method shown in Section IV-A1, and we consider both the static and dynamic channel CSIs. It is shown that the MSE of AoA estimation based on the ISAC complex CNN enhanced CSI is significantly lower, by more than 17 dB, than that with the LS-estimated CSI for both the static and dynamic channels. This is because the CSI estimation NMSE of the ISAC complex CNN is significantly lower than that of the LS method. Moreover, the MSE of AoA estimation based on the static channel CSI enhanced by the ISAC complex CNN is slightly lower than that based on the enhanced dynamic channel CSI, because the randomness of the dynamic channel leads to a larger CSI estimation NMSE compared with the static channel. Then, we show the range estimation MSEs of the proposed biased FFT-based sensing method compared with the existing FFT-based sensing method [21]. Fig. 14 and Fig. 15 present the range estimation MSEs of the proposed biased FFT-based sensing method using the static and dynamic channel CSIs, respectively. From Fig. 14, we can see that the range estimation MSE of the proposed biased FFT-based sensing method is significantly lower than that of the FFT-based sensing method using the enhanced CSI estimation. This is because the average of the biased sensing results is a weighted sum that is closer to the true value, as proved in Appendix B, while the MSE of the conventional FFT-based sensing method is restricted by the resolution, i.e., the grid interval of the range spectrum. Moreover, it is shown that the range estimation MSE of the biased FFT-based sensing method decreases with the increase of N_r given the same SNR. This is because, as N_r increases, the bias terms become denser, and the averaging in (37) becomes more accurate. By comparing Fig. 14 with Fig. 15, we can see that the range estimation MSE of the biased FFT-based sensing method applied to the static channel CSI is lower than that applied to the dynamic channel CSI. This is because the CSI estimation NMSE of the dynamic channel is slightly larger than that of the static CSI, as shown in Fig. 7, which leads to a larger error for range sensing based on the dynamic CSI estimation. When N_r = 200, the range estimation MSE of the biased FFT-based sensing method is close to that of the high-complexity MUSIC-based sensing method in the low SNR regime for both the static and dynamic channel CSIs. The gap between the range estimation MSEs of the biased FFT-based and the MUSIC-based sensing methods is enlarged with the increase of SNR in the high SNR regime. This is because the sensing MSE for each bias term of the biased FFT-based sensing method is relatively larger than those for the noise-subspace vectors of the MUSIC-based method.
VI. CONCLUSION
In this paper, we propose an ISAC complex CNN CSI enhancer that uses complex-valued computation layers to form the CNN structure, maintaining the phase information of the complex-valued CSI throughout the signal transfer. By integrating the FFT-based ISAC transform modules into the complex CNN structure, we transform the CSI data from the original domain into the sparse delay-angle domain. The sparse expression of CSI in the delay-angle domain can be treated as a radar image, which is well suited to processing by a CNN. This improves the convergence speed and generalization performance of the ISAC complex CNN. The MSE of AoA estimation using the enhanced CSI improves significantly compared with that using the initial CSI estimation. Finally, we propose a novel biased FFT-based sensing method for range estimation, which significantly improves the range sensing accuracy with complexity similar to that of the FFT-based sensing method.
APPENDIX A
ESTIMATION OF $\hat{L}$
Exploiting $\mathbf{v}_\Sigma$, we obtain a differential vector $\mathbf{v}_\Delta \in \mathbb{R}^{(P-1)\times 1}$, where $[\mathbf{v}_\Delta]_i = [\mathbf{v}_\Sigma]_i - [\mathbf{v}_\Sigma]_{i+1}$. According to (20), we can see that $[\mathbf{v}_\Delta]_i \approx 0$ for $i \geq L$, while $|[\mathbf{v}_\Delta]_i| \gg 0$ for $i < L$. Based on this property, we calculate the mean value of the latter part of $\mathbf{v}_\Delta$ and obtain

$$\bar{v} = \frac{\sum_{k=\lfloor (P-1)/2 \rfloor}^{P-1} [\mathbf{v}_\Delta]_k}{P - \lfloor (P-1)/2 \rfloor}, \quad (43)$$

where $\bar{v}$ should be an extremely small value that can be used to decide whether there is a signal. Based on the maximum likelihood criterion, the estimate of $L$ is given by

$$\hat{L} = \arg\max_i \left\{ i : [\mathbf{v}_\Delta]_i > (1+\varepsilon)\bar{v} \right\}, \quad (44)$$

where $\varepsilon$ is a parameter to avoid errors caused by small noise. In this paper, we set $\varepsilon = 0.5$.
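For concreteness, the estimator in (43)-(44) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code: the use of the magnitude of the jumps and the conversion from a 0-based array index to the 1-based index $i$ are our assumptions.

```python
import numpy as np

def estimate_num_paths(v_sigma: np.ndarray, eps: float = 0.5) -> int:
    """Sketch of (43)-(44): estimate L from the differential vector v_Delta."""
    P = v_sigma.shape[0]
    # [v_Delta]_i = [v_Sigma]_i - [v_Sigma]_{i+1}, so v_delta has P - 1 entries.
    v_delta = v_sigma[:-1] - v_sigma[1:]
    # Eq. (43): average the (near-zero) tail of v_delta to get the threshold.
    v_bar = np.mean(v_delta[(P - 1) // 2:])
    # Eq. (44): largest 1-based index whose jump exceeds (1 + eps) * v_bar.
    above = np.flatnonzero(np.abs(v_delta) > (1.0 + eps) * np.abs(v_bar))
    return int(above.max()) + 1 if above.size > 0 else 0
```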
APPENDIX B
PROOF OF $|\bar{r}_{est} - r_t| < |r_{est} - r_t|$

We first consider the prerequisite condition that $r_1 < r_t \leq \frac{r_1 + r_2}{2}$, so that $r_{est} = r_1 = \arg\min_{r \in \{r_1, r_2\}} |r_t - r|$. Furthermore, we have

$$|r_{est} - r_t| = r_t - r_1, \quad (45)$$

$$k > \frac{1}{r_\delta}\left(\frac{r_1 + r_2}{2} - r_t\right). \quad (46)$$

Then, we continue the proof under two conditions, i.e., $\bar{r}_{est} \leq r_t$ and $\bar{r}_{est} > r_t$.

1) When $\bar{r}_{est} \leq r_t$, we have $|\bar{r}_{est} - r_t| = r_t - \bar{r}_{est}$. Therefore, we need to prove $r_t - \bar{r}_{est} < r_t - r_1$, i.e., $\bar{r}_{est} - r_1 > 0$. According to (37), $\bar{r}_{est} - r_1 = r_\delta \frac{N_r (N_r/2 - k)}{N_r + 1} > 0$ is satisfied.

2) When $\bar{r}_{est} > r_t$, we need to prove $\bar{r}_{est} - r_t < r_t - r_1$, i.e., $r_\delta \frac{N_r (N_r/2 - k)}{N_r + 1} < 2(r_t - r_1)$. According to (46), $r_\delta \frac{N_r (N_r/2 - k)}{N_r + 1} < \frac{N_r}{N_r + 1}(r_t - r_1) \leq 2(r_t - r_1)$.

Therefore, the proof is complete for the prerequisite condition $r_1 < r_t \leq \frac{r_1 + r_2}{2}$. When $r_2 > r_t > \frac{r_1 + r_2}{2}$, the proof is completely similar to the above procedure, and we omit it in this paper.
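The inequality can also be checked numerically. The following sketch uses toy values (the grid, true range, and $N_r$ are our choices) and a simple nearest-grid-point stand-in for the FFT peak decision; it is meant only to illustrate why averaging debiased grid decisions beats a single grid decision.

```python
import numpy as np

def snap_to_grid(r, grid):
    """Stand-in for an FFT peak decision: nearest grid point to r."""
    return grid[np.argmin(np.abs(grid - r))]

delta_r = 1.0                     # grid interval
grid = np.arange(0.0, 10.0, delta_r)
r1, r_t = 3.0, 3.3                # r_1 < r_t <= (r_1 + r_2) / 2
N_r = 200
r_delta = delta_r / N_r

r_est = snap_to_grid(r_t, grid)   # conventional estimate (here: r_1 = 3.0)
ks = np.arange(-N_r // 2, N_r // 2 + 1)
r_bar = np.mean([snap_to_grid(r_t + k * r_delta, grid) - k * r_delta for k in ks])

# Appendix B claim: the averaged, debiased estimate is strictly closer.
assert abs(r_bar - r_t) < abs(r_est - r_t)
print(abs(r_est - r_t), abs(r_bar - r_t))   # ~0.3 vs ~0.002
```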
Fig. 1: The ISAC scenario.
Note that due to the normalization procedure, when generating the training data, the true target should be generated as $\mathbf{H}_m \sqrt{\ldots}$. Loss function: since the input and output of the CNN are both complex values, we define the mean square error function for complex values as $J(\mathbf{H}_m, \hat{\mathbf{H}}_m)$.
Fig. 5: The ISAC signal processing diagram.
The points in $\Theta_A$ are sorted in ascending order of $f_a(\theta; \mathbf{U}_N)$.
The complexity of the biased FFT-based sensing method is $L$ times that of the FFT-based sensing method, i.e., $O\{L N_c \log(N_c)\}$. This is low compared with subspace sensing methods, such as the MUSIC method, whose typical complexity is about $O\{N_c^3 + N_c^2\}$.
Fig. 6: The NMSE loss during training of the ISAC complex CNN on the static and dynamic channel CSIs.
Fig. 7: The NMSE of the ISAC complex CNN compared with the existing methods.
Fig. 8: The NMSE of the ISAC complex CNN for various numbers of $N_c$.
Fig. 9: BER of communication demodulation using the ISAC complex CNN enhanced static channel CSI compared with different CSI estimators under 4-QAM.
Fig. 12: BER of communication demodulation using the ISAC complex CNN enhanced dynamic channel CSI compared with different CSI estimators under 16-QAM.
Fig. 13: AoA estimation MSE based on the ISAC complex CNN enhanced CSI compared with that based on the conventional LS-estimated CSI, when $N_c = 256$.
Fig. 14: The range estimation MSE of the biased FFT-based sensing method using the static channel CSI under different $N_r$.
Fig. 15: The range estimation MSE of the biased FFT-based sensing method using the dynamic channel CSI under different $N_r$.
X. Chen, Z. Feng, and Z. Wei are with Beijing University of Posts and Telecommunications, Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing 100876, P. R. China (Email: {chenxu96330, fengzy, weizhiqing}@bupt.edu.cn). J. A. Zhang is with the Global Big Data Technologies Centre, University of Technology Sydney, Sydney, NSW, Australia (Email: [email protected]). Ping Zhang is with Beijing University of Posts and Telecommunications, State Key Laboratory of Networking and Switching Technology, Beijing 100876, P. R. China (Email: [email protected]). X. Yuan is with Commonwealth Scientific and Industrial Research Organization (CSIRO), Australia (Email: [email protected]). Corresponding author: Zhiyong Feng.
Fig. 4: The diagram of the ISAC complex CNN.
Algorithm 1: Biased FFT-based Sensing Method

Input: Spatially filtered CSI vector $\hat{\mathbf{h}}_{R,l}$; the grid interval $\Delta r = \frac{c}{N_c \Delta f}$; the number of even pieces for each interval, $N_r$.
Output: The range estimation $\bar{r}_{est}$.
Initialize: 1) Calculate $r_\delta = \Delta r / N_r$. 2) Generate the range bias set $\{k r_\delta\}|_{k \in [-\frac{N_r}{2}, \frac{N_r}{2}]}$.
Process:
Step 1: Add biases to $\hat{\mathbf{h}}_{R,l}$ according to (36), which generates $N_r + 1$ biased vectors $\{\hat{\mathbf{h}}_{R,l,k}\}|_{k \in [-\frac{N_r}{2}, \frac{N_r}{2}]}$.
Step 2: Apply FFT-based ISAC range sensing to each vector in $\{\hat{\mathbf{h}}_{R,l,k}\}|_{k \in [-\frac{N_r}{2}, \frac{N_r}{2}]}$, and generate the range estimates $\{\hat{r}_k\}|_{k \in [-\frac{N_r}{2}, \frac{N_r}{2}]}$.
Step 3: Debias the range estimates and generate $\{\hat{r}_{est,k} = \hat{r}_k - k r_\delta\}|_{k \in [-\frac{N_r}{2}, \frac{N_r}{2}]}$.
Step 4: Calculate the average of the debiased range estimates: $\bar{r}_{est} = \frac{1}{N_r + 1} \sum_{k=-N_r/2}^{N_r/2} \hat{r}_{est,k}$.
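A minimal NumPy sketch of Algorithm 1 is given below. It assumes a single-target CSI model in which range maps linearly to a subcarrier phase ramp with $\Delta r = c/(N_c \Delta f)$, consistent with the grid interval above; the phase-ramp realization of the bias in (36) and the peak-to-range mapping are our assumptions rather than the paper's exact implementation.

```python
import numpy as np

def biased_fft_range_sensing(h_hat, delta_f, N_r=200, c=3e8):
    """Sketch of Algorithm 1: average of N_r + 1 debiased FFT grid decisions."""
    N_c = h_hat.shape[0]
    delta_r = c / (N_c * delta_f)            # grid interval Delta_r
    r_delta = delta_r / N_r                  # bias step r_delta
    n = np.arange(N_c)

    estimates = []
    for k in range(-N_r // 2, N_r // 2 + 1):
        # Bias the CSI by a phase ramp equivalent to a range shift of k * r_delta.
        biased = h_hat * np.exp(-1j * 2 * np.pi * n * delta_f * k * r_delta / c)
        peak = int(np.argmax(np.abs(np.fft.fft(biased))))   # FFT-based grid decision
        estimates.append(peak * delta_r - k * r_delta)      # debias (Step 3)
    return float(np.mean(estimates))                        # Step 4

# Toy usage: a noiseless single target at r_true.
N_c, delta_f, r_true = 256, 120e3, 410.0
h = np.exp(-1j * 2 * np.pi * np.arange(N_c) * delta_f * r_true / 3e8)
print(biased_fft_range_sensing(h, delta_f))  # close to r_true
```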
Fig. 11: BER of communication demodulation using the ISAC complex CNN enhanced dynamic channel CSI compared with different CSI estimators under 4-QAM.
Then, we present the BER performance of communication demodulation using the ISAC complex CNN compared with other CSI estimation methods.
Fig. 10: BER of communication demodulation using the ISAC complex CNN enhanced static channel CSI compared with different CSI estimators under 16-QAM. (Curves: Perfect CSI, LS, ISAC-ComCNN-static, LMMSE, DualCNN; axes: SNR in dB vs. BER.)
Z. Feng, Z. Wei, X. Chen, H. Yang, Q. Zhang, and P. Zhang, "Joint communication, sensing, and computation enabled 6G intelligent machine system," IEEE Network, vol. 35, no. 6, pp. 34-42, Nov. 2021.
F. Liu, C. Masouros, A. Petropulu, H. Griffiths, and L. Hanzo, "Joint radar and communication design: Applications, state-of-the-art, and the road ahead," IEEE Transactions on Communications, June 2020.
W. Saad, M. Bennis, and M. Chen, "A vision of 6G wireless systems: Applications, trends, technologies, and open research problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May 2020.
J. A. Zhang, K. Wu, X. Huang, Y. J. Guo, D. Zhang, and R. W. Heath, "Integration of radar sensing into communications with asynchronous transceivers," IEEE Communications Magazine, pp. 1-7, Aug. 2022.
S. A. Hassani, B. van Liempd, A. Bourdoux, F. Horlin, and S. Pollin, "Joint in-band full-duplex communication and radar processing," IEEE Systems Journal, pp. 1-9, July 2021.
J. A. Zhang, F. Liu, C. Masouros, R. W. Heath, Z. Feng, L. Zheng, and A. Petropulu, "An overview of signal processing techniques for joint communication and radar sensing," IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 6, pp. 1295-1315, Sept. 2021.
Y. S. Cho, J. Kim, W. Y. Yang, and C. G. Kang, MIMO-OFDM Wireless Communications with MATLAB. Wiley Publishing, 2010.
R. E. Ziemer and W. H. Tranter, Principles of Communications, 7th ed. Wiley, 2014.
X. Chen, Z. Feng, J. A. Zhang, Z. Wei, X. Yuan, and P. Zhang, "Sensing-aided uplink channel estimation for joint communication and sensing," IEEE Wireless Communications Letters, vol. 12, no. 3, pp. 441-445, March 2023.
H. Ye, G. Y. Li, and B.-H. Juang, "Power of deep learning for channel estimation and signal detection in OFDM systems," IEEE Wireless Communications Letters, vol. 7, no. 1, pp. 114-117, Feb. 2018.
M. Mehrabi, M. Mohammadkarimi, M. Ardakani, and Y. Jing, "Decision directed channel estimation based on deep neural network k-step predictor for MIMO communications in 5G," IEEE Journal on Selected Areas in Communications, vol. 37, no. 11, pp. 2443-2456, Nov. 2019.
W. Ma, C. Qi, Z. Zhang, and J. Cheng, "Sparse channel estimation and hybrid precoding using deep learning for millimeter wave massive MIMO," IEEE Transactions on Communications, vol. 68, no. 5, pp. 2838-2849, 2020.
M. Soltani, V. Pourahmadi, A. Mirzaei, and H. Sheikhzadeh, "Deep learning-based channel estimation," IEEE Communications Letters, vol. 23, no. 4, pp. 652-655, April 2019.
P. Dong, H. Zhang, G. Y. Li, I. S. Gaspar, and N. NaderiAlizadeh, "Deep CNN-based channel estimation for mmWave massive MIMO systems," IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 5, pp. 989-1000, Sept. 2019.
P. Jiang, C.-K. Wen, S. Jin, and G. Y. Li, "Dual CNN-based channel estimation for MIMO-OFDM systems," IEEE Transactions on Communications, vol. 69, no. 9, pp. 5859-5872, 2021.
Z. Gao, L. Dai, S. Han, C.-L. I, Z. Wang, and L. Hanzo, "Compressive sensing techniques for next-generation wireless communications," IEEE Wireless Communications, vol. 25, no. 3, pp. 144-153, 2018.
X. Chen, Z. Feng, Z. Wei, X. Yuan, P. Zhang, J. A. Zhang, and H. Yang, "Multiple signal classification based joint communication and sensing system," IEEE Transactions on Wireless Communications, pp. 1-1, Feb. 2023.
Z. Yang, R. Wang, Y. Jiang, and J. Li, "Joint estimation of velocity, angle-of-arrival and range (JEVAR) using a conjugate pair of Zadoff-Chu sequences," IEEE Transactions on Signal Processing, vol. 69, pp. 6009-6022, Oct. 2021.
H. Zhu, Y. Ge, and X. Chen, "DFT-based adaptive channel estimation for OFDM systems," pp. 515-517, Oct. 2015.
C. Sturm and W. Wiesbeck, "Waveform design and signal processing aspects for fusion of wireless communications and radar sensing," Proceedings of the IEEE, vol. 99, no. 7, pp. 1236-1259, July 2011.
M. Haardt, M. Pesavento, F. Roemer, and M. Nabil El Korso, "Subspace methods and exploitation of special array structures," in Academic Press Library in Signal Processing: Volume 3, A. M. Zoubir, M. Viberg, R. Chellappa, and S. Theodoridis, Eds. Elsevier, 2014, vol. 3, ch. 15, pp. 651-717.
C. Trabelsi, O. Bilaniuk, Y. Zhang, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal, "Deep complex networks," arXiv preprint arXiv:1705.09792, 2017.
X.-D. Zhang, Modern Signal Processing. Berlin, Boston: De Gruyter, 2023.
| [] |
[
"Modeling Cross-Cultural Pragmatic Inference with Codenames Duet",
"Modeling Cross-Cultural Pragmatic Inference with Codenames Duet"
] | [
"Omar Shaikh [email protected] \nStanford University\n\n",
"⋆ ",
"Caleb Ziems [email protected] \nStanford University\n\n",
"⋆ ",
"William Held [email protected] \n⋄ USC Information Sciences Institute\nGeorgia Institute of Technology\n\n",
"Aryan J Pariani [email protected] \n⋄ USC Information Sciences Institute\nGeorgia Institute of Technology\n\n",
"Fred Morstatter ",
"Diyi Yang [email protected] \nStanford University\n\n"
] | [
"Stanford University\n",
"Stanford University\n",
"⋄ USC Information Sciences Institute\nGeorgia Institute of Technology\n",
"⋄ USC Information Sciences Institute\nGeorgia Institute of Technology\n",
"Stanford University\n"
] | [] | Pragmatic reference enables efficient interpersonal communication. Prior work uses simple reference games to test models of pragmatic reasoning, often with unidentified speakers and listeners. In practice, however, speakers' sociocultural background shapes their pragmatic assumptions. For example, readers of this paper assume NLP refers to "Natural Language Processing," and not "Neuro-linguistic Programming." This work introduces the CULTURAL CODES dataset, which operationalizes sociocultural pragmatic inference in a simple word reference game.CULTURAL CODES is based on the multi-turn collaborative two-player game, Codenames Duet. Our dataset consists of 794 games with 7,703 turns, distributed across 153 unique players. Alongside gameplay, we collect information about players' personalities, values, and demographics. Utilizing theories of communication and pragmatics, we predict each player's actions via joint modeling of their sociocultural priors and the game context. Our experiments show that accounting for background characteristics significantly improves model performance for tasks related to both clue giving and guessing, indicating that sociocultural priors play a vital role in gameplay decisions. | null | [
"https://export.arxiv.org/pdf/2306.02475v1.pdf"
] | 259,076,052 | 2306.02475 | f4f794a463e736de9761685a1b364a927d06882d |
Modeling Cross-Cultural Pragmatic Inference with Codenames Duet
Omar Shaikh [email protected]
Stanford University
⋆
Caleb Ziems [email protected]
Stanford University
⋆
William Held [email protected]
⋄ USC Information Sciences Institute
Georgia Institute of Technology
Aryan J Pariani [email protected]
⋄ USC Information Sciences Institute
Georgia Institute of Technology
Fred Morstatter
Diyi Yang [email protected]
Stanford University
Modeling Cross-Cultural Pragmatic Inference with Codenames Duet
Pragmatic reference enables efficient interpersonal communication. Prior work uses simple reference games to test models of pragmatic reasoning, often with unidentified speakers and listeners. In practice, however, speakers' sociocultural background shapes their pragmatic assumptions. For example, readers of this paper assume NLP refers to "Natural Language Processing," and not "Neuro-linguistic Programming." This work introduces the CULTURAL CODES dataset, which operationalizes sociocultural pragmatic inference in a simple word reference game.CULTURAL CODES is based on the multi-turn collaborative two-player game, Codenames Duet. Our dataset consists of 794 games with 7,703 turns, distributed across 153 unique players. Alongside gameplay, we collect information about players' personalities, values, and demographics. Utilizing theories of communication and pragmatics, we predict each player's actions via joint modeling of their sociocultural priors and the game context. Our experiments show that accounting for background characteristics significantly improves model performance for tasks related to both clue giving and guessing, indicating that sociocultural priors play a vital role in gameplay decisions.
1 Introduction
"Most of our misunderstandings of other people are not due to any inability to... understand their words... [but that] we so often fail to understand a speaker's intention." -George Armitage Miller (1974) Certain pragmatic inferences can only be interpreted by individuals with shared backgrounds.
⋆ Equal contribution.

Steps 1-5 outline high-level gameplay tasks. THE CLUE GIVER targets the words fall and drop, giving the hint slip. THE GUESSER misinterprets slip as a piece of paper, guessing receipt and check.
For example, what researchers call fun may not be fun for kindergartners. Theories from sociolinguistics, pragmatics, and communication aim to explain how sociocultural background affects interpersonal interaction (Schramm, 1954), especially since variation occurs across several dimensions: class (Bernstein, 2003; Thomas, 1983), age (Labov, 2011), gender (Eckert and McConnell-Ginet, 2013), race (Green, 2002), and more. Rigorously modeling how culture affects pragmatic inference on all axes is understandably challenging. The board game Codenames Duet offers a more restricted setting of turn-based word reference between two players. In each round, THE CLUE GIVER provides a single-word clue; then THE GUESSER must interpret this clue to select the intended word references on the game board. Ideal inferences come from the players' common ground, the set of shared beliefs between them (Clark, 1996). In practice, however, a player's behavior can be idiosyncratic. Each player has knowledge and experience that shape how they interpret clues and make guesses. When players' backgrounds differ, they may be more likely to misinterpret their partner, as seen in Figure 1.
Inspired by the above, we model the role of sociocultural factors in pragmatic inference with a new task and a series of ablation experiments. First, we describe the CULTURAL CODES dataset of cross-cultural Codenames Duet gameplay, with relevant background information from the players' demographics, personalities, and political and moral values (§3). Then, we deconstruct each action in a game into a distinct modeling task, taking inspiration from work on cross-cultural pragmatics (§4). Finally, we model each task with/without sociocultural priors, and highlight how player background improves model performance (§6). Our dataset and code are released publicly at https://github.com/SALT-NLP/codenames.

2 Related Work

Cross-Cultural Pragmatics and NLP Pragmatics describes the nonliteral meaning that comes from context and social inference (Purpura, 2004; Thomas, 1983; Hatch et al., 1992). Although some pragmatic categories are largely universal (e.g., politeness), they can be expressed differently in different sociocultural contexts (Taguchi, 2012; Shoshana et al., 1989; Gudykunst and Kim, 1984). When an intended meaning is misinterpreted, this is known as 'pragmatic failure' (Thomas, 1983), and it is often the result of misaligned reference frames or differences in the common ground (Stadler, 2012; Crawford et al., 2017). One axis of difference is between low- and high-context cultures (Hofstede, 2001), where high-context cultures rely more on shared background. Pragmatics also differs by age (Saryazdi et al., 2022), region, ethnicity, politics, and class (Thomas, 1983), as does theory of mind reasoning (Fiske and Cox, 1979; Miller, 1984; Shweder, 1984; Lillard, 1998, 1999).
Outside of work on politeness (Sperlich et al., 2016; Fu et al., 2020), sarcasm (Joshi et al., 2016), and irony (Karoui et al., 2017), the NLP subfield has not closely considered cross-cultural pragmatics. While there is work on understanding the role of individual culture, for example, learning demographic word vectors (Garimella et al., 2017), identifying deception/depression (Soldner et al., 2019; Loveys et al., 2018), or improving translation (Specia et al., 2016), modeling cross-cultural pragmatic inference in communication remains a challenge (Hershcovich et al., 2022).
Still, a culture-free pragmatics has played a central role in various NLP tasks, from instruction following (Fried et al., 2018), image captioning (Andreas and Klein, 2016), and persona-consistent dialogue (Kim et al., 2020), to summarization (Shen et al., 2019). Much of this work is grounded in Bayesian models of cognition (Griffiths et al., 2008), with models like Bayesian Teaching (Eaves Jr et al., 2016), Naive Utility Calculus (Jara-Ettinger et al., 2016; Jern et al., 2017), and the Rational Speech Acts (RSA) model (Goodman and Frank, 2016; Franke and Jäger, 2016) that integrate language, world knowledge, and context to explain ideal pragmatic reasoning (Noveck, 2018) and grounded reference (Monroe et al., 2017). Instead of modeling socioculture in isolation, we model pragmatic inference, highlighting the role of culture in general interpersonal interaction.
Games as Testbeds for AI A significant body of work focuses on modeling optimal strategy across a wide set of games, including Go (Silver et al., 2016), Chess (Schrittwieser et al., 2020), Poker (Brown and Sandholm, 2017), Diplomacy (FAIR et al., 2022), D&D (Callison-Burch et al., 2022; Zhou et al., 2022), and Mafia (Ibraheem et al., 2022). Reference games are growing in popularity as testbeds for AI. Tests for artificial pragmatic reasoning often rely on sequential language games, where two players leverage private knowledge either to compete (Yao et al., 2021) or coordinate towards a common goal (Potts, 2012; Khani et al., 2018; Hawkins et al., 2015). In this vein, recent works have considered Codenames (Koyyalagunta et al., 2021; Kim et al., 2019; Jaramillo et al., 2020), Connector (Ashok Kumar et al., 2021; Kumar et al., 2021; Kovacs et al., 2022), InfoJigsaw (Khani et al., 2018), and image-based games (Bao et al., 2022). Word association games have been used in psychology to study semantic associations in cultural (Korshuk, 2007) and religious (Tikhonova, 2014) contexts. We utilize games to model the effect of cross-cultural interactions on pragmatic inference.
3 The CULTURAL CODES Dataset
This study has been approved by the Institutional Review Board (IRB) at the authors' institution. The purpose of the CULTURAL CODES dataset is to understand how measurable social factors influence dyadic communication in English. By collecting relevant participant background information, we aim to understand how these factors affect linguistic reasoning in a collaborative reference game.
3.1 Codenames Duet Game Overview
Codenames Duet is a collaborative variant of Codenames (Vlaada, 2015) designed for 2 players. The players share a 5 × 5 board of 25 common words. Each player has a distinct (but sometimes partially overlapping) map from words on the board to the following objectives: goal, neutral, and avoid. One player's map is hidden from the opposing player. The objective of the game is for both players to guess all of their partner's goal words without guessing any of their partner's avoid words, as doing so results in an immediate loss.
CULTURAL CODES uses an adapted version of Codenames Duet. On each turn, players alternate between THE CLUE GIVER and THE GUESSER roles. To begin the turn, THE CLUE GIVER (1) selects one or more associated goal words as targets. Next, THE CLUE GIVER (2) provides a single-word clue that relates to the associated target(s). This clue is displayed to THE GUESSER, along with the number of targets she should find. THE CLUE GIVER also (3) provides a justifying rationale for the clue, describing the relationship between the clue and the target(s). This rationale is not displayed to the partner. Using the clue and the number of target words, THE GUESSER (4) guesses targeted words. For each guess, THE GUESSER (5) provides a justifying rationale. After ending the turn, players alternate roles and continue until all goal words are selected for both sides, or players are eliminated for guessing an avoid word. An overview of roles is illustrated in Figure 1. In §4, we formalize actions (1)-(5) as distinct modeling tasks.
3.2 Selecting Board Game Words
All experiments are run on a strategically filtered subset of the 400 words from Codenames Duet. We select the 100 most abstract and semantically ambiguous board game words to elicit diverse responses from players. Since the polysemy (Ravin and Leacock, 2000) of a word, i.e., the number of related senses it includes, predicts the expected diversity of player responses, we retain only nouns with two or more senses in WordNet (Miller, 1992). Next, we rank polysemous words with Brysbaert et al. (2014)'s concreteness list, selecting the 100 most abstract according to the mean of their human concreteness scores (the finalized list can be found in Appendix A).
When a player starts a game, we initialize the board with a random subset of 25 words from the filtered 100. For each player, 9 words are randomly mapped to goal, 3 are avoid, and 13 are neutral.
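A sketch of this filtering and board setup is below; the NLTK WordNet interface and a {word: score} concreteness dictionary stand in for the exact resources, and the function names are ours.

```python
import random
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def filter_board_vocabulary(words, concreteness, k=100):
    """Keep nouns with >= 2 WordNet senses, then take the k most abstract
    (lowest mean concreteness). `concreteness` maps word -> score from
    Brysbaert et al. (2014); missing words default to maximally concrete."""
    polysemous = [w for w in words if len(wn.synsets(w, pos=wn.NOUN)) >= 2]
    return sorted(polysemous, key=lambda w: concreteness.get(w, 5.0))[:k]

def init_player_map(vocab, rng=random):
    """One player's hidden map over a random 25-word board:
    9 goal, 3 avoid, and 13 neutral words."""
    board = rng.sample(vocab, 25)
    roles = ["goal"] * 9 + ["avoid"] * 3 + ["neutral"] * 13
    rng.shuffle(roles)
    return dict(zip(board, roles))
```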
3.3 Gameplay Data
To collect gameplay data, we modified an open-source implementation of Codenames Duet,¹ automatically pairing individuals who visited the game website. To source players, we relied on Amazon's Mechanical Turk. We provided MTurkers with an initial instruction video detailing rules and how to play. To be eligible for the task, Turkers had to get ≥ 80% of questions right on a qualifying quiz about Codenames rules and gameplay (Appendix D.1). Average game length was around 17.4 minutes, and MTurkers were paid $2.50 for every game.
Gameplay Attributes For each completed turn, we collected the following game state information from THE CLUE GIVER. Elements marked in gray were hidden from THE GUESSER.
Clue: THE CLUE GIVER's clue c (e.g. c could be "transport" for the target "car").
Target Word(s): (Hidden) The target words t n (e.g. "car") that THE CLUE GIVER intended THE GUESSER to guess.
Target Word(s) Rationale(s): (Hidden) A free-text phrase r n , that describes the relationship between each target word t n and the clue c (e.g. "a car is a mode of transport").
To summarize, each turn from THE CLUE GIVER results in a clue c and at least one target-rationale pair (t n , r n ). On the other hand, we collect the following for THE GUESSER.
Guesses: The guesses g n that THE GUESSER selected for THE CLUE GIVER's clue c.
Rationale for Each Guess: A free-text phrase r_n that relates the guess g_n to the clue c.

Manual inspection revealed a wide range of rationales. To prevent models from exploiting this variance, we instructed GPT-3 to normalize the text, removing pronouns and determiners.² We provided few-shot examples of reformatted rationales and manually inspected normalized outputs. Additional preprocessing information can be found in Appendix B.
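Put together, each turn yields a record like the following sketch (a hypothetical schema; the field names are ours, and the dataset's actual serialization may differ):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Turn:
    """One CULTURAL CODES turn. Target/rationale pairs are hidden from
    THE GUESSER; only the clue and the target count are shown."""
    clue: str                                       # single-word clue c
    targets: List[Tuple[str, str]]                  # hidden (t_n, r_n) pairs
    guesses: List[Tuple[str, str]] = field(default_factory=list)  # (g_n, r_n)

    @property
    def num_targets(self) -> int:
        # Displayed to THE GUESSER alongside the clue.
        return len(self.targets)

turn = Turn(clue="transport", targets=[("car", "a car is a mode of transport")])
```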
3.4 Sociocultural Priors and Worker Diversity
Because we aim to understand the role of sociocultural priors on gameplay, we asked Turkers to complete the standardized surveys below, which cover three broad dimensions: demography, personality, and morality.
Demographic Data (Figure 2) comes from both the annotation UI and the task's qualifying questionnaires. In the UI, we asked Turkers for their numeric age, their country of origin, and whether English is their native language. These were required features, so we denote them as Demo_Req. In the qualifier, we included an extended demographic survey with age range, level of education, marital status, and native language (Appendix D.2.1), which we denote as Demo_All. We find that our annotator demographics are moderately diverse, mirroring Moss et al. (2020). Reported gender across annotators is evenly split: 53% identify as women, 47% identify as men, and 0% as other. Additional details are in Figure 2 and Appendix D.2.1.

Personality (Figure 3) surveys also offer insight into interpersonal interactions. We administer the Big 5 Personality Test (John et al., 1991), measuring a range of personality dimensions on a 5-point Likert scale. Features include Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Definitions are in Appendix D.2.2.
3.5 General Dataset Statistics
In total, we collect 794 games, with a total of 199 wins and 595 losses.³ Games lasted an average of 9.7 turns, resulting in 7,703 total turns across all games. THE CLUE GIVER targeted an average of 1.24 words per turn. For all collected games, both players provided Demo_Req. For 54% of games, both players completed all background surveys; for the remaining 46% of games, at least one player completed all surveys. There were no games with no background information.
4 Tasks and Modeling
To investigate the role of sociocultural factors in pragmatic inference, we propose a set of tasks (Table 1) associated with THE CLUE GIVER (§4.1) and THE GUESSER (§4.2) roles. Concretely, we formalize each action as a conditional generation problem instead of classification, since outputs in CULTURAL CODES are unconstrained: actions and outputs depend on a changing board state.

³ Some players went inactive before a game was completed. We only collect games that are reasonably long: greater than the 90th percentile of incomplete games, or ≥ 7 turns.
4.1 Modeling THE CLUE GIVER
4.1.1 Selecting Target Words
To start, THE CLUE GIVER identifies target word(s) (1) on a board, which are later used to construct a clue for the inference. Clues will target salient words, where salience is at least partially determined by the speaker's cultural background (Wolff and Holmes, 2011). Each set of targets is a subset of the remaining goal words for a given turn (targets ⊆ goal); we enforce this restriction in our annotation UI.
4.1.2 Giving a Clue
After selecting target words, THE CLUE GIVER must generate a common clue word across the targets (2). Here, THE CLUE GIVER must select a prototypical word across the targets. Because cultural background plays a role in inference (Thomas, 1983), a clue should lie in the players' common ground. Furthermore, the clue word should not lead the guesser to pick an avoid a_i or neutral n_i word, since these words can end the game or turn (see §3.1). Therefore, we also include avoid and remaining neutral words in our input.
4.1.3 Framing the Target Rationales
The relationship between the target and clue word plays a critical role in communication: how information is framed with respect to common ground can influence pragmatic success (Crawford et al., 2017). To this end, we model THE CLUE GIVER's framing of the rationale r for a specific target word t (3), connecting the target t to the clue (cf. §3.3).
Because the framing is constructed in relation to every target word (if multiple are provided), we also encode all targets in the input.
4.2 Modeling THE GUESSER
4.2.1 Selected Guesses
With the clue word, THE GUESSER pragmatically infers THE CLUE GIVER's targets, selecting a sequence of corresponding guesses (4). For this task, we model the sequence of all selected guesses, regardless of correctness. We input all unselected⁴ words at the start of each turn for THE GUESSER, along with the provided clue. Like with Target Word Selection, guesses must be a subset of the unselected words (guesses ⊆ unselected); we enforce this during annotation.

⁴ Note that goal/avoid/neutral words differ across players. A goal word for one player can be an avoid word for another; game states are asymmetric. A clue from THE CLUE GIVER may also target a goal word for THE GUESSER. As long as one does not guess an avoid word from the opposing player, the game continues. See §3.1.
4.2.2 Framing Guess Choice
Finally, THE GUESSER also provides a framing rationale for each guess, framing the clue with respect to that guess (5).
4.3 Predicting Pragmatic Success
So far, our tasks focus on replicating elements of a game turn: the Selected Guesses task (§4.2.1), for example, models both incorrect and correct guesses. However, we also wish to understand if an entire turn sequence results in a successful inference; differences in cross-cultural inferences can result in pragmatic failures (Thomas, 1983). We formulate this as binary classification. Importantly, we only consider a guess correct if it is intentional. A guess is intentional if and only if the clue giver listed it as a target. If THE GUESSER selects a goal word that is not a target word, we count it as "incorrect." Like with guess generation, we encode unselected words in the input. Because we are not predicting the guess itself, we also include the target and rationale from THE CLUE GIVER.
4.4 Augmenting with Sociocultural Priors
We hypothesize that players' backgrounds influence Codenames gameplay. To this end, we encode background player information for each task. For each dimension described in §3.4, we encode an attribute/answer pair (e.g. age: 22) for each survey question. Then, we prepend all attributes to the encoded strings for each outlined task ( §4), using a unique token to delimit attributes for THE CLUE GIVER and THE GUESSER.
in_socio = {BOS, GIVER, Clue Giver Attr:A, GUESSER, Guesser Attr:A} + in
If a player did not respond to a specific attribute, we replace the attribute/answer pair with None. From our sociocultural priors (§3.4), we have 5 ablations: Demo_Req, Demo_All, Personality, Morality, and All (concatenating and modeling all ablations). We additionally use no priors as a baseline, using in instead of in_socio to test our hypothesis.
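The construction of in_socio is straightforward to sketch. The delimiter token strings and the attribute ordering below are our assumptions; only the overall prepend-attributes pattern comes from the paper.

```python
def with_sociocultural_priors(encoded_input: str,
                              giver_attrs: dict,
                              guesser_attrs: dict) -> str:
    """Prepend attribute:answer pairs for both players to a task input,
    replacing unanswered attributes with None as in the paper."""
    def render(attrs: dict) -> str:
        return " ".join(f"{k}:{v if v is not None else 'None'}"
                        for k, v in sorted(attrs.items()))
    return (f"<GIVER> {render(giver_attrs)} "
            f"<GUESSER> {render(guesser_attrs)} {encoded_input}")

# e.g. with_sociocultural_priors("<UN> luck soul ... <CLUE> slip",
#                                {"age": 22, "country": "US"},
#                                {"age": 41, "country": None})
```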
5 Experiment Setup
Baselines and Dataset Splits For generation baselines, we use two Seq2Seq models: T5 (Raffel et al., 2020) and BART (Lewis et al., 2020). We optimize the associated language modeling objective across our tasks. Additionally, we experiment with two retrieval baselines for all generation tasks: (1) randomly selecting a generation from the train set, and (2) selecting the nearest inputs via k-NN using pretrained SentenceBERT (Reimers and Gurevych, 2020) or fastText (Bojanowski et al., 2017). Retrieval baselines yield insight into how well off-the-shelf pretrained models capture sociocultural diversity. For classification, we experiment with BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019). Models are base variants, and results are averaged over 5 runs.
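A sketch of the nearest-neighbour retrieval baseline is shown below; the specific SentenceBERT checkpoint and cosine scoring are standard choices on our part, not details from the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed checkpoint

def build_retrieval_baseline(train_inputs, train_outputs):
    """Embed train inputs once; at test time, copy the output of the
    most similar train input (cosine over normalized embeddings)."""
    train_embs = model.encode(train_inputs, normalize_embeddings=True)
    def retrieve(test_input):
        q = model.encode([test_input], normalize_embeddings=True)[0]
        return train_outputs[int(np.argmax(train_embs @ q))]
    return retrieve
```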
For each task, we split clue givers into 80-10-10 train/val/test, since all tasks depend on initial clue giver choices. Importantly, a single clue giver's data is not distributed across splits, since clue givers may reuse clues/strategies.
Evaluation Metrics
We use a range of metrics to evaluate generation tasks. Rationale generation tasks (Target §4.1.3 & Guess §4.2.2) output entire sentences; therefore, we report F-1 scores from ROUGE-(1, 2, L) (Lin, 2004), BLEU (Papineni et al., 2002), and BERTScore (Zhang et al., 2020). For tasks that generate a single word or a set of words where order does not matter (Guess Selection §4.2.1; Clue Generation §4.1.2), we report only ROUGE-1 and averaged word vector (fastText) cosine similarity.
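The averaged word-vector cosine metric can be sketched as follows; averaging over whitespace tokens and the cc.en.300.bin checkpoint are our reading of the setup.

```python
import numpy as np
import fasttext  # pip install fasttext; model file downloaded separately

ft = fasttext.load_model("cc.en.300.bin")  # assumed pretrained vectors

def avg_vector_cosine(reference: str, generation: str) -> float:
    """Cosine similarity between the mean fastText vectors of two strings."""
    def embed(text: str) -> np.ndarray:
        return np.mean([ft.get_word_vector(w) for w in text.split()], axis=0)
    a, b = embed(reference), embed(generation)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```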
6 Generation Results & Discussion
Including cultural priors improves modeling performance across all tasks. For generation problems, T5 generally outperforms BART, and our retrieval baselines lag behind more complex models. Finally, we conduct a qualitative analysis of 20 random samples from each task.

Picking Targets and Guesses From our results (Table 2), we find that selecting guesses is an easier modeling task than picking target words, likely because the input for selecting a guess contains the clue word. Intuitively, selecting target words is more arbitrary than selecting a guess from a clue, especially since our generation task does not enforce guess correctness. Our models reflect this observation. Guess Selection has R-1 scores that are, on average, twice as good as Target Word Selection (Target 34 vs. Guess 66). Furthermore, Guess Selection only requires demographics (Demo_Req) to maximize performance, unlike Morality for Target Words. Regardless, both tasks see R-1 increase by ≈ 2 points over no-prior baselines.
Looking at model outputs between the None and Morality settings, we observe that models generate words like Well/Grace instead of Death/Poison and vice versa, depending on player background.
Generating a Clue for Targets Moving to our clue generation models, we again find that including sociocultural priors improves model performance (Table 3). The highest R-1 score (26.54) occurs when using Morality as a prior, resulting in a ≈ 2 pt. R-1 and 4 pt. cos-similarity increase compared to a no-prior baseline. We also suspect that selecting target words and generating a hint are interrelated processes: annotators are likely thinking about clues/targets in parallel. Therefore, the same Morality prior results in maximized performance.
While there are themes related to Morality in clue differences for a target word (accident → death vs. lucifer; or fair → equal vs. good), we also find that generations are more specific given sociocultural priors. Consider these generated target → clue pairs ✓ with and ✗ without priors:
• match → ✗ game ✓ cricket • bond → ✗ connection ✓ james • undertaker → ✗ funeral ✓ wrestler
Each ✓ example generates a clue that relies on shared cultural background: specifically, knowing that cricket is a sport; that James Bond is a popular character; and that the Undertaker is a wrestler. More details can be found in Appendix C, Table 6.
Clue Generation Errors Across Sociocultural Subtypes Despite jointly modeling cross-cultural information, our performance is far from perfect. Generating successful clues is a core element of Codenames; however, our exact match accuracy on clue generation is only ≈ 26%. To understand errors, we sample 100 generated clues from the Clue Generation task, and identify errors and differences between (socioculturally) generated clues and the ground truth labels.
For 43 samples, we notice that sociocultural priors have no effect on clue generation; the output is identical to the no prior model for the given target word. In these instances, we suspect that our models fail to exploit common ground between a giver/guesser, yielding the same clue as without sociocultural priors. Upon further analysis, we observe that these errors occur frequently (37 samples) when both the clue giver and guesser are white or from North America. Because these demographics are already over-represented in our dataset, we suspect that the model simply ignores over-informative sociocultural priors.
Errors also occur because clues are over (20 instances, e.g. "guevera" instead of "overthrow") or underspecified (13 instances, e.g. "supernatural" instead of "monster") compared to the gold clue. In 21/33 of these instances, there is a demographic mismatch between the clue-giver and guesser: the clue-giver and guesser do not share race/country demographics. In contrast to having no effect, we suspect that models mispredict the common ground between guesser/giver. We also judge 18 generation errors to be of similar specificity to the target word-prefixes/suffixes of the gold label-or completely unrelated to the gold clue (6 instances).
Rationalizing Targets and Guesses
Beyond generating target words and guesses, we ask models to explain how a target or guess is related to a clue word (e.g. James Bond is a movie character). Again, we find that providing contextual priors improves performance (Table 4). For Target Rationale Generation, models see maximized performance when all priors are included, while Guess Rationale generation sees improvements for Morality.
Like with Clue Generation, we find that improvements in Guess Rationale are from increased specificity (e.g. "actors are cast" → "actors are part of a cast"; "money is center" → "money is the center of everything"). While qualitative differences are clear for Guess Rationale, Target Rationale results are more subtle: improvements stem from minor variations in the type of framing ("a kind of" vs. "a type of") used by the annotator. Additional generations can be found in Appendix C, Table 7.
Classifying Pragmatic Failure We find that classification performance across each architecture is maximized when using sociocultural priors during training (Table 5). While BERT sees reduced improvement (an increase of only +0.02 F-1 over a no-prior baseline), XLNet and RoBERTa see maximum increases of +0.07 and +0.10 respectively. Both XLNet and RoBERTa see these improvements across the same Personality setting. Sociocultural priors improve performance across mirroring and evaluating pragmatic inference.
A Word on Word Vector Baselines Surprisingly, retrieving nearest words using a word vector approach (fastText) performs poorly for both Clue and Guess Generation (Tables 2 & 3). We suspect that pretrained vectors fail to capture sociocultural inference in word association tasks.
7 Conclusion
Language is grounded in rich sociocultural context. To underscore this context, we propose a setting that captures the diversity of pragmatic inference across sociocultural backgrounds. With our Codenames Duet dataset (7K turns across 156 players), we operationalize cross-cultural pragmatic inference. Across our experiments, we detail improvements in mirroring/evaluating inferences when using sociocultural priors. Our work highlights how integrating these priors can align models toward more socially relevant behavior.
Limitations
Cross-Cultural Inference Beyond Codenames
Our work explores sociocultural pragmatic inference in a very limited setting, using a core vocabulary of just 100 words. Despite this limitation, we find significant diversity in our dataset; furthermore, our models successfully capture these diverse inferences. While a limitation of our work is its focus on a single setting, we expect domains outside of Codenames to see similar variance. Understanding and highlighting miscommunication in dialog-due to culture-dependent misinterpretation-is one such extension. These domains are likely much nosier than Codenames; we urge future work to further investigate them.
Spurious Correlations across Sociocultural Factors Across all tasks but one (Target Rationale Generation §4.1.3), jointly modeling all sociocultural priors does not result in the highest performing model. Because our sociocultural factors already correlate with each other ( §3.4), we suspect that modeling all features may be redundant, adding spurious correlations and resulting in overfitting. Improved modeling methodology and careful regularization may address these issues; we leave these experiments for future work.
Bigger Models and Task-Specific Modeling Currently, we evaluate small Seq2Seq models due to computational constraints; however, evaluation of 0-shot and few-shot performance of larger language models (e.g. GPT-3) is necessary. Given the changing state of the Codenames board, along with evidence that LLMs struggle with theory-of-mind-esque perspective taking (Sap et al., 2022), our dataset can serve as a challenging benchmark for sociocultural understanding. However, successfully encoding game state into prompts for LLMs may require experimentation. Finally, our current task formulation and modeling setup are straightforward: we simply encode all information in-context and do not assume recursive reasoning as in RSA (Goodman and Frank, 2016). Future work can explore these directions.
Human Evaluations Our evaluation is limited to automatic metrics and qualitative analysis. Evaluating cross-cultural generation depends on the evaluator's own culture. Each generation depends on the player's sociocultural background; finding evaluators who match the player may be prohibitive.
Broadly, our work models user background to determine the choices they make. While we focus on a fairly harmless setting (Codenames), our operationalization can be used in harmful ways (e.g. tracking and modeling user behavior without consent). Future work that uses sociocultural information should only be applied to settings where there is no foreseeable harm to end-users.
Furthermore, learning sociocultural associations can introduce positive and negative stereotypes; documenting and reducing harmful stereotypes is an important avenue for future work. Finally, we emphasize that our work is not evidence for linguistic determinism: sociocultural variation in language can influence but not determine thought.
Figure 1: An example interaction where differences in sociocultural background result in misinterpretation.
Figure 2: Age (left) and race (right) across our annotators. Most of our annotators are between 30-45 and are White; however, we still see moderate representation across other racial groups and ages.
Figure 3: Big 5 personality (John et al., 1991) results across annotators. Each personality dimension has a standard deviation ≈ 1, indicating reasonable diversity across our annotator pool.
Figure 4: Self-reported political leaning (left) and Haidt and Graham (2007)'s Moral Foundations Theory (right) across annotators. A majority of our workers are liberal (57%), 39% are conservative, and the remaining 5% are libertarian. As observed in Haidt (2012), values like loyalty, authority, and sanctity are higher for conservative-leaning annotators, while fairness is higher for liberal annotators (p < 0.05, t-test).

Moral and Political Leaning (Figure 4) also influences decision-making processes. Therefore, we asked annotators to self-report their political leaning (liberal, conservative, libertarian, etc.). While political leaning captures broad elements of annotator values, Haidt and Graham (2007)'s widely adopted Moral Foundations Theory (MFT) deconstructs values into individual foundations (Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation). Differences in each foundation can stem from cultural variation (Haidt, 2012). To record annotator leaning on MFT, we administer an abridged version of the Moral Foundations Questionnaire (Graham et al., 2008), which reports each dimension on a 5-point Likert scale (see Appendix D.2.3). Later, we refer to all recorded features as Morality.
Table 1: Tasks associated with a turn in Codenames. THE CLUE GIVER starts by selecting information to encode (in the form of a clue), and THE GUESSER decodes clues through guesses. In our experiments, we evaluate models with and without sociocultural priors. Task formulation (generation/classification) is underlined.

CLUE GIVER
(1) Target Words. Generate, from the goal p_i words, a subset of targets t_i; targets are used to generate a single clue word. Input: {goal} = {BOS, p1, p2, ..., pn, EOS}. Output: {targets} = {BOS, t1, t2, ..., tm, EOS}. N = 7,961.
(2) Generating a Clue. Generate a one-word clue c_i that relates the selected target words while avoiding avoid a_i and neutral n_i words. Input: {avoid, neutral, targets} = {BOS, AVO, a1, ..., ao, NEU, n1, ..., nn, TGT, t1, ..., tm, EOS}. Output: {clue} = {BOS, ci, EOS}. N = 7,703.
(3) Framing a Clue. Generate reasoning r that frames a candidate clue word c_i w.r.t. a target word t_i from the set of targets. Input: {targets, clue, target} = {BOS, TGTS, t1, ..., tn, CLUE, ci, TGT, ti, EOS}. Output: {rationale} = {BOS, r, EOS}. N = 9,519.

GUESSER
(4) Selecting Guess Words. Generate a series of guesses {g1, ..., gm} from the unselected words given a clue c_i. Input: {unselected, clue} = {BOS, UN, u1, ..., un, CLUE, ci, EOS}. Output: {guesses} = {BOS, g1, g2, ..., gm, EOS}. N = 7,703.
(5) Framing Guesses. Generate reasoning r that frames a guess g_i (from all guesses) w.r.t. clue c_i. Input: {guesses, clue, guess} = {BOS, GUESSES, g1, ..., gn, CLUE, ci, GUESS, gi, EOS}. Output: {rationale} = {BOS, r, EOS}. N = 9,382.

BOTH
Predict Correct Guess. Classify whether THE CLUE GIVER's message (using target, rationale, and clue) is correctly interpreted by THE GUESSER. Input: {unselected, target, rationale, clue} = {BOS, UN, g1, ..., gn, TR, ti, ri, CLUE, ci, EOS}. Output: {T, F}. N = 9,519.
Table 2: Target (§4.1.1) & Guess (§4.2.1) Selection Generation Results. We report only R-1 scores, since tasks must contain exact single-word matches to reference labels. Target Selection is maximized when using Morality priors, while Guess Selection is maximized by using only Demo_Req.
Table 3: Clue Generation Results (§4.1.2). We report R-1 scores and fastText cosine similarities between the reference and generation, since outputs must be semantically close to or exactly match the reference labels. We find that Morality and All maximize performance over our metrics.
Table 4: Framing Generation Results for Target (§4.1.3) and Guess (§4.2.2) words. We find that the best models with sociocultural priors universally outperform their baseline counterparts. For Target Rationale Generation, jointly modeling all features yields the highest improvements; Guess Rationale Generation sees improvements when using Morality priors. Guess Rationale performance sees higher relative/absolute improvement over baselines compared to Target Rationale Generation.
Priors | Random | BERT | RoBERTa | XLNet
None | 0.50 | 0.57 | 0.57 | 0.57
↓ With Sociocultural Priors
Demo_Req | - | 0.52 | 0.55 | 0.52
Demo_All | - | 0.59 | 0.63 | 0.62
Personality | - | 0.57 | 0.67 | 0.64
Morality | - | 0.57 | 0.64 | 0.61
All | - | 0.57 | 0.65 | 0.63
Table 5: Macro F-1 scores for Predicting Pragmatic Success (§4.3): models must predict whether a guesser will guess correctly given the target word, target rationale, and clue. We use base variants of all models and experiment with ablations across different background characteristics.
Basil Bernstein. 2003. Class, codes and control: Applied studies towards a sociology of language, volume 2. Psychology Press.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Noam Brown and Tuomas Sandholm. 2017. Libratus: Beating top humans in no-limit poker. In Neural Information Processing Systems.
Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.
Chris Callison-Burch, Gaurav Singh Tomar, Lara J Martin, Daphne Ippolito, Suma Bailis, and David Reitter. 2022. Dungeons and Dragons as a dialog challenge for artificial intelligence. ArXiv preprint, abs/2210.07109.
Herbert H Clark. 1996. Using language. Cambridge University Press.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Baxter S Eaves Jr, Naomi H Feldman, Thomas L Griffiths, and Patrick Shafto. 2016. Infant-directed speech is consistent with teaching. Psychological Review, 123(6):758.
Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press.
Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067-1074.
Susan T Fiske and Martha G Cox. 1979. Person concepts: The effect of target familiarity and descriptive purpose on the process of describing others. Journal of Personality, 47(1):136-161.
Michael Franke and Gerhard Jäger. 2016. Probabilistic pragmatics, or why Bayes' rule is probably important for pragmatics. Zeitschrift für Sprachwissenschaft, 35(1):3-44.
Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Unified pragmatic models for generating and following instructions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1951-1963, New Orleans, Louisiana. Association for Computational Linguistics.
Liye Fu, Susan Fussell, and Cristian Danescu-Niculescu-Mizil. 2020. Facilitating the communication of politeness through fine-grained paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5127-5140, Online. Association for Computational Linguistics.
Aparna Garimella, Carmen Banea, and Rada Mihalcea. 2017. Demographic-aware word associations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2285-2295, Copenhagen, Denmark. Association for Computational Linguistics.
Noah D Goodman and Michael C Frank. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818-829.
Jesse Graham, Brian A Nosek, Jonathan Haidt, Ravi Iyer, Koleva Spassena, and Peter H Ditto. 2008. Moral foundations questionnaire. Journal of Personality and Social Psychology.
Lisa J Green. 2002. African American English: A linguistic introduction. Cambridge University Press.
Thomas L Griffiths, Charles Kemp, and Joshua B Tenenbaum. 2008. Bayesian models of cognition.
William B Gudykunst and Young Yun Kim. 1984. Communicating with strangers: An approach to intercultural communication. Addison Wesley Publishing Company.
Jonathan Haidt. 2012. The righteous mind: Why good people are divided by politics and religion. Vintage.
Jonathan Haidt and Jesse Graham. 2007. When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1):98-116.
Evelyn Hatch et al. 1992. Discourse and language education. Cambridge University Press.
Robert XD Hawkins, Andreas Stuhlmüller, Judith Degen, and Noah D Goodman. 2015. Why do you ask? Good questions provoke informative answers. In CogSci.
Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, et al. 2022. Challenges and strategies in cross-cultural NLP. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997-7013.
Geert H Hofstede. 2001. Culture's consequences: Comparing values, behaviors, institutions and organizations across nations. Sage.
Samee Ibraheem, Gaoyue Zhou, and John DeNero. 2022. Putting the con in context: Identifying deceptive actors in the game of Mafia. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 158-168, Seattle, United States. Association for Computational Linguistics.
Julian Jara-Ettinger, Hyowon Gweon, Laura E Schulz, and Joshua B Tenenbaum. 2016. The naïve utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences, 20(8):589-604.
Catalina Jaramillo, Megan Charity, Rodrigo Canaan, and Julian Togelius. 2020. Word Autobots: Using transformers for word association in the game Codenames. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, pages 231-237.
Alan Jern, Christopher G Lucas, and Charles Kemp. 2017. People learn other people's preferences through inverse decision-making. Cognition, 168:46-64.
Oliver P John, Eileen M Donahue, and Robert L Kentle. 1991. Big Five Inventory. Journal of Personality and Social Psychology.
Aditya Joshi, Pushpak Bhattacharyya, Mark Carman, Jaya Saraswati, and Rajita Shukla. 2016. How do cultural differences impact the quality of sarcasm annotation?: A case study of Indian annotators and American text. In Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 95-99, Berlin, Germany. Association for Computational Linguistics.
Jihen Karoui, Farah Benamara, Véronique Moriceau, Viviana Patti, Cristina Bosco, and Nathalie Aussenac-Gilles. 2017. Exploring the impact of pragmatic phenomena on irony detection in tweets: A multilingual corpus study. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 262-272, Valencia, Spain. Association for Computational Linguistics.
Abhilasha Ashok Kumar, Ketika Garg, and Robert Hawkins. 2021. Contextual flexibility guides communication in a cooperative language game. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 43.
Yuwei Bao, Sayan Ghosh, and Joyce Chai. 2022. Learning to mediate disparities towards pragmatic communication.
Tonia Crawford, Sally Candlin, and Peter Roger. 2017. New perspectives on understanding cultural diversity in nurse-patient communication. Collegian, 24(1):63-69.
We use the text-davinci-003 variant from OpenAI. Without GPT-3 normalization, we find that model performance is artificially inflated.
Acknowledgements

We are thankful to the members of SALT Lab for their helpful feedback on the draft. We are also thankful for the helpful feedback from Jing Huang and Rishi Bommasani. Caleb Ziems is supported by the NSF Graduate Research Fellowship under Grant No. DGE-2039655. This research was supported, in part, by MURI-ONR-N00014-20-S-F003 on Persuasion, Identity, and Morality in Social-Cyber Environments, as well as a DARPA grant HR00112290103/HR0011260656.

A Finalized Codenames Word List

We sample from the following list of 100 words: luck, grace, soul, fair, life, pass, revolution, change, charge, degree, force, code, genius, compound, time, wake, plot, draft, ghost, play, part, spell, well, point, link, mass, disease, sub, state, alien, space, mine, ray, millionaire, agent, bond, unicorn, figure, war, cycle, boom, sound, trip, centaur, death, club, crash, angel, cold, center, spring, round, date, press, cast, day, row, wind, fighter, embassy, beat, leprechaun, comic, pitch, mount, march, fall, undertaker, green, switch, strike, king, superhero, capital, slip, lead, check, lap, mammoth, air, match, spy, roulette, contract, witch, stock, light, drop, spot, novel, vacuum, cover, scientist, tag, conductor, field, racket, poison, ninja, opera.

B Reformatting Rationales using GPT-3

Some annotators wrote verbose rationales (I think fall happens after you slip), while other annotators were more succinct (fall after slip). To prevent models from learning grammar variation across annotators, we normalize our text using GPT-3. We use the following prompt, with hand-written few-shot examples. Some of the examples are unchanged; we include them in the prompt to demonstrate positive examples to the model.

    Normalize the text, removing determiners like "the" and "a" at the start of a sentence, along with any pronouns. Correct spelling and grammar mistakes. If possible, the final text should be formatted with the clue first and the target last or the target first and the clue last.
    clue: "sub" target: "sandwich" text: "you can make a sub, which is a type of sanwich" output: "sub is a type of sandwich"
    clue: "die" target: "cliff" text: "you may die if you fall off a cliff" output: "die if fall off a cliff"
    clue: "explosion" target: "boom" text: "it makes sound" output: "explosion makes boom"
    clue: "superman" target: "superhero" text: "most famous superhero" output: "superman is most famous superhero"
    clue: "night" target: "club" text: "i love night club" output: "night club is a kind of club"
    clue: "horn" target: "air" text: "an air horn is a type of horn" output: "air horn is a type of horn"
    clue: "ivy" target: "poison" text: "poison ivy is a well known plant" output: "poison ivy is a well known plant"
    clue: "month" target: "march" text: "march is a month" output: "march is a month"
    clue: "{clue}" target: "{target}" text: "{text}" output: "

C Example Generations

Here, we include example generations for a subset of our tasks, illustrating the influence of sociocultural factors on generated Codenames gameplay.

C.1 Clue Generation

Below, we highlight more clues generated with/without sociocultural priors. Note how some of the without generations are euro-centric: space → nasa, {revolution, king} → war; adding priors creates more specific clues. However, this isn't always true: target words {pass, check} → leads to poker instead of overtake when conditioned on priors.
We suspect that the average player in our pool is not aware of how {pass, check} are associated with poker, resulting in a more generic generation.

C.2 Clue Framing

Additional generations can be found in Table 7. Again, we observe that adding sociocultural priors increases relation specificity.

D Annotation Task Details

D.1 Qualification Test

To qualify for the HIT, workers were required to complete a consent form detailing dataset collection and release, and were expected to watch an instructional video outlining game rules. Then they had to pass the following qualifying test, answering at least 6 out of 7 questions correctly.

1. True or False: "angry dog" is an example of a clue you could give.
(a) "a computer has a mouse" (b) "a doctor is smart" (c) "a dog is a kind of animal" (d) "a disease causes people to be sick"

[Table: example clue-target rationales, e.g. for tennis/racket: "tennis has racket", "a racket is used in tennis", "tennis uses a racket"; for day/month: "day is month", "month has many days", "30 days in a month".]

The Big Five Inventory items were as follows:

1. I see myself as someone who does a thorough job.
2. I see myself as someone who is reserved.
3. I see myself as someone who is outgoing, sociable.
4. I see myself as someone who gets nervous easily.
5. I see myself as someone who has few artistic interests.
6. I see myself as someone who is relaxed, handles stress well.
7. I see myself as someone who tends to find fault with others.
8. I see myself as someone who is generally trusting.
9. I see myself as someone who tends to be lazy.
10. I see myself as someone who has an active imagination.

D.2.3 Moral Foundations and Political Leaning

Moral Foundations Theory. Following Haidt and Graham (2007), we use the five-foundation theory of moral reasoning to understand our players' values and leanings. This theory does not give explicit definitions for the five foundations, but following recent work by Ziems et al. (2022), we can assume the following definition sketches:

1. Care: wanting someone or something to be safe, healthy, and happy. Harm: wanting someone or something to suffer physically, emotionally, socially, intellectually, or spiritually.
2. Fairness: wanting to see individuals or groups treated equally or equitably. Cheating: wanting to see unfairness, injustice, bias, exclusion, or discrimination.
3. Loyalty: wanting unity and seeing people keep promises or obligations to an in-group. Betrayal: wanting to see people lie, abandon an in-group, or become isolated and divided.
4. Authority: wanting to respect social roles, duties, privacy, peace, and order. Subversion: wanting to see people disrespect, disobey or cause disorder, challenge the status quo, and do what they do not have permission to do.
5. Sanctity: wanting people and things to be clean, pure, innocent, and holy. Degradation: wanting people to follow selfish or crude desires and do things that make them or others dirty, corrupt, sick, repulsive, or perverted.

Moral Foundations Questionnaire. We use the associated Moral Foundations Questionnaire, which we shortened to 12 questions as follows. Please answer 12 questions about "right" and "wrong." The prompts are the same in each case, but the considerations are different.

D.3 Instructions for Writing Rationales

We explain that rationales should use at least 3 words to describe the connection between the clue and the target. Annotators were encouraged to be creative while trying to use one of the structures below.
We imposed these structures for the sake of regularity.

E Training and Hyperparameters

For our generation tasks, we use 5e-5 as our initial learning rate and perform a hyperparameter search over {1...20} epochs. For classification, we use the same splits and perform a hyperparameter sweep over learning rates ({1e-4, 5e-4, 1e-5, 5e-5, 1e-6, 5e-6}) and epochs ({1...15}). All models were trained on an NVIDIA A100 GPU. Across all experiments, GPU compute time was around 4-5 days.

F Artifact Details

We use several models in our paper for their intended retrieval or generation task. Each model has its own license and number of parameters, listed below:

1. T5 (Raffel et al., 2020), 220M parameters, is under the Apache 2.0 License.
2. BART (Lewis et al., 2020).

We plan on releasing CULTURAL CODES and corresponding code under Creative Commons Attribution Share Alike 4.0 International. While our released dataset has extensive demographic information, we do not collect any identifiers that can uniquely isolate a person (e.g. name, MTurk ID, etc.)
Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Austin, Texas. Association for Computational Linguistics.
Fereshte Khani, Noah D. Goodman, and Percy Liang. 2018. Planning, inference and pragmatics in sequential language games. Transactions of the Association for Computational Linguistics, 6:543-555.
Andrew Kim, Maxim Ruzmaykin, Aaron Truong, and Adam Summerville. 2019. Cooperation and codenames: Understanding natural language processing via codenames. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 15, pages 160-166.
Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2020. Will I sound like me? Improving persona consistency in dialogues through pragmatic self-consciousness. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 904-916, Online. Association for Computational Linguistics.
Alena Korshuk. 2007. Learning more about cultures through free word association data.
Collin J Kovacs, Jasper M Wilson, and Abhilasha A Kumar. 2022. Fast and frugal memory search for communication. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44.
Divya Koyyalagunta, Anna Sun, Rachel Lea Draelos, and Cynthia Rudin. 2021. Playing codenames with language graphs and word embeddings. Journal of Artificial Intelligence Research, 71:319-346.
Abhilasha A Kumar, Mark Steyvers, and David A Balota. 2021. Semantic memory search and retrieval in a novel cooperative word game: A comparison of associative and distributional semantic models. Cognitive Science, 45(10):e13053.
William Labov. 2011. Principles of linguistic change, volume 3: Cognitive and cultural factors. John Wiley & Sons.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Angeline Lillard. 1998. Ethnopsychologies: cultural variations in theories of mind. Psychological Bulletin, 123(1):3.
Angeline Lillard. 1999. Developing a cultural theory of mind: The CIAO approach. Current Directions in Psychological Science, 8(2):57-61.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv preprint, abs/1907.11692.
Kate Loveys, Jonathan Torrez, Alex Fine, Glen Moriarty, and Glen Coppersmith. 2018. Cross-cultural differences in language markers of depression online. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 78-87, New Orleans, LA. Association for Computational Linguistics.
George A Miller. 1974. Psychology, language, and levels of communication. In Human Communication. John Wiley.
George A. Miller. 1992. WordNet: A lexical database for English. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992.
Joan G Miller. 1984. Culture and the development of everyday social explanation. Journal of Personality and Social Psychology, 46(5):961.
Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5:325-338.
Aaron J Moss, Cheskie Rosenzweig, Jonathan Robinson, and Leib Litman. 2020. Demographic stability on Mechanical Turk despite COVID-19. Trends in Cognitive Sciences, 24(9):678-680.
Ira Noveck. 2018. Experimental pragmatics: The making of a cognitive science. Cambridge University Press.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Christopher Potts. 2012. Goal-driven answers in the cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics, pages 1-20. Cascadilla Proceedings Project.
James E Purpura. 2004. Assessing grammar, volume 8. Cambridge University Press.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Yael Ravin and Claudia Leacock. 2000. Polysemy: Theoretical and computational approaches. OUP Oxford.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512-4525, Online. Association for Computational Linguistics.
Maarten Sap, Ronan LeBras, Daniel Fried, and Yejin Choi. 2022. Neural theory-of-mind? On the limits of social intelligence in large LMs. ArXiv preprint, abs/2210.13312.
Richard A Shweder. 1984. Anthropology's romantic rebellion against the enlightenment, or there's more to thinking than reason and evidence. Culture Theory: Essays on Mind, Self, and Emotion, pages 27-66.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489.
Felix Soldner, Verónica Pérez-Rosas, and Rada Mihalcea. 2019. Box of lies: Multimodal deception detection in dialogues. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1768-1777, Minneapolis, Minnesota. Association for Computational Linguistics.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 543-553, Berlin, Germany. Association for Computational Linguistics.
Darcy Sperlich, Jaiho Leem, and Eui-Jeen Ahn. 2016. The interaction of politeness systems in Korean learners of French. In Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers, pages 163-171, Seoul, South Korea.
Stefanie Stadler. 2012. Cross-cultural pragmatics. The Encyclopedia of Applied Linguistics, pages 1-8.
Naoko Taguchi. 2012. Context, individual differences and pragmatic competence. Multilingual Matters.
Jenny Thomas. 1983. Cross-cultural pragmatic failure. Applied Linguistics, 4(2):91-112.
EV Tikhonova. 2014. Linguistic diagnosing of religious relationships through word association responses. In Conference Proceedings of the International Multidisciplinary Scientific Conference on Social Sciences and Arts, volume 3, pages 505-516.
Vlaada Chvátil. 2015. Codenames - rules - Czech Games Edition | boardgame publisher.
Phillip Wolff and Kevin J Holmes. 2011. Linguistic relativity. Wiley Interdisciplinary Reviews: Cognitive Science, 2(3):253-265.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 5754-5764, Vancouver, BC, Canada.
Yuan Yao, Haoxi Zhong, Zhengyan Zhang, Xu Han, Xiaozhi Wang, Chaojun Xiao, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. 2021. Adversarial language games for advanced natural language intelligence. In Proceedings of AAAI.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia. OpenReview.net.
Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, and Prithviraj Ammanabrolu. 2022. An AI dungeon master's guide: Learning to converse and guide with intents and theory-of-mind in dungeons and dragons. ArXiv preprint, abs/2212.10060.
Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3755-3773.
| [] |
[
"Optimized Vectorization Implementation of CRYSTALS-Dilithium",
"Optimized Vectorization Implementation of CRYSTALS-Dilithium"
] | [
"Journal Of L A T E X Class ",
"Files "
] | [] | [] | CRYSTALS-Dilithium is a lattice-based signature scheme to be standardized by NIST as the primary post-quantum signature algorithm. In this work, we make a thorough study of optimizing the implementations of Dilithium by utilizing the Advanced Vector Extension (AVX) instructions, specifically AVX2 and the latest AVX512.We first present an improved parallel small polynomial multiplication with tailored early evaluation (PSPM-TEE) to further speed up the signing procedure, which results in a speedup of 5%-6% compared with the original PSPM Dilithium implementation. We then present a tailored reduction method that is simpler and faster than Montgomery reduction. Our optimized AVX2 implementation exhibits a speedup of 3%-8% compared with the state-of-the-art of Dilithium AVX2 software. Finally, for the first time, we propose a fully and highly vectorized implementation of Dilithium using AVX-512. This is achieved by carefully vectorizing most of Dilithium functions with the AVX512 instructions in order to improve efficiency both for time and for space simultaneously.With all the optimization efforts, our AVX-512 implementation improves the performance by 37.3%/50.7%/39.7% in key generation, 34.1%/37.1%/42.7% in signing, and 38.1%/38.7%/40.7% in verification for the parameter sets of Dilithium2/3/5 respectively. To the best of our knowledge, our AVX512 implementation has the best performance for Dilithium on the Intel x64 CPU platform to date. | null | [
"https://export.arxiv.org/pdf/2306.01989v1.pdf"
] | 259,076,273 | 2306.01989 | cb204a559b86448dae8d0e28bc6840986dd0f624 |
Optimized Vectorization Implementation of CRYSTALS-Dilithium
Index Terms: Post-Quantum Cryptography, Lattice-Based Cryptography, CRYSTALS-Dilithium, AVX2, AVX-512, Software Optimization
CRYSTALS-Dilithium is a lattice-based signature scheme to be standardized by NIST as the primary post-quantum signature algorithm. In this work, we make a thorough study of optimizing the implementations of Dilithium by utilizing the Advanced Vector Extension (AVX) instructions, specifically AVX2 and the latest AVX-512. We first present an improved parallel small polynomial multiplication with tailored early evaluation (PSPM-TEE) to further speed up the signing procedure, which results in a speedup of 5%-6% compared with the original PSPM Dilithium implementation. We then present a tailored reduction method that is simpler and faster than Montgomery reduction. Our optimized AVX2 implementation exhibits a speedup of 3%-8% compared with the state-of-the-art Dilithium AVX2 software. Finally, for the first time, we propose a fully and highly vectorized implementation of Dilithium using AVX-512. This is achieved by carefully vectorizing most of the Dilithium functions with the AVX-512 instructions in order to improve efficiency both in time and in space simultaneously. With all the optimization efforts, our AVX-512 implementation improves the performance by 37.3%/50.7%/39.7% in key generation, 34.1%/37.1%/42.7% in signing, and 38.1%/38.7%/40.7% in verification for the parameter sets of Dilithium2/3/5, respectively. To the best of our knowledge, our AVX-512 implementation has the best performance for Dilithium on the Intel x64 CPU platform to date.
I. INTRODUCTION
With the popularity of authentication and non-repudiation, it is common to construct digital signatures using asymmetric cryptographic techniques. Currently, millions of web servers use digital signatures as part of Transport Layer Security (TLS) [1]-[3], which allows users to verify the server's identity. Both hardware and software vendors rely on digital signatures to guarantee the integrity of their products, and digital signatures are also essential for cybersecurity infrastructure. Most current digital signatures are based on Rivest-Shamir-Adleman (RSA) [4], Elliptic Curve Cryptography (ECC), or the Digital Signature Algorithm (DSA).
However, in the era of rapidly developing quantum computers, traditional public-key cryptosystems and digital signatures appear to be in jeopardy. Using Shor's algorithm [5], an attacker with a sufficiently powerful quantum computer can recover the private key corresponding to an RSA or ECC public key in polynomial time. The National Institute of Standards and Technology (NIST) stated in [6] that by 2030 a 2048-bit RSA key may be broken by a quantum computer within a few hours. As a result, NIST launched a competition to solicit and evaluate standard algorithms for post-quantum cryptography (PQC), including quantum-resistant digital signature algorithms. On July 5th, 2022, NIST announced the first algorithms to be standardized. Three signature schemes were selected: CRYSTALS-Dilithium, FALCON, and SPHINCS+ [7]; among them, CRYSTALS-Dilithium is recommended by NIST as the primary signature algorithm. CRYSTALS-Dilithium is a lattice-based digital signature scheme whose security rests on the Module Learning With Errors (MLWE) [8] and Module Short Integer Solution (MSIS) [9] problems. Most of the operations of Dilithium are arithmetic in a cyclotomic polynomial ring, and the Number Theoretic Transform (NTT) is the common technique used to speed up polynomial multiplication. Dilithium follows the Fiat-Shamir with Aborts paradigm [10]: the signing procedure performs rejection sampling through a series of conditional checks to ensure that the generated signature does not leak information about the secret key.
NIST chose the 64-bit Intel architecture (i.e., x64) as the main benchmarking platform for the NIST PQC candidates. Advanced Vector Extensions (AVX) are vector instruction sets of the x64 Intel architecture [11]; the first AVX instructions were introduced by Intel in 2008. AVX-512 is the newest generation of Intel Advanced Vector Extensions [12]. It provides 32 512-bit vector registers, called zmm registers, which are divided into data lanes on which instructions operate in parallel. Having the widest registers of all AVX generations, AVX-512 offers the highest data-level parallelism of the AVX family; this parallelism follows the "Single Instruction Multiple Data" (SIMD) principle. AVX-512 provides a rich set of permutation instructions and masked loads/stores that are especially efficient for implementing hash functions, the NTT, and rejection sampling, and AVX-512IFMA has the potential to further speed up multiply-and-add operations.
a) Related Work: Work on optimizing Dilithium covers both software and hardware implementations; in this paper, we focus on software. Several software implementations of Dilithium exist. The baseline is the C reference implementation that the CRYSTALS team submitted to NIST [13], which is not optimized and is comparatively slow. Additionally, the CRYSTALS team provides a faster AVX2-optimized version [13] for x64 CPUs. Recent software optimization studies mainly target CPU/GPU environments and embedded systems such as ARM. Ravi et al. [14] presented a signed polynomial representation for the Cortex-M4 and proposed various stack-consumption and speed trade-offs for the signing procedure. Kim et al. [15] presented a method for implementing the NTT multiplications of CRYSTALS-Dilithium using advanced SIMD instructions and vector registers. "Asymmetric multiplication" for matrix-to-vector polynomial multiplication was introduced in [16]. Abdulrahman et al. [17] proposed switching to a smaller prime modulus for small polynomial multiplication in the signing procedure of Dilithium. [18] presented optimizations of Dilithium on the IBM z15 architecture and mentioned employing optimization methods with advanced instruction sets such as AVX-512 as future work. Zheng et al. [19] presented a parallel small polynomial multiplication (PSPM) algorithm that quickly computes the small vector polynomial multiplications in Dilithium, based on which C and ARM Neon implementations were proposed.
For AVX-512 implementations of PQC algorithms, arithmetic building blocks such as large-integer multiplication, Montgomery multiplication, and the NTT have received researchers' attention [20]-[25]. Cheng et al. [26] proposed a highly vectorized implementation of SIKE. [27] presented an AVX-512 implementation that batches CSIDH group actions. [28] presented an AVX-512 implementation of SPHINCS+. Cabral et al. [29] presented an optimized AVX-512 implementation of the SHA-3 family. To date, and to the best of our knowledge, there is still no AVX-512-optimized implementation of Dilithium. It is also interesting to investigate whether the current state-of-the-art AVX2 implementations of Dilithium can be further improved.

b) Contribution: Our contribution is summarized as follows.
1) We present an improved parallel small polynomial multiplication with tailored early evaluation (PSPM-TEE) to further speed up the signing procedure. We implement this algorithm in C, AVX2, and AVX-512.
2) We present a tailored reduction method that is faster than Montgomery reduction and apply it to the first level of NTT(t0) and NTT(t1) for Dilithium2/3/5 and of NTT(y) for Dilithium2.
3) We propose an optimized implementation of the tailored reduction, consisting of only two instructions and utilizing AVX-512IFMA, which saves one instruction and two cycles compared to the implementation using AVX-512F. Compared to Montgomery reduction, the tailored reduction using AVX-512IFMA saves up to two instructions and six cycles.
4) We present an optimized AVX2 implementation of Dilithium by integrating our improved PSPM-TEE, tailored reduction, and lazy reduction techniques. Our optimized AVX2 implementation exhibits a speedup of 3%-8% compared with the state-of-the-art Dilithium AVX2 software.
5) For the first time, we propose a fully and highly vectorized implementation of Dilithium using AVX-512. We carefully vectorize most Dilithium functions, especially the performance bottlenecks, including NTT, NTT⁻¹, Montgomery reduction, hashing, and parallel rejection sampling. In particular, we present a space-efficient implementation of parallel rejection sampling using AVX-512 without a large precomputed table, as the space consumption would otherwise be infeasible when directly porting the AVX2 approach. With all these optimization efforts, our AVX-512 implementation improves the performance by 37.3%/50.7%/39.7% in key generation, 34.1%/37.1%/42.7% in signing, and 38.1%/38.7%/40.7% in verification for the parameter sets of Dilithium2/3/5, respectively. To the best of our knowledge, our AVX-512 implementation achieves the best performance for Dilithium on the Intel x64 CPU platform thus far.

c) Code: We will open-source our code.

d) Structure of this paper: This paper is organized as follows. Section II reviews some preliminaries. Section III presents an improved PSPM with early evaluation. Section IV introduces the proposed tailored reduction. Section V deals with the AVX-512 implementation of Dilithium and presents various optimization strategies. In Section VI we go through the performance results and comparison.
II. PRELIMINARIES
A. Notation
We denote polynomials by lowercase Latin letters such as c (with c_i denoting the i-th coefficient of c), vectors of polynomials by bold lowercase letters such as t, and matrices by bold uppercase letters such as A. Objects transformed to the NTT domain carry a hat, e.g., $\hat{c}$, $\hat{\mathbf{t}}$, and $\hat{\mathbf{A}}$.
Let $\mathbb{Z}_q \stackrel{\text{def}}{=} \mathbb{Z}/q\mathbb{Z}$, $R \stackrel{\text{def}}{=} \mathbb{Z}[x]/(x^n+1)$, and $R_q \stackrel{\text{def}}{=} \mathbb{Z}_q[x]/(x^n+1)$. An element $a_i \in \mathbb{Z}_q$ is represented by one element of $\{-\frac{q-1}{2}, \cdots, 0, \cdots, \frac{q-1}{2}\}$. A polynomial $a \in R_q$ is written as $a = \sum_{i=0}^{n-1} a_i \cdot x^i$, where $a_i \in \mathbb{Z}_q$. The operator ◦ denotes coefficient-wise multiplication, and the operator || concatenates two inputs into one byte stream. For $a_i \in \mathbb{Z}_q$, $\|a_i\|_\infty$ denotes $|a_i \bmod^{\pm} q|$, the absolute value of the centered representative of $a_i$ modulo $q$. For a finite set $S$ or a distribution $D$, $x \leftarrow S$ denotes uniform random sampling of an element from $S$, and $x \leftarrow D$ denotes sampling $x$ according to $D$. $\lfloor z \rfloor$ means rounding $z$ down and $\lfloor z \rceil$ means rounding $z$ to the nearest integer.
B. CRYSTALS-Dilithium Signature Scheme
CRYSTALS-Dilithium is a post-quantum digital signature algorithm based on the hardness of the MSIS and MLWE lattice problems. Algorithms 1, 2, and 3 specify the CRYSTALS-Dilithium key generation, signature generation, and signature verification, respectively. The polynomial ring in Dilithium is $\mathbb{Z}_q[x]/(x^n+1)$ with $n = 256$ and $q = 8380417$.
The function NumberOfOne counts the number of 1's in a vector of polynomials. For details on the seed-expansion functions ExpandA, ExpandS, and ExpandMask, the rounding functions Power2Round, HighBits, LowBits, and Decompose, the hint functions MakeHint and UseHint, and the function SampleInBall that generates the challenge polynomial c, the reader may refer to the Dilithium specification [30].
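As one concrete example of these helper functions, the following is a minimal C sketch of Power2Round following the specification: it splits a coefficient t into t = t1·2^d + t0 with −2^(d−1) < t0 ≤ 2^(d−1), where d = 13 in Dilithium.

    #include <stdint.h>

    #define D 13  /* number of dropped bits, as in the Dilithium specification */

    /* Splits t into a high part t1 and a centered low part t0 such that
     * t = t1*2^D + t0 and -2^(D-1) < t0 <= 2^(D-1). */
    static void power2round(int32_t *t1, int32_t *t0, int32_t t)
    {
        *t1 = (t + (1 << (D - 1)) - 1) >> D;  /* round-to-nearest high bits */
        *t0 = t - (*t1 << D);                 /* centered remainder         */
    }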
Algorithm 1 Dilithium.KeyGen()
Input: ζ ← {0,1}^256
Output: public and secret keys (pk = (ρ, t1), sk = (ρ, K, tr, s, e, t0))
1: (ρ, ρ′, K) ∈ {0,1}^256 × {0,1}^512 × {0,1}^256 := H(ζ)    ▷ H is instantiated as SHAKE-256
2: Â ∈ R_q^{k×ℓ} := ExpandA(ρ)    ▷ A is generated and stored in NTT representation as Â
3: (s, e) ∈ S_η^ℓ × S_η^k := ExpandS(ρ′)
4: t := As + e    ▷ compute As as NTT⁻¹(Â ◦ NTT(s))
5: (t1, t0) := Power2Round_{q,d}(t)
6: tr ∈ {0,1}^256 := H(ρ || t1)
7: return (pk = (ρ, t1), sk = (ρ, K, tr, s, e, t0))
C. Hashing

The hash functions used by Dilithium are two eXtendable Output Functions (XOFs), namely SHAKE-256 and SHAKE-128 [31]. An XOF maps an arbitrary-length bit string to a string of arbitrarily many bits. Dilithium uses these XOFs mainly to generate the random bytes of SHAKE-128 for sampling the matrix A and the random bytes of SHAKE-256 for sampling s, e, and y.

Algorithm 2 Dilithium.Sign(sk, M)
Input: secret key sk = (ρ, K, tr, s, e, t0), message M ∈ {0,1}*
Output: signature σ = (c̃, z, h)
1: Â ∈ R_q^{k×ℓ} := ExpandA(ρ)
2: µ ∈ {0,1}^512 := H(tr || M)
3: κ := 0, (z, h) := ⊥
4: ρ′ ∈ {0,1}^512 := H(K || µ)
5: while (z, h) = ⊥ do
6:     y ∈ S̃_{γ1}^ℓ := ExpandMask(ρ′, κ)
7:     w := Ay    ▷ compute as NTT⁻¹(Â ◦ NTT(y))
8:     w1 := HighBits_q(w, 2γ2)
9:     c̃ ∈ {0,1}^256 := H(µ || w1)
10:    c ∈ B_τ := SampleInBall(c̃)    ▷ store c in NTT representation as ĉ = NTT(c)
11:    z := y + cs    ▷ compute cs as NTT⁻¹(ĉ ◦ ŝ)
12:    r0 := LowBits_q(w − ce, 2γ2)    ▷ compute ce as NTT⁻¹(ĉ ◦ ê)
13:    if ||z||∞ ≥ γ1 − β or ||r0||∞ ≥ γ2 − β then (z, h) := ⊥
14:    else
15:        h := MakeHint_q(−ct0, w − ce + ct0, 2γ2)
16:        if ||ct0||∞ ≥ γ2 or NumberOfOne(h) > ω then (z, h) := ⊥
17:    κ := κ + ℓ
18: return σ = (c̃, z, h)
Algorithm 3 Dilithium.Verify(pk, M, σ = (c̃, z, h))
1: Â ∈ R_q^{k×ℓ} := ExpandA(ρ)
2: µ ∈ {0,1}^512 := H(H(ρ || t1) || M)
3: c := SampleInBall(c̃)
4: w′1 := UseHint_q(h, Az − ct1 · 2^d, 2γ2)    ▷ compute as NTT⁻¹(Â ◦ NTT(z) − NTT(c) ◦ NTT(t1 · 2^d))
5: return [[c̃ = H(µ || w′1)]] and [[||z||∞ < γ1 − β]] and [[NumberOfOne(h) ≤ ω]]
D. Number Theoretic Transform in Dilithium
Polynomial multiplication is one of the most expensive operations in many lattice-based cryptographic schemes, and the number theoretic transform (NTT) is the commonly used technique to accelerate it. In Dilithium, the modulus q is chosen so that q ≡ 1 (mod 2n), and thus there exists a primitive 2n-th root of unity in $\mathbb{Z}_q$. Concretely, the recommended parameter setting is q = 8380417 and n = 256 for the sake of security, and the chosen primitive 512th root of unity is r = 1753. The NTT algorithm maps
$$f = f_0 + f_1 x + \cdots + f_{255}x^{255} \in \mathbb{Z}_q[x]/(x^{256}+1)$$
to
$$\big(f \bmod (x^{128}-r^{128}),\; f \bmod (x^{128}+r^{128})\big) = \big((f_0 + r^{128}f_{128}) + \cdots + (f_{127} + r^{128}f_{255})x^{127},\; (f_0 - r^{128}f_{128}) + \cdots + (f_{127} - r^{128}f_{255})x^{127}\big)$$
$$\in \mathbb{Z}_q[x]/(x^{128}-r^{128}) \times \mathbb{Z}_q[x]/(x^{128}+r^{128})$$
using the FFT trick [32]. We call this transformation the forward NTT (denoted NTT from here on). To transform back from the NTT domain to the regular domain, the inverse NTT (denoted NTT⁻¹) is computed. By recursively applying this splitting, f is transformed into its NTT form
$$\mathrm{NTT}(f) = \hat{f} = (\hat{f}_0, \cdots, \hat{f}_{255}) \in \mathbb{Z}_q^{256}, \quad \text{where } \hat{f}_i = f \bmod (x - r^{2i+1}) = f(r^{2i+1}), \; i = 0, \cdots, 255.$$
Since the NTT is a ring isomorphism, we have
$$f \cdot g = \mathrm{NTT}^{-1}(\mathrm{NTT}(f) \circ \mathrm{NTT}(g)).$$
Note that the direct output of NTT/NTT⁻¹ may not be in the natural order presented above, but in a "bit-reversed" order. However, each polynomial undergoes bit reversal twice during an NTT multiplication, once in the NTT and once in the NTT⁻¹, so the result finally comes out in the expected natural order. The core operation that splits a polynomial in $\mathbb{Z}_q[x]/(x^{256}+1)$ into polynomials in $\mathbb{Z}_q[x]/(x^{128}-r^{128})$ and $\mathbb{Z}_q[x]/(x^{128}+r^{128})$ is the Cooley-Tukey (CT) butterfly [33]. The NTT applies 128 CT butterflies to pairs of coefficients in every iteration of splitting, and each iteration is referred to as a level. Figure 1(a) depicts the CT butterfly. One can invert the FFT trick using the Gentleman-Sande (GS) butterfly [34], depicted in Figure 1(b).
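For reference, the following is a minimal scalar sketch of the eight-level forward NTT built from CT butterflies. The table zetas[] is assumed to hold the twiddle factors in bit-reversed order, as in the reference implementation, and the modular multiplication is done naively with a 64-bit product; the vectorized versions discussed later replace this with Montgomery or tailored reduction.

    #include <stdint.h>

    #define N 256
    #define Q 8380417

    extern const int32_t zetas[N];  /* assumed twiddle table, bit-reversed order */

    /* Scalar forward NTT over Z_q[x]/(x^256 + 1): eight levels of CT
     * butterflies, with the pair distance halving from 128 down to 1. */
    void ntt_scalar(int32_t a[N])
    {
        unsigned len, start, j, k = 0;
        for (len = 128; len > 0; len >>= 1) {            /* one level per len */
            for (start = 0; start < N; start += 2 * len) {
                int64_t zeta = zetas[++k];
                for (j = start; j < start + len; j++) {
                    int32_t t = (int32_t)((zeta * a[j + len]) % Q); /* t = zeta*b mod q */
                    a[j + len] = a[j] - t;                          /* b' = a - t */
                    a[j]       = a[j] + t;                          /* a' = a + t */
                }
            }
        }
    }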
Algorithm 4 A parallel index-based small polynomial multiplication (PSPM) algorithm [19]
Input: (c, a), where a = [a^(0), ..., a^(r−1)]^T ∈ R_q^r, every a^(j) = Σ_{i=0}^{n−1} a_i^(j) · x^i ∈ R_q, and c = Σ_{i=0}^{n−1} c_i · x^i ∈ B_τ
Output: u = c · a = [u^(0), ..., u^(r−1)]^T ∈ R_q^r, where u^(j) = c · a^(j) = Σ_{i=0}^{n−1} u_i^(j) · x^i ∈ R_q
1: for i ∈ {0, 1, ..., n−1} do
2:     v_i := 0
3:     v_{i−n} := 0
4:     w_i := 0
5:     for j ∈ (0, 1, ..., r−1) do
6:         v_i := v_i · M + U + a_i^(j)
7:         v_{i−n} := v_{i−n} · M + U − a_i^(j)
8: γ := 2U · (M^r − 1)/(M − 1)
9: for i ∈ {0, 1, ..., n−1} do
10:    if c_i = 1 then
11:        for j ∈ {0, 1, ..., n−1} do
12:            w_j := w_j + v_{j−i}
13:    if c_i = −1 then
14:        for j ∈ {0, 1, ..., n−1} do
15:            w_j := w_j + (γ − v_{j−i})
16: for i ∈ {0, 1, ..., n−1} do
17:    t := w_i
18:    for j ∈ (0, 1, ..., r−1) do
19:        u_i^(r−1−j) := (t mod M) − τU (mod q)
20:        t := ⌊t/M⌋
21: return u = [u^(0), ..., u^(r−1)]^T
E. Parallel Small Polynomial Multiplication
As noted in Section II-B, a distinctive feature of the polynomial multiplications in Dilithium is that, much of the time, one of the two multiplicands, namely c ∈ B_τ, has exactly τ nonzero coefficients, each equal to 1 or −1, the rest being 0. A multiplication by 1 or −1 then reduces to an addition or subtraction selected by a sign-based conditional check. This optimization was presented in [19]. Algorithm 4 is the parallel small polynomial multiplication (PSPM) algorithm: a single call computes several products of c with small polynomials, which speeds up the signing and verification of Dilithium. We call lines 1-7 of the pseudocode in Algorithm 4 the preparing process and lines 9-21 the evaluating process.
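To make the packing idea concrete, below is a toy scalar instantiation of Algorithm 4 for r = 2 small polynomials. The packing constants M = 2^20 and U = 2^10 are illustrative choices large enough for Dilithium's η and τ, not necessarily the paper's exact parameters: each 20-bit slot carries one running sum of the form τU + Σ ±a_i, so a single 64-bit addition in the evaluating loop advances both products at once.

    #include <stdint.h>

    #define N 256
    #define M (1ULL << 20)   /* slot width  (illustrative) */
    #define U (1ULL << 10)   /* slot offset (illustrative) */

    /* Computes u0 = c*a0 and u1 = c*a1 over Z[x]/(x^N + 1) in one pass.
     * c has tau entries in {-1, +1}; all other entries are 0. */
    void pspm_mul2(int32_t u0[N], int32_t u1[N],
                   const int32_t a0[N], const int32_t a1[N], const int8_t c[N])
    {
        uint64_t v[2 * N];   /* v[N+i] packs (+a0_i, +a1_i); v[i] packs the negation */
        uint64_t w[N] = {0};
        uint64_t tau = 0;

        /* preparing: pack r = 2 coefficients per table entry
           (negative int32 values wrap correctly after adding U) */
        for (int i = 0; i < N; i++) {
            v[N + i] = (U + (uint64_t)(int64_t)a0[i]) * M + (U + (uint64_t)(int64_t)a1[i]);
            v[i]     = (U - (uint64_t)(int64_t)a0[i]) * M + (U - (uint64_t)(int64_t)a1[i]);
        }

        /* evaluating: one table pass per nonzero coefficient of c */
        const uint64_t gamma = 2 * U * (M + 1);   /* 2U*(M^2 - 1)/(M - 1) */
        for (int i = 0; i < N; i++) {
            if (c[i] == 0) continue;
            tau++;
            if (c[i] == 1)
                for (int j = 0; j < N; j++) w[j] += v[N + j - i];
            else
                for (int j = 0; j < N; j++) w[j] += gamma - v[N + j - i];
        }

        /* unpacking: slot k of w_i holds tau*U + (c*a_k)_i */
        for (int i = 0; i < N; i++) {
            uint64_t t = w[i];
            u1[i] = (int32_t)((int64_t)(t % M) - (int64_t)(tau * U));
            t /= M;
            u0[i] = (int32_t)((int64_t)(t % M) - (int64_t)(tau * U));
        }
    }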
F. AVX-512 Instruction Set
Intel Advanced Vector Extensions 512 (AVX-512) is Intel's latest family of x64 vector instruction sets, following the SIMD approach to parallelization. Compared with the earlier AVX2 instruction set, the vector registers are widened to 512 bits, and their number grows from 16 to 32 (zmm0-zmm31). The AVX-512 registers can thus hold more values and reduce the number of loads from memory into vector registers. In particular, AVX-512 has eight dedicated mask registers (k0-k7). A mask register can hold the result of comparing two vector registers, enabling richer vectorized comparisons, and it drives "maskmov"-type instructions for masked loads and stores; these instructions are generally used to select the data lanes of a zmm register to load or store. AVX-512 also offers many permutation instructions for rearranging 16-bit, 32-bit, and 64-bit words inside a zmm register; such instructions are essential for implementing rejection sampling, NTT, and NTT⁻¹, as we shall see. AVX-512F is the foundation extension of the x86 instruction set architecture (ISA) that provides 512-bit vector operations, allowing the execution of up to 16 double-precision or 32 single-precision floating-point operations per cycle; it also includes new instructions for integer operations, gather and scatter instructions, and support for masked operations, which apply an operation selectively to vector elements. AVX-512IFMA extends AVX-512F with instructions for integer multiplication using the fused multiply-add (FMA) technique, which performs a multiply and an add in a single instruction; it provides two new 52-bit integer IFMA instructions, vpmadd52luq and vpmadd52huq.
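The following small sketch illustrates the mask-register workflow mentioned above: a vectorized range check writes its result into a mask, and the mask then drives a compress-store that keeps only the accepted lanes, which is essentially the pattern used in vectorized rejection sampling. The function and bound are illustrative, not part of the Dilithium API.

    #include <immintrin.h>
    #include <stdint.h>

    /* Keeps the 32-bit lanes of src with |x| < bound, packing them
     * contiguously into dst; returns how many lanes were accepted. */
    unsigned keep_in_range(int32_t *dst, const int32_t *src, int32_t bound)
    {
        __m512i x  = _mm512_loadu_si512((const void *)src);
        __m512i hi = _mm512_set1_epi32(bound);
        __m512i lo = _mm512_set1_epi32(-bound);
        __mmask16 m = _mm512_cmplt_epi32_mask(x, hi)     /*  x <  bound */
                    & _mm512_cmplt_epi32_mask(lo, x);    /* -bound < x  */
        _mm512_mask_compressstoreu_epi32(dst, m, x);
        return (unsigned)_mm_popcnt_u32((unsigned)m);
    }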
III. PSPM WITH TAILORED EARLY EVALUATION (PSPM-TEE)
The signing procedure employs conditional checks on the infinity norms of z, r0, and ct0 to perform rejection sampling. Since these checks operate on single coefficients, it is not necessary to compute all the polynomials of a vector: one polynomial can be computed and checked immediately, and if the check fails, further computation is unnecessary, saving significant computation time. The probability that $\|z\|_\infty < \gamma_1 - \beta$ is
$$\left(\frac{2(\gamma_1-\beta)-1}{2\gamma_1-1}\right)^{256\cdot\ell} = \left(1 - \frac{\beta}{\gamma_1 - 1/2}\right)^{256\cdot\ell} \approx e^{-256\cdot\beta\ell/\gamma_1},$$
and the probability that r0 lies in the good range is
$$\left(\frac{2(\gamma_2-\beta)-1}{2\gamma_2}\right)^{256\cdot k} \approx e^{-256\cdot\beta k/\gamma_2}.$$
It is worth noting that the majority of loop repetitions are caused by the infinity checks on z and r0, so we only consider the probabilities of these two vectors. In previous implementations, the infinity norm of the vector z was evaluated first, followed by that of the vector r0. In this paper, for the first time, we propose adjusting the evaluation order of z and r0 according to their different rejection probabilities under the different Dilithium parameter sets. We can compute the probabilities of the two conditional checks for the three parameter sets. As shown in Table I, the probability of z falling within the good range is always greater than that of r0. Hence, checking r0 before checking z yields a faster signing procedure, since a repetition is more likely to be triggered by the r0 check, in which case the computation of z is saved. We tested the Dilithium C reference implementation with r0 checked first versus z checked first, and we observe that checking r0 before z improves the signing procedure by 2% to 3%, as demonstrated in Table II. The idea of first evaluating the infinity norm of the vector with the higher rejection probability applies to any signature scheme that uses rejection sampling.
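As a quick sanity check of these formulas, the short program below evaluates the two approximations for the three parameter sets, using the standard round-3 Dilithium parameters (k, ℓ, β = τ·η, γ1, γ2); for every set the acceptance probability of z is indeed larger than that of r0.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* {name, k, l, beta, gamma1, gamma2} per the Dilithium round-3 spec */
        struct { const char *name; int k, l, beta; double g1, g2; } p[] = {
            {"Dilithium2", 4, 4,  78, 131072.0,  95232.0},
            {"Dilithium3", 6, 5, 196, 524288.0, 261888.0},
            {"Dilithium5", 8, 7, 120, 524288.0, 261888.0},
        };
        for (int i = 0; i < 3; i++) {
            double pz = exp(-256.0 * p[i].beta * p[i].l / p[i].g1);
            double pr = exp(-256.0 * p[i].beta * p[i].k / p[i].g2);
            printf("%s: Pr[z in range] ~ %.3f, Pr[r0 in range] ~ %.3f\n",
                   p[i].name, pz, pr);
        }
        return 0;
    }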
The parallel algorithm presented in [19] hinders early evaluation because it computes all the polynomial products of a vector simultaneously. To overcome this, we introduce in this section a PSPM algorithm with built-in early evaluation: the computations of c·s + y and LowBits(w − c·e, 2γ2) are folded into the evaluating process, so the rejection checks can be performed promptly on each coefficient, and the computation is aborted as soon as a check fails. This yields faster signing. The PSPM variants differ slightly per parameter set: in Dilithium3/5, the coefficients of s and e are stored in separate precomputed tables, allowing independent early checks of z and r0, whereas Dilithium2 stores the coefficients of s and e in the same precomputed table, so the early checks of z and r0 are performed in one interleaved pass, as depicted in Algorithm 5. It is important to note that, in all three parameter sets, checking r0 always takes precedence over checking z during the rejection checks, as analyzed above.
[Table I: Pr(||z||∞ ≤ γ1 − β) and Pr(||r0||∞ ≤ γ2 − β) for the Dilithium2/3/5 parameter sets.]

Algorithm 5 A parallel index-based polynomial multiplication algorithm with early evaluation of r0 and z for Dilithium2
Input: (c, s, e, y, w), where s = [s^(0), ..., s^(l−1)]^T ∈ R_q^l, y ∈ R_q^l, e ∈ R_q^k, w ∈ R_q^k, every s^(j) = Σ_{i=0}^{n−1} s_i^(j)·x^i ∈ R_q, y^(j) = Σ_{i=0}^{n−1} y_i^(j)·x^i ∈ R_q, e^(j) = Σ_{i=0}^{n−1} e_i^(j)·x^i ∈ R_q, w^(j) = Σ_{i=0}^{n−1} w_i^(j)·x^i ∈ R_q, and c = Σ_{i=0}^{n−1} c_i·x^i ∈ B_τ
Output: z = c·s + y = [z^(0), ..., z^(l−1)]^T ∈ R_q^l with z^(j) = c·s^(j) + y^(j), and r = w − c·e = [r^(0), ..., r^(k−1)]^T ∈ R_q^k with r^(j) = w^(j) − c·e^(j)
1: for i ∈ {0, 1, ..., n−1} do
2:     v_i := 0
3:     v_{i−n} := 0
4:     m_i := 0
5:     for j ∈ (0, 1, ..., l−1) do
6:         v_i := v_i · M + U + s_i^(j)
7:         v_{i−n} := v_{i−n} · M + U − s_i^(j)
8:     for j ∈ (0, 1, ..., k−1) do
9:         v_i := v_i · M + U + e_i^(j)
10:        v_{i−n} := v_{i−n} · M + U − e_i^(j)
11: γ := 2U · (M^{l+k} − 1)/(M − 1)
12: for i ∈ {0, 1, ..., n−1} do
13:    if c_i = 1 then
14:        for j ∈ {0, 1, ..., n−1} do
15:            m_j := m_j + v_{j−i}
16:    if c_i = −1 then
17:        for j ∈ {0, 1, ..., n−1} do
18:            m_j := m_j + (γ − v_{j−i})
19: for i ∈ {0, 1, ..., n−1} do
20:    t := m_i
21:    for j ∈ (0, 1, ..., k−1) do
22:        r_i^(k−1−j) := (t mod M) − τU (mod q)
23:        r_i^(k−1−j) := w_i^(k−1−j) − r_i^(k−1−j)
24:        r_i^(k−1−j) := LowBits_q(r_i^(k−1−j), 2γ2)
25:        if |r_i^(k−1−j)| ≥ γ2 − β then restart the signature process
26:        t := ⌊t/M⌋
27:    for j ∈ (0, 1, ..., l−1) do
28:        z_i^(l−1−j) := (t mod M) − τU (mod q)
29:        z_i^(l−1−j) := z_i^(l−1−j) + y_i^(l−1−j)
30:        if |z_i^(l−1−j)| ≥ γ1 − β then restart the signature process
31:        t := ⌊t/M⌋
32: return z = [z^(0), ..., z^(l−1)]^T, r = [r^(0), ..., r^(k−1)]^T
IV. TAILORED REDUCTION
We present an optimized modular reduction tailored to the Dilithium modulus q = 8380417, which may be of independent interest and can also be applied to Dilithium implementations on other platforms. The modulus can be written as q = 2^23 − 2^13 + 1, and a fast specialized reduction algorithm exists for primes of this form.
We exemplify with the Dilithium prime; the process is shown in Algorithm 6.
Algorithm 6 Tailored reduction for the Dilithium prime q = 2^23 − 2^13 + 1
Require: −2^40 < z ≤ 2^40, q = 2^23 − 2^13 + 1
Ensure: r ≡ z (mod q), −2^31 < r < 2^31
1: p1 = ⌊z/2^23⌋
2: r = z − q·p1

Proposition 1. If −2^40 < z ≤ 2^40, then Algorithm 6 computes an integer r congruent to z modulo q = 2^23 − 2^13 + 1 such that −2^31 < r < 2^31.
Proof. If $-2^{40} < z \leq 2^{40}$, then in line 1, $|p_1| = |\lfloor z/2^{23} \rfloor| < 2^{17}$. Let $r_1 = z - 2^{23}p_1$, so $|r_1| < 2^{23}$. Then
$$r = z - qp_1 = z - (2^{23} - 2^{13} + 1)p_1 = (2^{13} - 1)p_1 + r_1,$$
so $|r| \leq (2^{13} - 1)|p_1| + |r_1| \leq (2^{13} - 1) \cdot 2^{17} + 2^{23} < 2^{31}$. ∎
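A scalar C sketch of Algorithm 6 is given below; the arithmetic right shift implements the floor division by 2^23 for signed inputs.

    #include <stdint.h>

    #define Q 8380417   /* 2^23 - 2^13 + 1 */

    /* Tailored reduction: for -2^40 < z <= 2^40, returns
     * r = z - q*floor(z/2^23), which is congruent to z mod q
     * and satisfies |r| < 2^31. */
    static int32_t tailored_reduce(int64_t z)
    {
        int64_t p1 = z >> 23;            /* floor(z / 2^23), |p1| < 2^17 */
        return (int32_t)(z - Q * p1);    /* r = z - q*p1                 */
    }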
Algorithm 7 Signed Montgomery reduction for 32-bit q [32]
Require: 0 < q < 2^31 odd, −2^31·q ≤ z = z1·2^32 + z0 < 2^31·q, where 0 ≤ z0 < 2^32 and β = 2^32
Ensure: r ≡ β^{−1}·z (mod q), −q < r < q
1: m ← z0·q^{−1} mod± 2^32    ▷ signed low product; q^{−1} precomputed
2: t1 ← ⌊m·q/β⌋    ▷ signed high product
3: r ← z1 − t1
A. Comparisons
Montgomery reduction is an efficient algorithm for reducing the products arising in the NTT by computing a Hensel remainder. Its disadvantage is that the Hensel remainder r is congruent to z·2^{−32} mod q rather than to z itself. Algorithm 7 presents the pseudocode of signed Montgomery reduction; it involves two bit shifts, two multiplications, and one subtraction. In contrast, our tailored reduction is more efficient, requiring only one bit shift, one multiplication, and one subtraction, which makes it a better choice than Montgomery reduction when dealing with products smaller than 2^40 in an NTT with lazy reduction. Furthermore, the tailored reduction can be implemented with the new AVX-512IFMA instructions in just two instructions, resulting in a lower reduction latency (see Subsection V-C for a detailed discussion).
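For comparison, a scalar C sketch of Algorithm 7 for the Dilithium modulus follows, with QINV = q^{−1} mod 2^32 as in the reference implementation.

    #include <stdint.h>

    #define Q    8380417
    #define QINV 58728449   /* q^(-1) mod 2^32 */

    /* Signed Montgomery reduction: for |z| <= 2^31 * q, returns
     * r = z * 2^(-32) mod q with |r| < q. */
    static int32_t montgomery_reduce(int64_t z)
    {
        int32_t m = (int64_t)(int32_t)z * QINV;             /* m = z0*q^-1 mod± 2^32 */
        int32_t t = (int32_t)((z - (int64_t)m * Q) >> 32);  /* r = (z - m*q)/2^32    */
        return t;
    }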
V. IMPLEMENTATION DETAILS
We present an optimized vectorized implementation of Dilithium for CPUs that support both the AVX2 and AVX-512 instruction sets. In this section, we thoroughly explore the implementation details of each optimized module.

A. Dilithium Software Performance Profiling

A critical step in software optimization is to identify the performance bottlenecks of the algorithm. We used the Linux performance analysis tool perf to profile the C reference implementation of the Dilithium3 parameter set; the performance data was collected by executing the Dilithium3 code 1000 times and averaging the execution time. Table IV depicts the detailed percentages. KeccakF1600_StatePermute, which is predominantly used in the hash functions, is the most time-consuming function in key generation, signing, and verification. It is followed by Montgomery reduction and by poly_uniform and poly_uniform_eta, and then by NTT and NTT⁻¹. The functions poly_uniform and poly_uniform_eta sample coefficients via rejection sampling, while NTT, NTT⁻¹, and Montgomery reduction serve polynomial multiplication. Consequently, we can identify the computational bottlenecks as polynomial multiplication, hashing, and rejection sampling. In the following sections, we propose a series of optimization techniques for these functions.
B. Data Alignment
We represent each polynomial as an array of 256 32-bit signed integers. With this representation, we can use AVX-512 SIMD instructions to vectorize the various functions. Alternatively, the array can be viewed as an array of 16 512-bit vectors of type __m512i in AVX-512 intrinsics, where the suffix "i" stands for integer; in AVX-512 assembly, the 256 coefficients fit into 16 zmm vector registers. The 512-bit AVX-512 registers have a 64-byte alignment requirement for optimal vectorization: memory access is optimal when the data starts at an address on a 64-byte boundary, i.e., an address divisible by 64. Therefore, we align all arrays to 64 bytes in our implementation.
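In C this amounts to declaring the coefficient arrays with a 64-byte alignment attribute, for example as below (a sketch; the type name mirrors the reference code's poly convention).

    #include <stdint.h>

    /* One Dilithium polynomial: 256 32-bit coefficients, aligned to 64 bytes
     * so that aligned 512-bit loads/stores can be used. 256 coefficients are
     * exactly 16 x 512-bit vectors, i.e., half of the 32 zmm registers. */
    typedef struct {
        int32_t coeffs[256];
    } __attribute__((aligned(64))) poly;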
C. Vectorization of NTT with AVX-512
We now give details of our AVX-512 parallel implementation of the NTT for the Dilithium polynomial ring $\mathbb{Z}_q[x]/(x^n+1)$ with n = 256 and the 32-bit prime q = 8380417. The NTT-based polynomial multiplication is divided into three parts: NTT, NTT⁻¹, and pointwise multiplication.

a) Register allocation: Note that AVX-512 has 32 512-bit zmm vector registers (zmm0-zmm31). If 32-bit integers are stored in a zmm register directly, without zero-padding, one register holds 16 coefficients, and hence 16 vector registers are enough to keep all 256 coefficients. In doing so, we merge the eight levels of the NTT without reloading coefficients. In the butterfly implementation below, we explain why no 64-bit space needs to be reserved for intermediate products. We arrange zmm1-zmm16 to store all the polynomial coefficients consecutively, use zmm17 to store the precomputed values ζ·q^{−1} mod 2^32 and zmm18 to store ζ (the twiddle factor), and use zmm19, zmm20, and zmm21 for temporary computation values.
The shuffling of four coefficients is more complicated because, at this point, four consecutive coefficients correspond to a 64-bit data lane. Here we use two permute instructions, vpermq and vpblendmd. First, we splice the lower 64 bits of register a and the lower 64 bits of register b using vpblendmd. However, this instruction can only splice according to the values of the mask register: if we used vpblendmd directly, the coefficient order we would obtain is {a0, a1, a2, a3, b4, b5, b6, b7, a8, a9, a10, a11, b12, b13, b14, b15}, which is not the order we want. Therefore, we duplicate the lower 64 bits into the upper 64 bits of register b in every 128-bit data lane, and duplicate the upper 64 bits into the lower 64 bits of register a in every 128-bit data lane. We implement this with vpermq and the constant argument 0x4E, and then use vpblendmd to splice the 64-bit data lanes of the two registers through the mask register; here, we use kmovw to store 0x0F0F into mask register k6. For the permutation of two coefficients, we use vpunpcklqdq and vpunpckhqdq. For the shuffling of one coefficient, since no single instruction realizes it directly, we adopt the same idea as for shuffling four coefficients: first, the upper 32 bits of every 64-bit data lane in register b are obtained by shifting left by 32 bits; then vpblendmd splices the 32-bit lanes of the two registers with mask value 0xAAAA. For copying the upper 32 bits into the lower 32 bits, we directly use vmovshdup.
c) Butterflies: In Section II-D, we introduced the NTT and the CT/GS butterflies. In the CT butterfly transform, half of the coefficients must be multiplied by twiddle factors. Since the twiddle factors are fixed constants, we precompute their values and store them in a look-up table; as mentioned earlier, to save one multiplication in the Montgomery reduction, we also precompute ζ·q^{-1} mod 2^32 and store it in the look-up table. Here we explain why it is not necessary to reserve 64 bits for the multiplication results. At the start, 16 consecutive coefficients are loaded into a zmm register. During the butterfly computation, we split the coefficients that need to be multiplied by the twiddle factor into two parts according to odd and even subscripts, stored in two zmm registers; the split is achieved by copying the upper 32 bits with the vmovshdup instruction. After splitting, a register holds only eight coefficients, each occupying 64 bits of space, so no 64-bit space needs to be reserved when loading. The 64-bit products are then reduced to 32 bits by Montgomery reduction, and the odd-index and even-index coefficients are spliced back into a 512-bit vector register with vpblendmd. Although the splitting takes some clock cycles, it preserves the maximum degree of parallelism; in general, this approach is faster than loading zero-padded 64-bit integers.
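A minimal intrinsics sketch of this multiply-and-reduce step, assuming the odd/even split has already been performed and that zeta and zeta_qinv broadcast ζ and the precomputed ζ·q^{-1} mod 2^32 (function and variable names are ours, not the library's):

#include <immintrin.h>

/* Returns b * ζ * 2^{-32} mod q on the eight even 32-bit lanes.
 * Only three vpmuldq-class multiplications are needed, and since each
 * coefficient already sits in its own 64-bit lane after the split, no extra
 * 64-bit headroom has to be reserved at load time. */
static inline __m512i mont_mul_even(__m512i b, __m512i zeta,
                                    __m512i zeta_qinv, __m512i qvec) {
    __m512i prod = _mm512_mul_epi32(b, zeta);      /* 64-bit products b*ζ */
    __m512i m    = _mm512_mul_epi32(b, zeta_qinv); /* low half == b*ζ*q^{-1} mod 2^32 */
    __m512i t    = _mm512_mul_epi32(m, qvec);      /* t = (m mod 2^32) * q */
    __m512i r    = _mm512_sub_epi64(prod, t);      /* low 32 bits cancel   */
    return _mm512_srai_epi64(r, 32);               /* exact division by 2^32 */
}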
Algorithm 8: 2-instruction Tailored reduction using AVX-512IFMA
Input: a 40-bit signed integer z with −2^40 < z ≤ 2^40
Output: r = z mod q, with −2^31 < r < 2^31
1: vpsrlq 23, z, r        ▷ r = z / 2^23
2: vpmadd52luq −q, z, r   ▷ r = z − (z / 2^23) · q
3: return r

Algorithm 9: 3-instruction Tailored reduction using AVX-512
Input: a 40-bit signed integer z with −2^40 < z ≤ 2^40
Output: r = z mod q, with −2^31 < r < 2^31
1: vpsrlq 23, z, r        ▷ r = z / 2^23
2: vpmuldq q, z, t        ▷ t = (z / 2^23) · q
3: vpsubq t, z, r         ▷ r = z − t
4: return r

d) Vectorized Tailored reduction: We present a vectorized Tailored reduction implementation using the AVX-512IFMA instruction set and use it in NTT(t0) and NTT(t1). Previous work implements a four-instruction Montgomery reduction suited to both AVX2 and AVX-512 vectorization; the total latency of these four instructions is 12 cycles. In this work, we present a 2-instruction Tailored reduction using the AVX-512IFMA vpmadd52luq instruction that reduces both latency and instruction count, as shown in Algorithm 8. This vectorized Tailored reduction brings the cycle count down to 6 cycles by eliminating one vpmuldq and one vpsubq.
Algorithm 10: 4-instruction Montgomery reduction using AVX-512 [13]
Input: a signed integer z with −2^31·q < z ≤ 2^31·q
Output: r = 2^{−32}·z mod q, with −q < r < q
1: vpmuldq q^{−1}, z, m   ▷ m = (z mod 2^32) · q^{−1}
2: vpmuldq q, m, t        ▷ t = (m mod 2^32) · q
3: vpsubq t, z, r         ▷ r = z − t
4: vpsrlq 32, r, r        ▷ r = r / 2^32
5: return r

e) Lazy reduction: Dilithium involves NTT operations on polynomials with small coefficients. We observe that, for the CT butterflies of NTTs on small-coefficient inputs such as c and the noise vectors s and e, the first level does not need a Montgomery reduction: the data width of s/e is at most 4 bits, so the product of a 4-bit coefficient and a 23-bit twiddle factor does not exceed 32 bits; c is a small polynomial with only τ coefficients equal to ±1, so the product of a 1-bit coefficient and a 23-bit twiddle factor stays below 32 bits as well. Specifically, no modular reductions are needed in the first level of NTT(c), NTT(s), and NTT(e). For NTT(t0) and NTT(t1) in all three security levels Dilithium2/3/5, as well as NTT(y) in Dilithium2, the first level of the NTT only needs the Tailored reduction above instead of a Montgomery reduction. For instance, in Dilithium2, where γ1 = 2^17, the data width of the vector y is 18 bits; the product of a coefficient of y and a twiddle factor is then a 41-bit integer in (−2^40, 2^40]. Hence, we use the Tailored reduction of Algorithm 6 proposed above. In this case, we do not need to reduce the coefficients completely to Z_q in the first level of the NTT; our only requirement is to prevent them from overflowing. Starting from the second level, products are reduced by Montgomery reduction.
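For reference, the two reductions can be transcribed into intrinsics as the following sketch, which mirrors the listings and their annotations (names are ours; qvec and qinv broadcast q = 8380417 and q^{-1} mod 2^32):

#include <immintrin.h>

/* Tailored reduction (Algorithm 9) on eight 64-bit lanes; the logical shift
 * follows the vpsrlq of the listing. */
static inline __m512i tailored_reduce(__m512i z, __m512i qvec) {
    __m512i r = _mm512_srli_epi64(z, 23);   /* r = z / 2^23          */
    __m512i t = _mm512_mul_epi32(r, qvec);  /* t = (z / 2^23) * q    */
    return _mm512_sub_epi64(z, t);          /* z - t == z (mod q)    */
}

/* Montgomery reduction (Algorithm 10) on eight 64-bit lanes. */
static inline __m512i montgomery_reduce64(__m512i z, __m512i qvec, __m512i qinv) {
    __m512i m = _mm512_mul_epi32(z, qinv);  /* m = (z mod 2^32) * q^{-1} */
    __m512i t = _mm512_mul_epi32(m, qvec);  /* t = (m mod 2^32) * q      */
    __m512i r = _mm512_sub_epi64(z, t);     /* low 32 bits cancel        */
    return _mm512_srli_epi64(r, 32);        /* r = (z - t) / 2^32        */
}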
D. Hashing
Dilithium makes use of an XOF to expand seeds and sample polynomials: SHAKE-128 is used to generate the matrix A, and SHAKE-256 is used to generate the vectors s, e, and y. As discussed in Section V-A, hashing is an expensive operation in the entire scheme. The previous AVX2 implementation used 4-way SHAKE-128 and SHAKE-256, i.e., a vectorized SHAKE implementation that operates on 4 parallel sponges and can therefore absorb and squeeze blocks in and out of these 4 sponges at the same time [13]. Our AVX-512 implementation can compute 8 hash results at the same time thanks to the doubled register width. We used the SPHINCS+ AVX-512 open-source code^2 and embedded this 8-way hash implementation into the expansion of the matrix A, the vector y, and the vectors s, e.
E. Parallel Rejection Sampling
The rejection sampling process generates a 23-bit random number and then checks it against q: if the number is not less than q, it is rejected; if it is less than q, it is accepted. To obtain the 23-bit random number, the byte stream produced by hashing needs to be spliced, and the random numbers are then accepted or rejected sequentially. This makes vectorizing rejection sampling challenging. The previous AVX2 method created a two-dimensional array of size 2^8 × 8 = 2048, which stored all possible acceptance positions for the 8 32-bit integers in a 256-bit vector register. This method is not suitable for an AVX-512 implementation, since a vector register now stores 16 32-bit integers, which would require a two-dimensional array of size 2^16 × 16 = 1048576. Therefore, we use a more space-efficient implementation method.
One of the main ideas of rejection sampling is to compare the numbers in all positions with q and then store the accepted ones in order. Fortunately, AVX-512 provides the intrinsic _mm512_mask_compressstoreu_epi32, which contiguously stores the 32-bit integers selected by a mask register; this lets us store the accepted values compactly, exactly as required (see Fig. 4). The mask register can be set with _mm512_cmp_epi32_mask: with the comparison operand _MM_CMPINT_LT, it compares the values of input vector registers a and b and sets the mask bit of each position to 1 if a is smaller than b, and to 0 otherwise. Since the mask register is a 16-bit binary integer, we can determine how many coefficients were accepted in a vector register by counting the number of 1's in the mask with _mm_popcnt_u32.
Fig. 4: The _mm512_mask_compressstoreu_epi32 function: the lanes of a source register selected by the mask are compressed and stored contiguously to memory.
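A compact sketch of one such 16-way step in intrinsics (function and variable names are ours):

#include <immintrin.h>
#include <stdint.h>

/* One 16-way rejection-sampling step for coefficients in [0, q):
 * accepted candidates are compressed and stored contiguously at buf+ctr. */
static unsigned rej_step(int32_t *buf, unsigned ctr, __m512i cand) {
    const __m512i qvec = _mm512_set1_epi32(8380417);
    __mmask16 good = _mm512_cmp_epi32_mask(cand, qvec, _MM_CMPINT_LT); /* cand < q  */
    _mm512_mask_compressstoreu_epi32(buf + ctr, good, cand);           /* compact   */
    return ctr + (unsigned)_mm_popcnt_u32((unsigned)good);             /* accepted  */
}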
We optimized the vectorized generation of 23-bit random integers to reduce the number of calls to SHAKE-128. Since we only need 48 of the 64 loaded bytes to obtain 16 23-bit numbers, we should avoid wasting the extra 16 bytes generated by SHAKE-128.
To achieve this, we first initialize a vector register with all zeros, and then use _mm512_permutexvar_epi8 and _mm512_mask_blend_epi64 to adjust and splice this all-zero register with the register holding the 64-byte random byte stream; the process is illustrated in Figure 5(a) and Figure 5(b). The upper 6 × 64 = 384 bits hold the random bytes and the lower 2 × 64 = 128 bits are zeros. By reordering the spliced vector register in 8-bit data lanes with _mm512_permutexvar_epi8, we obtain three consecutive random bytes in every four bytes, with the fourth byte equal to 0. Then, we use _mm512_and_si512 to perform a bitwise AND with 23 ones, yielding 16 23-bit random integers. The above describes rejection sampling for numbers in the range [0, q). Dilithium, however, also performs rejection sampling of numbers in the range [−η, η], for which we also optimized the previous AVX2 implementation: we first separate the high 4 bits and the low 4 bits of each random byte, then use _mm512_cmp_epi32_mask to test and store the high and low nibbles separately via mask registers. To ensure the correctness of the test vectors, we also adjust the order of the high and low nibbles accordingly.
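A sketch of the splice in intrinsics, where the byte-permutation pattern idx is an assumption standing in for the constant table used in the implementation:

#include <immintrin.h>

/* Blend zeros into the low 128 bits of the 64-byte SHAKE output, permute bytes
 * so every 32-bit lane holds three random bytes and one zero byte, then mask
 * to 23 bits. idx must map byte 3 of each lane onto a zeroed source byte. */
static __m512i splice_23bit(__m512i raw, __m512i idx) {
    __m512i zero = _mm512_setzero_si512();
    __m512i src  = _mm512_mask_blend_epi64((__mmask8)0x03, raw, zero); /* lanes 0-1 zeroed */
    __m512i b    = _mm512_permutexvar_epi8(idx, src);                  /* AVX512VBMI       */
    return _mm512_and_si512(b, _mm512_set1_epi32((1 << 23) - 1));      /* keep 23 bits     */
}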
F. Expanding Matrix A and Sampling Vectors
We present an 8-way poly_uniform_8x function that samples 8 polynomials in R_q simultaneously, using 8-way SHAKE-128 and parallel rejection sampling. For the expansion of the matrix A in Dilithium2, where k = l = 4, we can directly call poly_uniform_8x twice to generate the 4 row vectors. In Dilithium3, where k = 6 and l = 5, poly_uniform_8x is called four times to generate the 30 polynomials of the 6 row vectors. In Dilithium5, it is called eight times to generate the 56 polynomials of the 8 row vectors. Similarly, for sampling vectors, we propose the 8-way functions poly_uniform_eta_8x and poly_uniform_gamma1_8x, using 8-way SHAKE-256 to sample the vectors s/e and y, respectively.
G. Implementing PSPM-TEE
This work provides both AVX2 and AVX-512 implementations of PSPM-TEE. In the original PSPM implementation from [19], coefficients were packed into 64-bit words. To keep the data lanes of the vector registers consistent and make it easier to operate on same-size operands, we instead pack coefficients into 32-bit words. This eliminates the need to zero-extend 32-bit coefficients to 64 bits and simplifies the vectorization of the PSPM implementation. For Dilithium2/3/5, we describe the implementation of the parallel small-polynomial multiplication algorithm for the Dilithium3 parameters, where k = 6 and l = 5. Our implementation is based on the parallel small-polynomial parameter sets shown in Table X in Appendix A.
First, we introduce the splicing of the noise vectors s and e. Although each coefficient of s and e lies in the range [−4, 4], the coefficients grow by 2τU after the additions in Algorithm 4, where U = 4 and τ = 49 in Dilithium3; the upper bound of the coefficients is therefore 392. Consequently, each coefficient needs at least 9 bits of storage, and one 32-bit word can pack up to 3 polynomial coefficients. The vectors s and e thus need two precomputed tables to store all coefficients.
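As a scalar illustration of this packing (names are ours; the radix M = 2^10 is one possible choice consistent with the at-least-9-bit requirement above):

#include <stdint.h>

/* Radix-M packing of three recentered coefficients (U + s_i) into one 32-bit
 * word, matching the v_i := v_i * M + (U + s_i) recurrence of the PSPM tables. */
static inline uint32_t pack3(int32_t s0, int32_t s1, int32_t s2, int32_t U) {
    const uint32_t M = 1u << 10;
    uint32_t v = (uint32_t)(U + s0);
    v = v * M + (uint32_t)(U + s1);
    v = v * M + (uint32_t)(U + s2);
    return v;
}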
The preparation step is implemented with intrinsic functions because it is easily vectorizable. The loop in Algorithm 4, however, is not well suited to a direct parallel implementation. Our AVX-512 implementation therefore performs the accumulation in AVX-512 assembly: when a challenge polynomial coefficient c is 1 or −1, we pass the corresponding array address to the AVX-512 assembly and perform the additions in parallel. Combined with the parallelism achieved by Algorithm 5, the computation of cs can reach a parallelism of up to 8 × 3 = 24.
We implemented the evaluation step, which extracts the computation results from the 32-bit packed words, using intrinsic functions. For the conditional check of the vector coefficients, we used _mm512_cmp_epi32_mask, which checks 16 packed words in parallel and yields a 16-bit mask with one bit per 32-bit data lane. If the mask is non-zero, the function immediately returns 1.
H. Vectorized Packing
a) Obstacle in vectorizing packing: In the Dilithium implementation, polynomial vectors need to be encoded as byte strings (packing) and vice versa (unpacking). We vectorized the unpacking of z and the packing of w1 using AVX-512. To ensure that our optimized implementation works on all platforms and matches the NIST Known Answer Tests (KAT) vectors, we had to overcome a difficulty: directly vectorizing the packing/unpacking process is not feasible. For instance, a 512-bit vector register stores 16 coefficients, and bit-wise instructions operate on two vector registers. If register r1 stores coefficients a0-a15 and r2 stores coefficients a16-a31, then the pair a0 and a16 would be packed together, whereas we need a0 and a1. Direct vectorization is therefore not possible.
b) How to vectorize packing: The AVX-512 vectorization of unpacking z is similar to parallel rejection sampling, so we omit the details here. For the packing of w1 in Dilithium3 and Dilithium5, the coefficient range of w1 is [0, 15], so every two w1 coefficients can be packed into one byte, and a zmm register can hold 128 4-bit w1 coefficients. Accordingly, 128 polynomial coefficients are loaded in each loop iteration. We use _mm512_packus_epi32 to convert packed signed 32-bit integers to packed 16-bit integers, then _mm512_packus_epi16 to convert packed 16-bit integers to packed 8-bit integers, which leaves 64 coefficients per vector, each 8 bits wide. We then use _mm512_maddubs_epi16 to shift the coefficients with odd indices left by 4 bits so that adjacent coefficients are combined into one byte, merging all coefficients into one 512-bit vector. Since these operations change the order of the coefficients, we finally use _mm512_permutexvar_epi32 and _mm512_shuffle_epi8 to reorder them, ensuring the correctness of the KAT test.
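For clarity, the byte mapping computed by this packing is the following scalar reference (our own; the vectorized version produces the same bytes after the final reorder):

#include <stdint.h>

/* Scalar reference for the w1 packing in Dilithium3/5, where each coefficient
 * lies in [0, 15]: two coefficients per output byte. */
static void w1_pack_ref(uint8_t out[128], const int32_t w1[256]) {
    for (int i = 0; i < 128; ++i)
        out[i] = (uint8_t)(w1[2 * i] | (w1[2 * i + 1] << 4));
}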
VI. EXPERIMENT RESULTS AND DISCUSSIONS
We implemented all three security levels of Dilithium (Dilithium2, Dilithium3, and Dilithium5) in C and with Intel AVX-512 assembly and AVX-512 intrinsic functions. We also optimized the previous AVX2 code using the presented optimization techniques. Our optimized vectorized implementation passes the NIST Known Answer Tests, ensuring that it works on all platforms. We perform a detailed evaluation of the performance improvements achieved by the optimizations of Section V. The Dilithium code was collected from https://csrc.nist.gov/Projects/post-quantum-cryptography/selected-algorithms-2022. The compiler is gcc-9.4.0 and the optimization flags are -Wshadow -Wpointer-arith -mavx2 -mavx512f -mavx512vbmi -mavx512bw -mavx512cd -mavx512vl -mpopcnt -maes -march=native -mtune=native -O3. The benchmark experiments were conducted on a desktop machine with the Ubuntu 20.04 operating system and an Intel(R) Core(TM) i7-11700F CPU (Rocket Lake) running at 2.5 GHz. As usual, we disable TurboBoost and Hyper-Threading to ensure the reproducibility of the experiments. Each experiment is repeated 100000 times, and we report the median results.
A. Tailored Reduction Performance
We implemented the Tailored reduction using AVX2 and AVX-512 in the first level of the NTT and compare it with an NTT using Montgomery reduction. To demonstrate the actual benefit of the Tailored reduction in the signing process, we compared the original AVX2 signing implementation with an AVX2 implementation using the Tailored reduction in NTT(t0). In Table V, Sign-original refers to signing with an NTT using Montgomery reduction; Sign-opt1 refers to signing with an NTT using the Tailored reduction with AVX-512F; Sign-opt2 refers to signing with an NTT using the Tailored reduction with AVX-512IFMA.
C. Other Vectorization Functions Performance
We conducted an experiment to test the performance of our AVX-512 vectorized functions in Dilithium, covering the three sampling functions poly_uniform, poly_uniform_eta, and poly_uniform_gamma1. Overall, polynomial sampling gains a three- to four-fold performance improvement. This improvement mainly comes from the hash and rejection sampling functions: poly_uniform and poly_uniform_eta use the SHAKE-128 hash function, and poly_uniform_gamma1 uses SHAKE-256. To obtain further performance data, we separately tested the speed of rejection sampling and hashing. According to Table IX in Appendix A, the 8-way poly_uniform implementation's primary performance improvement comes from the 5.5-fold improvement of SHAKE-128, helped by the 16-way rejection sampling with its 5.63-fold improvement. However, since the polynomial sampling part is not entirely 8-way (when sampling fewer than 16 polynomial coefficients, the 1-way polynomial sampling is still used), the final improvement does not fully reach a five-fold speedup, and the other two sampling functions behave similarly. Our 16-way rej_uniform and rej_eta implementations achieve a five- to six-fold speedup overall. The speedup is somewhat limited, mainly because the rejection sampling part requires some shuffle operations to ensure the correctness of the test vectors, and the process is not entirely vectorizable.
For the NTT part, we used a compact method for loading coefficients, loading 16 coefficients at once instead of AVX2's four at a time. As a result, we achieve 16-way parallelism in the NTT and a speedup of almost 14 times for the NTT operation; in the inverse NTT, the speedup is almost 18 times. The acceleration of the NTT part mainly comes from the vectorization of AVX-512 itself, together with our instruction scheduling and full utilization of the registers, which reduce load and store operations through the layer-merging technique.
In the polynomial pointwise multiplication function, the parallelism is halved during the computation by the many 32-bit × 32-bit → 64-bit multiplications, resulting in only an 11-fold speedup. We also implemented a 64-way AVX-512 vectorization of the packing part; the corresponding data is shown in Table IX in Appendix A. Since Dilithium's AVX2 code has no polyz_unpack implementation, no 32-way data is given here. Our 16-way vectorization achieves a 30-fold speedup; the main bottleneck of the polyz_unpack acceleration is the coefficient reordering needed to ensure the correctness of the test vectors. The polyw1_pack function itself is relatively simple and the C implementation is already fast, so the 16-way implementation does not bring much acceleration.
D. Scheme Performance
In this work, to obtain the best performance, we apply various optimization techniques to the Dilithium implementation, including optimized implementations of the NTT, rejection sampling, decomposing and computing hints, bit-packing, and so on. Table VIII summarizes the cycle counts and comparisons for all three security levels of Dilithium, covering key generation (KeyGen), signing (Sign), and verification (Verify).
We enhanced our previous AVX2 implementation with the improved PSPM and the Tailored reduction, resulting in a speedup of 3% to 9% in the signing procedure. In our Dilithium AVX-512 implementation, some parts, such as hash calls outside polynomial sampling, have not yet been vectorized; the overall improvement in signing speed therefore cannot exceed twice the AVX2 software speed. Our speedup is mainly attributed to the vectorization of selected functions and the optimization techniques we have introduced.
E. Discussions about Side-Channel Security and Memory Cost
Constant-time implementation (CTI) was not the focus of this work, but we did keep it in mind: we carefully avoided branching on secret information and did not use the modulo operator %. For the side-channel security of the PSPM-TEE technique, we make two observations. On the one hand, since the intermediate challenge hashes c rejected by the tailored early evaluation are never output, these intermediate values are effectively blinded to an outside observer. On the other hand, the PSPM technique packs the same-dimension coefficients of multiple small polynomials into one word, which could make side-channel attacks more difficult than against traditional NTT techniques. Regarding space cost, our implementation precomputes the tables of the improved PSPM, which requires an additional 8192 bytes of storage in Dilithium3/5 and 4096 bytes in Dilithium2; however, our parallel rejection sampling saves 1048576 bytes. Overall, our implementation significantly reduces the required space compared to the previous AVX2 implementations.
Algorithm 2: Dilithium.Sign(sk, M)
Input: Secret key sk = (ρ, K, tr, s, e, t0), Message M ∈ {0, 1}*
Output: Signature σ = (c̃, z, h)
1:  A ∈ R_q^{k×l} := ExpandA(ρ)              ▷ A is generated and stored in NTT representation as Â
2:  µ ∈ {0, 1}^512 := H(tr ∥ M)
3:  κ := 0, (z, h) := ⊥
4:  ρ' ∈ {0, 1}^512 := H(K ∥ µ) (or ρ' ← {0, 1}^512 for randomized signing)
5:  while (z, h) = ⊥ do                      ▷ precompute ŝ := NTT(s), ê := NTT(e), and t̂0 := NTT(t0)
6:      y ∈ S̃_{γ1}^l := ExpandMask(ρ', κ)
7:      w := Ay                              ▷ w := NTT^{-1}(Â ◦ NTT(y))
8:      w1 := HighBits_q(w, 2γ2)
9:      c̃ ∈ {0, 1}^256 := H(µ ∥ w1)
10:     c ∈ B_τ := SampleInBall(c̃)
11:     z := y + cs
12:     r0 := LowBits_q(w − ce, 2γ2)
13:     if ∥z∥∞ ≥ γ1 − β or ∥r0∥∞ ≥ γ2 − β then (z, h) := ⊥
14:     else
15:         h := MakeHint_q(−ct0, w − ce + ct0, 2γ2)   ▷ compute ct0 as NTT^{-1}(ĉ ◦ t̂0)
16:         if ∥ct0∥∞ ≥ γ2 or NumberOfOnes(h) > ω then (z, h) := ⊥
17:     κ := κ + l
18: return σ = (c̃, z, h)

Algorithm 3: Dilithium.Verify(pk, M, σ)
Input: Public key pk = (ρ, t1), Message M ∈ {0, 1}*, Signature σ = (c̃, z, h)
Fig. 1: Butterfly diagrams ((b) depicts the GS butterfly).

Algorithm 4: A parallel index-based polynomial multiplication algorithm with translations.
Fig. 2: The storage order of coefficients in zmm registers.
Fig. 3: Coefficient shuffling across two vector registers.
Fig. 5: Packing the random byte stream.
TABLE I: Probability of vector in a good range.

TABLE II: Comparative performance of checking z first and checking r0 first in the Round3 C reference implementation (Cycles).
Scheme      | check z first | check r0 first | Imp. (%)
Dilithium2  | 992696        | 972244         | 2.06%
Dilithium3  | 1670374       | 1627560        | 2.56%
Dilithium5  | 2088720       | 2026818        | 2.96%

TABLE III: Comparative performance of the improved PSPM and the original PSPM [19] (Cycles).
Scheme      | Sign (Original PSPM) | Sign (Improved PSPM) | Imp. (%)
Dilithium2  | 670970               | 636326               | 5.16%
Dilithium3  | 1171086              | 1101330              | 6.00%
Dilithium5  | 1491124              | 1415452              | 5.07%
TABLE IV: Percentages of used functions in KeyGen, Signature, and Verification.
Function                   | KeyGen | Sign   | Verify
montgomery_reduce          | 38.24% | 23.02% | 16.04%
KeccakF1600_StatePermute   | 17.68% | 38.84% | 42.99%
invntt_tomont              | 17.48% | 6.28%  | 5.22%
ntt                        | 6.94%  | 9.30%  | 3.35%
poly_pointwise_montgomery  | 4.96%  | 1.86%  | 2.22%
Table V shows that the Tailored reduction provides a speedup of 3% in signing. Additionally, our implementation of the Tailored reduction using AVX-512IFMA is much faster than with AVX-512F, confirming that IFMA does indeed improve the performance of the Tailored reduction implementation.

TABLE V: Comparative performance of the Tailored reduction in Dilithium2 (Cycles).
Operation     | Impl.        | CPU cycles
Sign-original | AVX-512      | 239244
Sign-opt1     | AVX-512F     | 234124
Sign-opt2     | AVX-512IFMA  | 231962

B. PSPM-TEE Performance
Table VI shows the performance of the improved PSPM algorithm. By applying the improved PSPM, our AVX2 implementation obtains a speedup of about 9% in Dilithium2, while Dilithium5 shows the smallest gain, only 2.6%. The results clearly show that the improved PSPM benefits the signing procedure of Dilithium, so we use the improved PSPM algorithm in our AVX-512 implementation of Dilithium as well.

TABLE VI: Performance of the signing procedure with the improved PSPM (Cycles).
Scheme      | AVX2   | AVX2 (PSPM-TEE) | Speedup
Dilithium2  | 254922 | 231766          | 9.0%
Dilithium3  | 407316 | 393454          | 3.4%
Dilithium5  | 514992 | 501304          | 2.6%
TABLE VII: Performance comparison in the signing procedure (Cycles).
Scheme      | AVX2 [13] | AVX2 (Our work) | Speedup
Dilithium2  | 254922    | 231410          | 9.22%
Dilithium3  | 407316    | 392436          | 3.65%
Dilithium5  | 514992    | 500882          | 2.74%
TABLE VIII: Execution times (in Cycles) of the implementations of Dilithium2, Dilithium3, and Dilithium5 on an Intel Core i7-11700F processor.
Columns: Scheme | Operation | C [13] (Cycles) | AVX2 [13] (Cycles) | AVX-512 (Cycles, Speedup vs C, Speedup vs AVX2)
^2 https://github.com/DorAlter/sphincsplus/tree/avx512-implementation
APPENDIX
A. PSPM with Early Evaluation pseudocode for Dilithium3/5

Algorithm 11: A parallel index-based polynomial multiplication algorithm with early evaluation of z for Dilithium3/5
Input: (c, s, y), where s = [s^(0), ..., s^(l−1)], ...
...
5:  for j ∈ (0, 1, ..., l − 1) do
6:      v_i := v_i · M + (U + s_i^(j))
...
11: if c_i = 1 then
        for j ∈ {0, 1, ..., n − 1} do
12:         m_j := m_j + v_{j−i}
13: if c_i = −1 then
14:     for j ∈ {0, 1, ..., n − 1} do
15:         m_j := m_j + (γ − v_{j−i})
16: for i ∈ {0, 1, ..., n − 1} do
17:     t := m_i
18:     for j ∈ (0, 1, ..., l − 1) do
19:         ...
REFERENCES
[1] ITU-T Recommendation X.509, "Information technology-open systems interconnection-the directory: public-key and attribute certificate framework," ISO/IEC 9594-8:2001, 2000.
[2] T. Dierks and E. Rescorla, "The transport layer security (TLS) protocol version 1.2," Tech. Rep., 2008.
[3] E. Rescorla, "The transport layer security (TLS) protocol version 1.3," Tech. Rep., 2018.
[4] R. L. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120-126, 1978.
[5] P. W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM Review, vol. 41, no. 2, pp. 303-332, 1999.
[6] K. McKay, L. Bassham, M. Sönmez Turan, and N. Mouha, "Report on lightweight cryptography," National Institute of Standards and Technology, Tech. Rep., 2016.
[7] G. Alagic, D. Apon, D. Cooper, Q. Dang, T. Dang, J. Kelsey, J. Lichtinger, C. Miller, D. Moody, R. Peralta et al., "Status report on the third round of the NIST post-quantum cryptography standardization process," US Department of Commerce, NIST, 2022.
[8] A. Langlois and D. Stehlé, "Worst-case to average-case reductions for module lattices," Des. Codes Cryptogr., vol. 75, no. 3, pp. 565-599, 2015. [Online]. Available: https://doi.org/10.1007/s10623-014-9938-4
[9] M. Ajtai, "Generating hard instances of lattice problems," in Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, 1996, pp. 99-108.
[10] V. Lyubashevsky, "Fiat-Shamir with aborts: applications to lattice and factoring-based signatures," in International Conference on the Theory and Application of Cryptology and Information Security. Springer, 2009, pp. 598-616.
[11] C. Lomont, "Introduction to Intel advanced vector extensions," Intel white paper, vol. 23, 2011.
[12] Intel Corporation, "10th generation Intel Core processor based on Ice Lake microarchitecture instruction throughput and latency," available online at https://software.intel.com/content/www/us/en/develop/download/10th-generation-intel-core-processor-instruction-throughput-and-latency-docs.html, 2020.
[13] R. Avanzi, J. Bos, and L. Ducas, "Submission to the NIST post-quantum cryptography standardization project," available for download at https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-3/submissions/Dilithium-Round3.zip, 2022.
[14] D. O. C. Greconici, M. J. Kannwischer, and D. Sprenkels, "Compact Dilithium implementations on Cortex-M3 and Cortex-M4," IACR Trans. Cryptogr. Hardw. Embed. Syst., vol. 2021, no. 1, pp. 1-24, 2021. [Online]. Available: https://doi.org/10.46586/tches.v2021.i1.1-24
[15] Y. Kim, J. Song, T.-Y. Youn, and S. C. Seo, "CRYSTALS-Dilithium on ARMv8," Security and Communication Networks, vol. 2022, 2022.
[16] H. Becker, V. Hwang, M. J. Kannwischer, B. Yang, and S. Yang, "Neon NTT: faster Dilithium, Kyber, and Saber on Cortex-A72 and Apple M1," IACR Trans. Cryptogr. Hardw. Embed. Syst., vol. 2022, no. 1, pp. 221-244, 2022. [Online]. Available: https://doi.org/10.46586/tches.v2022.i1.221-244
[17] A. Abdulrahman, V. Hwang, M. J. Kannwischer, and D. Sprenkels, "Faster Kyber and Dilithium on the Cortex-M4," in Applied Cryptography and Network Security - 20th International Conference, ACNS 2022, Rome, Italy, June 20-23, 2022, Proceedings, ser. Lecture Notes in Computer Science, G. Ateniese and D. Venturi, Eds., vol. 13269. Springer, 2022, pp. 853-871. [Online]. Available: https://doi.org/10.1007/978-3-031-09234-3_42
[18] J. Bradbury and B. Hess, "Fast quantum-safe cryptography on IBM Z," Technical report, 2021. URL: https://csrc.nist.gov/CSRC/media/Events/third-pqc-standardization-conference/documents/accepted-papers/hess-fast-quantum-safe-pqc2021.pdf
[19] J. Zheng, F. He, S. Shen, C. Xue, and Y. Zhao, "Parallel small polynomial multiplication for Dilithium: a faster design and implementation," in Annual Computer Security Applications Conference, ACSAC 2022, Austin, TX, USA, December 5-9, 2022. ACM, 2022, pp. 304-317. [Online]. Available: https://doi.org/10.1145/3564625.3564629
[20] J. W. Bos, P. L. Montgomery, D. Shumow, and G. M. Zaverucha, "Montgomery multiplication using vector instructions," in International Conference on Selected Areas in Cryptography. Springer, 2014, pp. 471-489.
[21] S. Gueron and F. Schlieker, "Speeding up R-LWE post-quantum key exchange," in Nordic Conference on Secure IT Systems. Springer, 2016, pp. 187-198.
[22] G. Orisaka, D. F. Aranha, and J. López, "Finite field arithmetic using AVX-512 for isogeny-based cryptography," in Anais do XVIII Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. SBC, 2018, pp. 49-56.
[23] T. Edamatsu and D. Takahashi, "Acceleration of large integer multiplication with Intel AVX-512 instructions," in 20th IEEE International Conference on High Performance Computing and Communications; 16th IEEE International Conference on Smart City; 4th IEEE International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2018, Exeter, United Kingdom, June 28-30, 2018. IEEE, 2018, pp. 211-218. [Online]. Available: https://doi.org/10.1109/HPCC/SmartCity/DSS.2018.00059
[24] D. Takahashi, "An implementation of parallel number-theoretic transform using Intel AVX-512 instructions," in International Workshop on Computer Algebra in Scientific Computing. Springer, 2022, pp. 318-332.
[25] J. Robert and P. Véron, "Faster multiplication over F_2[x] using AVX512 instruction set and VPCLMULQDQ instruction," CoRR, vol. abs/2201.10473, 2022. [Online]. Available: https://arxiv.org/abs/2201.10473
[26] H. Cheng, G. Fotiadis, J. Großschädl, and P. Y. A. Ryan, "Highly vectorized SIKE for AVX-512," IACR Trans. Cryptogr. Hardw. Embed. Syst., vol. 2022, no. 2, pp. 41-68, 2022. [Online]. Available: https://doi.org/10.46586/tches.v2022.i2.41-68
[27] H. Cheng, G. Fotiadis, J. Großschädl, P. Y. A. Ryan, and P. B. Rønne, "Batching CSIDH group actions using AVX-512," IACR Trans. Cryptogr. Hardw. Embed. Syst., vol. 2021, no. 4, pp. 618-649, 2021. [Online]. Available: https://doi.org/10.46586/tches.v2021.i4.618-649
[28] D. M. Alter, "Optimizing the NIST post quantum candidate SPHINCS+ using AVX-512," https://github.com/DorAlter/sphincsplus/tree/avx512-implementation, 2021.
[29] R. Cabral and J. López, "Implementation of the SHA-3 family using AVX512 instructions," in Anais do XVIII Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. SBC, 2018, pp. 25-32.
[30] S. Bai, L. Ducas, E. Kiltz, T. Lepoint, V. Lyubashevsky, P. Schwabe, G. Seiler, and D. Stehlé, "CRYSTALS-Dilithium algorithm specifications and supporting documentation (version 3.1)," NIST Post-Quantum Cryptography Standardization Round 3, 2021.
[31] M. J. Dworkin et al., "SHA-3 standard: permutation-based hash and extendable-output functions," 2015.
[32] G. Seiler, "Faster AVX2 optimized NTT multiplication for Ring-LWE lattice cryptography," IACR Cryptol. ePrint Arch., p. 39, 2018. [Online]. Available: http://eprint.iacr.org/2018/039
[33] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, no. 90, pp. 297-301, 1965.
[34] W. M. Gentleman and G. Sande, "Fast Fourier transforms: for fun and profit," in Proceedings of the November 7-10, 1966, Fall Joint Computer Conference, 1966, pp. 563-578.
| [
"https://github.com/DorAlter/sphincsplus/tree/avx512-implementation",
"https://github.com/DorAlter/sphincsplus/tree/avx512-i"
] |
[
"Ergonomic Collaboration between Humans and Robots: An Energy-Aware Signal Temporal Logic Perspective",
"Ergonomic Collaboration between Humans and Robots: An Energy-Aware Signal Temporal Logic Perspective"
] | [
"Giuseppe Silano ",
"Amr Afifi ",
"Martin Saska ",
"Antonio Franchi "
] | [] | [] | This paper presents a method for designing energy-aware collaboration tasks between humans and robots, and generating corresponding trajectories to carry out those tasks. The method involves using high-level specifications expressed as Signal Temporal Logic (STL) specifications to automatically synthesize task assignments and trajectories. The focus is on a specific task where a Multi-Rotor Aerial Vehicle (MRAV) performs object handovers in a power line setting. The motion planner takes into account constraints such as payload capacity and refilling, while ensuring that the generated trajectories are feasible. The approach also allows users to specify robot behaviors that prioritize human comfort, including ergonomics and user preferences. The method is validated through numerical analyses in MATLAB and realistic Gazebo simulations in a mock-up scenario. | null | [
"https://export.arxiv.org/pdf/2306.02454v1.pdf"
] | 259,076,333 | 2306.02454 | d1b602f4ca5a9202dbda04c27e398ec32e983c48 |
Ergonomic Collaboration between Humans and Robots: An Energy-Aware Signal Temporal Logic Perspective
Giuseppe Silano, Amr Afifi, Martin Saska, Antonio Franchi

This paper presents a method for designing energy-aware collaboration tasks between humans and robots, and for generating the corresponding trajectories to carry out those tasks. The method uses high-level specifications, expressed in Signal Temporal Logic (STL), to automatically synthesize task assignments and trajectories. The focus is on a specific task in which a Multi-Rotor Aerial Vehicle (MRAV) performs object handovers in a power line setting. The motion planner takes into account constraints such as payload capacity and refilling, while ensuring that the generated trajectories are feasible. The approach also allows users to specify robot behaviors that prioritize human comfort, including ergonomics and user preferences. The method is validated through numerical analyses in MATLAB and realistic Gazebo simulations in a mock-up scenario.
I. INTRODUCTION
In robotics, Multi-Rotor Aerial Vehicles (MRAVs) are popular due to their agility, maneuverability, and the versatility of their onboard sensors. They have various applications, including contactless or physical interaction with their surroundings [1]. MRAVs are advantageous in scenarios such as work at height, wind turbines, large construction sites, and power transmission lines [2]. They can act as robotic co-workers, carrying tools and reducing the physical and cognitive load on human operators, provided that ergonomics and safety are taken into account [3], [4]. However, the use of MRAVs in human-robot interaction remains limited compared to ground robots, for which object handover in particular is a well-studied topic.
To enable effective collaboration between MRAVs and human workers, advanced task and motion planning techniques are required to address ergonomic and safety concerns while minimizing the physical and cognitive demands on human operators. Signal Temporal Logic (STL) [5] can provide a framework to express these complex specifications and generate optimal feasible trajectories.
Handover involves multiple stages: the approach, reach, and transfer phases [3], [4]. While some previous studies have examined individual phases, e.g., [6], such approaches give limited consideration to safety and ergonomics, as well as to energy efficiency. For aerial robot-human collaboration in high-risk environments, it is crucial to include these considerations. Additionally, prior works [7], [8] have explored the integration of human comfort and ergonomics in robot planning, but none have considered MRAVs acting as co-workers with humans.
Some studies use onboard sensors on MRAVs to improve control and planning, with perception-constrained control being a key consideration. For example, [4] proposes a Nonlinear Model Predictive Control (NMPC) formulation that incorporates human ergonomics and comfort while enforcing perception and actuation limits. Other research, such as [3], uses dynamic programming to ensure safety when controlling an aerial manipulator during physical interaction with a human operator. However, these approaches only consider scenarios with a single operator and do not address energy consumption. Regarding motion planning for human-robot handovers, [9] presents a controller automatically generated from STL specifications, while [10] uses probabilistic model checking to validate a controller against safety and liveness specifications. Neither addresses the task assignment and trajectory generation problem needed to enhance energy-aware, ergonomic human-robot collaboration for MRAVs.

This paper presents an energy-aware motion planner that leverages STL specifications to facilitate human-robot collaboration. To this end, a nonlinear non-convex max-min optimization problem is formulated and addressed with a hierarchical approach that first solves an Integer Linear Programming (ILP) problem. The approach is demonstrated in a power line scenario considering the task of an MRAV performing object handovers, as depicted in Fig. 1, where the mission requirements are expressed as an STL formula. Trajectories account for payload capacity limitations and refilling stations for longer-duration operations. Additionally, a method for computing the initial solution of the optimization problem is proposed. Validation is conducted through numerical simulations in MATLAB, while Gazebo simulations demonstrate the approach's effectiveness in a realistic implementation scenario.
II. PROBLEM DESCRIPTION
This paper aims to improve ergonomic human-robot collaboration by designing a trajectory for an MRAV equipped with a manipulation arm to perform object handovers in a power line setting. To meet the ergonomic requirements, the drone must approach the operator from the front, either from the left or the right, from above or below, and never from behind. Additionally, refilling stations are available for the drone to reload tools. The goal is to complete the mission within a specified maximum time frame while meeting dynamic and capability constraints, avoiding obstacles, and minimizing energy consumption. To simplify the scenario, we assume that the handover location is a 3D space for each operator, that the MRAV can carry only one tool at a time, and that an onboard low-level controller, e.g., [3], [4], manages the handover procedure. A map of the environment, including obstacles, is assumed to be known in advance.
III. PRELIMINARIES
Let us consider a discrete-time dynamical system of an MRAV, $\mathbf{x}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k)$, where $\mathbf{x}_{k+1}, \mathbf{x}_k \in \mathcal{X} \subset \mathbb{R}^n$ are the next and current states, respectively, and $\mathbf{u}_k \in \mathcal{U} \subset \mathbb{R}^m$ is the control input. Let $f : \mathcal{X} \times \mathcal{U} \to \mathcal{X}$ be differentiable in both arguments. With an initial state $\mathbf{x}_0 \in \mathcal{X}_0 \subset \mathbb{R}^n$ and a time vector $\mathbf{t} = (t_0, \dots, t_N)^\top \in \mathbb{R}^{N+1}$, we can define the finite control input sequence $\mathbf{u} = (\mathbf{u}_0, \dots, \mathbf{u}_{N-1})^\top$ that attains the unique sequence of states $\mathbf{x} = (\mathbf{x}_0, \dots, \mathbf{x}_N)^\top$, with sampling period $T_s \in \mathbb{R}_+$ and $N \in \mathbb{N}_+$ samples.
Hence, we define the state and control input sequences for the MRAV as $\mathbf{x} = (\mathbf{p}^{(1)}, \mathbf{v}^{(1)}, \mathbf{p}^{(2)}, \mathbf{v}^{(2)}, \mathbf{p}^{(3)}, \mathbf{v}^{(3)})^\top$ and $\mathbf{u} = (\mathbf{a}^{(1)}, \mathbf{a}^{(2)}, \mathbf{a}^{(3)})^\top$, where $\mathbf{p}^{(j)}$, $\mathbf{v}^{(j)}$, $\mathbf{a}^{(j)}$ are the position, velocity, and acceleration sequences of the vehicle along the $j$-axis of the world frame $\mathcal{F}_W$, respectively. Finally, let us denote by $p_k^{(j)}$, $v_k^{(j)}$, $a_k^{(j)}$, $t_k$ the $k$-th elements of the sequences $\mathbf{p}^{(j)}$, $\mathbf{v}^{(j)}$, $\mathbf{a}^{(j)}$ and of the vector $\mathbf{t}$, respectively.
A. Signal temporal logic

Definition 1 (Signal Temporal Logic): STL is a concise language for describing the temporal behavior of real-valued signals [5]. Unlike traditional planning algorithms [11], all mission specifications can be encapsulated into a single formula $\varphi$. STL's grammar includes temporal operators, such as until ($\mathcal{U}$), always ($\square$), eventually ($\Diamond$), and next ($\bigcirc$), as well as logical operators like conjunction ($\wedge$), disjunction ($\vee$), implication ($\implies$), and negation ($\neg$). These operators act on atomic propositions, which are simple statements or assertions that are either true ($\top$) or false ($\bot$). An STL formula $\varphi$ is considered valid if it evaluates to $\top$, and invalid otherwise. More details are available in [5], [12]. Informally, $\varphi_1 \mathcal{U}_I \varphi_2$ means that $\varphi_2$ must eventually hold within the time interval $I$, while $\varphi_1$ must hold continuously until that point.
Definition 2 (STL Robustness): The satisfaction of an STL formula φ (Def. 1) can be impacted by uncertainties and unexpected events. To ensure a margin of satisfaction, the concept of robust semantics for STL formulae has been developed [5], [12]. This robustness, ρ, is a quantitative metric that guides the optimization process towards finding the best feasible solution for meeting the statement requirements. It is formally defined using the recursive formulae:
\begin{align*}
\rho^{p_i}(\mathbf{x}, t_k) &= \mu_i(\mathbf{x}, t_k), \\
\rho^{\neg\varphi}(\mathbf{x}, t_k) &= -\rho^{\varphi}(\mathbf{x}, t_k), \\
\rho^{\varphi_1 \wedge \varphi_2}(\mathbf{x}, t_k) &= \min\big(\rho^{\varphi_1}(\mathbf{x}, t_k),\, \rho^{\varphi_2}(\mathbf{x}, t_k)\big), \\
\rho^{\varphi_1 \vee \varphi_2}(\mathbf{x}, t_k) &= \max\big(\rho^{\varphi_1}(\mathbf{x}, t_k),\, \rho^{\varphi_2}(\mathbf{x}, t_k)\big), \\
\rho^{\square_I \varphi}(\mathbf{x}, t_k) &= \min_{t_k' \in [t_k + I]} \rho^{\varphi}(\mathbf{x}, t_k'), \\
\rho^{\Diamond_I \varphi}(\mathbf{x}, t_k) &= \max_{t_k' \in [t_k + I]} \rho^{\varphi}(\mathbf{x}, t_k'), \\
\rho^{\bigcirc_I \varphi}(\mathbf{x}, t_k) &= \rho^{\varphi}(\mathbf{x}, t_k'), \text{ with } t_k' \in [t_k + I], \\
\rho^{\varphi_1 \mathcal{U}_I \varphi_2}(\mathbf{x}, t_k) &= \max_{t_k' \in [t_k + I]} \min\Big(\rho^{\varphi_2}(\mathbf{x}, t_k'),\, \min_{t_k'' \in [t_k, t_k']} \rho^{\varphi_1}(\mathbf{x}, t_k'')\Big),
\end{align*}
where $t_k + I$ denotes the Minkowski sum of the scalar $t_k$ and the time interval $I$. The formulae comprise predicates $p_i$ along with their corresponding real-valued functions $\mu_i(\mathbf{x}, t_k)$, each of which is evaluated like a logical formula. Namely, $\mathbf{x}$ satisfies the STL formula $\varphi$ at time $t_k$ (in short, $\mathbf{x}(t_k) \models \varphi$) if $\rho^{\varphi}(\mathbf{x}, t_k) > 0$, and violates it if $\rho^{\varphi}(\mathbf{x}, t_k) \leq 0$.
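As a small worked example of these semantics (the formula and signal values below are ours, chosen purely for illustration):

\[
\varphi = \square_{[0,2]}\,(x > 0), \qquad \mathbf{x} = (1,\, 3,\, 0.5) \;\Longrightarrow\;
\rho^{\varphi}(\mathbf{x}, t_0) = \min_{t_k \in \{t_0, t_1, t_2\}} x_k = \min(1, 3, 0.5) = 0.5 > 0,
\]

so the signal satisfies $\varphi$ with a satisfaction margin of 0.5; had any sample been non-positive, the minimum, and hence the robustness, would have been non-positive as well.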
Each predicate describes part of the mission specifications, and their robustness values indicate how well the specifications are being met. If all predicates are true, the result is a numerical value that indicates to what degree the specification is satisfied. Control inputs that maximize robustness are computed over a set of finite state and input sequences, and the optimal sequence $\mathbf{u}^\star$ is considered valid if $\rho^{\varphi}(\mathbf{x}^\star, t_k)$ is positive.

Definition 3 (Smooth Approximation): Recent research has proposed smooth approximations $\tilde{\rho}^{\varphi}(\mathbf{x}, t_k)$ for the non-smooth and non-convex robustness measure $\rho^{\varphi}(\mathbf{x}, t_k)$, which involves the operators min and max. These approximations can be optimized efficiently using gradient-based methods. One such smooth approximation is the Arithmetic-Geometric Mean (AGM) robustness [13], which we choose as it is more conservative and computationally efficient than the commonly used Log-Sum-Exponential (LSE) [2]. For a full description of the AGM robustness syntax and semantics, see [13].
Definition 4 (STL Motion Planner): By encoding the mission specifications from Sec. II as an STL formula $\varphi$ and replacing its robustness $\rho^{\varphi}(\mathbf{x}, t_k)$ with the smooth approximation $\tilde{\rho}^{\varphi}(\mathbf{x}, t_k)$ (defined in Def. 3), the optimization problem for generating energy-aware trajectories for the MRAV can be defined as [2]:

\begin{equation}
\begin{aligned}
\underset{\mathbf{p}^{(j)}, \mathbf{v}^{(j)}, \mathbf{a}^{(j)}, \boldsymbol{\varepsilon}^{(j)}}{\text{maximize}} \quad & \tilde{\rho}^{\varphi}(\mathbf{p}^{(j)}, \mathbf{v}^{(j)}) - \boldsymbol{\varepsilon}^{(j)\top} Q\, \boldsymbol{\varepsilon}^{(j)} \\
\text{s.t.} \quad & |v_k^{(j)}| \leq \bar{v}^{(j)}, \quad |a_k^{(j)}| \leq \bar{a}^{(j)}, \\
& \| a_k^{(j)\top} a_k^{(j)} \|_2 \leq \varepsilon_k^{(j)\top} \varepsilon_k^{(j)}, \quad \varepsilon_k^{(j)} \geq 0, \\
& \mathcal{S}^{(j)}, \quad \forall k = \{0, 1, \dots, N-1\},
\end{aligned}
\tag{1}
\end{equation}

where $\boldsymbol{\varepsilon} = (\boldsymbol{\varepsilon}^{(1)}, \boldsymbol{\varepsilon}^{(2)}, \boldsymbol{\varepsilon}^{(3)})^\top$ is the sequence of decision variables $\boldsymbol{\varepsilon}^{(j)}$ representing the bound on the squared norm of the MRAV acceleration along each $j$-axis of $\mathcal{F}_W$. Also, $\bar{v}^{(j)}$ and $\bar{a}^{(j)}$ denote the upper limits of velocity and acceleration, respectively, and $\mathcal{S}^{(j)}(p_k^{(j)}, v_k^{(j)}, a_k^{(j)}) = (p_{k+1}^{(j)}, v_{k+1}^{(j)}, a_{k+1}^{(j)})^\top$ are the vehicle motion primitives encoding the splines presented in [2]. The energy minimization passes through the term $\boldsymbol{\varepsilon}^\top Q \boldsymbol{\varepsilon}$, where $Q \in \mathbb{R}^{3N \times 3N}$ is such that $\boldsymbol{\varepsilon}^\top Q \boldsymbol{\varepsilon} \geq 0$.
IV. PROBLEM SOLUTION
In this section, we apply the STL framework from Sec. III to formulate the optimization problem presented in Sec. II as a nonlinear non-convex max-min problem. To solve this problem, we generate an initial guess using a simplified ILP formulation that does not account for obstacles, safety, vehicle dynamics, ergonomics, energy minimization, or time specifications; this simplifies the search for a global solution. We translate the mission requirements, which include performing object handovers with an MRAV under safety and ergonomic constraints, into an STL formula $\varphi$ that accounts for the mission time $T_N$. The STL formula contains two types of specifications: safety requirements that ensure the MRAV stays within a designated area ($\varphi_{ws}$), avoids collisions with objects ($\varphi_{obs}$), and never approaches the operator from behind ($\varphi_{beh}$); and ergonomics-related objectives that require the MRAV to visit each human operator ($\varphi_{han}$), stay with them for a fixed duration $T_{han}$, approach them from the front according to their preferences ($\varphi_{pr}$), and stop at a refilling station for $T_{rs}$ when its onboard supply of tools is depleted ($\varphi_{rs}$). Finally, the MRAV must return to the refilling station after completing the handover operations ($\varphi_{hm}$). All mission requirements can be expressed as:
\begin{align}
\varphi = \; & \square_{[0, T_N]}\big(\varphi_{ws} \wedge \varphi_{obs} \wedge \varphi_{beh}\big) \;\wedge \nonumber \\
& \bigwedge_{q=1}^{han} \Diamond_{[0, T_N - T_{han}]} \Big( \bigvee_{d=1}^{pr} {}_{q,d}\varphi_{pr} \wedge \square_{[0, T_{han}]}\, {}_{q}\varphi_{han} \Big) \;\wedge \nonumber \\
& \bigwedge_{q=1}^{rs} \Diamond_{[0, T_N - T_{rs}]} \big( c(t) = 0 \implies \mathbf{p}(t) \models {}_{q}\varphi_{rs} \big) \;\wedge \nonumber \\
& \bigwedge_{q=1}^{rs} \square_{[1, T_N - 1]} \big( \mathbf{p}(t) \models \varphi_{hm} \implies \mathbf{p}(t+1) \models \varphi_{hm} \big). \tag{2}
\end{align}
with
\[
\varphi_{ws} = \bigwedge_{j=1}^{3} p^{(j)} \in \big(\underline{p}_{ws}^{(j)}, \bar{p}_{ws}^{(j)}\big), \qquad
\varphi_{obs} = \bigwedge_{q=1}^{obs} \bigwedge_{j=1}^{3} p^{(j)} \notin \big({}_{q}\underline{p}_{obs}^{(j)}, {}_{q}\bar{p}_{obs}^{(j)}\big).
\]
Equation (3a) constrains the MRAV's position to remain within the workspace, with minimum and maximum values denoted by $\underline{p}_{ws}^{(j)}$ and $\bar{p}_{ws}^{(j)}$, respectively. Equations (3b), (3c), (3d), (3e), (3f), and (3g) provide guidelines for obstacle avoidance, operator safety, mission completion, handover operations, payload capacity, and human operators' preferences, respectively. The payload capacity is represented by $c(t) \in \{0, 1\}$. The vertices of the rectangular regions identifying obstacles, areas behind the operators, the operators themselves, refilling stations, and human operators' preferences are represented by ${}_{q}p_{obs}^{(j)}$, ${}_{q}p_{rs}$, and ${}_{q,d}p_{pr}^{(j)}$, respectively.

A. Initial guess

The resulting nonlinear, non-convex max-min problem is solved using dynamic programming, which requires a well-chosen initial guess to avoid local optima [14]. The strategy for obtaining an appropriate initial guess for the STL motion planner involves simplifying the original problem to an optimization problem with fewer constraints. The resulting ILP problem assigns human operators to the vehicle and provides a navigation sequence for the MRAV. The initial guess considers the mission requirements and the MRAV payload capacity and refilling operations ($\varphi_{hm}$, $\varphi_{han}$ and $\varphi_{rs}$), but disregards the safety and ergonomy requirements ($\varphi_{ws}$, $\varphi_{obs}$, $\varphi_{beh}$, and $\varphi_{pr}$) and the mission time intervals ($T_N$, $T_{han}$ and $T_{rs}$).

The graph used to formulate the ILP is defined by the tuple $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W}, \mathcal{C})$, where $\mathcal{V}$ is the set of vertices, consisting of human operators ($\mathcal{T}$), refilling stations ($\mathcal{R}$), and the depot ($\mathcal{O}$) where the MRAV is initially located. The numbers of elements in $\mathcal{T}$, $\mathcal{R}$, and $\mathcal{O}$ are represented by $\tau$, $r$, and $\delta$, respectively. The set of edges and their associated weights are represented by $\mathcal{E}$ and $\mathcal{W}$, respectively, where edge weights are modeled using Euclidean distances. To represent the number of times an edge is selected in the ILP solution, an integer variable $z_{ij} \in \mathbb{Z}_{\geq 0}$ is defined for each edge $e_{ij} \in \mathcal{E}$. The variable $z_{ij}$ is limited to the set $\{0, 1\}$ if $\{i, j\} \in \{\mathcal{T}, \mathcal{O}\}$ and to $\{0, 1, 2\}$ if $i \in \mathcal{R}$ and $j \in \mathcal{T}$, which ensures that an edge between two human operators is never traversed twice and that the depot is only used as a starting point. The ILP problem is then formulated as:

\begin{align}
\underset{\mathbf{z}}{\text{minimize}} \quad & \sum_{\{i,j\} \in \mathcal{V},\, i \neq j} w_{ij}\, z_{ij} \tag{4a} \\
\text{s.t.} \quad & \sum_{i \in \mathcal{V},\, i \neq j} z_{ij} = 2, \quad \forall j \in \mathcal{T}, \tag{4b} \\
& \sum_{i \in \mathcal{T}} z_{0i} = 1, \tag{4c} \\
& \sum_{i \in \mathcal{T},\, j \notin \mathcal{T}} z_{ij} \geq 2\, h(\mathcal{T}). \tag{4d}
\end{align}

In the formulated ILP problem, the objective function (4a) minimizes the distance traversed by the MRAV. Constraints (4b), (4c), and (4d) ensure, respectively, that each human operator is visited once, that the MRAV begins at the depot and does not return to it, and that tours neither exceed the payload capacity nor are disconnected from a refilling station, via $h(\mathcal{T})$ [15]. The motion primitives for the MRAV are obtained from the optimal assignment, which is used to generate a dynamically feasible trajectory. The trajectory includes time intervals for handover and refilling ($T_{han}$ and $T_{rs}$), with fixed rest-to-rest motion between operators and maximum values for velocity and acceleration ($\bar{v}^{(j)}$ and $\bar{a}^{(j)}$). Further details on the motion primitives are provided in [2].
V. SIMULATION RESULTS

Numerical simulations in MATLAB were used to validate the planning approach, without including the vehicle dynamics and the trajectory tracking controller. Feasibility was verified in Gazebo through software-in-the-loop simulations [16]. The ILP problem was formulated using the CVX framework, and the STL motion planner used the CasADi library with IPOPT as the solver. Simulations were run on an i7-8565U processor with 32 GB of RAM on Ubuntu 20.04. Illustrative videos of the simulations are available at http://mrs.felk.cvut.cz/stl-ergonomy-energy-aware.
The object handover scenario outlined in Sec. II was used to evaluate the proposed planning strategy. The simulation scenario consisted of a mock-up environment with two human operators, one refilling station, and a single MRAV. The parameters and corresponding values used in the optimization problem are listed in Table I. The heading angle of the MRAV was adjusted by aligning the vehicle with the direction of movement when moving towards the human operator. Once the MRAV reaches the operator, it is assumed that an onboard low-level controller, e.g., [3], [4], handles the handover operation and adjusts the heading angle accordingly. The rectangular regions in which the MRAV was allowed to approach the operators were established taking into account the operators' headings, $\psi_{ho1}$ and $\psi_{ho2}$, as well as their preferred directions of approach ($\varphi_{pr}$). Figure 2 presents a comparison of the energy profiles obtained by considering the operators' preferred approach directions, namely front, right and left, and top to bottom, both with and without the energy term. The energy term is given by $\varepsilon_k^\top Q\, \varepsilon_k \geq 0$ and $\| a_k^{(j)\top} a_k^{(j)} \|_2 \leq \varepsilon^{(j)\top} \varepsilon^{(j)}$, $\varepsilon^{(j)} \geq 0$, as formulated in the problem statement (1). The results demonstrate that the inclusion of the energy term reduces energy consumption by approximately 10%.
VI. CONCLUSIONS

This paper presented a motion planning framework to improve energy-aware human-robot collaboration for an MRAV with payload limitations and dynamic constraints. The proposed approach uses STL specifications to generate safe and ergonomic trajectories while meeting mission time requirements, and an ILP method is introduced to handle the nonlinear non-convex optimization problem. Numerical simulations in MATLAB and realistic simulations in Gazebo confirm the effectiveness of the proposed approach. Future work includes incorporating human operator fatigue and exploring other types of temporal logic languages to adapt the framework to dynamic environments.
Fig. 1: Illustration of an MRAV approaching a human operator, with gray showing a possible STL optimizer output.
Fig. 2: Normalized energy consumption profiles considering different operators' preferred approach directions, including left and right (blue), front (green), and top to bottom (red). From left to right: the data with and without considering the energy term in the STL motion planner.
TABLE I: Parameter values for the optimization problem.
ρ_{□_I φ}(x, t_k) = min_{t′_k ∈ [t_k + I]} ρ_φ(x, t′_k),    ρ_{♢_I φ}(x, t_k) = max_{t′_k ∈ [t_k + I]} ρ_φ(x, t′_k),
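A small numerical sketch of these robustness semantics is given below; the predicate robustness values are toy inputs, not results from the paper.

```python
import numpy as np

def rho_always(rho_phi, t_k, interval):
    """Robustness of 'always': min over the shifted interval [t_k + I]."""
    a, b = interval
    return np.min(rho_phi[t_k + a : t_k + b + 1])

def rho_eventually(rho_phi, t_k, interval):
    """Robustness of 'eventually': max over the shifted interval [t_k + I]."""
    a, b = interval
    return np.max(rho_phi[t_k + a : t_k + b + 1])

# Example: predicate robustness samples over a 10-step horizon.
rho_phi = np.array([0.2, 0.5, -0.1, 0.3, 0.4, 0.6, 0.1, 0.2, 0.3, 0.5])
print(rho_always(rho_phi, t_k=0, interval=(0, 4)))      # -> -0.1 (one violation)
print(rho_eventually(rho_phi, t_k=0, interval=(0, 4)))  # -> 0.5
```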
φ_ws = ⋀_j p^(j) ∈ (p̲_ws^(j), p̄_ws^(j)),    (3a)
min Σ_{{i,j} ∈ V, i ≠ j} w_ij z_ij    (4a)
s.t.  Σ_{i ∈ V, i ≠ j} z_ij = 2,  ∀ j ∈ T,    (4b)
      Σ_{i ∈ T} z_0i = 1,    (4c)
      Σ_{i ∈ T, j ∉ T} z_ij ≥ 2 h(T).    (4d)
φ_obs = ⋀_{q=1}^{obs} p^(j) ∉ (p̲_obs^(j,q), p̄_obs^(j,q)),    (3b)
Equation (3a) constrains the MRAV's position to remain within the workspace, with minimum and maximum values denoted by p̲_ws^(j) and p̄_ws^(j), respectively. Equations (3b), (3c), (3d), (3e), (3f), and (3g) provide guidelines for obstacle avoidance, operator safety, mission completion, handover operations, payload capacity, and human operators' preferences, respectively. The payload capacity is represented by c(t) ∈ {0, 1}. The vertices of the rectangular regions identifying obstacles, areas behind the operators, the operators themselves, refilling stations, and human operators' preferences are represented by the corresponding bounds p̲_obs^(j,q), p̄_obs^(j,q), p̲_rs^(j,q), p̄_rs^(j,q), and p̲_pr^(j,q,d), p̄_pr^(j,q,d), respectively.

A. Initial guess
The resulting nonlinear, non-convex max-min problem is solved using dynamic programming, which requires a well-chosen initial guess to avoid local optima [14]. The strategy for obtaining an appropriate initial guess for the STL motion planner involves simplifying the original problem to an optimization problem with fewer constraints. The resulting ILP problem assigns human operators to the vehicle and provides a navigation sequence for the MRAV. The initial guess considers the mission requirements, the MRAV payload capacity, and refilling operations (φ_hm, φ_han, and φ_rs), but disregards safety and ergonomy requirements (φ_ws, φ_obs, φ_beh, and φ_pr) and mission time intervals (T_N, T_han, and T_rs).

The graph used to formulate the ILP is defined by the tuple G = (V, E, W, C), where V is the set of vertices, consisting of human operators (T), refilling stations (R), and the depot (O) where the MRAV is initially located. The numbers of elements in T, R, and O are represented by τ, r, and δ, respectively. The set of edges and their associated weights are represented by E and W, respectively, where edge weights are modeled using Euclidean distances. To represent the number of times an edge is selected in the ILP solution, an integer variable z_ij ∈ Z_≥0 is defined for each edge e_ij ∈ E. The variable z_ij is limited to the set {0, 1} if {i, j} ∈ {T, O} and to {0, 1, 2} if i ∈ R and j ∈ T, which ensures that an edge between two human operators is never traversed twice and that the depot is only used as a starting point. The ILP problem is then formulated as in (4) above.
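As a concrete illustration, a small instance of the ILP (4a)-(4d) can be assembled with an off-the-shelf MILP front-end. The sketch below uses PuLP with the bundled CBC solver (an assumption; the paper formulates the problem in CVX), enumerates the cut constraints (4d) explicitly for a two-operator instance, and assumes h(S) = 1 for every subset.

```python
import itertools
import pulp

T = [1, 2]              # human operators
R = [3]                 # refilling stations
V = [0] + T + R         # node 0 is the depot
coords = {0: (0, 0), 1: (2, 1), 2: (4, 0), 3: (3, 2)}  # toy positions
w = {(i, j): ((coords[i][0] - coords[j][0]) ** 2
              + (coords[i][1] - coords[j][1]) ** 2) ** 0.5
     for i in V for j in V if i < j}  # undirected Euclidean edge weights

prob = pulp.LpProblem("mrav_assignment", pulp.LpMinimize)
# z_e in {0,1}; edges touching a refilling station may be used twice.
z = {e: pulp.LpVariable(f"z_{e[0]}_{e[1]}", 0,
                        2 if (e[0] in R or e[1] in R) else 1, cat="Integer")
     for e in w}

prob += pulp.lpSum(w[e] * z[e] for e in z)                       # (4a)
for j in T:                                                      # (4b): degree 2
    prob += pulp.lpSum(z[e] for e in z if j in e) == 2
prob += pulp.lpSum(z[(0, i)] for i in T) == 1                    # (4c): leave depot once
for k in range(1, len(T) + 1):                                   # (4d): cut constraints
    for S in itertools.combinations(T, k):
        cut = [e for e in z if (e[0] in S) != (e[1] in S)]
        prob += pulp.lpSum(z[e] for e in cut) >= 2               # h(S) = 1 assumed

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(e, int(z[e].value())) for e in z if z[e].value() > 0])
```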
[1] A. Ollero et al., "Past, Present, and Future of Aerial Robotic Manipulators," IEEE T-RO, vol. 38, no. 1, pp. 626-645, 2022.
[2] G. Silano et al., "Power Line Inspection Tasks With Multi-Aerial Robot Systems Via Signal Temporal Logic Specifications," IEEE RA-L, vol. 6, no. 2, pp. 4169-4176, 2021.
[3] A. Afifi et al., "Toward Physical Human-Robot Interaction Control with Aerial Manipulators: Compliance, Redundancy Resolution, and Input Limits," in IEEE ICRA, 2022, pp. 4855-4861.
[4] G. Corsini et al., "Nonlinear Model Predictive Control for Human-Robot Handover with Application to the Aerial Case," in IEEE IROS, 2022, pp. 7597-7604.
[5] O. Maler et al., "Monitoring temporal properties of continuous signals," in Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems. Springer, 2004, pp. 152-166.
[6] J. R. Medina et al., "A human-inspired controller for fluid human-robot handovers," in IEEE Humanoids, 2016, pp. 324-331.
[7] E. A. Sisbot et al., "A Human-Aware Manipulation Planner," IEEE T-RO, vol. 28, no. 5, pp. 1045-1057, 2012.
[8] L. Peternel et al., "Towards ergonomic control of human-robot co-manipulation and handover," in IEEE Humanoids, 2017, pp. 55-60.
[9] A. Kshirsagar et al., "Specifying and Synthesizing Human-Robot Handovers," in IEEE IROS, 2019, pp. 5930-5936.
[10] M. Webster et al., "An assurance-based approach to verification and validation of human-robot teams," arXiv preprint arXiv:1608.07403, September 2019.
[11] S. M. LaValle, Sampling-Based Motion Planning. Cambridge University Press, 2006.
[12] A. Donzé et al., "Robust satisfaction of temporal logic over real-valued signals," in International Conference on Formal Modeling and Analysis of Timed Systems. Springer, 2010, pp. 92-106.
[13] N. Mehdipour et al., "Arithmetic-Geometric Mean Robustness for Control from Signal Temporal Logic Specifications," in IEEE ACC, 2019, pp. 1690-1695.
[14] D. Bertsekas, Dynamic Programming and Optimal Control. Athena Scientific, 2012.
[15] C. Miller et al., "Integer programming formulation of traveling salesman problems," Journal of the Association for Computing Machinery, vol. 7, pp. 326-329, 1960.
[16] T. Baca et al., "The MRS UAV System: Pushing the Frontiers of Reproducible Research, Real-world Deployment, and Education with Autonomous Unmanned Aerial Vehicles," JINT, vol. 102, no. 26, pp. 1-28, 2021.
| [] |
[
"HeadSculpt: Crafting 3D Head Avatars with Text",
"HeadSculpt: Crafting 3D Head Avatars with Text"
] | [
"Xiao Han \nUniversity of Surrey\n\n",
"Yukang Cao \nThe University of Hong\nKong\n",
"Kai Han \nThe University of Hong\nKong\n",
"Xiatian Zhu \nUniversity of Surrey\n\n\nSurrey Institute for People-Centred AI\n\n",
"Jiankang Deng \nSurrey Joint Research Centre on AI\nImperial College London\n4 iFlyTek\n",
"Yi-Zhe Song \nUniversity of Surrey\n\n",
"Tao Xiang \nUniversity of Surrey\n\n",
"† Kwan-Yee ",
"K Wong \nThe University of Hong\nKong\n"
] | [
"University of Surrey\n",
"The University of Hong\nKong",
"The University of Hong\nKong",
"University of Surrey\n",
"Surrey Institute for People-Centred AI\n",
"Surrey Joint Research Centre on AI\nImperial College London\n4 iFlyTek",
"University of Surrey\n",
"University of Surrey\n",
"The University of Hong\nKong"
] | [] | Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They rely mostly on a pre-trained text-to-image diffusion model whilst missing the necessary 3D awareness and head priors. This makes them prone to inconsistency and geometric distortions in the generated avatars. (2) They fall short in fine-grained editing. This is primarily due to the inherited limitations from the pre-trained 2D image diffusion models, which become more pronounced when it comes to 3D head avatars. In this work, we address these challenges by introducing a versatile coarse-to-fine pipeline dubbed HeadSculpt for crafting (i.e., generating and editing) 3D head avatars from textual prompts. Specifically, we first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding representing the back view appearance of heads, enabling 3D-consistent head avatar generations. We further propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique. This enables identity preservation while following the editing instruction. We showcase HeadSculpt's superior fidelity and editing capabilities through comprehensive experiments and comparisons with existing methods. ‡ Recently, vision-language models (e.g., CLIP [51]) and diffusion models (e.g., Stable Diffusion [65, 57, 55]) have attracted increasing interest. These progresses have led to the emergence of text-to-3D generative models[31,58,41,24]which create 3D content in a self-supervised manner. Notably, DreamFusion [50] introduces a score distillation sampling (SDS) strategy that leverages a pre-trained image diffusion model to compute the noise-level loss from the textual description, unlocking the potential to optimize differentiable 3D scenes (e.g., neural radiance field [42], tetrahedron mesh [62], texture [54, 8], or point clouds [47]) with 2D diffusion prior only. Subsequent research * Equal contributions † Corresponding authors ‡ Webpage: https://brandonhan.uk/HeadSculpt Preprint. Under review. | null | [
"https://export.arxiv.org/pdf/2306.03038v1.pdf"
] | 259,076,344 | 2306.03038 | d7805dc00ad1be0b12ad9034c5ca4a6c2f8589fe |
HeadSculpt: Crafting 3D Head Avatars with Text
Xiao Han
University of Surrey
Yukang Cao
The University of Hong
Kong
Kai Han
The University of Hong
Kong
Xiatian Zhu
University of Surrey
Surrey Institute for People-Centred AI
Jiankang Deng
Surrey Joint Research Centre on AI
Imperial College London
4 iFlyTek
Yi-Zhe Song
University of Surrey
Tao Xiang
University of Surrey
† Kwan-Yee K. Wong
The University of Hong
Kong
HeadSculpt: Crafting 3D Head Avatars with Text
Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They rely mostly on a pre-trained text-to-image diffusion model whilst missing the necessary 3D awareness and head priors. This makes them prone to inconsistency and geometric distortions in the generated avatars. (2) They fall short in fine-grained editing. This is primarily due to the inherited limitations from the pre-trained 2D image diffusion models, which become more pronounced when it comes to 3D head avatars. In this work, we address these challenges by introducing a versatile coarse-to-fine pipeline dubbed HeadSculpt for crafting (i.e., generating and editing) 3D head avatars from textual prompts. Specifically, we first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding representing the back view appearance of heads, enabling 3D-consistent head avatar generations. We further propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique. This enables identity preservation while following the editing instruction. We showcase HeadSculpt's superior fidelity and editing capabilities through comprehensive experiments and comparisons with existing methods.
* Equal contributions. † Corresponding authors. ‡ Webpage: https://brandonhan.uk/HeadSculpt
Preprint. Under review.
Introduction
Modeling 3D head avatars underpins a wide range of emerging applications (e.g., digital telepresence, game character creation, and AR/VR). Historically, the creation of intricate and detailed 3D head avatars demanded considerable time and expertise in art and engineering. With the advent of deep learning, existing works [83,25,30,68,7,35,14] have shown promising results on the reconstruction of 3D human heads from monocular images or videos. However, these methods remain restricted to head appearance contained in their training data which is often limited in size, resulting in the inability to generalize to new appearance beyond the training data. This constraint calls for the need of more flexible and generalizable methods for 3D head modeling. Figure 1: Examples of generation and editing results obtained using the proposed HeadSculpt. It enables the creation and fine-grained editing of high-quality head avatars, featuring intricate geometry and texture, for any type of head avatar using simple descriptions or instructions. Symbols indicate the following prompt prefixes: * "a head of [text]" and † "a DSLR portrait of [text]". The captions in gray are the prompt suffixes while the blue ones are the editing instructions.
Recently, vision-language models (e.g., CLIP [51]) and diffusion models (e.g., Stable Diffusion [65, 57, 55]) have attracted increasing interest. This progress has led to the emergence of text-to-3D generative models [31,58,41,24] which create 3D content in a self-supervised manner. Notably, DreamFusion [50] introduces a score distillation sampling (SDS) strategy that leverages a pre-trained image diffusion model to compute the noise-level loss from the textual description, unlocking the potential to optimize differentiable 3D scenes (e.g., neural radiance field [42], tetrahedron mesh [62], texture [54, 8], or point clouds [47]) with a 2D diffusion prior only. Subsequent research efforts [40,6,61,75,39,71,37,52,72] improve and extend DreamFusion from various perspectives (e.g., higher resolution [36] and better geometry [9]).
Considering the flexibility and versatility of natural languages, one might think that these SDS-based text-to-3D generative methods would be sufficient for generating diverse 3D avatars. However, it is noted that existing methods have two major drawbacks (see Fig. 4): (1) Inconsistency and geometric distortions: The 2D diffusion models used in these methods lack 3D awareness particularly regarding camera pose; without any remedy, existing text-to-3D methods inherited this limitation, leading to the multi-face "Janus" problem in the generated head avatars. (2) Fine-grained editing limitations: Although previous methods propose to edit 3D models by naively fine-tuning trained models with modified prompts [50,36], we find that this approach is prone to biased outcomes, such as identity loss or inadequate editing. This problem arises from two causes: (a) inherent bias in prompt-based editing in image diffusion models, and (b) challenges with inconsistent gradient back-propagation at separate iterations when using SDS calculated from a vanilla image diffusion model.
In this paper, we introduce a new head-avatar-focused text-to-3D method, dubbed HeadSculpt, that supports high-fidelity generation and fine-grained editing. Our method comprises two novel components: (1) Prior-driven score distillation: We first arm a pre-trained image diffusion model with 3D awareness by integrating a landmark-based ControlNet [80]. Specifically, we adopt the parametric 3D head model, FLAME [35], as a prior to obtain a 2D landmark map [38,28], which will serve as an additional condition for the diffusion model, ensuring the consistency of generated head avatars across different views. Further, to remedy the front-view bias in the pre-trained diffusion model, we utilize an improved view-dependent prompt through textual inversion [16], by learning a specialized <back-view> token to emphasize back views of heads and capture their unique visual details.
(2) Identity-aware editing score distillation (IESD): To address the challenges of fine-grained editing for head avatars, we introduce a novel method called IESD. It blends two scores, one for editing and the other for identity preservation, both predicted by a ControlNet-based implementation of InstructPix2Pix [5]. This approach maintains a controlled editing direction that respects both the original identity and the editing instructions. To further improve the fidelity of our method, we integrate these two novel components into a coarse-to-fine pipeline [36], utilizing NeRF [45] as the low-resolution coarse model and DMTET [62] as the high-resolution fine model. As demonstrated in Fig. 1, our method can generate high-fidelity human-like and non-human-like head avatars while enabling fine-grained editing, including local changes, shape/texture modifications, and style transfers.
Related work
Text-to-2D generation. In recent years, groundbreaking vision-language technologies such as CLIP [51] and diffusion models [22,12,55,64] have led to significant advancements in text-to-2D content generation [57,53,1,65,66]. Trained on extensive 2D multimodal datasets [59,60], they are empowered with the capability to "dream" from the prompt. Follow-up works endeavor to efficiently control the generated results [80,81,44], extend the diffusion model to video sequence [63,3], accomplish image or video editing [21,29,77,5,73,13,20], enhance the performance for personalized subjects [56,16], etc. Although significant progress has been made in generating 2D content from text, carefully crafting the prompt is crucial, and obtaining the desired outcome often requires multiple attempts. The inherent randomness remains a challenge, especially for editing tasks.
Text-to-3D generation. Advancements in text-to-2D generation have paved the way for text-to-3D techniques. Early efforts [78,24,41,58,31,26,10] propose to optimize the 3D neural radiance field (NeRF) or vertex-based meshes by employing the CLIP language model. However, these models encounter difficulties in generating expressive 3D content, primarily because of the limitations of CLIP in comprehending natural language. Fortunately, the development of image diffusion models [65,1] has led to the emergence of DreamFusion [50]. It proposes Score Distillation Sampling (SDS) based on a pre-trained 2D diffusion prior [57], showcasing promising generation results. Subsequent works [34] have endeavored to improve DreamFusion from various aspects: Magic3D [36] proposes a coarse-to-fine pipeline for high-resolution generations; Latent-NeRF [40] includes shape guidance for more robust generation on the latent space [55]; DreamAvatar [6] leverages SMPL [4] to generate 3D human full-body avatars under controllable shapes and poses; Fantasia3D [9] disentangles the geometry and texture training with DMTET [62] and PBR texture [46] as their 3D representation; 3DFuse [61] integrates depth control and semantic code sampling to stabilize the generation process. Despite notable progress, current text-to-3D generative models still face challenges in producing view-consistent 3D content, especially for intricate head avatars. This is primarily due to the absence of 3D awareness in text-to-2D diffusion models. Additionally, to the best of our knowledge, there is currently no approach that specifically focuses on editing the generated 3D content, especially addressing the intricate fine-grained editing needs of head avatars.
3D head modeling and creation. Statistical mesh-based models, such as FLAME [35,14], enable the reconstruction of 3D head models from images. However, they struggle to capture fine details like hair and wrinkles. To overcome this issue, recent approaches [7,67,68,48] employ Generative Adversarial Networks (GANs) [43,18,27] to train 3D-aware networks on 2D head datasets and produce 3D-consistent images through latent code manipulation. Furthermore, neural implicit methods [83,15,25,84] introduce implicit and subject-oriented head models based on neural rendering fields [42,45,2].
Recently, text-to-3D generative methods have gained traction, generating high-quality 3D head avatars from natural language using vision-language models [51,65]. Typically, T2P [81] predicts bone-driven parameters of head avatars via a game engine under CLIP guidance [51]. Rodin [76] proposes a roll-out diffusion network to perform 3D-aware diffusion. DreamFace [79] employs a selection strategy in the CLIP embedding space to generate coarse geometry and uses SDS [50] to optimize the UV texture. Despite producing promising results, all these methods require a large amount of data for supervised training and struggle to generalize well to non-human-like avatars. In contrast, our approach relies solely on pre-trained text-to-2D models, generalizes well to out-of-domain avatars, and is capable of performing fine-grained editing tasks.
Methodology
HeadSculpt is a 3D-aware text-to-3D approach that utilizes a pre-trained text-to-2D Stable Diffusion model [65,55] to generate high-resolution head avatars and perform fine-grained editing tasks. As illustrated in Fig. 2, the generation pipeline has two stages: coarse generation via the neural radiance field (NeRF) [45] and refinement/editing using the tetrahedron mesh (DMTET) [62]. Next, we will first introduce the preliminaries that form the basis of our method in Sec. 3.1. We will then discuss the key components of our approach in Sec. 3.2 and Sec. 3.3, including (1) the prior-driven score distillation process via landmark-based ControlNet [80] and textual inversion [16], and (2) identity-aware editing score distillation accomplished in the fine stage using the ControlNet-based InstructPix2Pix [5].
Preliminaries
Score distillation sampling. Recently, DreamFusion [50] proposed score distillation sampling (SDS) to self-optimize a text-consistent neural radiance field (NeRF) based on a pre-trained text-to-2D diffusion model [57]. Due to the unavailability of the Imagen model [57] used by DreamFusion, we employ the latent diffusion model in [55] instead. Specifically, given a latent feature z_t encoded from an image x, the SDS strategy adds random noise ϵ to z_t and utilizes the pre-trained denoising function ϵ_ϕ(z_t; y, t) to predict the added noise. The SDS loss is defined as the difference between the predicted and added noise, and its gradient is given by
∇_θ L_SDS(ϕ, g(θ)) = E_{t, ϵ∼N(0,1)} [ w(t) (ϵ_ϕ(z_t; y, t) − ϵ) ∂z/∂x · ∂x/∂θ ],    (1)
where y is the text embedding, w(t) weights the loss from noise level t. With the expressive text-to-2D diffusion model and self-supervised SDS loss, we can back-propagate the gradients to optimize an implicit 3D scene g(θ), eliminating the need for an expensive 3D dataset.
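For concreteness, a minimal PyTorch-style sketch of this update is given below; it is not the authors' released code. The names unet and scheduler are assumed to follow HuggingFace Diffusers conventions, and the gradient of the surrogate loss with respect to the latents reproduces w(t)(ϵ_ϕ(z_t; y, t) − ϵ) from Eq. (1).

```python
import torch

def sds_loss(unet, scheduler, latents, text_emb, w=lambda t: 1.0):
    """Surrogate loss whose gradient w.r.t. `latents` matches Eq. (1)."""
    t = torch.randint(20, 980, (1,), device=latents.device)
    eps = torch.randn_like(latents)
    z_t = scheduler.add_noise(latents, eps, t)        # forward diffusion q(z_t|z)
    with torch.no_grad():                             # the 2D prior stays frozen
        eps_pred = unet(z_t, t, encoder_hidden_states=text_emb).sample
    # d/dz 0.5 * ||z - target||^2 = w(t) * (eps_pred - eps)
    target = (latents - w(t) * (eps_pred - eps)).detach()
    return 0.5 * torch.nn.functional.mse_loss(latents, target, reduction="sum")
```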
3D scene optimization. HeadSculpt explores the potential of two different 3D differentiable representations as the optimization basis for crafting 3D head avatars. Specifically, we employ NeRF [45] in the coarse stage due to its greater flexibility in geometry deformation, while utilizing DMTET [62] in the fine stage for efficient high-resolution optimization.
(1) 3D prior-based NeRF. DreamAvatar [6] recently proposed a density-residual setup to enhance the robustness of the generated 3D NeRF. Given a point x inside the 3D volume, we can derive its density and color value based on a prior-based density field σ̄:
F(x, σ̄) = F_θ(γ(x)) + (σ̄(x), 0) → (σ, c),    (2)
where γ(·) denotes a hash-grid frequency encoder [45], and σ and c are the density and RGB color, respectively. We can derive σ̄ from the signed distance d(x) of a given 3D shape prior (e.g., a canonical FLAME model [35] in our implementation):
σ̄(x) = max(0, softplus^{-1}(τ(x))),  τ(x) = (1/a) · sigmoid(−d(x)/a),  where a = 0.005.    (3)
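A minimal sketch of Eq. (3) is given below; the spherical signed-distance function stands in for the canonical FLAME surface and is purely illustrative.

```python
import torch

def prior_density(x, signed_distance, a=0.005):
    tau = torch.sigmoid(-signed_distance(x) / a) / a     # tau(x) in Eq. (3)
    # inverse softplus: softplus^{-1}(y) = log(exp(y) - 1) = y + log(1 - exp(-y))
    inv_softplus = tau + torch.log(-torch.expm1(-tau))
    return torch.clamp(inv_softplus, min=0.0)            # bar-sigma(x)

sphere_sdf = lambda p: p.norm(dim=-1) - 1.0              # toy stand-in for FLAME
x = torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])     # inside / outside points
print(prior_density(x, sphere_sdf))                       # high density / zero
```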
To obtain a 2D RGB image from the implicit volume defined above, we employ a volume rendering technique that involves casting a ray r from the 2D pixel location into the 3D scene, sampling points µ_i along the ray, and calculating their density and color values using F in Eq. (2):
C(r) = Σ_i W_i c_i,  W_i = α_i ∏_{j<i} (1 − α_j),  α_i = 1 − exp(−σ_i ∥µ_i − µ_{i+1}∥).    (4)
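The quadrature in Eq. (4) for a single ray can be written compactly as follows; shapes and sample values are illustrative.

```python
import torch

def render_ray(sigma, color, pts):
    """Discrete quadrature of Eq. (4). sigma: (N,), color: (N, 3),
    pts: (N, 3) samples ordered along the ray."""
    delta = (pts[1:] - pts[:-1]).norm(dim=-1)                    # ||mu_i - mu_{i+1}||
    alpha = 1.0 - torch.exp(-sigma[:-1] * delta)                 # alpha_i
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]),
                                     1.0 - alpha[:-1]]), dim=0)  # prod_{j<i}(1 - a_j)
    weights = alpha * trans                                      # W_i
    return (weights[:, None] * color[:-1]).sum(dim=0)            # C(r)

pts = torch.linspace(0, 1, 64)[:, None] * torch.tensor([[0.0, 0.0, 1.0]])
print(render_ray(torch.full((64,), 5.0), torch.rand(64, 3), pts))
```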
(2) DMTET. It discretizes a deformable tetrahedral grid (V_T, T), where V_T denotes the vertices within the grid T [17,62], to model the 3D space. Every vertex v_i ∈ V_T ⊂ ℝ³ possesses a signed distance value s_i ∈ ℝ, along with a position offset ∆v_i ∈ ℝ³ of the vertex relative to its initial canonical coordinates. Subsequently, the underlying mesh can be extracted based on s_i with the differentiable marching tetrahedra algorithm. In addition to the geometry, we adopt the Magic3D approach [36] to construct a neural color field. This involves re-utilizing the MLP trained in the coarse NeRF stage to predict the RGB color value for each 3D point. During optimization, we render this textured surface mesh into high-resolution images using a differentiable rasterizer [33,46].
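A minimal sketch of this DMTET parameterization (per-vertex SDF values s_i and offsets ∆v_i, optimized directly, as also noted in Appendix A) might look as follows; the random grid is a stand-in for the actual tetrahedral grid.

```python
import torch

verts = torch.rand(100_000, 3)                            # canonical vertices V_T (toy)
sdf = torch.nn.Parameter(0.1 * torch.randn(len(verts)))   # s_i, one per vertex
offset = torch.nn.Parameter(torch.zeros(len(verts), 3))   # delta v_i, one per vertex
deformed = verts + offset       # vertices fed to differentiable marching tetrahedra
opt = torch.optim.Adam([sdf, offset], lr=1e-3)            # optimized directly
```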
3D-Prior-driven score distillation
Existing text-to-3D methods with SDS [50] assume that maximizing the likelihood of images rendered from various viewpoints of a scene model g(·) is equivalent to maximizing the overall likelihood of g(·). This assumption can result in inconsistencies and geometric distortions [50,61]. A notable issue is the "Janus problem" characterized by multiple faces on a single object (see Fig. 4). There are two possible causes: (1) the randomness of the diffusion model which can cause inconsistencies among different views, and (2) the lack of 3D awareness in controlling the generation process, causing the model to struggle in determining the front view, back view, etc. To address these issues in generating head avatars, we integrate 3D head priors into the diffusion model.
Landmark-based ControlNet. In Section 3.1, we explain our adoption of FLAME [35] as the density guidance for our NeRF. Nevertheless, this guidance by itself is insufficient to have a direct impact on the SDS loss. What is missing is a link between the NeRF and the diffusion model, incorporating the same head priors. Such a link is key to improving the view consistency of the generated head avatars.
To achieve this objective, as illustrated in Fig. 2, we propose the incorporation of 2D landmark maps as an additional condition for the diffusion model using ControlNet [80]. Specifically, we employ a ControlNet C trained on a large-scale 2D face dataset [82,11], using facial landmarks rendered from MediaPipe [38,28] as ground-truth data. When given a randomly sampled camera pose π, we first project the vertices of the FLAME model onto the image. Following that, we select and render some of these vertices into a landmark map P π based on some predefined vertex indexes. The landmark map will be fed into ControlNet and its output features are added to the intermediate features within the diffusion U-Net. The gradient of our SDS loss can be re-written as
∇_θ L_SDS(ϕ, g(θ)) = E_{t, ϵ∼N(0,1), π} [ w(t) (ϵ_ϕ(z_t; y, t, C(P_π)) − ϵ) ∂z/∂x · ∂x/∂θ ].    (5)
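For illustration, a simplified pinhole-projection sketch of rendering P_π from selected FLAME vertices is given below; in practice the condition follows the MediaPipe landmark topology [38,28], and the intrinsics K and extrinsics (R, t) are assumptions.

```python
import numpy as np

def landmark_map(flame_verts, landmark_idx, K, R, t, size=512):
    """Rasterize selected FLAME vertices into a binary landmark image P_pi."""
    pts = flame_verts[landmark_idx]          # predefined vertex indexes
    cam = R @ pts.T + t[:, None]             # world -> camera frame
    uv = (K @ cam).T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    img = np.zeros((size, size), dtype=np.float32)
    for u, v in np.round(uv).astype(int):
        if 0 <= u < size and 0 <= v < size:
            img[v, u] = 1.0                  # splat the landmark pixel
    return img
```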
Enhanced view-dependent prompt via textual inversion. Although the landmark-based ControlNet can inject 3D awareness into the pre-trained diffusion models, it struggles to maintain back-view head consistency. This is expected, as the 2D image dataset used for training mostly contains only front or side face views. Consequently, when applied directly to back views, the model introduces ambiguity, as front and back 3D landmark views can appear similar, as shown in Fig. 6. To address this issue, we propose a simple yet effective method. Our method is inspired by previous works [50,61,36] which found it beneficial to append view-dependent text (e.g., "front view", "side view" or "back view") to the provided input text based on the azimuth angle of the randomly sampled camera. We extend this idea by learning a special token <back-view> to replace the plain text "back view" in order to emphasize the rear appearance of heads. This is based on the assumption that a pre-trained Stable Diffusion does have the ability to "imagine" the back view of a head; it has seen some during training. The main problem is that a generic text embedding of "back view" is inadequate in telling the model what appearance it entails. A better embedding for "back view" is thus required. To this end, we first randomly download 34 images of the back view of human heads, without revealing any personal identities, to construct a tiny dataset D, and then we optimize the special token v (i.e., <back-view>) to better fit the collected images, similar to textual inversion [16]:
v* = arg min_v E_{t, ϵ∼N(0,1), z∼D} ∥ϵ − ϵ_ϕ(z_t; v, t)∥_2^2,    (6)
which is achieved by employing the same training scheme as the original diffusion model, while keeping ϵ_ϕ fixed. This constitutes a reconstruction task, which we anticipate will encourage the learned embedding to capture the fine visual details of the back views of human heads. Notably, as we do not update the weights of ϵ_ϕ, it stays compatible with the landmark-based ControlNet.
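A minimal Diffusers-style sketch of this optimization is given below; it is a simplification of the recipe referenced in Appendix A.2, and all argument names (e.g., back_view_latents, prompt_ids) are placeholders, not an official API.

```python
import torch

def learn_back_view_token(tokenizer, text_encoder, unet, scheduler,
                          back_view_latents, prompt_ids, steps=3000, lr=5e-4):
    """Optimize only the <back-view> embedding row; the diffusion model is frozen."""
    tokenizer.add_tokens(["<back-view>"])
    text_encoder.resize_token_embeddings(len(tokenizer))
    token_id = tokenizer.convert_tokens_to_ids("<back-view>")
    emb = text_encoder.get_input_embeddings().weight        # (vocab, dim)
    opt = torch.optim.AdamW([emb], lr=lr)
    mask = torch.arange(emb.shape[0]) != token_id           # rows to freeze
    for step in range(steps):
        z = back_view_latents[step % len(back_view_latents)][None]
        t = torch.randint(0, 1000, (1,), device=z.device)
        eps = torch.randn_like(z)
        z_t = scheduler.add_noise(z, eps, t)                # reconstruction task
        cond = text_encoder(prompt_ids)[0]                  # embeds <back-view>
        loss = torch.nn.functional.mse_loss(
            unet(z_t, t, encoder_hidden_states=cond).sample, eps)
        loss.backward()
        emb.grad[mask] = 0                                  # update only the new row
        opt.step(); opt.zero_grad()
```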
Identity-aware editing score distillation
After generating avatars, editing them to fulfill particular requirements poses an additional challenge. Previous works [50,36] have shown promising editing results by fine-tuning a trained scene model with a new target prompt. However, when applied to head avatars, these methods often suffer from identity loss or inadequate appearance modifications (see Fig. 7). This problem stems from the inherent constraint of the SDS loss, where the 3D models often sacrifice prominent features to preserve view consistency. Substituting Stable Diffusion with InstructPix2Pix [5,19] might seem like a simple solution, but it also faces difficulties in maintaining facial identity during editing based only on instructions, as it lacks a well-defined anchor point.
To this end, we propose identity-aware editing score distillation (IESD) to regulate the editing direction by blending two predicted scores, i.e., one for the editing instruction and another for the original description. Rather than using the original InstructPix2Pix [5], we employ a ControlNet-based InstructPix2Pix I [80] trained on the same dataset, ensuring compatibility with our landmark-based ControlNet C and the learned <back-view> token. Formally, given an initial textual prompt y describing the avatar to be edited and an editing instruction ŷ, we first input them separately into the same diffusion model equipped with two ControlNets, I and C. This allows us to obtain two predicted noises, which are then combined using a predefined hyper-parameter ω_e, akin to classifier-free diffusion guidance (CFG) [23]:
∇_θ L_IESD(ϕ, g(θ)) = E_{t, ϵ∼N(0,1), π} [ w(t) (ε̂_ϕ(z_t; y, ŷ, t, C(P_π), I(M_π)) − ϵ) ∂z/∂x · ∂x/∂θ ],    (7)
with ε̂_ϕ(z_t; y, ŷ, t, C(P_π), I(M_π)) = ω_e · ϵ_ϕ(z_t; ŷ, t, C(P_π), I(M_π)) + (1 − ω_e) · ϵ_ϕ(z_t; y, t, C(P_π), I(M_π)),
where P_π and M_π represent the 2D landmark maps and the reference images rendered in the coarse stage, both obtained under the sampled camera pose π. The parameter ω_e governs a trade-off between the original appearance and the desired editing; it defaults to 0.6 in our experiments.
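For concreteness, the blended score ε̂_ϕ of Eq. (7) can be sketched as follows; ip2p_unet stands for the ControlNet-based InstructPix2Pix branch I with the landmark features C(P_π) already injected, and all names are illustrative.

```python
import torch

def iesd_noise(ip2p_unet, z_t, t, y_emb, y_edit_emb, ref, w_e=0.6):
    """Blend of Eq. (7): w_e steers towards the instruction, (1 - w_e)
    towards the original description, preserving identity."""
    eps_edit = ip2p_unet(z_t, t, y_edit_emb, ref)   # score for the instruction
    eps_id = ip2p_unet(z_t, t, y_emb, ref)          # score for the description
    return w_e * eps_edit + (1.0 - w_e) * eps_id
```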
Experiments
We will now assess the efficacy of our HeadSculpt across different scenarios, while also conducting a comparative analysis against state-of-the-art text-to-3D generation pipelines.
Figure 4: Comparison with existing text-to-3D methods. Unlike other methods that struggle or fail to generate reasonable results, our approach consistently achieves high-quality geometry and texture, yielding superior results. *Non-official implementation. † Generated from the online website demo.
Implementation details. HeadSculpt builds upon Stable-DreamFusion [69] and Huggingface Diffusers [74,49]. We utilize version 1.5 of Stable Diffusion [65] and version 1.1 of ControlNet [80,11] in our implementation. In the coarse stage, we optimize our 3D model at a 64 × 64 grid resolution, while using a 512 × 512 grid resolution for the fine stage (refinement or editing). Typically, each text prompt requires approximately 7,000 iterations for the coarse stage and 5,000 iterations for the fine stage. It takes around 1 hour for each stage on a single Tesla V100 GPU with a default batch size of 4. We use the Adam [32] optimizer with a fixed learning rate of 0.001. Additional implementation details can be found in the supplementary material.
Baseline methods. We compare with five baselines: DreamFusion [69], Latent-NeRF [40], 3DFuse [61] (improved version of SJC [75]), Fantasia3D [9], and DreamFace [79]. Since official implementations for DreamFusion and Fantasia3D are not yet available, we employ their respective open-source counterparts, i.e., Stable-DreamFusion [69] and Fantasia3D.unofficial [70]. We do not directly compare with DreamAvatar [6] as it involves deformation fields for full-body-related tasks.
Qualitative evaluations
Head avatar generation and editing with various prompts. In Fig. 1, we show a diverse array of 3D head avatars generated by HeadSculpt, consistently demonstrating high-quality geometry and texture across various viewpoints. Our method's versatility is emphasized by its ability to create an assortment of avatars, including humans (both celebrities and ordinary individuals) as well as non-human characters like superheroes, comic/game characters, paintings, and more. Additionally, HeadSculpt's adaptability is showcased through its ability to perform fine-grained editing, such as local changes (e.g., adding accessories or altering expressions), shape and texture modifications, and style transfers.
Head avatar editing with different edit scales. In Fig. 3, we demonstrate the effectiveness of IESD with different ω_e values, highlighting its ability to control editing influence on the reference identity.
Comparison with SOTA methods. We provide qualitative comparisons with existing SOTA methods in Fig. 4. We employ the same FLAME model for Latent-NeRF [40] to compute their sketch-guided loss and for Fantasia3D [70] as the initial geometry.
The following observations can be made: (1) All baselines tend to be more unstable during training than ours, often resulting in diverged training processes; (2) Latent-NeRF occasionally produces plausible results due to its use of the shape prior, but its textures are inferior to ours since optimization occurs solely in the latent space; (3) Despite 3DFuse's depth control to mitigate the Janus problem, it still struggles to generate 3D consistent head avatars; (4) While Fantasia3D can generate a mesh-based 3D avatar, its geometry is heavily distorted, as its disentangled geometry optimization might be insufficient for highly detailed head avatars; (5) Although DreamFace generates realistic human face textures, it falls short in generating (i) complete heads, (ii) intricate geometry, (iii) non-human-like appearance, and (iv) composite accessories. In comparison, our method consistently yields superior results in both geometry and texture with much better consistency for the given prompt.
User study
To further assess the quality of the generated results, we conduct a user study with the participation of 20 volunteers. The four baselines [69,70,61,40] and our HeadSculpt are compared based on three dimensions: (1) consistency with the text, (2) texture quality, and (3) geometry quality. Volunteers are presented with 20 randomly selected generated results in the form of rendered rotating videos and asked to assign a score from 1 (worst) to 5 (best) for each example and criterion. The results, shown in Fig. 5, indicate that our method achieved the highest rank in all three aspects by large margins.
Further analysis
Effectiveness of prior-driven score distillation. In Fig. 6, we conduct ablation studies to examine the impact of the proposed landmark control and textual inversion priors in our method. We demonstrate this on the coarse stage because the refinement and editing results heavily depend on this stage. The findings show that landmark control is essential for generating spatially consistent head avatars. Without it, the optimized 3D avatar faces challenges in maintaining consistent facial views, particularly for non-human-like characters. Moreover, textual inversion is shown to be another vital component in mitigating the Janus problem, specifically for the back view, as landmarks cannot exert control on the rear view. Overall, the combination of both components enables HeadSculpt to produce view-consistent avatars with high-quality geometry.
Effectiveness of IESD. We assess IESD's efficacy for fine-grained 3D head avatar editing by comparing it with various alternatives, since no dedicated method exists for this: (B1) One-step optimization on the coarse stage without initialization; (B2) Initialized from the coarse stage, followed by optimization of another coarse stage with an altered description; (B3) Initialized from the coarse stage, followed by optimization of a new fine stage with an altered description; (B4) Initialized from the coarse stage, followed by optimization of a new fine stage with an instruction based on the vanilla InstructPix2Pix [5]; (B5) Ours without edit scale (i.e., ω_e = 1). Notably, B2 represents the editing method proposed in DreamFusion [50], while B3 has a similar performance as Magic3D [36], which employs a three-stage editing process (i.e., Coarse + Coarse + Fine).
In Fig. 7, we present two common biased editing scenarios produced by the baseline methods: insufficient editing and loss of identity. With Stable Diffusion, specific terms like "Saul Goodman" and "skull" exert a more substantial influence on the text embeddings compared to other terms, such as "older" and "Vincent van Gogh". B1, B2, and B3, all based on vanilla Stable Diffusion, inherit such bias in their generated 3D avatars. Although B4 does not show such bias, it faces two other issues: (1) the Janus problem reemerges due to incompatibility between vanilla InstructPix2Pix and the proposed prior-driven score distillation; (2) it struggles to maintain facial identity during editing based solely on instructions, lacking a well-defined anchor point. In contrast, B5 employs ControlNet-based InstructPix2Pix [80] with the proposed prior score distillation, resulting in more view-consistent editing. Additionally, our IESD further uses the proposed edit scale to merge two predicted scores, leading to better identity preservation and more effective editing. This approach allows our method to overcome the limitations faced by the alternative solutions, producing high-quality 3D avatars with improved fine-grained editing results.
Conclusions
We have introduced HeadSculpt, a novel pipeline for generating high-resolution 3D human avatars and performing identity-aware editing tasks through text. We proposed to utilize a prior-driven score distillation that combines a landmark-based ControlNet and view-dependent textual inversion to address the Janus problem. We also introduced identity-aware editing score distillation that preserves both the original identity information and the editing instruction. Extensive evaluations demonstrated that our HeadSculpt produces high-fidelity results under various scenarios, outperforming state-ofthe-art methods significantly.
Limitations. Although HeadSculpt produces new SOTA results, we notice certain limitations: (1) non-deformable results hinder further extensions and applications in audio- or video-driven problems; (2) generated textures of human faces are less realistic than those of conventional supervised face reconstruction methods; (3) some inherited biases from Stable Diffusion [65] still remain, e.g., the generated Asian heads are highly stereotyped; and (4) limitations inherited from InstructPix2Pix [5], e.g., the inability to perform large spatial manipulations.
Societal impact. The advancements in geometry and texture generation for human head avatars could be deployed in many AR/VR use cases but also raise concerns about their potential malicious use. We encourage responsible research and application, fostering open and transparent practices.
B Further analysis
B.1 Effectiveness of textual inversion on 2D generation
To show the effectiveness of the learned <back-view> token, we conduct an analysis of its control capabilities in the context of 2D generation results. Specifically, we compare two generation results using Stable Diffusion [65], with both experiments sharing the same random seed. One experiment has the plain text prompt appended with the plain phrase "back view," while the other experiment utilizes the learned special token <back-view> in the prompt. We present a selection of randomly generated results in Fig. 9. The observations indicate that the <back-view> token effectively influences the pose of the generated heads towards the back, resulting in a distinct appearance. Remarkably, the <back-view> token demonstrates a notable generalization ability, as evidenced by the Batman case, despite not having been trained specifically on back views of Batman in the textual inversion process.
B.2 Inherent bias in 2D diffusion models
In our main paper, we discussed the motivation behind our proposed identity-aware editing score distillation (IESD), which can be attributed to two key factors. Firstly, the limitations of prompt-based editing [50,36] are due to the inherent bias present in Stable Diffusion (SD). Secondly, while InstructPix2Pix (IP2P) [5] offers a solution by employing instruction-based editing to mitigate bias, it often results in identity loss. To further illustrate this phenomenon, we showcase the biased 2D outputs of SD and ControlNet-based IP2P in Fig. 10. Modified descriptions and instructions are utilized in these respective methods to facilitate the editing process and achieve the desired results. The results provide clear evidence of the following: (1) SD generates biased outcomes, with a tendency to underweight the "older" aspect and overweight the "skull" aspect in the modified description; (2) IP2P demonstrates the ability to edit the image successfully, but it faces challenges in preserving the identity of the avatar.
The aforementioned inherent biases are amplified in the domain of 3D generation (refer to Fig. 7 in the main paper) due to the optimization process guided by the SDS loss, which tends to prioritize view consistency at the expense of sacrificing prominent features. To address this issue, our proposed IESD approach combines two types of scores: one for editing and the other for identity preservation. This allows us to strike a balance between preserving the initial appearance and achieving the desired editing outcome.
Figure 9: Analysis of the learned <back-view> on 2D image generation. For each pair of images, we present two 2D images generated with the same random seed, where the left image is conditioned on the plain text "back view" and the right image is conditioned on the <back-view> token.
Figure 10: Analysis of the inherent bias in 2D diffusion models. For each case, we display several 2D outputs of SD and IP2P, utilizing modified descriptions and instructions, respectively, with reference images from our coarse-stage NeRF model to facilitate the editing process. (Second case: modified description "a DSLR portrait skull of Vincent van Gogh"; instruction "turn his face into a skull".)
C Additional qualitative comparisons
We provide more qualitative comparisons with four baseline methods [69,40,61,70] in Fig. 11 and Fig. 12. These results serve to reinforce the claims made in Sec. 4.1 of the main paper, providing further evidence of the superior performance of our HeadSculpt in generating high-fidelity head avatars. These results showcase the ability of our method to capture intricate details, realistic textures, and overall visual quality, solidifying its position as a state-of-the-art solution for this task.
Notably, to provide a more immersive and comprehensive understanding of our results, we include multiple outcomes of our HeadSculpt in the form of 360 • rotating videos. These videos can be accessed in the accompanying HTML file, enabling viewers to observe the generated avatars from various angles and perspectives.
Figure 2: Overall architecture of HeadSculpt. We craft high-resolution 3D head avatars in a coarse-to-fine manner. (a) We optimize neural field representations for the coarse model. (b) We refine or edit the model using the extracted 3D mesh and apply identity-aware editing score distillation if editing is the target. (c) The core of our pipeline is the prior-driven score distillation, which incorporates landmark control, enhanced view-dependent prompts, and an InstructPix2Pix branch.
Figure 3: Impact of the edit scale ω_e in IESD. It balances the preservation of the initial appearance and the extent of the desired editing, making the editing process more controllable and flexible.
Figure 5: User study.
[Figure 6 panel labels: HeadSculpt (Ours); w/o Landmark Ctrl; w/o Textual Inversion. Prompts: a head of Woody in the Toy Story; a head of Walter White, wearing a bowler hat; a head of Bumblebee in Transformers; a head of Mario in Mario Franchise.]
Figure 6: Analysis of prior-driven score distillation.
Figure 7: Analysis of identity-aware editing score distillation.
Figure 8: Samples of the tiny dataset collected for learning the <back-view> token.
Figure 11: Additional comparisons with existing text-to-3D methods (Part 1). *Non-official.
Figure 12: Additional comparisons with existing text-to-3D methods (Part 2). *Non-official.
[Figure 9 image grid: for each prompt, three image pairs sharing a random seed, with the left image conditioned on the plain text "back view" and the right on the <back-view> token. Prompts and seeds: a DSLR portrait of Obama (413, 16772, 40805); a DSLR portrait of Hillary Clinton (50682, 93440, 96458); a DSLR portrait of a boy with facial painting (2367, 19656, 62156); a DSLR portrait of Batman (53236, 62424, 72649).]
[Figure 10 image grid: modified description "a DSLR portrait of +[older] Saul Goodman"; instruction "make him older". Panels: Landmark Map, Stable Diffusion outputs, Reference Image, InstructPix2Pix outputs; seeds: 19056, 72854, 50233, 64136, 5427, 91282, 60104, 88141.]
[Figure 11 grid: rows DreamFusion* [69], Latent-NeRF [40], 3DFuse [61], Fantasia3D* [70], HeadSculpt (Ours); prompts: a DSLR portrait of Batman; a DSLR portrait of Black Panther in Marvel; a DSLR portrait of Two-face in DC; a DSLR portrait of Doctor Strange; a head of Terracotta Army.]
[Figure 12 grid: rows DreamFusion* [69], Latent-NeRF [40], 3DFuse [61], Fantasia3D* [70], HeadSculpt (Ours); prompts: a head of Simpson in the Simpsons; a head of Naruto Uzumaki; a DSLR portrait of Napoleon Bonaparte; a DSLR portrait of Leo Tolstoy; a DSLR portrait of Audrey Hepburn; a DSLR portrait of Obama with a baseball cap; a DSLR portrait of Taylor Swift.]
https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion
A Implementation details
A.1 Details about 3D scene models
In the coarse stage, we make use of the grid frequency encoder γ(·) from the publicly available Stable-DreamFusion [69]. This encoder maps the input x ∈ ℝ³ to a higher-frequency dimension, yielding γ(x) ∈ ℝ³². The MLP within our NeRF model consists of three layers with dimensions [32, 64, 64, 3+1+3]. Here, the output channels '3', '1', and '3' represent the predicted normals, density value, and RGB colors, respectively. In the fine stage, we directly optimize the signed distance value s_i ∈ ℝ, along with a position offset ∆v_i ∈ ℝ³, for each vertex v_i. We found that fitting s_i and v_i into an MLP, as done by Fantasia3D [70], often leads to diverged training. To ensure easy reproducibility, we have included all the hyperparameters used in our experiments in Tab. 1. The other hyper-parameters are set to the defaults of Stable-DreamFusion [69].
A.2 Details about textual inversion
In the main paper, we discussed the collection of a tiny dataset consisting of 34 images depicting the back view of heads. This dataset was used to train a special token, <back-view>, to address the ambiguity associated with the back view of landmarks. The images in the dataset were selected to encompass a diverse range of gender, color, age, and other characteristics. A few samples from the dataset are shown in Fig. 8. While our simple selection strategy has proven effective in our specific case, we believe that a more refined collection process could further enhance the controllability of the learned <back-view> token. We use the default training recipe provided by HuggingFace Diffusers, which took us 1 hour on a single Tesla V100 GPU.
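Referring back to A.1, a minimal PyTorch sketch of the described coarse-stage MLP head (layer dimensions [32, 64, 64, 3+1+3]) could look as follows; the activation choice is an assumption, as it is not specified above.

```python
import torch.nn as nn

class CoarseHead(nn.Module):
    """Maps the 32-dim hash-grid encoding to normals (3), density (1), RGB (3)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 3 + 1 + 3),          # normals | density | color
        )

    def forward(self, feat):                   # feat: (B, 32) encoded points
        out = self.net(feat)
        return out[:, :3], out[:, 3:4], out[:, 4:]
```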
[1] Y. Balaji et al., "eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers," arXiv preprint arXiv:2211.01324, 2022.
[2] J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman, "Mip-NeRF 360: Unbounded anti-aliased neural radiance fields," in CVPR, 2022.
[3] A. Blattmann, R. Rombach, H. Ling, T. Dockhorn, S. W. Kim, S. Fidler, and K. Kreis, "Align your latents: High-resolution video synthesis with latent diffusion models," in CVPR, 2023.
[4] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black, "Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image," in ECCV, 2016.
[5] T. Brooks, A. Holynski, and A. A. Efros, "InstructPix2Pix: Learning to follow image editing instructions," in CVPR, 2023.
[6] Y. Cao, Y.-P. Cao, K. Han, Y. Shan, and K.-Y. K. Wong, "DreamAvatar: Text-and-shape guided 3D human avatar generation via diffusion models," arXiv preprint arXiv:2304.00916, 2023.
[7] E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. Guibas, J. Tremblay, S. Khamis, T. Karras, and G. Wetzstein, "Efficient geometry-aware 3D generative adversarial networks," in CVPR, 2022.
[8] D. Z. Chen, Y. Siddiqui, H.-Y. Lee, S. Tulyakov, and M. Nießner, "Text2Tex: Text-driven texture synthesis via diffusion models," arXiv preprint arXiv:2303.11396, 2023.
[9] R. Chen, Y. Chen, N. Jiao, and K. Jia, "Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation," arXiv preprint arXiv:2303.13873, 2023.
[10] Y. Chen, R. Chen, J. Lei, Y. Zhang, and K. Jia, "TANGO: Text-driven photorealistic and robust 3D stylization via lighting decomposition," in NeurIPS, 2022.
[11] CrucibleAI, "ControlNetMediaPipeFace," https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace, 2023.
[12] P. Dhariwal and A. Nichol, "Diffusion models beat GANs on image synthesis," in NeurIPS, 2021.
[13] P. Esser, J. Chiu, P. Atighehchian, J. Granskog, and A. Germanidis, "Structure and content-guided video synthesis with diffusion models," arXiv preprint arXiv:2302.03011, 2023.
[14] Y. Feng, H. Feng, M. J. Black, and T. Bolkart, "Learning an animatable detailed 3D face model from in-the-wild images," ACM Transactions on Graphics (ToG), 2021.
[15] G. Gafni, J. Thies, M. Zollhofer, and M. Nießner, "Dynamic neural radiance fields for monocular 4D facial avatar reconstruction," in CVPR, 2021.
[16] R. Gal, Y. Alaluf, Y. Atzmon, O. Patashnik, A. H. Bermano, G. Chechik, and D. Cohen-Or, "An image is worth one word: Personalizing text-to-image generation using textual inversion," in ICLR, 2023.
[17] J. Gao, W. Chen, T. Xiang, A. Jacobson, M. McGuire, and S. Fidler, "Learning deformable tetrahedral meshes for 3D reconstruction," in NeurIPS, 2020.
[18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Communications of the ACM, 2020.
[19] A. Haque, M. Tancik, A. A. Efros, A. Holynski, and A. Kanazawa, "Instruct-NeRF2NeRF: Editing 3D scenes with instructions," arXiv preprint arXiv:2303.12789, 2023.
[20] A. Hertz, K. Aberman, and D. Cohen-Or, "Delta denoising score," arXiv preprint arXiv:2304.07090, 2023.
[21] A. Hertz, R. Mokady, J. Tenenbaum, K. Aberman, Y. Pritch, and D. Cohen-Or, "Prompt-to-prompt image editing with cross attention control," arXiv preprint arXiv:2208.01626, 2022.
[22] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," in NeurIPS, 2020.
[23] J. Ho and T. Salimans, "Classifier-free diffusion guidance," in NeurIPS Workshop, 2021.
[24] F. Hong, M. Zhang, L. Pan, Z. Cai, L. Yang, and Z. Liu, "AvatarCLIP: Zero-shot text-driven generation and animation of 3D avatars," ACM Transactions on Graphics (TOG), 2022.
[25] Y. Hong, B. Peng, H. Xiao, L. Liu, and J. Zhang, "HeadNeRF: A real-time NeRF-based parametric head model," in CVPR, 2022.
[26] A. Jain, B. Mildenhall, J. T. Barron, P. Abbeel, and B. Poole, "Zero-shot text-guided object generation with dream fields," in CVPR, 2022.
[27] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in CVPR, 2019.
[28] Y. Kartynnik, A. Ablavatski, I. Grishchenko, and M. Grundmann, "Real-time facial surface geometry from monocular video on mobile GPUs," in CVPR Workshops, 2019.
[29] B. Kawar, S. Zada, O. Lang, O. Tov, H. Chang, T. Dekel, I. Mosseri, and M. Irani, "Imagic: Text-based real image editing with diffusion models," arXiv preprint arXiv:2210.09276, 2022.
[30] T. Khakhulin, V. Sklyarova, V. Lempitsky, and E. Zakharov, "Realistic one-shot mesh-based head avatars," in ECCV, 2022.
[31] N. M. Khalid, T. Xie, E. Belilovsky, and T. Popa, "CLIP-Mesh: Generating textured meshes from text using pretrained image-text models," in SIGGRAPH Asia, 2022.
[32] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in ICLR, 2015.
[33] S. Laine, J. Hellsten, T. Karras, Y. Seol, J. Lehtinen, and T. Aila, "Modular primitives for high-performance differentiable rendering," ACM Transactions on Graphics (ToG), 2020.
[34] C. Li, C. Zhang, A. Waghwase, L.-H. Lee, F. Rameau, Y. Yang, S.-H. Bae, and C. S. Hong, "Generative AI meets 3D: A survey on text-to-3D in AIGC era," arXiv preprint arXiv:2305.06131, 2023.
[35] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero, "Learning a model of facial shape and expression from 4D scans," ACM Transactions on Graphics (TOG), 2017.
Magic3d: High-resolution text-to-3d content creation. Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin, CVPR. 915Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In CVPR, 2023. 2, 3, 5, 6, 9, 15
Zero-1-to-3: Zero-shot one image to 3d object. Ruoshi Liu, Rundi Wu, Pavel Basile Van Hoorick, Sergey Tokmakov, Carl Zakharov, Vondrick, arXiv:2303.11328arXiv preprintRuoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. arXiv preprint arXiv:2303.11328, 2023. 2
Mediapipe: A framework for perceiving and processing reality. Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris Mcclanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Yong, Juhyun Lee, CVPR workshops. 35Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Yong, Juhyun Lee, et al. Mediapipe: A framework for perceiving and processing reality. In CVPR workshops, 2019. 3, 5
Realfusion: 360 {\deg} reconstruction of any object from a single image. Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi, arXiv:2302.10663arXiv preprintLuke Melas-Kyriazi, Christian Rupprecht, Iro Laina, and Andrea Vedaldi. Realfusion: 360 {\deg} reconstruction of any object from a single image. arXiv preprint arXiv:2302.10663, 2023. 2
Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-nerf for shapeguided generation of 3d shapes and textures. Gal Metzer, Elad Richardson, arXiv:2211.0760018arXiv preprintGal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-nerf for shape- guided generation of 3d shapes and textures. arXiv preprint arXiv:2211.07600, 2022. 2, 3, 7, 8, 17, 18
Text2mesh: Text-driven neural stylization for meshes. Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, Rana Hanocka, CVPR, 2022. 13Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, and Rana Hanocka. Text2mesh: Text-driven neural stylization for meshes. In CVPR, 2022. 1, 3
Nerf: Representing scenes as neural radiance fields for view synthesis. Ben Mildenhall, P Pratul, Matthew Srinivasan, Jonathan T Tancik, Ravi Barron, Ren Ramamoorthi, Ng, ECCV. 13Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 1, 3
Mehdi Mirza, Simon Osindero, arXiv:1411.1784Conditional generative adversarial nets. arXiv preprintMehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 3
T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie, arXiv:2302.08453arXiv preprintChong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i- adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023. 3
Instant neural graphics primitives with a multiresolution hash encoding. Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller, 2022. 3ACM Transactions on Graphics. 45Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (TOG), 2022. 3, 4, 5
Extracting triangular 3d models, materials, and lighting from images. Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler, CVPR. 35Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, and Sanja Fidler. Extracting triangular 3d models, materials, and lighting from images. In CVPR, 2022. 3, 5
Point-e: A system for generating 3d point clouds from complex prompts. Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, Mark Chen, arXiv:2212.087512022arXiv preprintAlex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 1
Stylesdf: High-resolution 3d-consistent image and geometry generation. Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, Ira Kemelmacher-Shlizerman, CVPR. 2022Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In CVPR, 2022. 3
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, NeurIPS. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019. 7
Dreamfusion: Text-to-3d using 2d diffusion. Ben Poole, Ajay Jain, Jonathan T Barron, Ben Mildenhall, ICLR. 915Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In ICLR, 2022. 1, 2, 3, 4, 5, 6, 9, 15
Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, ICML. 13Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 1, 3
Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, arXiv:2303.13508Subject-driven text-to-3d generation. arXiv preprintAmit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, et al. Dreambooth3d: Subject-driven text-to-3d generation. arXiv preprint arXiv:2303.13508, 2023. 2
Hierarchical text-conditional image generation with clip latents. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen, arXiv:2204.061252022arXiv preprintAditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022. 3
Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or, arXiv:2302.01721Texture: Text-guided texturing of 3d shapes. arXiv preprintElad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721, 2023. 1
High-resolution image synthesis with latent diffusion models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer, CVPR, 2022. 1. 34Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 1, 3, 4
Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman, CVPR. 2023Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dream- booth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023. 3
Photorealistic text-toimage diffusion models with deep language understanding. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, ; S Sara Mahdavi, Rapha Gontijo Lopes, arXiv:2205.11487Burcu Karagol Ayan. 14arXiv preprintSeyed Kamyar Seyed GhasemipourChitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to- image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022. 1, 3, 4
Clip-forge: Towards zero-shot text-to-shape generation. Aditya Sanghi, Hang Chu, Ye Joseph G Lambourne, Chin-Yi Wang, Marco Cheng, Kamal Rahimi Fumero, Malekshan, CVPR. 13Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Ka- mal Rahimi Malekshan. Clip-forge: Towards zero-shot text-to-shape generation. In CVPR, 2022. 1, 3
Laion-5b: An open large-scale dataset for training next generation image-text models. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, NeurIPS. 2022Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In NeurIPS, 2022. 3
Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki, arXiv:2111.02114arXiv preprintChristoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 3
Let 2d diffusion model know 3d-consistency for robust text-to-3d generation. Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim, arXiv:2303.079371718arXiv preprintJunyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, and Seungryong Kim. Let 2d diffusion model know 3d-consistency for robust text-to-3d generation. arXiv preprint arXiv:2303.07937, 2023. 2, 3, 5, 7, 8, 17, 18
Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis. Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler, NeurIPS. Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, and Sanja Fidler. Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis. In NeurIPS, 2021. 1, 3, 4, 5
Make-a-video: Text-to-video generation without text-video data. Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, ICLR. Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. In ICLR, 2023. 3
Denoising diffusion implicit models. Jiaming Song, Chenlin Meng, Stefano Ermon, ICLR. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In ICLR, 2021. 3
Stable diffusion. Stability, Ai, 15Stability.AI. Stable diffusion. https://stability.ai/blog/stable-diffusion-public-release, 2022. 1, 3, 4, 7, 9, 15
Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images. Stability, Ai, Stability.AI. Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images. https://stability.ai/blog/deepfloyd-if-text-to-image-model, 2023. 3
Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis. Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, Yebin Liu, 2022. 3ACM Transactions on Graphics. Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis. ACM Transactions on Graphics (ToG), 2022. 3
Next3d: Generative neural texture rasterization for 3d-aware head avatars. Jingxiang Sun, Xuan Wang, Lizhen Wang, Xiaoyu Li, Yong Zhang, Hongwen Zhang, Yebin Liu, CVPR. 13Jingxiang Sun, Xuan Wang, Lizhen Wang, Xiaoyu Li, Yong Zhang, Hongwen Zhang, and Yebin Liu. Next3d: Generative neural texture rasterization for 3d-aware head avatars. In CVPR, 2023. 1, 3
Stable-dreamfusion: Text-to-3d with stable-diffusion. Jiaxiang Tang, 1718Jiaxiang Tang. Stable-dreamfusion: Text-to-3d with stable-diffusion. https://github.com/ashawkey/ stable-dreamfusion, 2022. 7, 8, 14, 17, 18
. Jiaxiang Tang, Fantasia3d, 1718Jiaxiang Tang. Fantasia3d.unofficial. https://github.com/ashawkey/fantasia3d.unofficial, 2023. 7, 8, 14, 17, 18
Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior. Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, Dong Chen, arXiv:2303.14184arXiv preprintJunshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior. arXiv preprint arXiv:2303.14184, 2023. 2
Textmesh: Generation of realistic 3d meshes from text prompts. Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, Federico Tombari, arXiv:2304.12439arXiv preprintChristina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, and Federico Tombari. Textmesh: Generation of realistic 3d meshes from text prompts. arXiv preprint arXiv:2304.12439, 2023. 2
Unitune: Text-driven image editing by fine tuning an image generation model on a single image. Dani Valevski, Matan Kalman, Yossi Matias, Yaniv Leviathan, arXiv:2210.094772022arXiv preprintDani Valevski, Matan Kalman, Yossi Matias, and Yaniv Leviathan. Unitune: Text-driven image editing by fine tuning an image generation model on a single image. arXiv preprint arXiv:2210.09477, 2022. 3
Diffusers: State-of-the-art diffusion models. Suraj Patrick Von Platen, Anton Patil, Pedro Lozhkov, Nathan Cuenca, Kashif Lambert, Mishig Rasul, Thomas Davaadorj, Wolf, Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, Mishig Davaadorj, and Thomas Wolf. Diffusers: State-of-the-art diffusion models. https://github.com/ huggingface/diffusers, 2022. 7
Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. Haochen Wang, Xiaodan Du, Jiahao Li, A Raymond, Greg Yeh, Shakhnarovich, arXiv:2212.0077427arXiv preprintHaochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. arXiv preprint arXiv:2212.00774, 2022. 2, 7
Rodin: A generative model for sculpting 3d digital avatars using diffusion. Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, CVPR. Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, et al. Rodin: A generative model for sculpting 3d digital avatars using diffusion. In CVPR, 2023. 3
Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. Yixiao Jay Zhangjie Wu, Xintao Ge, Stan Weixian Wang, Yuchao Lei, Wynne Gu, Ying Hsu, Xiaohu Shan, Mike Zheng Qie, Shou, arXiv:2212.115652022arXiv preprintJay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. arXiv preprint arXiv:2212.11565, 2022. 3
Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models. Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, Shenghua Gao, arXiv:2212.147042022arXiv preprintJiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, and Shenghua Gao. Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models. arXiv preprint arXiv:2212.14704, 2022. 3
Dreamface: Progressive generation of animatable 3d faces under text guidance. Longwen Zhang, Qiwei Qiu, Hongyang Lin, Qixuan Zhang, Cheng Shi, Wei Yang, Ye Shi, Sibei Yang, Lan Xu, Jingyi Yu, arXiv:2304.0311737arXiv preprintLongwen Zhang, Qiwei Qiu, Hongyang Lin, Qixuan Zhang, Cheng Shi, Wei Yang, Ye Shi, Sibei Yang, Lan Xu, and Jingyi Yu. Dreamface: Progressive generation of animatable 3d faces under text guidance. arXiv preprint arXiv:2304.03117, 2023. 3, 7
Adding conditional control to text-to-image diffusion models. Lvmin Zhang, Maneesh Agrawala, arXiv:2302.0554379arXiv preprintLvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023. 3, 4, 5, 6, 7, 9
Zero-shot text-to-parameter translation for game character auto-creation. Rui Zhao, Wei Li, Zhipeng Hu, Lincheng Li, Zhengxia Zou, Zhenwei Shi, Changjie Fan, CVPR. 2023Rui Zhao, Wei Li, Zhipeng Hu, Lincheng Li, Zhengxia Zou, Zhenwei Shi, and Changjie Fan. Zero-shot text-to-parameter translation for game character auto-creation. In CVPR, 2023. 3
General facial representation learning in a visual-linguistic manner. Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, Fang Wen, CVPR. Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, and Fang Wen. General facial representation learning in a visual-linguistic manner. In CVPR, 2022. 5
Implicit morphable head avatars from videos. Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C Bühler, Xu Chen, Michael J Black, Otmar Hilliges, . I M Avatar, CVPR. 13Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C. Bühler, Xu Chen, Michael J. Black, and Otmar Hilliges. I M Avatar: Implicit morphable head avatars from videos. In CVPR, 2022. 1, 3
Instant volumetric head avatars. Wojciech Zielonka, Timo Bolkart, Justus Thies, CVPR. 2023Wojciech Zielonka, Timo Bolkart, and Justus Thies. Instant volumetric head avatars. In CVPR, 2023. 3
[
"Learning GAN-based Foveated Reconstruction to Recover Perceptually Important Image Features",
"Learning GAN-based Foveated Reconstruction to Recover Perceptually Important Image Features"
] | [
"Luca Surace ",
"Cara Tursun ",
"Karol Myszkowski ",
"\nUniversità della Svizzera italiana\nSwitzerland\n",
"\nMAREK WERNIKOWSKI\nUniversità della Svizzera italiana\nSwitzerland\n",
"\nMax Planck Institute for Informatics\nUniversità della Svizzera italiana, Switzerland and University of Groningen\nWest Pomeranian University of Technology\nPoland, Netherlands, Germany\n",
"\nPIOTR DIDYK\nRADOSŁAW MANTIUK\nWest Pomeranian University of Technology\nUniversità della Svizzera italianaPoland, Switzerland\n"
] | [
"Università della Svizzera italiana\nSwitzerland",
"MAREK WERNIKOWSKI\nUniversità della Svizzera italiana\nSwitzerland",
"Max Planck Institute for Informatics\nUniversità della Svizzera italiana, Switzerland and University of Groningen\nWest Pomeranian University of Technology\nPoland, Netherlands, Germany",
"PIOTR DIDYK\nRADOSŁAW MANTIUK\nWest Pomeranian University of Technology\nUniversità della Svizzera italianaPoland, Switzerland"
] | [] A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. The use of Generative Adversarial Networks has recently been shown to be a promising solution for such a task, as they can successfully hallucinate missing image information. As in the case of other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work, we consider the problem of efficiently guiding the training of foveated reconstruction techniques such that they are more aware of the capabilities and limitations of the human visual system, and thus can reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination in the case of input samples with different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasizes the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions by using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in the perceived image reconstruction quality compared with the standard GAN-based training approach. | 10.1145/3583072 | [
"https://export.arxiv.org/pdf/2108.03499v3.pdf"
] | 256,872,306 | 2108.03499 | 89462a8475a47964184c888f437033a45104bdab |
Learning GAN-based Foveated Reconstruction to Recover Perceptually Important Image Features

Luca Surace, Università della Svizzera italiana, Switzerland
Marek Wernikowski, West Pomeranian University of Technology, Poland
Cara Tursun, Università della Svizzera italiana, Switzerland and University of Groningen, Netherlands
Karol Myszkowski, Max Planck Institute for Informatics, Germany
Radosław Mantiuk, West Pomeranian University of Technology, Poland
Piotr Didyk, Università della Svizzera italiana, Switzerland
A foveated image can be entirely reconstructed from a sparse set of samples distributed according to the retinal sensitivity of the human visual system, which rapidly decreases with increasing eccentricity. The use of Generative Adversarial Networks has recently been shown to be a promising solution for such a task, as they can successfully hallucinate missing image information. As in the case of other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the quality of the output. In this work, we consider the problem of efficiently guiding the training of foveated reconstruction techniques such that they are more aware of the capabilities and limitations of the human visual system, and thus can reconstruct visually important image features. Our primary goal is to make the training procedure less sensitive to distortions that humans cannot detect and focus on penalizing perceptually important artifacts. Given the nature of GAN-based solutions, we focus on the sensitivity of human vision to hallucination in the case of input samples with different densities. We propose psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The proposed strategy renders the generator network flexible by penalizing only perceptually important deviations in the output. As a result, the method emphasizes the recovery of perceptually important image features. We evaluated our strategy and compared it with alternative solutions by using a newly trained objective metric, a recent foveated video quality metric, and user experiments. Our evaluations revealed significant improvements in the perceived image reconstruction quality compared with the standard GAN-based training approach.
INTRODUCTION
Wide-field-of-view displays, such as virtual and augmented reality headsets, require efficient methods to generate and transmit high-resolution images. Techniques for reconstructing foveated images address this problem by leveraging the non-uniform sensitivity of human vision to spatial distortions across a wide field of view: high-quality images are generated only around the location of the gaze, as indicated by an eye-tracking device. Such foveated systems usually consist of two main steps [24,50,58]. First, an image is generated or transmitted in the form of a sparse set of samples that are generated according to the location of the gaze. Second, the image is reconstructed from the sparse information before being shown to the observer. An example of such a technique is foveated rendering [15], where fewer image samples are computed for peripheral vision to save computation during rendering (Figure 1).
We focus on the second step above, i.e., reconstructing an image from sparse samples. While simple techniques, such as interpolation [50], can be used to this end, it has been demonstrated that machine-learning techniques, more precisely, generative adversarial networks (GANs), can provide superior results [24] due to their ability to hallucinate missing content based on the learned statistics of the given image or video. Although such reconstruction requires additional computation, it can provide results of a similar quality as those obtained by simpler techniques while using fewer input samples. However, several challenges persist in design and training in this regard. Every GAN architecture is composed of two neural networks trained simultaneously [14,24]. In the task of foveated image reconstruction, one of them, called the generator, is responsible for reconstructing the image from a sparse set of samples, while the second one, called the discriminator, is responsible for discriminating between real and reconstructed images. The training iterations of both networks are interleaved such that an improvement in one of them triggers an improvement in the other. Ultimately, training in this manner can be viewed as a game between the generator and the discriminator networks, where the generator tries to reconstruct perfect images from a limited number of samples to try to fool the constantly improving discriminator. Therefore, to successfully train a GAN architecture, maintaining a balance between the training of the generator and the discriminator is highly important. Another challenge, which is the main focus of this paper, involves the choice of training loss and procedure. As in other supervised learning solutions, they have a significant influence on the final performance of GAN-based reconstruction. The recent literature has acknowledged that for any task where the perceived quality is critical, the loss function must capitalize on visually important image features. A well-known strategy to incorporate such perceptual findings in neural network training is to use a perceptual loss function (e.g., LPIPS [70]). While training a GAN, it is possible to use such a function as the loss for training the generator [24]. However, our main hypothesis is that this is insufficient because the training of the discriminator should also take into account the properties of human visual perception. To address this problem, we propose a new training scheme for the discriminator. Instead of training the discriminator to distinguish reconstructed images from real images, our technique trains it to distinguish the reconstructed images from real images that contain imperceptible distortions.
In this way, the discriminator network can inherit the limitations of the HVS (human visual system) represented in the data and stop penalizing the generator network for reconstructing imperfect images that remain within perceptual limits.
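To make this concrete, the following is a minimal PyTorch sketch of a single discriminator (critic) update under such a scheme. All names are ours, and the Lipschitz constraint of the Wasserstein formulation used later in Section 4 is omitted for brevity; the key point is that the batch labeled "real" is not the ground truth but a metamer of it with near-threshold distortions.

import torch

def critic_step(D, G, opt_D, sparse_input, metamer_patch):
    """One critic update. `metamer_patch` is a ground-truth patch that was
    re-synthesized with imperceptible, near-threshold distortions, so the
    critic is never rewarded for spotting deviations a human cannot see."""
    opt_D.zero_grad()
    fake = G(sparse_input).detach()  # reconstruction from sparse samples
    # Wasserstein-style critic loss: score metamers high and fakes low.
    loss_D = D(fake).mean() - D(metamer_patch).mean()
    loss_D.backward()
    opt_D.step()
    return loss_D.item()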
Consequently, in this work, we aim to improve GAN-based machine-learning approaches for foveated reconstruction by introducing a new training scheme based on perceived quality degradation. We first design and conduct psychophysical experiments to study the sensitivity of the human visual system to content-based hallucinations across a wide visual field. Our choice of stimuli is inspired by several findings on the degradation of the sensitivity of human vision in the periphery. Although many foveated systems have previously exploited the loss of visual sensitivity to high-frequency information [15,58], the effect does not fully explain the visibility of missing information in peripheral vision. For example, past psychophysical experiments suggest that even though a perfect reconstruction of fine details of the image in the periphery is not critical, a complete lack of high spatial frequencies is detectable [55]. Another effect that is not fully explained by the reduced visual sensitivity to higher spatial frequencies in the periphery is the increased positional uncertainty [20,31]. To reflect these findings, we employ a technique of texture synthesis guided by the statistics of the original images to generate stimuli with varying amounts of hallucinated content. We argue that this type of distortion resembles the content synthesized by GAN-based image reconstruction, and our experiments quantify its visibility. We then demonstrate how to incorporate the experimental results into training. Finally, we show how our strategy of focusing on perceptually important image features during training can lead to a GAN-based foveated reconstruction method that provides higher reconstruction quality with the same number of input samples or, conversely, the same perceived quality using fewer samples, leading to savings in bandwidth or rendering time. We argue that this is possible because our foveated reconstruction method aims to recover perceptually important image features that would be otherwise lost due to sub-sampling. The new dataset also allows us to calibrate application-specific objective metrics that predict image quality. We use the new metric and the perceptual experiments to evaluate our new training strategy and compare it with alternative solutions.
RELATED WORK
Our work takes inspiration from and bridges the expertise in visual perception, computer graphics, and machine learning. Here, we provide an overview of the relevant works from these fields.
Foveal vs. peripheral vision
Retina. The perceptual capabilities of the HVS have been extensively studied under different positions of visual stimuli in the visual field. Perception is not uniform across the visual field owing to optical and physiological limitations. Studies of the retina revealed that the density of photoreceptors is highly heterogeneous [7,67]. The central region of the retina, called the fovea centralis (or fovea), is characterized by a relatively high density of cone photoreceptors and retinal ganglion cells (RGCs). This provides foveal vision with a superior perceptual capability compared with non-foveal (or peripheral) vision. Although the fovea provides sharp central vision, it is relatively small and corresponds to approximately 2° of the visual field, which spans up to 160-170° [28]. On the contrary, peripheral vision corresponds to more than 99% of our visual field.
Peripheral contrast sensitivity. To study the differences between foveal and peripheral vision, previous psychophysical studies have focused on measurements of the contrast sensitivity function (CSF), which represents the sensitivity to changes in contrast at different spatial frequencies [3,26,32]. Research on the fovea has shown that the human CSF curve has a peak around 4-8 cycles per degree (cpd), with its tail reaching up to 50-60 cpd. Later, Peli et al. [38] and, more recently, Chwesiuk et al. [6] extended these measurements to peripheral vision, and observed that the decline in contrast sensitivity is characterized by a smaller peak that shifts toward lower spatial frequencies as eccentricity increases. This implies a loss of sensitivity to high-spatial-frequency content in peripheral vision.
Foveated rendering. The differences between foveal and peripheral vision mentioned above have led to gaze-contingent techniques that process and display images depending on the position of the gaze of the observer. Foveated rendering is an actively studied gaze-contingent technique in this domain. It uses the position of the gaze from an eye tracker for a low-resolution image reconstruction in the periphery [5,15,25,34,37,50]. These studies have significantly reduced the computational cost of rendering because they reduce the number of pixels to be rendered [68]. However, their reconstruction methods are mostly based on the simple interpolation of a sub-sampled image and such post-processing steps as temporal antialiasing and contrast enhancement. Such a simple reconstruction approach does not aim to replace the high-frequency spatial details lost as a result of the undersampling of the underlying content, leading to noticeable degradation in quality.
Hallucinating image details. Psychophysical measurements show that peripheral vision requires a more sophisticated model than a simple boundary between perceptible and imperceptible regions of contrast guided by the shape of the CSF [45,47]. Thibos et al. [55] revealed that the threshold of resolution declines from 14 cpd to 2.6 cpd in the eccentricity range of 5° to 35°, whereas the threshold of detection drops from 46 cpd to 28 cpd in the same range of eccentricity. As a result of the faster drop-off in the threshold of resolution, there exists a band of spatial frequencies that can be detected but not accurately resolved for each value of eccentricity. Additional studies have shown that performance in terms of discriminating the spatial phase also degrades with increasing eccentricity and leads to greater positional uncertainty in visual perception [31,35,40]. Rosenholtz [44] claimed that the HVS encodes image statistics rather than precise location information in peripheral vision, leading to a performance decline in resolving the stimulus position. These studies have important implications for the design of foveated image reconstruction methods because they clearly show that HVS models driving the reconstruction must be comprehensive enough to consider multiple aspects of visual perception. In contrast to the standard reconstruction techniques mentioned above, we address this missing piece in the foveated image reconstruction pipeline.
Metamers
The goal of foveated rendering can be viewed as the low-cost production of images that are metameric to the full-quality rendering. Metamer here refers to images that are structurally different but appear the same to the human observer. Moreover, foveated rendering assumes knowledge about gaze position; therefore, images are usually metameric only for a given gaze location, at which most of their content is observed by peripheral vision.
The limitations of the HVS perception have inspired several important studies on metamerism. Initial work aimed to synthesize textural metamers, i.e., different images representing the same type of texture. To this end, Portilla and Simoncelli [39] used an iterative optimization that is run until a randomly initialized image patch converges to the same summary statistics as the target texture. Their observations led to further studies on crowding effects, and Balas et al. [2] revealed that the representation of summary statistics can explain the crowding effects observed in the periphery. Rosenholtz et al. [46,47] introduced the texture tiling model (TTM), which models the performance of visual search in the periphery. Based on the ideas on summary statistics, Fridman et al. [12] proposed a convolutional network to reproduce the outputs of the TTM, and Deza et al. [9] introduced a generative adversarial network model for creating metamers of peripherally viewed natural images. Instead of using hand-crafted summary statistics, as previously introduced by Portilla and Simoncelli [39], they used channel autocorrelation statistics computed from the pre-trained VGG network features [48]. Although these studies have delivered promising results, their main goal is to study foveal texture perception or provide a reference model for studying the properties of peripheral vision (e.g., for visual crowding). More recently, Walton et al. [61] proposed a real-time method for producing metameric images to peripherally viewed input images with the main application of image and video compression.
While all the above-mentioned studies have focused on producing a metamer of an input image, this approach is not directly applicable to foveated rendering, the goal of which is to avoid rendering the original, full-quality image in the first place. Therefore, it is interesting to consider the problem of computing images that are metameric to full-quality rendering but are derived based on partial information from the rendering system. Examples of such techniques include standard foveated rendering, where the shading rate is reduced toward the periphery [15], as well as more recent techniques, where contrast enhancement [37] or noise synthesis [53] are applied to further reduce the amount of information needed for reconstructing peripherally viewed metamers of full-quality images. Kaplanyan et al. [24] recently introduced a powerful method to this end. They proposed a foveated image compression solution by using a GAN model that reconstructs perceptually plausible image sequences from a very sparse set of samples while maintaining temporal coherence. Our work takes inspiration from this solution and aims to minimize the required number of samples from the underlying content while achieving the best-perceived quality. To this end, similar to past work, we focus on reconstruction by hallucination but also capitalize on positional uncertainty and the reduced resolvability of peripheral distortions [55] by modifying the training scheme. More precisely, in contrast to the scheme presented by Kaplanyan et al. [24], where the discriminator network is trained on ground-truth images, we train it on data derived from a series of systematic experiments that analyze the sensitivity of the HVS to distortions in the periphery.
Image metrics and perceptual loss
One way of guiding image reconstruction is to use image metrics. Current foveal quality metrics use the properties of central vision and provide inaccurate predictions for the periphery. The growing research on peripheral vision and its applications to foveated image reconstruction suggests the need for new foveated metrics [17,21,29,42,52,56,57,60,64]. These metrics are promising candidates to guide the loss function in learning-based approaches. However, their complex implementations, costly computations, and, in some cases, non-differentiable operations pose challenges for training models of image reconstruction by using them. An alternative to this is to use a training loss defined on the feature maps from a pre-trained deep network. This has become one of the most common approaches to learning-based image reconstruction, especially for super-resolution-based techniques [10,23,70]. Compared with a simpler loss function, such as the mean squared error (MSE), the loss functions defined on the hierarchical features of deep networks more closely resemble how the HVS processes visual information. However, there may still be significant differences between deep network representations and human visual perception [11]. In addition, some of the most commonly used pre-trained networks have been shown to have redundancy in their feature representations when reconstructing for the best-perceived quality [54]. The losses defined on those feature representations improve the perceived quality in the fovea but are not specifically optimized for peripheral vision. In this work, we take an orthogonal approach in the context of GAN training. Apart from using a perceptual loss to train a generator, our main contribution is a modification to the training of the discriminator such that it better reflects the discriminative power of a human observer.
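As an illustration of such a feature-space loss, the sketch below computes an L2 distance between VGG-16 activations of two images. The layer choice and weighting are our assumptions; it stands in for calibrated losses such as LPIPS rather than reproducing any of the cited metrics.

import torch
from torchvision import models

class VGGFeatureLoss(torch.nn.Module):
    """L2 distance between VGG-16 activations of two images (relu2_2 here)."""
    def __init__(self, last_layer=8):  # index 8 = relu2_2 in torchvision's VGG-16
        super().__init__()
        self.feats = models.vgg16(weights="IMAGENET1K_V1").features[:last_layer + 1].eval()
        for p in self.feats.parameters():
            p.requires_grad_(False)  # frozen feature extractor
    def forward(self, x, y):  # x, y: (B, 3, H, W), VGG-normalized
        return torch.nn.functional.mse_loss(self.feats(x), self.feats(y))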
PERCEPTUAL EXPERIMENTS
There is an important connection between studies on metamers and those on foveated image reconstructions using GANs discussed above. While the former postulate the importance of preserving image statistics for peripheral vision, the latter reconstruct the content according to the discriminator trained on natural images and videos. However, we argue that training the discriminator using natural images does not adequately reflect the lack of sensitivity of the HVS to spatial distortions, and excessively constrains the generator in hallucinating content. Therefore, we propose to train the discriminator on a dataset composed of images that contain distortions that are unobjectionable to the observers.
It is important to note that the primary goal of this procedure is not to cause the GAN to produce distortions, but rather to make it insensitive to the distortions that humans cannot detect and to focus on penalizing perceptually important artifacts. By doing this, we want the discriminator to share limitations similar to those of the HVS.
Training GANs on an extensive dataset of images containing near-visibility-threshold distortions requires the responses of human observers in a subjective experiment, in which the participants are asked to adjust the level of distortion in peripherally viewed stimuli. However, owing to the sheer amount of data typically required for training a GAN, it is infeasible to generate such a dataset by relying solely on perceptual experiments in a controlled lab environment with a reasonable number of participants. Therefore, we rely on a texture synthesis method that takes advantage of the correlation between image features to preserve the statistical properties of an input image. By imposing pixel-level constraints, we can control how faithful the reconstruction is to the structure of an exemplar. The number of pixel-level constraints acts as a parameter that controls the freedom to change the structure of the synthesized image with respect to the input, thereby providing a way to control the strength of the visual distortions permitted in the reconstruction. We use subjective experiments to measure the strength of distortions that makes the reconstruction metameric with respect to the peripherally viewed ground-truth stimulus. Once the optimal parameters have been estimated for a smaller set of images in the perceptual experiment, we generate a dataset large enough for training the GAN by using the texture synthesis method, thus eliminating the need to conduct subjective experiments with an unreasonably large number of participants.
Stimuli generation
Our stimuli generation model is based on the texture synthesis method proposed by Gatys et al. [13]. We customized their method by partially constraining the synthesis to control the level of visual distortion in a convenient way. We exploited this capability to create metamers of the input image given the specific viewing conditions. Their method is formulated as an optimization procedure on feature maps of the pre-trained VGG-19 [48] network that optimizes a synthesized image $\hat{x}$ for an input exemplar $x$ by minimizing the loss function:

$$\mathcal{L}(x, \hat{x}) = \sum_{l=0}^{L} \sum_{i,j} \frac{w_l}{4 N_l^2 M_l^2} \left( \hat{G}_{ij}^{l} - G_{ij}^{l} \right)^2, \quad (1)$$

where $\hat{G}^l$ and $G^l$ are the Gram matrices of the feature maps of $\hat{x}$ and $x$ in layer $l$, $N_l$ is the number of feature maps, $M_l$ is the total number of neurons in a layer, and $w_l$ is an additional weight associated with layer $l$. To synthesize images for our experiment, we use the same procedure but also constrain a portion of randomly chosen pixel values in $\hat{x}$ such that they are identical to the corresponding pixels in $x$. We refer to these pixels as guiding samples. We enforce the quality constraint by projecting the solution onto the feasible space in each iteration of a gradient descent optimization.
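A condensed PyTorch sketch of this guided synthesis follows. It is our illustration, not the authors' implementation: the layer selection, equal layer weights $w_l$, and the use of Adam instead of L-BFGS are assumptions, and inputs are expected to be VGG-normalized tensors of shape (1, 3, H, W).

import torch
from torchvision import models

vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
LAYERS = [1, 6, 11, 20, 29]  # relu1_1 .. relu5_1, a common choice for Gatys-style losses

def features(x):
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in LAYERS:
            feats.append(h)
    return feats

def gram(f):  # f: (1, N_l, H, W)
    _, n, hh, ww = f.shape
    flat = f.reshape(n, hh * ww)
    return flat @ flat.t()

def synthesize(exemplar, guide_mask, steps=500, lr=0.02):
    """Minimize Eq. (1) while pinning the guiding samples: pixels where
    guide_mask == 1 are projected back onto the exemplar after each step."""
    targets = [gram(f).detach() for f in features(exemplar)]
    x = exemplar * guide_mask + torch.rand_like(exemplar) * (1 - guide_mask)
    x.requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for f, g in zip(features(x), targets):
            n, m = f.shape[1], f.shape[2] * f.shape[3]
            loss = loss + ((gram(f) - g) ** 2).sum() / (4 * n**2 * m**2)
        loss.backward()
        opt.step()
        with torch.no_grad():  # projection onto the feasible set
            x.data = x.data * (1 - guide_mask) + exemplar * guide_mask
    return x.detach()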
We observed that solving the constrained optimization leads to subtle image artifacts that resemble checkerboard patterns (Figure 2, first row, middle). The problem is related to the well-known issue of checkerboard artifacts created by backpropagation [36]. We identified two solutions to address this issue. The first consists of running the constrained optimization until convergence, removing the high-spatial-frequency checkerboard artifacts by applying a low-pass Gaussian filter ($\sigma = 1$), and running a second round of optimization without the constraint. We also perceptually verified that similar results may be achieved when the constrained optimization is initialized with the guiding samples filtered by a Gaussian filter (Figure 2, second row). Using the above procedure, we computed the images $\hat{x}_p$, where $p$% is the percentage of the guiding samples (see Figure 3, left). For $p = 0$, our synthesis is equivalent to the original technique presented by Gatys et al.
The loss of high spatial frequencies (as commonly observed when reducing the input resolution) is an important factor influencing the perceived quality. To improve the sensitivity of the trained metrics to a visible decline in resolution, we also created a separate dataset of images with different degrees of Gaussian blur, i.e., different values of $\sigma$ of the Gaussian kernel (see Figure 3, right). The results of the perceptual experiment obtained with these stimuli were used to expand the dataset of images to train the image metrics in Section 5.2.
Experimental protocol
The number of guiding samples $p$ and the value of $\sigma$ of the Gaussian kernel provide a parametrization for our investigation of the sensitivity of the HVS to deviations relative to the original images. More precisely, with the generation of stimuli using texture synthesis in our main experiment, we sought a direct relationship between the number of guiding samples and the probability of detection of the distortions by a human observer at a particular eccentricity. We later used this relation to generate a much larger dataset of images with imperceptible distortions that is required for training GAN-based foveated reconstruction. By contrast, the additional experiment with Gaussian blur only sought pairs of images and the corresponding probability of blur detection, as a smaller dataset was sufficient for training our image metrics.
Stimuli. We prepared 24 image patches of size 256 × 256, each from a different 4K image. The images are grouped into two main categories: nature and architecture. Nature images typically do not contain as much structure as the architecture images of human-made objects. The features present in the nature images have a larger variance in their texture, both in terms of colors and frequency-related content. Natural objects have a large variety of shapes without any strict pattern. On the contrary, architecture scenes usually contain larger uniform areas with clear separation between different parts. We expected that reconstructing images with a clear structure may be more challenging. For each patch, we generated a corresponding set of distorted patches for $p \in \{0, 3, 5, 7.5, 10\}$. The set of values of $p$ was determined in a preliminary experiment, in which we found that $p > 10\%$ yields images that are almost always perceptually indistinguishable from the originals. The set of ground-truth patches was used to generate a set of blurred patches for $\sigma \in \{0.25, 1.25, 2.25, 3.25, 4.25\}$. The range was chosen to uniformly span the range of visible blur across the considered field of view [58]. A sample of our stimuli is presented in Figure 3.
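The blurred counterparts are generated directly from the ground-truth patches; a short SciPy sketch (the helper name is ours):

import numpy as np
from scipy.ndimage import gaussian_filter

SIGMAS = [0.25, 1.25, 2.25, 3.25, 4.25]

def blur_stimuli(patch):
    """patch: float array of shape (256, 256, 3); blur the spatial axes only."""
    return [gaussian_filter(patch, sigma=(s, s, 0)) for s in SIGMAS]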
Task. The experiment started with a short initial warm-up phase, in which the participants received instructions about the task. Subsequently, in each trial, three patches were presented to them on the screen: (1) the original patch at the fixation point, (2) a synthesized stimulus on either the right or the left side at a given eccentricity, and (3) the original patch on the opposite side at the same eccentricity. The stimuli were visible to both eyes, and the participants were asked to select the patch that was more similar to the reference by pressing the left or right arrow key of the keyboard.
Although we asked the participants to maintain their gaze at the center of the screen during the experiment, involuntary changes in the position of the gaze might have occurred from time to time. To preserve the retinal position of the stimuli against involuntary movements of the eyes of the participants, the stimuli followed the eye movements (see Figure 11). For this purpose, the gaze position was continuously monitored by using an eye tracker. The participants were not required to always focus on one point because the stimuli always followed the gaze point. The participants did not receive feedback on the correctness of their responses. We tested the visibility of distortions at 8° (the end of the parafovea [62]), 14° (the center of the perifovea [51,62]), and 20°, for which the stimuli spanned 3.21°, 3.08°, and 2.89°, respectively. The distance between the participant and the display was set to 70 cm during the experiment. At this viewing distance, each pixel spanned approximately 0.012 visual degrees. We did not impose a limit on the viewing time, and the average duration of each trial was 2 seconds. In total, each participant performed 1800 trials, leading to a total duration of almost 1 hour. The order of the images, eccentricities, and sides on which the test stimuli were shown was randomized. We used these experimental settings with both types of stimuli (created using texture synthesis and Gaussian blur).
Hardware. We used a setup consisting of a 27" Acer Predator display operating at a resolution of 3840×2160 px at 120 Hz and a peak luminance of 170 cd/m 2 . We used a Tobii Pro Spectrum eye tracker at a sampling rate of 600 Hz to track the position of the gaze.
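The quoted angular pixel size follows from this display geometry; a quick arithmetic check (our computation):

import math

diag_in, res_x, res_y, dist_mm = 27.0, 3840, 2160, 700.0
pitch_mm = diag_in * 25.4 / math.hypot(res_x, res_y)          # ~0.156 mm per pixel
deg_per_px = math.degrees(2 * math.atan(pitch_mm / (2 * dist_mm)))
print(f"{deg_per_px:.4f} deg per pixel")                      # ~0.0127, i.e., ~0.012 deg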
Participants. To investigate the potential effects of the participants' background and knowledge of the field, we conducted this experiment with two groups of participants. The first group consisted of five participants (four of them were the authors) who had extensive experience in computer graphics and full knowledge of the task. The second group consisted of 10 naive participants who had no experience in computer graphics or related fields. To improve the diversity of the stimuli, the naive participants performed the experiments with an extended dataset that contained four additional images. All participants were between 25 and 36 years of age and had normal or corrected-to-normal vision.
Data analysis
The results for the Gaussian blur stimuli were used directly to train the image metrics (Section 5.2). The rest of the data, i.e., for the texture synthesis-based stimuli, were processed separately for the expert and naive participants, and then compared. For each tested eccentricity and number of guiding samples, we first aggregated the participants' answers across all images and then computed the probability of their detection of distortions in the patches. We then expressed the relationship between the number of guiding samples and the probability of detection for each eccentricity by using a cubic polynomial fit. The results are shown in Figure 4 for both the expert and naive groups.
To construct a set of image patches to train a discriminator in a GAN architecture, we sought a relation between eccentricity and the number of guiding samples that produces patches without objectionable artifacts when used with texture synthesis. We used the probabilities obtained from the experiments for guidance. More precisely, we used the number of guiding samples corresponding to a 75% detection probability. Our choice was motivated by the commonly used definition of one just-noticeable difference (JND) as the transition between visible and invisible distortions [33]. In practice, this is the midpoint between distortions that are always visible and those that are invisible. Although lower probabilities, such as 50%, could be considered, we argue that 75% provides a good trade-off for two reasons. First, the probabilities estimated from psychophysical experiments can only asymptotically converge to 50%, posing challenges when seeking the exact 50% point. The estimation of the threshold becomes ill-conditioned when the psychometric slope approaches zero, while 75% is an adequately located value as it lies in the steepest part of the psychometric function. Second, our experiments used an isolated scenario in which the participants were given the particular task of determining the higher-quality patch, and only the reference and the test patch were shown to them. In target applications such as foveated rendering, the observer is less sensitive to any distortion. Therefore, we believe that choosing the number of guiding samples leading to near-threshold distortions was appropriate in this case.
We used a 75% probability as a guide together with cubic fits to our data. We computed the numbers of guiding samples for the expert subjects as 9.09% and 6.89% for the near and far peripheral regions, respectively, as shown in Figure 4. As expected, the naive observers were less sensitive to the artifacts than the experts, and tolerated distortions in patches synthesized with fewer guiding samples. We used the estimated values to prepare the inputs for the discriminator during GAN training (Section 4.1).
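The fit-and-threshold step amounts to a few lines of NumPy; the detection probabilities below are illustrative placeholders, not the measured data:

import numpy as np

p_guides = np.array([0.0, 3.0, 5.0, 7.5, 10.0])      # % of guiding samples
p_detect = np.array([0.98, 0.93, 0.85, 0.78, 0.68])  # illustrative probabilities

coeffs = np.polyfit(p_guides, p_detect, deg=3)        # cubic polynomial fit
shifted = coeffs - np.array([0.0, 0.0, 0.0, 0.75])    # roots of fit(p) - 0.75 = 0
threshold = next(r.real for r in np.roots(shifted)
                 if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 10.0)
print(f"guiding samples at 75% detection: {threshold:.2f}%")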
METHOD
The results of the perceptual experiments described in the previous section provide the measured thresholds for structural distortions for a standard observer. We used these data to control the learned manifold of the target images in foveated image reconstruction. To this end, we present an improved training scheme for the GAN in which the training data consist of a set of natural and synthesized images.
Our network for the foveated reconstruction uses the Wasserstein GAN [1] training scheme to produce perceptually optimized reconstructions from subsampled images. The network topology is based on the UNet encoder-decoder structure with skip connections [43] (Figure 5). This network design is similar to the model previously used by Kaplanyan et al. [24]. The encoder part of the generator network (G) consists of downsampling residual blocks that use average pooling layers [18]. Each residual block of the encoder consists of two convolutional layers with a filter size of 5 × 5, except for the main branch, where we use a 1 × 1 filter to adjust the dimensionality. The numbers of filters are 16-32-64-128-128 in each block, respectively. The decoder part is a mirrored version of the first four encoder blocks with upsampling blocks that use bilinear interpolation instead of average pooling. The encoder and the decoder are connected by an additional bilinear upsampling layer. We use LeakyReLU activation with a negative slope coefficient of α = 0.2 throughout the network, except for the final layer of the generator, which uses tanh activation.
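A condensed PyTorch sketch of this generator follows; the filter counts (16-32-64-128-128), 5×5/1×1 filters, average pooling, bilinear upsampling, and activations follow the text, while padding and the exact wiring of the skip connections are our assumptions:

```python
import torch
import torch.nn as nn

class ResBlockDown(nn.Module):
    """Residual block (two 5x5 convs, 1x1 skip) followed by average pooling."""
    def __init__(self, cin, cout):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(cin, cout, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(cout, cout, 5, padding=2))
        self.skip = nn.Conv2d(cin, cout, 1)   # 1x1 conv adjusts dimensionality
        self.act = nn.LeakyReLU(0.2)
        self.pool = nn.AvgPool2d(2)

    def forward(self, x):
        return self.pool(self.act(self.branch(x) + self.skip(x)))

class ResBlockUp(nn.Module):
    """Mirrored block: bilinear upsampling instead of average pooling."""
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.branch = nn.Sequential(
            nn.Conv2d(cin, cout, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(cout, cout, 5, padding=2))
        self.skip = nn.Conv2d(cin, cout, 1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        x = self.up(x)
        return self.act(self.branch(x) + self.skip(x))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.ModuleList(ResBlockDown(a, b) for a, b in
                                 [(3, 16), (16, 32), (32, 64),
                                  (64, 128), (128, 128)])
        # bilinear upsampling layer bridging encoder and decoder
        self.mid = nn.Upsample(scale_factor=2, mode="bilinear",
                               align_corners=False)
        # decoder inputs concatenate upsampled features with encoder skips
        self.dec = nn.ModuleList(ResBlockUp(a, b) for a, b in
                                 [(128 + 128, 128), (128 + 64, 64),
                                  (64 + 32, 32), (32 + 16, 16)])
        self.out = nn.Sequential(nn.Conv2d(16, 3, 5, padding=2), nn.Tanh())

    def forward(self, x):                      # x: (N, 3, 256, 256)
        skips, h = [], x
        for blk in self.enc:
            h = blk(h)
            skips.append(h)
        h = self.mid(skips.pop())              # bottleneck features
        for blk in self.dec:
            h = blk(torch.cat([h, skips.pop()], dim=1))
        return self.out(h)                     # tanh output in [-1, 1]
```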
The discriminator network, or critic (D), is based on PatchGAN [22] with a patch size of 64×64. The discriminator consists of downsampling blocks similar to the encoder part of the generator (number of filters: 16-32-64-128-128). The output of the downsampling blocks is flattened and passed to a fully connected layer that produces a scalar. Compared with the model developed by Kaplanyan et al. [24], we use a more compact generator with half the number of filters in the first and last three blocks. Such a compact generator is made possible by our more permissive training scheme, which allows for imperceptible deviations from the statistics of the target image. By contrast, the discriminator loss used by Kaplanyan et al. aims to match the statistics of the target image as closely as possible. This important difference in our training scheme makes it possible to retain the perceptual quality of the image with a more compact network. Furthermore, their technique aims to reconstruct the original image, while our work aims to show the feasibility of using a GAN trained on perceptual data.
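A matching sketch of the critic, reusing ResBlockDown from the generator sketch above; the flattening size assumes 64×64 input patches:

```python
class Critic(nn.Module):
    """PatchGAN-style critic on 64x64 patches, scoring each with a scalar."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlockDown(a, b) for a, b in
                                      [(3, 16), (16, 32), (32, 64),
                                       (64, 128), (128, 128)]])
        # five /2 poolings turn a 64x64 patch into a 2x2 feature map
        self.fc = nn.Linear(128 * 2 * 2, 1)

    def forward(self, patch):                  # patch: (N, 3, 64, 64)
        h = self.blocks(patch).flatten(start_dim=1)
        return self.fc(h)                      # Wasserstein score per patch
```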
Dataset
We used two separate datasets as inputs to the generator and the discriminator. The input dataset for the generator consisted of patches from natural images with a size of 256 × 256. The patches were generated by cropping images with random offsets. To maintain a balanced data representation, 50 images were randomly selected from each of the 1000 classes in the ImageNet dataset [8], which provided us with a total of 50K patches. These images were later sub-sampled using the void-and-cluster algorithm [59] with sampling rates of 12% for the near periphery and 0.7% for the far periphery. This choice was made according to a content-aware foveated rendering method proposed by Tursun et al. [58]. The sub-sampling was followed by bilinear interpolation before the images were passed on to the generator as input.
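The following sketch illustrates this input preparation; as simplifications we substitute a uniform random mask for the void-and-cluster blue-noise mask of [59], and scipy's scattered linear interpolation for bilinear interpolation:

```python
import numpy as np
from scipy.interpolate import griddata

def sparse_input(img, rate, seed=0):
    """img: (H, W, 3) float array; rate: fraction of pixels kept."""
    h, w, _ = img.shape
    rng = np.random.default_rng(seed)
    mask = rng.random((h, w)) < rate            # stand-in for blue noise [59]
    ys, xs = np.nonzero(mask)
    gy, gx = np.mgrid[0:h, 0:w]
    # interpolate each channel from the scattered retained pixels
    return np.stack([griddata((ys, xs), img[ys, xs, c], (gy, gx),
                              method="linear", fill_value=0.0)
                     for c in range(3)], axis=-1)

img = np.random.rand(256, 256, 3)               # stand-in for an ImageNet patch
near = sparse_input(img, 0.12)                  # near periphery: 12% of pixels
far = sparse_input(img, 0.007)                  # far periphery: 0.7% of pixels
```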
For the discriminator, we used the texture synthesis method described in Section 3.1, guided by the results of the perceptual experiment described in Section 3.2. This dataset consisted of 50K patches that were synthesized using 9.09% and 6.89% of the pixels as guiding samples for the near and far peripheral regions, respectively, in addition to the full-resolution ground-truth images.
Discriminator loss
The training of the discriminator, D, uses the same loss function as in the original WGAN design [1]:
ℒ_D = D(y) − D(G(x)),    (2)
where x represents the input to the generator network G, D(y) is the output of the discriminator for real samples y (natural images), and D(G(x)) is the output of the discriminator for reconstructions from x. This training is equivalent to the optimizations performed in previous work when y ∈ I, where I is the set of images from ImageNet.
In our experiments, we updated this formulation by using y* ∈ I*, where I* is the set of images with visually imperceptible structural distortions, as described in Section 3.2. We denote the discriminator loss operating on this manifold of synthesized images with structural distortions by:
ℒ_D* = D(y*) − D(G(x)).    (3)
To ensure the stability of the training of the WGAN, we imposed a soft Lipschitz constraint by using a gradient penalty [16].
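A sketch of the critic update follows: Eqs. (2) and (3), written as a quantity to minimize, plus the gradient penalty of [16]. Passing natural images y as `real` gives the standard loss ℒ_D; passing our synthesized near-threshold images y* gives the starred variant ℒ_D*:

```python
import torch

def critic_loss(D, G, x, real, gp_weight=10.0):
    """WGAN critic loss with gradient penalty (a sketch, not the exact code)."""
    fake = G(x).detach()
    # Wasserstein term, i.e., the negative of D(real) - D(fake) for minimization
    wloss = D(fake).mean() - D(real).mean()
    # gradient penalty on random interpolates between real and fake samples
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return wloss + gp_weight * penalty
```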
Generator loss
Our optimization trained generator networks with a weighted sum of different types of losses. For a comprehensive evaluation of our training scheme, we focused our analysis on three types of generators trained with standard and perceptual losses. The first generator, G_2, was trained with a combination of the standard MSE loss and the adversarial loss. The second generator, G_P, was trained with the learned perceptual image patch similarity (LPIPS) loss term; we used the learned linear weights on top of the VGG network as provided by the authors in their work [70]. Moreover, inspired by [19], we added a generator, G_L, that used a Laplacian-based loss, defined as a weighted sum of the mean squared errors between the corresponding levels of the Laplacian pyramids of the reconstruction and the ground truth. We assigned weights to each level according to a Gaussian with σ = 1.0. By centering the Gaussian on different levels, we were able to place more emphasis on the reconstruction fidelity of different spatial frequencies in the pyramid decomposition. The main motivation behind this loss was that, by assigning larger weights to lower spatial frequencies, the network is given more freedom to hallucinate high spatial frequencies, which might be desirable in the periphery.
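A sketch of the Laplacian-based loss; as a simplification we build the pyramid with average pooling (a Gaussian filter is the usual choice), and `center` selects the pyramid level on which the Gaussian weighting is centered (e.g., 0 for H, 3 for M, following the notation introduced later):

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(x, levels=5):
    """Band-pass decomposition; assumes power-of-two spatial dimensions."""
    pyr = []
    for _ in range(levels - 1):
        down = F.avg_pool2d(x, 2)
        up = F.interpolate(down, scale_factor=2, mode="bilinear",
                           align_corners=False)
        pyr.append(x - up)                      # band-pass residual
        x = down
    pyr.append(x)                               # low-pass residue
    return pyr

def laplacian_loss(pred, target, center, sigma=1.0, levels=5):
    """Per-level MSE weighted by a Gaussian centered on level `center`."""
    k = torch.arange(levels, dtype=torch.float32)
    w = torch.exp(-0.5 * ((k - center) / sigma) ** 2)
    w = w / w.sum()                             # normalized level weights
    pp, tp = laplacian_pyramid(pred, levels), laplacian_pyramid(target, levels)
    return sum(wi * F.mse_loss(p, t) for wi, p, t in zip(w, pp, tp))
```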
All the generator losses used in our experiments are listed in Table 1.
Training
Inspired by the work of Guenter et al. [15], we considered a foveated rendering scenario in which the image is divided into three regions with different levels of distortion. We assumed that the boundaries of these regions lie at 8° and 14° of eccentricity (represented as the red and green circles, respectively, in Figure 1), which coincide with the end of the parafovea and the center of the perifovea [62]. These boundaries divide the image into foveal, near-periphery, and far-periphery regions; our split is similar to those used in computer graphics research [Guenter et al. 2012; Patney et al. 2016]. While the content of the foveal region was directly transferred from a full-resolution image to ensure the highest quality, we reconstructed the near- and far-periphery regions from a sparse set of samples. The input sampling density for these regions was assumed to be the same as the number of guiding samples estimated in our experiment with image patches (Section 3). More precisely, we assumed that the near periphery was reconstructed from the number of samples required for an eccentricity of 8°, and the far periphery from the number corresponding to an eccentricity of 14°.
To perform this reconstruction, we trained two distinct generator networks, each responsible for the reconstruction of either the near or the far peripheral region. For benchmarking purposes, we also trained separate networks for each of the two discriminator losses (our ℒ_D* and the standard loss ℒ_D) and the three generator losses (ℒ_2, ℒ_P, and ℒ_L). The relative weights of the loss terms were set to λ_2 = 2000, λ_P = 100, λ_L = 100, and λ_A = 1. The selected weights were adjusted according to the magnitudes of the individual loss terms to equalize their contributions to the final loss; this was done by observing the values of the individual terms during training.
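For illustration, the ℒ_2-based total generator loss from Table 1 with these weights might be assembled as follows, with D and the generator output taken from the earlier sketches:

```python
import torch.nn.functional as F

# Loss weights quoted above; D is the critic from the earlier sketch.
weights = {"l2": 2000.0, "lpips": 100.0, "lap": 100.0, "adv": 1.0}

def generator_loss_l2(D, recon, target):
    adv = -D(recon).mean()                     # adversarial term from the critic
    return weights["l2"] * F.mse_loss(recon, target) + weights["adv"] * adv
```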
We used the Adam optimizer with a learning rate of 2 × 10⁻⁵ (β₁ = 0.5, β₂ = 0.999, ε = 10⁻⁸). The training lasted for 20-30 epochs until convergence, which took approximately one day on an Nvidia 2080 Ti GPU. We assumed convergence when the training loss reached a plateau. The sample reconstructions from the converged network were also visually checked against potential instabilities during training.
Sampling mask
By capitalizing on the potential correlation between subsequent frames, the network introduced by Kaplanyan et al. [24] uses recurrent connections as an important part of its design to retain information from previously subsampled frames. This high-level temporal reprojection provides the network with additional information when the underlying content is only partially observed owing to sparse subsampling. In order to clearly observe the effects of the different training schemes, we used information from only one frame and isolated the reconstruction from the effects of this flow of temporal information. In our initial experiments, we observed that this design decision made the network more sensitive to the sampling mask used in the inputs, because no temporal information could compensate for the lack of information on the true values of the missing samples. As a first attempt to address this issue, we filled in the missing information by interpolating the sampled pixels while retaining the sampling mask as a channel of the input. However, visual inspection revealed artifacts collocated with the sampled pixels (Figure 6), and their visibility depended on the weights assigned to the loss terms. The effect was most pronounced when we used ℒ_2 in training, and the artifacts were less visible with ℒ_P. As a remedy, we removed the sampling mask from the training input and provided the generator with the bilinearly interpolated input consisting of RGB channels, as shown in Figure 5. This solution proved effective in removing the visual artifacts from the reconstruction (see Figure 6 for a visual comparison).
RESULTS AND EVALUATION
We evaluated our strategy for training foveated image reconstruction using objective image metrics (Section 5.2) and a subjective experiment (Section 5.3). In our evaluation, we aimed to show that the benefits of our method are not limited to a particular choice of training loss. To this end, we evaluated the generator network of our method with six loss functions that are combinations of the LPIPS (ℒ_P), L2 (ℒ_2), and Laplacian pyramid loss (ℒ_L) terms (Table 1). For the discriminator network D, we benchmarked the performance of networks trained using our new patch dataset (ℒ_D*) as well as the original dataset (ℒ_D).
Table 1. The loss functions used for training the generator in our evaluations. ℒ_A denotes the adversarial term obtained with the standard discriminator (trained with ℒ_D), and ℒ_A* the term obtained with our discriminator (trained with ℒ_D*).

Loss function    Definition
L2               ℒ_G,2  = λ_2 · ℒ_2 + λ_A · ℒ_A
L2 ours          ℒ_G,2* = λ_2 · ℒ_2 + λ_A · ℒ_A*
LPIPS            ℒ_G,P  = λ_P · ℒ_P + λ_A · ℒ_A
LPIPS ours       ℒ_G,P* = λ_P · ℒ_P + λ_A · ℒ_A*
Laplacian        ℒ_G,L  = λ_L · ℒ_L + λ_A · ℒ_A
Laplacian ours   ℒ_G,L* = λ_L · ℒ_L + λ_A · ℒ_A*
Visual inspection
The first observation is that all eight GAN-based reconstructions exhibit clearly hallucinated content, and the reconstruction of very fine details is not exact. Although this is visible under direct visual inspection, such deviations are less visible when shown in the periphery. Furthermore, all reconstructions introduce high spatial frequencies and strong edges, but training with the ℒ_2 loss makes them sparser and more exaggerated. A visual comparison (Figure 7) between the discriminators trained with and without our synthesized dataset (i.e., ℒ_2 vs. ℒ_2*, ℒ_P vs. ℒ_P*, ℒ_HM vs. ℒ_HM*, ℒ_HH vs. ℒ_HH*) shows that our results include higher spatial frequencies. We argue that this is due to the flexibility of the discriminator, which penalizes hallucinations of high spatial frequencies less harshly. This is the desired effect because, while the HVS is sensitive to the removal of some high spatial frequencies in the periphery, it is less sensitive to changes in their positions (Section 2).
To further investigate the spatial frequency distribution of our reconstructions, we visualized the outputs of the frequency-band decomposition of the Laplacian pyramid and computed the differences between the two bottom levels of the pyramid (note the difference between frequency decomposition using the Laplacian pyramid and the Laplacian pyramid-based loss function), which encode the highest frequency band as well as the band one octave below it (Figure 8). We observe that our reconstructions provide additional hallucinated high-frequency details that do not exist in the traditional foveated image reconstruction. Please refer to the supplementary material for an interactive demo with more results.
Objective image metrics
We assessed the perceptual quality of our foveated image reconstruction using the recently introduced FovVideoVDP metric [33]. FovVideoVDP is a full-reference quality metric that can be used on images and videos. It takes into account the peripheral acuity of the HVS and the retinal eccentricity of the stimuli while computing quality scores. FovVideoVDP quality scores are in Just-Objectionable-Difference (JOD) units (JOD ∈ [0, 10]), where JOD = 10 represents the highest quality, while lower values represent higher perceived distortion with respect to the reference. We computed FovVideoVDP quality scores of the images generated by our method (ℒ_D*) and those reconstructed by networks trained on the standard dataset (ℒ_D), providing the original image to the metric as the reference. We report the FovVideoVDP quality scores in Table 2 for the different peripheral regions (near and far) and generator loss functions. Our method achieved higher quality scores than the standard approach to training the GAN: the generator was able to reconstruct the images better when we included perceptually non-objectionable distortions in the training set of the discriminator.

Table 2. FovVideoVDP [33] quality scores (in JOD units) of our method and standard training of GANs for image reconstruction. The scores were computed for near and far peripheral regions, which represent the images reconstructed at 8° and 14°, respectively. A higher score implies better visual quality.

We also evaluated our method by using other objective quality metrics. Although many objective quality metrics are available for non-foveated quality measurement, objective quality assessment for foveated images is still an open research problem. In the absence of quality metrics for specific types of image distortions, past work has shown that task-specific calibration of currently available objective quality metrics may be a promising solution [27, 69]. Motivated by this, we used our perceptual data to calibrate existing metrics: L2, SSIM [65], MS-SSIM [66], and LPIPS [70], separately for different eccentricities. The calibration was performed by fitting the following logistic function [41]:
P(d) = A + (K − A) / (C + Q · e^(−B·d))^(1/ν),    (4)
to reflect the non-linear relation between the magnitude of distortion d in the image and the probability of detecting it, with A, K, C, Q, B, and ν being free parameters. Inspired by LPIPS [70], we also considered reweighting the contributions of each convolution and pooling layer of VGG-19 for each eccentricity separately. We refer to this metric based on the calibrated VGG network as Cal. VGG.
For all metrics, the free parameters (i.e., the parameters of the logistic functions as well as the weights and biases of the VGG-19 layers) were obtained by minimizing the mean squared error in predicting the probability of detection:
∑_{(i, r) ∈ S_e} ( P(M(i, r)) − p_e(i, r) )²,    (5)
where M is one of the original metrics, S_e is the set of distorted and undistorted pairs of images for eccentricity e ∈ {8, 14, 20}, and p_e is the measured probability of detecting the difference. Minimization was performed using nonlinear curve fitting through the trust-region-reflective and Levenberg-Marquardt optimizations [4, 30] with multiple random initializations. Furthermore, we constrained the VGG weights to non-negative values to maintain the positive correlation between image dissimilarity and the magnitude of differences in VGG features, as motivated by [70]. To make our dataset more comprehensive, we added stimuli from an additional experiment that analyzed the visibility of blur; for this purpose, we followed the procedure described in Section 3.2.
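A sketch of this calibration for a single eccentricity, with synthetic stand-ins for the metric outputs and the measured probabilities; `method="trf"` selects scipy's trust-region-reflective optimizer (cf. [4]), and `method="lm"` its Levenberg-Marquardt variant (cf. [30]):

```python
import numpy as np
from scipy.optimize import least_squares

def logistic(d, A, K, C, Q, B, nu):
    """Generalized logistic of Eq. (4), mapping metric values to probabilities."""
    return A + (K - A) / (C + Q * np.exp(-B * d)) ** (1.0 / nu)

# Synthetic stand-ins for metric outputs and measured detection probabilities.
m_vals = np.linspace(0.0, 5.0, 25)
p_det = 1.0 / (1.0 + np.exp(-2.0 * (m_vals - 2.5)))

def residuals(theta):
    return logistic(m_vals, *theta) - p_det    # Eq. (5) in residual form

theta0 = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # one random initialization
fit = least_squares(residuals, theta0, method="trf")
```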
To validate our calibration, we performed 5-fold cross-validation and computed Pearson's correlations between the ground-truth probability of detecting distortions and the metric predictions. Figure 9 presents the correlation coefficients for all trained metrics and eccentricities, computed as an average across all folds. Each bar shows the measured correlation for the uncalibrated (bright part) and calibrated (dark part) metrics using the data from our initial experiment (Section 3.2). For the uncalibrated metrics, we used the standard logistic sigmoid f(x) = 1/(1 + e^(−x)). We also provide the aggregated results, where the correlation was analyzed across all eccentricities. The individual scores show that, as the eccentricity increases, the performance of the original metrics declines. The additional calibration significantly improves the prediction performance of all metrics. An interesting observation is that LPIPS performs very well for small eccentricities (8°). For larger ones (14° and 20°), however, its performance is significantly reduced even with the optimized logistic function. We relate this observation to the fact that LPIPS is not trained for peripheral vision. However, when the weights of the deep layers of VGG-19 are optimized (Cal. VGG), the performance improves significantly. This suggests that the above metrics are promising, but, depending on the eccentricity, the contributions of the individual layers to the overall prediction must change. Since Cal. VGG delivered the best performance in these tests, we selected it to benchmark the foveated image reconstruction techniques listed in Table 1. The results of this test for the other metrics, which we did not use for evaluation, are reported in the supplementary material as a reference.

After calibration, Cal. VGG is still limited to processing image patches as input. To run Cal. VGG on full images that cover a larger field of view, it needs to account for the change in eccentricity with the position of a given pixel in the image. To support arbitrary values of eccentricity as input, we linearly interpolated the prediction of the metric between the 8-degree and 20-degree calibrations for intermediate eccentricities. Moreover, in contrast to our approach in the calibration step, we switched to a single logistic function whose parameters were estimated using the experimental data from all eccentricities. After these extensions, we ran Cal. VGG locally on non-overlapping patches of the full input image. To compute a single scalar for the entire image, we took the average value across all patches as a global pooling step.

To benchmark the different reconstruction methods, we randomly selected 10 publicly available images at 3840 × 2160 resolution that contained architectural and natural features. Before applying the different reconstruction techniques, we split the images into three regions: fovea, near periphery, and far periphery. We then drew sparse samples as visualized in Figure 1. To test the reconstruction quality provided by different sampling rates in the near- and far-peripheral regions, we analyzed the Cal. VGG predictions for various blending strategies by changing the eccentricity thresholds at which the transition from the near to the far peripheral region occurred. We computed the predicted detection rates from Cal. VGG for threshold points between 9° and 22° for all images. Figure 10 presents the results.
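A sketch of this full-image evaluation follows; the per-eccentricity predictors, the pixels-per-degree value, the gaze position, and the patch size are all placeholders:

```python
import numpy as np

def metric_8(p):                                # stand-ins for the calibrated
    return float(p.std())                       # per-eccentricity predictors
def metric_20(p):
    return float(p.std()) * 0.5

def pooled_score(img, gaze, patch=64, ppd=60.0):  # ppd: assumed pixels/degree
    h, w, _ = img.shape
    scores = []
    for y in range(0, h - patch + 1, patch):        # non-overlapping patches
        for x in range(0, w - patch + 1, patch):
            cy, cx = y + patch / 2.0, x + patch / 2.0
            ecc = np.hypot(cx - gaze[0], cy - gaze[1]) / ppd  # px -> degrees
            t = np.clip((ecc - 8.0) / 12.0, 0.0, 1.0)  # blend 8° -> 20°
            p = img[y:y + patch, x:x + patch]
            scores.append((1.0 - t) * metric_8(p) + t * metric_20(p))
    return float(np.mean(scores))                   # global average pooling

score = pooled_score(np.random.rand(512, 512, 3), gaze=(256.0, 256.0))
```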
Lower detection rates indicate lower probabilities of detecting reconstruction artifacts by human observers and, therefore, a higher reconstruction quality. Training the reconstruction using our losses ℒ_2* and ℒ_P* yields reconstructions that are the least likely to be distinguished from the original images. The results generated using ℒ_2* delivered a lower detection rate than those generated using ℒ_2; the detection rate for our method is significantly lower when the far-periphery threshold is selected in the eccentricity range of 12°-22° (p < 0.05). For ℒ_P*, the difference in detection rate compared with ℒ_P is significant for thresholds between 9° and 16° (p < 0.05). We did not note a significant difference between the methods (p > 0.10 for all cases considered) when the network was trained using the Laplacian loss. All p-values were computed using a t-test.
We separated the images into two groups according to their prominent visual features: nature and architecture. Nature images were considered to form a class containing fewer geometric structures and more texture-like areas, e.g., leaves, trees, and waves. They usually have a large variety of shapes without any well-defined patterns and exhibit a high level of variance in color and structure. Visual distortions in nature images are therefore less likely to result in perceivable changes, because the variance in color and structure may have a masking effect on the distortions. On the contrary, architecture images mostly contain structured, human-made objects, such as buildings, and larger uniform areas with clear visual boundaries. They usually have many edges and corners, which makes a perceptually plausible and faithful reconstruction from sparse image samples more challenging. Distorting such images is more likely to mix visual information from different areas, which is easy to detect even in the peripheral region of vision. Owing to these distinct properties, we evaluated the results on the two types of images separately. The results show that the difference in detection rate between our method and the standard training, compared to the overall trend, is more pronounced for nature images and less pronounced for architecture images. The results are available in Figure 6 of the supplementary material.

The psychovisual experiment used to derive the data for training our reconstruction methods involved 5 participants. Although it is common to use few participants in such experiments owing to their complexity, and given that they should capture general properties of the HVS, such experiments do not investigate potential differences within the population. In addition, it is not clear whether the method derived from the perceptual data is effective. Therefore, to further validate our claims regarding the new training strategy, and to verify the importance of the improvements observed with the calibrated metrics, we conducted an additional subjective user experiment in which naive participants were asked to directly compare the different reconstruction methods.
Subjective experiments
Stimuli and task. We used the 10 images from our evaluation with objective image metrics (Section 5.2). They were sub-sampled and reconstructed using ℒ_2*, ℒ_2, ℒ_P*, and ℒ_P, as shown in Figure 1. In each trial, the participants were shown the original image on the left and one of two reconstructions on the right half of the display; the two halves were separated by a 96-pixel-wide gray stripe. The participants could freely switch between the reconstructions using the keyboard and were asked to select the reconstruction more similar to the reference on the left by pressing a key. During the experiment, the images followed the eye movements of the participant, as shown in Figure 11. In contrast to the calibration experiment performed in Section 3.2, in this experiment we showed full images to the participants, each covering half of the screen. Fixation was enforced as in the calibration experiment described in Section 3.2. Each trial took 15 seconds on average, and the total duration of the experiment was around 15 minutes.
Participants. 15 participants with normal or corrected-to-normal vision took part in the experiment. All were naive for the purposes of the study and were given instructions at the beginning. Each participant was asked to compare all pairs of techniques for each image (60 comparisons per participant).
Results. To analyze the results, for each pair of techniques we computed a preference rate of method A over method B. The rate expresses the percentage of trials in which method A was chosen as visually more similar to the original image. Table 3 shows the preference rates obtained by networks trained using our procedure (ℒ_P*, ℒ_2*) in comparison with the standard procedure (ℒ_P, ℒ_2). We report the results for all images (last column) and separately for nature and architecture images. We used the binomial test to compute the p-values. The reconstructions obtained using a network trained with our strategy are preferred in 57% of the cases (p = 0.013). The difference is significant for ℒ_P*, with a 59% preference (p = 0.04). In the context of different image classes, our method performed well on nature images, both when we consider ℒ_2* and ℒ_P* separately and when we consider them jointly (in each case preferred in 75% of cases, p < 0.001). On architecture images, we observe a preference for ℒ_2 (63%, p = 0.037), and across all techniques our method is preferred in 40% of the cases (p = 0.018). This is consistent with the results of Cal. VGG (Section 5.2), where our method had a lower probability of detection on nature images and a similar probability of detection on architecture images. We hypothesize that there might be several reasons for its poorer performance on architecture images. First, architecture images contain objects with simple shapes, uniform areas, edges, and corners. Such features might not have been represented well in our calibration, which used 256 × 256 patches whose size was limited to avoid testing the visibility across a wide range of eccentricities. Furthermore, we believe that distortions in the visual features of simple objects are much easier to perceive than those in natural textures, which are more random. This problem might have been aggravated because our calibration considered both groups together and did not make any distinction when modeling the perception of their artifacts. It might be solved by using different numbers of guiding samples for different classes of images when generating the dataset for training the GAN-based reconstruction. However, this would require more careful data collection in the initial experiment and a more complex model that predicts the number of guiding samples from the image content. Once these challenges have been addressed, the proposed approach can yield a more accurate dataset and can be used to train a single architecture that handles different types of images.

Figure 12 shows the preferences for the individual methods compared with all other training strategies, including different loss functions, i.e., ℒ_P*, ℒ_P, ℒ_2, and ℒ_2*. In the experiment with experts (left), ℒ_P* attained the highest preference of 38% (p < 0.001), while ℒ_2 recorded the lowest preference (24%, p < 0.001).

Fig. 11. An example of the stimuli shown during the experiment. The image follows the gaze point, which is marked with the green dot.

Table 3. Preference rates of the methods ℒ_2* and ℒ_P* over ℒ_2 and ℒ_P when trained using the data collected from the expert group. The values were computed by averaging across participants. The errors correspond to the standard error of the mean.
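A sketch of the preference-rate analysis behind Table 3: the rate is the fraction of trials in which one method was picked, and significance comes from a two-sided binomial test against chance (the counts below are illustrative, not the study's data):

```python
from scipy.stats import binomtest

def preference(times_a_chosen, n_trials):
    """Preference rate of method A and its two-sided binomial p-value."""
    rate = times_a_chosen / n_trials
    p = binomtest(times_a_chosen, n_trials, p=0.5).pvalue
    return rate, p

rate, p = preference(513, 900)   # illustrative counts, not the study's data
print(f"preference = {rate:.0%}, p = {p:.3f}")
```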
When divided into classes, ℒ_P* and ℒ_2* were the most preferred methods for nature images, with values of 41% (p < 0.001) and 37% (p = 0.003), respectively. The other methods had lower preference values: 22% for ℒ_P (p < 0.001) and 20% for ℒ_2 (p < 0.001). ℒ_P was the most preferred on architecture images (37%, p = 0.005), followed by ℒ_P* (35%, p = 0.032). ℒ_2* was selected the fewest number of times (21%, p < 0.001). All p-values were computed using the binomial test, and the remaining results were not statistically significant. The experiment, when repeated with naive participants (right), yielded different thresholds for the number of guiding samples needed for appropriate foveated reconstruction. In particular, the values related to 8° and 14° changed from 9.09 to 7.93 and from 6.89 to 4.57, respectively. This means that a standard (naive) observer requires fewer guiding samples to reach a fixed image quality than an expert observer. Because texture synthesis was the initial step of our pipeline, we retrained all our networks and repeated the validation experiment with the new reconstructions. The results are presented in Table 4 and Figure 12 (right). The new experiments showed that, while our technique maintained a slight advantage through ℒ_P* over the standard method on nature images, our reconstructions delivered the worst performance on architecture images.

Table 4. Preference rates of the methods ℒ_2* and ℒ_P* over ℒ_2 and ℒ_P when trained using the data gathered from the naive group. The values were computed by averaging across participants. The errors correspond to the standard error of the mean.
Discussion
The results of the final evaluation demonstrate that, with a successfully derived training dataset containing near-threshold distortions, our GAN-based training strategy can improve the quality of the reconstructions. The benefits, however, are observed with the more conservative calibration (Section 3) performed by experts. When calibration data from the naive participants are used, the final preference shifts towards the standard reconstruction strategy. This shows that our strategy works under conservative calibration conditions. We attribute this to potential limitations of our calibration, which was performed on small patches, while larger patches may render some artifacts more prominent. However, there is a trade-off between using small patches, which provide localized information about the sensitivity to distortions at a particular eccentricity, and making the patches larger and losing this property. In future work, it would be interesting to investigate better calibration strategies for our technique.
As described, our technique can be directly applied to train GAN-based foveated image reconstructions. Our experiments demonstrate that the same network architecture with the same input provides higher-visual-quality reconstructions if our training strategy is used than otherwise. This applies directly to techniques such as the one proposed by Kaplanyan et al. [24]. Because the density of the input sampling can be changed, we argue that our training can also reduce the number of input samples while preserving quality, which will improve the efficiency of the entire image generation pipeline.
CONCLUSIONS AND FUTURE WORK
Currently available techniques for foveated image reconstruction use perceptual loss to guide network training to capitalize on perceptually important image features. The goal of this work was to inject perceptual information into the discriminator network. To this end, during training, we provided the discriminator with images containing distortions that are imperceptible to a human observer. This allowed the discriminator to inherit the properties of the HVS encoded in the training dataset. Our new dataset contains images with invisible spatial distortions based on texture synthesis. We argue that such distortions are much closer to artifacts introduced by GAN-based reconstruction than the previously considered blur. Moreover, the new dataset allowed us to train several image metrics to improve their predictions of stimulus quality presented in the periphery.
We studied the suitability of the new training strategy for foveated image reconstruction. In future work, it is essential to extend this investigation to video content as this may yield benefits when the sensitivity of the HVS to temporal artifacts is incorporated. We trained separate networks for the near and far periphery. While this makes the training procedure easier, a more practical solution is to train one network to handle spatially varying density. An alternative solution to attain this goal is to use a fully convolutional network in the log-polar domain [49,63]. We also did not focus on computational performance. At present, our unoptimized inference takes 3 seconds on our hardware. Although previous work [24] has shown the feasibility of using GAN in such scenarios, computational efficiency remains an important concern. We believe that making networks and their training aware of the limitations of human perception will be important to close the gaps.
Another exciting direction of research is to design a foveated image metric that accounts for a wide range of effects. While the work by Mantiuk et al. [33] has taken this approach, they targeted image quality instead of the visibility of distortion. The challenge here lies in collecting large-scale perceptual data with eye-tracking-based information that can help determine the visibility of distortions. Even though our dataset contains this information, it is not sufficient to train a general-purpose visibility metric for both foveal and peripheral vision.
Finally, we believe that the idea of supplying the discriminator in the GAN architectures with images containing near-threshold distortions during training extends beyond applications to foveated image reconstructions. We see it as a more general strategy for training perception-aware GAN-based techniques for the creation of graphical content. Our work considered texture synthesis as a technique suitable for creating controlled distortions relevant to our application. However, it is not ideal for capturing other characteristics of perception, such as the sensitivity to temporal changes, color, and depth, that might be relevant in different applications. It would be interesting if other researchers followed our procedure and included these aspects in the training of generative-adversarial networks to verify their benefits.
Fig. 1. The input to the foveated image reconstruction consists of sparse samples (magnify the top-left part of the image or see the insets), based on which the technique reconstructs the image (bottom-right part of the image). The image in this example consists of three regions: fovea (100% samples), near periphery (12% samples), and far periphery (0.7% samples). The white cross in the center indicates the position of the gaze. The reconstructed regions are combined by using linear blending to create a smooth transition between them.
Fig. 2. Two strategies for synthesizing stimuli by using the guiding samples. Top: a two-step procedure in which a second unconstrained optimization is performed to remove subtle checkerboard artifacts after applying a Gaussian filter with σ = 1. Bottom: constrained optimization initialized with a blurred version of the guiding samples.

Fig. 3. Results of stimulus generation for different numbers of guiding samples (left) and values of σ of the Gaussian filter (right). Note the increased distortions relative to the exemplar when the number of guiding samples decreases (left) and when σ increases (right).
Fig. 4. The probability of detecting differences between the original and the synthesized images as a function of the number of guiding samples and eccentricity. The error bars visualize the standard errors of the mean. On the left are the results of the expert group; on the right, those of the naive group.
Fig. 5. The architecture of our proposed network.
Fig. 6. Training with (left) and without (right) an input sampling mask.
Fig. 7. Sample reconstructions of nature and architecture images by all evaluated methods for the far periphery. For the Laplacian-based results, M denotes the largest weight assigned to medium spatial frequencies and H the largest weight assigned to high spatial frequencies. Note that the provided images are shown for demonstration purposes only; for appropriate perception cues, they need to be observed in full screen at 36 cpd. They correspond only to the far peripheral area of vision, spanning from 21° to the corners of the display.
Figure 7 presents the results of reconstruction obtained by using the differently trained architectures on four images. For reference, we include the original high-resolution image and the standard foveated reconstruction using an interpolation with Gaussian weights. For the results of training using the Laplacian loss, we introduce a notation consisting of two letters, ℒ_XY, where X, Y ∈ {H, M} encode the position of the Gaussian peak for the far and near periphery, respectively. The letter H represents a peak located at the first level of the pyramid (an emphasis on high spatial frequencies), whereas the letter M represents a peak located at the fourth level of the pyramid (medium spatial frequencies). For example, ℒ_HM denotes a reconstruction obtained by a network trained with the Laplacian pyramid-based loss in which high spatial frequencies were assigned larger weights for the far periphery and medium frequencies were given higher importance for the near periphery.

Fig. 8. High-frequency content hallucination using the standard training (ℒ), our training (ℒ*), and standard Gaussian blur.
Fig. 9. Pearson's correlation coefficients of the analyzed methods. Bright bars: uncalibrated metrics (using the standard logistic sigmoid f(x) = 1/(1 + e^(−x))); dark bars: calibrated metrics (fitted to the logistic model of Eq. 4).
Fig. 10. Detection rates according to the objective Cal. VGG metric with increasing radius of the far periphery. Lower values indicate a higher quality of reconstruction.
Fig. 12. Preference rates of the methods trained using data gathered from the expert group (left) vs. the naive group (right) in the calibration experiment (Section 3.2). The scores on the y-axis represent the number of times a given method was selected, divided by the total number of trials of the experiment. The error bars show the standard error of the mean. The values were computed by averaging across participants.
ACKNOWLEDGMENTS
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 804226, PERDY). The images used in this paper come from the ImageNet dataset and Pexels.com. We would like to thank all who contributed to these image collections.
REFERENCES
Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein GAN. arXiv:1701.07875 [cs, stat] (Jan. 2017).
Benjamin Balas, Lisa Nakano, and Ruth Rosenholtz. 2009. A summary-statistic representation in peripheral vision explains visual crowding. Journal of Vision 9, 12 (2009).
Peter G. J. Barten. 1999. Contrast sensitivity of the human eye and its effects on image quality. SPIE Press.
Mary Ann Branch, Thomas F. Coleman, and Yuying Li. 1999. A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems. SIAM Journal on Scientific Computing 21, 1 (1999).
Valentin Bruder, Christoph Schulz, Ruben Bauer, Steffen Frey, Daniel Weiskopf, and Thomas Ertl. 2019. Voronoi-Based Foveated Volume Rendering. In EuroVis (Short Papers). The Eurographics Association.
Michał Chwesiuk and Radosław Mantiuk. 2019. Measurements of contrast sensitivity for peripheral vision. In ACM Symposium on Applied Perception 2019 (SAP '19). ACM, New York, NY, USA, Article 20, 9 pages. https://doi.org/10.1145/3343036.3343123
Christine A. Curcio, Kenneth R. Sloan, Robert E. Kalina, and Anita E. Hendrickson. 1990. Human photoreceptor topography. Journal of Comparative Neurology 292, 4 (1990), 497-523.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 248-255.
Arturo Deza, Aditya Jonnalagadda, and Miguel Eckstein. 2017. Towards Metamerism via Foveated Style Transfer. arXiv:1705.10041 [cs] (May 2017).
Alexey Dosovitskiy and Thomas Brox. 2016. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems. 658-666.
Jenelle Feather, Alex Durango, Ray Gonzalez, and Josh McDermott. 2019. Metamers of neural networks reveal divergence from human perceptual systems. In Advances in Neural Information Processing Systems. 10078-10089.
Lex Fridman, Benedikt Jenik, Shaiyan Keshvari, Bryan Reimer, Christoph Zetzsche, and Ruth Rosenholtz. 2017. A Fast Foveated Fully Convolutional Network Model for Human Peripheral Vision. arXiv:1706.04568 [cs.NE] (2017).
Leon Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems. 262-270.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems 27 (2014).
Brian Guenter, Mark Finch, Steven Drucker, Desney Tan, and John Snyder. 2012. Foveated 3D Graphics. ACM Transactions on Graphics 31, 6 (Nov. 2012), 164:1-164:10.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. 2017. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems. 5767-5777.
Peiyao Guo, Qiu Shen, Zhan Ma, David J. Brady, and Yao Wang. 2018. Perceptual Quality Assessment of Immersive Images Considering Peripheral Vision Impact. arXiv:1802.09065 [cs] (Feb. 2018).
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770-778.
Alexander Hepburn, Valero Laparra, Ryan McConville, and Raul Santos-Rodriguez. 2019. Enforcing perceptual consistency on generative adversarial networks by using the normalised Laplacian pyramid distance. arXiv:1908.04347 (2019).
Robert F. Hess and David Field. 1993. Is the increased spatial uncertainty in the normal periphery due to spatial undersampling or uncalibrated disarray? Vision Research 33, 18 (1993), 2663-2670.
Chih-Fan Hsu, Anthony Chen, Cheng-Hsin Hsu, Chun-Ying Huang, Chin-Laung Lei, and Kuan-Ta Chen. 2017. Is Foveated Rendering Perceivable in Virtual Reality?: Exploring the Efficiency and Consistency of Quality Assessment Methods. In Proceedings of the 25th ACM International Conference on Multimedia (MM '17). ACM, New York, NY, USA, 55-63. https://doi.org/10.1145/3123266.3123434
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1125-1134.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Computer Vision - ECCV 2016 (Lecture Notes in Computer Science), Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Eds.). Springer International Publishing, 694-711.
Anton Kaplanyan, Anton Sochenov, Thomas Leimkühler, Mikhail Okunev, Todd Goodall, and Gizem Rufo. 2019. DeepFovea: Neural reconstruction for foveated rendering and video compression using learned natural video statistics. In ACM SIGGRAPH 2019 Talks (SIGGRAPH '19). ACM, New York, NY, USA, Article 58, 2 pages.
Jonghyun Kim, Zander Majercik, Peter Shirley, Josef Spjut, Morgan McGuire, David Luebke, Youngmo Jeong, Michael Stengel, Kaan Akşit, Rachel Albert, Ben Boudaoud, Trey Greer, Joohwan Kim, and Ward Lopes. 2019. Foveated AR: Dynamically-foveated augmented reality display. ACM Transactions on Graphics 38, 4 (July 2019).
Kil Joong Kim, Rafal Mantiuk, and Kyoung Ho Lee. 2013. Measurements of achromatic and chromatic contrast sensitivity functions for an extended range of adaptation luminance. In Human Vision and Electronic Imaging XVIII, Vol. 8651. International Society for Optics and Photonics, 86511A.
Vamsi Kiran Adhikarla, Marek Vinkler, Denis Sumin, Rafal K. Mantiuk, Karol Myszkowski, Hans-Peter Seidel, and Piotr Didyk. 2017. Towards a Quality Metric for Dense Light Fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
George F. Koob, Michel Le Moal, and Richard F. Thompson. 2010. Encyclopedia of Behavioral Neuroscience. Elsevier.
Sanghoon Lee, M. S. Pattichis, and A. C. Bovik. 2002. Foveated video quality assessment. IEEE Transactions on Multimedia 4, 1 (March 2002), 129-132. https://doi.org/10.1109/6046.985561
Kenneth Levenberg. 1944. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics 2, 2 (1944), 164-168.
Dennis M. Levi, Stanley A. Klein, and Yen Lee Yap. 1987. Positional uncertainty in peripheral and amblyopic vision. Vision Research 27, 4 (1987), 581-597.
James Mannos and David Sakrison. 1974. The effects of a visual fidelity criterion of the encoding of images. IEEE Transactions on Information Theory 20, 4 (1974), 525-536.
Rafał K. Mantiuk, Gyorgy Denes, Alexandre Chapiro, Anton Kaplanyan, Gizem Rufo, Romain Bachy, Trisha Lian, and Anjul Patney. 2021. FovVideoVDP: A visible difference predictor for wide field-of-view video. ACM Transactions on Graphics 40, 4 (2021).
Xiaoxu Meng, Ruofei Du, Matthias Zwicker, and Amitabh Varshney. 2018. Kernel Foveated Rendering. Proceedings of the ACM on Computer Graphics and Interactive Techniques 1, 1 (July 2018).
M. Concetta Morrone, David C. Burr, and Donatella Spinelli. 1989. Discrimination of spatial phase in central and peripheral vision. Vision Research 29, 4 (1989), 433-445.
Augustus Odena, Vincent Dumoulin, and Chris Olah. 2016. Deconvolution and checkerboard artifacts. Distill (2016).
Anjul Patney, Marco Salvi, Joohwan Kim, Anton Kaplanyan, Chris Wyman, Nir Benty, David Luebke, and Aaron Lefohn. 2016. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics 35, 6 (Nov. 2016), 179:1-179:12.
Eli Peli, Jian Yang, and Robert B. Goldstein. 1991. Image invariance with changes in size: The role of peripheral contrast thresholds. JOSA A 8, 11 (1991), 1762-1774.
Javier Portilla and Eero P. Simoncelli. 2000. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision 40, 1 (2000), 49-70.
Ingo Rentschler and Bernhard Treutwein. 1985. Loss of spatial phase relationships in extrafoveal vision. Nature 313, 6000 (1985), 308-310.
F. J. Richards. 1959. A flexible growth function for empirical use. Journal of Experimental Botany 10, 2 (1959), 290-301.
Snježana Rimac-Drlje, Mario Vranješ, and Drago Žagar. 2010. Foveated mean squared error: a novel video quality metric. Multimedia Tools and Applications 49, 3 (Sept. 2010), 425-445. https://doi.org/10.1007/s11042-009-0442-1
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 234-241.
Ruth Rosenholtz. 2011. What your visual system sees where you are not looking. In SPIE: Human Vision and Electronic Imaging XVI, Bernice E. Rogowitz and Thrasyvoulos N. Pappas (Eds.). San Francisco Airport, California, USA. https://doi.org/10.1117/12.876659
Ruth Rosenholtz. 2016. Capabilities and Limitations of Peripheral Vision. Annual Review of Vision Science 2 (2016), 437-457. https://doi.org/10.1146/annurev-vision-082114-035733
Ruth Rosenholtz, Jie Huang, and Krista A. Ehinger. 2012. Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology 3 (2012), 13.
Ruth Rosenholtz, Jie Huang, Alvin Raj, Benjamin J. Balas, and Livia Ilie. 2012. A summary statistic representation in peripheral vision explains visual search. Journal of Vision 12, 4 (April 2012). https://doi.org/10.1167/12.4.14
Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR. arXiv:1409.1556 [cs.CV]
Fabio Solari, Manuela Chessa, and Silvio P. Sabatini. 2012. Design strategies for direct multi-scale and multi-orientation feature extraction in the log-polar domain. Pattern Recognition Letters 33, 1 (2012), 41-51.
Michael Stengel, Steve Grogorick, Martin Eisemann, and Marcus Magnor. 2016. Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering. Computer Graphics Forum 35, 4 (2016), 129-139.
Hans Strasburger, Ingo Rentschler, and Martin Jüttner. 2011. Peripheral vision and pattern recognition: A review. Journal of Vision 11, 5 (2011).
Nicholas T. Swafford, José A. Iglesias-Guitian, Charalampos Koniaris, Bochang Moon, Darren Cosker, and Kenny Mitchell. 2016. User, Metric, and Computational Evaluation of Foveated Rendering Methods. In Proceedings of the ACM Symposium on Applied Perception (SAP '16). ACM, New York, NY, USA, 7-14. https://doi.org/10.1145/2931002.2931011
Taimoor Tariq, Cara Tursun, and Piotr Didyk. 2022. Noise-Based Enhancement for Foveated Rendering. ACM Transactions on Graphics 41, 4, Article 143 (July 2022). https://doi.org/10.1145/3528223.3530101
Taimoor Tariq, Okan Tarhan Tursun, Munchurl Kim, and Piotr Didyk. 2020. Why are deep representations good perceptual quality features? In The European Conference on Computer Vision (ECCV).
Retinal limits to the detection and resolution of gratings. L N Thibos, F E Cheney, D J Walsh, JOSA A. 4L. N. Thibos, F. E. Cheney, and D. J. Walsh. 1987. Retinal limits to the detection and resolution of gratings. JOSA A 4, 8 (1987), 1524-1529.
T T Huyen, Duc V Tran, Nam Pham Nguyen, Trang H Ngoc, Hoang, arXiv:1908.06239arXiv: 1908.06239Truong Thu Huong, and Truong Cong Thang. 2019. Impacts of Retina-related Zones on Quality Perception of Omnidirectional Image. cs, eessHuyen T. T. Tran, Duc V. Nguyen, Nam Pham Ngoc, Trang H. Hoang, Truong Thu Huong, and Truong Cong Thang. 2019. Impacts of Retina-related Zones on Quality Perception of Omnidirectional Image. arXiv:1908.06239 [cs, eess] (Aug. 2019). arXiv: 1908.06239.
Foveation-based image quality assessment. W Tsai, Y Liu, 10.1109/VCIP.2014.70514952014 IEEE Visual Communications and Image Processing Conference. W. Tsai and Y. Liu. 2014. Foveation-based image quality assessment. In 2014 IEEE Visual Communications and Image Processing Conference. 25-28. https://doi.org/10.1109/VCIP.2014.7051495
Luminance-contrast-aware foveated rendering. Elena Okan Tarhan Tursun, Marek Arabadzhiyska-Koleva, Radosław Wernikowski, Hans-Peter Mantiuk, Karol Seidel, Piotr Myszkowski, Didyk, ACM Transactions on Graphics. 384Okan Tarhan Tursun, Elena Arabadzhiyska-Koleva, Marek Wernikowski, Radosław Mantiuk, Hans-Peter Seidel, Karol Myszkowski, and Piotr Didyk. 2019. Luminance-contrast-aware foveated rendering. ACM Transactions on Graphics 38, 4 (2019).
Void-and-cluster method for dither array generation. Robert A Ulichney, Human Vision, Visual Processing, and Digital Display IV. 1913Robert A. Ulichney. 1993. Void-and-cluster method for dither array generation. In Human Vision, Visual Processing, and Digital Display IV, Vol. 1913. International Society for Optics and Photonics, 332-343.
Foveation-based content adaptive root mean squared error for video quality assessment. Mario Vranješ, Snježana Rimac-Drlje, Denis Vranješ, 10.1007/s11042-017-5544-6Multimedia Tools and Applications. 77Mario Vranješ, Snježana Rimac-Drlje, and Denis Vranješ. 2018. Foveation-based content adaptive root mean squared error for video quality assessment. Multimedia Tools and Applications 77, 16 (Aug. 2018), 21053-21082. https://doi.org/10.1007/s11042-017-5544-6
Beyond blur: Real-time ventral metamers for foveated rendering. Rafael David R Walton, Kuffner Dos, Sebastian Anjos, David Friston, Kaan Swapp, Anthony Akşit, Tobias Steed, Ritschel, ACM Transactions on Graphics. 404David R Walton, Rafael Kuffner Dos Anjos, Sebastian Friston, David Swapp, Kaan Akşit, Anthony Steed, and Tobias Ritschel. 2021. Beyond blur: Real-time ventral metamers for foveated rendering. ACM Transactions on Graphics 40, 4 (2021).
Foundations of vision. Brian Wandell, Stephen Thomas, Psyccritiques. 427Brian Wandell and Stephen Thomas. 1997. Foundations of vision. Psyccritiques 42, 7 (1997).
Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Panqu Wang, W Garrison, Cottrell, Journal of vision. 174Panqu Wang and Garrison W. Cottrell. 2017. Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of vision 17, 4 (2017).
Foveated wavelet image quality index. Zhou Wang, Alan C Bovik, Ligang Lu, Jack L Kouloheris, 10.1117/12.449797Applications of Digital Image Processing. Andrew G. TescherSan Diego, CAZhou Wang, Alan C. Bovik, Ligang Lu, and Jack L. Kouloheris. 2001. Foveated wavelet image quality index. In Applications of Digital Image Processing XXIV, Andrew G. Tescher (Ed.). San Diego, CA, 42-52. https://doi.org/10.1117/12.449797
Image quality assessment: from error visibility to structural similarity. Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, IEEE Transactions on Image Processing. 13Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (2004), 600-612.
Multiscale structural similarity for image quality assessment. Zhou Wang, P Eero, Alan C Simoncelli, Bovik, The Thrity-Seventh Asilomar Conference on Signals. Ieee2Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. 2003. Multiscale structural similarity for image quality assessment. In The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2. Ieee, 1398-1402.
A formula for human retinal ganglion cell receptive field density as a function of visual field location. Andrew B Watson, Journal of Vision. 14Andrew B. Watson. 2014. A formula for human retinal ganglion cell receptive field density as a function of visual field location. Journal of Vision 14, 7 (2014).
. Martin Weier, Michael Stengel, Thorsten Roth, Piotr Didyk, Elmar Eisemann, Martin Eisemann, Steve Grogorick, André Hinkenjann, Ernst Kruijff, Marcus Magnor, Perception-driven Accelerated Rendering. Computer Graphics Forum. 362Martin Weier, Michael Stengel, Thorsten Roth, Piotr Didyk, Elmar Eisemann, Martin Eisemann, Steve Grogorick, André Hinkenjann, Ernst Kruijff, Marcus Magnor, et al. 2017. Perception-driven Accelerated Rendering. Computer Graphics Forum 36, 2 (2017), 611-643.
Selecting texture resolution using a task-specific visibility metric. K Wolski, D Giunchi, S Kinuwaki, P Didyk, K Myszkowski, A Steed, R K Mantiuk, 10.1111/cgf.13871Computer Graphics Forum. 38K. Wolski, D. Giunchi, S. Kinuwaki, P. Didyk, K. Myszkowski, A. Steed, and R. K. Mantiuk. 2019. Selecting texture resolution using a task-specific visibility metric. Computer Graphics Forum 38, 7 (2019), 685-696. https://doi.org/10.1111/cgf.13871
The unreasonable effectiveness of deep features as a perceptual metric. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, Oliver Wang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionRichard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 586-595.
| [] |
[
"Look-back Decoding for Open-Ended Text Generation",
"Look-back Decoding for Open-Ended Text Generation"
] | [
"Nan Xu \n♠ Meta\nUniversity of Southern California\nAI\n",
"Chunting Zhou \n♠ Meta\nUniversity of Southern California\nAI\n",
"♠ \n♠ Meta\nUniversity of Southern California\nAI\n",
"Asli Celikyilmaz [email protected] \n♠ Meta\nUniversity of Southern California\nAI\n",
"Xuezhe Ma [email protected] \n♠ Meta\nUniversity of Southern California\nAI\n"
] | [
"♠ Meta\nUniversity of Southern California\nAI",
"♠ Meta\nUniversity of Southern California\nAI",
"♠ Meta\nUniversity of Southern California\nAI",
"♠ Meta\nUniversity of Southern California\nAI",
"♠ Meta\nUniversity of Southern California\nAI"
] | [] | Given a prefix (context), open-ended generation aims to decode texts that are coherent, which don't abruptly drift from previous topics, and informative, which don't suffer from undesired repetitions. In this paper, we propose Look-back, an improved decoding algorithm that leverages the Kullback-Leibler divergence to track the distribution distance between current and historical decoding steps. Thus Look-back can automatically predict potential repetitive phrases and topic drift, and remove tokens that may cause these failure modes, restricting the next-token probability distribution within a plausible distance to the history. We perform decoding experiments on document continuation and story generation, and demonstrate that Look-back is able to generate more fluent and coherent text, outperforming other strong decoding methods significantly in both automatic and human evaluations. | 10.48550/arxiv.2305.13477 | [
"https://export.arxiv.org/pdf/2305.13477v1.pdf"
] | 258,841,101 | 2305.13477 | 762d46a1ad88d7ed08a91146c20200831388e0d3 |
Look-back Decoding for Open-Ended Text Generation
Nan Xu
♠ Meta
University of Southern California
AI
Chunting Zhou
♠ Meta
University of Southern California
AI
♠
♠ Meta
University of Southern California
AI
Asli Celikyilmaz [email protected]
♠ Meta
University of Southern California
AI
Xuezhe Ma [email protected]
♠ Meta
University of Southern California
AI
Look-back Decoding for Open-Ended Text Generation
Given a prefix (context), open-ended generation aims to decode texts that are coherent, which don't abruptly drift from previous topics, and informative, which don't suffer from undesired repetitions. In this paper, we propose Look-back, an improved decoding algorithm that leverages the Kullback-Leibler divergence to track the distribution distance between current and historical decoding steps. Thus Look-back can automatically predict potential repetitive phrases and topic drift, and remove tokens that may cause these failure modes, restricting the next-token probability distribution within a plausible distance to the history. We perform decoding experiments on document continuation and story generation, and demonstrate that Look-back is able to generate more fluent and coherent text, outperforming other strong decoding methods significantly in both automatic and human evaluations.
Introduction
Despite their impressive success in generating fluent and accurate sentences for low-entropy tasks such as summarization or translation, large-scale language models (LLMs) still suffer from serious degeneration problems, such as undesired repetitions (Holtzman et al., 2019) and unnatural topic drifts, under open-ended settings (Eikema and Aziz, 2020). Open-ended neural text generation aims to generate coherent and diverse text from LLMs, given a contextual prefix (Nadeem et al., 2020; Dhamala et al., 2022), and has spawned a wide range of natural language applications, including contextual text completion (Radford et al., 2019), story generation (Fan et al., 2018), and review generation.
To alleviate the degeneration problem in open-ended text generation, a number of techniques have emerged over recent years, which can be categorized into two directions: i) improved learning,
proposing new learning objectives, e.g., unlikelihood training (Welleck et al., 2019), contrastive training (Su et al., 2022) and sequence likelihood calibration (Zhao et al., 2022), to compensate for the rooted deficiency of conventional Maximum Likelihood Estimation (MLE), for which the correlation between sequence probability and quality can be low; ii) improved decoding, remedying tedious and repetitive generations in decoding search (Su et al., 2022; Li et al., 2022), or combating topic drifts in sampling procedures (Hewitt et al., 2022). In this work, we propose a new decoding algorithm, named Look-back, which pays particular attention to the probability distribution disparity between continuation and history text. Unlike contrastive search (Su et al., 2022; Su and Xu, 2022), which uses cosine similarity between hidden representations, Look-back leverages the Kullback-Leibler (KL) divergence to track the distribution distance between current and historical decoding steps. The main motivation of Look-back is that KL divergence defines a distance between the probability distributions of decoding steps, which arguably better aligns with decoding practice. As shown in Figure 1 (a), as the greedy algorithm repeatedly outputs the same sentence, the distance to the closest past token distribution decreases towards 0. Besides, when the continuation switches to another topic in Figure 1 (b), the distribution distance of the continuation to the prefix reaches much higher levels compared with topic-relevant human continuation. Based on these observations, for informative and coherent generation, the probability distribution should not be too close to history, to guarantee diversity, but relatively close to the prefix, to maintain coherence.
Experimentally, through two tasks of open-ended text generation, including document continuation and story generation, we demonstrate that Look-back outperforms a variety of open-ended decoding algorithms under different scales of pretrained LLMs (GPT2-XL and OPT-6.7B) by producing much more coherent texts (a high MAUVE score compared with human continuation and a high similarity score measured against the prefix) while maintaining a similar level of diversity.
Related Work
Improved Learning Algorithms. Yang et al. (2018) and Adiwardana et al. (2020) observed that increasing the number of candidates in beam search or sampling leads to worse quality of generated data. They attribute this to the predominant training objective (i.e., Maximum Likelihood Estimation), which might not accurately rank generated sequences by quality (Zhao et al., 2022). Besides, Holtzman et al. (2019) found that searching for the most probable sequences always results in short and repetitive texts, which further motivated recent efforts to improve generation via revised learning objectives. Welleck et al. (2019) proposed unlikelihood training to force unlikely generations to be assigned lower probability by the model. To alleviate degeneration, SimCTG (Su et al., 2022) introduced a contrastive training objective to preserve the sparseness of the token similarity matrix of the generated text. To avoid unintentionally boosting the probability of other irrelevant tokens in unlikelihood training, Jiang et al. (2022) leveraged contrastive token learning to explicitly teach the LM to assign negative tokens a lower probability than positive tokens through a more focused contrast between the two. Based on a BERTScore-style similarity metric between model decodes and targets measured in the model's latent space, Zhao et al. (2022) calibrated model-generated sequences with sequence likelihood calibration to better align with reference sequences via different types of losses (e.g., rank and margin loss).

Improved Decoding Algorithms
Prior work observed that search methods (e.g., greedy and beam) which optimize generation probabilities may result in tedious and repetitive outputs in open-ended text generation. Su et al. (2022) complemented contrastive training with contrastive search for decoding, which selects tokens that are more distinguishable from the previous context. Li et al. (2022) observed that degeneration is more prevalent in larger LMs than smaller ones, and proposed contrastive decoding to remove this undesired behavior by factoring the smaller LM's behavior out of the larger LM. On the other hand, truncation sampling methods such as nucleus (Holtzman et al., 2019) and typical (Meister et al., 2022) decoding improve sample quality with more diverse samples compared to direct sampling, but at the expense of poor coherence and undesired topic drift. Hewitt et al. (2022) introduced η-sampling to truncate words below an entropy-dependent probability threshold.

Without extra effort on fine-tuning LMs, the proposed Look-back improves the conventional search method with reference to the given prefix and prior generation, so that undesired repetitions and topic drifts can be explicitly alleviated.
Background
Open-ended Text Generation
Given a sequence of m tokens sampled from natural text $C = \{x_1 \ldots x_m\}$ as context or prefix, neural text generation aims to decode an n-token continuation using the probability distribution provided by pre-trained LMs:
$$p(x_{m+1:m+n} \mid C) = \prod_{t=1}^{n} p(x_{m+t} \mid C, x_{m+1}, \ldots, x_{m+t-1}),$$
where the continuation is generated token-by-token using a particular decoding strategy. For instance, the greedy algorithm selects the next token with the highest probability given the context, while nucleus sampling (Holtzman et al., 2019) restricts the plausible set to the smallest group of top tokens whose total probability mass exceeds a threshold.
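To make this setup concrete, below is a minimal sketch of token-by-token decoding with the greedy and nucleus strategies described above. It is illustrative only: `lm_logits` is a hypothetical wrapper that returns next-token logits for a 1-D tensor of token ids (e.g., a thin wrapper around a Hugging Face causal LM), not part of any library API.

```python
import torch

def decode(lm_logits, prefix_ids, n_tokens, strategy="greedy", top_p=0.95):
    """Generate an n_tokens continuation for prefix_ids (1-D LongTensor)."""
    ids = prefix_ids.clone()
    for _ in range(n_tokens):
        probs = torch.softmax(lm_logits(ids), dim=-1)
        if strategy == "greedy":
            next_id = probs.argmax()
        else:  # nucleus: sample from the smallest set with total mass >= top_p
            sorted_p, sorted_idx = probs.sort(descending=True)
            keep = sorted_p.cumsum(-1) - sorted_p < top_p  # top-1 always kept
            trunc = sorted_p.masked_fill(~keep, 0.0)
            next_id = sorted_idx[torch.multinomial(trunc / trunc.sum(), 1)]
        ids = torch.cat([ids, next_id.view(1)])
    return ids
```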
Degeneration Problems
There are two commonly observed degeneration problems in open-ended text generation: repetition and incoherence.

Table 1: Degeneration examples with typical decoding algorithms by GPT2-XL and GPT-3 (ada-001 and davinci-002). Complete sentence repetition (S1), repetition with minor location changes (S2) or paragraph duplication (S5) is marked in green, while unnatural (S3&S4) or stiff (S5) topic drifts are in pink.

Prefix: In addition to live broadcasts FIFA Fan Fests offer food and beverages, merchandise and various entertainment events by local and international artists. The start of 2006 World Cup was

Repetitive continuation:
S1 (GPT2-XL, greedy): marked by the FIFA Fan Fest in Sao Paulo, Brazil.\n\nThe FIFA Fan Fest is a unique event that brings together fans from all over the world to celebrate the FIFA World Cup. The FIFA Fan Fest is a unique event that brings together fans from...
S2 (davinci-002, greedy): celebrated with a concert by Bon Jovi in Berlin.\n\nThe FIFA Fan Fest in Munich was located at the Olympic Park.\n\nThe FIFA Fan Fest in Frankfurt was located at the Römerberg.\n\nThe FIFA Fan Fest in Hamburg was located at the...

Prefix: Burkan died from an attack of acute indigestion at his country home in Great Neck on June 6, 1936. Three thousand people attended his funeral at

Off-topic continuation:
S3 (ada-001, greedy): the local church.\n\nThe authorities are investigating how Khedira managed to enter the house and what role he played in the attack.
S4 (davinci-002, greedy): Temple Emanu-El in New York City... Category:1868 births\nCategory:1936 deaths\nCategory:Austro-...
S5 (ada-001, nucleus): aients home.\nThe Lorraine weekend\nIn house of intensity and occupation, great law enforcement officers\n... Shanny Bankecived his way into the home of Runaan U Without giving any reason other than to marines and punch said home's door...
Repetition. LLMs tend to overestimate the probability of repeated sequences (Welleck et al., 2019), especially for deterministic algorithms such as greedy and beam search. Although decoding algorithms such as nucleus sampling (Holtzman et al., 2019) have been proposed to interrupt repeating sequences, we can still observe repetitive and tedious continuations even from the state-of-the-art GPT-3 language model (Brown et al., 2020), as shown in Table 1. Besides the consensus that probabilities from conditional LMs often do not accurately rank-order generated sequences by quality (Zhao et al., 2022), a recent study provides a possible explanation of repetitive generation through an observed analogical sequence copying pattern: prefix matching, where the attention mechanism in transformer-based LMs attends back to previous tokens that were followed by the current and/or recent tokens, and copying, which outputs an increased logit for the attended-to token or others similar in embedding space (Olsson et al., 2022).

Incoherence. Sampling algorithms sacrifice coherence for alleviating repetition during decoding. As shown in Table 1, given probabilities from GPT-3 models, nucleus sampling fails to produce coherent generation, switching topic from Burkan's acute indigestion to Shanny's way home with ada-001 (S5). Recent decoding algorithms depend on model confidence to "guarantee" coherence while resolving repetition explicitly with certain heuristics. For example, SimCTG (Su et al., 2022) selects from the most probable candidates predicted by the LM, and contrastive decoding (Li et al., 2022) exploits the coherence nature of the expert LMs. In both S3 and S4 from Table 1, unfortunately, we find that the coherence hypothesis of pretrained LMs in prior work does not always hold in practice: powerful LMs are likely to produce incoherent sentences when rigorously following model confidence at each step with the greedy algorithm.

Algorithm 1: Look-back Decoding

Input: prefix C = {x_1 ... x_m}, language model with vocabulary V, beam size k, threshold α
Output: continuation G = {x_{m+1} ... x_{m+n}}

    G ← {}
    for t = m+1, ..., m+n do
        if KL^t_min ≤ α then                          ▷ alleviate repetitions
            for v ∈ V_k do
                q_v ← softmax(−KL^{t+1,v|C}_min)
            end for
            x_t ← v ∼ q_v                             ▷ improve coherence
        else
            x_t ← argmax_{v ∈ V} p_θ(v | x_{<t})
        end if
        G ← G ∪ {x_t}
    end for
Proposed Method: Look-back
As presented in Algorithm 1, Look-back first leverages probability distribution distance between current and prior steps to avoid repetitions ( §4.1), then incorporates reference from given prefix to mitigate topic drifts ( §4.2).
Alleviating Repetitions with Reference from Prior Texts
Signal for Surface or Semantic Repetitions. In the decoding process of open-ended text generation, one of the plausible tokens is selected/sampled according to model probability. Inspired by the decisive role of the probability distribution, we investigate measuring the distance between current and prior steps in distribution space via KL divergence:
$D_{\mathrm{KL}}(p_t \,\|\, p_{t'})$ for any $1 \le t' < t$.
As shown in the distance heatmap in Figure 2a, for steps generating identical tokens, the corresponding probability distributions stay closer to each other than those with dissimilar outputs.
Note that neither the contrastive training objective (SimCTG) (Su et al., 2022) nor its contrastive search decoding algorithm (Su and Xu, 2022) can be directly applied to LLMs such as GPT-3, whose hidden states are inaccessible. Fortunately, we can directly detect surface or semantic repetitions from GPT-3 by analyzing the available probability distribution: step pairs producing either identical tokens or tokens sharing similar semantic meaning are distinguishable by their distribution distance. Take Figure 2b as an instance: output token pairs from decoding steps with the closest probability distributions are the 1st and 2nd FAN, the cities Munich and Frankfurt, and the location tokens Olympic and R (of Römerberg).
As repetitive steps tend to stay extremely close to prior steps with similar outputs in probability distribution space, we calculate the probability distribution distance between the t-th step and the closest prior step as $\mathrm{KL}^{t}_{\min}$ for further analysis:

$$\mathrm{KL}^{t}_{\min} = \min_{1 \le j \le t-1} \mathrm{KL}\big(p(\cdot \mid x_{<t}) \,\big\|\, p(\cdot \mid x_{<j})\big)$$
As demonstrated in Figures 2c and 2d, values of $\mathrm{KL}^{t}_{\min}$ become flat as repetition-style degeneration advances (spikes in Figure 2d at later decoding steps correspond to multiple tokens representing one single location, e.g., ö, mer, berg for Römerberg in Figure 2b).
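A minimal sketch of how $\mathrm{KL}^{t}_{\min}$ can be computed during decoding, assuming the per-step next-token distributions are cached in a Python list as generation proceeds; the variable and function names are ours for illustration, not the authors' released code.

```python
import torch

def kl_divergence(p, q, eps=1e-10):
    # KL(p || q) for two 1-D probability tensors over the vocabulary.
    return torch.sum(p * (torch.log(p + eps) - torch.log(q + eps)))

def min_kl(curr_probs, cached_probs):
    # Distance from the current step to the closest cached step: pass history
    # distributions for KL^t_min (Sec. 4.1), or prefix distributions for
    # KL^{t|C}_min (Sec. 4.2).
    if not cached_probs:
        return float("inf")
    return min(float(kl_divergence(curr_probs, q)) for q in cached_probs)
```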
Alleviating Repetitions
Since identical or similar repetition patterns can be forecast via probability distribution analysis, Look-back attempts to avoid repetitive sentences or phrases prior to actual generation. Practically, when $\mathrm{KL}^{t}_{\min}$ falls below a pre-defined threshold α, an alarm is triggered and Look-back attempts to sample a token from the top-k most probable tokens of the vocabulary V rather than sticking to the top-1 token:

$$x_t = \begin{cases} v \sim \mathrm{Unif}(V_k), & \text{if } \mathrm{KL}^{t}_{\min} \le \alpha \\ \operatorname{argmax}_{v \in V} p_\theta(v \mid x_{<t}), & \text{otherwise} \end{cases}$$
where $V_k$ is the set of the top-k most probable tokens from the vocabulary V. Since a step identified as having a high possibility to repeat may not necessarily lead to undesired repetitions, we deliberately do not exclude its most probable token from the plausible candidate set, in order to avoid such false positives.
Improving Coherence with Reference from Given Prefix
Signal for Topic Drift. In open-ended generation, in order to produce sentences coherent with the given prefix, the decoding algorithm is required to provide further elaboration of the major topic conveyed in the prefix. According to the prior observations (e.g., Munich and Frankfurt in Figure 2b), decoding steps with tokens sharing similar semantic meaning are close to each other with respect to probability distribution distance. Therefore, we explore the KL divergence between the current step and the m prefix steps, which should keep to the same topic:

$$\mathrm{KL}^{t|C}_{\min} = \min_{1 \le j \le m} \mathrm{KL}\big(p(\cdot \mid x_{<t}) \,\big\|\, p(\cdot \mid x_{<j})\big)$$
When comparing the distribution distance to the same prefix for incoherent generation and natural continuation, the probability distribution divergence stays at a much higher level for generation with obvious topic drift, as shown in Figure 2e.
Improving Coherence. When the model is prone to produce repetitive tokens, one straightforward solution for avoiding repetition is to randomly sample from the top-k plausible tokens. However, this is likely to result in unnatural topic drift due to the accumulation of undesired sampling choices over long-sequence decoding, which is frequently observed in sampling algorithms (Eikema and Aziz, 2020; Maynez et al., 2020). On the other side, the probability distribution distance between the current step and the prefix is able to distinguish whether the generation is on-topic or not. Therefore, Look-back wisely samples from the plausible candidates according to their influence on coherence, reflected by the next-step distribution distance to the prefix:

$$\mathrm{KL}^{t+1,v|C}_{\min} = \min_{1 \le j \le m} \mathrm{KL}\big(p(\cdot \mid x_{<t}, v) \,\big\|\, p(\cdot \mid x_{<j})\big)$$

$$x_t = \begin{cases} v \sim \mathrm{softmax}(-\mathrm{KL}^{t+1,v|C}_{\min}), & \text{if } \mathrm{KL}^{t}_{\min} \le \alpha \\ \operatorname{argmax}_{v \in V} p_\theta(v \mid x_{<t}), & \text{otherwise} \end{cases}$$
where tokens with a larger next-step distance to the prefix are less likely to be sampled, given the softmax operation upon the negative KL divergence.
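Putting the two components together, the following is a hedged sketch of one decoding step of Algorithm 1. It reuses the hypothetical `lm_logits` wrapper and the `min_kl` helper sketched earlier; a real implementation would batch the k look-ahead forward passes and cache distributions, which we omit for clarity.

```python
import torch

def lookback_step(lm_logits, ids, prefix_probs, hist_probs, k=8, alpha=0.6):
    probs = torch.softmax(lm_logits(ids), dim=-1)
    if min_kl(probs, hist_probs) > alpha:
        return int(probs.argmax())            # no repetition alarm: greedy
    # Repetition alarm: score each top-k candidate v by the KL divergence of
    # the *next* step's distribution to the closest prefix step, then sample
    # with softmax(-KL) so candidates closer to the prefix are preferred.
    top_ids = probs.topk(k).indices
    dists = []
    for v in top_ids:
        next_probs = torch.softmax(lm_logits(torch.cat([ids, v.view(1)])),
                                   dim=-1)
        dists.append(min_kl(next_probs, prefix_probs))
    weights = torch.softmax(-torch.tensor(dists), dim=-1)
    return int(top_ids[torch.multinomial(weights, 1)])
```

The caller is expected to pre-fill `prefix_probs` with the distributions from a forward pass over the prefix and to append `probs` to `hist_probs` after each step.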
Experiments
In this section, we first introduce the datasets ( §5.1) and automatic metrics ( §5.2) used to evaluate the generation quality of the proposed Look-back and other strong decoding baselines ( §5.3). We then analyze experimental results evaluated by automatic metrics ( §5.5) and human evaluators ( §5.6). Lastly, we show effectiveness of different techniques used in Look-back through detailed analyses ( §5.7).
Datasets
We consider two applications of open-ended text generation: 1) document continuation on WikiText-103, with articles fitting the Good or Featured article criteria specified by editors on Wikipedia (Merity et al., 2016), and 2) story generation on WritingPrompts, a challenging task that asks for continuations inspired by abstract, high-level story prompts submitted by online users, with continuations freely written by others on Reddit (Fan et al., 2018).
Evaluation Metrics
We adopt the following automatic metrics to evaluate generation quality:
Repetition. We use rep-n to measure sequence-level repetition according to the portion of duplicate n-grams (Welleck et al., 2019). For a sequence x,

$$\text{rep-}n = 1.0 - \frac{|\text{unique } n\text{-grams}(x)|}{|\text{total } n\text{-grams}(x)|}.$$

Diversity. Following Su et al. (2022), we obtain an overall assessment of model repetition by considering repetition at different n-gram levels:

$$\text{diversity} = \prod_{n=2}^{4} \big(1.0 - \text{rep-}n\big).$$

MAUVE. By computing information divergences in a quantized embedding space (we use GPT2-XL for text sequence embedding), MAUVE (Pillutla et al., 2021) directly compares the learnt distribution from a text generation model to the distribution of human-written continuation.

Coherence. The semantic coherence between prefix and continuation is measured as the cosine similarity between their sentence embeddings represented by SimCSE (Gao et al., 2021).
Results measured by all metrics range from 0 to 1, and higher scores indicate better generation except rep-n, for which the lower the better.
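For reference, the rep-n and diversity metrics reduce to a few lines; here is a sketch assuming the continuation is given as a list of token strings (MAUVE and coherence require their respective packages and are omitted).

```python
def rep_n(tokens, n):
    # Portion of duplicate n-grams: 1 - |unique n-grams| / |total n-grams|.
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(ngrams)) / max(len(ngrams), 1)

def diversity(tokens):
    # Product of (1 - rep-n) over n = 2, 3, 4.
    d = 1.0
    for n in (2, 3, 4):
        d *= 1.0 - rep_n(tokens, n)
    return d
```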
Decoding Baselines
Given pretrained LMs with conventional MLE, we evaluate Look-back together with various decoding algorithms for fair comparisons.
Search Methods
We consider the competitive contrastive search proposed in SimCTG (Su et al., 2022), which predicts the next token based on both the output distribution and representation similarities between candidates and past tokens. We disregard greedy and beam search, as they kept producing repetitive phrases/sentences in prior studies (Welleck et al., 2019; Holtzman et al., 2019).
Sampling Methods Nucleus sampling (Holtzman et al., 2019) samples the next token from the top-p portion of the probability mass. Typical decoding (Meister et al., 2022) samples from the set of words whose negative log-probabilities are close to the conditional entropy. η-sampling (Hewitt et al., 2022) truncates any word whose probability is smaller than an entropy-based threshold.
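For illustration, the truncation rules of these sampling baselines can be sketched as masks over the next-token distribution. This follows our reading of the cited papers and the hyperparameters in §5.4, and is not the reference code.

```python
import torch

def typical_mask(probs, tau=0.92):
    # Typical decoding: keep the tokens whose surprisal is closest to the
    # conditional entropy, adding them until their total mass reaches tau.
    entropy = -(probs * torch.log(probs + 1e-10)).sum()
    scores = (-torch.log(probs + 1e-10) - entropy).abs()
    order = scores.argsort()                      # most "typical" first
    cutoff = int((probs[order].cumsum(-1) < tau).sum()) + 1
    keep = torch.zeros_like(probs, dtype=torch.bool)
    keep[order[:cutoff]] = True
    return keep

def eta_mask(probs, epsilon=3e-4):
    # Eta-sampling: keep tokens above an entropy-dependent threshold,
    # eta = min(epsilon, sqrt(epsilon) * exp(-entropy)).
    entropy = -(probs * torch.log(probs + 1e-10)).sum()
    eta = min(epsilon, (epsilon ** 0.5) * float(torch.exp(-entropy)))
    return probs > eta
```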
Implementation Details
We randomly sample 1,000 instances from the original training data of WikiText-103 and WritingPrompts as our validation and test sets. Given the beginning several tokens as prefix (the first 32 tokens for WikiText-103, and the original prompts for WritingPrompts), we generate 256 tokens with different decoding algorithms and disregard those after the end-of-text token during evaluation. We perform experiments with pre-trained LMs from different families and scales: GPT2-XL (Radford et al., 2019) and OPT-6.7B (Zhang et al., 2022). The same set of hyperparameters is used to decode from different LMs: the beam size for beam search is 10, p = 0.95 for nucleus, τ = 0.92 for typical, and η = 0.0003 for η-sampling. We follow the recommended ranges k = {5, 8, 10} and α ∈ [0.5, 0.9] in SimCTG and select the setting based on MAUVE scores on the validation set. For Look-back, the range of the candidate amount k is {5, 8, 10} and the threshold α lies in [0.5, 1.6]. We select hyperparameters that result in the rep-2 score closest to the human one and the optimal MAUVE performance on the validation set.
Results
In Table 2, we show the performance of different decoding algorithms as well as natural human continuation evaluated by automatic metrics. On both datasets, Look-back consistently achieves the highest MAUVE scores and coherence scores, which indicates that the generation of Look-back has high semantic similarity with human continuations while staying relevant to the given prefixes. Meanwhile, Look-back is capable of producing texts with similar repetition and diversity level as the natural human text, which implies the fluency and informativeness of the generated text. We also notice that generations from all decoding algorithms obtain relatively low MAUVE and coherence scores on WritingPrompts. This is because the given prefixes are abstract and the human written references are diverse and varied, which results in low coherence and MAUVE w.r.t. various model continuations.
Human Evaluation
To further evaluate the quality of generated texts, we randomly sample two sets of 50 examples from WikiText-103 to produce prefixes for GPT2-XL and OPT-6.7B respectively, and generate continuations from them. Then, we ask 3 evaluators to compare the generated continuations of Look-back and the second-best baseline SimCTG in two dimensions: 1) fluency: diverse and natural content without repeated words, phrases or sentences; 2) coherence: well-organized and easy to follow, consistent with the topics presented in the human-written prefix without abrupt topic drifts. We ask annotators to choose one out of three options: the 1st continuation is better, the 2nd is better, or the two are of the same quality. As presented in Table 3, for both evaluation dimensions, the content generated by Look-back is preferred or marked as equally good by evaluators around or more than 70% of the time compared with the baseline, which aligns well with the automatic metrics in Table 2.
Further Analyses
In this section, we analyze the effectiveness of different techniques used by Look-back individually.
Analyzing Probability Distribution Distance.
To verify whether decoding with Look-back appropriately constrains the probability distribution distance to past steps, we compare $\mathrm{KL}^{t}_{\min}$ to history and $\mathrm{KL}^{t|C}_{\min}$ to prefix for degeneration and different decoding algorithms in Figure 3, and leave more results in Appendix Figure 5. Although all improved decoding algorithms keep their distance to the historical probability distributions to avoid repetitions compared with the greedy algorithm (Repetitive in Figure 3(a)), the probability distribution of Look-back (Look-back in Figure 3(b)) is much closer to the given prefix, which distinguishes it from off-topic continuation compared with the other algorithms.
Softmax vs. Uniform. According to the softmax operation on $\mathrm{KL}^{t+1,v|C}_{\min}$ introduced in §4.2, the closer the next step's probability distribution is to the prefix, the more likely the corresponding plausible token is selected, which avoids undesired topic drift compared with random sampling. In Table 4, we empirically investigate the impact of plausible token sampling, uniform vs. softmax, on generation quality and find that Look-back significantly enhances coherence on both datasets compared with random sampling. Although diversity drops with distribution distance-guided sampling in Look-back, both sampling strategies produce similar levels of diverse content as the human texts listed in Table 2.
Effects of Candidate Amount and Threshold α. In §4.1, the hyperparameter α determines whether the current step is likely to produce repetitive continuation, while k restricts the range of plausible token candidates. The second-best baseline SimCTG has a similar candidate amount parameter k and an α to balance model confidence and degeneration penalty. When GPT2-XL is used to decode with Look-back and SimCTG on WikiText-103, we visualize the impact of the hyperparameters on generation quality in Figure 4 (refer to Appendix Figure 6 for other settings). The α in Look-back is different from that in SimCTG, but both control reliance on model confidence: a larger α indicates the most probable token is less likely to be adopted, hence more diversity is obtained. We also observe that for Look-back, the relevance of generated text to the prefix (high coherence) and to human continuation (high MAUVE) is much more robust to various hyperparameter values compared with SimCTG.
Case Study
Given a prefix sampled from WikiText-103, we present truncated human continuations as well as generations from Look-back and SimCTG in Table 5 and leave more examples in Appendix Table 6. The prefix talks about the design of a racing game. Both the human and Look-back continuations focus on describing major difficulties encountered during the game design, while SimCTG switches to a different topic by pointing to an online introduction of the game in the second half of its continuation. Interestingly, Look-back explains how the team of more than twenty people was formed, which is coherent with the topic in the prefix.

Table 5: Case study of an instance sampled from WikiText-103 with GPT2-XL. The continuations of both human and Look-back discuss difficulties in game design, while SimCTG gradually produces less informative sentences with a slight topic drift to game introduction (in pink). Refer to Table 6 for more examples.

Prefix: Ridge Racer Revolution was developed over eight months by a team of more than twenty people, most of whom joined just for Ridge Racer Revolution. The biggest difficulties were

Human: the link-up mode, rear-view mirror, and running the game at high speeds. Each member had worked on other console games, and they commented that the graphics detail showed the improvements in skill and technique. The team wanted players to enjoy a more in-depth game than the original, so rather than develop a port of Ridge Racer 2, as many features as possible were added. The early designs for the new courses were made...

SimCTG: balancing the game, which had to be a mix of arcade and racing elements, along with making an open world that felt like the future of video games.\n\nIn order to do this, we used Unreal Engine 3, the game engine that powers ... You can learn more about the game by clicking here, but be warned, there are spoilers in this article. If you're planning on reading this article, I suggest you stop reading now before it spoils the game for you...

Look-back: the lack thereof: the original game was built in a single year; Ridge Crash took more. The original developers were all gone, and the original team of programmers, artists, and designers from Ridge Revolution, including the lead programmer at Capcom, had all left the company by 2007...In the end, a new team of twenty-five employees was assembled for Revolution, who took nearly two years to complete the game. In all, this team of more...
Conclusion
The distance between output distributions signals potential failure modes of text generation, such as dull repetition and incoherence. We propose Look-back, a novel decoding algorithm that utilizes the KL divergence between the current and historic decoding steps to steer the output distribution into a plausible subspace. Look-back can generate higher-quality text and outperforms several strong decoding algorithms in both automatic and human evaluation. However, KL divergence may not be the optimal measure for text output distributions, and we leave the investigation of other measures to future work. In addition, the idea proposed in this work can also be used in other specialized constrained decoding scenarios, such as preventing hallucination.
Limitations
We discuss the limitations of our work as follows:
• Look-back penalizes next tokens that result in low KL divergence with historic output distributions. However, we can not explicitly distinguish if such tokens are natural or undesired repetitions. This may lead to aggressive eliminations of possible outputs. We leave the distinction of different repetitions to future work.
• Look-back tends to show a higher bi-gram repetition score than other decoding methods because it encourages the coherence with prefix text at each decoding step. As we use a short prefix text following previous evaluation protocol, which might not be sufficiently informative, we will adopt a more comprehensive evaluation setup in the future or prepend relevant text in the beginning at decoding time.
• Most of our evaluations rely on automatic metrics, such as MAUVE scores. However, we found that these metrics may not truthfully reflect the quality of text, for example, MAUVE score is sensitive to the choice of sentence embedding models.
In general, open-ended text generation still poses a great challenge to the development of NLG algorithms.
A Appendix
In this section, we mainly show results from GPT2-XL and OPT-6.7B on WikiText-103 and WritingPrompts that haven't been covered in the main paper.

WikiText-103

Prefix: A new show in London in 1912 showcased the best of music hall's talent. The Royal Command Performance took place at the Palace Theatre in London, which was

Human: managed by Alfred Butt. The show was organised by Oswald Stoll, an Australian impresario who managed a string of West End and provincial theatres. Stoll, although a fan of Lloyd's, disliked the vulgarity of her act and championed a return to a more family @-@ friendly atmosphere within the music hall. Because of this, and her participation in the earlier music hall war, Stoll left her out of the line @-@ up. He placed an advert in The Era on the day of the performance warning that " Coarseness and vulgarity etc are not allowed...

SimCTG: one of the most popular theatres in the West End at the time. Start the conversation, or Read more at BroadwayWorld.com.</s>I have been waiting for this. Thank you for putting it together. You should cross post to /r/blunderyears as well...

Look-back: the home of the Royal Commandos during the First World War. The show starred the likes of Harry Lauder, who played the role he was born to play, 'The King in Yellow', and Fred Karno -who, as the 'King', was the star attraction. It was a huge success, and the Royal Variety Performance took its spot in the calendar. It's a tradition that continues to this day -and the King in Yellow is still a big draw at any show...

WritingPrompts (GPT2-XL)
Prefix: The world is teetering on the brink of destruction . Only 2 people remain alive , both of them old ladies . They are face to face with the eldritch creature that has singlehandedly destroyed their planet . And in its terrifying presence ... they make cheeky small talk about what brought them here .
Human: FADE IN : EXT . A DESOLATE LANDSCAPE -AFTERNOON *A red sky covered with black clouds hangs above an expanse of blasted countryside . Dark craters pockmark the land , and the corrupted hulks of destroyed war machines lie rusting in the acrid air . Amidst this desolation , two elderly women sit on rocking chairs that are curiously undamaged . These are DELILAH and TABITHA . A small table with a tea set is present between them . * **DELILAH : ** It 's a bit hot today , is n't it ?...
SimCTG: <|endoftext|>A group of high school students in New York's Nassau-Suffolkshire school district are demanding to be excused from participating in "Star Wars," "Harry Potter" and Disney movies because it's the holiday season...
Look-back : \n\nThe two old ladies, who were not named in either the book or film, had been visiting from the future when their ship was destroyed. They are not aware of this, but one has become a time traveler from Earth and knows that they were never meant be together in life but are now destined by fate together. One day they will meet again at last on some other planet where one of each woman will be born with powers over time which will allow them each one chance to save the world, which is now destined to fall into the hands of their enemy. One day both are going in the same direction. And one woman has decided that she needs to see to it that they both get there first!...
WritingPrompts (OPT-6.7B)
Prefix: For the first time ever , a person is born with a genuine superpower . They proceed to live out their entire life without noticing or realizing it .
Human: For Francis , the good had always gone hand-in-hand with the bad . He had spent his life trying his best to be a good person , or at least what he believed would be a good person . He realized now that he was n't really sure anymore . When his mother left his dad , Francis had chosen to stay with him . His friends had all told him he was smart to stay with the rich parent , but that had n't been the reason , had it ? At the time , he had told himself that his father needed him more . He did . Dad had been really bad at taking care of himself . Without his mother , someone had to be there to deal with dad when he was lonely and angry and ...
SimCTG: What if they have superpowers in real life?\nI'm going to need proof of this before I believe it. You know, for science. It's true. I saw it on the internet. So it must be true. </s>How to Get Rid of Pimples on Forehead...

Look-back: The person is born with the ability not necessarily in the form of super strength or super speed (although they may have those abilities as well) but the superpower is the ability to see the future! They see the outcomes in their life and they live their days according to the outcome they see for themselves! In other words, they plan their life out according how their futures will unfold according. They see what will happen and they do what they can so that it will come. And because they live their whole live without realizing they have this ability they are able live life to the fullest. Because the future is already set for them, their entire life unfolds according. Their whole life, from the moment of their birth, unfolds according as planned in their mind...

Table 6: Case study of instances sampled from WikiText-103 and WritingPrompts. Unnatural topic drifts are frequently observed in generations from SimCTG (in pink).
Figure 1: Maximum similarity of hidden states and normalized minimum KL divergence between the current step and history (a) or prefix (b) from GPT2 on 1,000 instances of WikiText-103. Compared with human continuation, (a) repetition has much smaller minKL but indistinguishably high maxHidden with history text, while (b) pseudo topic drift, created by switching to the continuation of another instance, has much higher minKL but similarly high maxHidden with prefix text.

Figure 2: Probability distribution distance of GPT2-XL measured by KL divergence for the repetitive (a-d) and off-topic (e) continuations presented in Table 1. (a) and (b): dark cells along the diagonal indicate that steps with small distance to history tend to produce repetitive tokens. (c) and (d): compared with human continuation, the minimum distribution distance to the past gradually approaches 0 (red curves) as similar phrases keep repeating during decoding. (e): the distribution distance to the prefix of incoherent continuations by davinci-002 (greedy) and ada-001 (nucleus) (green and blue curves) is prone to stay farther from the given prefix as decoding proceeds.

Figure 3: Minimum KL divergence between the current step and (a) history or (b) prefix from GPT2-XL decoded by different algorithms on the test set of WikiText-103. The probability distribution of Look-back keeps distance to history to avoid repetitions but stays close to the prefix to guarantee coherence.

Figure 4: Impact of decoding hyperparameters on the validation set of WikiText-103. Compared with the other search algorithm SimCTG (1st column), Look-back (2nd column) keeps relatively higher MAUVE and coherence scores regardless of the plausible token amount k and the $\mathrm{KL}^{t}_{\min}$ threshold α. See Figure 6 for more results in other settings.

Figure 5: Minimum KL divergence between the current step and history or prefix from GPT2-XL and OPT-6.7B decoded by different algorithms on test data of WikiText-103 and WritingPrompts.

Figure 6: Impact of decoding hyperparameters on the validation sets of WikiText-103 and WritingPrompts.
Table 2 layout: for each LM and decoding method, the first six columns report WikiText-103 and the last six report WritingPrompts (rep-2↓ / rep-3↓ / rep-4↓ / diversity↑ / MAUVE↑ / coherence↑).

LM | Decoding | rep-2 | rep-3 | rep-4 | diversity | MAUVE | coherence | rep-2 | rep-3 | rep-4 | diversity | MAUVE | coherence
- | human | 6.91 | 1.83 | 0.70 | 0.91 | - | 0.62 | 15.61 | 3.78 | 1.24 | 0.80 | - | 0.31
GPT2-XL | nucleus | 5.29 | 1.97 | 1.42 | 0.92 | 0.69 | 0.53 | 5.40 | 2.41 | 1.72 | 0.91 | 0.22 | 0.34
GPT2-XL | typical | 3.61 | 1.07 | 0.73 | 0.95 | 0.70 | 0.50 | 3.60 | 1.51 | 1.10 | 0.94 | 0.19 | 0.30
GPT2-XL | η-sampling | 6.25 | 2.49 | 1.80 | 0.90 | 0.68 | 0.55 | 6.17 | 2.88 | 2.16 | 0.89 | 0.17 | 0.35
GPT2-XL | SimCTG | 5.37 | 1.97 | 1.46 | 0.91 | 0.72 | 0.53 | 2.84 | 0.36 | 0.19 | 0.97 | 0.18 | 0.31
GPT2-XL | Look-back | 8.22 | 1.34 | 0.38 | 0.90 | 0.81 | 0.65 | 7.94 | 1.25 | 0.33 | 0.91 | 0.24 | 0.52
OPT-6.7B | nucleus | 6.08 | 2.19 | 1.43 | 0.91 | 0.63 | 0.56 | 5.82 | 3.12 | 2.57 | 0.89 | 0.13 | 0.33
OPT-6.7B | typical | 6.58 | 2.25 | 1.37 | 0.90 | 0.61 | 0.57 | 5.80 | 2.67 | 1.93 | 0.90 | 0.14 | 0.33
OPT-6.7B | η-sampling | 6.07 | 2.26 | 1.55 | 0.90 | 0.66 | 0.56 | 4.72 | 1.93 | 1.36 | 0.92 | 0.15 | 0.34
OPT-6.7B | SimCTG | 5.44 | 1.97 | 1.38 | 0.91 | 0.56 | 0.55 | 7.49 | 4.25 | 3.10 | 0.86 | 0.08 | 0.20
OPT-6.7B | Look-back | 9.21 | 1.74 | 0.53 | 0.89 | 0.80 | 0.65 | 9.77 | 2.18 | 0.74 | 0.88 | 0.19 | 0.43
Table 2: Automatic evaluation results of different decoding algorithms for document continuation and story generation. Continuation generated by Look-back is of a similar level of diversity as human texts while much more relevant to the prefix (highest coherence) and semantically similar to human continuation (highest MAUVE).
Table 3: Human evaluation on generations from Look-back and the second-best SimCTG with examples sampled from WikiText-103 (criteria reported per LM: Look-back better / same / SimCTG better).
Table 4: Effects of probability distribution-guided sampling of Look-back (Softmax) on generation quality. With a similar level of diverse content as human text, Look-back samples according to the softmax of negative distribution distance to the prefix, leading to improved coherence compared with Uniform.
References

Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.

Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, and Jianfeng Gao. 2019. Towards coherent and cohesive long-form text generation. In Proceedings of the First Workshop on Narrative Understanding, pages 1-11, Minneapolis, Minnesota. Association for Computational Linguistics.

Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, and Aram Galstyan. 2022. An analysis of the effects of decoding algorithms on fairness in open-ended language generation. arXiv preprint arXiv:2210.03826.

Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? The inadequacy of the mode in neural machine translation. arXiv preprint arXiv:2005.10283.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

John Hewitt, Christopher D. Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. arXiv preprint arXiv:2210.15191.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.

Shaojie Jiang, Ruqing Zhang, Svitlana Vakulenko, and Maarten de Rijke. 2022. A simple contrastive learning objective for alleviating neural text degeneration. arXiv preprint arXiv:2205.02517.

Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2022. Contrastive decoding: Open-ended text generation as optimization. arXiv preprint arXiv:2210.15097.

Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890-2903, Dublin, Ireland. Association for Computational Linguistics.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.

Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Typical decoding for natural language generation. arXiv preprint arXiv:2202.00666.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.

Moin Nadeem, Tianxing He, Kyunghyun Cho, and James Glass. 2020. A systematic characterization of sampling algorithms for open-ended language generation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 334-346, Suzhou, China. Association for Computational Linguistics.

Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. 2022. In-context learning and induction heads. arXiv preprint arXiv:2209.11895.

Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. Advances in Neural Information Processing Systems, 34:4816-4828.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. arXiv preprint arXiv:2202.06417.

Yixuan Su and Jialu Xu. 2022. An empirical study on contrastive search and contrastive decoding for open-ended text generation. arXiv preprint arXiv:2211.10797.

Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319.
Breaking the beam search curse: A study of (re-)scoring methods and stopping criteria for neural machine translation. Yilin Yang, Liang Huang, Mingbo Ma, 10.18653/v1/D18-1342Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsYilin Yang, Liang Huang, and Mingbo Ma. 2018. Break- ing the beam search curse: A study of (re-)scoring methods and stopping criteria for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3054-3059, Brussels, Belgium. Associa- tion for Computational Linguistics.
. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, arXiv:2205.01068Mona Diab, Xian Li, Xi Victoria LinarXiv preprintet al. 2022. Opt: Open pre-trained transformer language modelsSusan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
Calibrating sequence likelihood improves conditional language generation. Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, Peter J Liu, arXiv:2210.00045arXiv preprintYao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. 2022. Calibrating sequence likelihood improves conditional language generation. arXiv preprint arXiv:2210.00045.
| [] |
[
"Enhancing Accuracy and Robustness through Adeversarial Training in Class Incremental Continual Learning",
"Enhancing Accuracy and Robustness through Adeversarial Training in Class Incremental Continual Learning"
] | [
"Minchan Kwon \nGraduate School of AI\nKAIST\nDeajeonRepublic of Korea\n",
"Kangil Kim [email protected] \nAI Graduate School, GIST\nGwangjuRepublic of Korea\n"
] | [
"Graduate School of AI\nKAIST\nDeajeonRepublic of Korea",
"AI Graduate School, GIST\nGwangjuRepublic of Korea"
] | [] | In real life, adversarial attack to deep learning models is a fatal security issue. However, the issue has been rarely discussed in a widely used classincremental continual learning (CICL). In this paper, we address problems of applying adversarial training to CICL, which is well-known defense method against adversarial attack. A well-known problem of CICL is class-imbalance that biases a model to the current task by a few samples of previous tasks. Meeting with the adversarial training, the imbalance causes another imbalance of attack trials over tasks. Lacking clean data of a minority class by the class-imbalance and increasing of attack trials from a majority class by the secondary imbalance, adversarial training distorts optimal decision boundaries. The distortion eventually decreases both accuracy and robustness than adversarial training. To exclude the effects, we propose a straightforward but significantly effective method, External Adversarial Training (EAT) which can be applied to methods using experience replay. This method conduct adversarial training to an auxiliary external model for the current task data at each time step, and applies generated adversarial examples to train the target model. We verify the effects on a toy problem and show significance on CICL benchmarks of image classification. We expect that the results will be used as the first baseline for robustness research of CICL. | 10.48550/arxiv.2305.13678 | [
"https://export.arxiv.org/pdf/2305.13678v1.pdf"
] | 258,841,210 | 2305.13678 | 0eb7da5dc9ae9c6ce0ebc7d4149c6344271c76e2 |
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
Minchan Kwon
Graduate School of AI
KAIST
DeajeonRepublic of Korea
Kangil Kim [email protected]
AI Graduate School, GIST
GwangjuRepublic of Korea
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
In real-world deployments, adversarial attacks on deep learning models are a serious security issue, yet the issue has rarely been discussed in the widely used class-incremental continual learning (CICL) setting. In this paper, we address the problems of applying adversarial training (AT), a well-known defense against adversarial attacks, to CICL. A known problem of CICL is class imbalance, which biases a model toward the current task because only a few samples of previous tasks are available. Combined with adversarial training, this imbalance causes a secondary imbalance of attack trials over tasks. Lacking clean data for the minority classes due to class imbalance, and facing an increased number of attacks from the majority classes due to the secondary imbalance, adversarial training distorts the optimal decision boundaries. The distortion eventually yields lower accuracy and robustness than standard adversarial training. To exclude these effects, we propose a straightforward but significantly effective method, External Adversarial Training (EAT), which can be applied to any method using experience replay. EAT adversarially trains an auxiliary external model on the current task data at each time step and uses the generated adversarial examples to train the target model. We verify the effects on a toy problem and show their significance on CICL image classification benchmarks. We expect the results to serve as the first baseline for robustness research in CICL.
Introduction
Deep learning has achieved remarkable performance in various fields of computer vision. However, it remains vulnerable to adversarial attacks, which add minuscule perturbations to an image that are almost imperceptible to the human eye but cause the model to make incorrect predictions. This makes adversarial attacks a major concern for researchers, as they pose a significant security risk when deep learning is applied in real-world scenarios. Therefore, developing defenses against adversarial attacks, as well as methods for launching them, has become a focus of research in the field.
Despite the significance of continual learning (CL) in real-world applications of deep learning, there has been limited research on adversarial attacks and defenses in this context. CL examines how models can effectively learn from a stream of continuous data. In our empirical analysis of the impact of attacks, we found that the class-incremental CL (CICL) setting is vulnerable to adversarial attacks. Furthermore, adversarial training (AT), the most widely used adversarial defense method, is ineffective in CICL settings. In contrast to the expected robustness gain and small clean-accuracy loss of AT on a single task, AT in class-incremental CL shows a larger drop in clean accuracy and only a small improvement in robustness. We argue that the cause of this problem is that class imbalance, an inherent property of CL, deepens the model-disturbing effect of AT.
To address these problems, we propose External Adversarial Training (EAT), an adversarial training method that creates adversarial examples free from the class-imbalance problem of CICL. EAT can be easily applied to any method using experience replay (ER), which includes the SOTA models. To the best of our knowledge, EAT is the most effective method for defending against adversarial attacks while maintaining clean accuracy. We verify and analyze these points on state-of-the-art and well-known rehearsal-based CICL methods on the split CIFAR-10 and split Tiny-ImageNet benchmarks.
In summary, our contributions are as follows.
• verifying AT is ineffective in CICL
• analyzing the causes of the problem based on attack overwhelming
• presenting a simple but effective EAT method to exclude the causes
• providing robustness baselines for several rehearsal-based methods

Background
Class-Incremental Continual Learning. Continual learning considers environments in which a model, called the target model, is trained to learn new tasks or classes in a sequential manner without forgetting previously learned tasks or classes.
This means that the model is continually exposed to new data and must adapt to the new information while retaining the knowledge gained from previous tasks. There are many different settings for continual learning; following recent CL literature [Mai et al., 2022; Cha et al., 2021; Buzzega et al., 2020], we consider the supervised class-incremental continual learning setting, where a model needs to learn new classes continually without task IDs. The stream D is a sequence of disjoint subsets whose union equals the whole training data, denoted {T_1, · · · , T_N}, where T_i is the subset, called a task, at the i-th time step. Each task is a set of input and ground-truth label pairs. Training in class-incremental continual learning has two constraints: 1) a target model, composed of an encoder and a single-head classifier, is shared over all tasks, and 2) the model learns from only one task at each time step without access to the other tasks. The single-head classifier uses all classes in D, not restricted to the classes of a task, which is a more challenging environment than settings that use task IDs or a separate classifier per task. In the CICL setting, the model suffers from class imbalance because the previous task data is inaccessible. A minimal sketch of such a task stream follows.
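The following Python sketch illustrates how a labelled dataset can be split into such a stream of disjoint tasks; the two-classes-per-task split and the ascending label order match the benchmark setup described later, while everything else is illustrative.

```python
def make_class_incremental_stream(dataset, classes_per_task=2):
    """Split labelled pairs (x, y) into disjoint tasks T_1..T_N by class id."""
    labels = sorted({y for _, y in dataset})        # ascending label order
    tasks = []
    for t in range(len(labels) // classes_per_task):
        cls = set(labels[t * classes_per_task:(t + 1) * classes_per_task])
        tasks.append([(x, y) for x, y in dataset if y in cls])
    return tasks  # e.g. CIFAR-10 -> 5 tasks of 2 classes each
```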
Rehearsal-based methods. Rehearsal-based methods, also known as replay-based methods, are a popular approach for addressing catastrophic forgetting in CL. These methods use a memory buffer composed of a small fraction of previous training samples to reduce forgetting of previously learned information. The most typical method in this category is Experience Replay (ER) [Mai et al., 2022; Chaudhry et al., 2019], which updates the network with training batches consisting of samples from both new and previous classes; ER is simple and effective, but it requires extra memory to store the replay buffer. DER/DERpp [Buzzega et al., 2020] improves the performance of ER by leveraging a distillation loss, and CO2L [Cha et al., 2021] improves performance through contrastive learning. These methods have been shown to be effective in reducing forgetting and improving performance in class-incremental CL scenarios.
Adversarial Attack. Adversarial examples/images were first introduced by [Szegedy et al., 2013]. These examples are modified versions of clean images that are specifically designed to confuse deep neural networks. Adversarial attacks are methods for creating adversarial examples, and they can be classified into various categories based on their goals and techniques. In this paper, we focus on white-box attacks, which assume knowledge of the model's parameters, structure, and gradients.
Fast Gradient Sign Method (FGSM) [Szegedy et al., 2013; Goodfellow et al., 2014] is a popular white-box attack that utilizes gradient information to update the adversarial example in a single step, in the direction of maximum classification loss. The FGSM update rule is $x' = \mathrm{clip}_{[0,1]}\{x + \epsilon \cdot \mathrm{sign}(\nabla_x L(x, y; \theta))\}$. Basic Iterative Method (BIM) [Kurakin et al., 2018] is an extension of FGSM that generates adversarial examples through multiple iterative updates. Projected Gradient Descent (PGD) is similar to BIM, but additionally selects a random initial point in the neighborhood of the benign example as the starting point of the iterative attack. PGD can be interpreted as an iterative algorithm for solving $\max_{x' : \|x' - x\|_\infty < \alpha} L(x', y; \theta)$, and it is recognized by [Athalye et al., 2018] as one of the most powerful first-order attacks. The use of random noise was first studied by [Tramèr et al., 2017]. In a PGD attack, the number of iterations K is a crucial factor in determining both the strength of the attack and the computation time for generating adversarial examples. In this paper, we refer to a K-step PGD attack as PGD-K. A PyTorch sketch of both attacks is given below.
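The following PyTorch sketch (our illustration, not the authors' released code) implements PGD-K with a random start; FGSM is recovered by running a single full-size step without the random start. The default ε and α mirror the values reported in the experimental setup (0.0314 and 0.0078), and `model` is assumed to be any classifier returning logits.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.0314, alpha=0.0078, steps=4, random_start=True):
    """K-step PGD: ascend the loss, then project back into the eps-ball."""
    x_adv = x.clone().detach()
    if random_start:
        x_adv = torch.clamp(x_adv + torch.empty_like(x).uniform_(-eps, eps), 0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()    # one ascent step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project to the eps-ball
        x_adv = torch.clamp(x_adv, 0, 1)                # keep a valid image
    return x_adv.detach()

def fgsm_attack(model, x, y, eps=0.0314):
    """Single-step FGSM is PGD with one full-size step and no random start."""
    return pgd_attack(model, x, y, eps=eps, alpha=eps, steps=1, random_start=False)
```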
Adversarial defense methods have been widely studied in recent years due to increasing concern about the security of deep learning models. These methods aim to improve the robustness of deep neural networks against adversarial attacks, which are specifically designed to exploit the weaknesses of a model by introducing small, imperceptible perturbations to the input data. Adversarial training (AT) [Goodfellow et al., 2014] is a popular method that trains the model on generated adversarial examples, making it more robust against similar attacks; a minimal AT loop is sketched below. Robustness measures how well the model withstands an attack: it is the accuracy obtained after applying an adversarial attack to the clean test data. To avoid confusion, in this paper accuracy means clean accuracy on clean test data, and robustness means accuracy under adversarial attacks on the clean test data.

Critical Drawback of Adversarial Training in CICL. Naive application of AT to CICL causes serious problems for both robustness and accuracy. Figure 1 shows the negative impact of AT on CICL data. These experiments were conducted on sequential CIFAR-10, with detailed settings identical to Section 5. In the figure, applying AT to joint training decreases clean accuracy slightly but increases robustness dramatically, which is the well-known effect of AT [Goodfellow et al., 2014; Zhang et al., 2019]. However, AT in ER behaves very differently from this well-known effect: clean accuracy decreases significantly, and robustness also drops below that of joint adversarial training. This example shows the potential risk of AT in the CICL framework.
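As a reference point for the EAT procedure later, here is a minimal sketch of the standard AT loop built on the `pgd_attack` sketch above; the optimizer choice and learning rate are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def adversarial_train(model, loader, epochs, attack=pgd_attack, **attack_kw):
    """Standard adversarial training: minimize the loss on attacked inputs."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = attack(model, x, y, **attack_kw)   # craft attacks on the fly
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model
```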
Problem of AT in CICL
Attack Overwhelming by Class Imbalance. Class imbalance in CICL increases the number of adversarial attacks from the majority classes, which overwhelms the number of clean examples of the minority classes. Class imbalance, a well-known but still unsolved problem of CICL, causes the imbalance of adversarial attacks because AT generates attacks by distorting all clean samples in a mini-batch. For example, AT on data with 10% of the training samples in one class and 90% in the others passes exactly this ratio on to the generated adversarial examples [Goodfellow et al., 2014]. In usual CICL settings [Buzzega et al., 2020], class imbalance is a common property, and therefore imbalanced attacks occur in most CICL methods that naively adopt AT.
Weak Resistance to Inbound Attacks by Class Imbalance. The small number of clean examples caused by class imbalance weakens the resistance to inbound attacks. We use the term inbound attack for a class to denote closely located adversarial examples coming from the other classes. When inbound attacks are trained on in AT, the number of clean examples plays an important role in resisting the distortion of information already learned by the model. This resistance is weakened for the minority classes in CICL, which have insufficient clean examples compared to the majority classes. For example, in rehearsal-based methods the model can access the full current task data but only a very small memory of previous task data compared to the current task size, and this previous-to-current task imbalance widens as CL progresses.
Problem: Decision Boundary Distortion. The two properties, attack overwhelming from the majority classes and weakened resistance of the minority classes, cause critical distortion of the information learned from clean data. This phenomenon appears as a distortion of decision boundaries. The overwhelming attacks increase the inbound attacks on the minority classes, and the minority classes have insufficient resistance to these attacks because they lack clean examples. The combination of these two properties then widens the gap between the training losses on clean examples and on adversarial examples, and distorts the decision boundary obtained from clean examples. The distortion directly causes an accuracy drop, because the clean test samples follow the class ratio of the clean training examples that built the boundary. For robustness, the test samples are balanced adversarial attacks, which correspond to a decision boundary different from the one distorted by the imbalanced attacks; this difference causes robustness errors.

Settings for Empirical Analysis. We prepared a toy binary classification task to preliminarily verify the distortion phenomenon. We generated the same number of crescent-shaped input representations for each of two classes, as in Figure 2, following [Altinisik et al., 2022]. Each class has 1000 input samples. We trained a simple two-layer feed-forward network with three hidden nodes under four training conditions: 1) balanced clean data, 2) balanced clean data with balanced adversarial examples, 3) imbalanced clean data, and 4) imbalanced clean data with imbalanced adversarial examples (1:9). Training used the SGD optimizer with a learning rate of 0.1 for 500 epochs; adversarial training used the PGD attack with 10 iterations. The trained models are used to plot their decision boundaries by generating predicted classes over the representation space, as shown in Figure 2. Each boundary is tested on balanced clean samples and balanced adversarial examples, shown as dot distributions in the first and second rows of the figure. Detailed accuracy is given in Table 1, and a sketch of the setup follows.
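A minimal sketch of the imbalanced clean-training condition, assuming scikit-learn's `make_moons` for the crescent-shaped data; the network size, optimizer, learning rate, and epoch count follow the text, while the noise level and subsampling scheme are illustrative assumptions. The adversarial conditions additionally perturb the training batch with the `pgd_attack` sketch above (10 steps).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import make_moons

# Crescent-shaped two-class data: 1000 samples per class.
X, y = make_moons(n_samples=2000, noise=0.1)
X, y = torch.tensor(X, dtype=torch.float32), torch.tensor(y)

# Simulate the 1:9 imbalance by subsampling class 1 to ~1/9 of class 0.
keep = (y == 0) | (torch.rand(len(y)) < 1.0 / 9.0)
X_imb, y_imb = X[keep], y[keep]

# Two-layer feed-forward network with three hidden nodes.
model = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(X_imb), y_imb).backward()
    opt.step()
```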
Distortion Compared to Clean Test. In Figure 2a, the model trained on balanced clean data shows a clear decision boundary that distinguishes the clean test samples. In Figure 2b, the model trained with balanced adversarial samples changes the boundary slightly but still maintains the boundary of the clean data. With imbalanced adversarial examples (Figure 2d), the model moves the boundary substantially from the majority class (red) toward the minority class (blue) and incorrectly classifies more blue test samples. The results imply that imbalanced adversarial training can distort the boundary and destroy the original decision boundary built from clean data. Note that there is no critical clean-accuracy degradation under imbalanced clean training (Figure 2c): the accuracy degradation and poor robustness arise only when AT is combined with the imbalanced setting, not in the plain imbalanced setting.
Distortion Compared to Robustness Test. In Figure 2e, balanced clean training shows the base robustness to the adversarial attacks generated for its trained model. Applying AT to the balanced data (Figure 2f), the trained model shows significantly improved robustness, which is the desirable gain of AT in an ordinary balanced training environment. However, imbalanced AT shows less improvement compared to imbalanced clean training. In the balanced case the boundary is nearly unchanged, but in the imbalanced case the boundary shifts toward the blue area when AT is applied, so most robustness test samples of the blue class are incorrectly classified. This result provides further evidence of robustness degradation caused by decision boundary distortion.
Method
Simple Solution: External Adversarial Training. Figure 3 shows the details of EAT for CICL in the experience replay setting. Compared to typical AT, EAT creates an additional external model whose backbone has the same network architecture as the CL model shared over tasks (the target model). At each step, the method creates an external model, trains it from scratch via AT only on the current task, generates adversarial examples, and then deletes it. The target model is then trained, without AT of its own, on the current task data, replayed samples from memory, and the generated adversarial examples. The detailed process is described in Algorithm 1 and sketched below. Note that EAT needs no extra persistent memory: the external model is deleted after generating the adversarial examples and is not kept for future tasks.
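A sketch of one ER + EAT time step, reusing the `pgd_attack` and `adversarial_train` sketches above; `make_model`, `replay`, and `optimizer_step` are hypothetical helpers standing in for the model constructor, memory sampling, and the target model's update, and the loss composition is our simplified reading of Algorithm 1 rather than the released code.

```python
import torch.nn.functional as F

def eat_step(target_model, task_loader, memory, make_model, at_epochs=10):
    """One ER + EAT time step: external AT on the current task only,
    then clean + replay + adversarial training of the target model."""
    external = make_model()                               # fresh external model
    adversarial_train(external, task_loader, at_epochs)   # AT restricted to current task
    for x, y in task_loader:
        x_adv = pgd_attack(external, x, y)                # intra-task attacks only
        x_mem, y_mem = replay(memory)                     # samples from the buffer
        loss = (F.cross_entropy(target_model(x), y)
                + F.cross_entropy(target_model(x_adv), y)
                + F.cross_entropy(target_model(x_mem), y_mem))
        optimizer_step(target_model, loss)                # plain update, no AT here
    del external                                          # discarded after this step
```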
Motivation: Effective Exclusion of AT on Class Imbalance. The motivation of EAT is to effectively exclude imbalanced AT in order to reduce the distortion effect. In CICL, imbalanced AT arises from the imbalanced sizes of the current task data and the replayed samples, so AT across different tasks suffers from the distortion problem. Excluding AT across different tasks is a practically achievable route to this goal, because class imbalance itself is inherent to CICL methods and has no clear solution in a limited computing environment. A simple form of this exclusion is to adversarially train the target model only on the current task data, which we call current-task adversarial training (CAT) in this paper. However, because CICL settings incrementally expand the class set used for prediction, this method still generates attacks from the current task toward other tasks. To strengthen the exclusion, EAT uses an external model whose attacks are confined to the classes of the current task. Figure 4 shows the rate of adversarial samples with respect to task boundaries; EAT maintains a higher rate than CAT over all training epochs, which verifies its more effective exclusion. This experiment was conducted on split CIFAR-10, with the other settings as in Section 5. In fact, the incomplete exclusion of CAT largely decreases accuracy and improves robustness only slightly, as shown in Figure 1.

Memory Update. After training the base model at each step, the external memory is updated by inserting samples randomly selected from the task at that step; if the memory is already full, a randomly chosen stored sample is replaced by the new one. This update scheme is known as reservoir sampling (see the sketch after this subsection).

Datasets. We use three datasets: Split-CIFAR-10, Split-CIFAR-100, and Split-MiniImageNet. Each is created by splitting the original data by class, composing classes into tasks, and ordering the tasks as a stream. The task composition and ordering determine what information can transfer across tasks, and different settings change the results substantially. For clear analysis, we fix the task composition in ascending order of labels.
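The reservoir-sampling memory update described above admits a compact sketch; `seen` counts how many examples have streamed past so far, which is what gives every example an equal probability of residing in the buffer.

```python
import random

def reservoir_update(memory, capacity, sample, seen):
    """Keep each of the `seen` streamed examples in the buffer
    with equal probability capacity / seen."""
    if len(memory) < capacity:
        memory.append(sample)
    else:
        j = random.randint(0, seen - 1)
        if j < capacity:
            memory[j] = sample        # replace a uniformly chosen slot
```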
Table 2: Statistics of CICL benchmarks (tr: training, te: test).

Datasets             Split CIFAR-10   Split MiniImageNet
tasks                5                20
classes / task       2                5
tr. samples / task   2000             500
te. samples / task   10               100
image size           32x32            84x84
Results and Discussion
Performance Comparison with State-of-the-Art. The accuracy and robustness of several state-of-the-art models are shown in Table 3. The results are categorized into two cases using buffer sizes of 200 and 500 for experience replay. In each memory setting, we reproduce the state-of-the-art methods; their results are close to the reference accuracy results, with some variance.
Table 3: Accuracy and robustness on split CIFAR-10 and split Tiny ImageNet, reported per buffer size and method. Accuracy* is the reference result. Every value is averaged over 3 trials.
In the accuracy results, AT significantly decreases the accuracy of experience replay methods in all cases compared to their original accuracy, whereas EAT shows significantly higher accuracy than AT. In the robustness results, AT improves the robustness of all base methods; EAT increases robustness further and shows the best value among all methods in the table.
The results imply that EAT effectively solves the accuracy and robustness drops of AT in CICL. Furthermore, EAT is the most effective method for enhancing robustness in CICL. Its lower accuracy relative to the best original method reflects the usual accuracy-robustness trade-off of AT.

Robustness and Accuracy on Each Task After Training. Figure 5 shows the per-task robustness of AT and EAT after training over all tasks on CIFAR-10. EAT shows higher robustness than AT on every task, whereas AT improves hardly any previous task except the current one (Task 5). The accuracy of EAT is higher on Task 1 and Task 2, the two oldest tasks, whereas AT shows slightly higher accuracy on the recent tasks. Considering the total accuracy and robustness gains of EAT, the results imply that EAT improves both, improves accuracy on older tasks in particular, and significantly improves robustness on all tasks. Note that EAT never learns inter-task adversarial attacks, yet its robustness increases over all tasks; this is strong evidence of the drawbacks of the unnecessary class-imbalanced attacks that AT launches between tasks.

Performance Difference over Time Steps. Figure 6 shows accuracy and robustness averaged over the tasks involved at each training step. Accuracy gradually decreases over steps in CICL settings, as the model is repeatedly trained on a new task and forgets previous task information. This phenomenon appears for both AT and EAT, but overall accuracy is slightly higher with EAT. Robustness is similar but not exactly equal at step 1, owing to the randomness of the adversarial attacks in AT. The robustness gap grows significantly at step 2 and remains similar through step 5. Since step 2 involves only Task 1 and Task 2 while step 5 involves all tasks, the persistent gap implies that CICL with AT suffers a sizable robustness degradation whenever a new task is added.

Reducing Computational Cost. Both EAT and AT are computationally expensive, since adversarial examples must be built and trained on; EAT is more expensive still because it trains and uses new external models. For practical use this cost may be a limitation, so we also verify the performance of EAT in a cheaper setting using the faster FGSM attack [Szegedy et al., 2013]. Compared to a 4-step PGD attack, this reduces the attack's time cost to about 25%. Table 4 shows the performance in this efficient setting: accuracy and robustness are still significantly improved by EAT, so the computational limitation of EAT can be substantially alleviated.
Conclusion
In this paper, we showed that existing AT does not work well in the class-incremental continual learning setting with experience replay. We argued that the cause lies in applying AT to class-imbalanced data, whose distortion of decision boundaries results in drops in both accuracy and robustness. To avoid the distortion, we introduced EAT, which effectively excludes imbalanced AT between different tasks. In experiments on CICL benchmarks, we verified that our method significantly improves both accuracy and robustness compared to AT, which suffers from the negative effect of class imbalance. Moreover, EAT provides new state-of-the-art defense performance (robustness) in CICL with ER.
Future Works
Although the robustness of several methods has been investigated in this paper, the robustness of many CL methods remains unexamined. In addition, there is a lack of study on how adversarial defense methods other than adversarial training affect CL. Broader and more varied studies of adversarial robustness in CL are needed as future work. To the best of our knowledge, this study is the first to examine adversarial defenses specialized for CL; affordable and effective defenses specialized for CL should also be studied in the future.
Related Work
Continual learning. CL can be divided into several categories according to problem settings and constraints. One group extends the architecture of the model for each new task. Another approach regularizes the model with respect to previous task knowledge while training new tasks. Rehearsal methods use stored data or samples from generative models to resist catastrophic forgetting; they are very effective in class-incremental CL but incur additional computation and memory costs. Recent rehearsal-free methods have shown high performance with little memory cost using vision transformers and prompt tuning; that setting is more realistic and achieves higher performance than training from scratch. In this paper, we focus on the setting of class-incremental CL from scratch.
Adversarial Defense. Various adversarial defense methods have been proposed in the literature, including adversarial training, defensive distillation, input preprocessing, and model ensembles. Defensive distillation [Papernot et al., 2016] improves the robustness of a model by distilling knowledge from a robust model into a less robust one. Input preprocessing methods [Dziugaite et al., 2016] preprocess the input data to remove adversarial perturbations before feeding it to the model. Model ensemble methods [Pang et al., 2019] increase robustness by combining the predictions of multiple models. Other methods such as gradient masking, randomized smoothing, and adversarial detection have also been proposed in recent years: gradient masking [Lee et al., 2020] hides the gradients of the model to prevent gradient-based attacks, randomized smoothing [Cohen et al., 2019] makes the model more robust by adding random noise to the input data, and adversarial detection [Liu et al., 2018] aims to detect adversarial examples and discard them before they reach the model.
Continual learning with adversarial defense. Efforts to incorporate adversarial robustness into CL have a short history. [Khan et al., 2022] studied how to increase robustness in joint training using a continual pruning method, but did not study how to increase robustness in CL itself. [Chou et al., 2022] used the robust and non-robust datasets identified by [Ilyas et al., 2019] to increase the clean accuracy of a continual learning model; they also ran robustness experiments, but only on sequential CIFAR-10 with a large memory (16000), and their goal was clean accuracy rather than adversarial robustness. In this paper, we are the first to study how to increase adversarial robustness in CL, and we measure the robustness of various methods in various settings.
Figure 1: Accuracy and robustness of simple methods applying AT to CICL with ER settings on the split CIFAR-10 task.
Figure 2: Decision boundaries for the training data and test sample distributions on the toy task. Clean and robustness (RT) test samples are plotted with the decision boundary obtained by clean training (CT) or AT on balanced and imbalanced data: (a) clean test, balanced CT; (b) clean test, balanced AT; (c) clean test, imbalanced CT; (d) clean test, imbalanced AT; (e) RT test, balanced CT; (f) RT test, balanced AT; (g) RT test, imbalanced CT; (h) RT test, imbalanced AT.
Figure 3: Overview of a) general experience replay and b) the proposed EAT method on it in CICL. (M_T: the target model transferred over time steps)
Algorithm 1: procedure ER+EAT over a stream D = {T_1, · · · , T_N} with a target model M_T, an external memory M, and an external model M_e; at each step the target model is trained on (x′, y′) ∪ (x, y).
Figure 4: Rate of adversarial attacks from current tasks to previous tasks over training epochs.
Figure 5: Accuracy and robustness of each task at the final step of ER + AT and ER + EAT (acc: accuracy).
Figure 6: Accuracy and robustness of ER + AT and ER + EAT at each step (acc: accuracy).
Table 1: Numerical results of accuracy and robustness on the toy task.

training type    accuracy(%)   robustness(%)
Balanced CT      100.0         43.6
Balanced AT      99.2          69.6
Imbalanced CT    100.0         46.2
Imbalanced AT    93.4          52.6
Method Setting. We use the PGD attack [Madry et al., 2017] as the adversarial attack for EAT. To adversarially train the external model, we perform adversarial training for 10 epochs; the PGD attack uses 4 iterations. For both the FGSM and PGD attacks, we use α = 0.0078 and ϵ = 0.0314. For comparison, we test adversarial training using the FGSM and PGD attacks, with the same attack setup as EAT. To test robustness, we use a PGD attack with 4 iterations. EAT is applied to ER, DER, and DERpp, the methods to which it can be applied without modification. We compare our method with a knowledge distillation method (iCaRL [Rebuffi et al., 2017]) and rehearsal-based methods (ER, GEM [Lopez-Paz and Ranzato, 2017], FDR [Benjamin et al., 2018], HAL [Chaudhry et al., 2021], DER, DERpp). Like [Buzzega et al., 2020], we do not compare with HAL and GEM on sequential Tiny-ImageNet because of their intractable running time.
Table 4: Accuracy and robustness of AT and EAT using the computationally efficient FGSM attack method.
Acknowledgement

This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1A2C2012054) and by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022 (Project Name: Development of service robot and contents supporting children's reading activities based on artificial intelligence, Project Number: R2022060001, Contribution Rate: 50%).
References

Enes Altinisik, Safa Messaoud, Husrev Taha Sencar, and Sanjay Chawla. A3T: Accuracy aware adversarial training. arXiv preprint arXiv:2211.16316, 2022.

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pages 274-283. PMLR, 2018.

Ari S. Benjamin, David Rolnick, and Konrad Kording. Measuring and regularizing networks in function space. arXiv preprint arXiv:1805.08289, 2018.

Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. Advances in Neural Information Processing Systems, 33:15920-15930, 2020.

Hyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2L: Contrastive continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9516-9525, 2021.

Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. arXiv preprint arXiv:1812.00420, 2018.

Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486, 2019.

Arslan Chaudhry, Albert Gordo, Puneet Dokania, Philip Torr, and David Lopez-Paz. Using hindsight to anchor past knowledge in continual learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6993-7001, 2021.

Ting-Chun Chou, Jhih-Yuan Huang, and Wei-Po Lee. Continual learning with adversarial training to enhance robustness of image recognition models. In 2022 International Conference on Cyberworlds (CW), pages 236-242, 2022.

Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pages 1310-1320. PMLR, 2019.

Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M. Roy. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853, 2016.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.

Chih-Hui Ho and Nuno Vasconcelos. Contrastive learning with adversarial examples. Advances in Neural Information Processing Systems, 33:17081-17093, 2020.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems, 32, 2019.

Hikmat Khan, Nidhal Carla Bouaynaya, and Ghulam Rasool. Adversarially robust continual learning. In 2022 International Joint Conference on Neural Networks (IJCNN), pages 1-8, 2022.

Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security, pages 99-112. Chapman and Hall/CRC, 2018.

Hyungyu Lee, Ho Bae, and Sungroh Yoon. Gradient masking of label smoothing in adversarial robustness. IEEE Access, 9:6453-6464, 2020.

Ninghao Liu, Hongxia Yang, and Xia Hu. Adversarial detection with model interpretation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1803-1811, 2018.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30, 2017.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner. Online continual learning in image classification: An empirical survey. Neurocomputing, 469:28-51, 2022.

Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, pages 4970-4979. PMLR, 2019.

Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pages 582-597. IEEE, 2016.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001-2010, 2017.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.

Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pages 7472-7482. PMLR, 2019.
| [] |
[
"mPMR: A Multilingual Pre-trained Machine Reader at Scale *",
"mPMR: A Multilingual Pre-trained Machine Reader at Scale *"
] | [
"Weiwen Xu ",
"Xin Li \nDAMO Academy, Alibaba Group\n\n",
"Wai Lam [email protected] \nThe Chinese University of Hong\nKong\n",
"Lidong Bing [email protected] \nDAMO Academy, Alibaba Group\n\n"
] | [
"DAMO Academy, Alibaba Group\n",
"The Chinese University of Hong\nKong",
"DAMO Academy, Alibaba Group\n"
] | [] | We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRCstyle pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process. | 10.48550/arxiv.2305.13645 | [
"https://export.arxiv.org/pdf/2305.13645v1.pdf"
] | 258,841,288 | 2305.13645 | 895fc6686d695cb45bc32676c88dc82f85868558 |
mPMR: A Multilingual Pre-trained Machine Reader at Scale *
Weiwen Xu
Xin Li
DAMO Academy, Alibaba Group
Wai Lam [email protected]
The Chinese University of Hong
Kong
Lidong Bing [email protected]
DAMO Academy, Alibaba Group
mPMR: A Multilingual Pre-trained Machine Reader at Scale *
We present multilingual Pre-trained Machine Reader (mPMR), a novel method for multilingual machine reading comprehension (MRC)style pre-training. mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages. To achieve cross-lingual generalization when only source-language fine-tuning data is available, existing mPLMs solely transfer NLU capability from a source language to target languages. In contrast, mPMR allows the direct inheritance of multilingual NLU capability from the MRCstyle pre-training to downstream tasks. Therefore, mPMR acquires better NLU capability for target languages. mPMR also provides a unified solver for tackling cross-lingual span extraction and sequence classification, thereby enabling the extraction of rationales to explain the sentence-pair classification process.
Introduction
Multilingual pre-trained language models, abbreviated as mPLMs, have demonstrated strong natural language understanding (NLU) capability in a wide range of languages (Xue et al., 2021; Cai et al., 2021, 2022; Conneau et al., 2020a; Ding et al., 2022; Li et al., 2020a). In particular, mPLMs can maintain exceptional cross-lingual language understanding (XLU) capability on unseen target languages even though they are only fine-tuned on resource-rich source languages like English.
It has been proved that optimizing cross-lingual representations of mPLMs can improve XLU ca-mPMR
Source Label Data (EN)
The pizza is delicious. => Positive Tom eats pizza.=> ("Tom",PER)
Transfer from source language Inherit from MRC pre-training
Pre-training
Fine-tuning
Definition Article
Mention Article
印 欧 语 系 (Q19860) 印 欧 语 系 , 全 称 印 度 -欧罗巴语系,是世界上 分 布 最 ⼴ 泛 的 语 系 ...
Supervised Learning (Q334384)
Supervised learning (SL) is a machine learning paradigm for problems where the available data consists of labelled examples ...
Artificial Intelligence (Q11660)
... Supervised learningQ334384 requires a human to label the input data first, and comes in two main varieties ... ...
XLU
有 共 同 组 语 的 ⼀ 组 语 ⾔ 称 为 语 系 。 印 欧 语 系 Q19860 的 语 ⾔ 在 今 天 为 使 ⽤ ⼈ 数 之 最 ...
Retrofitting mPLM to mPMR with Wikipedia hyperlinks
Acquiring NLU capability for downstream XLU Figure 1: Pre-training and fine-tuning of mPMR.
It has been proved that optimizing the cross-lingual representations of mPLMs can improve XLU capability. For example, cross-lingual supervision, such as parallel sentences (Conneau and Lample, 2019) or bilingual dictionaries (Conneau et al., 2020b), can enhance cross-lingual representations with better language alignment. XLM-R (Conneau et al., 2020a) and mT5 (Xue et al., 2021) showed that appropriately incorporating more languages during pre-training leads to better cross-lingual representations. A few works enriched the cross-lingual representations with factual knowledge through the utilization of multilingual mentions of entities (Calixto et al., 2021; Ri et al., 2022) and relations (Liu et al., 2022; Jiang et al., 2022) annotated in knowledge graphs. Despite their differences, the above methods essentially construct more diverse multilingual corpora for pre-training mPLMs. Such mPLMs would presumably meet their saturation points and are known to suffer from the curse of multilinguality (Conneau et al., 2020a; Pfeiffer et al., 2022; Berend, 2022). In this situation, introducing more training data from either existing (Pfeiffer et al., 2022) or unseen (Conneau et al., 2020a) languages may bring no further improvement or may even be detrimental to their cross-lingual representations.
In this paper, instead of training a new mPLM with better cross-lingual representations, we propose the multilingual Pre-trained Machine Reader (mPMR) to directly guide existing mPLMs to perform NLU in various languages. As shown in Figure 1, mPMR resembles PMR (Xu et al., 2022) in constructing multilingual machine reading comprehension (MRC)-style data with Wikipedia hyperlinks. These data are used to retrofit an mPLM into an mPMR through MRC-style continual pre-training. During the retrofitting process (i.e., pre-training), mPMR jointly learns general sequence classification and span extraction capability for multiple languages. In XLU fine-tuning, mPLMs rely solely on cross-lingual representations to transfer NLU capability from a source language to target languages. By contrast, mPMR enables the direct inheritance of multilingual NLU capability from the MRC-style pre-training to downstream tasks in a unified MRC formulation, which alleviates the discrepancies between source-language fine-tuning and target-language inference (Zhou et al., 2022a,b, 2023). Therefore, mPMR shows greater potential in XLU than mPLMs.
To improve the scalability of mPMR across multiple languages, we further propose Unified Q/C Construction and Stochastic Answer Position strategies for refining the curation of MRC data. With these two strategies, mPMR generalizes better to low-resource languages and becomes more robust to position bias (Ko et al., 2020).
The experimental results show that mPMR obtains clear improvements over XLM-R (Conneau et al., 2020a) on span extraction, with average improvements of up to 12.6 F1 on TyDiQA and 8.7 F1 on WikiAnn. The analysis reveals that mPMR benefits from more multilingual MRC data for pre-training. We also find that mPMR converges faster on downstream tasks and can use its strong extraction capability to explain the sequence classification process.
mPMR
We present the MRC model and training data of mPMR. We closely follow PMR (Xu et al., 2022) and introduce the modifications for enabling multilingual MRC-style pre-training.
Model Pre-training
Our mPMR follows the same MRC architecture as Xu et al. (2022, 2023), with an encoder and an extractor. The encoder maps the input tokens X, the concatenation of the query Q, the context C, and special markers (i.e., [CLS] and [SEP]), into hidden representations H. For any two tokens X_i and X_j (i < j), the extractor receives their contextualized representations H_i and H_j and predicts the probability score S_{i,j}, indicating the probability that the token span X_{i:j} answers the query Q.
mPMR is guided by the Wiki Anchor Extraction (WAE) objective, which trains both the encoder and the extractor. WAE checks whether the answer to the query exists in the context. If so, WAE first regards the query and the context as relevant and extracts the [CLS] token as a sequence-level relevance indicator; WAE then extracts all corresponding answers from the context. A sketch of such an extractor head follows.
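The paper does not spell out the extractor's exact parameterization here, so the following PyTorch sketch shows one plausible instantiation: boundary representations H_i and H_j are concatenated and scored by a small feed-forward head, with the score at the [CLS] position naturally serving as the sequence-level relevance indicator.

```python
import torch
import torch.nn as nn

class SpanExtractor(nn.Module):
    """Scores every candidate span X_{i:j} from token representations H."""
    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, H):                        # H: (seq_len, hidden)
        L, d = H.shape
        start = H.unsqueeze(1).expand(L, L, d)   # H_i for every pair (i, j)
        end = H.unsqueeze(0).expand(L, L, d)     # H_j for every pair (i, j)
        S = self.scorer(torch.cat([start, end], dim=-1)).squeeze(-1)
        return torch.sigmoid(S)                  # S[i, j], meaningful for i <= j
```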
Multilingual MRC Data
Training mPMR requires the existence of labeled (query, context, answer) triplets. To obtain such data, we collected Wikipedia articles with anchor annotations for 24 languages, which are the most widely used and cover a reasonable number of languages used in XLU tasks (Ri et al., 2022).
As shown in Figure 1, we utilized a Wikipedia anchor to obtain a pair of correlated articles. One side of the pair is the article that provides an in-depth description of the anchor entity, which we defined as the definition article. The other side of the pair is the mention article, which mentions the specific anchor text. We composed an answerable MRC example in which the anchor is the answer, the surrounding text of the anchor in the mention article is the context, and the definition of the anchor entity in the definition article is the query. Additionally, we can generate an unanswerable MRC example by pairing a query with an irrelevant context that has no anchor association.
Unified Q/C Construction. PMR constructed the MRC query and context as valid sentences so as to keep the text coherent. However, sentence segmentation tools are usually unavailable for low-resource languages. To remedy this, mPMR applies no sentence segmentation and preprocesses Wikipedia articles with word tokenization only. For each anchor, the MRC query comprises the first Q words of the definition article. To prevent information leakage during pre-training, similar to PMR, we anonymized the anchor entity in the query with the [MASK] token. The MRC context consists of C words surrounding the anchor.
Stochastic Answer Position. As noted by Ko et al. (2020), a model is prone to overfitting to a position shortcut if the answer exhibits a fixed position pattern in the context. In our case, if the MRC context consisted of C/2 words on both the left and right sides of the anchor, the model might learn the shortcut that the middle of the context is likely to be the answer. To prevent such position bias, we propose a stochastic answer position method, which allows the answer to appear at any position within the context. Specifically, given an anchor in a Wikipedia article, the context comprises the ξ words preceding the anchor and the C − ξ words following it, where ξ is a random integer ranging from 0 to C that varies across contexts. In accordance with PMR, we treated all text spans identical to the anchor in the current context as valid answers. A sketch of the resulting example construction is given below.
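Putting the two strategies together, the sketch below builds one answerable (query, context, answers) triplet from a hyperlink; `definition_words` and `mention_words` stand for the word-tokenized articles and `anchor_start`/`anchor_end` for the anchor span, all names being ours rather than the released code, and the [MASK] anonymization plus unanswerable pairing are noted in comments.

```python
import random

def build_answerable_example(definition_words, mention_words,
                             anchor_start, anchor_end, q_len, c_len):
    """One answerable MRC triplet with a stochastic answer position."""
    # Query: first Q words of the definition article (the anchor entity
    # mention inside it would be anonymized to [MASK] in practice).
    query = definition_words[:q_len]

    # Context: xi words before the anchor, c_len - xi words after it.
    xi = random.randint(0, c_len)
    lo = max(0, anchor_start - xi)
    context = mention_words[lo:lo + c_len]

    # Answers: every span in the context identical to the anchor text.
    anchor = mention_words[anchor_start:anchor_end]
    n = len(anchor)
    answers = [(i, i + n - 1) for i in range(len(context) - n + 1)
               if context[i:i + n] == anchor]
    return query, context, answers

# An unanswerable example pairs the same query with a context from an
# article that contains no hyperlink to the anchor entity.
```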
Experimental Setup
Implementation Details. In mPMR, the encoder is loaded from XLM-R (Conneau et al., 2020a) and the extractor is randomly initialized. Both components are then continually pre-trained using the multilingual MRC data that we constructed. More hyper-parameters can be found in Appendix A.1.
Downstream XLU Tasks. We evaluated mPMR on a series of span extraction tasks, including Extractive Question Answering (EQA), Named Entity Recognition (NER), and Aspect-Based Sentiment Analysis (ABSA). We also evaluated mPMR on two sequence classification tasks, following Xu et al. (2022) to cast all tasks in the unified MRC formulation. mPMR outperforms previous methods on span extraction tasks; in particular, it achieves up to 7.3 and 7.1 F1 improvements over XLM-R on TyDiQA and WikiAnn, respectively. Such significant improvements probably stem from two facts: (1) WikiAnn comprises a larger number of target languages (i.e., 40), so existing methods may struggle to align these low-resource languages with English due to a lack of language-specific data.
(2) TyDiQA is a more challenging cross-lingual EQA task, with 2x less lexical overlap between the query and the answer than MLQA and XQuAD (Hu et al., 2020). Our mPMR, which acquires target-language span extraction capability from both MRC-style pre-training and English-only QA fine-tuning, achieves larger performance gains on this more challenging task.
mPMR Pre-training. To reflect the impact of our MRC-style data and the Stochastic Answer Position method on pre-training, we present a step-by-step analysis of the retrofitting process, starting from XLM-R, in Table 2. Our findings suggest that the significant improvements observed are largely due to the inclusion of multilingual MRC data. Introducing English MRC data (model #2) gives only marginal improvements, because model #2 can only rely on cross-lingual representations to transfer the knowledge acquired during MRC-style pre-training. When using MRC data in more languages (models #4 and #5), we observe significant improvements on the XLU tasks. This can be attributed to the NLU capability directly inherited from MRC-style pre-training in the target languages. Additionally, with our Stochastic Answer Position method (model #3), mPMR becomes more robust to position bias, which further improves the XLU results.

Explainable Sentence-pair Classification. Inspired by PMR (Xu et al., 2022), we investigated whether the extraction capability of mPMR can be leveraged to explain sentence-pair classification. Note that sentence-pair classification focuses on the inference between the two sentences. If we construct the query with only the task label, as PMR does, the query does not correspond to any meaningful span in the context and can thus hardly guide the span extraction. Therefore, we leveraged another template, "[CLS] label Sen-1 [SEP] Sen-2 [SEP]", in which the two sentences are represented separately in the query and the context. With this template, we can extract the exact span from Sen-2 that leads to a contradiction or entailment relation (i.e., the task label) with Sen-1. Specifically, we passed the sentence pair to the model twice, with each sentence of the pair designated as Sen-2 in turn, and extracted the context span with the highest probability score over both passes.
As shown in Table 3, the extracted spans are indeed important rationales for determining the relationship between the two sentences. This finding confirms that the extraction capability of mPMR can be appropriately used to explain the sentence-pair classification process. The extraction capability may slightly interfere with learning sequence classification during fine-tuning, resulting in a 0.4-point accuracy decrease on XNLI.
mPMR Fine-tuning. We investigated the effects of mPMR on XLU fine-tuning. Figure 2 shows that mPMR converges faster than XLM-R on WikiAnn, reaching an extremely low loss value even when fine-tuned for only 500 steps. In terms of test-set performance, mPMR outperforms XLM-R across the board and exhibits greater stability. As a result, mPMR provides a better starting point for addressing XLU tasks than XLM-R. More examples from XQuAD and PAWS-X are provided in Figures 3 and 4.
Conclusions
This paper presents mPMR, a novel multilingual MRC-style pre-training method. mPMR provides a unified solver for cross-lingual span extraction and sequence classification, and it enables the direct transfer of NLU capability from pre-training to downstream tasks. mPMR clearly improves over previous baselines and offers a possible way to explain the sentence-pair classification process.
Limitations
We identify the following two limitations of our work:
• Unlike pre-training on raw text, constructing MRC-style data from Wikipedia requires the existence of hyperlinks. This idea works well for resource-rich languages such as English and Chinese, but it is less effective for languages with few hyperlink annotations in Wikipedia, because a small amount of MRC-style training data can hardly guide the learning of NLU capability in those languages. A possible solution is to explore other data resources from which large-scale MRC data can be constructed automatically for pre-training.
• As observed in Table 1, the improvements on sequence classification tasks are less significant than those on span extraction tasks. We suspect that the existence of an anchor is not a strong relevance indicator between our constructed query and context; a similar finding was reported by Chang et al. (2020). Constructing more relevant query-context pairs for sequence classification pre-training could therefore remedy this issue.
A.3 mPMR Performance per Language
We show the detailed results for each language of each task in Table 7 (XQuAD), Table 8 (MLQA), Table 9 (TyDiQA), Table 10 (WikiAnn), Table 11 (CoNLL), Table 12 (SemEval16), Table 13 (PAWS-X), and Table 14 (XNLI).
Figure 3: Convergence speed (test-set F1 and training loss) of mPMR base and XLM-R base on XQuAD.

Figure 4: Convergence speed (test-set F1 and training loss) of mPMR base and XLM-R base on PAWS-X.
Table 1: Results on all XLU tasks. We report the average result over all languages for each dataset and the overall average across datasets in the Avg. column. Results reproduced by us are marked with ‡. Some results of Wiki-CL are left blank because the model checkpoint was not released.
Table 2: The process of retrofitting XLM-R into mPMR using multilingual MRC data (English → 10 languages → 24 languages) and our Stochastic Answer Position method. Each row accumulates the modifications from all rows above.

Label         | Sentence 1 | Sentence 2
Entailment    | Rami Nieminen ( born February 25 , 1966 ) is a Finnish footballer. | Rami Nieminen ( born 25 February 1966 ) is a Finnish former footballer.
Contradiction | In 1938 he became the Government Anthropologist of the Egyptian-Anglo Sudan and conducted fieldwork with the Nuba. | In 1938 he became the government anthropologist of the anglo-Egyptian Sudan and led fieldwork with the Nuba.
Entailment    | Stipsits 出生于科尔新堡,并在维也纳施塔莫斯多夫度过了他的童年。 | 什蒂普西奇出生于德国科恩堡,在维也纳斯塔莫斯多夫度过了他的童年。
Contradiction | 纳舒厄白银骑士团队加入了夏季大学联盟,是本市的现役球队。 | Nashua Silver Knights 队是当前夏季联赛的一部分,也是该市的大学体育队。
Entailment    | これらの見方は、福音主義的、清教徒的、プロテスタント的な動きが出現するとともに、しばしば表明されてきました。 | これらの見解は多くの場合、新教徒、清教徒、福音主義者が出現するなかで示されてきた。
Contradiction | 1954 年にスリナムに戻った後、弁護士としてパラマリボに定住した。 | 1954 年、パラマリボに戻ると、彼はスリナムで弁護士として定住しました。

Table 3: Case study on PAWS-X. mPMR can extract rationales that explain sequence-pair classification in multiple languages.
Figure 2: Convergence speed (test-set F1 and training loss) of mPMR base and XLM-R base on WikiAnn.
References

Gábor Berend. 2022. Combating the curse of multilinguality in cross-lingual WSD by aligning sparse contextualized word representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, and Wai Lam. 2021. Multilingual AMR parsing with noisy knowledge distillation. In Findings of the Association for Computational Linguistics: EMNLP 2021.
Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, and Wai Lam. 2022. Retrofitting multilingual sentence embeddings with Abstract Meaning Representation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Iacer Calixto, Alessandro Raganato, and Tommaso Pasini. 2021. Wikipedia entities as rendezvous across languages: Grounding multilingual language models by predicting Wikipedia hyperlinks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Bosheng Ding, Junjie Hu, Lidong Bing, Mahani Aljunied, Shafiq Joty, Luo Si, and Chunyan Miao. 2022. GlobalWoZ: Globalizing MultiWoZ to develop multilingual task-oriented dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning.
Xiaoze Jiang, Yaobo Liang, Weizhu Chen, and Nan Duan. 2022. XLM-K: Improving cross-lingual language model pre-training with multilingual knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence.
Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. 2020. Look at the first sentence: Position bias in question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, and Rui Yan. 2020a. Unsupervised domain adaptation of a pretrained cross-lingual language model. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020.
Xin Li, Lidong Bing, Wenxuan Zhang, Zheng Li, and Wai Lam. 2020b. Unsupervised cross-lingual adaptation for sequence tagging and beyond. arXiv preprint arXiv:2010.12405.
Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, and Luo Si. 2022. Enhancing multilingual language model with massive multilingual knowledge triples. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, and Luo Si. 2021. VECO: Variable and flexible cross-lingual pre-training for language understanding and generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gülşen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016).
Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 2022. mLUKE: The power of entity representations in multilingual pretrained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
Weiwen Xu, Xin Li, Yang Deng, Wai Lam, and Lidong Bing. 2023. PeerDA: Data augmentation via modeling peer relation for span identification tasks. In The 61th Annual Meeting of the Association for Computational Linguistics.
Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Lidong Bing, Wai Lam, and Luo Si. 2022. From clozing to comprehending: Retrofitting pre-trained language model to pre-trained machine reader. arXiv preprint arXiv:2212.04755.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, and Wai Lam. 2021. Cross-lingual aspect-based sentiment analysis with aspect term code-switching. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Meng Zhou, Xin Li, Yue Jiang, and Lidong Bing. 2022a. Enhancing cross-lingual prompting with mask token augmentation. arXiv preprint arXiv:2202.07255.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, and Chunyan Miao. 2023. Improving self-training for cross-lingual named entity recognition with contrastive and prototype learning. In The 61th Annual Meeting of the Association for Computational Linguistics.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022b. ConNER: Consistency training for cross-lingual named entity recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Table 4: Data statistics of the mPMR pre-training data. The statistics are computed after removing low-frequency entities. The number of MRC examples includes both answerable and unanswerable examples.

Dataset        XQuAD  MLQA  TyDiQA  WikiAnn  CoNLL  SemEval16  PAWS-X  XNLI
Query Length   64     64    64      32       32     32         64      64
Input Length   384    384   384     192      192    192        192     192
Batch Size     8      8     8       16       16     32         16      32
Learning Rate  3e-5   3e-5  2e-5    1e-5     1e-5   2e-5       5e-5    3e-5
Epoch          3      3     10      10       10     20         10      3
Table 5: Hyper-parameter settings for fine-tuning on the XLU tasks.

EQA (XQuAD)
Ori.  Question: Who lost to the Broncos in the divisional round? Context: The Broncos defeated the Pittsburgh Steelers in the divisional round, 23-16, by scoring 11 points in the final three minutes of the game. Answer: "Pittsburgh Steelers"
PMR   [CLS] Who lost to the Broncos in the divisional round ? [SEP] [SEP] The Broncos defeated the Pittsburgh Steelers in the divisional round, 23-16 , by scoring 11 points in the final three minutes of the game . [SEP] -> "Pittsburgh Steelers"

NER (CoNLL)
Ori.  Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday. -> ("Japan", LOC); ("Syria", LOC); ("Asian Cup", MISC)
PMR   [CLS] "ORG" . Organization entities are limited to named corporate , governmental , or other organizational entities . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] -> ∅
PMR   [CLS] "PER" . Person entities are named persons or family . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] -> ∅
PMR   [CLS] "LOC" . Location entities are the name of politically or geographically defined locations such as cities , countries . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] -> (32,32) "Japan"; (40,40) "Syria"
PMR   [CLS] "MISC" . Examples of miscellaneous entities include events , nationalities , products and works of art . [SEP] [SEP] Two goals in the last six minutes gave holders Japan an uninspiring 2-1 Asian Cup victory over Syria on Friday . [SEP] -> (34,35) "Asian Cup"

ABSA (SemEval16)
Ori.  Nice ambience, but highly overrated place. -> ("ambience", POS); ("place", NEG)
PMR   [CLS] "POS" . For aspect terms of positive sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] -> (13,13) "ambience"
PMR   [CLS] "NEG" . For aspect terms of negative sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] -> (18,18) "place"
PMR   [CLS] "NEU" . For aspect terms of neutral sentiment . [SEP] [SEP] Nice ambience , but highly overrated place . [SEP] -> ∅

Sentence-pair Classification (PAWS-X)
Ori.  Hypothesis: The Tabaci River is a tributary of the River Leurda in Romania. Premise: The Leurda River is a tributary of the River Tabaci in Romania. -> Contradiction
PMR   [CLS] Contradiction . The hypothesis is a sentence with a contradictory meaning to the premise . [SEP] [SEP] Hypothesis : The Tabaci River is a tributary of the River Leurda in Romania . Premise : The Leurda River is a tributary of the River Tabaci in Romania . [SEP] -> (0,0) "[CLS]"
PMR   [CLS] Entailment . The hypothesis is a sentence with a similar meaning as the premise . [SEP] [SEP] Hypothesis : The Tabaci River is a tributary of the River Leurda in Romania . Premise : The Leurda River is a tributary of the River Tabaci in Romania . [SEP] -> ∅
Table 6: MRC examples of the XLU tasks. We use English examples for demonstration purposes; Ori. indicates the original data format of each task.

Model       en          ar          de          el          es          hi          ru          th          tr          vi          zh          Avg.
XLM-R base  82.2 / 72.0 65.5 / 49.9 73.9 / 59.7 71.2 / 56.3 76.3 / 59.4 66.4 / 52.0 73.7 / 58.9 64.7 / 54.6 67.0 / 52.8 73.3 / 54.7 65.0 / 55.9 70.8 / 56.9
mPMR base   84.4 / 73.4 69.6 / 53.2 76.4 / 61.5 74.9 / 58.4 77.4 / 60.2 69.2 / 54.5 75.2 / 58.8 69.2 / 57.6 70.4 / 55.8 74.8 / 55.8 71.8 / 65.5 74.0 / 59.5
XLM-R       86.5 / 75.6 72.4 / 54.8 79.3 / 63.0 79.2 / 61.6 82.0 / 62.9 76.1 / 59.1 79.0 / 62.9 72.2 / 59.8 75.4 / 60.8 79.7 / 60.8 68.2 / 58.2 77.3 / 61.7
mPMR        87.6 / 76.5 75.9 / 60.0 81.5 / 65.0 80.8 / 63.9 82.8 / 65.1 76.5 / 60.3 80.9 / 65.3 75.5 / 65.5 76.7 / 61.3 81.5 / 62.2 71.5 / 63.4 79.2 / 64.4
Table 7: XQuAD results (F1 / EM) for each language.

Model       en          ar          de          es          hi          vi          zh          Avg.
XLM-R base  79.3 / 67.2 55.4 / 38.1 62.0 / 49.1 66.8 / 50.2 59.4 / 44.8 66.1 / 46.7 61.8 / 39.5 64.4 / 47.9
mPMR base   81.1 / 68.9 58.5 / 41.0 63.6 / 50.5 68.5 / 52.1 60.3 / 46.4 68.3 / 49.2 56.6 / 32.9 65.3 / 48.7
XLM-R       83.4 / 71.0 64.9 / 45.8 69.6 / 54.8 74.1 / 56.8 70.7 / 53.4 73.3 / 53.0 64.4 / 42.4 71.5 / 53.9
mPMR        84.0 / 71.4 66.4 / 47.0 70.3 / 56.2 74.5 / 57.1 71.4 / 54.1 74.7 / 54.4 70.5 / 47.3 73.1 / 55.4
Table 8: MLQA results (F1 / EM) for each language.

Model       en          ar          bn          fi          id          ko          ru          sw          te          Avg.
XLM-R base  66.8 / 57.3 55.7 / 42.0 31.5 / 20.4 52.6 / 40.3 69.1 / 55.6 36.3 / 27.9 54.8 / 36.5 53.0 / 34.7 37.4 / 28.8 50.8 / 38.2
mPMR base   71.1 / 61.6 66.3 / 52.6 56.5 / 41.6 65.5 / 53.1 73.9 / 63.7 50.4 / 38.8 64.4 / 37.9 57.4 / 41.1 65.3 / 50.4 63.4 / 49.0
XLM-R       71.3 / 60.7 69.3 / 52.3 66.2 / 53.1 64.3 / 51.3 76.5 / 62.5 58.3 / 46.7 64.7 / 43.4 68.6 / 53.1 67.3 / 41.1 67.4 / 51.6
mPMR        76.4 / 65.2 76.0 / 58.0 72.3 / 55.8 74.4 / 56.5 84.1 / 71.3 62.2 / 50.7 72.5 / 43.2 76.5 / 63.1 77.7 / 60.8 74.7 / 58.3
Table 9: TyDiQA-GoldP results (F1 / EM) for each language.

Model       en    af    ar    bg    bn    de    el    es    et    eu    fa    fi    fr    he    hi    hu    id    it    ja    jv
XLM-R base  84.2  75.3  47.3  79.0  66.3  77.5  75.3  78.0  69.6  56.0  38.1  70.4  81.4  50.8  67.9  72.4  51.0  79.6  19.6  63.9
mPMR base   85.1  80.7  57.6  80.2  71.9  81.2  77.6  79.5  79.1  71.3  49.6  80.4  82.4  65.2  71.7  82.2  58.6  83.5  43.2  72.0
XLM-R       85.4  81.1  53.9  84.0  73.8  82.3  82.8  80.4  68.8  54.8  64.2  75.9  81.4  59.3  72.9  76.4  59.3  84.6  13.2  71.2
mPMR        86.0  81.7  56.1  85.9  79.6  82.3  82.3  75.5  82.7  69.6  75.2  84.1  82.0  66.5  75.9  84.0  59.9  86.1  49.1  72.4

Model       ka    kk    ko    ml    mr    ms    my    nl    pt    ru    sw    ta    te    th    tl    tr    ur    vi    yo    zh
XLM-R base  58.7  40.6  34.3  50.8  46.0  63.8  40.6  81.5  80.0  65.4  76.1  43.0  46.4  4.2   71.9  68.7  45.7  70.9  1.5   23.0
mPMR base   72.2  45.1  52.9  62.4  59.4  68.1  57.4  83.7  81.5  71.8  77.3  50.5  57.4  3.0   74.2  80.3  55.7  75.2  31.6  49.9
XLM-R       59.9  41.7  41.3  56.8  58.2  76.7  29.6  86.1  85.2  72.2  77.6  52.3  51.6  7.1   78.8  70.9  64.0  80.0  27.2  22.4
mPMR        77.3  46.8  57.9  70.6  68.1  73.8  57.8  86.0  83.6  72.8  79.8  62.6  58.1  3.8   83.0  80.3  76.2  83.6  36.1  54.4
Table 10: WikiAnn results (F1 score) for each language.

Model       en    de    es    nl    Avg.
XLM-R base  91.3  71.0  78.7  75.7  79.2
mPMR base   91.9  74.3  80.8  79.7  81.7
XLM-R       92.8  73.7  81.6  77.7  81.4
mPMR        93.5  75.0  85.0  83.1  84.1
Table 11: CoNLL results (F1 score) for each language.

Model       en    es    fr    nl    ru    tr    Avg.
XLM-R base  76.5  65.4  55.6  61.2  56.1  45.4  60.0
mPMR base   77.6  68.6  56.4  62.2  59.5  48.4  62.1
XLM-R       82.4  71.3  60.3  67.4  61.2  49.1  66.1
mPMR        82.8  71.9  64.7  67.4  66.9  55.7  68.2
Table 12: SemEval16 results (F1 score) for each language.

Model       en    de    es    fr    ja    ko    zh    Avg.
XLM-R base  94.3  87.7  89.1  88.7  77.0  76.6  81.3  85.0
mPMR base   94.3  88.4  90.1  88.9  79.0  79.4  82.4  86.1
XLM-R       95.2  89.3  91.0  90.9  79.6  79.9  82.5  86.9
mPMR        95.2  90.6  90.3  91.3  81.2  82.9  84.6  88.0
Table 13: PAWS-X accuracy scores (Acc.) for each language.

Model       en    ar    bg    de    el    es    fr    hi    ru    sw    th    tr    ur    vi    zh    Avg.
XLM-R base  84.6  71.0  76.8  75.6  74.9  77.9  76.9  68.9  74.1  64.4  71.1  72.4  65.2  73.2  73.0  73.3
mPMR base   84.2  71.5  77.2  75.5  75.5  78.6  76.9  69.5  74.7  62.5  71.4  71.6  65.5  74.3  74.0  73.6
XLM-R       88.2  77.0  81.7  81.2  81.2  84.2  81.7  74.9  78.9  70.8  75.7  77.4  70.6  78.0  77.7  78.6
mPMR        88.3  77.9  82.9  82.2  81.0  83.5  82.2  75.2  79.8  71.2  76.1  78.9  71.6  78.9  79.0  79.3

Table 14: XNLI accuracy scores (Acc.) for each language.
2 The definition/mention article corresponds to the home/reference article of Xu et al. (2022).
3 https://dumps.wikimedia.org/enwiki/latest
4 https://github.com/explosion/spaCy
5 https://github.com/PyThaiNLP/pythainlp
6 https://github.com/alvations/sacremoses
A Appendix

A.1 More Implementation Details

We collect the 2022-08-01 dump3 of Wikipedia articles for the 24 languages in consideration. The statistics of each language can be found in Table 4. Then, for each article, we extract the plain text with anchors via WikiExtractor (Attardi, 2015). Word tokenization is performed using spaCy4 if the language is supported; otherwise, we utilize PyThaiNLP5 for Thai and Sacremoses6 for the remaining languages. For each anchor entity, we construct 10 answerable MRC examples and 10 unanswerable MRC examples as described in Sec. 2.2. Anchor entities with low frequency (below 10 occurrences for English entities and 5 occurrences for entities in other languages) were excluded.

In mPMR, we use Huggingface's implementation of XLM-R (Wolf et al., 2020). During the pre-training stage, the query length Q is set to 50 words and the context length C is set to 200 words; both are computed before subword segmentation. We follow the default learning rate schedule and dropout settings used in XLM-R. We use AdamW (Loshchilov and Hutter, 2019) as our optimizer. We train both mPMR base and mPMR on 4 A100 GPUs. The learning rate is set to 1e-5, and the effective batch size per step is set to 256 and 80 for mPMR base and mPMR, respectively, in order to maximize GPU memory usage. We use the average scores of XQuAD, CoNLL, and PAWS-X to select the best mPMR checkpoint. We continually pre-train mPMR base and mPMR for 250,000 and 100,000 steps, respectively; the training speed is around 6,250 steps per hour. The hyper-parameters of mPMR on the downstream XLU tasks can be found in Table 5.

A.2 Downstream XLU Tasks

We evaluate mPMR on XLU tasks including both span extraction (EQA, NER, and ABSA) and sequence classification (sentence-pair classification). We follow Xu et al. (2022) to convert all tasks into the MRC formulation and tackle them accordingly. We show concrete examples for each task in Table 6. Specifically, we evaluate the performance of EQA on three benchmarks: XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), and TyDiQA (Clark et al., 2020), covering 11, 7, and 9 languages respectively. For NER evaluation, we use the WikiAnn dataset (Pan et al., 2017) restricted to the 40 languages from XTREME (Hu et al., 2020), as well as the CoNLL dataset with 4 languages (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). We also evaluate the XLU performance of SemEval16 ABSA on 6 languages (Pontiki et al., 2016), where we collect the data from Li et al. (2020b); Zhang et al. (2021). Regarding sequence classification, we evaluate XNLI (Conneau et al., 2018) and PAWS-X (Yang et al., 2019) with 15 and 7 languages respectively.
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Giuseppe Attardi. 2015. WikiExtractor. https://github.com/attardi/wikiextractor.
| [
"https://github.com/explosion/spaCy",
"https://github.com/PyThaiNLP/pythainlp",
"https://github.com/alvations/sacremoses"
] |
[
"Entanglement criteria based on local uncertainty relations are strictly stronger than the computable cross norm criterion",
"Entanglement criteria based on local uncertainty relations are strictly stronger than the computable cross norm criterion"
] | [
"Otfried Gühne \nInstitut für Quantenoptik und Quanteninformation\nOsterreichische Akademie der WissenschaftenA-6020InnsbruckAustria\n",
"Mátyás Mechler \nResearch Group for Nonlinear and Quantum Optics\nHungarian Academy of Sciences\nUniversity of Pécs\nIfjúságút 6H-7624PécsHungary\n",
"Géza Tóth \nResearch Institute for Solid State Physics and Optics\nHungarian Academy of Sciences\nP.O. Box 49H-1525BudapestHungary\n\nMax-Planck-Institut für Quantenoptik\nHans-Kopfermann-Straße 1D-85748GarchingGermany\n",
"Peter Adam \nResearch Group for Nonlinear and Quantum Optics\nHungarian Academy of Sciences\nUniversity of Pécs\nIfjúságút 6H-7624PécsHungary\n\nResearch Institute for Solid State Physics and Optics\nHungarian Academy of Sciences\nP.O. Box 49H-1525BudapestHungary\n"
] | [
"Institut für Quantenoptik und Quanteninformation\nOsterreichische Akademie der WissenschaftenA-6020InnsbruckAustria",
"Research Group for Nonlinear and Quantum Optics\nHungarian Academy of Sciences\nUniversity of Pécs\nIfjúságút 6H-7624PécsHungary",
"Research Institute for Solid State Physics and Optics\nHungarian Academy of Sciences\nP.O. Box 49H-1525BudapestHungary",
"Max-Planck-Institut für Quantenoptik\nHans-Kopfermann-Straße 1D-85748GarchingGermany",
"Research Group for Nonlinear and Quantum Optics\nHungarian Academy of Sciences\nUniversity of Pécs\nIfjúságút 6H-7624PécsHungary",
"Research Institute for Solid State Physics and Optics\nHungarian Academy of Sciences\nP.O. Box 49H-1525BudapestHungary"
] | [] | We show that any state which violates the computable cross norm (or realignment) criterion for separability also violates the separability criterion of the local uncertainty relations. The converse is not true. The local uncertainty relations provide a straightforward construction of nonlinear entanglement witnesses for the cross norm criterion. PACS numbers: 03.67.-a, 03.65.UdEntanglement plays a central role in quantum information processing. Thus its characterization is important for the field: It is crucial to be able to decide whether or not a given quantum state is entangled. However, this so-called separability problem remains one of the most challenging unsolved problems in quantum physics.Several sufficient conditions for entanglement are known. The first of such criteria was the criterion of the positivity of the partial transpose (PPT) [1]. This criterion is necessary and sufficient for 2 × 2 and 2 × 3 systems [2], but in higher dimensional systems some entangled states escape the detection. The characterization of these PPT entangled states is thus of great interest. Recently, the computable cross norm (CCN) or realignment criterion was put forward by O. Rudolph [3] and Chen and Wu[4]. The original condition has been reformulated in several ways and extended to multipartite systems[5][6][7]. The CCN criterion allows to detect the entanglement of many states where the PPT criterion fails, however, some states which are detected by the PPT criterion, cannot be detected by the CCN criterion[5]. In this way, one may view the CCN criterion as complementary to the PPT criterion. In addition to the CCN criterion, there are also algorithmic approaches to the separability problem which allow the detection of entanglement when the PPT criterion fails[8].A different approach to the separability problem tries to formulate separability criteria directly in mean values or variances of observables. Typically, these conditions are formulated as Bell inequalities [9], entanglement witnesses [2, 10] or uncertainty relations[11][12][13][14][15][16]. Here, the local uncertainty relations (LURs) by Hofmann and Takeuchi are remarkable[12]. They have a clear physical interpretation and are quite versatile: It has been shown that they can be used to detect PPT entangled states[13]. It is further known that in certain situations they can provide a nonlinear refinement of linear entanglement witnesses[14]. Consequently, the investigation of LURs has been undertaken in several directions[15,16].In this paper we investigate the relation between the CCN criterion and the LURs. We show that any state which can be detected by the CCN criterion can also be detected by a LUR. By providing counterexamples, we prove that the converse does not hold. Our results show that the LURs can be viewed as nonlinear entanglement witnesses for the CCN criterion. In this way, we demonstrate a surprising connection between permutation separability criteria (to which the CCN criterion belongs)[7], criteria in terms of covariance matrices, such as LURs[16,17], and the theory of nonlinear entanglement witnesses[18,19]. Further, in two Appendices we discuss the relation of our constructions to other entanglement witnesses which have been proposed for the CCN criterion and we calculate other nonlinear entanglement witnesses for the CCN criterion[18].Let us start by recalling the definition of separability. 
A quantum state ̺ is called separable, if its density matrix can be written as a convex combination of product states, | 10.1103/physreva.74.010301 | [
"https://arxiv.org/pdf/quant-ph/0604050v2.pdf"
] | 50,971,661 | quant-ph/0604050 | 74221187cd04b7fc7ca26c94858c16e3b58f381e |
Entanglement criteria based on local uncertainty relations are strictly stronger than the computable cross norm criterion
15 Jun 2006
Otfried Gühne
Institut für Quantenoptik und Quanteninformation
Osterreichische Akademie der WissenschaftenA-6020InnsbruckAustria
Mátyás Mechler
Research Group for Nonlinear and Quantum Optics
Hungarian Academy of Sciences
University of Pécs
Ifjúságút 6H-7624PécsHungary
Géza Tóth
Research Institute for Solid State Physics and Optics
Hungarian Academy of Sciences
P.O. Box 49H-1525BudapestHungary
Max-Planck-Institut für Quantenoptik
Hans-Kopfermann-Straße 1D-85748GarchingGermany
Peter Adam
Research Group for Nonlinear and Quantum Optics
Hungarian Academy of Sciences
University of Pécs
Ifjúságút 6H-7624PécsHungary
Research Institute for Solid State Physics and Optics
Hungarian Academy of Sciences
P.O. Box 49H-1525BudapestHungary
Entanglement criteria based on local uncertainty relations are strictly stronger than the computable cross norm criterion
15 Jun 2006 (arXiv:quant-ph/0604050v2)
We show that any state which violates the computable cross norm (or realignment) criterion for separability also violates the separability criterion of the local uncertainty relations. The converse is not true. The local uncertainty relations provide a straightforward construction of nonlinear entanglement witnesses for the cross norm criterion. Entanglement plays a central role in quantum information processing. Thus its characterization is important for the field: It is crucial to be able to decide whether or not a given quantum state is entangled. However, this so-called separability problem remains one of the most challenging unsolved problems in quantum physics.
Several sufficient conditions for entanglement are known. The first of such criteria was the criterion of the positivity of the partial transpose (PPT) [1]. This criterion is necessary and sufficient for 2 × 2 and 2 × 3 systems [2], but in higher dimensional systems some entangled states escape the detection. The characterization of these PPT entangled states is thus of great interest. Recently, the computable cross norm (CCN) or realignment criterion was put forward by O. Rudolph [3] and Chen and Wu [4]. The original condition has been reformulated in several ways and extended to multipartite systems [5][6][7]. The CCN criterion allows to detect the entanglement of many states where the PPT criterion fails, however, some states which are detected by the PPT criterion, cannot be detected by the CCN criterion [5]. In this way, one may view the CCN criterion as complementary to the PPT criterion. In addition to the CCN criterion, there are also algorithmic approaches to the separability problem which allow the detection of entanglement when the PPT criterion fails [8].
A different approach to the separability problem tries to formulate separability criteria directly in mean values or variances of observables. Typically, these conditions are formulated as Bell inequalities [9], entanglement witnesses [2,10] or uncertainty relations [11][12][13][14][15][16]. Here, the local uncertainty relations (LURs) by Hofmann and Takeuchi are remarkable [12]. They have a clear physical interpretation and are quite versatile: It has been shown that they can be used to detect PPT entangled states [13]. It is further known that in certain situations they can provide a nonlinear refinement of linear entanglement witnesses [14]. Consequently, the investigation of LURs has been undertaken in several directions [15,16].
In this paper we investigate the relation between the CCN criterion and the LURs. We show that any state which can be detected by the CCN criterion can also be detected by a LUR. By providing counterexamples, we prove that the converse does not hold. Our results show that the LURs can be viewed as nonlinear entanglement witnesses for the CCN criterion. In this way, we demonstrate a surprising connection between permutation separability criteria (to which the CCN criterion belongs) [7], criteria in terms of covariance matrices, such as LURs [16,17], and the theory of nonlinear entanglement witnesses [18,19]. Further, in two Appendices we discuss the relation of our constructions to other entanglement witnesses which have been proposed for the CCN criterion and we calculate other nonlinear entanglement witnesses for the CCN criterion [18].
Let us start by recalling the definition of separability. A quantum state ̺ is called separable, if its density matrix can be written as a convex combination of product states,
$$\varrho = \sum_k p_k\, \varrho_k^{(A)} \otimes \varrho_k^{(B)}, \qquad (1)$$
where $p_k \ge 0$, $\sum_k p_k = 1$, and $A$ and $B$ denote the two subsystems. The CCN criterion can be formulated in different ways. We use here the formulation given in Corollary 18 of Ref. [3], since it is best suited for our approach. It makes use of the Schmidt decomposition in operator space, by which any density matrix $\varrho$ can be written as
$$\varrho = \sum_k \lambda_k\, G^A_k \otimes G^B_k, \qquad (2)$$
where $\lambda_k \ge 0$ and the $G^A_k$ and $G^B_k$ are orthogonal bases of the observable spaces $\mathcal B(\mathcal H_A)$ and $\mathcal B(\mathcal H_B)$, respectively. Such a basis consists of $d^2$ observables which have to fulfill
$$\mathrm{Tr}(G^A_k G^A_l) = \mathrm{Tr}(G^B_k G^B_l) = \delta_{kl}. \qquad (3)$$
We refer to such observables as local orthogonal observables (LOOs) [20]. For instance, for qubits the (appropriately normalized) Pauli matrices together with the identity form a set of LOOs (see Eq. (12)). Note that, given a set $\{G^A_k\}$ of LOOs, any other set $\{\tilde G^A_l\}$ of LOOs is of the form $\tilde G^A_l = \sum_k O_{lk} G^A_k$, where $O_{lk}$ is a $d^2 \times d^2$ real orthogonal matrix [20].

As for the usual Schmidt decomposition, the $\lambda_k$ are (up to a permutation) unique, and if the $\lambda_k$ are pairwise different, the $G^A_k$ and $G^B_k$ are also unique (up to a sign). The $\lambda_k$ can be computed as in the Schmidt decomposition: first, one decomposes $\varrho = \sum_{kl} \mu_{kl}\, \tilde G^A_k \otimes \tilde G^B_l$ with arbitrary LOOs $\tilde G^A_k$ and $\tilde G^B_l$; then, by performing the singular value decomposition of $\mu_{kl}$, one arrives at Eq. (2). The $\lambda_k$ are the square roots of the eigenvalues of the matrix $\mu\mu^\dagger$.
The CCN criterion states that if $\varrho$ is separable, then the sum of all $\lambda_k$ is smaller than one:
$$\varrho \text{ is separable} \;\Rightarrow\; \sum_k \lambda_k \le 1. \qquad (4)$$
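As a numerical illustration of the recipe above (our own sketch, not part of the paper; the index conventions of the realignment are our assumption), the $\lambda_k$ can be obtained as the singular values of the realigned matrix:

import numpy as np

def ccn_lambdas(rho, d):
    # Schmidt coefficients lambda_k of rho in operator space, obtained as the
    # singular values of the realignment R(rho)_{(m,n),(p,q)} = rho_{(m,p),(n,q)}.
    R = rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)
    return np.linalg.svd(R, compute_uv=False)

# The two-qubit singlet gives sum(lambda_k) = 2 > 1, violating Eq. (4).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
assert abs(ccn_lambdas(rho, 2).sum() - 2.0) < 1e-12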
Hence, if $\sum_k \lambda_k > 1$ the state must be entangled. For states violating this criterion, an entanglement witness can directly be written down. Recall that an entanglement witness $\mathcal W$ is an observable with a positive expectation value on all separable states; hence a negative expectation value signals the presence of entanglement [10]. Given a state in the form (2) which violates the CCN criterion, a witness is given by [21]
$$\mathcal W = \mathbb 1 - \sum_k G^A_k \otimes G^B_k, \qquad (5)$$
since for this state we have $\mathrm{Tr}(\mathcal W \varrho) = 1 - \sum_k \lambda_k < 0$ due to the properties of the LOOs. On the other hand, if $\varrho = \sum_{kl} \mu_{kl}\, G^A_k \otimes G^B_l$ were separable, then $\mathrm{Tr}(\mathcal W \varrho) = 1 - \sum_k \mu_{kk} \ge 1 - \sum_k \lambda_k \ge 0$, since $\sum_k \mu_{kk} \le \sum_k \lambda_k$ due to the properties of the singular value decomposition [22]. It is clear that any state violating the CCN criterion can be detected by a witness of the type (5). Note that other forms of entanglement witnesses for the CCN criterion have also been proposed [6]; we discuss them in Appendix A.
Let us now discuss the LURs. This criterion is formulated as follows: given some non-commuting observables $A_k$ on Alice's space and $B_k$ on Bob's space, one may compute strictly positive numbers $C_A$ and $C_B$ such that
$$\sum_{k=1}^{n} \Delta^2(A_k) \ge C_A, \qquad \sum_{k=1}^{n} \Delta^2(B_k) \ge C_B \qquad (6)$$
holds for all states of Alice and Bob, respectively. Here, $\Delta^2(A) = \langle A^2\rangle - \langle A\rangle^2$ denotes the variance of an observable $A$. Then it can be proved that for separable states
$$\sum_{k=1}^{n} \Delta^2(A_k \otimes \mathbb 1 + \mathbb 1 \otimes B_k) \ge C_A + C_B \qquad (7)$$
has to hold. Any quantum state which violates Eq. (7) is entangled. Physically, Eq. (7) may be interpreted as stating that separable states always inherit the uncertainty relations which hold for their reduced states [23].
To connect the LURs with the CCN criterion, first note that for any LOOs $G^A_k$ the relation
$$\sum_{k=1}^{d^2} \Delta^2(G^A_k) \ge d - 1 \qquad (8)$$
holds. This can be seen as follows. If we choose the $d^2$ LOOs
$$G^A_k = \begin{cases} \tfrac{1}{\sqrt 2}\big(|m\rangle\langle n| + |n\rangle\langle m|\big), & 1 \le k \le d(d-1)/2, \;\; 1 \le m < n \le d,\\ \tfrac{1}{\sqrt 2}\big(i|m\rangle\langle n| - i|n\rangle\langle m|\big), & d(d-1)/2 < k \le d(d-1), \;\; 1 \le m < n \le d,\\ |m\rangle\langle m|, & d(d-1) < k \le d^2, \;\; 1 \le m \le d,\end{cases}$$
one can directly calculate that $\sum_k (G^A_k)^2 = d\,\mathbb 1$ and that $\sum_k \langle G^A_k\rangle^2 = \mathrm{Tr}(\varrho^2) \le 1$ [24]. For general $\tilde G^A_k = \sum_l O_{kl} G^A_l$ we have $\sum_k (\tilde G^A_k)^2 = \sum_{klm} O^T_{lk} O_{km}\, G^A_l G^A_m = d\,\mathbb 1$, since $O$ is orthogonal, and again $\sum_k \langle \tilde G^A_k\rangle^2 = \mathrm{Tr}(\varrho^2) \le 1$. Hence $\sum_k \Delta^2(\tilde G^A_k) \ge d - \mathrm{Tr}(\varrho^2) \ge d - 1$. Similarly, we have for Bob's system
$$\sum_{k=1}^{d^2} \Delta^2(-G^B_k) \ge d - 1, \qquad (9)$$
where the minus sign has been inserted for later convenience.
Combining Eqs. (8) and (9) with the method of the LURs, and using the fact that $\sum_k (G^A_k)^2 = \sum_k (G^B_k)^2 = d\,\mathbb 1$, one can directly calculate that for separable states
$$1 - \sum_k \big\langle G^A_k \otimes G^B_k \big\rangle - \frac{1}{2} \sum_k \big\langle G^A_k \otimes \mathbb 1 - \mathbb 1 \otimes G^B_k \big\rangle^2 \ge 0. \qquad (10)$$
The first, linear part is just the expectation value of the witness (5); from this, further non-negative terms are subtracted. Since any state which violates the CCN criterion can be detected by the witness in Eq. (5), it can also be detected by the LUR in Eq. (10), and we have:

Theorem. Any state which violates the computable cross norm criterion can be detected by a local uncertainty relation, while the converse is not true.
To prove the second statement of the theorem we will later give explicit counterexamples of states which can be detected by a LUR, but not by the CCN criterion. Before doing that, let us add some remarks.
First, the Theorem from above can be interpreted in the following way: while the witness in Eq. (5) is the natural linear criterion for states violating the CCN criterion, the LUR in Eq. (10) is the natural nonlinear witness for these states. The fact that LURs can sometimes be viewed as nonlinear witnesses which improve linear witnesses has been observed before [14]. The theorem, however, proves that the LURs provide in general improvements for witnesses of the type (5). Note that there are other possible nonlinear improvements of these witnesses, as discussed in Appendix B. Second, we have to discuss what happens if the dimensions of the Hilbert spaces $\mathcal H_A$ and $\mathcal H_B$ are not the same. So let us assume that $d_A = \dim(\mathcal H_A) < d_B = \dim(\mathcal H_B)$. Then, in Eq. (2) there are $d_A^2$ different $G^A_k$ and $G^B_k$. The $G^A_k$ already form a set of LOOs for $\mathcal H_A$, and one can find further $d_B^2 - d_A^2$ observables $G^B_k$ to complete the set $\{G^B_k\}$ to a full set of LOOs for $\mathcal H_B$. Using then the LURs with the definition $G^A_k = 0$ for $k = d_A^2 + 1, \ldots, d_B^2$ proves the claim.

Now we present two examples which show that the LURs are strictly stronger than the CCN criterion. First, let us consider a noisy singlet state of the form
$$\varrho_{ns}(p) := p\,|\psi_s\rangle\langle\psi_s| + (1-p)\,\varrho_{sep}, \qquad (11)$$
where the singlet is $|\psi_s\rangle := (|01\rangle - |10\rangle)/\sqrt 2$ and the separable noise is given as $\varrho_{sep} := \tfrac{2}{3}|00\rangle\langle 00| + \tfrac{1}{3}|01\rangle\langle 01|$. Using the PPT criterion one can see that the state is entangled for any $p > 0$. First we check for which values of $p$ the state $\varrho_{ns}$ is detected as entangled by the CCN criterion. It can be seen that $\varrho_{ns}(p)$ violates the CCN criterion for all $p > 0.292$. Now we define the $G^A_k$ and $G^B_k$ as
$$\{G^A_k\}_{k=1}^{4} = \Big\{-\tfrac{\sigma_x}{\sqrt 2},\, -\tfrac{\sigma_y}{\sqrt 2},\, -\tfrac{\sigma_z}{\sqrt 2},\, \tfrac{\mathbb 1}{\sqrt 2}\Big\}, \qquad \{G^B_k\}_{k=1}^{4} = \Big\{\tfrac{\sigma_x}{\sqrt 2},\, \tfrac{\sigma_y}{\sqrt 2},\, \tfrac{\sigma_z}{\sqrt 2},\, \tfrac{\mathbb 1}{\sqrt 2}\Big\}. \qquad (12)$$
These $G^A_k$ and $G^B_k$ are the matrices corresponding to the Schmidt decomposition of $|\psi_s\rangle\langle\psi_s|$. Using Eq. (10) with these LOOs one finds that $\varrho_{ns}$ is detected as entangled by the LURs at least for $p > 0.25$.
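This example can be checked numerically; the following sketch is ours (not part of the paper) and simply evaluates the left-hand side of Eq. (10) with the LOOs of Eq. (12):

import numpy as np

def lur_lhs(rho, GA, GB):
    # Left-hand side of Eq. (10); a negative value certifies entanglement.
    dA, dB = GA[0].shape[0], GB[0].shape[0]
    val = 1.0
    for A, B in zip(GA, GB):
        val -= np.trace(rho @ np.kron(A, B)).real            # <G^A_k (x) G^B_k>
        M = np.kron(A, np.eye(dB)) - np.kron(np.eye(dA), B)  # G^A_k (x) 1 - 1 (x) G^B_k
        val -= 0.5 * np.trace(rho @ M).real ** 2
    return val

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)
GA = [g / np.sqrt(2) for g in (-sx, -sy, -sz, i2)]  # LOOs of Eq. (12)
GB = [g / np.sqrt(2) for g in (sx, sy, sz, i2)]

# Noisy singlet of Eq. (11) at p = 0.3: detected, consistent with p > 0.25.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_ns = 0.3 * np.outer(psi, psi) + 0.7 * np.diag([2 / 3, 1 / 3, 0, 0])
assert lur_lhs(rho_ns, GA, GB) < 0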
For the second example, we consider the $3 \times 3$ bound entangled state defined in [25], mixed with white noise:
$$|\psi_0\rangle = \tfrac{1}{\sqrt 2}\,|0\rangle(|0\rangle - |1\rangle), \quad |\psi_1\rangle = \tfrac{1}{\sqrt 2}\,(|0\rangle - |1\rangle)|2\rangle, \quad |\psi_2\rangle = \tfrac{1}{\sqrt 2}\,|2\rangle(|1\rangle - |2\rangle),$$
$$|\psi_3\rangle = \tfrac{1}{\sqrt 2}\,(|1\rangle - |2\rangle)|0\rangle, \quad |\psi_4\rangle = \tfrac{1}{3}\,(|0\rangle + |1\rangle + |2\rangle)(|0\rangle + |1\rangle + |2\rangle),$$
$$\varrho_{BE} = \tfrac{1}{4}\Big(\mathbb 1 - \sum_{i=0}^{4} |\psi_i\rangle\langle\psi_i|\Big); \qquad \varrho(p) = p\,\varrho_{BE} + (1-p)\,\tfrac{\mathbb 1}{9}.$$
The states $\varrho(p)$ are detected as entangled via the CCN criterion whenever $p > p_{ccn} = 0.8897$. Taking the LUR (10) with the Schmidt matrices of $\varrho(p_{ccn})$ as LOOs, one finds that the states $\varrho(p)$ must already be entangled for $p > p_{lur} = 0.8885$. Thus, the LURs are able to detect states which are detected neither by the CCN criterion nor by the PPT criterion. Note that $\varrho(p)$ is known to be entangled at least for $p > 0.8744$ [6].
In conclusion, we showed that entanglement criteria based on local uncertainty relations are strictly stronger than the CCN criterion. The local uncertainty relations can be viewed as the natural nonlinear entanglement witnesses for the CCN criterion. The question whether there is also such a relation between the PPT criterion and local uncertainty relations is very interesting; we leave this problem for future research.

We thank H.J. Briegel, M. Lewenstein, N. Lütkenhaus, M. Piani and M.M. Wolf for helpful discussions. We acknowledge the support of the European Union (Grant Nos. MEIF-CT-2003-500183 and MERG-CT-2005-029146, OLAQI, PROSECCO, QUPRODIS, RESQ, SCALA), the FWF, the DFG, the Kompetenznetzwerk Quanteninformationsverarbeitung der Bayerischen Staatsregierung and the National Research Fund of Hungary OTKA under contracts T049234 and T043287.

APPENDIX A: CONNECTION TO THE WITNESSES PROPOSED IN REF. [6]
Now we show that the entanglement witness defined in Eq. (5) is identical to the witness defined in Ref. [6], which is based on a different formulation of the CCN criterion. Let us first review the realignment map. For a density matrix $\varrho = \sum_{kl} \mu_{kl}\, G^A_k \otimes G^B_l$, the realigned matrix is given by [3]
$$R(\varrho) := \sum_{kl} \mu_{kl}\, |G^A_k\rangle\langle G^B_l|. \qquad (A1)$$
Here $|G^A_k\rangle$ denotes a column vector obtained from $G^A_k$ by joining its columns consecutively, while $\langle G^B_l|$ denotes the transposition of a column vector obtained similarly from $G^B_l$. $R(\varrho)$ can also be computed by a reordering ("realignment") of the matrix entries of $\varrho$, as explained in Ref. [4]. The CCN criterion states that if $\|R(\varrho)\|_1 > 1$, then $\varrho$ is entangled [3-6]. Here $\|A\|_1$ denotes the trace norm, i.e., the sum of the singular values of the matrix $A$. If $\varrho = \sum_k \lambda_k\, A_k \otimes B_k$ is given in its Schmidt decomposition, we have $R(\varrho) = \sum_k \lambda_k\, |A_k\rangle\langle B_k|$ and $\|R(\varrho)\|_1 = \sum_k \lambda_k$. In this case $R(\varrho)$ is already given in its singular value decomposition. To make this even more transparent, let us define $\Sigma = \mathrm{diag}(\lambda_1, \lambda_2, \ldots)$, $U = [|A_1\rangle, |A_2\rangle, \ldots]$ and $V = [|B_1\rangle, |B_2\rangle, \ldots]$. Then we obtain the decomposition
$$R(\varrho) = U \Sigma V^\dagger.$$
Now we can show that the witness of Eq. (5) can be rewritten using the inverse of $R$. For that we need to observe that $\sum_k A_k \otimes B_k = R^{-1}(\sum_k |A_k\rangle\langle B_k|) = R^{-1}(U V^\dagger)$. Hence the witness of Eq. (5) can be written as
$$\mathcal W = \mathbb 1 - R^{-1}(U V^\dagger). \qquad (A2)$$
Since $R$ realigns the matrix entries, we always have $R^{-1}(X^*) = R^{-1}(X)^*$. Furthermore, since $\sum_k A_k \otimes B_k$ is Hermitian, $R^{-1}(U V^\dagger)$ is also Hermitian. Thus the witness in Eq. (A2) can be written as $\mathcal W = \mathbb 1 - [R^{-1}(U^* V^T)]^T$, which is the witness presented in Ref. [6].
APPENDIX B: MORE NONLINEAR WITNESSES
Recently, a method to calculate nonlinear improvements for a given general witness has been developed [18]. Here, we apply this method to Eq. (5).

To start, we first have to calculate the positive map $\Lambda: \mathcal B(\mathcal H_A) \to \mathcal B(\mathcal H_B)$ corresponding to $\mathcal W$ [26]. This is $\Lambda(\varrho) = \mathrm{Tr}_A[\mathcal W (\varrho^T \otimes \mathbb 1_B)]$, and one can directly see that for $\varrho = \sum_i \alpha_i (G^A_i)^T$ we have $\Lambda(\varrho) = \mathrm{Tr}(\varrho)\,\mathbb 1_B - \sum_i \alpha_i G^B_i$. We can assume without loss of generality that $d\Lambda$ is trace non-increasing; otherwise we rescale $\mathcal W$ to obtain this. According to the Jamiołkowski isomorphism the witness can then be rewritten as
$$\mathcal W = (\mathcal I_A \otimes d\Lambda)(|\phi^+\rangle\langle\phi^+|), \qquad (B1)$$
where $|\phi^+\rangle = \sum_i |ii\rangle/\sqrt d$ is a maximally entangled state on $\mathcal H_A \otimes \mathcal H_A$. Since for LOOs $\sum_i \mathrm{Tr}(G^A_i)\, G^A_i = \mathbb 1$ holds, Eq. (B1) implies that $|\phi^+\rangle\langle\phi^+| = \sum_i G^A_i \otimes (G^A_i)^T / d$. To write down a nonlinear improvement, we can take an arbitrary state $|\psi\rangle \in \mathcal H_A \otimes \mathcal H_A$ with maximal squared Schmidt coefficient $s(\psi)$. Then, defining $X = (\mathcal I_A \otimes d\Lambda)(|\phi^+\rangle\langle\psi|)$, the functional
$$F(\varrho) = \langle \mathcal W \rangle - \langle X \rangle \langle X^\dagger \rangle / s(\psi) \qquad (B2)$$
is a nonlinear improvement of $\mathcal W$ [18].

To give a first example, let us choose an arbitrary unitary $U^A$ on $\mathcal H_A$ and define $|\psi\rangle = (U^A)^\dagger \otimes \mathbb 1\, |\phi^+\rangle$, which implies that $s(\psi) = 1/d$. Then direct calculations lead to the nonlinear witness
$$F(\varrho) = \langle \mathcal W \rangle - d\, \big\langle \mathcal W (U^A \otimes \mathbb 1) \big\rangle \big\langle (U^A \otimes \mathbb 1)^\dagger \mathcal W \big\rangle. \qquad (B3)$$
To give a second example, let us define $|\psi\rangle = \mathbb 1 \otimes (U^A)^\dagger |\phi^+\rangle$. Using the coefficients $\eta_{ij} = \mathrm{Tr}[(G^A_i)^T (G^A_j)^T U^A]$, we can directly calculate that $X = (\mathcal I_A \otimes \Lambda)(\sum_i G^A_i \otimes (G^A_i)^T U^A) = \mathbb 1 - \sum_{ij} G^A_i \otimes \eta_{ij} G^B_j$. Hence,
$$F(\varrho) = \langle \mathcal W \rangle - d\, \Big\langle \mathbb 1 - \sum_{ij} G^A_i \otimes \eta_{ij} G^B_j \Big\rangle \Big\langle \mathbb 1 - \sum_{ij} G^A_i \otimes \eta^*_{ij} G^B_j \Big\rangle$$
is another nonlinear witness, improving the witness in Eq. (5). The structure of these witnesses is quite different from that of the LURs. Thus other nonlinear witnesses can be derived for the CCN criterion which do not coincide with the LURs.
A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A 223, 1 (1996).
O. Rudolph, quant-ph/0202121.
K. Chen and L.-A. Wu, Quantum Inf. Comput. 3, 193 (2003).
O. Rudolph, Phys. Rev. A 67, 032312 (2003).
K. Chen and L.-A. Wu, Phys. Rev. A 69, 022312 (2004).
K. Chen and L.-A. Wu, Phys. Lett. A 306, 14 (2002); M. Horodecki, P. Horodecki, and R. Horodecki, quant-ph/0206008; P. Wocjan and M. Horodecki, Open Syst. Inf. Dyn. 12, 331 (2005); L. Clarisse and P. Wocjan, Quantum Inf. Comput. 6, 277 (2006).
A.C. Doherty, P.A. Parrilo, and F.M. Spedalieri, Phys. Rev. A 69, 022308 (2004); J. Eisert et al., ibid. 70, 062317 (2004); F.G.S.L. Brandao and R.O. Vianna, Phys. Rev. Lett. 93, 220503 (2004).
See, e.g., A. Peres, Found. Phys. 29, 589 (1999).
B.M. Terhal, Phys. Lett. A 271, 319 (2000); M. Lewenstein et al., Phys. Rev. A 62, 052310 (2000); G. Tóth and O. Gühne, Phys. Rev. Lett. 94, 060501 (2005); L.-A. Wu et al., Phys. Rev. A 72, 032309 (2005); F.G.S.L. Brandão, ibid. 72, 022310 (2005).
For nonlinear entanglement criteria see also D. Janzing and Th. Beth, Phys. Rev. A 61, 052308 (2000); L.-M. Duan et al., Phys. Rev. Lett. 84, 2722 (2000); R. Simon, ibid. 84, 2726 (2000); A. Sørensen et al., Nature 409, 63 (2001); G. Tóth, C. Simon, and J.I. Cirac, Phys. Rev. A 68, 062310 (2003).
H.F. Hofmann and S. Takeuchi, Phys. Rev. A 68, 032103 (2003).
H.F. Hofmann, Phys. Rev. A 68, 034307 (2003).
O. Gühne and M. Lewenstein, AIP Conf. Proc. 734, 230 (2004); G. Tóth and O. Gühne, Phys. Rev. A 72, 022340 (2005).
M. Wiesniak, V. Vedral, and C. Brukner, New J. Phys. 7, 258 (2005); S. Samuelsson and G. Björk, Phys. Rev. A 73, 012319 (2006).
O. Gühne, Phys. Rev. Lett. 92, 117903 (2004).
E. Shchukin and W. Vogel, Phys. Rev. Lett. 95, 230502 (2005); J. Rigas, O. Gühne, and N. Lütkenhaus, Phys. Rev. A 73, 012341 (2006); P. Hyllus and J. Eisert, New J. Phys. 8, 51 (2006); A. Miranowicz et al., quant-ph/0605001.
O. Gühne and N. Lütkenhaus, Phys. Rev. Lett. 96, 170502 (2006).
F.A. Bovino et al., Phys. Rev. Lett. 95, 240407 (2005); R. Augusiak, P. Horodecki, and M. Demianowicz, quant-ph/0604109.
S. Yu and N. Liu, Phys. Rev. Lett. 95, 150504 (2005).
This witness has also been derived in a different formulation in Ref. [20].
A similar statement holds for entropic uncertainty relations; O. Gühne and M. Lewenstein, Phys. Rev. A 70, 022316 (2004).
Alternatively, one may also use that for all LOOs $G_i$ and arbitrary $X$ we have $\sum_i G_i X G_i^\dagger = \mathrm{Tr}(X)\,\mathbb 1$. [M. Piani, private communication.]
C.H. Bennett et al., Phys. Rev. Lett. 82, 5385 (1999).
A. Jamiołkowski, Rep. Mat. Phys. 3, 275 (1972); for a review see M. Horodecki, P. Horodecki, and R. Horodecki, quant-ph/0109124.
| [] |
[
"High-temperature behaviour of supported graphene: electron-phonon coupling and substrate-induced doping",
"High-temperature behaviour of supported graphene: electron-phonon coupling and substrate-induced doping"
] | [
"Søren Ulstrup \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n",
"Marco Bianchi \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n",
"Richard Hatch \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n",
"Dandan Guan \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n",
"Alessandro Baraldi \nPhysics Department and CENMAT\nUniversity of Trieste\n34127TriesteItaly\n\nIOM-CNR Laboratorio TASC\nArea Science Park34149TriesteItaly\n",
"Dario Alfè \nDepartment of Earth Sciences\nDepartment of Physics and Astronomy\nLondon Centre for Nanotechnology\nTYC@UCL\nUniversity College London\nGower StreetWC1E 6BTLondonUnited Kingdom\n",
"Liv Hornekaer \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n",
"Philip Hofmann \nDepartment of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark\n"
] | [
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark",
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark",
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark",
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark",
"Physics Department and CENMAT\nUniversity of Trieste\n34127TriesteItaly",
"IOM-CNR Laboratorio TASC\nArea Science Park34149TriesteItaly",
"Department of Earth Sciences\nDepartment of Physics and Astronomy\nLondon Centre for Nanotechnology\nTYC@UCL\nUniversity College London\nGower StreetWC1E 6BTLondonUnited Kingdom",
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark",
"Department of Physics and Astronomy\nInterdisciplinary Nanoscience Center\nAarhus University\n8000Aarhus CDenmark"
] | [] | 1 arXiv:1203.2187v1 [cond-mat.mes-hall] | 10.1103/physrevb.86.161402 | [
"https://arxiv.org/pdf/1203.2187v1.pdf"
] | 28,701,158 | 1203.2187 | b19057cce5868ad73748eaf125de21662ced6755 |
High-temperature behaviour of supported graphene: electron-phonon coupling and substrate-induced doping
9 Mar 2012
Søren Ulstrup
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
Marco Bianchi
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
Richard Hatch
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
Dandan Guan
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
Alessandro Baraldi
Physics Department and CENMAT
University of Trieste
34127TriesteItaly
IOM-CNR Laboratorio TASC
Area Science Park34149TriesteItaly
Dario Alfè
Department of Earth Sciences
Department of Physics and Astronomy
London Centre for Nanotechnology
TYC@UCL
University College London
Gower StreetWC1E 6BTLondonUnited Kingdom
Liv Hornekaer
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
Philip Hofmann
Department of Physics and Astronomy
Interdisciplinary Nanoscience Center
Aarhus University
8000Aarhus CDenmark
High-temperature behaviour of supported graphene: electron-phonon coupling and substrate-induced doping
9 Mar 2012 (Dated: May 5, 2014)
One of the salient features of graphene is the very high carrier mobility that implies tremendous potential for use in electronic devices [1]. Unfortunately, transport measurements find the expected high mobility only in freely suspended graphene [2]. When supported on a surface, graphene shows a strongly reduced mobility, and an especially severe reduction for temperatures above 200 K [3,4].
A temperature-dependent mobility reduction could be explained by scattering of carriers with phonons, but this is expected to be weak for pristine, weakly doped graphene [5,6]. The mobility reduction has therefore been ascribed to the interaction with confined ripples or substrate phonons [3,4,7]. Here we study the temperature-dependent electronic structure of supported graphene by angle-resolved photoemission spectroscopy, a technique that can reveal the origin of the phenomena observed in transport measurements. We show that the electron-phonon coupling for weakly doped, supported graphene on a metal surface is indeed extremely weak, reaching the lowest value ever reported for any material. However, the temperature-dependent dynamic interaction with the substrate leads to a complex and dramatic change in the carrier type and density that is relevant for transport. Using ab initio molecular dynamics simulations, we show that these changes in the electronic structure are mainly caused by fluctuations in the graphene-substrate distance.
Graphene's remarkable transport properties have been one reason for the tremendous interest in this material [8,9] and have been widely studied [10]. Transport measurements give direct access to the quantities that are eventually important for applications, such as the temperature-dependent carrier density and mobility. In such experiments, graphene is typically placed on insulating SiO2 so that the carrier density can be changed by electric field gating. Placing graphene on SiO2, however, has been shown to severely reduce the carrier mobility, especially above 200 K, i.e. for the temperature range relevant for applications [3,4]. This can be improved by choosing a flat and non-polar insulator as a substrate, such as hexagonal boron nitride [11], but the microscopic mechanism of the mobility reduction is not yet well understood. Here we address this issue using a combination of angle-resolved photoemission spectroscopy (ARPES) and ab initio molecular dynamics, techniques that can give detailed information on the system's spectral properties and are thus complementary to transport measurements.
So far, all ARPES investigations of the electron-phonon coupling in graphene have been carried out at a constant, low temperature. The determination of the electron-phonon mass enhancement parameter λ then relies on the observed energy dependence of the electronic self-energy near the Fermi energy E_F. For this approach to be applicable, the sample temperature has to be much lower than the relevant temperature for phonon excitations. For the reported results this is fulfilled with respect to graphene's very high Debye temperature [12], but it might not be fulfilled if the Bloch-Grüneisen temperature sets the relevant temperature scale [13]. For strongly doped graphene (n ≈ 10^13 cm^-2), the electron-phonon scattering was found to be of intermediate strength with λ ≈ 0.2-0.3 [14-16]. For weakly doped graphene, λ appears to be much smaller [17]. Here we employ a different approach to studying the electron-phonon coupling directly, by measuring the temperature-dependent self-energy for graphene supported on a metal surface. This necessitates that ARPES experiments be carried out up to high temperatures, but it does not require assumptions about the relevant temperature scale for phonon excitations (Debye vs. Bloch-Grüneisen). In fact, the determination of the relevant temperature scale for phonon excitations is a byproduct of the analysis. We find that λ is extremely small, such that no temperature-induced mobility reduction would be expected for this system. However, we also find unexpected temperature-induced changes in the electronic structure near the Fermi energy that, in a transport measurement, would entirely dominate the electron-phonon coupling effect.
The temperature-dependent spectral function for graphene supported on Ir(111) is shown in Figure 1. The ARPES measurement of the electronic structure close to the Fermi energy E_F and near the K point of the Brillouin zone is shown for three different temperatures in Figure 1(a)-(c). The characteristic Dirac cone is easily identified, even for the highest temperature of 1300 K. In addition to the main Dirac cone, weak replicas and mini-gaps are evident. These are caused by the interaction with the substrate and the formation of a moiré superstructure [18,19]. Remarkably, these features are clearly discernible even at the highest temperature. As the temperature is increased, several changes can be observed in the electronic structure. The first is the expected broadening of the features that is caused by the electron-phonon coupling. Given the very large temperature range of the measurements, this effect is relatively minor. The second and unexpected effect is a significant change of the doping. At 300 K the Dirac point of graphene is located above the Fermi energy, in agreement with earlier results [18,19], but as the temperature increases, it shifts substantially and is clearly below the Fermi energy at 1300 K. Finally, the band structure at 1300 K does not show the expected Dirac cone-like dispersion; rather, the spectral function around the Dirac point is broadened out, and the situation resembles the observed onset of a gap-opening for disordered graphene [20,21].
For a more detailed analysis of the electron-phonon coupling strength, we determine the linewidth of the momentum distribution curves (MDCs) averaged over binding energies from 250 meV to 550 meV below the Dirac point as a function of temperature. From the average MDC linewidth and the (constant) group velocity v of the band we infer the imaginary part of the self-energy, Σ'' [22], and plot this as a function of temperature in Figure 1(d). In the high-temperature limit, such data can directly yield the electron-phonon coupling strength λ because Σ'' is a linear function of T, independent of the phonon spectrum [23,24]. For a metal, this high-temperature limit is reached for T higher than the Debye temperature Θ_D. For graphene, this limit is not reached in our experiments, and it must also be kept in mind that the relevant temperature scale might not be set by Θ_D but rather by the Bloch-Grüneisen temperature Θ_BG, which could be substantially lower [13]. We thus have to employ the general expression [24,25]:
\Sigma''(T) = \pi \int_0^{\omega_{\max}} \alpha^2 F(\omega') \left[ 1 - f(\omega - \omega', T) + 2 n(\omega', T) + f(\omega + \omega', T) \right] \mathrm{d}\omega' + \Sigma''_0 , \tag{1}
where ω is the hole energy, ω' is the phonon energy, and f(ω, T) and n(ω, T) are the Fermi and Bose-Einstein distribution functions, respectively. Σ''_0 is a temperature-independent offset that accounts for electron-electron and electron-defect scattering. The integral extends over all phonon frequencies in the material. α²F(ω') is the Eliashberg coupling function, which we approximate by a 3D Debye model, i.e.
\alpha^2 F(\omega') = \lambda \, (\omega'/\omega_D)^2 = \lambda \, (\hbar \omega' / k_B \Theta_D)^2 , \tag{2}
for ω' < ω_D and zero elsewhere [26]. A 3D model is chosen in view of the graphene-substrate interactions, but we note that choosing a 2D model does not significantly alter the results.
In the further analysis, the data in Figure 1(d) are fitted using (1) and (2). This implies three fit parameters: Σ''_0, λ and Θ_D. We could choose to eliminate Θ_D from the fit by using an experimentally determined value (e.g. Θ_D = 1495 K [12]). This, however, ignores the possibility that the actually relevant temperature scale is set by the Bloch-Grüneisen temperature rather than the Debye temperature. We therefore choose to keep Θ_D in (2) as a free parameter and emphasise that the resulting Θ_D from the fit is then merely an effective measure of the temperature scale relevant for the electron-phonon scattering. It could be much lower than the actual Debye temperature determined from other experiments. In the fit, Θ_D and λ are strongly correlated through (2) [27]. Figure 1(e) shows a plot of the resulting quality of the fit (χ²) as a function of Θ_D and λ and illustrates this correlation.
We find equally good fits for a wide range of Θ_D and λ along the minimum of the contour, but only for values of Θ_D ≳ 1050 K. For the fit in Figure 1(d) we use the experimentally determined Θ_D of 1495 K [12] and λ = 8.8 × 10⁻⁴.
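For readers who wish to reproduce such a fit, the model of Eqs. (1) and (2) is straightforward to evaluate numerically. The following Python sketch is an illustrative assumption about one way to do so (it is not the authors' code); the hole energy of 0.4 eV mirrors the centre of the averaging window, and the quoted parameter values are the ones given in the text. A least-squares fit of (λ, Θ_D, Σ''_0) to the data of Figure 1(d) could then be performed with, e.g., scipy.optimize.curve_fit.

# Hedged sketch: Im(Sigma)(T) from Eq. (1) with the 3D Debye model of Eq. (2).
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def fermi(w, T):
    return 1.0 / (np.exp(w / (kB * T)) + 1.0)

def bose(w, T):
    return 1.0 / (np.exp(w / (kB * T)) - 1.0)

def im_sigma(T, lam, theta_D, sigma0, omega=0.4, n=2000):
    """Eq. (1) with alpha^2 F from Eq. (2); omega is the hole energy in eV."""
    wD = kB * theta_D                    # Debye energy in eV
    wp = np.linspace(1e-6, wD, n)        # phonon energies omega'
    a2F = lam * (wp / wD) ** 2           # Eliashberg function, Eq. (2)
    integrand = a2F * (1.0 - fermi(omega - wp, T)
                       + 2.0 * bose(wp, T)
                       + fermi(omega + wp, T))
    return np.pi * np.trapz(integrand, wp) + sigma0

print(im_sigma(300.0, lam=8.8e-4, theta_D=1495.0, sigma0=0.0))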
Nevertheless, we can draw several important conclusions. The first is that λ is very small, between 4 × 10⁻⁴ and 2 × 10⁻³. To the best of our knowledge, this is the lowest λ value ever determined for any material. The result is consistent with the theoretical expectation of a vanishing λ near the Dirac point [5], and with a single recent ARPES study for weakly doped graphene on SiC [17]. Most earlier ARPES studies have been carried out for significantly stronger doping and have accordingly found higher λ values [14-16]. The second conclusion is that the actual Debye temperature of graphene, rather than the Bloch-Grüneisen temperature, appears to be the relevant temperature for the electron-phonon scattering. Again, the uncertainty of Θ_D in the fit is large because of the correlation between Θ_D and λ, but the fit is significantly inferior for Θ_D values below 1050 K. Θ_BG, on the other hand, can be estimated to be ≈ 400 K, using the average binding energy of 400 meV below E_D that was used for the extraction of the temperature-dependent data and following Ref. [13].
While the electron-phonon coupling is thus consistent with theoretical expectations, the temperature-dependent changes of the electronic structure are highly unexpected. The most dramatic effect is the change from hole doping at low temperature to electron doping at high temperature. Indeed, if we infer the position of the Dirac point from an extrapolation of the occupied bands, its position changes by more than 250 meV over the temperature range explored here (see Figure 1(f)).
It is tempting to ascribe this behaviour to an increased graphene-substrate interaction at higher temperatures. We have investigated this possibility by temperature-dependent ab initio molecular dynamics calculations. In these calculations, a layer of graphene is placed on a three-layer-thick slab of Ir(111) and the atoms of the graphene and the two topmost Ir layers are allowed to move for 60 ps, keeping track of the electronic degrees of freedom. Such calculations provide us with the average distance between the carbon atoms and the Ir(111) surface atoms and with the electronic structure of the entire system. A fully quantitative comparison with experiment is, however, beyond the accuracy of the calculations: the main reason is that the average distance between graphene and the Ir substrate is probably overestimated by the calculations, which do not include the van der Waals interaction. This is consistent with the difference in doping between ARPES and calculation. The experimental trends are, however, well reproduced, and we believe the essential physics to be captured. Note that a fluctuating doping of graphene at high temperature would lead to a systematic error in our determination of λ because it represents an additional broadening mechanism. This, however, would merely cause the real λ to be even lower than the value we report above.
The observed temperature-dependent changes of the electronic structure are expected to lead to a very complex behaviour in transport measurements, even for a simple metallic substrate without any polar phonon modes. In fact, the contribution of the electron-phonon coupling would be expected to be insignificant with respect to the other changes, which would presumably give rise to a "semiconducting" behaviour caused by a strong decrease of the carrier density between 0 K and 700 K and a "metallic" behaviour above 700 K. Most transport measurements are admittedly limited to a much smaller temperature range, but our results illustrate that the temperature-dependent doping of supported graphene could have a very significant impact on the transport properties.
In conclusion, we have used spectroscopic measurements to show that the electron-phonon coupling for supported graphene can be extremely weak. Nevertheless, strong effects in the temperature-dependent transport properties can be expected due to temperature-dependent doping changes of the graphene. Our results are specifically important for a graphene-metal interface, where the doping of the graphene has important consequences for device operation [28]. But they are not restricted to this type of interface. Graphene on SiO2 is also subject to considerable interface charge transfer [29] and similar effects can be expected. Finally, we note that pristine and suspended graphene could be expected to retain its benign electronic properties up to very high temperatures, as our results suggest that the intrinsic electron-phonon coupling is very weak indeed and thermal fluctuations would hardly affect the DOS.
METHODS
ARPES experiments were carried out at the SGM-3 beamline of the synchrotron radiation source ASTRID [30]. Graphene was prepared on Ir(111) using a well-established procedure based on C2H4 dissociation [31]. The quality of the graphene layer was controlled by low-energy electron diffraction, and its spectral function was measured by ARPES. At low temperature, the photoemission linewidth of the features was found to be similar to published values [18,19]. The temperature measurements were performed with a K-type thermocouple and an infrared pyrometer. The temperature-dependent data were taken such that the sample was heated by a filament mounted behind it. The filament current was pulsed and the data were acquired during the off-part of the heating cycle. The total energy and k resolution during data acquisition were 18 meV and 0.01 Å⁻¹, respectively. The MDC linewidth was determined as the average over an energy range between 250 meV and 550 meV below the Dirac point. Averaging over an energy interval was chosen in order to reduce the experimental uncertainties. The interval limits were chosen such that the lower limit is always more than a typical phonon energy (≈ 200 meV) away from E_F and neither limit is too close to the Dirac point or the crossing points between the main Dirac cone and the replica bands, as this is known to lead to errors in the linewidth determination [32].
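For completeness, the conversion from MDC width to self-energy used above follows the standard Lorentzian-lineshape analysis; the explicit relation below is a textbook result quoted by us for the reader's convenience, not an equation from this paper:

\mathrm{Im}\,\Sigma(\omega, T) \simeq \frac{\hbar v}{2} \, \Delta k(\omega, T) ,

where Δk is the full width at half maximum of the MDC at binding energy ω and v is the group velocity of the band; averaging Δk between 250 meV and 550 meV below the Dirac point yields the data of Figure 1(d).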
The ab initio calculations were performed with the VASP code [33], the projector augmented-wave method [34,35], the Perdew-Burke-Ernzerhof exchange-correlation energy [36], and an efficient extrapolation for the charge density [37]. Single-particle orbitals were expanded in plane waves with a cutoff of 400 eV. We used the NPT ensemble (constant particle number N, pressure P, and temperature T), as recently implemented in VASP [38,39]. For the present slab calculations, we only applied the constant-pressure algorithm to the two lattice vectors parallel to the surface, leaving the third unchanged during the simulation. Adsorption of graphene has been modelled by overlaying a 10 × 10 graphene sheet (200 C atoms) over a 9 × 9 Ir(111) supercell [12] and using a slab of 3 layers where the two topmost layers were allowed to move while the bottom layer was kept fixed.
Molecular dynamics simulations were performed with the Γ point only, at T = 300 K and T = 1000 K. Densities of states were calculated on representative simulation snapshots, using a 16 × 16 × 1 grid of k-points (128 points). The projected densities of states on the carbon atoms were obtained by projecting the Bloch orbitals onto spherical harmonics with l = 1, inside spheres of radius 0.86 Å centered on the C atoms. The PDOS obtained in this way is representative of the density of states due to the p orbitals of the carbon atoms.
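The average graphene-substrate distance plotted in Figure 2(c) is a simple post-processing quantity. Purely as an illustration (this is our hypothetical sketch, not the authors' analysis script), it could be extracted from a VASP trajectory with ASE as follows; the file name and the equilibration cutoff are assumptions, and the topmost Ir layer is identified by taking the 81 highest Ir atoms of the 9 × 9 supercell.

# Hedged sketch: mean C-Ir(111) distance along an MD trajectory (slab along z).
import numpy as np
from ase.io import read

frames = read('XDATCAR', index=':')              # list of Atoms objects

distances = []
for atoms in frames:
    z = atoms.positions[:, 2]
    sym = np.array(atoms.get_chemical_symbols())
    z_c = z[sym == 'C'].mean()                   # mean height of the graphene sheet
    z_ir = np.sort(z[sym == 'Ir'])[-81:].mean()  # topmost Ir layer (9 x 9 = 81 atoms)
    distances.append(z_c - z_ir)

print('mean C-Ir distance (A):', np.mean(distances[100:]))  # skip equilibration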
The DOS of suspended graphene at 0 K was calculated only for the p states, as the s-state contribution around the Fermi energy is very small. The DOS was rescaled such that it could be fitted to the analytical linear density of states per unit cell of isolated graphene near the Dirac point. The same scaling factor was applied to all the calculated density of states data.
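For reference, the analytical linear density of states invoked here is the standard Dirac-cone result (quoted by us for clarity; A_c is the graphene unit-cell area, v_F the Fermi velocity, and spin and valley degeneracy are included):

\rho(E) = \frac{2 A_c}{\pi} \, \frac{|E - E_D|}{(\hbar v_F)^2} .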
FIG. 1: Temperature-dependent electronic structure of graphene on Ir(111) determined by ARPES. (a)-(c) Spectra taken through the K point of the Brillouin zone perpendicular to the Γ-K direction (dashed line in the inset of (a)) for three different temperatures. The wavevector is measured relative to the Dirac point. (d) Imaginary part of the self-energy obtained as an average using the temperature-dependent momentum distribution linewidths between 250 meV and 550 meV below the Dirac point. The solid line is a result of fitting Eq. (1) to the data using the given parameters. (e) χ² value for the fit in (d) as a function of the two fit parameters, the Debye temperature Θ_D and the electron-phonon coupling strength λ. (f) Dirac point energy as a function of temperature, estimated from the extrapolated high-binding-energy dispersion.
Figure 2(a) gives the calculated density of states (DOS) for a freely suspended graphene layer at 0 K. It shows the expected features of a zero-gap semiconductor with the Dirac point energy E_D at the Fermi energy. The electronic structure near E_F is magnified in Figure 2(b) and plotted together with the expected analytical result for a linear dispersion (solid line). The calculated and analytical results virtually coincide near E_F. Small deviations are only discernible for higher absolute binding energies, as the band structure becomes non-linear and the van Hove singularities, visible in Figure 2(a), are approached. Also shown is the calculated DOS at a temperature of 1000 K. Remarkably, the temperature of the graphene has virtually no effect on the DOS. The situation is dramatically different for supported graphene on Ir. Figure 2(c) shows the result of a 60 ps ab initio molecular dynamics calculation, giving the average graphene-Ir distance at 300 K and 1000 K. After some time needed to achieve thermal equilibrium (around 5 ps), the average distance fluctuates around a stable value. The distance fluctuations are much more pronounced at T = 1000 K than at T = 300 K, with the graphene layer coming closer to the Ir substrate. When this happens, the interaction between graphene and the substrate is stronger, resulting in a pronounced shift of E_D towards higher binding energies. This is evident in the insets of Figure 2(c), which show the projected density of states (PDOS) on the carbon atoms for two representative configurations: after 30 ps, the graphene-substrate distance is ≈ 4.2 Å for both temperatures and the resulting PDOS curves are nearly identical. After 45 ps, however, the average distance between the graphene layer and the substrate at 1000 K is reduced from ≈ 4.2 Å to ≈ 3.8 Å, and this results in a strong shift of E_D from 560 meV to 420 meV above E_F. What is more, the PDOS is no longer well-described by the analytical model, contrary to suspended graphene, with the PDOS at E_D now being substantially different from zero. The calculations thus reproduce and explain the ARPES observations: both the pronounced change in doping and the deviation of the spectral function from a simple Dirac cone are caused by fluctuations in the graphene-substrate distance as the temperature is increased. It is tempting to make a more quantitative comparison between experiment and calculations, but such a comparison would exceed the achievable accuracy in the calculations.
FIG. 2: Electronic and geometric structure of suspended and supported graphene determined by ab initio molecular dynamics. (a) Density of states (DOS) of freely suspended graphene. (b) DOS for freely suspended graphene in the vicinity of the Fermi energy for two temperatures. The solid line is the DOS calculated from the analytic, linear dispersion. (c) Average distance between the Ir(111) surface and graphene during 60 ps simulations at 300 K and 1000 K. Insets: snapshots of the projected DOS (PDOS) on the carbon atoms at 30 ps and 45 ps, corresponding to configurations with similar and different graphene-Ir distances for the two temperatures, respectively. The solid lines have the same shape as in (b) but merely serve as a guide to the eye here. The geometry of the system after 45 ps at 1000 K is shown in the lower right corner.
The position of the Dirac point in the calculations for supported graphene was determined by fitting the analytical linear DOS with an offset in binding energy corresponding to the new position of the Dirac point.
[1] P. Avouris, Z. Chen, and V. Perebeinos. Carbon-based electronics. Nature Nanotechnology 2, 605-615 (2007).
[2] K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer. Ultrahigh electron mobility in suspended graphene. Solid State Communications 146, 351-355 (2008).
[3] S. V. Morozov, K. S. Novoselov, M. I. Katsnelson, F. Schedin, D. C. Elias, J. A. Jaszczak, and A. K. Geim. Giant Intrinsic Carrier Mobilities in Graphene and Its Bilayer. Phys. Rev. Lett. 100, 016602 (2008).
[4] J.-H. Chen, C. Jang, S. Xiao, M. Ishigami, and M. S. Fuhrer. Intrinsic and extrinsic performance limits of graphene devices on SiO2. Nature Nanotechnology 3, 206-209 (2008).
[5] M. Calandra and F. Mauri. Electron-phonon coupling and electron self-energy in electron-doped graphene: Calculation of angular-resolved photoemission spectra. Phys. Rev. B 76, 205411 (2007).
[6] K. I. Bolotin, K. J. Sikes, J. Hone, H. L. Stormer, and P. Kim. Temperature-Dependent Transport in Suspended Graphene. Phys. Rev. Lett. 101, 096802 (2008).
[7] S. Fratini and F. Guinea. Substrate-limited electron dynamics in graphene. Phys. Rev. B 77, 195415 (2008).
[8] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov. Electric field effect in atomically thin carbon films. Science 306, 666-669 (2004).
[9] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov. Two-dimensional gas of massless Dirac fermions in graphene. Nature 438, 197-200 (2005).
[10] S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi. Electronic transport in two-dimensional graphene. Rev. Mod. Phys. 83, 407-470 (2011).
[11] C. R. Dean, A. F. Young, I. Meric, C. Lee, L. Wang, S. Sorgenfrei, K. Watanabe, T. Taniguchi, P. Kim, K. L. Shepard, and J. Hone. Boron nitride substrates for high-quality graphene electronics. Nature Nanotechnology 5, 722-726 (2010).
[12] M. Pozzo, D. Alfè, P. Lacovig, P. Hofmann, S. Lizzit, and A. Baraldi. Thermal Expansion of Supported and Freestanding Graphene: Lattice Constant versus Interatomic Distance. Phys. Rev. Lett. 106, 135501 (2011).
[13] D. K. Efetov and P. Kim. Controlling Electron-Phonon Interactions in Graphene at Ultrahigh Carrier Densities. Phys. Rev. Lett. 105, 256805 (2010).
[14] A. Bostwick, T. Ohta, T. Seyller, K. Horn, and E. Rotenberg. Quasiparticle dynamics in graphene. Nature Physics 3, 36-40 (2007).
[15] A. Bostwick, T. Ohta, J. L. McChesney, T. Seyller, K. Horn, and E. Rotenberg. Renormalization of graphene bands by many-body interactions. Solid State Communications 143, 63-71 (2007).
[16] M. Bianchi, E. D. L. Rienks, S. Lizzit, A. Baraldi, R. Balog, L. Hornekaer, and Ph. Hofmann. Electron-phonon coupling in potassium-doped graphene: Angle-resolved photoemission spectroscopy. Phys. Rev. B 81, 041403 (2010).
[17] S. Forti, K. V. Emtsev, C. Coletti, A. A. Zakharov, C. Riedl, and U. Starke. Large-area homogeneous quasifree standing epitaxial graphene on SiC(0001): Electronic and structural characterization. Phys. Rev. B 84, 125449 (2011).
[18] M. Kralj, I. Pletikosić, M. Petrović, P. Pervan, M. Milun, A. T. N'Diaye, C. Busse, T. Michely, J. Fujii, and I. Vobornik. Graphene on Ir(111) characterized by angle-resolved photoemission. Phys. Rev. B 84, 075427 (2011).
[19] I. Pletikosić, M. Kralj, P. Pervan, R. Brako, J. Coraux, A. T. N'Diaye, C. Busse, and T. Michely. Dirac Cones and Minigaps for Graphene on Ir(111). Phys. Rev. Lett. 102, 056808 (2009).
[20] E. Rotenberg, A. Bostwick, T. Ohta, J. L. McChesney, T. Seyller, and K. Horn. Origin of the energy bandgap in epitaxial graphene. Nature Materials 7, 258-259 (2008).
[21] R. Balog, B. Jørgensen, L. Nilsson, M. Andersen, E. Rienks, M. Bianchi, M. Fanetti, E. Laegsgaard, A. Baraldi, S. Lizzit, Z. Sljivancanin, F. Besenbacher, B. Hammer, T. G. Pedersen, P. Hofmann, and L. Hornekaer. Band Gap Opening in Graphene Induced by Patterned Hydrogen Adsorption. Nature Materials 9, 315-319 (2010).
[22] Ph. Hofmann, I. Yu. Sklyadneva, E. D. L. Rienks, and E. V. Chulkov. Electron-phonon coupling at surfaces and interfaces. New Journal of Physics 11, 125005 (2009).
[23] B. A. McDougall, T. Balasubramanian, and E. Jensen. Phonon contribution to quasiparticle lifetimes in Cu measured by angle-resolved photoemission. Phys. Rev. B 51, R13891 (1995).
[24] Ph. Hofmann and J. W. Wells. Surface-sensitive conductance measurements. Journal of Physics: Condensed Matter 21, 013003 (2009).
[25] G. Grimvall. The Electron-Phonon Interaction in Metals. North-Holland (1981).
[26] B. Hellsing, A. Eiguren, and E. V. Chulkov. Electron-phonon coupling at metal surfaces. Journal of Physics: Condensed Matter 14, 5959-5977 (2002).
[27] T. K. Kim, T. S. Sorensen, E. Wolfring, H. Li, E. V. Chulkov, and Ph. Hofmann. Electron-phonon coupling on the Mg(0001) surface. Phys. Rev. B 72, 075422 (2005).
[28] Y. Wu, V. Perebeinos, Y.-M. Lin, T. Low, F. Xia, and P. Avouris. Quantum Behavior of Graphene Transistors near the Scaling Limit. Nano Letters (2012).
[29] H. E. Romero, N. Shen, P. Joshi, H. R. Gutierrez, S. A. Tadigadapa, J. O. Sofo, and P. C. Eklund. n-Type Behavior of Graphene Supported on Si/SiO2 Substrates. ACS Nano 2 (2008).
[30] S. V. Hoffmann, C. Søndergaard, C. Schultz, Z. Li, and Ph. Hofmann. An undulator-based spherical grating monochromator beamline for angle-resolved photoemission spectroscopy. Nuclear Instruments and Methods in Physics Research A 523, 441 (2004).
[31] J. Coraux, A. T. N'Diaye, M. Engler, C. Busse, D. Wall, N. Buckanie, F.-J. Meyer zu Heringdorf, R. van Gastel, B. Poelsema, and T. Michely. Growth of graphene on Ir(111). New Journal of Physics 11, 023006 (2009).
[32] I. A. Nechaev, M. F. Jensen, E. D. L. Rienks, V. M. Silkin, P. M. Echenique, E. V. Chulkov, and Ph. Hofmann. Hole dynamics in a two-dimensional spin-orbit coupled electron system: Theoretical and experimental study of the Au(111) surface state. Phys. Rev. B 80, 113402 (2009).
[33] G. Kresse and J. Furthmüller. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169-11186 (1996).
[34] P. E. Blöchl. Projector augmented-wave method. Phys. Rev. B 50, 17953-17979 (1994).
[35] G. Kresse and D. Joubert. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758-1775 (1999).
[36] J. P. Perdew, K. Burke, and M. Ernzerhof. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 77, 3865-3868 (1996).
[37] D. Alfè. Ab initio molecular dynamics, a simple algorithm for charge extrapolation. Computer Physics Communications 118, 31-33 (1999).
[38] E. Hernández. Metric-tensor flexible-cell algorithm for isothermal-isobaric molecular dynamics simulations. The Journal of Chemical Physics 115, 10282-10290 (2001).
[39] E. R. Hernández, A. Rodriguez-Prieto, A. Bergara, and D. Alfè. First-Principles Simulations of Lithium Melting: Stability of the bcc Phase Close to Melting. Phys. Rev. Lett. 104, 185701 (2010).
| [] |
[] | [
"F Lo Verso \nInstitut für Theoretische Physik II\nDipartimento di Fisica\nHeinrich-Heine-Universität Düsseldorf\nUniversitätsstraße 1D-40225DüsseldorfGermany\n",
"R L C Vink \nInstitut für Theoretische Physik II\nDipartimento di Fisica\nHeinrich-Heine-Universität Düsseldorf\nUniversitätsstraße 1D-40225DüsseldorfGermany\n",
"D Pini \nUniversità di Milano\nVia Celoria 1620133MilanoItaly\n",
"L Reatto \nUniversità di Milano\nVia Celoria 1620133MilanoItaly\n"
] | [
"Institut für Theoretische Physik II\nDipartimento di Fisica\nHeinrich-Heine-Universität Düsseldorf\nUniversitätsstraße 1D-40225DüsseldorfGermany",
"Institut für Theoretische Physik II\nDipartimento di Fisica\nHeinrich-Heine-Universität Düsseldorf\nUniversitätsstraße 1D-40225DüsseldorfGermany",
"Università di Milano\nVia Celoria 1620133MilanoItaly",
"Università di Milano\nVia Celoria 1620133MilanoItaly"
] | [] | We extensively investigated the critical behavior of mixtures of colloids and polymers via the two-component Asakura-Oosawa model and its reduction to a one-component colloidal fluid using accurate theoretical and simulation techniques. In particular the theoretical approach, hierarchical reference theory [Adv. Phys. 44, 211 (1995)], incorporates realistically the effects of long-range fluctuations on phase separation giving exponents which differ strongly from their mean-field values, and are in good agreement with those of the three-dimensional Ising model. Computer simulations combined with finite-size scaling analysis confirm the Ising universality and the accuracy of the theory, although some discrepancy in the location of the critical point between one-component and full-mixture description remains. To assess the limit of the pair-interaction description, we compare one-component and two-component results. | 10.1103/physreve.73.061407 | [
"https://export.arxiv.org/pdf/cond-mat/0603578v1.pdf"
] | 20,089,711 | cond-mat/0603578 | ee5a810ad1dcda8a1de0dae92402db960f4cb8fc |
22 Mar 2006
F Lo Verso
Institut für Theoretische Physik II
Dipartimento di Fisica
Heinrich-Heine-Universität Düsseldorf
Universitätsstraße 1D-40225DüsseldorfGermany
R L C Vink
Institut für Theoretische Physik II
Dipartimento di Fisica
Heinrich-Heine-Universität Düsseldorf
Universitätsstraße 1D-40225DüsseldorfGermany
D Pini
Università di Milano
Via Celoria 1620133MilanoItaly
L Reatto
Università di Milano
Via Celoria 1620133MilanoItaly
Critical behavior in colloid-polymer mixtures: theory and simulation
22 Mar 2006 (Dated: March 23, 2022)
PACS numbers: 61.20.Gy, 64.60.Ak, 82.70.Dd, 61.20.Ja, 05.10.Ln, 02.70.-c, 05.70.Jk, 64.60.Fr
We extensively investigated the critical behavior of mixtures of colloids and polymers via the two-component Asakura-Oosawa model and its reduction to a one-component colloidal fluid using accurate theoretical and simulation techniques. In particular the theoretical approach, hierarchical reference theory [Adv. Phys. 44, 211 (1995)], incorporates realistically the effects of long-range fluctuations on phase separation giving exponents which differ strongly from their mean-field values, and are in good agreement with those of the three-dimensional Ising model. Computer simulations combined with finite-size scaling analysis confirm the Ising universality and the accuracy of the theory, although some discrepancy in the location of the critical point between one-component and full-mixture description remains. To assess the limit of the pair-interaction description, we compare one-component and two-component results.
I. INTRODUCTION
Mixtures of colloids and polymers are exciting fluid systems. This is partly due to their industrial importance, but also to the fundamental physical insights they provide [1,2,3]. An important mechanism governing the phase behavior of colloid-polymer mixtures is the depletion effect, which leads to an effective attraction between the colloids [4,5], where the polymer fugacity may be regarded as the analogue of inverse temperature. In particular, for a sufficiently large size ratio R_g/R_c, where R_g and R_c are respectively the radius of gyration of the polymer and the radius of the colloid, a colloid-polymer mixture exhibits stable fluid-fluid phase separation and a solid crystal phase. Since colloidal particles can be visualized close to single-particle resolution using confocal microscopy, exciting real-space investigations of fluid criticality are nowadays possible. In one such experiment, interface fluctuations were directly visualized [3]. The snapshots given in Ref. 3 show that the interface fluctuations become more pronounced upon approach of the critical point, a consequence of the diverging correlation length. In a more recent experiment, the critical exponent of the correlation length was extracted from real-space data and shown to be compatible with the three-dimensional (3D) Ising exponent [6]. This finding is consistent with earlier colloid-polymer experiments [7,8], whose results were interpreted in terms of renormalization of the Ising critical exponents [9]. Compared to the wealth of results on the overall shape of the phase diagram, little theoretical attention has been given to the critical behavior of colloidal dispersions. While the universality class is not expected to differ from the Ising one, characteristic of simple fluids, the greater flexibility of colloid-colloid interactions with respect to their atomic counterparts may allow the study of several nonuniversal aspects of criticality, which are difficult to bring forth in atomic fluids. Inspired in part by the above-mentioned experiments, the present paper aims to describe fluid criticality in colloid-polymer mixtures theoretically. In particular, we focused on fluid-fluid phase separation and on the critical behavior of the Asakura-Oosawa (AO) model [4,5] via numerical simulation and liquid-state theory. On the simulation side, we considered the fluid-fluid phase diagram both of the AO colloid-polymer mixture and of the one-component fluid of particles interacting via the two-body AO potential for a number of size ratios. The critical behavior of the order parameter and of the isothermal compressibility were also determined in the binary mixture by finite-size scaling techniques.
Despite its simplicity, extracting the critical behavior of the AO model by theoretical means is challenging. Characteristic of the mean-field approximations used until now is the parabolic shape of the binodal, corresponding to β = 1/2, where β is the critical exponent of the order parameter. Since the AO model belongs to the 3D Ising universality class [13,14,15], where β ≈ 0.326 [16], the true binodal is flatter. In addition, mean-field approximations appreciably underestimate the critical polymer fugacity, such that the location of the critical point is not reproduced correctly.
In this paper we studied the critical behavior of the AO pair potential by the hierarchical reference theory (HRT) [17]. Among liquid-state theories, the HRT has the peculiar feature of implementing the renormalization group machinery, which makes it a reliable tool in the study of the critical behavior of fluids, since it takes into account the effect of long range fluctuations on phase separation in a non-perturbative way. This theory is indeed capable of getting arbitrarily close to the fluid-fluid critical point. The present analysis complements an earlier investigation [18], where the stress was more on the overall topology of the phase diagram and the stability of fluid-fluid phase separation with respect to freezing. Here, we devote our attention solely to the fluid critical regime. Notice that we did not make a direct comparison with the experimental results of refs. [7,8], since the mixture considered there has a polymer-colloid size ratio larger than one. A one-component treatment where the polymer degrees of freedom are traced out is then untenable in that case.
The main motivations for this paper are the following: first, by comparing the fluid-fluid phase diagram of the AO mixture with that of the one-component AO fluid with the effective pair interaction, we aim at assessing the accuracy of the latter description as the size ratio is varied. Second, we want to compare the critical behavior of the AO mixture as given by finite-size scaling with the results obtained by applying HRT to the AO pair potential. Once reduced quantities are used, so as to make allowance for the discrepancy between the critical polymer densities given by the one- and the two-component descriptions, we find that the HRT is able to reproduce the asymptotic critical power law for the compressibility and the order parameter of the AO mixture with good accuracy.
The paper is structured as follows: In Sec. IIA the main features of the AO model are recalled. In Sec. IIB a short overview of the HRT is given. Sec. III is devoted to the illustration of the simulation techniques: specifically, Secs. IIIA and IIIB deal with the cumulant intersection and the scaling plots methods for determining the critical temperature and amplitudes respectively. In Sec. IIIC the unbiased scaling algorithm mentioned above is described. A biased version of the algorithm, where one assumes that the universality class in known, is described in Sec. IIID. In Sec. IV our HRT and simulation results for several size ratios are presented and discussed and our conclusions are drawn.
II. INTRODUCTION TO THE SYSTEM AND THEORETICAL METHOD OF ANALYSIS
A. The Asakura-Oosawa model
In this work we considered a binary mixture of colloidal particles and non-adsorbing polymers. Neglecting the degrees of freedom of the individual solvent molecules and of the polymer monomers, it is useful to consider effective potentials between constituents which are pairwise additive. In this context a very simple and basic model was proposed by Asakura and Oosawa [4] (AO model) and independently by Vrij [5]. The colloids are assumed to be hard spheres with radius σ_c/2 and the polymers, with radius of gyration R_g, as inter-penetrating and non-interacting with regard to their mutual interactions. However, the polymers are excluded from the colloids to a certain center-of-mass distance σ_cp. Therefore, with respect to their interaction with colloidal particles, the polymer molecules are assumed to behave as hard spheres of radius σ_p/2 = R_g, the diameter of the colloid-polymer interaction being σ_cp = (σ_c + σ_p)/2. The binary AO model is then represented by a binary mixture characterized by the following potentials:
v_{pp}(r) = 0 , \qquad
v_{cp}(r) = \begin{cases} \infty & r < \sigma_{cp} \\ 0 & r > \sigma_{cp} \end{cases} , \qquad
v_{cc}(r) = \begin{cases} \infty & r < \sigma_c \\ 0 & r > \sigma_c \end{cases} \tag{1}
where r is the distance between two particles. The above potentials define what we, in this paper, shall call the full mixture description of the AO model, in which the degrees of freedom of both the colloids and the polymers are explicitly retained. If the polymer degrees of freedom are traced out, the resulting effective (pair) interaction reads as:
\beta v_{AO}(r) = \begin{cases}
\infty & r < \sigma_c \\
-\dfrac{\pi \sigma_p^3 z_p}{6} \, \dfrac{(1+q)^3}{q^3} \left[ 1 - \dfrac{3r}{2(1+q)\sigma_c} + \dfrac{r^3}{2(1+q)^3 \sigma_c^3} \right] & \sigma_c < r < \sigma_c + \sigma_p \\
0 & r > \sigma_c + \sigma_p
\end{cases} \tag{2}
Here z_p is the fugacity of a pure ideal polymer system and q = σ_p/σ_c. In Fig. 1 we show the interaction potential for several size ratios q. On decreasing q, the attractive well becomes deeper and narrower. We do not describe this model in detail here, since it is well known in the literature (see e.g. [19]) and was previously considered also by some of us [20]. We merely recall that this effective interaction disregards three-body and higher-body terms. Interactions beyond the pair level are actually absent for q = σ_p/σ_c < 0.154, but they are present for larger values of q, so that v_AO becomes less and less accurate as q increases. Despite this limitation and its simplicity, we stress that the AO model is able to reproduce, also at a fairly quantitative level, the main features of the experimental phase diagrams for a large range of size ratios. Finally, we recall that the fluid-fluid demixing becomes stable as q increases and the depletion potential becomes longer ranged [19,21,22].
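Because Eq. (2) is fully explicit, it is easy to tabulate. The short Python sketch below is an illustration we add for convenience (not code from the paper); it expresses the well depth through the polymer reservoir packing fraction η_p^r = π z_p σ_p³/6, so the prefactor −(π σ_p³ z_p/6)(1+q)³/q³ becomes −η_p^r (1+q)³/q³.

# Hedged sketch: the effective AO pair potential of Eq. (2), in units of kT.
import numpy as np

def beta_v_ao(r, q, eta_p_r, sigma_c=1.0):
    """beta*v_AO(r) for size ratio q = sigma_p/sigma_c and reservoir packing eta_p_r."""
    sigma_p = q * sigma_c
    r = np.asarray(r, dtype=float)
    v = np.zeros_like(r)
    v[r < sigma_c] = np.inf                          # hard-sphere core
    well = (r >= sigma_c) & (r < sigma_c + sigma_p)
    x = r[well] / sigma_c
    v[well] = -eta_p_r * (1.0 + q) ** 3 / q ** 3 * (
        1.0 - 3.0 * x / (2.0 * (1.0 + q)) + x ** 3 / (2.0 * (1.0 + q) ** 3))
    return v

# Contact values for the size ratios studied here; the well deepens as q decreases.
for q in (0.8, 0.56, 0.4):
    print(q, beta_v_ao([1.0], q, eta_p_r=0.5)[0])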
We applied HRT to the two-body AO potential, Eq. (2), for different size ratios, thereby completing the theoretical analysis of the critical behavior of this kind of mixture preliminarily investigated in Ref. [20]. The theoretical results are compared with those we obtained via Monte Carlo simulation and finite-size scaling analysis on the full mixture as well as on the effective pair interaction, Eq. (2).
B. Hierarchical Reference Theory
As introduced before, for the theoretical analysis we consider a model fluid of particles interacting via a pairwise additive potential V(r_1, r_2, ..., r_N) = Σ_{i<j} v(|r_i − r_j|), where r_i is the position of a generic particle i. We assume that the two-body potential v(r) is spherically symmetric and results from the sum of a very short-ranged, singular repulsion v_R(r) and a longer-ranged attraction w(r). In the present case w(r) is v_AO(r), i.e., Eq. (2) for r > σ_c. The fluid interacting via v_R(r) alone acts as the unperturbed or reference system, whose properties are considered as known. Here the reference system is the hard-sphere fluid, whose thermodynamics and correlations are accurately described by the Carnahan-Starling equation of state [23] and the Verlet-Weis parametrization of the two-body radial distribution function [24]. We then focus on the perturbation w(r). In HRT [17], w(r) is switched on by taking its Fourier transform w(k) and introducing a parameter Q and a potential w_Q(r), such that its Fourier transform coincides with w(k) for k > Q and is identically vanishing for k < Q. As Q evolves from Q = ∞ to Q = 0, the interaction v_Q(r) = v_R(r) + w_Q(r) goes from the reference part v_R(r) to the full potential v(r). This procedure for turning on the interaction closely resembles that adopted in the momentum-space renormalization group [25,26]. At a given stage of the evolution, the nature of w_Q(r) is such that fluctuations over length scales L larger than 1/Q are suppressed, so that critical fluctuations are recovered only in the limit Q → 0. The corresponding evolution of the Helmholtz free energy and n-body direct correlation functions, from the reference fluid to the fully interacting one, is described by an exact hierarchy of integro-differential equations. Close to a critical point and at large length scales, this hierarchy becomes indeed equivalent to a formulation of the momentum-space renormalization group [27]. However, the HRT hierarchy remains valid also away from criticality and over all length scales, thereby describing also the nonuniversal behavior of the fluid, which depends on the specific features of the microscopic interaction.
The first equation of the hierarchy gives the evolution of the Helmholtz free energy A_Q of the partially interacting system in terms of its two-body direct correlation function in momentum space c_Q(k) and the full perturbation w(k):
\frac{\partial \mathcal{A}_Q}{\partial Q} = -\frac{Q^2}{4\pi^2} \ln\left[ 1 - \frac{\Phi(Q)}{\mathcal{C}_Q(Q)} \right] . \tag{3}
In the equation above, we have set Φ(k) = −w(k)/(k_B T), k_B being the Boltzmann constant and T the absolute temperature, and the quantities \mathcal{A}_Q, \mathcal{C}_Q(k) are linked to A_Q and c_Q(k) by the relations
\mathcal{A}_Q = -\frac{A_Q}{k_B T V} + \frac{1}{2}\rho^2 \left[ \Phi(k=0) - \Phi_Q(k=0) \right] - \frac{1}{2}\rho \int \frac{\mathrm{d}^3 k}{(2\pi)^3} \left[ \Phi(k) - \Phi_Q(k) \right] \tag{4}

\mathcal{C}_Q(k) = c_Q(k) + \Phi(k) - \Phi_Q(k) , \tag{5}
where V is the volume and ρ the number density. These modified free energy and direct correlation function have been introduced in order to remove the discontinuities which appear in A_Q and c_Q(k) at Q = 0 and k = Q, respectively, as a consequence of w_Q(k) itself being discontinuous at k = Q. Physically, they represent the free energy and direct correlation function of the fully interacting fluid as given by a treatment such that the Fourier components of the interaction with wavelengths larger than 1/Q are not really disregarded, but instead are approximately taken into account by mean-field theory. In particular, for Q = 0 the modified quantities coincide with the physical ones, once the fluctuations have been fully included, while for Q = ∞ they give the mean-field expressions of the free energy and direct correlation function.
In order to get a closed set of equations, Eq. (3) has been supplemented with a closure relation for \mathcal{C}_Q(k). This is the point where approximations are introduced into the HRT scheme. As in previous applications, our form of \mathcal{C}_Q(k) has been inspired by liquid-state theories, and it reads
\mathcal{C}_Q(k) = c_{HS}(k) + \lambda_Q \Phi(k) + \mathcal{G}_Q(k) , \tag{6}
where c_{HS}(k) is the direct correlation function of the hard-sphere fluid, and λ_Q, \mathcal{G}_Q(k) are a priori unknown functions of the thermodynamic state and of Q. Specifically, the function \mathcal{G}_Q(k) is determined by the core condition, i.e., the requirement that the radial distribution function g_Q(r) be vanishing for every Q whenever the interparticle separation is less than the hard-sphere diameter σ. λ_Q is adjusted so that \mathcal{C}_Q(k) satisfies the compressibility rule. This constraint gives the reduced compressibility of the fluid as the structure factor evaluated at zero wavevector, and can be expressed in terms of the modified quantities \mathcal{A}_Q, \mathcal{C}_Q(k) as
\mathcal{C}_Q(k=0) = \frac{\partial^2 \mathcal{A}_Q}{\partial \rho^2} . \tag{7}
The compressibility rule (7) plays a fundamental role in the present scheme. In fact, when λ_Q in Eq. (6) is determined via Eq. (7) and the resulting expression for \mathcal{C}_Q(k) is used in Eq. (3), one obtains a partial differential equation for \mathcal{A}_Q which reads
\frac{\partial \mathcal{A}_Q}{\partial Q} = -\frac{Q^2}{4\pi^2} \ln\left[ 1 - \frac{\Phi(Q)}{\mathcal{A}''_Q \, \varphi(Q) + \psi(Q)} \right] , \tag{8}
where we have set
\mathcal{A}''_Q = \partial^2 \mathcal{A}_Q / \partial \rho^2 , \qquad \varphi(k) = \Phi(k)/\Phi(0) , \qquad \psi(k) = c_{HS}(k) + \mathcal{G}_Q(k) - \left[ c_{HS}(0) + \mathcal{G}_Q(0) \right] \varphi(k) . \tag{9}
Eq. (8) is integrated numerically from Q = ∞ down to Q = 0. At each integration step, \mathcal{G}_Q(k) is determined by the core condition g_Q(r) = 0 for 0 < r < σ. This condition acts as an auxiliary equation for determining ψ(k) via Eq. (9). The function \mathcal{G}_Q(k) has been approximated by a fourth-degree polynomial in the interval 0 < r < σ, and the equations for the coefficients were obtained. In order to keep the computational scheme relatively simple, further approximations were introduced in these auxiliary equations, which amount to decoupling the short- and long-range parts of the correlations. The details of this procedure have been given elsewhere [28].
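The overall structure of this numerical integration can be conveyed by a deliberately simplified sketch. Everything below is an illustrative assumption on our part (crude Euler stepping, a frozen ψ, finite-difference density derivatives) rather than the actual scheme of Ref. [28], which treats the auxiliary equations and the stiffness of Eq. (8) much more carefully.

# Hedged sketch: skeleton of the HRT evolution, Eq. (8), on a density grid.
import numpy as np

rho = np.linspace(0.01, 0.9, 200)          # density grid
drho = rho[1] - rho[0]

def evolve(A, Phi, psi, Q_grid):
    """Euler steps of dA/dQ = -(Q^2/4pi^2) ln[1 - Phi(Q)/(A'' phi(Q) + psi(Q, rho))],
    with Q_grid decreasing from a large cutoff towards 0."""
    for i in range(len(Q_grid) - 1):
        Q, dQ = Q_grid[i], Q_grid[i + 1] - Q_grid[i]   # dQ < 0
        A2 = np.gradient(np.gradient(A, drho), drho)   # A''(rho) by finite differences
        arg = 1.0 - Phi(Q) / (A2 * Phi(Q) / Phi(0.0) + psi(Q, rho))
        A = A + dQ * (-(Q ** 2) / (4.0 * np.pi ** 2)
                      * np.log(np.clip(arg, 1e-12, None)))  # clip: crude guard in the two-phase region
    return A   # at small Q: free energy with long-wavelength fluctuations included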
Since in this work we will be mostly concerned with the critical region, it is worthwhile recalling the critical behavior of HRT with the closure relation (6) [17,18]. In the critical region and at small Q, Eq.(8) can be considerably simplified in such a way that it depends just on the long-wavelength limit of the direct correlation function. This in turn is determined by the compressibility rule (7) and the assumption, implicit in Eq. (6), that C Q (k) is always an analytic function of k:
$$\mathcal{C}_Q(k) \sim \frac{\partial^2 \mathcal{A}_Q}{\partial\rho^2} - b\,k^2, \tag{10}$$
where b is a regular function of Q, ρ, and T. The resulting evolution equation is then cast into universal form by suitably rescaling the density and the free energy. The critical behavior of the theory is analyzed in terms of fixed-point functions and the linearized flow of the rescaled free energy in the neighborhood of the fixed points, along the lines of the renormalization group [18]. The resulting critical exponents are correct to first order in the expansion in the parameter ε = 4 − d, d being the dimensionality of the system. In particular, the analytic dependence of the direct correlation function on k, also known as the Ornstein-Zernike ansatz, implies that the critical exponent η is zero in our approximation. For d = 3 one finds [18] γ = 1.378, β = 0.345, δ = 5, α = −0.07, ν = γ/2 = 0.689, where the usual notation for the critical exponents has been used. These exponents satisfy the algebraic relations implied by the scaling of the free energy in the critical region [26]. Below the critical temperature, the theory correctly predicts a diverging compressibility inside the coexistence region. However, the compressibility also diverges on the coexistence boundary, unlike in the real fluid; this is a consequence of the Ornstein-Zernike ansatz (10).

Finally, we observe that, for the AO fluid considered here, the perturbation Φ(k) does not depend on temperature. The role of the inverse temperature is instead played by the packing fraction of the polymer in the reservoir, η_p^r, defined in terms of the polymer fugacity z_p as η_p^r = π z_p σ_p³/6 (see Eq. (2)). For η_p^r above a critical value η_p,cr^r, the AO model phase separates into a colloid-poor phase (the colloidal vapor, with colloid packing fraction η_c^v) and a colloid-rich phase (the colloidal liquid, with colloid packing fraction η_c^l). In this sense, η_p^r is the analogue of inverse temperature in the fluid-vapor transitions of simple fluids. Both the inverse temperature and η_p^r appear in Eq. (8) only as external parameters governing the strength of the interaction, for an atomic fluid and the AO fluid respectively; consequently, this does not imply any substantial change in our treatment (see Ref. 20).
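As a quick numerical sanity check, the d = 3 exponents quoted above can be verified against the standard scaling relations; the snippet below assumes the usual Rushbrooke (α + 2β + γ = 2) and Widom (γ = β(δ − 1)) forms, together with ν = γ/2 implied by η = 0:

```python
# HRT exponents in d = 3, as quoted above
gamma, beta, delta, alpha = 1.378, 0.345, 5.0, -0.07
print(alpha + 2 * beta + gamma)   # ~2 (Rushbrooke equality)
print(beta * (delta - 1))         # ~gamma (Widom relation)
print(gamma / 2)                  # nu = 0.689, since eta = 0 gives nu = gamma/2
```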
III. SIMULATIONS
To test the predictions of HRT, large-scale Monte Carlo (MC) simulations of the AO model have also been performed, using colloid-to-polymer size ratios q = 0.8, q = 0.56 and q = 0.4. The simulations were carried out in the grand canonical ensemble. In this ensemble, the volume V , the colloid fugacity z c , and the polymer fugacity z p are fixed, while the number of particles fluctuates. We use cubic simulation boxes with edge L and periodic boundary conditions in all d = 3 dimensions. The output of the simulations consists of the distribution P L (N c |η r p , z c ), defined as the probability of observing a system containing N c colloids at "temperature" η r p , colloid fugacity z c and box size L. The simulations of the full AO model, in which both the colloids and the polymers are explicitly retained, were performed using the method of Ref. 13. The essential ingredients of this approach are a cluster move [13,14], a reweighting scheme [29], and histogram extrapolation [30,31]. In addition, we performed a number of grand canonical simulations using the effective pair potential of Eq.(2). These simulations, obviously, do not require the cluster move of Ref. 13, and were performed using standard grand canonical MC [32,33].
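For concreteness, the sketch below shows one grand canonical MC move for the effective one-component AO fluid of Eq. (2), using standard Metropolis acceptance rules for insertion and deletion (energies in units of k_B T). It is not the cluster-move algorithm of Ref. 13 used for the full mixture; `v_ao` and the box setup are assumptions for illustration only.

```python
import numpy as np

def dist(a, b, L):
    d = np.abs(a - b)
    d = np.minimum(d, L - d)        # periodic boundary conditions
    return np.linalg.norm(d)

def gc_step(pos, L, z_c, v_ao, rng):
    """One grand canonical insertion/deletion attempt; pos is an (N, 3) array."""
    N = len(pos)
    if rng.random() < 0.5:          # attempt insertion
        new = rng.random(3) * L
        dE = sum(v_ao(dist(new, p, L)) for p in pos)
        if rng.random() < min(1.0, z_c * L**3 / (N + 1) * np.exp(-dE)):
            pos = np.vstack([pos, new])
    elif N > 0:                     # attempt deletion
        i = rng.integers(N)
        dE = -sum(v_ao(dist(pos[i], p, L)) for j, p in enumerate(pos) if j != i)
        if rng.random() < min(1.0, N / (z_c * L**3) * np.exp(-dE)):
            pos = np.delete(pos, i, axis=0)
    return pos
```

Sampling the colloid number over many such sweeps yields the distribution P_L(N_c|η_p^r, z_c) used throughout the FSS analysis.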
One aim of the simulations is to verify to what degree the critical properties of the AO model predicted by HRT are reproduced. Previous simulations have shown that the AO model belongs to the 3D Ising universality class [13,14,15], and that pronounced deviations from mean-field behavior become visible on approaching the critical point. One strong point of HRT lies in its ability to yield non-classical (i.e., non-mean-field) critical exponents, which are expected to resemble more closely simulations and, indeed, experiments [6,7,8]. In this work, we are particularly concerned with the critical behavior of the order parameter and the compressibility in the one-phase region. More precisely, defining t ≡ η_p^r/η_p,cr^r − 1 as the distance from the critical point, the order parameter Δ ≡ (η_c^l − η_c^v)/2 is expected to obey Δ = A t^β, while the (dimensionless) compressibility in the one-phase region χ ≡ v_c(⟨N_c²⟩ − ⟨N_c⟩²)/V is expected to diverge as χ = B(−t)^{−γ}, with critical exponents β and γ, critical amplitudes A and B, and v_c = πσ_c³/6 the volume of a single colloid. Note that this definition of χ differs from the one adopted in Ref. 15 by a factor 1/v_c. The quantity χ is related to the usual reduced compressibility χ_red, i.e., the zero-k value of the structure factor, by χ = η_c χ_red. Preferred values of the critical exponents for the 3D Ising universality class are listed in Table I.
To accurately probe the critical properties, the simulation data is analyzed using a number of finite size scaling (FSS) techniques, which we will briefly outline in what follows. Most notably, part of our analysis is based on recently proposed unbiased scaling algorithms [10,11]. We emphasize here that all FSS algorithms require as input highly accurate MC data, typically for a range of system sizes and temperatures. Since our resources are limited, some compromise is unavoidable. The FSS analysis in this work is therefore limited to the full AO model only; no FSS is performed on the data obtained using the effective pair potential of Eq.(2).
A. Cumulant Intersections (CI)
The cumulant intersection approach [34] is a common FSS method to determine the critical temperature η_p,cr^r from simulation data. Here, the grand canonical distribution P_L(N_c|η_p^r, z_c) is used to measure the cumulant ratio ⟨m²⟩/⟨|m|⟩², with m = N_c − ⟨N_c⟩, as a function of η_p^r for a number of system sizes L. In these simulations, the colloid fugacity is tuned so as to obey the "equal-area" criterion [35,36]. The data from the different system sizes are expected to show a common intersection point, which yields an estimate of the critical polymer reservoir packing fraction η_p,cr^r. For the full AO model with q = 0.8, the results of this procedure can be found in Ref. 13. Additional simulations for q = 0.56 performed in this work, using three distinct system sizes L = 10, 12, 14 (in units of the colloid diameter), show qualitatively similar behavior. The resulting estimates of η_p,cr^r are listed in Table II. Note that the cumulant intersection approach can be applied without prior knowledge of the universality class; in this sense, it is an unbiased method.
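A minimal sketch of this cumulant computation, assuming arrays of sampled colloid numbers N_c are available from grand canonical runs at several state points:

```python
import numpy as np

def cumulant_ratio(samples_Nc):
    """<m^2>/<|m|>^2 with m = N_c - <N_c>, per the cumulant intersection method."""
    m = samples_Nc - samples_Nc.mean()
    return (m ** 2).mean() / np.abs(m).mean() ** 2

# data[L][eta] -> array of N_c samples (assumed to come from simulation output)
# curves = {L: [cumulant_ratio(data[L][eta]) for eta in etas] for L in sizes}
# eta_{p,cr}^r is estimated from the common intersection of the curves over L.
```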
The cumulant intersection method thus requires as input MC data from at least two different system sizes. To verify the consistency of the results, however, a higher number is recommended. For q = 0.4, this requirement exceeded the computational resources available to us, and so no cumulant intersection result for this size ratio is reported (small size ratios q are problematic to simulate due to the high number of polymers these simulations require). The results for q = 0.4 are determined by means of the economical scaling algorithm and scaling plots to be described shortly.
B. Scaling Plots (SP)
According to finite size scaling theory, the order parameter Δ obtained in a finite system of linear dimension L close to criticality shows a systematic L dependence that can be written as Δ = L^{−β/ν} M_0(tL^{1/ν}), with ν the critical exponent of the correlation length and M_0 a scaling function independent of system size [33,37]. This implies that plots of L^{β/ν}Δ versus tL^{1/ν} should collapse onto a single curve. For large tL^{1/ν}, but still within the critical region, this curve should approach the critical power law of the thermodynamic limit, which is most conveniently visualized on a double logarithmic scale: the data are then expected to approach a straight line, with slope β and intercept equal to the critical amplitude A. The method can thus be used to extract critical amplitudes from simulation data, as sketched below. Note that this approach is biased, in the sense that the critical temperature and exponents must be known a priori. A possible strategy is to obtain the critical temperature using the cumulant intersection method and to simply assume 3D Ising universality; for many fluids, especially those with short-ranged interactions, the latter assumption is a safe one. Precisely this strategy was followed in Ref. 15 to obtain the critical amplitude of the order parameter, as well as the compressibility amplitude, for q = 0.8. The scaling plots obtained by applying the same strategy to the q = 0.56 data of this work are qualitatively similar; the resulting estimates of the critical amplitudes are listed in Table II. Note that in these analyses the colloid fugacity was again chosen to fulfill the "equal-area" criterion (see Section III A) and that, for the same reason as before, no estimates for q = 0.4 are reported.
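A sketch of such a scaling plot, assuming 3D Ising exponents and simulation data (t, Δ) for several system sizes L:

```python
import numpy as np
import matplotlib.pyplot as plt

beta, nu = 0.326, 0.630             # 3D Ising exponents, cf. Table I

def collapse(t, delta, L):
    """Rescaled coordinates for the data collapse L^{beta/nu} Delta vs t L^{1/nu}."""
    return t * L ** (1.0 / nu), delta * L ** (beta / nu)

# for L, (t, delta) in data.items():          # data assumed from simulations
#     x, y = collapse(np.asarray(t), np.asarray(delta), L)
#     plt.loglog(x, y, 'o', label=f'L = {L}')
# For large t L^{1/nu} the collapsed data approach A (t L^{1/nu})^beta: a straight
# line on the log-log plot whose slope is beta and whose intercept gives A.
```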
C. Unbiased Scaling (US)
Recently, FSS algorithms were presented that do not require prior knowledge of the critical exponents, nor of the critical temperature [10,11,12,38,39]. Instead, these quantities are outputs, and this may prove valuable if there is doubt regarding the universality class of a system. In fact, these algorithms were inspired by serious doubts raised over the universality class of the restricted primitive electrolyte (which was shown to be that of the 3D Ising model [40]). In case of the AO model, there is no doubt regarding the universality class. The motivation for nevertheless using these new unbiased FSS methods is a different one. As mentioned in Ref. 10, one problem in simulating asymmetric fluids lies in choosing the coexistence chemical potential (or fugacity). A common approach, also adopted by us before, is to use the "equal-area" criterion: the fugacity (of the colloids) is chosen such that P L (N c |η r p , z c ) becomes bimodal, with two peaks of equal area. Away from the critical point, the peaks in P L (N c |η r p , z c ) are well separated: equal area then corresponds to equal pressure in the two phases [39], a necessary condition for phase coexistence. Close to the critical point, however, the peaks in P L (N c |η r p , z c ) strongly overlap, and a separation in terms of equal area becomes rather arbitrary. An alternative procedure is thus desirable.
The unbiased algorithm [10,11] requires as input the grand canonical distribution P_L(N_c|η_p^r, z_c) for at least three different system sizes L, and for η_p^r ranging from the non-critical regime toward the critical point. It is therefore considerably more expensive than the previously discussed methods, which only required data near the critical point. For this reason, we consider q = 0.56 only. Starting with η_p^r far from the critical point in the two-phase region, the cumulant ratio ⟨m²⟩²/⟨m⁴⟩ is plotted as a function of the average colloid packing fraction v_c⟨N_c⟩/V, with symbols defined as before (note that this plot is parameterized by z_c). The resulting curve reveals two minima, located at η_c^− and η_c^+, with respective values Q^− and Q^+ at the minima. Defining the quantities Q_min = (Q^+ + Q^−)/2, x = Q_min ln(4/(e Q_min)), and y = (η_c^+ − η_c^−)/(2Δ), the points (x, y) from the different system sizes should all collapse onto the line y = 1 + x/2. Recall that Δ is the order parameter in the thermodynamic limit at the considered η_p^r, precisely the quantity of interest, which may thus be obtained by fitting until the best collapse onto 1 + x/2 occurs. In the next step, η_p^r is chosen closer to the critical point, the points (x, y) are calculated as before, but this time Δ is chosen such that the new data set joins smoothly with the previous one, yielding an estimate of the order parameter at the new temperature. This procedure is repeated all the way to the critical point, where Δ vanishes, leading to an estimate of η_p,cr^r. Moreover, the procedure also yields y as a function of x. The latter function is universal within a universality class, and for the hard-core square-well (HCSW) fluid it can be found in Ref. 11. Since the AO model belongs to the same universality class, we should arrive at a similar result. This is verified in Fig. 2, which shows y as a function of x obtained in this work, compared to the result of Ref. 11. Note that, for small x, our data correctly approach y = 1 + x/2. More importantly, the scaling function we obtain agrees well with the one obtained in Ref. 11, albeit that our data are "noisier". This comes as no surprise, since the HCSW fluid is much easier to simulate than the AO model, and so higher quality data are more readily generated. The critical value η_p,cr^r is obtained via the best collapse of (t, Δ) onto a power law. The resulting data are plotted in Fig. 6 (triangles), and the corresponding estimate of η_p,cr^r is listed in Table II. The critical exponent β and amplitude A are obtained from the slope and intercept of the simulation data. For the exponent we find β ≈ 0.32, in good agreement with the accepted 3D Ising value; the critical amplitude is listed in Table II.
The unbiased scaling algorithm naturally extends to estimate the coexistence diameter D ≡ (η_c^l + η_c^v)/2 from simulation data [10,12]. In contrast to the order parameter, however, the corresponding scaling function for the diameter is not universal. In other words, a comparison to Ref. 12 similar in spirit to Fig. 2 is not possible in this case. Instead, we simply show in Fig. 3 the output of the scaling algorithm for the q = 0.56 AO model, where we used for η_p,cr^r the value obtained in the previous paragraph. Close to the critical point, the diameter is expected to scale as
$$D = \eta_{c,cr}\left(1 + a_1 t^{2\beta} + a_2 t^{1-\alpha} + a_3 t\right), \tag{11}$$
with t the relative distance from the critical point, η_c,cr the critical colloid packing fraction, and non-universal amplitudes a_i [41]. The curve in Fig. 3 is a fit to the data using the above form, yielding η_c,cr = 0.1736, a_1 = −0.125, a_2 = 1.674, and a_3 = −0.875. The HRT diameter (not shown in the figure) varies more slowly with t, its average slope being about half that of the simulation results in the reduced temperature interval shown in Fig. 3.
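The fit of Eq. (11) can be reproduced with a standard nonlinear least-squares routine; the sketch below assumes arrays t and D from the unbiased scaling output and fixes β and α at their 3D Ising values:

```python
import numpy as np
from scipy.optimize import curve_fit

beta, alpha = 0.326, 0.109          # 3D Ising exponents, cf. Table I

def diameter(t, eta_ccr, a1, a2, a3):
    # Eq. (11): D = eta_{c,cr} (1 + a1 t^{2 beta} + a2 t^{1 - alpha} + a3 t)
    return eta_ccr * (1 + a1 * t ** (2 * beta) + a2 * t ** (1 - alpha) + a3 * t)

# popt, _ = curve_fit(diameter, t, D, p0=[0.17, -0.1, 1.5, -0.9])
# popt[0] estimates eta_{c,cr} (0.1736 for q = 0.56 in the fit quoted above)
```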
D. Economical Scaling (ES)
The authors of the unbiased scaling algorithm have also presented, in Ref. 11, a biased version of their algorithm. If one is prepared to accept 3D Ising universality, the scaling function of the HCSW fluid shown in Fig. 2, which is universal within a universality class, can be used to estimate the order parameter of other systems in that class. To use this approach, MC data of a single system size obtained close to criticality is in principle sufficient. Since we lack the resources to execute the full unbiased scaling algorithm for q = 0.4 and q = 0.8, this economical approach offers an attractive alternative. For q = 0.4, a single simulation was thus performed using system size L/σ c = 10, while for q = 0.8 the data of the largest system of Ref. 13 was used. The triangles in Fig. 7 and Fig. 5 show the corresponding order parameter as function of the distance from the critical point. Here, η r p,cr was obtained from the best collapse of (t, ∆) onto a power law; this also yields the critical amplitude A. Using the value of η r p,cr thus obtained for q = 0.4, a scaling plot was generated to also extract the critical amplitude B. The latter estimate will not be very precise, but seems consistent at least with the trend that smaller size ratios lead to smaller compressibility amplitudes.
Note that at present, no economical algorithm for the coexistence diameter exists, since the corresponding scaling function is not universal in this case [12]. Hence, in Table II, no FSS estimate of η c,cr could be provided for q = 0.4. Instead, we have listed the colloid packing fraction at η r p,cr in the finite system. For q = 0.8, the estimate of Ref. 14 is reported, which was derived using the Bruce-Wilding mixed-field scaling method [42].
IV. RESULTS AND DISCUSSION
As introduced before, in Fig. 1 we present the pair interaction for the different size ratios we focused on, i.e. q = 0.4, 0.56, 0.8. Starting from the effective interaction which describes the AO mixtures, we studied the critical behavior by HRT and compared our results with simulations on the same effective interaction. In addition, we compared our analysis of the critical behavior with simulations performed on the full, two-component mixture.
We first considered the whole coexistence region in order to assess the limits entailed by a description of this mixture in terms of an effective pair interaction, as well as by the particular HRT scheme which we used. In Ref. [20], some of us showed that HRT yields a very good account of the fluid-fluid coexistence curve and of its stability with respect to solid phases for several q (q = 0.25, 0.4, 0.6, 0.8). Thanks to its renormalization-group structure, the accuracy of HRT in the critical region is remarkable compared to other approximate theories, such as integral equations or perturbative methods. In Fig. 4 we show both the data we obtained with HRT and MC simulation on the pair potential v_AO(r), and the results of our study with MC simulation on the full mixture. There are two main features which emerge from the comparison: on the one hand, the agreement between theory and simulation results on the pair interaction is quite good, and increases on increasing the size ratio, because the theory is tailored, at the present stage, to relatively long-ranged interactions (the apparent discrepancy in this trend, central panel, is due to the smaller number of MC data close to η_c,cr for q = 0.56 than for q = 0.4). On the other hand, the agreement between the effective pair-interaction description and the full binary mixture increases on decreasing the size ratio, both for the overall coexistence curve and for the critical packing fractions η_p,cr^r and η_c,cr. This is due to a deficiency of the effective pair potential, which is more appropriate for describing colloid-polymer mixtures at small q. In general, the full AO mixture appears to be much more sensitive to a change in q than its one-component representation in terms of the effective pair interaction v_AO(r). This is especially true for η_p,cr^r. In fact, most of the deviations of the HRT critical parameters from the simulation results for the full mixture are not due to the HRT approximation, but to the modeling of the mixture in terms of a pair interaction, as one can appreciate by comparing the HRT coexistence curves with the simulation results for the pair-interaction model. In Table III, the HRT results for the critical packing fractions η_p,cr^r and η_c,cr are compared to the mean-field values. We recall that the mean-field approximation is obtained by setting Q = ∞ in Eq. (4); this corresponds to evaluating the excess free energy with respect to the hard-sphere gas a la van der Waals as
$$A - A_{HS} = \frac{1}{2}\rho^2 V \int d^3r\, v_{AO}(r).$$
The main purpose of this paper is to study in detail the critical behavior of the AO system with simulations and theory and to test the accuracy of HRT in reproducing nontrivial critical exponents.

TABLE II: Summary of our FSS results for the full-mixture AO model at various size ratios q. Listed are the critical temperature η_p,cr^r, the critical amplitude A of the order parameter, the critical amplitude B of the compressibility in the one-phase region, and the critical colloid packing fraction η_c,cr. Also indicated is the FSS method that was used to obtain the estimate: cumulant intersection (CI), scaling plot (SP), unbiased scaling (US), and economical scaling (ES). The ⋆ symbol marks the estimate we believe to be the most reliable, in case multiple values are provided.

q      η_p,cr^r                   A                  B                       η_c,cr
0.56   0.6256 ± 0.0001 ⋆ (US)
0.4    0.52154 ± 0.0001 (ES)      0.47 ± 0.01 (ES)   0.045 ± 0.003 (ES+SP)   ∼ 0.21 (b)

(a) Taken from Ref. 14 and listed here for completeness.
(b) This is no FSS estimate!
The discrepancy between the HRT critical point and the value obtained via finite-size scaling on the AO binary mixture is then at least partially accounted for by adopting the reduced "temperature" variable |η_p^r/η_p,cr^r − 1|, as customary in the study of critical behavior. Moreover, such a discrepancy is not expected to affect the universal features of the transition. We then focus on the critical behavior of the AO model, in particular on the study of the order parameter, compressibility, and correlation function. In Fig. 5 we present the HRT data for the difference between the reduced densities of the colloid on the "liquid" and "vapor" branches of the coexistence curve, Δ = (η_c^l − η_c^v)/2, as a function of the reduced "temperature" t = η_p^r/η_p,cr^r − 1 for q = 0.8. A linear fit of our results for log(t) < −4 gives a power-law behavior with β = 0.37. We recall that the asymptotic value of the exponent β predicted by HRT is obtained by linearizing the renormalization-group flow induced by the HRT evolution equation (3) in the neighborhood of the critical point, and it is given by β = 0.345 [17]. The theoretical results are compared with the asymptotic 3D-Ising power law Δ ∼ A t^β obtained from finite-size scaling; the critical exponent β of the finite-size scaling analysis coincides with the generally accepted value β = 0.324 (see Sec. III). The same figure shows the data obtained by simulating the full mixture. The comparison with the mean-field results (dotted curve) evidences how HRT is able to reproduce the non-trivial critical behavior of the mixture. In Fig. 6 and Fig. 7 we present the same analysis for different size ratios. According to both HRT and finite-size scaling, the critical amplitude of the coexistence region increases on decreasing q, as a consequence of the coexistence curve becoming flatter (see Fig. 4). However, this trend is stronger for the binary system than for the one-component fluid, as can be inferred by comparing the values reported in Table II with the amplitudes obtained by the linear fit of the HRT results, A = 0.41, A = 0.52, and A = 0.56 for q = 0.8, q = 0.56, and q = 0.4 respectively. As a consequence, on the right-hand side of the reduced temperature axis, HRT overestimates the finite-size scaling data for q = 0.8, while it underestimates them for q = 0.4. The overall agreement remains nevertheless quite good, especially if one considers that the theory contains no free parameters. Far from the critical point we recover the mean-field trend.

In Fig. 8, Fig. 9, and Fig. 10 we consider the behavior of the reduced compressibility χ_red of the colloid on the critical isochore in the one-phase region. We report both the HRT results and the asymptotic power law obtained from finite-size scaling, χ_red ∼ B′|t|^{−γ}, γ = 1.239 [15], B′ = B/η_c,cr with B given in Table II. We recall the HRT result γ = 1.378, about 10% larger than the correct one [17]. A linear fit of the data shown in the figures for log(t) < −4 gives γ = 1.37.
The effect of the interaction range on the critical amplitude of the compressibility is weaker than on the order parameter, and is again more evident in the finite-size scaling results for the binary mixture than in the one-component fluid. In fact, the data reported in Table II show a weak decrease of the critical amplitude on decreasing q, while the amplitude of the HRT compressibility is hardly affected at all in the range of q considered here. Therefore, as q is decreased, the finite-size scaling line moves below the HRT points. Also in this case we found good overall agreement for each q. In the same figures we have plotted also the HRT results for the correlation length ξ which governs the decay of the correlations near the critical point. The divergence of ξ on the critical isochore asymptotically close to the critical point is described by the power law ξ ∼ C|t| −ν . The current implementation of HRT necessarily gives a vanishing critical exponent η. Therefore, the critical exponent relation between γ and ν becomes just ν = γ/2. As a general remark, we observe that, while the FSS curves represent the asymptotic power-law behavior and therefore give a straight line on the log-log plots of Fig. 5-Fig. 10, the HRT results do not, since in the crossover region the corrections to the asymptotic scaling are important. These corrections are nonuniversal, and depend on the size ratio q.
It is worthwhile mentioning that the critical behavior of a colloid-polymer mixture of sterically stabilized silica spheres and polydimethylsiloxane dispersed in cyclohexane has been investigated experimentally [7,8]. The isothermal compressibility, the correlation length [7], the order parameter, and the interfacial tension [8] of the colloid were measured as a function of the polymer concentration, and the critical exponents γ, β, ν were found to be slightly higher than the asymptotic Ising values. This was explained in terms of exponent renormalization by the factor 1/(1 − α), α being the critical exponent of the specific heat [9]. On the other hand, exponent renormalization is not found here, where the quantity driving criticality is the reservoir density of the polymer, or equivalently its chemical potential. This is a field analogous to temperature in thermal systems, and does not cause renormalization.
In summary, we have carried out a study of the critical behavior of a simple model of colloid-polymer mixtures by both MC simulations and HRT. Critical fluctuations in colloidal systems are expected to have a strong effect on phase separation, e.g., lowering the free energy barrier for nucleation of the solid in the fluid phase [44]. More generally, many examples show the importance of fully understanding the universality class of the systems under examination. In a previous paper [15], one of us studied the interfacial tension of the AO model, which gives an indication of the strength of capillary waves. The mean-field and 3D Ising behaviors of the capillary strength are profoundly different, so it is important to reproduce the universality class correctly. The importance of an accurate theoretical approach for this study, where the ability of HRT to describe criticality realistically can be very helpful, also stems from the numerical effort and time cost necessary to simulate a full colloid-polymer mixture.
Acknowledgments This work is funded in part by the Marie Curie program of the European Union, contract number MRTN-CT2003-504712. Support from the Deutsche Forschungsgemeinschaft under the SFB-TR6 (project sections A5 and D3) is also acknowledged. We thank K. Binder and J. Horbach for many stimulating discussions.
FIG. 1: Plots of the effective colloid-colloid pair potential, given by Eq. (2), as a function of the pair separation r, for several colloid-to-polymer size ratios q.
FIG. 2: Scaling function of the order parameter obtained using the unbiased scaling algorithm. Following the convention of Ref. 11, the scaling function is raised to a negative exponent, with φ = 1/β. Open circles show results obtained in this work for the AO model with q = 0.56; the solid curve is the HCSW result of Ref. 11. Also shown is the exact small-x limiting form y = 1 + x/2.
FIG. 3: Coexistence diameter of the q = 0.56 AO model as a function of the relative distance from the critical point, where η_p,cr^r = 0.6256 was used. Open circles show simulation results; the closed circle is our estimate of η_c,cr obtained by fitting the simulation data to Eq. (11); the curve shows the fit itself.
FIG. 4: Coexistence curve of the AO model for several colloid-to-polymer size ratios q. Closed circles are simulation results of the full-mixture description, where the crosses mark the location of the critical point obtained using FSS. The triangles are simulation results obtained using the effective pair potential of Eq. (2). The open circles are the theoretical predictions obtained using HRT. Lines connecting the points serve to guide the eye. The agreement between the pair-interaction and full-mixture pictures increases on decreasing the size ratio. On the contrary, the accuracy of HRT with respect to the simulation results on v(r) increases on increasing q.
FIG. 5: Coexistence curve for q = 0.8 close to the critical point on double logarithmic scales (reduced units). The quantities η_c^v and η_c^l are the colloid packing fractions on the low- and high-density branches of the coexistence curve, respectively. The triangles are data obtained in MC simulations of the full-mixture AO model, extrapolated to the thermodynamic limit using FSS. The dashed line is a fit to the simulation data using the critical power law Δ = A(η_p^r/η_p,cr^r − 1)^β, assuming the 3D Ising value for β, with fit parameters η_p,cr^r and A taken from Table II. Closed circles show the HRT result; the dotted line represents the mean-field result.
FIG. 6: Coexistence curve for q = 0.56 (reduced units). Symbols defined as in Fig. 5.

FIG. 7: Coexistence curve for q = 0.4 close to the critical point (reduced units). Symbols defined as in Fig. 5.
FIG. 8: Reduced compressibility χ_red (upper curves) and correlation length ξ (lower curves) for the AO fluid with q = 0.8.
FIG. 9: Reduced compressibility χ_red (upper curves) and correlation length ξ (lower curves) for the AO fluid with q = 0.56.
FIG. 10: Reduced compressibility χ_red (upper curves) and correlation length ξ (lower curves) for the AO fluid with q = 0.4.
TABLE I: Preferred values of the critical exponents of the specific heat (α), order parameter (β), compressibility (γ), and correlation length (ν) for the 3D Ising model [16].

α       β       γ       ν
0.109   0.326   1.239   0.630
TABLE III: Critical temperature η_p,cr^r and critical colloid packing fraction η_c,cr determined with HRT and mean-field (MF) theory for the different size ratios.

          q      η_p,cr^r   η_c,cr
HRT       0.8    0.4825     0.1895
          0.56   0.4679     0.2204
          0.4    0.4404     0.2257
MF        0.8    0.4129     0.1304
          0.56   0.4086     0.1304
          0.4    0.3718     0.1304
W. Poon, Science 304, 830 (2004).
A. Imhof and J. K. G. Dhont, Phys. Rev. Lett. 75, 1662 (1995).
D. G. A. L. Aarts, M. Schmidt, and H. N. W. Lekkerkerker, Science 304, 847 (2004).
S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954); S. Asakura and F. Oosawa, J. Polym. Sci. 33, 183 (1958).
A. Vrij, Pure Appl. Chem. 48, 471 (1976).
C. P. Royall, D. G. A. L. Aarts, and H. Tanaka, in preparation (2006).
B. H. Chen, B. Payandeh, and M. Robert, Phys. Rev. E 62, 2369 (2000).
B. H. Chen, B. Payandeh, and M. Robert, Phys. Rev. E 64, 042401(R) (2001).
M. E. Fisher, Phys. Rev. 176, 257 (1968); W. F. Saam, Phys. Rev. A 2, 1461 (1970).
Y. C. Kim, M. E. Fisher, and E. Luijten, Phys. Rev. Lett. 91, 065701 (2003).
Y. C. Kim and M. E. Fisher, Comput. Phys. Commun. 169, 295 (2005).
Y. C. Kim, Phys. Rev. E 71, 051501 (2005).
R. L. C. Vink and J. Horbach, J. Chem. Phys. 121, 3253 (2004).
R. L. C. Vink and J. Horbach, J. Phys.: Condens. Matter 16, S3807 (2004).
R. L. C. Vink, J. Horbach, and K. Binder, Phys. Rev. E 71, 011401 (2005).
M. E. Fisher and S.-Y. Zinn, J. Phys. A: Math. Gen. 31, L629 (1998).
A. Parola and L. Reatto, Adv. Phys. 44, 211 (1995).
A. Parola and L. Reatto, Phys. Rev. A 31, 3309 (1985).
J. M. Brader, R. Evans, and M. Schmidt, Mol. Phys. 101, 3349 (2003).
F. Lo Verso, D. Pini, and L. Reatto, J. Phys.: Condens. Matter 17, 771 (2005).
A. P. Gast, C. K. Hall, and W. B. Russel, J. Colloid Interface Sci. 96, 251 (1983).
H. N. W. Lekkerkerker, W. C. K. Poon, P. N. Pusey, A. Stroobants, and P. B. Warren, Europhys. Lett. 20, 559 (1992).
See, for instance, J. P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic Press, London, 1986).
L. Verlet and J. J. Weis, Phys. Rev. A 5, 939 (1972); D. Henderson and E. W. Grundke, J. Chem. Phys. 63, 601 (1975).
See, for instance, K. G. Wilson and J. B. Kogut, Phys. Rep. C 12, 75 (1974).
See, for instance, M. E. Fisher, Critical Phenomena, Lecture Notes in Physics, Vol. 186, edited by F. J. W. Hahne (Springer, Berlin, 1982).
F. Nicoll and T. S. Chang, Phys. Lett. 62A (1977).
A. Meroni, A. Parola, and L. Reatto, Phys. Rev. A 42, 6104 (1990); M. Tau, A. Parola, D. Pini, and L. Reatto, Phys. Rev. E 52, 2644 (1995).
P. Virnau and M. Müller, J. Chem. Phys. 120, 10925 (2004).
A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 61, 2635 (1988).
A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 63, 1195 (1989).
D. Frenkel and B. Smit, Understanding Molecular Simulation (Academic Press, San Diego, 2001).
D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (Cambridge University Press, Cambridge, 2000).
K. Binder, Z. Phys. B: Condens. Matter 43, 119 (1981).
K. Binder and D. P. Landau, Phys. Rev. B 30, 1477 (1984).
C. Borgs and R. Kotecky, Phys. Rev. Lett. 68, 1734 (1992).
M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics (Clarendon Press, Oxford, 1999).
Y. C. Kim and M. E. Fisher, Phys. Rev. E 68, 041506 (2003).
G. Orkoulas, M. E. Fisher, and A. Z. Panagiotopoulos, Phys. Rev. E 63, 051507 (2001).
E. Luijten, M. E. Fisher, and A. Z. Panagiotopoulos, Phys. Rev. Lett. 88, 185701 (2002).
Y. C. Kim, M. E. Fisher, and G. Orkoulas, Phys. Rev. E 67, 061506 (2003).
A. D. Bruce and N. B. Wilding, Phys. Rev. Lett. 68, 193 (1992).
Y. C. Kim and M. E. Fisher, J. Phys. Chem. B 108, 6750 (2004).
P. R. ten Wolde and D. Frenkel, Science 277, 1975 (1997).
[
"Local Explanation of Dialogue Response Generation",
"Local Explanation of Dialogue Response Generation"
] | [
"Yi-Lin Tuan [email protected] \nUniversity of California\nSanta Barbara\n",
"Connor Pryor [email protected] \nUniversity of California\nSanta Cruz\n",
"Wenhu Chen [email protected] \nUniversity of California\nSanta Barbara\n",
"Lise Getoor [email protected] \nUniversity of California\nSanta Cruz\n",
"William Yang Wang [email protected] \nUniversity of California\nSanta Barbara\n"
] | [
"University of California\nSanta Barbara",
"University of California\nSanta Cruz",
"University of California\nSanta Barbara",
"University of California\nSanta Cruz",
"University of California\nSanta Barbara"
] | [] | In comparison to the interpretation of classification models, the explanation of sequence generation models is also an important problem, however it has seen little attention. In this work, we study model-agnostic explanations of a representative text generation task -dialogue response generation. Dialog response generation is challenging with its open-ended sentences and multiple acceptable responses. To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG), that regards the explanations as the mutual interaction of segments in input and output sentences. LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response. We show that LERG adheres to desired properties of explanation for text generation, including unbiased approximation, consistency, and cause identification. Empirically, our results show that our method consistently improves other widely used methods on proposed automatic-and human-evaluation metrics for this new task by 4.4-12.8%. Our analysis demonstrates that LERG can extract both explicit and implicit relations between input and output segments. 1 | null | [
"https://arxiv.org/pdf/2106.06528v2.pdf"
] | 235,417,358 | 2106.06528 | 60a2567a238aff33d8cab667be76df83c63d9e1f |
Local Explanation of Dialogue Response Generation
Yi-Lin Tuan [email protected]
University of California
Santa Barbara
Connor Pryor [email protected]
University of California
Santa Cruz
Wenhu Chen [email protected]
University of California
Santa Barbara
Lise Getoor [email protected]
University of California
Santa Cruz
William Yang Wang [email protected]
University of California
Santa Barbara
Local Explanation of Dialogue Response Generation
In comparison to the interpretation of classification models, the explanation of sequence generation models is also an important problem, however it has seen little attention. In this work, we study model-agnostic explanations of a representative text generation task -dialogue response generation. Dialog response generation is challenging with its open-ended sentences and multiple acceptable responses. To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG), that regards the explanations as the mutual interaction of segments in input and output sentences. LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response. We show that LERG adheres to desired properties of explanation for text generation, including unbiased approximation, consistency, and cause identification. Empirically, our results show that our method consistently improves other widely used methods on proposed automatic-and human-evaluation metrics for this new task by 4.4-12.8%. Our analysis demonstrates that LERG can extract both explicit and implicit relations between input and output segments. 1
Introduction
As we use machine learning models in daily tasks such as medical diagnostics [6,19] and speech assistants [31], being able to trust the predictions being made has become increasingly important. To understand the underlying reasoning process of complex machine learning models, a sub-field of explainable artificial intelligence (XAI) [2,17,36] called local explanation has seen promising results [35]. Local explanation methods [27,39] often approximate an underlying black-box model by fitting an interpretable proxy, such as a linear model or tree, around the neighborhood of individual predictions. These methods have the advantage of being model-agnostic and locally interpretable.
Traditionally, off-the-shelf local explanation frameworks, such as the Shapley value in game theory [38] and the learning-based Local Interpretable Model-agnostic Explanation (LIME) [35] have been shown to work well on classification tasks with a small number of classes. In particular, there has been work on image classification [35], sentiment analysis [8], and evidence selection for question answering [32]. However, to the best of our knowledge, there has been less work studying explanations over models with sequential output and large class sizes at each time step. An attempt by [1] aims at explaining machine translation by aligning the sentences in source and target languages. Nonetheless, unlike translation, where it is possible to find almost all word alignments of the input and output sentences, many text generation tasks are not alignment-based. We further explore explanations over sequences that contain implicit and indirect relations between the input and output utterances.
In this paper, we study explanations over a set of representative conditional text generation modelsdialogue response generation models [45,55]. These models typically aim to produce an engaging and informative [3,24] response to an input message. The open-ended sentences and multiple acceptable responses in dialogues pose two major challenges: (1) an exponentially large output space and (2) the implicit relations between the input and output texts. For example, the open-ended prompt "How are you today?" could lead to multiple responses depending on the users' emotion, situation, social skills, expressions, etc. A simple answer such as "Good. Thank you for asking." does not have an explicit alignment to words in the input prompt. Even though this alignment does not exist, it is clear that "good" is the key response to "how are you". To find such crucial corresponding parts in a dialogue, we propose to extract explanations that can answer the question: "Which parts of the response are influenced the most by parts of the prompt?"
To obtain such explanations, we introduce LERG, a novel yet simple method that extracts sorted importance scores for every input-output segment pair from a dialogue response generation model. We view sequence prediction as the uncertainty estimation of one human response and find a linear proxy that simulates the certainty contributed by each input segment to each output segment. We further derive two optimization variations of LERG: one is learning-based [35] and the other is a derived optimum similar to the Shapley value [38]. To theoretically verify LERG, we propose that an ideal explanation of text generation should adhere to three properties: unbiased approximation, intra-response consistency, and causal cause identification. To the best of our knowledge, our work is the first to explore explanation of dialogue response generation while maintaining all three properties.
To verify if the explanations are both faithful (the explanation is fully dependent on the model being explained) [2] and interpretable (the explanation is understandable by humans) [14], we conduct comprehensive automatic evaluations and user study. We evaluate the necessity and sufficiency of the extracted explanation to the generation model by evaluating the perplexity change of removing salient input segments (necessity) and evaluating the perplexity of only salient segments remaining (sufficiency). In our user study, we present annotators with only the most salient parts in an input and ask them to select the most appropriate response from a set of candidates. Empirically, our proposed method consistently outperforms baselines on both automatic metrics and human evaluation.
Our key contributions are:
• We propose a novel local explanation method for dialogue response generation (LERG).
• We propose a unified formulation that generalizes local explanation methods towards sequence generation and show that our method adheres to the desired properties for explaining conditional text generation.
• We build a systematic framework to evaluate explanations of response generation including automatic metrics and user study.
Local Explanation
Local explanation methods aim to explain the predictions of an arbitrary model by interpreting the neighborhood of individual predictions [35]. This can be viewed as training a proxy that adds up the contributions of input features to a model's prediction [27]. More formally, given an example with input features x = {x_i}_{i=1}^M and the corresponding prediction y with probability f(x) = P_θ(Y = y|x) (the classifier is parameterized by θ), we denote the contribution from each input feature x_i as φ_i ∈ ℝ and the concatenation of all contributions as φ = [φ_1, ..., φ_M]^T ∈ ℝ^M. Two popular local explanation methods are the learning-based Local Interpretable Model-agnostic Explanations (LIME) [35] and the game-theoretic Shapley value [38].
LIME interprets a complex classifier f based on locally approximating a linear classifier around a given prediction f (x). The optimization of the explanation model that LIME uses adheres to:
$$\xi(x) = \operatorname*{arg\,min}_{\varphi}\left[L(f, \varphi, \pi_x) + \Omega(\varphi)\right], \tag{1}$$
where we sample a perturbed input x̃ from π_x(x̃) = exp(−D(x, x̃)²/σ²), taking D(x, x̃) as a distance function and σ as the width. Ω is the model complexity of the proxy ϕ. The objective of ξ(x) is to find the simplest ϕ that approximates the behavior of f around x. When using a linear classifier φ as the ϕ that minimizes Ω(ϕ) [35], we can formulate the objective function as:
$$\phi = \operatorname*{arg\,min}_{\phi}\ \mathbb{E}_{\tilde{x}\sim\pi_x}\left(P_\theta(Y=y\mid \tilde{x}) - \phi^T z\right)^2, \tag{2}$$
where z ∈ {0, 1}^M is a simplified feature vector of x̃ obtained by a mapping function h such that z = h(x, x̃) = {1(x_i ∈ x̃)}_{i=1}^M. The optimization minimizes the classification error in the neighborhood of x sampled from π_x. Therefore, using LIME, we can find an interpretable linear model that approximates any complex classifier's behavior around an example x.
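As an illustration of Eqs. (1)-(2), a minimal LIME-style sketch for a text classifier might look as follows; `predict_proba` is an assumed black-box scoring function returning P(Y = y | perturbed input), and the ridge penalty stands in for the complexity term Ω:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(tokens, predict_proba, n_samples=500, seed=0):
    """Fit a linear proxy on binary keep-masks z, following Eq. (2)."""
    rng = np.random.default_rng(seed)
    M = len(tokens)
    Z = rng.integers(0, 2, size=(n_samples, M))     # simplified features z
    probs = np.array([
        predict_proba([t for t, keep in zip(tokens, z) if keep])
        for z in Z
    ])
    reg = Ridge(alpha=1.0).fit(Z, probs)            # linear model phi
    return reg.coef_                                # one contribution per token
```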
The Shapley value takes the input features x = {x_i}_{i=1}^M as M independent players who cooperate to achieve a benefit in a game [38]. It computes how much each player x_i contributes to the total received benefit:
$$\varphi_i(x) = \sum_{\tilde{x}\,\subseteq\, x\setminus\{x_i\}} \frac{|\tilde{x}|!\,\left(|x| - |\tilde{x}| - 1\right)!}{|x|!}\left[P_\theta(Y=y\mid \tilde{x}\cup\{x_i\}) - P_\theta(Y=y\mid \tilde{x})\right]. \tag{3}$$
To reduce the computational cost, instead of computing all combinations, we can find surrogates φ_i proportional to ϕ_i and rewrite the above equation as an expectation over x̃ sampled from P(x̃):
$$\phi_i = \frac{|x|}{|x|-1}\,\varphi_i = \mathbb{E}_{\tilde{x}\sim P(\tilde{x})}\left[P_\theta(Y=y\mid \tilde{x}\cup\{x_i\}) - P_\theta(Y=y\mid \tilde{x})\right], \ \forall i, \tag{4}$$
where $P(\tilde{x}) = \frac{1}{(|x|-1)\binom{|x|-1}{|\tilde{x}|}}$ is the perturb function.² We can also transform the above formulation into an argmin:
$$\phi_i = \operatorname*{arg\,min}_{\phi_i}\ \mathbb{E}_{\tilde{x}\sim P(\tilde{x})}\left(\left[P_\theta(Y=y\mid \tilde{x}\cup\{x_i\}) - P_\theta(Y=y\mid \tilde{x})\right] - \phi_i\right)^2. \tag{5}$$
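A sampled version of Eqs. (4)-(5) can be sketched as follows; `score` is an assumed black box returning P_θ(Y = y | x̃), and subsets are drawn uniformly over sizes to mimic the perturb function P(x̃):

```python
import numpy as np

def shapley_explain(tokens, score, n_samples=200, seed=0):
    """Monte Carlo estimate of the surrogate phi_i of Eq. (4)."""
    rng = np.random.default_rng(seed)
    M = len(tokens)
    phi = np.zeros(M)
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for _ in range(n_samples):
            k = int(rng.integers(0, M))             # subset size
            keep = set(rng.choice(others, size=k, replace=False))
            without = [t for j, t in enumerate(tokens) if j in keep]
            with_i = [t for j, t in enumerate(tokens) if j in keep or j == i]
            phi[i] += score(with_i) - score(without)
        phi[i] /= n_samples
    return phi
```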
Local Explanation for Dialogue Response Generation
We aim to explain a model's response prediction to a dialogue history one at a time and call it the local explanation of dialogue response generation. We focus on the local explanation for a more fine-grained understanding of the model's behavior.
Task Definition
As depicted in Figure 1, we draw inspiration from the notions of controllable dialogue generation models ( Figure 1a) and local explanation in sentiment analysis ( Figure 1b). The first one uses a concept in predefined classes as the relation between input text and the response; the latter finds the features that correspond to positive or negative sentiment. We propose to find parts within the input and output texts that are related by an underlying intent (Figure 1c).
We first define notation for dialogue response generation, which aims to predict a response y = y_1 y_2 ... y_N given an input message x = x_1 x_2 ... x_M, where x_i is the i-th token of x (length M) and y_j the j-th token of y (length N). To solve this task, a typical sequence-to-sequence model f parameterized by θ produces a sequence of probability masses ⟨P_θ(y_1|x), P_θ(y_2|x, y_1), ..., P_θ(y_N|x, y_{<N})⟩ [45]. The probability of y given x is then the product P_θ(y|x) = P_θ(y_1|x) P_θ(y_2|x, y_1) ⋯ P_θ(y_N|x, y_{<N}).
² $\sum_{\tilde{x}\subseteq x\setminus\{x_i\}} P(\tilde{x}) = \frac{1}{|x|-1}\sum_{\tilde{x}\subseteq x\setminus\{x_i\}} 1\big/\binom{|x|-1}{|\tilde{x}|} = \frac{1}{|x|-1}\sum_{k}\binom{|x|-1}{k}\big/\binom{|x|-1}{k} = \frac{|x|-1}{|x|-1} = 1.$ This affirms that P(x̃) is a valid probability mass function.
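For concreteness, the per-token certainties P_θ(y_j|x, y_{<j}) in this factorization can be read off any causal language model; the sketch below uses the Hugging Face transformers API with "gpt2" as a stand-in for the dialogue model being explained:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_log_probs(prompt, response):
    """Return the vector [log P_theta(y_j | x, y_<j)]_j for the given pair."""
    ids = tok(prompt + response, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        log_probs = model(ids).logits.log_softmax(-1)
    targets = ids[0, n_prompt:]                  # response tokens y_1 ... y_N
    preds = log_probs[0, n_prompt - 1 : -1]      # position t predicts token t+1
    return preds.gather(1, targets.unsqueeze(1)).squeeze(1)

# log P_theta(y|x) is the sum of the returned per-token log-probabilities.
```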
To explain the prediction, we then define an explanation model Φ ∈ ℝ^{M×N}, where each column Φ_j ∈ ℝ^M linearly approximates the single sequential prediction at the j-th generation step. To learn the optimal Φ, we sample perturbed inputs x̃ from a distribution centered on the original input x through a probability density function x̃ = π(x). Finally, we optimize Φ by ensuring u(Φ_j^T z) ≈ g(x̃) whenever z is a simplified embedding of x̃ given by a mapping function z = h(x, x̃), where g is the gain function of the target generative model f, u is a transform function of Φ and z, and L is the loss function. Note that z can be a vector or a matrix, and g(·), u(·) can return a scalar or a vector depending on the method. We thus unify the local explanations (LIME and Shapley value) for dialogue response generation as:
Definition 1: A Unified Formulation of Local Explanation for Dialogue Response Generation
$$\Phi_j = \operatorname*{arg\,min}_{\Phi_j}\ L\big(g(y_j\mid \tilde{x}, y_{<j}),\, u(\Phi_j^T h(x, \tilde{x}))\big), \quad j = 1, 2, \ldots, N. \tag{6}$$
The proofs of unification into Equation 6 can be found in Appendix A. However, direct adaptation of LIME and Shapley value to dialogue response generation fails to consider the complexity of text generation and the diversity of generated examples. We develop disciplines to alleviate these problems.
Proposed Method
Our proposed method is designed to (1) address the exponential output space and diverse responses built within the dialogue response generation task and (2) compare the importance of segments within both input and output text.
First, considering the exponential output space and diverse responses, recent work often generates responses using sampling, such as the dominant beam search with top-k sampling [11]. The generated response is therefore only a sample from the estimated probability mass distribution over the output space. Further, the samples drawn from the distribution will inherently have built-in errors that accumulate along generation steps [34]. To avoid these errors we instead explain the estimated probability of the ground truth human responses. In this way, we are considering that the dialogue response generation model is estimating the certainty to predict the human response by P θ (y|x). Meanwhile, given the nature of the collected dialogue dataset, we observe only one response per sentence, and thus the mapping is deterministic. We denote the data distribution by P and the probability of observing a response y given input x in the dataset by P (y|x). Since the mapping of x and y is deterministic in the dataset, we assume P (y|x) = 1.
Second, if we directly apply prior explanation methods of classifiers on sequential generative models, it turns into a One-vs-Rest classification situation for every generation step. This can cause an unfair comparison among generation steps. For example, the impact from a perturbed input on y j could end up being the largest just because the absolute certainty P θ (y j |x, y <j ) was large. However, the impact from a perturbed input on each part in the output should be how much the certainty has changed after perturbation and how much the change is compared to other parts.
Therefore we propose to find an explanation for an input-response pair (x, y) by comparing the interactions between segments in (x, y). To identify the most salient interaction pair (x_i, y_j) (the i-th segment in x and the j-th segment in y), we anticipate that a perturbation x̃ impacts the j-th part of y most if it causes
$$D\big(P_\theta(y_j\mid x, y_{<j})\,\|\,P_\theta(y_j\mid \tilde{x}, y_{<j})\big) > D\big(P_\theta(y_{j'}\mid x, y_{<j'})\,\|\,P_\theta(y_{j'}\mid \tilde{x}, y_{<j'})\big), \quad \forall j' \neq j, \tag{7}$$
where D represents a distance function measuring the difference between two probability masses. Letting x_i denote the part that differs between x and x̃, we then say that (x_i, y_j) is a salient interaction in (x, y).
In this work, we replace the distance function D in Equation 7 with the Kullback-Leibler divergence (D_KL) [20]. However, since we reduce the complexity by considering P_θ(y|x) as the certainty estimate of y, we are limited to a single point of the distribution. We therefore recast the equation in terms of the joint probability of x and y estimated by θ: we consider the joint distribution P_θ(x, y_{≤j}) such that Σ_{x,y} P_θ(x, y_{≤j}) = 1, and define q(x̃, y) = P_{θ,π_inv}(x̃, y_{≤j}) = P_θ(x̃, y_{≤j}) such that Σ_{x̃,y} q(x̃, y) = Σ_{x̃,y} P_{θ,π_inv}(x̃, y_{≤j}) = 1, with π_inv being the inverse function of π. Therefore,
$$D\big(P_\theta(x, y_{\le j})\,\|\,P_\theta(\tilde{x}, y_{\le j})\big) = D_{KL}\big(P_\theta(x, y_{\le j})\,\|\,q(\tilde{x}, y_{\le j})\big) = \sum_{y_j}\sum_{\tilde{x}} P_\theta(\tilde{x}, y_{\le j})\,\log\frac{P_\theta(x, y_{\le j})}{P_\theta(\tilde{x}, y_{\le j})}. \tag{8}$$
Moreover, since we estimate the certainty of a response y drawn from the data distribution, the random variable x̃ is drawn independently from the perturbation model π; the independent conditional probabilities are P(y|x) = 1 and π(x̃|x). We approximate the multiplier P_θ(x̃, y_{≤j}) ≈ P(x̃, y_{≤j}|x) = π(x̃|x) P(y|x) = π(x̃|x). The divergence can then be simplified to
$$D\big(P_\theta(x, y_{\le j})\,\|\,P_\theta(\tilde{x}, y_{\le j})\big) \approx \sum_{y_j}\sum_{\tilde{x}} \pi(\tilde{x}|x)\,\log\frac{P_\theta(x, y_{\le j})}{P_\theta(\tilde{x}, y_{\le j})} = \mathbb{E}_{\tilde{x}\sim\pi(\cdot|x)}\, \log\frac{P_\theta(x, y_{\le j})}{P_\theta(\tilde{x}, y_{\le j})}. \tag{9}$$
To satisfy the inequality for all j' ≠ j, we estimate each value Φ_j^T z of the explanation model Φ as proportional to the divergence term, where z = h(x, x̃) = {1(x_i ∈ x̃)}_{i=1}^M. This amounts to re-estimating the distinctness of the chosen segment y_j by normalizing its perturbed predicted probability against the original one:
$$\Phi_j^T z \propto \mathbb{E}_{\tilde{x}\subseteq x\setminus\{x_i\}}\, D\big(P_\theta(x, y_{\le j})\,\|\,P_\theta(\tilde{x}, y_{\le j})\big) \approx \mathbb{E}_{\tilde{x},\, \tilde{x}\subseteq x\setminus\{x_i\}}\, \log\frac{P_\theta(\tilde{x}, y_{\le j})}{P_\theta(x, y_{\le j})}. \tag{10}$$
We propose two variations to optimize Φ following the unified formulation defined in Equation 6.
First, since the logarithm is strictly increasing, dropping the logarithmic term in Equation 10 preserves the ordering of Φ_ij. After removing this non-linear factor, we use the mean squared error as the loss function. With the gain function g = P_θ(x̃, y_{≤j})/P_θ(x, y_{≤j}), the optimization becomes
$$\Phi_j = \operatorname*{arg\,min}_{\Phi_j}\ \mathbb{E}_{P(\tilde{x})}\left(\frac{P_\theta(\tilde{x}, y_{\le j})}{P_\theta(x, y_{\le j})} - \Phi_j^T z\right)^2, \quad \forall j. \tag{11}$$
We call this variation LERG_L in Algorithm 1, since the optimization is similar to LIME but differs in that the gain function is a probability ratio.
To derive the second variation, suppose an optimized Φ exists, denoted Φ*; then for every x̃ and its corresponding z = h(x, x̃),
$$\Phi_j^{*T} z = \log\frac{P_\theta(\tilde{x}, y_{\le j})}{P_\theta(x, y_{\le j})}. \tag{12}$$
We can then find the formal representation of Φ*_ij by
$$\begin{aligned} \Phi^*_{ij} &= \Phi_j^{*T}\mathbf{1} - \Phi_j^{*T}\mathbf{1}_{i=0} = \Phi_j^{*T}(z + e_i) - \Phi_j^{*T} z, \quad \forall \tilde{x}\subseteq x\setminus\{x_i\},\ z = h(x, \tilde{x}) \\ &= \mathbb{E}_{\tilde{x}\subseteq x\setminus\{x_i\}}\big[\Phi_j^{*T}(z + e_i) - \Phi_j^{*T} z\big] \\ &= \mathbb{E}_{\tilde{x}\subseteq x\setminus\{x_i\}}\big[\log P_\theta(y_j \mid \tilde{x}\cup\{x_i\}, y_{<j}) - \log P_\theta(y_j \mid \tilde{x}, y_{<j})\big]. \end{aligned} \tag{13}$$
We call this variation LERG_S in Algorithm 1, since the optimization is similar to the Shapley value but differs in that the gain function is a difference of logarithms. To further reduce computation, we use Monte Carlo sampling with m examples, as in the sampling version of the Shapley value [41].
Properties
We propose that an explanation of dialogue response generation should adhere to three properties to prove itself faithful to the generative model and understandable to humans.
Property 1: unbiased approximation To ensure the explanation model Φ explains the benefits of picking the sentence y, the summation of all elements in Φ should approximate the difference between the certainty of y given x and without x (the language modeling of y).
$$\sum_{j}\sum_{i} \Phi_{ij} \approx \log P(y\mid x) - \log P(y). \tag{14}$$
Algorithm 1: LOCAL EXPLANATION OF RESPONSE GENERATION
Input: input message $x = x_1 x_2 ... x_M$, ground-truth response $y = y_1 y_2 ... y_N$
Input: a response generation model $\theta$ to be explained
Input: a local explanation model parameterized by $\Phi$
// 1st variation -- LERG_L
for each iteration do
    sample a batch of $\tilde{x}$ perturbed from $\pi(x)$
    map $\tilde{x}$ to $z = \{0, 1\}^M$
    compute the gold probability $P_\theta(y_j|x, y_{<j})$
    compute the perturbed probability $P_\theta(y_j|\tilde{x}, y_{<j})$
    optimize $\Phi$ to minimize the loss function
        $L = \sum_j \sum_{\tilde{x}} \big( \frac{P_\theta(y_j|\tilde{x}, y_{<j})}{P_\theta(y_j|x, y_{<j})} - \Phi_j^T z \big)^2$
// 2nd variation -- LERG_S
for each $i$ do
    sample a batch of $\tilde{x}$ perturbed from $\pi(x\setminus\{x_i\})$
    $\Phi_{ij} = \frac{1}{m} \sum_{\tilde{x}} \big[ \log P_\theta(y_j|\tilde{x} \cup \{x_i\}, y_{<j}) - \log P_\theta(y_j|\tilde{x}, y_{<j}) \big]$, for $\forall j$
return $\Phi_{ij}$, for $\forall i, j$
Property 2: consistency. To ensure the explanation model $\Phi$ consistently explains different generation steps $j$, given a distance function $D$, if
$$D(P_\theta(y_j|\tilde{x}, y_{<j}), P_\theta(y_j|\tilde{x} \cup \{x_i\}, y_{<j})) > D(P_\theta(y_{j'}|\tilde{x}, y_{<j'}), P_\theta(y_{j'}|\tilde{x} \cup \{x_i\}, y_{<j'})), \quad \forall j', \forall \tilde{x} \subseteq x\setminus\{x_i\} , \quad (15)$$
then $\Phi_{ij} > \Phi_{ij'}$.
Property 3: cause identification. To ensure that the explanation model sorts different input features by their importance to the results, if
$$g(y_j|\tilde{x} \cup \{x_i\}) > g(y_j|\tilde{x} \cup \{x_{i'}\}), \quad \forall \tilde{x} \subseteq x\setminus\{x_i, x_{i'}\} , \quad (16)$$
then $\Phi_{ij} > \Phi_{i'j}$.
We prove that our proposed method adheres to all three properties in Appendix B. Meanwhile, the Shapley value follows Properties 2 and 3, while LIME follows Property 3 when an optimized solution exists. These properties also demonstrate that our method approximates the text generation process while sorting out the important segments in both the input and output texts, which makes it suitable to serve as an explanation for any sequential generative model.
Experiments
Explanation is notoriously hard to evaluate, even for digit and sentiment classification, which are generally more intuitive than response generation. For digit classification (MNIST), explanations often mark the key curves in figures that identify digit numbers. For sentiment analysis, explanations often mark the positive and negative words in text. Unlike them, we focus on identifying the key parts in both input messages and their responses. Our setting requires an explanation to include the interactions of the input and output features.
To evaluate the defined explanation, we quantify the necessity and sufficiency of explanations towards a model's uncertainty of a response. We evaluate these aspects by answering the following questions.
• necessity: How is the model influenced after removing explanations?
• sufficiency: How does the model perform when only the explanations are given?
Furthermore, we conduct a user study to judge human understanding of the explanations and to gauge how trustworthy the dialogue agents are.
Dataset, Models, Methods
We evaluate our method on chit-chat dialogues for their more complex and realistic conversations. We specifically select and study a popular conversational dataset called DailyDialog [25] because its dialogues are based on daily topics and have fewer uninformative responses. Due to the large variation of topics, the open-ended nature of conversations, and the informative responses within this dataset, explaining dialogue response generation models trained on DailyDialog is challenging but accessible.3 We fine-tune a GPT-based language model [33,47] and a DialoGPT [55] on DailyDialog by minimizing the following loss function:
$$\mathcal{L} = -\sum_j \log P_\theta(y_j|x, y_{<j}) , \quad (17)$$
where $\theta$ denotes the model's parameters. We train until the loss converges on both models and achieve fairly low test perplexities compared to [25]: 12.35 and 11.83, respectively. The low perplexities indicate that the models are more likely to be rational, and therefore evaluating explanations over these models is more meaningful and interpretable.
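A minimal sketch of this fine-tuning objective with HuggingFace Transformers is shown below. Dataset loading and the exact hyperparameters used in the paper are not shown and are assumptions; only the response tokens are scored, matching Equation 17.

```python
# A minimal sketch of the fine-tuning step for Equation 17.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # assumed learning rate

def training_step(context: str, response: str) -> float:
    ctx = tokenizer(context, return_tensors="pt").input_ids
    rsp = tokenizer(response, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx, rsp], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx.size(1)] = -100  # mask context: loss = -sum_j log P(y_j|x, y_<j)
    loss = model(input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```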
We compare our explanations LERG_L and LERG_S with attention [46], gradient [43], LIME [35], and the Shapley value [42]. We use the sample mean for the Shapley value to avoid massive computation (Shapley for short), and also drop the weights in the Shapley value (Shapley-w for short) due to the intuition that not all permutations should exist in natural language [12,21]. Our comparison is fair since all methods requiring permutation samples utilize the same number of samples.4
Necessity: How is the model influenced after removing explanations?
Assessing the correctness of estimated feature relevance requires labeled features for each model and example pair, which is rarely accessible. Inspired by [2,4], who remove the estimated salient features and observe how the performance changes, we introduce the notion of necessity, which extends their idea. We quantify the necessity of the estimated salient input features to the uncertainty estimation of response generation by the perplexity change of removal ($PPLC_R$), defined as:
$$PPLC_R := \exp\Big( \frac{1}{m} \big[ -\sum_j \log P_\theta(y_j|x_R, y_{<j}) + \sum_j \log P_\theta(y_j|x, y_{<j}) \big] \Big) , \quad (18)$$
where $x_R$ is the remaining sequence after removing the top-k% salient input features.

3 We include our experiments on personalized dialogues and abstractive summarization in Appendix E.
4 More experiment details are in Appendix C.

As shown in Figure 2a and Figure 3a, removing a larger number of input features consistently causes a monotonically increasing $PPLC_R$. Therefore, to control for the effect of the removal ratio itself on $PPLC_R$, we compare all methods with an additional baseline that randomly removes features. LERG_S and LERG_L both outperform their counterparts Shapley-w and LIME, by 12.8% and 2.2% respectively. We further observe that Shapley-w outperforms LERG_L. We hypothesize that this is because LERG_L and LIME do not reach an optimal state.
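A minimal sketch of how $PPLC_R$ in Equation 18 can be computed is given below. The helpers `response_logprob` and `remove_topk` are illustrative names, and ranking tokens by the row sums of $\Phi$ is our assumption for aggregating per-step scores into token saliency.

```python
# A minimal sketch of PPLC_R (Equation 18).
import torch

def response_logprob(model, context_ids, response_ids):
    """sum_j log P_theta(y_j | context, y_<j)."""
    input_ids = torch.cat([context_ids, response_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(input_ids).logits[0]
    start = context_ids.numel() - 1
    logp = logits[start:start + response_ids.numel()].log_softmax(-1)
    return logp[torch.arange(response_ids.numel()), response_ids].sum()

def remove_topk(x_ids, Phi, k):
    """Drop the top-k% input tokens ranked by explanation saliency."""
    saliency = Phi.sum(dim=1)                       # importance of each x_i
    drop = set(saliency.topk(int(k * x_ids.numel())).indices.tolist())
    keep = [i for i in range(x_ids.numel()) if i not in drop]
    return x_ids[torch.tensor(keep)]

def pplc_r(model, x_ids, y_ids, Phi, k=0.2):
    m = y_ids.numel()
    full = response_logprob(model, x_ids, y_ids)
    reduced = response_logprob(model, remove_topk(x_ids, Phi, k), y_ids)
    return torch.exp((-reduced + full) / m)         # Equation 18
```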
Sufficiency: How does the model perform when only the explanations are given?

Even though necessity can test whether the selected features are crucial to the model's prediction, it cannot validate how well the explanation by itself can determine a response. A complete explanation should be able to recover the model's prediction without the original input. We name this notion sufficiency testing and formalize the idea as:
$$PPL_A := \exp\Big( -\frac{1}{m} \sum_j \log P_\theta(y_j|x_A, y_{<j}) \Big) , \quad (19)$$
where x A is the sequential concatenation of the top-k% salient input features.
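A matching sketch of $PPL_A$ (Equation 19) is shown below, reusing `response_logprob` from the $PPLC_R$ sketch; `keep_topk` is an illustrative helper that retains the top-k% salient input tokens in their original order.

```python
# A minimal sketch of PPL_A (Equation 19).
import torch

def keep_topk(x_ids, Phi, k):
    saliency = Phi.sum(dim=1)
    top = saliency.topk(int(k * x_ids.numel())).indices
    return x_ids[torch.sort(top).values]            # sequential concatenation

def ppl_a(model, x_ids, y_ids, Phi, k=0.2):
    m = y_ids.numel()
    logp = response_logprob(model, keep_topk(x_ids, Phi, k), y_ids)
    return torch.exp(-logp / m)                     # Equation 19
```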
As shown in Figure 2b and Figure 3b, keeping a larger number of input features (i.e., a larger top-k%) brings $PPL_A$ closer to the perplexities obtained with all input features, 12.35 and 11.83. We again adopt a random baseline for comparison. LERG_S and LERG_L again outperform their counterparts Shapley-w and LIME, by 5.1% and 3.4% respectively. Furthermore, we find that LERG_S is able to go below the original perplexities of 12.35 and 11.83. This result indicates that LERG_S identifies the most relevant features while avoiding features that cause more uncertainty during prediction. To ensure the explanations are easy to understand by non machine learning experts and give users insights into the model, we resort to a user study to answer the question: "Can an explanation be understood by users well enough to infer the response?"
User Study
We ask human judges to compare explanation methods. Instead of asking judges to annotate their own explanation for each dialogue, to increase agreement we present only the explanations (the top 20% features) and ask them to choose from four response candidates, where one is the ground truth, two are randomly sampled from other dialogues, and the last one is randomly sampled from other turns in the same dialogue. The questionnaire therefore requires humans to interpret the explanations rather than guess a response that has word overlap with the explanation. Higher accuracy indicates higher quality of explanations. To conduct more valid human evaluation, we randomly sample 200 conversations with sufficiently long input prompts (length ≥ 10). This filters out possibly non-explainable dialogues that can cause ambiguities to annotators and make human evaluation less reliable.
We employ three workers on Amazon Mechanical Turk [7]6 for each method on each conversation, resulting in 600 annotations in total. Besides the multiple choice questions, we also ask judges to state the confidence of their choices. The details can be seen in Appendix D. The results are listed in Table 1. We observe that LERG_L performs slightly better than LIME in accuracy while maintaining similar annotator confidence. LERG_S significantly outperforms Shapley-w in both accuracy and annotator confidence. Moreover, these results indicate that when presented with only 20% of the tokens, users are able to achieve 56% accuracy, while a random selection is around 25%.
Figure 4: Two major categories of local explanation besides word alignment, and one typical error. (a) Implication: finding that "hot potato" might indicate "gasoline". (b) Sociability: finding "No" for the question mark and "thanks" for "would like", the polite way to say "want". (c) Error analysis: related but not the best. The horizontal text is the input prompt and the vertical text is the response.
Qualitative Analysis
We further analyzed the extracted explanation for each dialogue. We found that these fine-grained explanations can be split into three major categories: implication/meaning, sociability, and one-to-one word mapping. As shown in Figure 4a, the "hot potato" in the response implies the phenomenon of "reduce the price of gasoline". On the other hand, Figure 4b demonstrates that a response with sociability can sense the politeness and respond with "thanks". We ignore word-to-word mapping here since it is intuitive and can already be successfully detected by attention models. Figure 4c shows a typical error that our explanation methods can produce. As depicted, the word "carry" is related to "bags", "suitcases", and "luggage". Nonetheless, a complete explanation should cluster "carry-on luggage". The error of explanations can result from (1) the target model or (2) the explanation method. Taking the first view, in future work we might use explanations as an evaluation method for dialogue generation models, where the correct evaluation metrics are still under debate. Taking the second view, we need to understand that these methods are trying to explain the model and are not absolutely correct. Hence, we should carefully analyze the explanations and use them as reference, and should not fully rely on them.
Related Work and Discussion
Explaining dialogue generation models is of high interest for understanding whether a generated response is reasonably produced rather than being a random guess. For example, among works on controllable dialogue generation [15,26,37,40,48,50,51,53], Xu et al. [49] take the dialogue act in a controllable response generation model as the explanation. On the other hand, some propose to make dialogue response generation models more interpretable through walks on knowledge graphs [18,28,44]. Nonetheless, these works still rely on models with complex architectures and thus are not fully interpretable. We observe the lack of a model-agnostic method to analyze the explainability of dialogue response generation models, thus proposing LERG.
Recently, there have been applications and advances of local explanation methods [27,35,38]. For instance in NLP, some analyze the contributions of segments in documents to positive and negative sentiments [4,8,9,29]. Some move forward to finding segments relevant to text similarity [10], retrieving a text span for question answering [32], and using local explanation as an alignment model in machine translation [1]. These tasks can be less complex than explaining general text generation models, such as dialogue generation models, since the output space is either limited to a few classes or admits a one-to-one mapping with the input text. Hence, we need to define how local explanations of text generation should work. However, we would like to note that LERG serves as a general formulation for explaining text generation models with flexible setups. Therefore, the distinctive ideas of prior work can also be used to extend LERG, such as making the explanations hierarchical. To move forward with the development of explanation methods, LERG can also be extended to dealing with the off-/on-data-manifold problem of the Shapley value introduced in [13], integrating causal structures to separate direct/indirect relations [12,16], and fusing concept-/feature-level explanations [5].
Conclusion
Beyond the recent advances in interpreting classification models, we explore the possibility of understanding sequence generation models in depth. We focus on dialogue response generation and find that its challenges lead to complex and less transparent models. We propose local explanation of response generation (LERG), which aims at explaining dialogue response generation models through the mutual interactions between input and output features. LERG views a dialogue generation model as a certainty estimation of a human response so that it avoids dealing with the diverse output space. To facilitate future research, we further propose a unification and three properties of explanations for text generation. The experiments demonstrate that LERG can find explanations that both recover a model's prediction and can be interpreted by humans. Next steps include taking models' explainability as an evaluation metric, integrating concept-level explanations, and proposing new methods for text generation models while still adhering to the properties.
A Unifying LIME and Shapley value to Dialogue Response Generation
A.1 Learning based Local Approximation Explanation
Taking the prediction at each time step in sequence generation as classification over a lexicon, we can define the loss function as mean squared error, $u(\tilde{x}) = \tilde{x}$, $z = h(\tilde{x}) = \{\mathbb{1}(\tilde{x}_i = x_i)\}_{i=1}^{|x|}$, and the gain function $g$ as
$$g(y_j|\tilde{x}, y_{<j}) = P_\theta(y_j|\tilde{x}, y_{<j}) \quad (20)$$
Then LIME can be cast exactly as Equation 6.
$$\Phi_j = \arg\min_{\Phi_j} \mathbb{E}_{P(\tilde{x})} \big( P_\theta(y_j|\tilde{x}, y_{<j}) - \Phi_j^T z \big)^2 , \quad \forall j \quad (21)$$
A.2 Game Theory based Attribution Methods
We first define the gain function as
$$g(y_j|\tilde{x}, y_{<j})_i = P_\theta(y_j|\tilde{x} \cup \{x_i\}, y_{<j}) - P_\theta(y_j|\tilde{x}, y_{<j}) \ \text{ if } x_i \notin \tilde{x}, \ \text{ else } g(y_j|\tilde{x}, y_{<j})_i = 0 \quad (22)$$
$Z = h(\tilde{x}) \in \mathbb{R}^{M \times M}$ is defined to be a diagonal matrix with diagonal entries
$$Z_{ii} = 1 \ \text{ if } x_i \notin \tilde{x}, \ \text{ else } Z_{ii} = 0 \quad (23)$$
With the loss function being the L2-norm, we can see the equation is exactly the same as Equation 6:
$$\Phi_j = \arg\min_{\Phi_j} \mathbb{E}_{P(\tilde{x})} \| g(y_j|\tilde{x}, y_{<j}) - \Phi_j^T Z \|_2 \quad (24)$$
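For concreteness, the following small sketch constructs the gain vector $g$ and the diagonal mask $Z$ of Equations 22 and 23 for one sampled subset and one generation step $j$; `token_prob` stands for $P_\theta(y_j|\cdot, y_{<j})$ and is an assumed callable, not a library function.

```python
# An illustrative sketch of the game-theoretic gain and mask (Equations 22-24).
import torch

def shapley_gain_and_mask(token_prob, x_ids, subset, j):
    """g[i] and Z[i, i] for features outside the sampled subset x~."""
    M = x_ids.numel()
    g = torch.zeros(M)
    Z = torch.zeros(M, M)
    base = token_prob(x_ids[subset], j)
    chosen = set(subset.tolist())
    for i in range(M):
        if i not in chosen:                     # only features with x_i not in x~
            with_i = torch.sort(torch.cat([subset, torch.tensor([i])])).values
            g[i] = token_prob(x_ids[with_i], j) - base   # Equation 22
            Z[i, i] = 1.0                                # Equation 23
    return g, Z
```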
B The Proof of Properties

Property 1: unbiased approximation. To ensure the explanation model $\Phi$ explains the benefits of picking the sentence $y$, the summation of all elements in $\Phi$ should approximate the difference between the certainty of $y$ given $x$ and without $x$ (the language modeling probability of $y$):
$$\sum_j \sum_i \Phi_{ij} \approx \log P(y|x) - \log P(y) \quad (25)$$
Proof:
$$\begin{aligned} \sum_j \sum_i \Phi_{ij} &= \sum_j \sum_i \mathbb{E}_{\tilde{x} \subseteq x\setminus\{x_i\}} [\log P(y_j|\tilde{x} \cup \{x_i\}) - \log P(y_j|\tilde{x})] \\ &= \sum_j \Big[ \sum_i \mathbb{E} \log P(y_j|\tilde{x} \cup \{x_i\}) - \sum_i \mathbb{E} \log P(y_j|\tilde{x}) \Big] \\ &\approx \sum_j \Big[ \sum_i \sum_{\tilde{x} \cup \{x_i\}} P(\tilde{x}) \log P(y_j|\tilde{x} \cup \{x_i\}) - \sum_i \sum_{\tilde{x}} P(\tilde{x}) \log P(y_j|\tilde{x}) \Big] \\ &= \sum_j [\log P(y_j|x, y_{<j}) - \log P(y_j|\emptyset, y_{<j})] \\ &= \log P(y|x) - \log P(y) \quad (26) \end{aligned}$$
Property 2: consistency. To ensure the explanation model $\Phi$ consistently explains different generation steps $j$, given a distance function $D$, if
$$D(P_\theta(y_j|\tilde{x}, y_{<j}), P_\theta(y_j|\tilde{x} \cup \{x_i\}, y_{<j})) > D(P_\theta(y_{j'}|\tilde{x}, y_{<j'}), P_\theta(y_{j'}|\tilde{x} \cup \{x_i\}, y_{<j'})), \quad \forall j', \forall \tilde{x} \subseteq x\setminus\{x_i\} , \quad (27)$$
then $\Phi_{ij} > \Phi_{ij'}$.

When taking the distance function to be the difference between log-probabilities, we can prove that Equation 13 has this property by reducing it to the consistency property of the Shapley value [27]. Prior work [52] has shown that the prior distribution assumed by the Shapley value is the only one that satisfies this monotonicity.
Property 3: cause identification. To ensure that the explanation model sorts different input features by their importance to the results, if
$$g(y_j|\tilde{x} \cup \{x_i\}) > g(y_j|\tilde{x} \cup \{x_{i'}\}), \quad \forall \tilde{x} \subseteq x\setminus\{x_i, x_{i'}\} \quad (28)$$
then $\Phi_{ij} > \Phi_{i'j}$.
The unified formulation (Equation 6) is to minimize the difference between $\phi$ and the gain function $g$. If an optimized $\phi^*$ exists, $g$ can be written as $g(y_j|\tilde{x} \cup \{x_i\}) = \phi^{*T}_j (z + e_i)$. Therefore inequality (28) can be derived as:
$$g(y_j|\tilde{x} \cup \{x_i\}) > g(y_j|\tilde{x} \cup \{x_{i'}\}), \forall \tilde{x} \subseteq x\setminus\{x_i, x_{i'}\} \iff \phi^{*T}_j (z + e_i) > \phi^{*T}_j (z + e_{i'}) \iff \phi^{*T}_j e_i > \phi^{*T}_j e_{i'} \iff \phi^*_{ij} > \phi^*_{i'j} \quad (29)$$
Since LERG_S (the Shapley-with-log-gain variation of LERG) is a variation that expresses the optimized $\phi$, the method adheres to Property 3 without assumptions.
C Experiment Details
For all methods, we sample 1000 perturbations of the input. Also, to reduce the effect of off-manifold perturbations, we perturb the input with at most half of the features changed. After three runs with random initialization for each method, the variances of the results are at most 0.001 and do not affect the trend. Our code is available at https://github.com/Pascalson/LERG.
All methods can be run on a single GPU. We run them on one TITAN RTX.
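A minimal sketch of the perturbation sampler described above is given below: 1000 perturbations per input, each changing at most half of the features. The function name and the uniform choice of how many features to change are our own assumptions.

```python
# An illustrative perturbation-mask sampler (at most half the features changed).
import torch

def sample_perturbation_masks(M, num_samples=1000, max_changed_ratio=0.5):
    masks = []
    max_changed = max(1, int(M * max_changed_ratio))
    for _ in range(num_samples):
        n_changed = torch.randint(1, max_changed + 1, (1,)).item()
        drop = torch.randperm(M)[:n_changed]
        z = torch.ones(M)
        z[drop] = 0.0            # z_i = 0 marks a removed input token
        masks.append(z)
    return torch.stack(masks)    # shape (num_samples, M)
```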
D User Study Details

Figure 5 shows the screenshots of our instruction and the question presented to workers on Amazon Mechanical Turk. We paid workers an estimated hourly wage of 18 USD. For every 5 questions, the estimated time spent is about 30 seconds, and the worker was paid 0.15 USD.
E More Experiments
Beyond our main experiments on DailyDialog [25], we further examine how the Shapley value and LERG_S work on personalized conversation and abstractive summarization. We specifically use the datasets PersonaChat [54] for the personalized conversation task and XSum [30] for abstractive summarization. We fine-tuned a GPT model on PersonaChat following [47] and directly used the BART model [23] that has been pretrained on XSum. The results are plotted in Figure 6. Similar to our main experiments, Figures 6a and 6c show the trend of increased $PPLC_R$ with a higher removal ratio, with LERG_S consistently performing better. Further, $PPL_A$ also shows a trend similar to the main experiments, with the perplexity decreasing until some ratio and subsequently increasing. Interestingly, the lowest point occurs at an earlier ratio compared to the experiments on DailyDialog. This phenomenon can mean that the number of key terms in the input text of PersonaChat and XSum is smaller than that of DailyDialog. Besides this ratio, we observe that LERG_S consistently has lower perplexity.
Implementation Details. Throughout the preliminary study on PersonaChat, we specifically investigate the influence of the dialogue history on the response and ignore profiles in the input. For XSum, we only run explanation methods on documents containing fewer than 256 tokens and responses with more than 30 tokens; therefore, the perplexity is lower than the one reported in [23].
F Discussion of Local Explanation with Phrase-level Input Features
We tried two methods for phrase-level experiments. In the first approach, we used parsed phrases, instead of tokens, as the $x_i$ and $y_j$ in the equations and obtained explanations through Algorithm 1. In the second approach, we used the token-level explanations and averaged the scores of tokens in the same phrase parsed by an off-the-shelf parser. In both cases, the trend of the performance was similar to the token-level methods. However, we believe that the basic unit of dialogues is hard to define. We therefore choose tokens in this paper, since tokens can be flexibly composed bottom-up into units of different levels.
Figure 1: The motivation of local explanation for dialogue response generation. (c) = (a)+(b).

Figure 2: The explanation results of a GPT model fine-tuned on DailyDialog. (a) $PPLC_R$. (b) $PPL_A$.

Figure 3: The explanation results of fine-tuned DialoGPT.

Figure 5: The screenshots of our user study.

Figure 6: The explanation results of a GPT model fine-tuned on PersonaChat ((a) and (b)) and the pretrained BART model on XSum ((c) and (d)).
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
We did a z-test and a t-test [22] with the null hypothesis between LERG_L and LIME (and LERG_S and Shapley). For both settings the p-value was less than 0.001, indicating that the proposed methods significantly outperform the baselines.
6 https://www.mturk.com
Acknowledgement

We thank all the reviewers for their precious comments in revising this paper. This work was supported by a Google Research Award and the National Science Foundation award #2048122. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.
References

[1] David Alvarez-Melis and Tommi Jaakkola. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 412-421, 2017.
[2] David Alvarez-Melis and Tommi S Jaakkola. Towards robust interpretability with self-explaining neural networks. In NeurIPS, 2018.
[3] Nabiha Asghar, Pascal Poupart, Xin Jiang, and Hang Li. Deep active learning for dialogue generation. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 78-83, 2017.
[4] Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. A diagnostic study of explainability techniques for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3256-3274, 2020.
[5] Mohammad Taha Bahadori and David Heckerman. Debiasing concept-based explanations with causal analysis. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=6puUoArESGp.
[6] Mihalj Bakator and Dragica Radosav. Deep learning and medical diagnosis: A review of literature. Multimodal Technologies and Interaction, 2(3):47, 2018.
[7] Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. Amazon's mechanical turk: A new source of inexpensive, yet high-quality data? 2016.
[8] Hanjie Chen and Yangfeng Ji. Learning variational word masks to improve the interpretability of neural text classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4236-4251, 2020.
[9] Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. Generating hierarchical explanations on text classification via feature interaction detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5578-5593, 2020.
[10] Hanjie Chen, Song Feng, Jatin Ganhotra, Hui Wan, Chulaka Gunasekara, Sachindra Joshi, and Yangfeng Ji. Explaining neural network predictions on sentence pairs via learning word-group masks. NAACL, 2021.
[11] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, 2018.
[12] Christopher Frye, Colin Rowat, and Ilya Feige. Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. Advances in Neural Information Processing Systems, 33, 2020.
[13] Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, and Ilya Feige. Shapley explainability on the data manifold. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=OPyWRrcjVQw.
[14] Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80-89. IEEE, 2018.
[15] Prakhar Gupta, Jeffrey P Bigham, Yulia Tsvetkov, and Amy Pavel. Controlling dialogue generation with semantic exemplars. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3018-3029, 2021.
[16] Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen. Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models. Advances in Neural Information Processing Systems, 33, 2020.
[17] Jeya Vikranth Jeyakumar, Joseph Noor, Yu-Hsi Cheng, Luis Garcia, and Mani Srivastava. How can i explain this to you? an empirical study of deep neural network explanation methods. Advances in Neural Information Processing Systems, 2020.
[18] Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan Black, and Yulia Tsvetkov. Dialograph: Incorporating interpretable strategy-graph networks into negotiation dialogues. In International Conference on Learning Representations, 2021.
[19] Igor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective. Artificial Intelligence in Medicine, 23(1):89-109, 2001.
[20] Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86, 1951.
[21] I Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. Problems with shapley-value-based explanations as feature importance measures. In International Conference on Machine Learning, pages 5491-5500. PMLR, 2020.
[22] Erich Leo Lehmann and Joseph P Romano. Testing Statistical Hypotheses, volume 3. Springer.
[23] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[24] Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, 2016.
[25] Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing. URL https://www.aclweb.org/anthology/I17-1099.
[26] Zhaojiang Lin, Andrea Madotto, Yejin Bang, and Pascale Fung. The adapter-bot: All-in-one controllable conversational model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 16081-16083, 2021.
[27] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765-4774, 2017.
[28] Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. Opendialkg: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 845-854, 2019.
[29] W James Murdoch, Peter J Liu, and Bin Yu. Beyond word importance: Contextual decomposition to extract interactions from lstms. In International Conference on Learning Representations, 2018.
[30] Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, 2018.
[31] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
[32] Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C Lipton, Graham Neubig, and William W Cohen. Evaluating explanations: How much do explanations from the teacher aid students? arXiv preprint arXiv:2012.00893, 2020.
[33] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[34] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. ICLR, 2016.
[35] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144, 2016.
[36] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019.
[37] Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702-1723, 2019.
[38] Lloyd S Shapley. A value for n-person games.
[39] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145-3153. PMLR, 2017.
[40] Eric Michael Smith, Diana Gonzalez-Rico, Emily Dinan, and Y-Lan Boureau. Controlling style in generated dialogue. arXiv preprint arXiv:2009.10855, 2020.
[41] Erik Strumbelj and Igor Kononenko. An efficient explanation of individual classifications using game theory. The Journal of Machine Learning Research, 11:1-18, 2010.
[42] Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3):647-665, 2014.
[43] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR, 2017.
[44] Yi-Lin Tuan, Yun-Nung Chen, and Hung-yi Lee. Dykgchat: Benchmarking dialogue generation grounding on dynamic knowledge graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1855-1865, 2019.
[45] Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
[46] Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, 2019.
[47] Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. Transfertransfo: A transfer learning approach for neural network based conversational agents. NeurIPS 2018 CAI Workshop, 2019.
[48] Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, et al. A controllable model of grounded response generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14085-14093, 2021.
[49] Can Xu, Wei Wu, and Yu Wu. Towards explainable and controllable open domain dialogue generation with dialogue acts. arXiv preprint arXiv:1807.07255, 2018.
[50] Can Xu, Wei Wu, Chongyang Tao, Huang Hu, Matt Schuerman, and Ying Wang. Neural response generation with meta-words. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5416-5426, 2019.
[51] Haiqin Yang, Xiaoyuan Yao, Yiqun Duan, Jianping Shen, Jie Zhong, and Kun Zhang. Progressive open-domain response generation with multiple controllable attributes. 2021.
[52] H Peyton Young. Monotonic solutions of cooperative games. International Journal of Game Theory, 14(2):65-72, 1985.
[53] Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1108-1117, 2018.
[54] Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213, 2018.
[55] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. Dialogpt: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, 2020.
| [
"https://github.com/Pascalson/LERG.",
"https://github.com/Pascalson/LERG."
] |
Robust Optimization for Multilingual Translation with Imbalanced Data

Xian Li, Hongyu Gong
Facebook AI
arXiv:2104.07639

Abstract
Multilingual models are parameter-efficient and especially effective in improving low-resource languages by leveraging crosslingual transfer. Despite recent advances in massive multilingual translation with ever-growing model and data, how to effectively train multilingual models has not been well understood. In this paper, we show that a common situation in multilingual training, data imbalance among languages, poses an optimization tension between high resource and low resource languages, where the found multilingual solution is often sub-optimal for low resources. We show that the common training method which upsamples low resources cannot robustly optimize the population loss, with risks of either underfitting high resource languages or overfitting low resource ones. Drawing on recent findings on the geometry of the loss landscape and its effect on generalization, we propose a principled optimization algorithm, Curvature Aware Task Scaling (CATS), which adaptively rescales gradients from different tasks with a meta objective of guiding multilingual training to low-curvature neighborhoods with uniformly low loss for all languages. We ran experiments on common benchmarks (TED, WMT and OPUS-100) with varying degrees of data imbalance. CATS effectively improved multilingual optimization and as a result demonstrated consistent gains on low resources (+0.8 to +2.2 BLEU) without hurting high resources. In addition, CATS is robust to overparameterization and large batch size training, making it a promising training method for massive multilingual models that truly improve low resource languages.

Introduction

Multilingual models have received growing interest in natural language processing (NLP) [34,41,10,26,2,19,58]. The task of multilingual machine translation aims to have one model which can translate between multiple language pairs, which reduces training and deployment cost by improving parameter efficiency. It presents several research questions around crosslingual transfer learning and multi-task learning (MTL) [23,1,2,27,45,28].

Recent progress in multilingual sequence modeling, with multilingual translation as a representative application, has been extending the scale of massive multilingual learning, with an increasing number of languages [2,10,58], the amount of data [2,12], as well as model size [27,13]. Despite the power-law scaling of (English-only) language modeling loss with model, data and compute [25], it has been found that multilingual models do not always benefit from scaling up model and data size, especially multilingual machine translation to multiple languages, even after exploiting language proximity with external linguistic knowledge [12,27,51].

There has been limited understanding of the optimization aspects of multilingual models. Multilingual training is often implemented as monolithic, with data from different languages simply combined.
Challenges were observed when training with imbalanced data [2,26], which is common for multilingual NLP, as only a few languages are rich in training data (high resource languages) while the rest of the languages in the world have zero or low training data (low resource languages) [24]. This has been mostly treated as a "data problem", with a widely used work-around of upsampling low resource languages' data to make the data more balanced [2,26,10,34,58].
In this paper, we fill the gap by systematically studying the optimization of multilingual models in the task of multilingual machine translation with the Transformer architecture. Our contribution is twofold.
First, we reveal the optimization tension between high resource and low resource languages, where low resources' performance often suffers. This has been overlooked in multilingual training but has important implications for achieving the goal of leveraging crosslingual transfer to improve low resources. We analyze the training objectives of multilingual models and identify an important role played by the local curvature of the loss landscape, where "sharpness" causes interference among languages during optimization. This hypothesis is verified empirically, where we find optimization tension between high and low resource languages. They compete to update the loss landscape during the early stage of training, with high resource ones dominating the optimization trajectory during the rest of training. Existing approaches such as upsampling low resources implicitly reduce this tension by shifting the training distribution towards more uniform. We show that this approach is not robust to different data characteristics, where it suffers from either overfitting low resources or underfitting high resources.
Second, we propose a principled training algorithm for multilingual models to mitigate such tension and effectively improve all languages. Our algorithm explicitly learns the weighting of different languages' gradients with a meta-objective of guiding the optimization to "flatter" neighborhoods with uniformly low loss (Curvature-Aware Task Scaling, CATS). Compared to the static weighting implied by the sampling probabilities of the data distribution, our method effectively reduces the optimization tension between high and low resource languages and improves the Pareto front of generalization. On common benchmarks of multilingual translation, CATS consistently improves low resources in various data conditions: +0.8 BLEU on TED (8 languages, 700K sentence pairs), +2.2 BLEU on WMT (10 languages, 30M sentence pairs), and +1.3 BLEU on OPUS-100 (100 languages, 50M sentence pairs), without sacrificing performance on high resources. Furthermore, CATS can effectively leverage model capacity, yielding better generalization in overparameterized models. The training algorithm is conceptually simple and efficient to apply to massive multilingual settings, making it a suitable approach for achieving equitable progress in NLP for every language.
Related Work
Multilingual Learning and Multi-task Learning. Massive multilingual models have been gaining increasing research interest as a result of recent advancement in model and data scaling [17,10,23,2,19,58]. However, multilingual models are often trained using the same monolithic optimization as is used in training single-language models. To deal with imbalanced training data, upsampling low resource languages (and downsampling high resource languages) was first proposed in massive multilingual machine translation in order to improve low resource performance [23,2]. It was adopted in recent state-of-the-art multilingual pretraining [10,34,58].
Although recent work has looked into maximizing positive transfer across languages by learning parameter-sharing sub-networks conditional on languages [28,31], little prior work has looked into the blackbox of multilingual optimization. Relevant efforts focus on dynamic data selection, such as adaptive scheduling [22] and MultiDDS (Differentiable Data Selection), which dynamically selects training examples based on gradient similarity with the validation set [53]. Although they share the same motivation of treating multilingual training as a multi-objective optimization problem, data selection introduces additional computation and slows down training. This work also adds to a growing interest in addressing interference in multilingual models, known as "the curse of multilinguality" [2,10], which was initially hypothesized as a "capacity bottleneck" in general multi-task learning [5]. Existing work has mainly focused on model architecture, e.g. via manually engineered or learnt parameter sharing across languages based on language proximity [61,47,46]. Gradient Vaccine is a recent work addressing interference from an optimization perspective by de-conflicting gradients via projection [54], a general approach in MTL [60]. Although conflicting gradients are also examined in this work, they are used as evaluation metrics, where we show that regularizing local curvature can prevent conflicting gradients from happening in the first place. Overall, we provide new understanding of interference in multilingual tasks by pointing out the optimization tension between high and low resources, which had not been studied before.
Multilingual training is an instance of multi-task learning (MTL) [9,36,11,50,44,35]. Task balancing has been studied from both architecture and optimization perspectives [33,6,48,60]. Among them, GradNorm [6] and MGDA [48] are closely related: GradNorm adjusts gradient norms to be uniform across tasks, and MGDA adapts gradient-based multi-objective optimization to modern neural networks with efficient training.
Sharpness of the Loss Landscape and Generalization. Our analysis and proposed method are closely related to recent findings on the generalization of neural networks and on optimization behaviors, including geometric properties of the loss landscape during training. It has been observed that the "sharpness" of the loss landscape grows rapidly, especially during the early phase of training, as measured by relevant metrics such as the largest eigenvalues of the Hessian and the spectral norm of the empirical Fisher information matrix [8,20]. Gradient alignment, measured as the cosine similarity of gradients computed on different training examples, indicates the covariance of gradients, which has been shown to be related to generalization performance [15,21,32]. Our work applies these insights from the optimization literature to analyze the training dynamics of multilingual learning with the Transformer architecture. A closely related concurrent work [14] has verified the effectiveness of regularizing loss sharpness in improving generalization, but in a single-task setting on computer vision benchmarks.
Demystifying Optimization Challenges in Multilingual Training
Notations. In the multilingual translation task, we typically train one model with training data from a set of languages N := {l_1, ..., l_N}, and measure its generalization performance on a set of evaluation languages M. We use "language" and "task" interchangeably throughout the paper. We introduce the following notations:
• D_n := {(x_i^{(n)}, y_i^{(n)})} refers to the set of labeled examples of language l_n. A prominent property of the multilingual translation task is highly imbalanced data: the distribution of |D_n|, denoted p_{|D_n|}, is usually heavy-tailed, with a few high resource (HiRes) languages and a large number of low resource (LoRes) ones [2,55].
• Let θ ∈ R^d be the parameters of the multilingual model.
• L_n := E_{(x,y)∼D_n}[L(ŷ, y)] is the expected loss measured on language n, with token-level cross-entropy loss L(ŷ, y) = −∑_{i=1}^{|V|} y_i log ŷ_i, where V is the vocabulary of the target language(s) into which the model translates.
• Denote ∇ as the gradient, ∇² as the Hessian H, tr(·) as the trace, and ‖·‖₂ as the Euclidean (ℓ₂) norm.
Optimization Objective
Our analysis starts by taking a closer look at the training and generalization objectives of multilingual models. Generalization performance is measured on a set of evaluation languages M weighted by their importance w_n: L_Multi := ∑_{n=1}^{|M|} w_n L_n(x, y|θ). For multilingual translation, w_n = 1/|M| for n = 1, ..., |M|, indicating that all translation directions have equal importance. This population loss is usually optimized by minimizing empirical risk, i.e., a training loss L̂_Multi := min_θ ∑_{n=1}^{|N|} α_n L̂_n(x, y; θ), where α_n corresponds to the weight of language n's loss during training.
We point out that, as an important optimization choice, α := {α_n} is either left implicit or chosen heuristically in existing work on multilingual training. A common choice for α resembles the sampling probability of language l_n in a mini-batch according to the training data distribution p_{|D_n|}, rescaled with a temperature hyperparameter T: α_n ∝ p_{|D_n|}^{1/T}. For example, T = 1 corresponds to sampling proportional to training data sizes [|D_1|, ..., |D_N|], while T = ∞ corresponds to uniform sampling across languages. For most multilingual tasks, training data is highly imbalanced, with a few high resource languages accounting for the majority of the training data and a long tail of low resource languages. Practitioners manually tune T to make minimizing the training loss L̂_Multi a proxy for optimizing the population loss L_Multi. Massive multilingual translation and pretraining with real-world datasets of 100 languages usually upsample low resource languages (e.g., T = 5) to improve averaged generalization performance [2,34,10,58].

Figure 1: Left: The change of one language's loss L_1 after shared parameters are updated to θ_2, driven by ∇L_2 from another language, is affected by the curvature of the loss landscape around the previous critical point θ_1*. Right: Illustration of the proposed algorithm, Curvature-Aware Task Scaling (CATS), which learns task weightings α_n such that the combined gradients guide the optimization trajectory to a low-curvature region (green arrow).
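To make the temperature heuristic concrete, here is a minimal numpy sketch (not from the paper; the dataset sizes are invented for illustration):

```python
import numpy as np

def temperature_weights(dataset_sizes, T):
    """Language weights alpha_n ∝ p_{|D_n|}^(1/T), normalized to sum to 1."""
    p = np.asarray(dataset_sizes, dtype=float)
    p /= p.sum()          # empirical distribution p_{|D_n|}
    w = p ** (1.0 / T)    # temperature rescaling
    return w / w.sum()

sizes = [4_500_000, 600_000, 40_000, 8_000]  # one HiRes, one mid, two LoRes
for T in (1, 5, 100):                        # T=1: proportional; large T: near uniform
    print(T, np.round(temperature_weights(sizes, T), 3))
```

Running it shows how larger T flattens the weights towards uniform, which is exactly the upsampling effect on LoRes described above.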
Local Curvature and Robust Multi-task Optimization
We analyze how local curvature affects the optimization efficiency of multilingual training. Task interference has been observed to pose optimization challenges in general multi-task learning and continual learning, where improvement in one task hurts performance on another [45,48,30,60,38].
In the context of training multilingual models, we define interference from the optimization perspective as the change in empirical risk L_1 resulting from an update to θ dominated by empirical risk minimization for another language's loss L_2. In Figure 1, we illustrate how the curvature of the loss landscape affects interference, similar to the analysis done in continual learning [38]. After a gradient step driven by ∇L_2, the shared model parameters move from θ_1* to θ_2. The corresponding change in L_1, i.e., L_1(θ_2) − L_1(θ_1*), is affected by the curvature:

$$I(L_1, L_2) \triangleq L_1(\theta_2) - L_1(\theta_1^*) \approx (\theta_2 - \theta_1^*)^\top \nabla L_1(\theta_1^*) + \frac{1}{2} \left\| \nabla^2 L_1(\theta_1^*) \right\| \, \left\| \theta_2 - \theta_1^* \right\|_2^2 \qquad (1)$$
This relationship is summarized in Eq. (1). We assume that in a small neighborhood of θ_1* the loss surface is almost convex [7,16]. We can then apply a second-order Taylor expansion of L_1(θ_2) in Eq. (2) and derive the connection between I(L_1, L_2) and the Hessian H(θ) = ∇²L(θ), which indicates the local curvature of the loss surface:

$$L_1(\theta_2) \approx L_1(\theta_1^*) + (\theta_2 - \theta_1^*)^\top \nabla L_1(\theta_1^*) + \frac{1}{2} (\theta_2 - \theta_1^*)^\top \nabla^2 L_1(\theta_1^*) (\theta_2 - \theta_1^*) \qquad (2)$$
Algorithm 1 CATS (Curvature-Aware Task Scaling)
1: while not converged do
2:   Sample a mini-batch B with K languages
3:   for i from 1 to m do
4:     for n = 1, ..., K do
5:       Compute g_n = ∇_θ L̂_n(B_n)
6:       ḡ_n = clone(g_n)
7:       ᾱ_n = clone(α_n)
8:     end for
9:     Update θ with ∑_{n=1}^K ᾱ_n g_n
10:   end for
11:   Compute ∇_α L_meta and ∇_λ L_meta according to Eq. (4), using ḡ_n for ∇_θ L̂_n
12:   Update α with gradient descent
13:   Update λ with gradient ascent
14: end while
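The following is a schematic PyTorch sketch of Algorithm 1's inner/outer loop, not the authors' implementation: `model` and the per-language losses are placeholders, the outer step descends directly on the squared norm of the combined gradient plus the simplex penalty from Eq. (4), and the full Lagrangian machinery (ascent on λ_c and the λ_n multipliers) is omitted for brevity.

```python
import torch

def flat_grad(loss, params):
    # Gradient of one language's loss, flattened into a single vector.
    gs = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in gs])

def cats_step(model, losses, alpha, lam_s=1.0, lr_theta=0.1, lr_alpha=0.1):
    # `losses` holds one scalar loss per language, each from its own forward pass.
    params = [p for p in model.parameters() if p.requires_grad]
    g_bar = [flat_grad(L, params).detach() for L in losses]  # cloned g_n
    # Inner update of theta with the alpha-weighted combined gradient.
    combined = sum(a.detach() * g for a, g in zip(alpha, g_bar))
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p -= lr_theta * combined[offset:offset + n].view_as(p)
            offset += n
    # Outer update of alpha: descend on ||sum_n alpha_n g_n||^2 plus the
    # simplex penalty lam_s * (sum_n alpha_n - 1)^2 from Eq. (4).
    alpha = alpha.detach().requires_grad_(True)
    meta = sum(a * g for a, g in zip(alpha, g_bar)).pow(2).sum()
    meta = meta + lam_s * (alpha.sum() - 1.0) ** 2
    (g_alpha,) = torch.autograd.grad(meta, alpha)
    with torch.no_grad():
        alpha = (alpha - lr_alpha * g_alpha).clamp_min(0.0)
        alpha = alpha / alpha.sum().clamp_min(1e-8)  # project back to simplex
    return alpha
```

A usage sketch: compute one loss per language from its mini-batch, then call `alpha = cats_step(model, losses, alpha)`; in Algorithm 1 the outer α update runs only once every m inner steps, together with the λ refresh of lines 11-13.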
Since ∇L_1(θ_1*) ≈ 0 at a critical point of L_1, the major contributing factors to interference are the local curvature, measured by the spectral norm of H(θ_1*), and the magnitude of the parameter update ‖θ_2 − θ_1*‖₂². We hypothesize that the optimization tension defined above affects low resource languages more than high resource ones, assuming that low-resource tasks have lower sample complexity and are thus more likely to reach local minima during the early stage of training.
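A toy numpy check of this point (illustrative only): for a one-dimensional quadratic L_1 with ∇L_1(θ_1*) = 0, the interference of Eq. (1) grows linearly with the curvature for a fixed update magnitude.

```python
import numpy as np

def interference(curvature, step=0.5):
    # L1(theta) = 0.5 * curvature * theta^2, minimized at theta_1* = 0, so
    # grad L1(theta_1*) = 0 and Eq. (1) gives I ≈ 0.5 * curvature * ||step||^2.
    theta_star, theta_2 = 0.0, step  # update driven by another language's loss L2
    L1 = lambda th: 0.5 * curvature * th ** 2
    return L1(theta_2) - L1(theta_star)

for c in (0.1, 1.0, 10.0):
    print(f"curvature={c:5.1f}  I(L1,L2)={interference(c):.4f}")
```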
Meta-learn α with curvature regularization
Motivated by the previous section's analysis, we propose a principled optimization procedure for multilingual translation with a meta objective of regularizing the local curvature of the loss landscape of the shared parameters θ. We explicitly learn the task weighting parameters α := [α_n]_{n=1,...,N} so as to minimize the trace of the empirical Fisher information, tr(F) = E_{(x,ŷ)}[‖∑_{n=1}^N α_n ∇_θ L̂_n‖²], which is an approximation of tr(H), as proposed in the optimization literature [20,52]. To leverage distributed training in modern deep learning, we estimate α from a mini-batch B with K languages, as shown in Eq. (3):
$$\min_{\alpha_1, \ldots, \alpha_K} \left\| \sum_{n=1}^K \alpha_n \nabla_\theta \hat{L}_n(\theta) \right\|^2 \quad \text{s.t.} \quad \sum_{n=1}^K \alpha_n = 1, \;\; \alpha_n \geq 0, \; \forall n \qquad (3)$$
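For K = 2, Eq. (3) admits the classic min-norm closed form used in MGDA-style solvers [48]; the numpy sketch below is illustrative only (the gradients are made up):

```python
import numpy as np

def min_norm_alpha(g1, g2):
    """argmin_a ||a*g1 + (1-a)*g2||^2 over a in [0, 1] (closed form for K=2)."""
    diff = g1 - g2
    a = np.dot(g2 - g1, g2) / (np.dot(diff, diff) + 1e-12)
    return float(np.clip(a, 0.0, 1.0))

g_hi = np.array([1.0, 0.2])   # e.g. a HiRes gradient
g_lo = np.array([-0.4, 0.9])  # e.g. a LoRes gradient
a = min_norm_alpha(g_hi, g_lo)
print(a, 1 - a)               # weights alpha_1, alpha_2
```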
We treat solving Eq. (3) together with minimizing the empirical risk L̂_Multi as a multi-objective optimization problem, and optimize the corresponding Lagrangian [4]. The overall training objective is as follows:
$$L_{\text{CATS}}(\theta, \alpha, \lambda) = \sum_{n=1}^K \alpha_n \hat{L}_n - \lambda_c \left( \epsilon - \left\| \sum_{n=1}^K \alpha_n \nabla_\theta \hat{L}_n \right\|^2 \right) + \lambda_s \left( \sum_{n=1}^K \alpha_n - 1 \right)^2 - \sum_{n=1}^K \lambda_n (\alpha_n - \epsilon) \qquad (4)$$
We learn both the model parameters θ and the task weighting parameters α simultaneously with bi-level optimization, where we update θ in the inner loop and update α and the Lagrangian multipliers λ := [λ_c, λ_s, λ_1, ..., λ_N] in the outer loop. We refer to the proposed algorithm as CATS (Curvature-Aware Task Scaling), with the detailed training procedure described in Algorithm 1.

Datasets. We experiment on three public benchmarks of multilingual machine translation with varying characteristics of imbalanced data, as shown in Table 1. They are representative in terms of the number of languages |N|, the total volume of training data |D_n| measured as the number of sentence pairs in millions (M), and the entropy of the data distribution H_{|D_n|} indicating the degree of data imbalance. For example, distributions with multiple high resource languages are covered by experiments on the TED and OPUS-100 datasets, while experiments on the WMT dataset cover a unique scenario of an extremely skewed distribution with one high resource language and a long tail of low resource ones, which is not uncommon in real-world applications. Additional details and statistics of the datasets are provided in Appendix A.
Models. We use the Transformer architecture, with the same model size and configuration as used in the corresponding baselines. For experiments on TED, the model is a 6-layer encoder-decoder with 512 hidden dimensions, 1024 FFN dimensions, and 4 attention heads [53]. For experiments on OPUS-100, the model is the Transformer-base configuration used in [62]. We also use Transformer-base for the WMT experiments. We use the same preprocessed data provided by the MultiDDS authors [53], and follow the same procedure to preprocess the OPUS-100 data released by the baseline [62]. All models are trained with the same compute budget for comparison. We provide detailed training hyperparameters in Appendix B.
Baselines. We compare to strong baselines used in state-of-the-art multilingual translation and relevant approaches in generic multi-task learning:

• Proportional sampling. A straightforward yet common approach used in practice: training on the combined training data from all languages [62], which corresponds to α_n = p_{|D_n|}^{1/T} with T = 1.
• Upsampling low resources (T = 5). This has been adopted in state-of-the-art multilingual Transformers, as discussed in Section 3.1.
• GradNorm [6]. A closely related approach proposed for general multi-task learning. The key difference is that GradNorm rescales each task's gradients to be closer to the average gradient norm (i.e., it is based only on ∇L_i), while our objective minimizes the second-order quantity ∇²L_i, which corresponds to the curvature of the loss landscape.
• MultiDDS [53]. A recently proposed method to balance training losses in multilingual training. It learns to select training examples among languages based on gradient similarity with the validation set. Although it does not address the optimization challenges studied in this work, it shares the same interest in improving generalization and utilizing training examples in a dynamic, adaptive fashion.

Evaluation of Optimization. For the analysis of the loss landscape, we examine the top eigenvalue (λ_max) of H, computed from the examples in a mini-batch with the power iteration method described in [59], for each language. To evaluate optimization efficiency, we analyze gradient similarity using the metrics introduced in [60]. At each training step, with g_1 = ∇L_1 and g_2 = ∇L_2 computed from training examples of two languages, we measure the following metrics as indicators of gradient similarity (a small sketch of both tools follows the list):
• Gradient direction similarity (alignment): cos φ(g_1, g_2), where φ is the angle between g_1 and g_2.
• Gradient magnitude (ℓ₂ norm) similarity: γ(g_1, g_2) = 2‖g_1‖₂‖g_2‖₂ / (‖g_1‖₂² + ‖g_2‖₂²).
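A hedged PyTorch sketch of both evaluation tools; `loss` and `params` are placeholders for one language's mini-batch loss and the shared parameters, and the power-iteration routine follows the spirit of [59] rather than any released code:

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Power iteration for lambda_max of the Hessian via Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        gv = sum((g * u).sum() for g, u in zip(grads, v))        # <grad, v>
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        eig = sum((h * u).sum() for h, u in zip(hv, v))          # Rayleigh quotient
        v = [h.detach() for h in hv]
    return float(eig)

def grad_similarity(g1, g2, eps=1e-12):
    """Direction similarity cos(phi) and magnitude similarity gamma for flat grads."""
    cos_phi = torch.dot(g1, g2) / (g1.norm() * g2.norm() + eps)
    gamma = 2 * g1.norm() * g2.norm() / (g1.norm() ** 2 + g2.norm() ** 2 + eps)
    return float(cos_phi), float(gamma)
```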
Evaluation of Generalization. We verify whether improved optimization leads to better generalization. We report both token-level loss (negative log-likelihood, NLL ↓) and BLEU scores (↑) on held-out datasets. We choose the best checkpoint by validation perplexity and only use the single best model without ensembling.
We conduct most experiments in the multilingual one-to-many setting, as it is more challenging than many-to-one and represents the core optimization challenges underlying the many-to-many task. Additional experiments with many-to-one are provided in Appendix C.
Results
Robust Optimization
Abrasive Gradients between HiRes and LoRes. First, we illustrate the optimization tension between high resource and low resource languages (HiLo). We examine abrasive gradients in Figure 3, which shows a fine-grained view of gradient similarity in terms of both direction (left) and magnitude (right) for different parameters in the Transformer architecture: encoder self-attention (E.SA), encoder feed-forward (E.FC), decoder self-attention (D.SA), decoder-encoder attention (D.EA), and decoder feed-forward (D.FC). We can see that gradients are more similar when training with balanced data, i.e., HiHi and LoLo, compared to HiLo, for the two fundamental multilingual translation tasks one-to-many (O2M) and many-to-one (M2O), indicating that the amount of training data is less of a cause of the problem than the distribution of training data across languages. Furthermore, contrary to common heuristics that use language family information to determine which languages share parameters [12], language proximity, i.e., related (R) vs. diverse (D), has a smaller effect on gradient similarity than data imbalance.
Next, we look into the optimization tension through the lens of curvature. Figure 2 (top) plots the top eigenvalues λ_max when training a multilingual model in the HiLo setting. We can see that with uniform sampling (equal weighting, T = ∞), LoRes updates the shared parameter space in the direction of higher curvature during the early stage of training, while HiRes dominates the optimization trajectory (with LoRes having a negative top eigenvalue) for the rest of training. We find that the common heuristic of T = 5 reduces the early-stage sharpness, although the loss landscape is still dominated by HiRes. In contrast, the proposed optimization approach (CATS α_n) mitigates the abrasive optimization and increases gradient alignment, as illustrated in Figure 2 (bottom).
Training without layer normalization.
We take a closer look at gradient alignment for different types of parameters in the Transformer architecture. We find that gradient alignment for layer normalization parameters (LayerNorm, LN) is much noisier than for other parameters, as shown in Figure 4a. Extensive work has provided empirical and theoretical evidence that LayerNorm is critical for the training stability of Transformers [3,57,56]. Our work is the first to show that it is a double-edged sword in multilingual training and exacerbates the tension between HiRes and LoRes, which compete to set the gain and bias parameters in LayerNorm, where the final weights depend heavily on the training data distribution. However, simply removing the gain and bias parameters causes training instability [32]. In Figure 4b, we show that with CATS optimization in place, LayerNorm can be removed and stable training can still be achieved without further adjusting other hyperparameters such as decreasing the learning rate. As a result, the generalization gap (the difference between training and validation loss) for low resource languages is greatly reduced, possibly because the learnt LayerNorm parameters would otherwise be biased towards HiRes.

Table 2: Comparison of CATS with common methods used in multilingual training. We evaluate on the 8-language TED benchmark with related (top) and diverse (bottom) languages, which helps verify performance while controlling for optimization difficulty due to language proximity. We compare to static weighting with manually tuned temperatures T = 1, 5, 100(∞), and dynamic weighting such as GradNorm [6] and MultiDDS [53]. Results are BLEU scores on test sets per language and the average BLEU score (↑) across all languages (All) and low resource languages (Lo). The multilingual training that achieves the best average BLEU is in bold, and the strongest baseline approach is underscored. * indicates the improvements are statistically significant with p < 0.05.

Robust to different training data distributions p_{|D_n|}. We show that the improved optimization is reflected in generalization performance. In Figure 5, we report the training and validation loss of HiRes and LoRes languages throughout training on two representative benchmarks, TED (top) and WMT (bottom), with different degrees of data imbalance. We can see that the optimal temperature hyperparameter T varies given the distinct distributions of training data p_{|D_n|} in these two benchmarks. For example, on the WMT dataset, where the training data distribution across languages is more skewed, upsampling LoRes (T = 5, ∞) is beneficial, as was observed in other large-scale multilingual datasets [2,10,34]. However, T = 5 easily leads to overfitting for LoRes on a more balanced dataset such as TED. Compared to existing approaches using a hyperparameter T, CATS is more robust as a principled way to address varying distributions of training data. It needs no manual tuning of T and does not suffer from overfitting or underfitting. On both benchmarks, it consistently improves LoRes without sacrificing HiRes.

Table 3: Detailed performance on the WMT benchmark. Compared to the strongest baselines (T = 1, 5), CATS drastically improves low and mid resource languages. The multilingual training that achieves the best BLEU is in bold, and the best baseline is underscored.
Generalization
We report translation quality on three representative benchmarks whose detailed characteristics are described in Table 1.
TED. Table 2 summarizes translation quality improvements on the TED corpus. To control for language similarity, we evaluate on the 8-language benchmark in both the Related (4 LoRes and 4 HiRes from the same language family) and Diverse (4 LoRes and 4 HiRes without shared linguistic properties) settings, as used in [53]. We found simple mixing (T = 1) to be a very strong baseline, while complex training methods such as GradNorm and MultiDDS do not generalize well (TED is a smaller dataset and more prone to overfitting). CATS achieves +0.4∼0.7 BLEU on average for LoRes and +0.7∼0.8 BLEU overall.
WMT. On the larger and more imbalanced 10-language WMT dataset with 30.5M sentence pairs, we found regularizing local curvature with meta-learnt α (CATS) to be even more beneficial. CATS improved low and mid resource languages across the board, as shown in Table 3. Compared to the strongest baseline (T = 5), CATS achieved +2.2 average BLEU for low resource languages and +2.6 BLEU across all languages. Furthermore, CATS is more sample efficient, as shown in Figure 6.

OPUS-100. Finally, we test the scalability of CATS in the massive multilingual setting using the OPUS-100 benchmark [62]. Results are summarized in Table 4. Consistent with previous results on OPUS-100 [62,61] as well as other massive multilingual models [2], upsampling low resource languages with T = 5 improves their BLEU scores but at the cost of accuracy on high resource ones. The trade-off is the opposite for proportional sampling (T = 1). CATS achieves the best of both worlds: +1.3 BLEU on low resource, +1.4 BLEU on mid resource, and +1.0 BLEU on high resource languages compared to the strong baseline of T = 5. We also compare to a recent state-of-the-art model, conditional language-specific routing (CLSR) [61], which adds additional language-specific parameters (157M parameters in total, compared to 110M parameters in our experiments). By improving optimization without increasing model capacity, CATS still outperforms CLSR on low resource languages by +0.9 BLEU.

Robust to overparameterization. Scaling up model size has been of central interest in the recent development of massive multilingual models such as GShard and Switch Transformer with up to trillions of parameters [27,13]. Training overparameterized models for a shorter amount of time has been shown to be more efficient than training a smaller model for longer [29,39,25]. Therefore, we are interested in understanding CATS' performance when training overparameterized models. Figure 7 plots the change in generalization (measured by BLEU score difference) as we increase model capacity. CATS can effectively benefit from larger model capacity, especially in the overparameterized regime (300M parameters for the TED dataset) where performance begins to degrade with standard training.
[Table 3 column groups: LoRes (|D_n| < 100K), MidRes, HiRes (|D_n| ≥ 1M), and All.]
Performance with large batch size. Large batch size is an effective way to speed up training without sacrificing performance [18,49]. However, heterogeneous training data do not necessarily benefit from large batch sizes [37]. This is a challenge for multilingual training, as shown in Figure 8, where increasing the batch size hurts generalization for LoRes under the common practice of upsampling (T = 5). This is likely because LoRes is prone to overfitting (illustrated in Figure 5) and larger batch sizes exacerbate it. The same batch size without upsampling leads to improved generalization (T = 1). In comparison, CATS can effectively benefit from larger batch sizes due to its adaptive rescaling of gradients.

Table 5: Ablation study of the effectiveness of CATS when combined with removing layer normalization parameters (-LN).
Ablation: layer normalization. To further understand the effect of removing layer normalization, which mitigates LoRes overfitting as shown in Section 4.2.1, we provide an ablation study of its role when combined with CATS. We ran experiments on the TED Diverse dataset with the strongest baseline (T = 1). In Table 5, we can see that CATS brings an additional +0.5 BLEU to low resource languages on top of the +0.3 BLEU improvement from removing layer normalization parameters (-LN).
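As a concrete note on what "-LN" corresponds to in a framework such as PyTorch, dropping the learnt gain and bias amounts to constructing LayerNorm without affine parameters (a standard option; the dimension below matches the TED configuration):

```python
import torch.nn as nn

d_model = 512
ln_standard = nn.LayerNorm(d_model)                             # learnt gain + bias
ln_no_affine = nn.LayerNorm(d_model, elementwise_affine=False)  # normalization only
print(sum(p.numel() for p in ln_standard.parameters()),         # 1024 parameters
      sum(p.numel() for p in ln_no_affine.parameters()))        # 0 parameters
```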
Conclusion
In this work, we look under the hood of the monolithic optimization of multilingual models. We unveil optimization challenges arising from imbalanced training data, where low resource languages have been sub-optimally optimized. We propose a principled optimization algorithm for multilingual training with adaptive gradient rescaling across languages, where the scaling is learnt with a meta-objective of guiding the optimization to solutions with low local curvature.
We evaluated the proposed method on three representative benchmarks (TED, WMT, and OPUS-100), which cover a wide range of imbalanced data distributions commonly seen in real-world multilingual datasets. Compared to existing methods that simply mix the data or manually adjust the data distribution to be more "balanced" with a temperature hyperparameter, the proposed training method demonstrates robust optimization and consistently improves generalization for low resource languages. Further analysis shows that the proposed approach is suitable for realistic large-scale multilingual training settings, such as overparameterized models, large batch sizes, highly imbalanced data, and large numbers of languages, paving the way for massive multilingual models that truly benefit low resource languages.
Broader Impact. Recent progress in NLP enabled by scaling up model size and data is widening the gap in technology equity between high resource languages (such as English) and low resource languages. Multilingual models are a promising approach to closing this gap. However, current language-agnostic multilingual models do not effectively improve low resource languages, for reasons yet to be fully understood. Our investigation into optimization is one step towards building truly inclusive multilingual models.
This work has several limitations which we hope to address in future work. We did not run experiments on translation between non-English directions, including zero-shot. The absolute size of the overparameterized model in our analysis is relatively small compared to state-of-the-art model scaling with billions of parameters. As a language generation application, a machine translation model could produce unsafe outputs or hallucinations.

Table 8: Performance on the TED corpus with four low resource (LoRes) and four high resource (HiRes) languages. To control for language proximity, we evaluate on the 8-language benchmark with both related and diverse languages in multilingual many-to-one (M2O) translation. We compare to static weighting with commonly used temperature hyperparameters T = 1, 5, and dynamic weighting such as MultiDDS [53]. Results are BLEU scores on test sets per language and the average BLEU score (↑) across all languages (All) and low resource languages (Lo). The multilingual training that achieves the best BLEU is in bold, and the strongest baseline approach is underscored.

For models with LayerNorm, we use the PreNorm setup, which has been shown to be more robust [40,56]. We use a maximum of 50k updates with a batch size of 131k tokens for the TED corpus, a maximum of 300k updates with a batch size of 262k tokens for the WMT dataset, and a maximum of 500k updates with a batch size of 262k tokens for the OPUS-100 corpus. Each experiment was run with 32 Nvidia V100 GPUs (32GB).
CATS. α is initialized to 1/N, where N is the number of languages. We use standard SGD for updating α, with the learning rate chosen from {0.1, 0.2} based on validation loss. λ is initialized to 0 and updated with gradient ascent. We set the number of languages per batch to K = 4 for the TED and WMT experiments and K = 8 for the OPUS-100 experiments. Due to the high curvature at the early stage of training, we update α at higher frequency, with m = 10 and m = 100 for the first 8k and 16k updates respectively, and m = 1000 for the rest of training. We also tested m = 1 for the early stage and found that the optimization trajectories do not change much.
Evaluation. We use the best checkpoint (without ensembling) chosen by validation loss. We use beam search with beam size 5 and length penalty 1.0 for decoding. We report SacreBLEU [42].
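For reference, scoring with the SacreBLEU package [42] can be done as in the sketch below (the sentences are made up; we assume the `sacrebleu` Python package is installed):

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat sat on a mat"]]  # one list per reference stream
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```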
C Additional Experiments
We report experiments on many-to-one (M2O) translation on the TED corpus in Table 8.
Figure 2: Local curvature measured by top eigenvalue (top) and gradient direction similarity (bottom) of multilingual training with high resource (HiRes) and low resource (LoRes) languages, measured on the TED corpus. To control for other factors such as language proximity, both the high resource (Russian) and low resource (Belarusian) languages are chosen from the same language family. We compare the proposed optimization method CATS with existing approaches that set α_n based on the empirical distribution of training examples, i.e., α_n ∝ p_{|D_n|}, upsampled with a temperature hyperparameter T; T = ∞ corresponds to equal weighting (uniform sampling). At the beginning of training (left), HiRes and LoRes compete to increase the sharpness of the loss landscape, with HiRes dominating the optimization trajectory for the rest of training (right), and their gradients are almost orthogonal. Our proposed method (CATS α) effectively reduces local curvature and improves gradient alignment.
Figure 3: Gradient similarity of different Transformer parameters (x-axis) in common multilingual translation tasks (y-axis), measured as gradient direction similarity (left) and gradient norm similarity (right).
Figure 4: Gradient alignment between HiRes and LoRes has high variance for layer normalization parameters (LayerNorm) in both the encoder (E.LN) and decoder (D.LN), which exacerbates overfitting of LoRes. CATS allows stable training without LayerNorm parameters and reduces the generalization gap for LoRes.
Figure 5: Train and validation loss (token-level negative log-likelihood, NLL ↓) for low resource (left) and high resource (right) languages from the same multilingual model. Addressing data imbalance with the temperature hyperparameter T is not robust to changing data distributions p_{|D_n|}: TED (top) and WMT (bottom). For the highly imbalanced WMT dataset, the common practice of upsampling low resource languages (T = 5) improves LoRes but at the cost of underfitting HiRes, while it leads to LoRes overfitting on the less imbalanced TED dataset. Dataset characteristics are described in Table 1.
Figure 6: CATS is very effective on highly imbalanced datasets (WMT), where common sampling approaches (T = 1, 5) sacrifice either low resource (LoRes) or high resource (HiRes) languages. CATS significantly improves generalization on LoRes while demonstrating better sample efficiency.
Figure 7: CATS improves generalization in overparameterized models, while the standard approach suffers from overfitting.

Figure 8: CATS is robust in large batch size training (4× batch size, from 33K to 131K tokens).
Table 4: Performance on OPUS-100. CATS easily applies to training at the scale of ∼100 languages and improves low resource languages.
          LoRes                        HiRes                        Avg.
          bos    mar    hin    mkd    ell    bul    fra    kor    All    Lo
Baseline  24.3   10.7   22.9   33.5   38.0   39.1   40.5   19.1   28.5   22.8
-LN only  24.4   10.6   23.7   33.9   38.4   39.5   40.9   19.5   28.9   23.1
CATS      24.6   11.3   24.2   34.1   39.1   40.1   41.2   19.6   29.3   23.6
¹ The authors of [53] provided the downloadable data at https://drive.google.com/file/d/1xNlfgLK55SbNocQh7YpDcFUYymfVNEii/view?usp=sharing
A Dataset Statistics

For the TED experiments, we use the same preprocessed data¹ provided by [53], with the same train, validation, and test splits as in [43]. The data volumes for the related and diverse language groups are summarized in Table 6. Languages are grouped into families based on their similarity. For the TED Related dataset, we have four language families: the Turkic family with Azerbaijani and Turkish, the Slavic family with Belarusian and Russian, the Romance family with Galician and Portuguese, and the Czech-Slovak family with Slovak and Czech. As for the Diverse dataset, the languages are grouped into five families: the Indo-Iranian family with Hindi and Marathi, the Slavic family with Macedonian, Bosnian, and Bulgarian, the Korean family with Korean, the Hellenic family with Greek, and the Romance family with French. The source of the WMT dataset is public parallel corpora from previous WMT shared tasks. We use the same preprocessed data as was used in several multilingual translation tasks [34]. We selected 10 languages with a highly imbalanced data distribution and diverse linguistic features from 5 language families: (1) Arabic; (2) Kazakh and Turkish; (3) Vietnamese; (4) German, Hindi, Italian, Dutch, Romanian; (5) Japanese.
References

[1] Roee Aharoni, Melvin Johnson, and Orhan Firat. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, 2019.
[2] Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019, 2019.
[3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[4] Dimitri P Bertsekas. Constrained optimization and Lagrange multiplier methods. Academic Press, 2014.
[5] Rich Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[6] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794-803. PMLR, 2018.
[7] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pages 192-204. PMLR, 2015.
[8] Jeremy M Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. arXiv preprint arXiv:2103.00065, 2021.
[9] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160-167, 2008.
[10] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116, 2019.
[11] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2051-2060, 2017.
[12] Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. Beyond English-centric multilingual machine translation. arXiv preprint arXiv:2010.11125, 2020.
[13] William Fedus, Barret Zoph, and Noam Shazeer. Switch Transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021.
[14] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412, 2020.
[15] Stanislav Fort, Paweł Krzysztof Nowak, Stanislaw Jastrzebski, and Srini Narayanan. Stiffness: A new perspective on generalization in neural networks. arXiv preprint arXiv:1901.09491, 2019.
[16] Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.
[17] Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. Learning word vectors for 157 languages. arXiv preprint arXiv:1802.06893, 2018.
[18] Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: Closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
[19] Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411-4421. PMLR, 2020.
[20] Stanislaw Jastrzebski, Devansh Arpit, Oliver Astrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof Geras. Catastrophic Fisher explosion: Early phase Fisher matrix impacts generalization. arXiv preprint arXiv:2012.14193, 2020.
[21] Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, and Krzysztof Geras. The break-even point on optimization trajectories of deep neural networks. In International Conference on Learning Representations, 2019.
[22] Sébastien Jean, Orhan Firat, and Melvin Johnson. Adaptive scheduling for multi-task learning. arXiv preprint arXiv:1909.06434, 2019.
[23] Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351, 2017.
[24] Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. The state and fate of linguistic diversity and inclusion in the NLP world. arXiv preprint arXiv:2004.09095, 2020.
[25] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
[26] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
[27] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
[28] Xian Li, Asa Cooper Stickland, Yuqing Tang, and Xiang Kong. Deep Transformers with latent depth. Advances in Neural Information Processing Systems, 33, 2020.
[29] Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joey Gonzalez. Train big, then compress: Rethinking model size for efficient training and inference of Transformers. In International Conference on Machine Learning, pages 5958-5968. PMLR, 2020.
[30] Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, and Sam Kwong. Pareto multi-task learning. arXiv preprint arXiv:1912.12854, 2019.
[31] Zehui Lin, Liwei Wu, Mingxuan Wang, and Lei Li. Learning language specific sub-network for multilingual machine translation. arXiv preprint arXiv:2105.09259, 2021.
[32] Jinlong Liu, Yunzhi Bai, Guoqing Jiang, Ting Chen, and Huayan Wang. Understanding why neural networks generalize well through GSNR of parameters. In International Conference on Learning Representations, 2019.
[33] Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1871-1880, 2019.
[34] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742, 2020.
[35] Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch. Pretrained transformers as universal computation engines. arXiv preprint arXiv:2103.05247, 2021.
[36] Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015.
[37] Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. arXiv preprint arXiv:1812.06162, 2018.
[38] Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, and Hassan Ghasemzadeh. Understanding the role of training regimes in continual learning. arXiv preprint arXiv:2006.06958, 2020.
[39] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292, 2019.
[40] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038, 2019.
[41] Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502, 2019.
[42] Matt Post. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771, 2018.
[43] Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, and Graham Neubig. When and why are pre-trained word embeddings useful for neural machine translation? arXiv preprint arXiv:1804.06323, 2018.
[44] Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, Joao Monteiro, Jan Trmal, and Yoshua Bengio. Multi-task self-supervised learning for robust speech recognition. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6989-6993. IEEE, 2020.
[45] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
[46] Devendra Singh Sachan and Graham Neubig. Parameter sharing methods for multilingual self-attentional translation models. arXiv preprint arXiv:1809.00252, 2018.
[47] Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, and Pushpak Bhattacharyya. Multilingual unsupervised NMT using shared encoder and language-specific decoders. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3083-3089, 2019.
[48] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. arXiv preprint arXiv:1810.04650, 2018.
[49] Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don't decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489, 2017.
[50] Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223, 2019.
[51] Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401, 2020.
[52] Valentin Thomas, Fabian Pedregosa, Bart van Merriënboer, Pierre-Antoine Manzagol, Yoshua Bengio, and Nicolas Le Roux. On the interplay between noise and curvature and its effect on optimization and generalization. In International Conference on Artificial Intelligence and Statistics, pages 3503-3513. PMLR, 2020.
[53] Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. Balancing training for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8526-8537, 2020.
[54] Zirui Wang, Yulia Tsvetkov, Orhan Firat, and Yuan Cao. Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models. arXiv preprint arXiv:2010.05874, 2020.
[55] Shijie Wu and Mark Dredze. Are all languages created equal in multilingual BERT? arXiv preprint arXiv:2005.09093, 2020.
[56] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the Transformer architecture. In International Conference on Machine Learning, pages 10524-10533. PMLR, 2020.
[57] Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. Understanding and improving layer normalization. arXiv preprint arXiv:1911.07013, 2019.
[58] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020.
[59] Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, and Michael W Mahoney. Hessian-based analysis of large batch training and robustness to adversaries. arXiv preprint arXiv:1802.08241, 2018.
[60] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33, 2020.
[61] Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. Share or not? Learning to schedule language-specific capacity for multilingual translation. In International Conference on Learning Representations, 2021.
[62] Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. Improving massively multilingual neural machine translation and zero-shot translation. arXiv preprint arXiv:2004.11867, 2020.
A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network

Jun-Kun Wang¹, Chi-Heng Lin², Jacob Abernethy¹
¹School of Computer Science, Georgia Institute of Technology
²School of Electrical and Computer Engineering, Georgia Institute of Technology
Correspondence to: Jun-Kun Wang <[email protected]>, Chi-Heng Lin <[email protected]>, Jacob Abernethy <[email protected]>

Proceedings of the 38th International Conference on Machine Learning, 2021. arXiv:2010.01618.

Abstract

Incorporating a so-called "momentum" dynamic in gradient descent methods is widely used in neural net training, as it has been broadly observed that, at least empirically, it often leads to significantly faster convergence. At the same time, there are very few theoretical guarantees in the literature to explain this apparent acceleration effect. Even for classical strongly convex quadratic problems, several existing results only show that Polyak's momentum has an accelerated linear rate asymptotically. In this paper, we first revisit the quadratic problems and show a non-asymptotic accelerated linear rate of Polyak's momentum. Then, we provably show that Polyak's momentum achieves acceleration for training a one-layer wide ReLU network and a deep linear network, which are perhaps the two most popular canonical models for studying optimization and deep learning in the literature. Prior work (Du et al., 2019b; Wu et al., 2019c) showed that using vanilla gradient descent, and with the use of overparameterization, the error decays as (1 − Θ(1/κ))^t after t iterations, where κ is the condition number of a Gram matrix. Our result shows that with the appropriate choice of parameters, Polyak's momentum has a rate of (1 − Θ(1/√κ))^t. For the deep linear network, prior work (Hu et al., 2020b) showed that vanilla gradient descent has a rate of (1 − Θ(1/κ))^t, where κ is the condition number of a data matrix. Our result shows that an accelerated rate (1 − Θ(1/√κ))^t is achievable by Polyak's momentum. This work establishes that momentum does indeed speed up neural net training.

Introduction

Among all the momentum methods, the most popular one seems to be Polyak's momentum (a.k.a. Heavy Ball momentum) (Polyak, 1964), which is the default choice of momentum in PyTorch and Tensorflow. The success of Polyak's momentum in deep learning is widely appreciated, and almost all of the recently developed adaptive gradient methods like Adam (Kingma & Ba, 2015), AMSGrad (Reddi et al., 2018), and AdaBound (Luo et al., 2019) adopt the use of Polyak's momentum, instead of Nesterov's momentum.
However, despite its popularity, little is known in theory about why Polyak's momentum helps to accelerate training neural networks. Even for convex optimization, problems like strongly convex quadratic problems seem to be one of the few cases that discrete-time Polyak's momentum method provably achieves faster convergence than standard gradient descent (e.g. Lessard et al. (2016); Goh (2017) (2020)). On the other hand, the theoretical guarantees of Adam, AMSGrad , or AdaBound are only worse if the momentum parameter β is non-zero and the guarantees deteriorate as the momentum parameter increases, which do not show any advantage of the use of momentum (Alacaoglu et al., 2020). Moreover, the convergence rates that have been established for Polyak's momentum in several related works (Gadat et al., 2016;Sun et al., 2019;Yang et al., 2018;Liu et al., 2020c;Mai & Johansson, 2020) do not improve upon those for vanilla gradient descent or vanilla SGD in the worst case. Lessard et al. (2016); Ghadimi et al. (2015) even show negative cases in convex optimization that the use of Polyak's momentum results in divergence. Furthermore, Kidambi et al. (2018) construct a problem instance for which the momentum method under its arXiv:2010.01618v6 [cs.LG] 10 Jun 2021
Algorithm 1 Gradient descent with Polyak's momentum (Polyak, 1964) (Equivalent Version 1) 1: Required:
Step size η and momentum parameter β. 2: Init: w 0 ∈ R d and M −1 = 0 ∈ R d . 3: for t = 0 to T do 4:
Given current iterate w t , obtain gradient ∇ (w t ).
5:
Update momentum M t = βM t−1 + ∇ (w t ).
6:
Update iterate w t+1 = w t − ηM t . 7: end for optimal tuning is outperformed by other algorithms. Wang et al. (2020) show that Polyak's momentum helps escape saddle points faster compared with the case without momentum, which is the only provable advantage of Polyak's momentum in non-convex optimization that we are aware of. A solid understanding of the empirical success of Polyak's momentum in deep learning has eluded researchers for some time.
We begin this paper by first revisiting the use of Polyak's momentum for the class of strongly convex quadratic problems,
min w∈R d 1 2 w Γw + b w,(1)
where Γ ∈ R d×d is a PSD matrix such that λ max (Γ) = α, λ min (Γ) = µ > 0. This is one of the few 1 known examples that Polyak's momentum has a provable globally accelerated linear rate in the discrete-time setting. Yet even for this class of problems existing results only establish an accelerated linear rate in an asymptotic sense and several of them do not have an explicit rate in the non-asymptotic regime (e.g. Polyak (1964); Lessard et al. (2016); Mitliagkas (2019); Recht (2018)). Is it possible to prove a non-asymptotic accelerated linear rate in this case? We will return to this question soon.
For general µ-strongly convex, α-smooth, and twice differentiable functions (not necessarily quadratic), denoted as F 2 µ,α , Theorem 9 in Polyak (1964) shows an asymptotic accelerated linear rate when the iterate is sufficiently close to the minimizer so that the landscape can be well approximated by that of a quadratic function. However, the definition of the neighborhood was not very precise in the paper. In this work, we show a locally accelerated linear rate under a quantifiable definition of the neighborhood. Furthermore, we provably show that Polyak's momentum helps to achieve a faster convergence for training two neural networks, compared to vanilla GD. The first is training a one-layer ReLU network. Over the past few years there have appeared an enormous number of works considering training a one-layer ReLU network, provably showing con-Algorithm 2 Gradient descent with Polyak's momentum (Polyak, 1964) (Equivalent Version 2) 1: Required: step size η and momentum parameter β. 2: Init: w 0 = w −1 ∈ R d 3: for t = 0 to T do 4:
Given current iterate w t , obtain gradient ∇ (w t ).
5:
Update iterate w t+1 = w t − η∇ (w t ) + β(w t − w t−1 ). 6: end for (2020)). However, we are not aware of any theoretical works that study the momentum method in neural net training except the work Krichene et al. (2020). These authors show that SGD with Polyak's momentum (a.k.a. stochastic Heavy Ball) with infinitesimal step size, i.e. η → 0, for training a one-hidden-layer network with an infinite number of neurons, i.e. m → ∞, converges to a stationary solution. However, the theoretical result does not show a faster convergence by momentum. In this paper we consider the discrete-time setting and nets with finitely many neurons. We provide a non-asymptotic convergence rate of Polyak's momentum, establishing a concrete improvement relative to the best-known rates for vanilla gradient descent.
Our setting of training a ReLU network follows the same framework as previous results, including Du et al. (2019b);Arora et al. (2019c); Song & Yang (2019). Specifically, we study training a one-hidden-layer ReLU neural net of the form,
N ReLU W (x) := 1 √ m m r=1 a r σ( w (r) , x ),(2)
where σ(z) := z · 1{z ≥ 0} is the ReLU activation, w (1) , . . . , w (m) ∈ R d are the weights of m neurons on the first layer, a 1 , . . . , a m ∈ R are weights on the second layer, and N ReLU W (x) ∈ R is the output predicted on input x. Assume n number of samples {x i ∈ R d } n i=1 is given. (2019), we define a Gram matrix H ∈ R n×n for the weights W and its expectationH ∈ R n×n over the random draws of w (r) ∼ N (0, I d ) ∈ R d whose (i, j) entries are defined as follows,
H(W ) i,j = m r=1 x i x j m 1{ w (r) , x i ≥ 0 & w (r) , x j ≥ 0} H i,j := E w (r) [x i x j 1{ w (r) , x i ≥ 0 & w (r) , x j ≥ 0}].
(3) The matrixH is also called a neural tangent kernel (NTK) matrix in the literature (e.g. Jacot et al. (2018); Yang (2019); Bietti & Mairal (2019)). Assume that the smallest eigenvalue λ min (H) is strictly positive and certain conditions about the step size and the number of neurons are satisfied. Previous works (Du et al., 2019b;Song & Yang, 2019) show a linear rate of vanilla gradient descent, while we show an accelerated linear rate 2 of gradient descent with Polyak's momentum. As far as we are aware, our result is the first acceleration result of training an over-parametrized ReLU network.
The second result is training a deep linear network. The deep linear network is a canonical model for studying optimization and deep learning, and in particular for understanding gradient descent (e.g. Shamir (2019) (2020)). In this paper, following (Du & Hu, 2019;Hu et al., 2020b), we study training a L-layer linear network of the form,
N L-linear W (x) := 1 √ m L−1 dy W (L) W (L−1) · · · W (1) x, (4)
where W (l) ∈ R d l ×d l−1 is the weight matrix of the layer l ∈ [L], and d 0 = d, d L = d y and d l = m for l = 1, L. Therefore, except the first layer W (1) ∈ R m×d and the last layer W (L) ∈ R dy×m , all the intermediate layers are m × m square matrices. The scaling 1 √ m L−1 dy is necessary to ensure that the network's output at the initialization N L-linear W0 (x) has the same size as that of the input x, in the sense that E[ N L-linear W0 (x) 2 ] = x 2 , where the expectation is taken over some appropriate random initialization of the network (see e.g. Du & Hu (2019); Hu et al. (2020b)). Hu et al. (2020b) show vanilla gradient descent with orthogonal initialization converges linearly and the required width of the network m is independent of the depth L, while we show an accelerated linear rate of Polyak's momentum and the width m is also independent of L. To our knowledge, this is the first acceleration result of training a deep linear network.
A careful reader may be tempted by the following line of reasoning: a deep linear network (without activation) is effectively a simple linear model, and we already know that a linear model with the squared loss gives a quadratic objective for which Polyak's momentum exhibits an accelerated convergence rate. But this intuition, while natural, is not quite right: it is indeed nontrivial even to show that vanilla gradient descent provides a linear rate on deep linear networks (Hu et al., 2020b;Du & Hu, 2019;Shamir, 2019;Arora et al., 2019a;Hardt & Ma, 2016;Wu et al., 2019a;Zou et al., 2020), as the optimization landscape is non-convex. Existing works show that under certain assumptions, all the local minimum are global (Kawaguchi, 2016;Laurent & von Brecht, 2018;Yun et al., 2018;Lu & Kawaguchi, 2017;Zhou & Liang, 2018;Hardt & Ma, 2016). These results are not sufficient to explain the linear convergence of momentum, let alone the acceleration; see Section H in the appendix for an empirical result.
Similarly, it is known that under the NTK regime the output of the ReLU network trained by gradient descent can be approximated by a linear model (e.g. Hu et al. (2020a)). However, this result alone neither implies a global convergence of any algorithm nor characterizes the optimization landscape. While (Liu et al., 2020a) attempt to derive an algorithm-independent equivalence of a class of linear models and a family of wide networks, their result requires the activation function to be differentiable which does not hold for the most prevalent networks like ReLU. Also, their work heavily depends on the regularity of Hessian, making it hard to generalize beyond differentiable networks. Hence, while there has been some progress understanding training of wide networks through linear models, there remains a significant gap in applying this to the momentum dynamics of a non-differentiable networks. Liu et al. (2020b) establish an interesting connection between solving an overparametrized non-linear system of equations and solving the classical linear system. They show that for smooth and twice differentiable activation, the optimization landscape of an over-parametrized network satisfies a (non-convex) notion called the Polyak-Lokasiewicz (PL) condition (Polyak, 1963)
, i.e. 1 2 ∇ (w) 2 ≥ µ ( (w) − (w * )),
where w * is a global minimizer and µ > 0. It is not clear whether their result can be extended to ReLU activation, however, and the existing result of Danilova et al. (2018) for the discretetime Polyak's momentum under the PL condition does not give an accelerated rate nor is it better than that of vanilla GD. Aujol et al. (2020) show a variant of Polyak's momentum method having an accelerated rate in a continuous-time limit for a problem that satisfies PL and has a unique global minimizer. It is unclear if their result is applicable to our problem. Therefore, showing the advantage of training the ReLU network and the deep linear network by using existing results of Polyak's momentum can be difficult.
To summarize, our contributions in the present work include • In convex optimization, we show an accelerated linear rate in the non-asymptotic sense for solving the class of the strongly convex quadratic problems via Polyak's momentum (Theorem 7). We also provide an analysis of the accelerated local convergence for the class of functions in F 2 µ,α (Theorem 8). We establish a technical result (Theorem 5) that helps to obtain these non-asymptotic rates.
• In non-convex optimization, we show accelerated linear rates of the discrete-time Polyak's momentum for training an over-parametrized ReLU network and a deep linear network (Theorems 9 and 10).
Furthermore, we will develop a modular analysis to show all the results in this paper. We identify conditions and propose a meta theorem of acceleration when the momentum method exhibits a certain dynamic, which can be of independent interest. We show that when applying Polyak's momentum for these problems, the induced dynamics exhibit a form where we can directly apply our meta theorem.
Preliminaries
Throughout this paper, · F represents the Frobenius norm and · 2 represents the spectral norm of a matrix, while · represents l 2 norm of a vector. We also denote ⊗ the Kronecker product, σ max (·) = · 2 and σ min (·) the largest and the smallest singular value of a matrix respectively.
For the case of training neural networks, we will consider minimizing the squared loss
(W ) := 1 2 n i=1 y i − N W (x i ) 2 ,(5)
where x i ∈ R d is the feature vector, y i ∈ R dy is the label of sample i, and there are n number of samples. For training the ReLU network, we have N W (·) := N ReLU W (·), d y = 1, and W := {w (r) } m r=1 , while for the deep linear network, we have N W (·) := N L-linear W (·), and W represents the set of all the weight matrices, i.e. W := {W (l) } L l=1 . The notation A k represents the k th matrix power of A.
Prior result of Polyak's momentum
Algorithm 1 and Algorithm 2 show two equivalent presentations of gradient descent with Polyak's momentum. Given the same initialization, one can show that Algorithm 1 and Algorithm 2 generate exactly the same iterates during optimization.
Let us briefly describe a prior acceleration result of Polyak's momentum. The recursive dynamics of Poylak's momentum for solving the strongly convex quadratic problems (1) can be written as
w t+1 − w * w t − w * = I d − ηΓ + βI d −βI d I d 0 d :=A · w t − w * w t−1 − w * ,(6)
where w * is the unique minimizer. By a recursive expansion, one can get
w t − w * w t−1 − w * ≤ A t 2 w 0 − w * w −1 − w * .(7)
Hence, it suffices to control the spectral norm of the matrix power A t 2 for obtaining a convergence rate. In the literature, this is achieved by using Gelfand's formula.
Theorem 1. (Gelfand (1941); see also Foucart (2018)) (Gelfand's formula) Let A be a d×d matrix. Define the spectral radius ρ(A)
:= max i∈[d] |λ i (A)|, where λ i (·) is the i th eigenvalue. Then, there exists a non-negative sequence { t } such that A t 2 = (ρ(A) + t ) t and lim t→∞ t = 0.
We remark that there is a lack of the convergence rate of t in Gelfand's formula in general.
Denote κ := α/µ the condition number. One can control the spectral radius ρ(A) as ρ(A) ≤ 1 − 2 √ κ+1 by choosing η and β appropriately, which leads to the following result.
= 1 − 2 √ κ+1 2 has w t+1 − w * w t − w * ≤ 1 − 2 √ κ + 1 + t t+1 w 0 − w * w −1 − w * ,
where t is a non-negative sequence that goes to zero.
That is, when t → ∞, Polyak's momentum has the (1 − 2 √ κ+1 ) rate, which has a better dependency on the condition number κ than the 1 − Θ( 1 κ ) rate of vanilla gradient descent. A concern is that the bound is not quantifiable for a finite t. On the other hand, we are aware of a different analysis that leverages Chebyshev polynomials instead of Gelfand's formula (e.g. Liu & Belkin (2018)), which manages to obtain a t(1 − Θ( 1 √ κ )) t convergence rate. So the accelerated linear rate is still obtained in an asymptotic sense. Theorem 9 in Can et al. (2019) shows a rate max{C 1 , tC 2 }(1 − Θ( 1 √ κ ) t ) for some constantsC 1 andC 2 under the same choice of the momentum parameter and the step size as Theorem 2. However, for a large t, the dominant term could be t(1 − Θ( 1 √ κ ) t ). In this paper, we aim at obtaining a bound that (I) holds for a wide range of values of the parameters, (II) has a dependency on the squared root of the condition number √ κ, (III) is quantifiable in each iteration and is better than the rate t(1 − Θ( 1 √ κ )) t .
(One-layer ReLU network) Settings and Assumptions
The ReLU activation is not differentiable at zero. So for solving (5), we will replace the notion of gradient in Algorithm 1 and 2 with subgradient ∂ (Wt)
∂w (r) t := 1 √ m n i=1 N Wt (x i ) − y i a r · 1[ w (r)
t , x i ≥ 0]x i and update the neuron r as w
(r) t+1 = w (r) t − η ∂ (Wt) ∂w (r) t + β w (r) t − w (r)
t−1 . As described in the introduction, we assume that the smallest eigenvalue of the Gram matrixH ∈ R n×n is strictly positive, i.e. λ min (H) > 0. We will also denote the largest eigenvalue of the Gram matrixH as λ max (H) and denote the condition number of the Gram matrix as κ := λmax(H) λmin(H) . Du et al. (2019b) show that the strict positiveness assumption is indeed mild. Specifically, they show that if no two inputs are parallel, then the least eigenvalue is strictly positive. Panigrahi et al. (2020) were able to provide a quantitative lower bound under certain conditions. Following the same framework of Du et al. (2019b), we consider that each weight vector w (r) ∈ R d is initialized according to the normal distribution, i.e. w (r) ∼ N (0, I d ), and each a r ∈ R is sampled from the Rademacher distribution, i.e. a r = 1 with probability 0.5; and a r = −1 with probability 0.5. We also assume x i ≤ 1 for all samples i. As the previous works (e.g. Li & Liang (2018); Ji & Telgarsky (2020); Du et al. (2019b)), we consider only training the first layer {w (r) } and the second layer {a r } is fixed throughout the iterations. We will denote u t ∈ R n whose i th entry is the network's prediction for sample i, i.e. u t [i] := N ReLU
Wt (x i ) in iteration t and denote y ∈ R n the vector whose i th element is the label of sample i. The following theorem is a prior result due to Du et al. (2019b).
Theorem 3. (Theorem 4.1 in Du et al. (2019b)) Assume that λ := λ min (H)/2 > 0 and that w (r) 0 ∼ N (0, I d ) and a r uniformly sampled from {−1, 1}. Set the number of nodes m = Ω(λ −4 n 6 δ −3 ) and the constant step size η = O( λ n 2 ). Then, with probability at least 1−δ over the random initialization, vanilla gradient descent, i.e. Algorithm 1& 2 with β = 0, has u t − y 2 ≤ (1 − ηλ) t · u 0 − y 2 .
Later Song & Yang (2019) improve the network size m to m = Ω(λ −4 n 4 log 3 (n/δ)). Wu et al. (2019c) provide an improved analysis over Du et al. (2019b), which shows that the step size η of vanilla gradient descent can be set as η = 1 c1λmax(H) for some quantity c 1 > 0. The result in turn leads to a convergence rate (1 − 1 c2κ ) for some quantity c 2 > 0. However, the quantities c 1 and c 2 are not universal constants and actually depend on the problem parameters λ min (H), n, and δ. A question that we will answer in this paper is "Can Polyak's momentum achieve an accelerated linear rate 1 − Θ( 1 √ κ ) , where the factor Θ( 1 √ κ ) does not depend on any other problem parameter?".
(Deep Linear network) Settings and Assumptions
For the case of deep linear networks, we will denote X := [x 1 , . . . , x n ] ∈ R d×n the data matrix and Y := [y 1 , . . . , y n ] ∈ R dy×n the corresponding label matrix. We will also denoter := rank(X) and the condition number κ := λmax(X X) λr(X X) . Following Hu et al. (2020b), we will assume that the linear network is initialized by the orthogonal initialization, which is conducted by sampling uniformly from (scaled) orthogonal matrices such that (W
(1) 0 ) W (1) 0 = mI d , W (L) 0 (W (L) 0 ) = mI dy , and (W (l) 0 ) W (l) 0 = W (l) 0 (W (l) 0 ) = mI m for layer 2 ≤ l ≤ L − 1. We will denote W (j:i) := W j W j−1 · · · W i = Π j l=i W l , where 1 ≤ i ≤ j ≤ L and W (i−1:i) = I. We also denote the network's output U := 1 √ m L−1 dy W (L:1) X ∈ R dy×n .
In our analysis, following Du & Hu (2019); Hu et al. (2020b), we will further assume that (A1) there exists a W * such that Y = W * X, X ∈ R d×r , andr = rank(X), which is actually without loss of generality (see e.g. the discussion in Appendix B of Du & Hu (2019)).
m ≥ C X 2 F σ 2 max (X) κ 2 d y (1 + W *2
2 ) + log(r/δ) and m ≥ max{d x , d y } for some δ ∈ (0, 1) and a sufficiently large constant C > 0. Set the constant step size η = dy 2Lσ 2 max (X) . Then, with probability at least 1 − δ over the random initialization, vanilla gradient descent, i.e. Algorithm 1& 2 with
β = 0, has U t − Y 2 F ≤ 1 − Θ( 1 κ ) t · U 0 − Y 2 F .
Modular Analysis
In this section, we will provide a meta theorem for the following dynamics of the residual vector ξ t ∈ R n0 ,
ξ t+1 ξ t = I n0 − ηH + βI n0 −βI n0 I n0 0 n0 ξ t ξ t−1 + ϕ t 0 n0 ,(8)
where η is the step size, β is the momentum parameter, H ∈ R n0×n0 is a PSD matrix, ϕ t ∈ R n0 is some vector, and I n0 is the n 0 × n 0 -dimensional identity matrix. Note that ξ t and ϕ t depend on the underlying model learned at iteration t, i.e. depend on W t .
We first show that the residual dynamics of Polyak's momentum for solving all the four problems in this paper are in the form of (8). The proof of the following lemmas (Lemma 2, 3, and 4) are available in Appendix B.
Realization: Strongly convex quadratic problems
One can easily see that the dynamics of Polyak's momentum (6) for solving the strongly convex quadratic problem (1) is in the form of (8). We thus have the following lemma.
Lemma 1. Applying Algorithm 1 or Algorithm 2 to solving the class of strongly convex quadratic problems (1) induces a residual dynamics in the form of (8), where ξ t = w t − w * (and hence n 0 = d), H = Γ, ϕ t = 0 d .
Realization: Solving F 2 µ,α
A similar result holds for optimizing functions in F 2 µ,α . Lemma 2. Applying Algorithm 1 or Algorithm 2 to minimizing a function f (w) ∈ F 2 µ,α induces a residual dynamics in the form of (8)
, where ξ t = w t − w * , H = 1 0 ∇ 2 f (1−τ )w 0 +τ w * dτ , ϕ t = η 1 0 ∇ 2 f (1−τ )w 0 + τ w * dτ − 1 0 ∇ 2 f (1 − τ )w t + τ w * dτ (w t − w * ), where w * := arg min w f (w).
Realization: One-layer ReLU network
More notations: For the analysis, let us define the event
A ir := {∃w ∈ R d : w − w (r) 0 ≤ R ReLU , 1{x i w (r) 0 } = 1{x i w ≥ 0}}, where R ReLU > 0 is a number to be de- termined later.
The event A ir means that there exists a w ∈ R d which is within the R ReLU -ball centered at the initial point w (r) 0 such that its activation pattern of sample i is different from that of w (r) 0 . We also denote a random set S i := {r ∈ [m] : 1{A ir } = 0} and its complementary set
S ⊥ i := [m] \ S i .
Lemma 3 below shows that training the ReLU network N -ReLU W (·) via momentum induces the residual dynamics in the form of (8).
Lemma 3. (Residual dynamics of training the ReLU net
- work N ReLU W (·)) Denote (H t ) i,j := H(W t ) i,j = 1 m m r=1 x i x j × 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0}.
Applying Algorithm 1 or Algorithm 2 to (5) for training the ReLU network N ReLU
W (x) induces a residual dynam- ics in the form of (8) such that ξ t [i] = N ReLU Wt (x i ) − y i (and hence n 0 = d), H = H 0 , and ϕ t = φ t + ι t , where each element i of ξ t ∈ R n is the residual error of the sample i, and the i th -element of φ t ∈ R n satisfies |φ t [i]| ≤ 2η √ n|S ⊥ i | m u t − y + β t−1 s=0 β t−1−s u s − y , and ι t = η (H 0 − H t ) ξ t ∈ R n .
Realization: Deep Linear network
Lemma 4 below shows that the residual dynamics due to Polyak's momentum for training the deep linear network is indeed in the form of (8). In the lemma, "vec" stands for the vectorization of the underlying matrix in column-first order.
Lemma 4. (Residual dynamics of training N L-linear W (·)) Denote M t,l the momentum term of layer l at iteration t, which is recursively defined as M t,l = βM t,l−1 +
∂ (W (L:1) t ) ∂W (l) t . De- note H t := 1 m L−1 dy L l=1 [(W (l−1:1) t X) (W (l−1:1) t X) ⊗ W (L:l+1) t (W (L:l+1) t ) ] ∈ R dyn×dyn .
Applying Algorithm 1 or Algorithm 2 to (5) for training the deep linear network N L-linear W (x) induces a residual dynamics in the form of (8) such that
ξ t = vec(U t − Y ) ∈ R dyn (and hence n 0 = d y n), H = H 0 , and ϕ t = φ t +ψ t + ι t ∈ R dyn , where the vector φ t = 1 √ m L−1 dy vec (Φ t X) with Φ t = Π l W (l) t − ηM t,l − W (L:1) t + η L l=1 W (L:l+1) t M t,l W (l−1:1) t ,
and the vector ψ t is
ψ t = 1 √ m L−1 dy vec (L − 1)βW (L:1) t X + βW (L:1) t−1 X − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t X , and ι t = η(H 0 − H t )ξ t .
A key theorem of bounding a matrix-vector product
Our meta theorem of acceleration will be based on Theorem 5 in the following, which upper-bounds the size of the matrix-vector product of a matrix power A k and a vector v 0 . Compared to Gelfand's formula (Theorem 1), Theorem 5 below provides a better control of the size of the matrix-vector product, since it avoids the dependency on the unknown sequence { t }. The result can be of independent interest and might be useful for analyzing Polyak's momentum for other problems in future research.
Theorem 5. Let A := (1 + β)I n − ηH −βI n I n 0 ∈ R 2n×2n . Suppose that H ∈ R n×n is a positive semidefinite matrix. Fix a vector v 0 ∈ R n . If β is chosen to satisfy 1 ≥ β > max{ 1 − ηλ min (H) 2 , 1 − ηλ max (H) 2 }, then A k v 0 ≤ β k C 0 v 0 ,(9)
where the constant
C 0 := √ 2(β + 1) min{h(β, ηλ min (H)), h(β, ηλ max (H))} ≥ 1,(10)and the function h(β, z) is defined as h(β, z) := − β − (1 − √ z) 2 β − (1 + √ z) 2 .
Note that the constant C 0 in Theorem 5 depends on β and ηH. It should be written as C 0 (β, ηH) to be precise. However, for the brevity, we will simply denote it as C 0 when the underlying choice of β and ηH is clear from the context. The proof of Theorem 5 is available in Appendix C. Theorem 5 allows us to derive a concrete upper bound of the residual errors in each iteration of momentum, and consequently allows us to show an accelerated linear rate in the non-asymptotic sense. The favorable property of the bound will also help to analyze Polyak's momentum for training the neural networks. As shown later in this paper, we will need to guarantee the progress of Polyak's momentum in each iteration, which is not possible if we only have a quantifiable bound in the limit. Based on Theorem 5, we have the following corollary. The proof is in Appendix C.1.
Corollary 1. Assume that λ min (H) > 0. Denote κ := λ max (H)/λ min (H). Set η = 1/λ max (H) and set β = 1 − 1 2 ηλ min (H) 2 = 1 − 1 2 √ κ 2 . Then, C 0 ≤ 4 √ κ.
Meta theorem
Let λ > 0 be the smallest eigenvalue of the matrix H that appears on the residual dynamics (8). Our goal is to show that the residual errors satisfy
ξ s ξ s−1 ≤ √ β + 1 ϕ C 2 s (C 0 + 1 ϕ C 1 ) ξ 0 ξ −1 ,(11)
where C 0 is the constant defined on (10), and C 1 , C 2 ≥ 0 are some constants, 1 ϕ is an indicator if any ϕ t on the residual dynamics (8) is a non-zero vector. For the case of training the neural networks, we have 1 ϕ = 1.
Theorem 6. (Meta theorem for the residual dynamics (8)) Assume that the step size η and the momentum parameter β satisfying 1 ≥ β >
max{ 1 − ηλ min (H) 2 , 1 − ηλ max (H) 2 },
are set appropriately so that (11) holds at iteration
s = 0, 1, . . . , t − 1 implies that t−1 s=0 A t−s−1 ϕ s 0 ≤ √ β + 1 ϕ C 2 t C 3 ξ 0 ξ −1 . (12) Then, we have ξ t ξ t−1 ≤ √ β + 1 ϕ C 2 t (C 0 + 1 ϕ C 1 ) ξ 0 ξ −1 ,(13)
holds for all t, where C 0 is defined on (10) and C 1 , C 2 , C 3 ≥ 0 are some constants satisfying:
√ β t C 0 + √ β + 1 ϕ C 2 t 1 ϕ C 3 ≤ √ β + 1 ϕ C 2 t (C 0 + 1 ϕ C 1 ).(14)
Proof. The proof is by induction. At s = 0, (11) holds since C 0 ≥ 1 by Theorem 5. Now assume that the inequality holds at s = 0, 1, . . . , t − 1. Consider iteration t.
Recursively expanding the dynamics (8), we have
ξ t ξ t−1 = A t ξ 0 ξ −1 + t−1 s=0 A t−s−1 ϕ s 0 .(15)
By Theorem 5, the first term on the r.h.s. of (15) can be bounded by
A t ξ 0 ξ −1 ≤ β t C 0 ξ 0 ξ −1(16)
By assumption, given (11) holds at s = 0, 1, . . . , t − 1, we have (12). Combining (12), (14), (15), and (16), we have (13) and hence the proof is completed.
Remark: As shown in the proof, we need the residual errors be tightly bounded as (11)
Main results
The important lemmas and theorems in the previous section help to show our main results in the following subsections. The high-level idea to obtain the results is by using the meta theorem (i.e. Theorem 6). Specifically, we will need to show that if the underlying residual dynamics satisfy (11) for all the previous iterations, then the terms {ϕ s } in the dynamics satisfy (12)
1 ≥ β > max{ 1 − √ ηµ 2 , 1 − √ ηα 2 }. Gradient de- scent with Polyak's momentum for solving (1) has w t − w * w t−1 − w * ≤ β t C 0 w 0 − w * w −1 − w * ,(17)
where the constant C 0 is defined as
C 0 := √ 2(β+1) √ min{h(β,ηλmin(Γ)),h(β,ηλmax(Γ))} ≥ 1,(18)and h(β, z) = − β − (1 − √ z) 2 β − (1 + √ z) 2 .
Consequently, if the step size η = 1 α and the momentum parameter
β = 1 − 1 2 √ κ 2 , then it has w t − w * w t−1 − w * ≤ 1 − 1 2 √ κ t 4 √ κ w 0 − w * w −1 − w * . (19) Furthermore, if η = 4 ( √ µ+ √ α) 2 and β approaches β → 1 − 2 √ κ+1 2
from above, then it has a convergence rate
approximately 1 − 2 √ κ+1 as t → ∞.
The convergence rates shown in the above theorem do not depend on the unknown sequence { t }. Moreover, the rates depend on the squared root of the condition number √ κ. We have hence established a non-asymptotic accelerated linear rate of Polyak's momentum, which helps to show the advantage of Polyak's momentum over vanilla gradient descent in the finite t regime. Our result also recovers the rate 1 − 2 √ κ+1 asymptotically under the same choices of the parameters as the previous works. The detailed proof can be found in Appendix D, which is actually a trivial application of Lemma 1, Theorem 6, and Corollary 1 with
C 1 = C 2 = C 3 = 0.
4.2. Non-asymptotic accelerated linear rate of the local convergence for solving f (·) ∈ F 2 µ,α
Here we provide a local acceleration result of the discretetime Polyak's momentum for general smooth strongly convex and twice differentiable function F 2 µ,α . Compared to Theorem 9 of (Polyak, 1964), Theorem 8 clearly indicates the required distance that ensures an acceleration when the iterate is in the neighborhood of the global minimizer. Furthermore, the rate is in the non-asymptotic sense instead of the asymptotic one. We defer the proof of Theorem 8 to Appendix E.
Theorem 8. Assume that the function f (·) ∈ F 2 µ,α and its Hessian is α-Lipschitz. Denote the condition number κ := α µ . Suppose that the initial point satisfies
w 0 − w * w −1 − w * ≤ 1 683κ 3/2 . Then, Gradient descent with Polyak's momentum with the step size η = 1 α and the momentum parameter β = 1 − 1 2 √ κ 2 for solving min w f (w) has w t+1 − w * w t − w * ≤ 1 − 1 4 √ κ t+1 8 √ κ w 0 − w * w −1 − w * ,(20)
where w * = arg min w f (w).
Acceleration for training N ReLU
W (x)
Before introducing our result of training the ReLU network, we need the following lemma. (2019)] Set m = Ω(λ −2 n 2 log(n/δ)). Suppose that the neurons w
(1) 0 , . . . , w (m) 0 are i.i.d. generated by N (0, I d )
initially. Then, with probability at least 1 − δ, it holds that
H 0 −H F ≤ λ min (H) 4 , λ min H 0 ≥ 3 4 λ min (H), and λ max H 0 ≤ λ max (H) + λ min (H) 4 .
Lemma 5 shows that by the random initialization, with probability 1 − δ, the least eigenvalue of the Gram matrix H := H 0 defined in Lemma 3 is lower-bounded and the largest eigenvalue is close to λ max (H). Furthermore, Lemma 5 implies that the condition number of the Gram matrix H 0 at the initializationκ := λmax(H0) λmin(H0) satisfies
κ ≤ 4 3 κ + 1 3 , where κ := λmax(H) λmin(H) . Theorem 9. (One-layer ReLU network N ReLU W (x)) Assume that λ := 3λmin(H) 4 > 0 and that w (r) 0 ∼ N (0, I d ) and a r uniformly sampled from {−1, 1}. Denote λ max := λ max (H) + λmin(H) 4
and denoteκ := λ max /λ = (4κ + 1)/3. Set a constant step size η = 1 λmax , fix momentum parameter β = 1 − 1 2κ 2 , and finally set the number of network nodes m = Ω(λ −4 n 4 κ 2 log 3 (n/δ)). Then, with probability at least 1 − δ over the random initialization, gradient descent with Polyak's momentum satisfies for any t,
ξ t ξ t−1 ≤ 1 − 1 4 √κ t · 8 √κ ξ 0 ξ −1 .(21)
We remark thatκ, which is the condition number of the Gram matrix H 0 , is within a constant factor of the condition number ofH. Therefore, Theorem 9 essentially shows an accelerated linear rate 1 − Θ( 1 √ κ ) . The rate has an improved dependency on the condition number, i.e. √ κ instead of κ, which shows the advantage of Polyak's momentum over vanilla GD when the condition number is large. We believe this is an interesting result, as the acceleration is akin to that in convex optimization, e.g. Nesterov (2013) Our result also implies that over-parametrization helps acceleration in optimization. To our knowledge, in the literature, there is little theory of understanding why overparametrization can help training a neural network faster. The only exception that we are aware of is Arora et al. (2018), which shows that the dynamic of vanilla gradient descent for an over-parametrized objective function exhibits some momentum terms, although their message is very different from ours. The proof of Theorem 9 is in Appendix F.
Acceleration for training
N L-linear W (x) Theorem 10. (Deep linear network N L-linear W (x)) Denote λ := Lσ 2 min (X) dy and κ := σ 2 max (X) σ 2 min (X) . Set a constant step size η = dy Lσ 2 max (X) , fix momentum parameter β = 1 − 1 2 √ κ 2 ,
and finally set a parameter m that controls the width m ≥ C κ 5 σ 2 max (X) d y (1 + W * 2 2 ) + log(r/δ) and m ≥ max{d x , d y } for some constant C > 0. Then, with probability at least 1−δ over the random orthogonal initialization, gradient descent with Polyak's momentum satisfies for any t,
ξ t ξ t−1 ≤ 1 − 1 4 √ κ t · 8 √ κ ξ 0 ξ −1 .(22)
Conclusion
We show some non-asymptotic acceleration results of the discrete-time Polyak's momentum in this paper. The results not only improve the previous results in convex optimization but also establish the first time that Polyak's momentum has provable acceleration for training certain neural networks. We analyze all the acceleration results from a modular framework. We hope the framework can serve as a building block towards understanding Polyak's momentum in a more unified way.
Acknowledgment
Allen-Zhu, Z., Li, Y., and Song, Z.
A. Linear-rate results of the discrete-time Polyak's momentum
In the discrete-time setting, for general smooth, strongly convex, and differentiable functions, a linear rate of the global convergence is shown by Ghadimi et al. (2015) and Shi et al. (2018). However, the rate is not an accelerated rate and is not better than that of the vanilla gradient descent. To our knowledge, the class of the strongly convex quadratic problems is the only known example that Polyak's momentum has a provable accelerated linear rate in terms of the global convergence in the discrete-time setting.
B. Proof of Lemma 2, Lemma 3, and Lemma 4
Lemma 2: Applying Algorithm 1 or Algorithm 2 to minimizing a function f (w) ∈ F 2 µ,α induces a residual dynamics in the form of (8), where
ξ t = w t − w * H = 1 0 ∇ 2 f (1 − τ )w 0 + τ w * dτ ϕ t = η 1 0 ∇ 2 f (1 − τ )w 0 + τ w * dτ − 1 0 ∇ 2 f (1 − τ )w t + τ w * dτ (w t − w * ),
where w * := arg min w f (w).
Proof. We have
w t+1 − w * w t − w * = I d + βI d −βI d I d 0 d · w t − w * w t−1 − w * + −η∇f (w t ) 0 = I d − η 1 0 ∇ 2 f (1 − τ )w t + τ w * dτ + βI d −βI d I d 0 d · w t − w * w t−1 − w * = I d − η 1 0 ∇ 2 f (1 − τ )w 0 + τ w * dτ + βI d −βI d I d 0 d · w t − w * w t−1 − w * + η 1 0 ∇ 2 f (1 − τ )w 0 + τ w * dτ − 1 0 ∇ 2 f (1 − τ )w t + τ w * dτ (w t − w * ),(23)
where the second equality is by the fundamental theorem of calculus.
∇f (w t ) − ∇f (w * ) = 1 0 ∇ 2 f ((1 − τ )w t + τ w * )dτ (w t − w * ),(24)
and that ∇f (w * ) = 0.
Lemma 3: (Residual dynamics of training the ReLU network N ReLU
W (·)) Denote (H t ) i,j := H(W t ) i,j = 1 m m r=1 x i x j 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0}.
Applying Algorithm 1 or Algorithm 2 to (5) for training the ReLU network N ReLU W (x) induces a residual dynamics in the form of (8) such that
ξ t [i] = N ReLU Wt (x i ) − y i and hence n 0 = d H = H 0 ϕ t = φ t + ι t ,
where each element i of ξ t ∈ R n is the residual error of the sample i, the i th -element of φ t ∈ R n satisfies
|φ t [i]| ≤ 2η √ n|S ⊥ i | m u t − y + β t−1 s=0 β t−1−s u s − y , and ι t = η (H 0 − H t ) ξ t ∈ R n .
Proof. For each sample i, we will divide the contribution to N (x i ) into two groups.
N (x i ) = 1 √ m m r=1 a r σ( w (r) , x i ) = 1 √ m r∈Si a r σ( w (r) , x i ) + 1 √ m r∈S ⊥ i a r σ( w (r) , x i ).(25)
To continue, let us recall some notations; the subgradient with respect to w (r) ∈ R d is
∂L(W ) ∂w (r) := 1 √ m n i=1 N (x i ) − y i a r x i 1{ w (r) , x ≥ 0},(26)
and the Gram matrix H t whose (i, j) element is
H t [i, j] := 1 m x i x j m r=1 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0}.(27)
Let us also denote
H ⊥ t [i, j] := 1 m x i x j r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0}.(28)
We have that
ξ t+1 [i] = N t+1 (x i ) − y i (25) = 1 √ m r∈Si a r σ( w (r) t+1 , x i ) first term + 1 √ m r∈S ⊥ i a r σ( w (r) t+1 , x i ) − y i .(29)
For the first term above, we have that
1 √ m r∈Si a r σ( w (r) t+1 , x i ) first term = 1 √ m r∈Si a r σ( w (r) t − η ∂L(W t ) ∂w (r) t + β(w (r) t − w (r) t−1 ), x i ) = 1 √ m r∈Si a r w (r) t − η ∂L(W t ) ∂w (r) t + β(w (r) t − w (r) t−1 ), x i · 1{ w (r) t+1 , x i ≥ 0} (a) = 1 √ m r∈Si a r w (r) t , x i · 1{ w (r) t , x i ≥ 0} + β √ m r∈Si a r w (r) t , x i · 1{ w (r) t , x i ≥ 0} − β √ m r∈Si a r w (r) t−1 , x i · 1{ w (r) t−1 , x i ≥ 0} − η 1 √ m r∈Si a r ∂L(W t ) ∂w (r) t , x i 1{ w (r) t , x i ≥ 0} =N t (x i ) + β N t (x i ) − N t−1 (x i ) − 1 √ m r∈S ⊥ i a r w (r) t , x i 1{ w (r) t , x i ≥ 0} − β √ m r∈S ⊥ i a r w (r) t , x i 1{ w (r) t , x i ≥ 0} + β √ m r∈S ⊥ i a r w (r) t−1 , x i 1{ w (r) t−1 , x i ≥ 0} − η 1 √ m r∈Si a r ∂L(W t ) ∂w (r) t , x i 1{ w (r) t , x i ≥ 0} last term ,(30)
where (a) uses that for r ∈ S i , 1{ w (r) t+1 ,
x i ≥ 0} = 1{ w (r) t , x i ≥ 0} = 1{ w (r)
t−1 , x i ≥ 0} as the neurons in S i do not change their activation patterns. We can further bound (30) as
(b) =N t (x i ) + β N t (x i ) − N t−1 (x i ) − η n j=1 N t (x j ) − y j H(W t ) i,j − η m n j=1 x i x j (N t (x j ) − y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} − 1 √ m r∈S ⊥ i a r w (r) t , x i 1{ w (r) t , x i ≥ 0} − β √ m r∈S ⊥ i a r w (r) t , x i 1{ w (r) t , x i ≥ 0} + β √ m r∈S ⊥ i a r w (r) t−1 , x i 1{ w (r) t−1 , x i ≥ 0} ,(31)
where (b) is due to that
1 √ m r∈Si a r ∂L(Wt) ∂w (r) t , x i 1{ w (r) t , x i ≥ 0} last term = 1 m n j=1 x i x j (N t (x j ) − y j ) r∈Si 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} = n j=1 N t (x j ) − y j H(W t ) i,j − 1 m n j=1 x i x j (N t (x j ) − y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0}.(32)
Combining (29) and (31), we have that
ξ t+1 [i] = ξ t [i] + β ξ t [i] − ξ t−1 [i] − η n j=1 H t [i, j]ξ t [j] − η m n j=1 x i x j (N t (x j ) − y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} + 1 √ m r∈S ⊥ i a r σ( w (r) t+1 , x i ) − a r σ( w (r) t , x i ) − βa r σ( w (r) t , x i ) + βa r σ( w (r) t−1 , x i ).(33)
So we can write the above into a matrix form.
ξ t+1 = (I n − ηH t )ξ t + β(ξ t − ξ t−1 ) + φ t = (I n − ηH 0 )ξ t + β(ξ t − ξ t−1 ) + φ t + ι t ,(34)
where the i element of φ t ∈ R n is defined as
φ t [i] = − η m n j=1 x i x j (N t (x j ) − y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} + 1 √ m r∈S ⊥ i a r σ( w (r) t+1 , x i ) − a r σ( w (r) t , x i ) − βa r σ( w (r) t , x i ) + βa r σ( w (r) t−1 , x i ) .(35)
Now let us bound φ t [i] as follows.
φ t [i] = − η m n j=1 x i x j (N t (x j ) − y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} + 1 √ m r∈S ⊥ i a r σ( w (r) t+1 , x i ) − a r σ( w (r) t , x i ) − βa r σ( w (r) t , x i ) + βa r σ( w (r) t−1 , x i ) (a) ≤ η √ n|S ⊥ i | m u t − y + 1 √ m r∈S ⊥ i w (r) t+1 − w (r) t + β w (r) t − w (r) t−1 (b) = η √ n|S ⊥ i | m u t − y + η √ m r∈S ⊥ i t s=0 β t−s ∂L(W s ) ∂w (r) s + β t−1 s=0 β t−1−s ∂L(W s ) ∂w (r) s (c) ≤ η √ n|S ⊥ i | m u t − y + η √ m r∈S ⊥ i t s=0 β t−s ∂L(W s ) ∂w (r) s + β t−1 s=0 β t−1−s ∂L(W s ) ∂w (r) s (d) ≤ η √ n|S ⊥ i | m u t − y + η √ n|S ⊥ i | m t s=0 β t−s u s − y + β t−1 s=0 β t−1−s u s − y = 2η √ n|S ⊥ i | m u t − y + β t−1 s=0 β t−1−s u s − y ,(36)where (a) is because − η m n j=1 x i x j (N t (x j )−y j ) r∈S ⊥ i 1{ w (r) t , x i ≥ 0 & w (r) t , x j ≥ 0} ≤ η|S ⊥ i | m n j=1 |N t (x j )− y j | ≤ η √ n|S ⊥ i | m u t − y , and that σ(·) is 1-Lipschitz so that 1 √ m r∈S ⊥ i a r σ( w (r) t+1 , x i ) − a r σ( w (r) t , x i ) ≤ 1 √ m r∈S ⊥ i | w (r) t+1 , x i − w (r) t , x i | ≤ 1 √ m r∈S ⊥ i w (r) t+1 − w (r) t x i ≤ 1 √ m r∈S ⊥ i w (r) t+1 − w (r) t , similarly, −β √ m r∈S ⊥ i a r σ( w (r) t , x i ) − a r σ( w (r) t−1 , x i ) ≤ β 1 √ m r∈S ⊥ i w (r) t − w (r) t−1 , (b) is by the update rule (Algorithm 1), (c) is by Jensen's inequality, (d) is because | ∂L(Ws) ∂w (r) s | = | 1 √ m n i=1 u s [i] − y i a r x i 1{x w (r) t ≥ 0}| ≤ √ n m u s − y .
Lemma: 4 (Residual dynamics of training N L-linear W (·)) Denote M t,l the momentum term of layer l at iteration t, which is
recursively defined as M t,l = βM t,l−1 + ∂ (W (L:1) t ) ∂W (l) t .
Denote
H t := 1 m L−1 dy L l=1 [(W (l−1:1) t X) (W (l−1:1) t X) ⊗ W (L:l+1) t (W (L:l+1) t ) ] ∈ R dyn×dyn .
Applying Algorithm 1 or Algorithm 2 to (5) for training the deep linear network N L-linear W (x) induces a residual dynamics in the form of (8) such that ξ t = vec(U t − Y ) ∈ R dyn , and hence n 0 = d y n
H = H 0 ϕ t = φ t + ψ t + ι t ∈ R dyn , where φ t = 1 m L−1 d y vec (Φ t X) with Φ t = Π l (W (l) t − ηM t,l ) − W (L:1) t + η L l=1 W (L:l+1) t M t,l W (l−1:1) t ψ t = 1 m L−1 d y vec (L − 1)βW (L:1) t X + βW (L:1) t−1 X − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t X ι t = η(H 0 − H t )ξ t .
Proof. According to the update rule of gradient descent with Polyak's momentum, we have
W (L:1) t+1 = Π l W (l) t − ηM t,l = W (L:1) t − η L l=1 W (L:l+1) t M t,l W (l−1:1) + Φ t ,(37)
where M t,l stands for the momentum term of layer l, which is M t,l = βM t,l−1 + ∂ (W
= −η ∂ (W (L:1) t ) ∂W (l) t + β(W (l) t − W (l) t−1 ), we can rewrite (37) as W (L:1) t+1 = W (L:1) t − η L l=1 W (L:l+1) t ∂ (W (L:1) t ) ∂W (l) t W (l−1:1) t + L l=1 W (L:l+1) t β(W (l) t − W (l) t−1 )W (l−1:1) t + Φ t = W (L:1) t − η L l=1 W (L:l+1) t ∂ (W (L:1) t ) ∂W (l) t W (l−1:1) t + β(W (L:1) t − W (L:1) t−1 ) + Φ t + (L − 1)βW (L:1) t + βW (L:1) t−1 − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t .(38)
Multiplying the above equality with 1 √ m L−1 dy X, we get
U t+1 = U t − η 1 m L−1 d y L l=1 W (L:l+1) t (W (L:l+1) t ) (U t − Y )(W (l−1:1) t X) W (l−1:1) t X + β(U t − U t−1 ) + 1 m L−1 d y (L − 1)βW (L:1) t + βW (L:1) t−1 − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t X + 1 m L−1 d y Φ t X.(39)
Using vec(ACB) = (B ⊗ A)vec(C), where ⊗ stands for the Kronecker product, we can apply a vectorization of the above equation and obtain
vec(U t+1 ) − vec(U t ) = −ηH t vec(U t − Y ) + β (vec(U t ) − vec(U t−1 )) + vec( 1 m L−1 d y (L − 1)βW (L:1) t + βW (L:1) t−1 − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t X) + 1 m L−1 d y vec(Φ t X),(40)
where
H t = 1 m L−1 d y L l=1 (W (l−1:1) t X) (W (l−1:1) t X) ⊗ W (L:l+1) t (W (L:l+1) t ) ,(41)
which is a positive semi-definite matrix.
In the following, we will denote ξ t := vec(U t − Y ) as the vector of the residual errors. Also, we denote
φ t := 1 √ m L−1 dy vec(Φ t X) with Φ t = Π l (W (l) t − ηM t,l ) − W (L:1) t + η L l=1 W (L:l+1) t M t,l W (l−1:1) t , and ψ t := vec( 1 √ m L−1 dy (L − 1)βW (L:1) t + βW (L:1) t−1 − β L l=1 W (L:l+1) t W (l) t−1 W (l−1:1) t X)
. Using the notations, we can rewrite (40) as
ξ t+1 ξ t = I dyn − ηH t + βI dyn −βI dyn I dyn 0 dyn ξ t ξ t−1 + φ t + ψ t 0 dyn = I dyn − ηH 0 + βI dyn −βI dyn I dyn 0 dyn ξ t ξ t−1 + ϕ t 0 dyn ,(42)
where ϕ t = φ t + ψ t + ι t ∈ R dyn and I dyn is the d y n × d y n-dimensional identity matrix.
C. Proof of Theorem 5
Theorem 5 Let A := (1 + β)I n − ηH −βI n I n 0 ∈ R 2n×2n . Suppose that H ∈ R n×n is a positive semidefinite matrix.
Fix a vector v 0 ∈ R n . If β is chosen to satisfy 1 ≥ β > max{ 1 − ηλ min (H)
2 , 1 − ηλ max (H) 2 }, then A k v 0 ≤ β k C 0 v 0 ,(43)
where the constant
C 0 := √ 2(β + 1) min{h(β, ηλ min (H)), h(β, ηλ max (H))} ≥ 1,(44)
and the function h(β, z) is defined as
h(β, z) := − β − 1 − √ z 2 β − 1 + √ z 2 .(45)
We would first prove some lemmas for the analysis.
Lemma 6. Under the assumption of Theorem 5, A is diagonalizable with respect to complex field C in C n , i.e., ∃P such that A = P DP −1 for some diagonal matrix D. Furthermore, the diagonal elements of D all have magnitudes bounded by √ β.
Proof. In the following, we will use the notation/operation Diag(· · · ) to represents a block-diagonal matrix that has the arguments on its main diagonal. Let U Diag([λ 1 , . . . , λ n ])U * be the singular-value-decomposition of H, then
A = U 0 0 U (1 + β)I n − ηDiag([λ 1 , . . . , λ n ]) −βI n I n 0 U * 0 0 U * .(46)
LetŨ = U 0 0 U . Then, after applying some permutation matrixP , A can be further simplified into
A =ŨP ΣP TŨ * ,(47)
where Σ is a block diagonal matrix consisting of n 2-by-2 matricesΣ i :
= 1 + β − ηλ i −β 1 0 . The characteristic polynomial ofΣ i is x 2 − (1 + β − λ i )x + β.
Hence it can be shown that when β > (1 − √ ηλ i ) 2 then the roots of polynomial are conjugate and have magnitude √ β. These roots are exactly the eigenvalues ofΣ i ∈ R 2×2 . On the other hand, the corresponding eigenvectors q i ,q i are also conjugate to each other asΣ i ∈ R 2×2 is a real matrix. As a result, Σ ∈ R 2n×2n admits a block eigen-decomposition as follows,
Σ =Diag(Σ i , . . . ,Σ n ) =Diag(Q 1 , . . . , Q n )Diag z 1 0 0z 1 , . . . , z n 0 0z n Diag(Q −1 1 , . . . , Q −1 n ),(48)
where Q i = [q i ,q i ] and z i ,z i are eigenvalues ofΣ i (they are conjugate by the condition on β). Denote Q := Diag(Q 1 , . . . , Q n ) and D := Diag z 1 0 0z 1 , . . . , z n 0 0z n .
By combining (47) and (48), we have
A = P Diag z 1 0 0z 1 , . . . , z n 0 0z n P −1 = P DP −1 ,(50)
where
P =ŨP Q,(51)
by the fact thatP −1 =P T andŨ −1 =Ũ * .
Proof. (of Theorem 5) Now we proceed the proof of Theorem 5. In the following, we denote v k := A k v 0 (so v k = Av k−1 ). Let P be the matrix in Lemma 6, and u k := P −1 v k , the dynamic can be rewritten as u k = P −1 Av k−1 = P −1 AP u k−1 = Du k−1 . As D is diagonal, we immediately have
u k ≤ max i∈[n] |D ii | k u 0 ⇒ P −1 v k ≤ max i∈[n] |D ii | k P −1 v 0 ⇒ σ min (P −1 ) v k ≤ β k σ max (P −1 ) v 0 (Lemma 6.) ⇒ σ −1 max (P ) v k ≤ β k σ −1 min (P ) v 0 ⇒ v k ≤ β k σ max (P ) σ min (P ) v 0 ⇒ v k ≤ β k λ max (P P * ) λ min (P P * ) v 0 .(52)
Hence, now it suffices to prove upper bound and lower bound of λ max and λ min , respectively. By using Lemma 7 in the following, we obtain the inequality of (43). We remark that as C 0 is an upper-bound of the squared root of the condition number λmax(P P * ) λmin(P P * ) , it is lower bounded by 1.
Lemma 7. Let P be the matrix in Lemma 6, then we have λ max (P P * ) ≤ 2(β + 1) and λ min (P P * ) ≥ min{h(β, ηλ min (H)), h(β, ηλ max (H))}/(1 + β), where
h(β, z) = − β − 1 − √ z 2 β − 1 + √ z 2 .(53)
Proof. As (51) in the proof of Lemma 2, P =ŨP Diag(Q 1 , . . . , Q n ). SinceŨP is unitary, it does not affect the spectrum of P , therefore, it suffices to analyze the eigenvalues of QQ * , where Q = Diag(Q 1 , . . . , Q n ). Observe that QQ * is a block diagonal matrix with blocks Q i Q * i , the eigenvalues of it are exactly that of Q i Q * i , i.e., λ max (QQ * ) = max i∈[n] λ max (Q i Q * i ) and likewise for the minimum. Recall Q i = [q i ,q i ] consisting of eigenvectors ofΣ i := 1 + β − ηλ i −β 1 0 with corresponding eigenvalues z i ,z i . The eigenvalues satisfy
z i +z i = 2 z i = 1 + β − ηλ i ,(54)z izi = |z i | 2 = β.(55)
On the other hand, the eigenvalue equationΣ i q i = z i q i together with (54)
implies q i = [z i , 1] T . Furthermore, Q i Q * i = q i q * i +q iq * i = 2 q i q * i = 2 q i q i T + 2 q i q i T . Thus, Q i Q * i = 2 q i q i T + 2 q i q i T = 2 z i 1 z i 1 + z i 0 z i 0 = 2 |z i | 2 z i z i 1 .(56)
Let the eigenvalues of Q i Q * i be θ 1 , θ 2 , then by (54)-(56) we must have θ 1 + θ 2 = 2(β + 1),
θ 1 θ 2 = 4 β − ( 1 + β − ηλ i 2 ) 2 = − β − 1 − ηλ i 2 β − 1 + ηλ i 2 ≥ 0.(57)
From (57), as both eigenvalues are nonnegative, we deduce that
2(1 + β) ≥ max{θ 1 , θ 2 } ≥ β + 1.(59)
On the other hand, from (57) we also have
min{θ 1 , θ 2 } =θ 1 θ 2 / max{θ 1 , θ 2 } ≥ − β − 1 − ηλ i 2 β − 1 + ηλ i 2 /(1 + β)
:=h(β, ηλ i )/(1 + β).
Finally, as the eigenvalues of QQ * are composed of exactly that of Q i Q * i , applying the bound of (60) to each i we have λ min (P P * ) ≥ min i∈ [n] h(β, ηλ i )/(1 + β) ≥ min{h(β, ηλ min (H)), h(β, ηλ max (H))}/(1 + β),
where the last inequality follows from the facts that λ min (H) ≤ λ i ≤ λ max (H) and h is concave quadratic function of of λ in which the minimum must occur at the boundary. . Then, C 0 ≤ max{4, 2 √ κ} ≤ 4 √ κ.
Proof. For notation brevity, in the following, we let µ := λ min (H) and α := λ max (H).
Recall that h(β, z) = − β − (1 − √ z) 2 β − (1 + √ z) 2 . We have h(β, ηµ) = − (1 − 1 2 √ ηµ) 2 − (1 − √ ηµ) 2 (1 − 1 2 √ ηµ) 2 − (1 + √ ηµ) 2 = 3 √ ηµ − 3 4 ηµ √ ηµ + 1 4 ηµ = 3 1 √ κ − 3 4κ 1 √ κ + 1 4κ(62)
and
h(β, ηα) = − (1 − 1 2 √ ηµ) 2 − (1 − √ ηα) 2 (1 − 1 2 √ ηµ) 2 − (1 + √ ηα) 2 = 2 √ ηα − √ ηµ − ηα + 1 4 ηµ √ ηµ + 2 √ ηα + ηα − 1 4 ηµ = 1 − 1 √ κ + 1 4κ 3 + 1 √ κ − 1 4κ .(63)
We can simplify it to get that h(β, ηα) = 3 − 2
√ κ − 1 2κ + 1 2κ 3/2 − 1 16κ 2 ≥ 0.5. Therefore, we have √ 2(β + 1) h(β, ηµ) = √ 2(β + 1) 3ηµ(1 − 1 2 √ ηµ − 3 16 ηµ) = √ 2(β + 1) 3(1 − 1 2 √ ηµ − 3 16 ηµ) √ κ ≤ 1 (1 − 1 2 − 3 16 ) √ κ ≤ 2 √ κ,(64)
where we use ηµ = 1 κ . On the other hand,
√ 2(β+1) √ h(β,ηα)
≤ 4. We conclude that
C 0 = √ 2(β + 1) min{h(β, ην, h(β, ηα)} ≤ max{4, 2 √ κ} ≤ 4 √ κ.(65)
D. Proof of Theorem 7
Theorem 7 Assume the momentum parameter β satisfies
1 ≥ β > max{ 1 − √ ηµ 2 , 1 − √ ηα 2 }. Gradient descent with Polyak's momentum has w t − w * w t−1 − w * ≤ β t C 0 w 0 − w * w −1 − w * ,(66)
where the constant
C 0 := √ 2(β + 1) min{h(β, ηλ min (Γ)), h(β, ηλ max (Γ))} ,(67)
and h(β, z)
= − β − (1 − √ z) 2 β − (1 + √ z) 2 .
Consequently, if the step size η = 1 α and the momentum parameter β = 1 − √ ηµ 2 , then it has
w t − w * w t−1 − w * ≤ 1 − 1 2 √ κ t 4 √ κ w 0 − w * w −1 − w * .(68)Furthermore, if η = 4 ( √ µ+ √ α) 2 and β approaches β → 1 − 2 √ κ+1 2
from above, then it has a convergence rate
approximately 1 − 2 √ κ+1 as t → ∞.
Proof. The result (66) and (68) is due to a trivial combination of Lemma 1, Theorem 6, and Corollary 1.
On the other hand, set η =
4 ( √ µ+ √ α) 2 , the lower bound on β becomes max{ 1 − √ ηµ 2 , 1 − √ ηα 2 } = 1 − 2 √ κ+1 2 .
Since the rate is r = lim t→∞ 1 t log(
√ β t+1 C 0 ) = √ β, setting β ↓ 1 − 2 √ κ+1 2
from above leads to the rate of
1 − 2 √ κ+1 .
Formally, it is straightforward to show that C 0 = Θ 1/ β − (1 − 2 1+ √ κ ) 2 , hence, for any β converges to (1 − 2 √ κ+1 ) 2 slower than inverse exponential of κ, i.e., β = (1 − 2
√ κ+1 ) 2 + ( 1 κ ) o(t) , we have r = 1 − 2 √ κ+1 .
E. Proof of Theorem 8
Proof. (of Theorem 8) In the following, we denote ξ t := w t − w * and denote λ := µ > 0, which is a lower bound of λ min (H) of the matrix H := 1 0 ∇ 2 f (1 − τ )w 0 + w * dτ defined in Lemma 2, i.e. λ min (H) ≥ λ. Also, denote β * := 1 − 1 2 √ ηλ and θ := β * + 1
4 √ ηλ = 1 − 1 4 √ ηλ. Suppose η = 1 α , where α is the smoothness constant. Denote C 0 := √ 2(β+1)
√ min{h(β,ηλmin(H)),h(β,ηλmax(H))} ≤ 4 √ κ by Corollary 1. Let C 1 = C 3 = C 0 and C 2 = 1 4 √ ηλ in Theorem 6. The goal is to show that ξ t ξ t−1 ≤ θ t 2C 0 ξ 0 ξ −1 for all t by induction. To achieve this, we will also use induction to show that for all iterations s,
w s − w * ≤ R := 3 64 √ κC0 .(69)
A sufficient condition for the base case s = 0 of (69) to hold is
w 0 − w * w −1 − w * ≤ R 2C 0 = 3 128 √ κC 2 0 ,(70)
as C 0 ≥ 1 by Theorem 5, which in turn can be guaranteed if w 0 − w * w −1 − w * ≤ 1 683κ 3/2 by using the upper bound C 0 ≤ 4 √ κ of Corollary 1.
From Lemma 2, we have
φ s ≤ η 1 0 ∇ 2 f ((1 − τ )w s + τ w * )dτ − 1 0 ∇ 2 f ((1 − τ )w 0 + τ w * )dτ ξ s (a) ≤ ηα 1 0 (1 − τ ) w s − w 0 dτ ξ s ≤ ηα w s − w 0 ξ s (b) ≤ ηα ( w s − w * + w 0 − w * ) ξ s ,(71)
where (
t−1 s=0 A t−s−1 ϕ s 0 ≤ θ t C 0 ξ 0 ξ −1 (72) w t − w * ≤ R := 3 64 √ κC 0 ,(73)
where
A := (1 + β)I n − η 1 0 ∇ 2 f (1 − τ )w 0 + w * dτ −βI n I n 0 .
We have
t−1 s=0 A t−s−1 ϕ s 0 ≤ t−1 s=0 A t−s−1 ϕ s 0 (a) ≤ t−1 s=0 β t−s−1 * C 0 ϕ s (b) ≤ 4ηαRC 2 0 t−1 s=0 β t−s−1 * θ s ξ 0 ξ −1 (c) ≤ RC 2 0 64 3 √ ηλ θ t ξ 0 ξ −1 (d) ≤ C 0 θ t ξ 0 ξ −1 ,(74)
where (a) uses Theorem 5 with β = β 2 * , (b) is by (71), (69), and the induction that ξ s ≤ θ s 2C 0
ξ 0 ξ −1 , (c) is because t−1 s=0 β t−1−s * θ s = θ t−1 t−1 s=0 β * θ t−1−s ≤ θ t−1 t−1 s=0 θ t−1−s ≤ θ t−1 4 √ ηλ ≤ θ tξ t ξ t−1 ≤ θ t 2C 0 ξ 0 ξ −1 .
Now let us switch to show (73). We have
ξ t := w t − w * induction ≤ θ t 2C 0 w 0 − w * w −1 − w * ≤ R,(75)
where the last inequality uses the constraint (70).
w 0 − w * w −1 − w * ≤ R 2C0 by
F. Proof of Theorem 9
We will need some supporting lemmas in the following for the proof. In the following analysis, we denote C 0 := √ 2(β+1) √ min{h(β,ηλmin(H)),h(β,ηλmax(H))}
, where h(β, ·) is defined in Theorem 5 and H = H 0 whose (i, j) entry is (H 0 ) i,j :=
H(W 0 ) i,j = 1 m m r=1 x i x j 1{ w (r) 0 , x i ≥ 0 & w (r)
0 , x j ≥ 0}, as defined in Lemma 3. In the following, we also denote β = (1 − 1 2 √ ηλ) 2 := β 2 * . We summarize the notations in Table 1.
Notation definition (or value) meaning N ReLU W (x) N ReLU W (x) := 1 √ m m r=1 a r σ( w (r) , x ) the ReLU network's output given x HH i,j := E w (r) [x i x j 1{ w (r) , x i ≥ 0 & w (r) , x j ≥ 0}].
the expectation of the Gram matrix
H 0 H(W 0 ) i,j = 1 m m r=1 x i x j 1{ w (r) 0 , x i ≥ 0 & w (r) 0 , x j ≥ 0}η η = 1/λ max step size β β = (1 − 1 2 √ ηλ) 2 = (1 − 1 2 √κ ) 2 := β 2 * momentum parameter β * β * = √ β = 1 − 1 2 √ ηλ squared root of β θ θ = β * + 1 4 √ ηλ = 1 − 1 4 √ ηλ = 1 − 1 4 √κ the convergence rate C 0 C 0 := √ 2(β+1)
√ min{h(β,ηλmin(H0)),h(β,ηλmax(H0))} the constant used in Theorem 5
H t − H 0 F ≤ 2nR ReLU = λ 512C 0 ,
with probability at least 1 − n 2 · exp(−mR ReLU /10).
Proof. This is an application of Lemma 3.2 in (Song & Yang, 2019).
Lemma 8 shows that if the distance between the current iterate W t and its initialization W 0 is small, then the distance between the Gram matrix H(W t ) and H(W 0 ) should also be small. Lemma 8 allows us to obtain the following lemma, which bounds the size of ϕ t (defined in Lemma 3) in the residual dynamics. Lemma 9. Following the setting as Theorem 9, denote θ := β * + 1
4 √ ηλ = 1 − 1 4 √ ηλ. Suppose that ∀i ∈ [n], |S ⊥ i | ≤ 4mR ReLU for some constant R ReLU := λ 1024nC0 > 0.
If we have (I) for any s ≤ t, the residual dynamics satisfies ξ s ξ s−1 ≤ θ s · νC 0 ξ 0 ξ −1 , for some constant ν > 0, and (II) for any r ∈ [m] and any s ≤ t, w (r)
s − w (r) 0 ≤ R ReLU ,
then φ t and ι t in Lemma 3 satisfies
φ t ≤ √ ηλ 16 θ t ν ξ 0 ξ −1 , and ι t ≤ ηλ 512 θ t ν ξ 0 ξ −1 .
Consequently, ϕ t in Lemma 3 satisfies
ϕ t ≤ √ ηλ 16 + ηλ 512 θ t ν ξ 0 ξ −1 .
Proof. Denote β * := 1 − 1 2 √ ηλ and θ := β * + 1 4 √ ηλ = 1 − 1 4 √ ηλ. We have by Lemma 3
φ t = n i=1 φ t [i] 2 ≤ n i=1 2η √ n|S ⊥ i | m ξ t + β t−1 τ =0 β t−1−τ ξ τ 2 (a) ≤ 8ηnR ReLU ξ t + β t−1 τ =0 β t−1−τ ξ τ (b) ≤ 8ηnR ReLU θ t νC 0 ξ 0 ξ −1 + β t−1 τ =0 β t−1−τ θ τ νC 0 ξ 0 ξ −1 (c) = 8ηnR ReLU θ t νC 0 ξ 0 ξ −1 + β 2 * νC 0 t−1 τ =0 β 2(t−1−τ ) * θ τ ξ 0 ξ −1 (d) ≤ 8ηnR ReLU θ t νC 0 ξ 0 ξ −1 + β 2 * νC 0 θ t−1 t−1 τ =0 θ t−1−τ ξ 0 ξ −1 ≤ 8ηnR ReLU θ t (1 + β * t−1 τ =0 θ τ )νC 0 ξ 0 ξ −1 ≤ 8ηnR ReLU θ t (1 + β * 1 − θ )νC 0 ξ 0 ξ −1 (e) ≤ √ ηλ 16 θ t ν ξ 0 ξ −1 ,(76)
where (a) is by |S ⊥ i | ≤ 4mR ReLU , (b) is by induction that ξ t ≤ θ t νC 0 ξ 0 ξ −1 as u 0 = u −1 , (c) uses that β = β 2 * , (d) uses β * ≤ θ, (e) uses 1 + β * 1−θ ≤ 2 1−θ ≤ 8 √ ηλ and R ReLU := λ 1024nC0 . Now let us switch to bound ι t .
ι t ≤ η H 0 − H t 2 ξ t ≤ ηλ 512C 0 θ t νC 0 ξ 0 ξ −1 ,(77)
where we uses Lemma 8 that H 0 − H t 2 ≤ λ 512C0 and the induction that
ξ t ξ t−1 ≤ θ t νC 0 ξ 0 ξ −1 .
The assumption of Lemma 9, ∀i ∈ [n], |S ⊥ i | ≤ 4mR ReLU only depends on the initialization. Lemma 11 shows that it holds with probability at least 1 − n · exp(−mR ReLU ).
Lemma 10. Following the setting as Theorem 9, denote θ := β * + 1
4 √ ηλ = 1− 1 4 √ ηλ.
Suppose that the initial error satisfies ξ 0 2 = O(n log(m/δ) log 2 (n/δ)). If for any s < t, the residual dynamics satisfies ξ s ξ s−1 ≤ θ s · νC 0 ξ 0 ξ −1 , for some constant ν > 0, then
w (r) t − w (r) 0 ≤ R ReLU := λ 1024nC 0 . Proof. We have w (r) t+1 − w (r) 0 (a) ≤ η t s=0 M (r) s (b) = η t s=0 s τ =0 β s−τ ∂L(W τ ) ∂w (r) τ ≤ η t s=0 s τ =0 β s−τ ∂L(W τ ) ∂w (r) τ (c) ≤ η t s=0 s τ =0 β s−τ √ n √ m y − u τ (d) ≤ η t s=0 s τ =0 β s−τ √ 2n √ m θ τ νC 0 y − u 0 (e) ≤ η √ 2n √ m t s=0 θ s 1 − θ νC 0 y − u 0 ≤ η √ 2n √ m νC 0 (1 − θ) 2 y − u 0 (f ) = η √ 2n √ m 16νC 0 ηλ y − u 0 (g) = η √ 2n √ m 16νC 0 ηλ O( n log(m/δ) log 2 (n/δ)) (h) ≤ λ 1024nC0 ,(78)
where (a), (b) is by the update rule of momentum, which is w
(r) t+1 − w (r) t = −ηM (r) t , where M (r) t := t s=0 β t−s ∂L(Ws) ∂w (r) s , (c) is because ∂L(Ws) ∂w (r) s = n i=1 (y i − u s [i]) 1 √ m a r x i · 1{ w (r) s , x ≥ 0} ≤ 1 √ m n i=1 |y i − u s [i]| ≤ √ n √ m y − u s , (d) is by ξ s ξ s−1 ≤ θ s νC 0 ξ 0 ξ −1 (e) is because that β = β 2 * ≤ θ 2 , (f) we use θ := (1 − 1 4 √ ηλ), so that 1 (1−θ) 2 = 16 ηλ ,(g)
is by that the initial error satisfies y − u 0 2 = O(n log(m/δ) log 2 (n/δ)), and (h) is by the choice of the number of neurons m = Ω(λ −4 n 4 C 4 0 log 3 (n/δ)) = Ω(λ −4 n 4 κ 2 log 3 (n/δ)), as C 0 = Θ( √ κ) by Corollary 1.
The proof is completed.
Lemma 10 basically says that if the size of the residual errors is bounded and decays over iterations, then the distance between the current iterate W t and its initialization W 0 is well-controlled. The lemma will allows us to invoke Lemma 8 and Lemma 9 when proving Theorem 9. The proof of Lemma 10 is in Appendix F. The assumption of Lemma 10, ξ 0 2 = O(n log(m/δ) log 2 (n/δ)), is satisfied by the random initialization with probability at least 1 − δ/3 according to Lemma 12 .
Lemma 11. (Claim 3.12 of (Song & Yang, 2019)) Fix a number R 1 ∈ (0, 1). Recall that S ⊥ i is a random set defined in Subsection 3.3. With probability at least 1 − n · exp(−mR 1 ), we have that for all i ∈ [n],
|S ⊥ i | ≤ 4mR 1 .
A similar lemma also appears in (Du et al., 2019b). Lemma 11 says that the number of neurons whose activation patterns for a sample i could change during the execution is only a small faction of m if R 1 is a small number, i.e. |S ⊥ i | ≤ 4mR 1 m.
Lemma 12. (Claim 3.10 in (Song & Yang, 2019)) Assume that w (r) 0 ∼ N (0, I d ) and a r uniformly sampled from {−1, 1}. For 0 < δ < 1, we have that y − u 0 2 = O(n log(m/δ) log 2 (n/δ)), with probability at least 1 − δ.
F.1. Proof of Theorem 9
Proof. (of Theorem 9) Denote λ := 3 4 λ min (H) > 0. Lemma 5 shows that λ is a lower bound of λ min (H) of the matrix H defined in Lemma 3. Also, denote β * := 1 − 1 2 √ ηλ (note that β = β 2 * ) and θ := β * + 1
4 √ ηλ = 1 − 1 4 √ ηλ.
In the following, we let ν = 2 in Lemma 9, 10, and let C 1 = C 3 = C 0 and C 2 = 1 4 √ ηλ in Theorem 6. The goal is to show that ξ t ξ t−1 ≤ θ t 2C 0 ξ 0 ξ −1 for all t by induction. To achieve this, we will also use induction to show that for all iterations s, ∀r ∈ [m], w (r)
s − w (r) 0 ≤ R ReLU := λ 1024nC0 ,(79)
which is clear true in the base case s = 0.
By Lemma 3, 5, 8, 9, Theorem 6, and Corollary 1, it suffices to show that given ξ s ξ s−1 ≤ θ s 2C 0 ξ 0 ξ −1 and (79) hold at s = 0, 1, . . . , t − 1, one has
t−1 s=0 A t−s−1 ϕ s 0 ≤ θ t C 0 ξ 0 ξ −1 ,(80)∀r ∈ [m], w (r) t − w (r) 0 ≤ R ReLU := λ 1024nC0 ,(81)
where the matrix A and the vector ϕ t are defined in Lemma 3. The inequality (80) is the required condition for using the result of Theorem 6, while the inequality (81) helps us to show (80) through invoking Lemma 9 to bound the terms {ϕ s } as shown in the following.
We have
t−1 s=0 A t−s−1 ϕ s 0 (a) ≤ t−1 s=0 β t−s−1 * C 0 ϕ s (b) ≤ √ ηλ 16 + ηλ 512 2C 0 ξ 0 ξ −1 t−1 s=0 β t−1−s * θ s (c) ≤ 1 2 + 1 64 ηλ θ t−1 C 0 ξ 0 ξ −1 (d) ≤ θ t C 0 ξ 0 ξ −1 ,(82)
where (a) uses Theorem 5, (b) is due to Lemma 9, Lemma 11,(c) is because
t−1 s=0 β t−1−s * θ s = θ t−1 t−1 s=0 β * θ t−1−s ≤ θ t−1 t−1 s=0 θ t−1−s ≤ θ t−1 4
√ ηλ , (d) uses that θ ≥ 3 4 and ηλ ≤ 1. Hence, we have shown (80). Therefore, by Theorem 6, we
have ξ t ξ t−1 ≤ θ t 2C 0 ξ 0 ξ −1 .
By Lemma 10 and Lemma 12, we have (81). Furthermore, with the choice of m, we have 3n 2 exp(−mR ReLU /10) ≤ δ. Thus, we have completed the proof.
G. Proof of Theorem 10
We will need some supporting lemmas in the following for the proof. In the following analysis, we denote ) ] ∈ R dyn×dyn , as defined in Lemma 4. We also denote β = (1 − 1 2 √ ηλ) 2 := β 2 * . As mentioned in the main text, following Du & Hu (2019); Hu et al. (2020b), we will further assume that (A1) there exists a W * such that Y = W * X, X ∈ R d×r , andr = rank(X), which is actually without loss of generality (see e.g. the discussion in Appendix B of Du & Hu (2019)). We summarize the notions in Table 2. Notation definition (or value) meaning
C 0 := √ 2(β+1) √ min{h(β,N L-linear W (x) N L-linear W (x) := 1 √ m L−1 dy W (L) W (L−1) · · · W (1) x,
output of the deep linear network H 0
H 0 := 1 m L−1 dy L l=1 [(W (l−1:1) 0 X) (W (l−1:1) 0 X) ⊗W (L:l+1) 0 (W (L:l+1) 0 ) ] ∈ R dyn×dyn H in (8) is H = H 0 (Lemma 4) λ max (H 0 ) λ max (H 0 ) ≤ Lσ 2 max (X)/d y (Lemmaη η = dy Lσ 2 max (X) step size β β = (1 − 1 2 √ ηλ) 2 = (1 − 1 2 √ κ ) 2 := β 2 * momentum parameter β * β * = √ β = 1 − 1 2 √ ηλ squared root of β θ θ = β * + 1 4 √ ηλ = 1 − 1 4 √ ηλ = 1 − 1 4 √ κ the convergence rate C 0 C 0 := √ 2(β+1)
√ min{h(β,ηλmin(H0)),h(β,ηλmax(H0))} the constant used in Theorem 5
λ min (H 0 ) ≥ Lσ 2 min (X)/d y , λ max (H 0 ) ≤ Lσ 2 max (X)/d y . σ max (W (j:i) 0 ) = m j−i+1 2 , σ min (W (j:i) 0 ) = m j−i+1 2
Furthermore, with probability 1 − δ,
(W 0 ) ≤ B 2 0 = O 1 + log(r/δ) d y + W * 2 2 ,
for some constant B 0 > 0.
We remark that Lemma 13 implies that the condition number of H 0 satisfieŝ κ := λ max (H 0 ) λ min (H 0 ) ≤ σ 2 max (X) σ 2 min (X) = κ.
φ t ≤ 43 d y √ m X 2 θ 2t ν 2 C 2 0 ξ 0 1 − θ 2 , ψ t ≤ 43 d y √ m X 2 θ 2(t−1) ν 2 C 2 0 ξ 0 1 − θ 2 , ι t ≤ ηλ 80 θ t νC 0 ξ 0 ξ −1 .
Consequently, ϕ t in Lemma 4 satisfies ϕ t ≤ 1920 d y √ m X 2 1 ηλ θ 2t ν 2 C 2 0 ξ 0 ξ −1 2 + ηλ 80 θ t νC 0 ξ 0 ξ −1 .
Proof. By Lemma 4, ϕ t = φ t + ψ t + ι t ∈ R dyn , we have
φ t := 1 m L−1 d y vec(Φ t X) , with Φ t := Π l W (l) t − ηM t,l − W (L:1) t + η L l=1 W (L:l+1) t M t,l W (l−1:1) t ,(84)
and
ι t := η(H 0 − H t )ξ t .(86)
So if we can bound φ t , ψ t , and ι t respectively, then we can bound ϕ t by the triangle inequality.
ϕ t ≤ φ t + ψ t + ι t .(87)
Let us first upper-bound φ t . Note that Φ t is the sum of all the high-order (of η's) term in the product,
√ 2 U 0 − Y F 1.1m l−1 2 X 2 ≤ 4 X 2 d y θ s νC 0 U 0 − Y F ,(89)
where the second inequality we use Lemma 16 and that ξ s ξ s−1 ≤ θ s νC 0 ξ 0 ξ −1 and ξ s = U s − Y F .
So the momentum term of each layer can be bounded as
M t,l F = t s=0 β t−s ∂ (W (L:1) s ) ∂W (l) s F ≤ t s=0 β t−s ∂ (W (L:1) s ) ∂W (l) s F ≤ 4 X 2 d y t s=0 β t−s θ s νC 0 U 0 − Y F . ≤ 4 X 2 d y t s=0 θ 2(t−s) θ s νC 0 U 0 − Y F . ≤ 4 X 2 d y θ t 1 − θ νC 0 U 0 − Y F ,(90)
where in the second to last inequality we use β = β 2 * ≤ θ 2 .
Combining all the pieces together, we can bound
1 √ m L−1 dy Φ t X F as 1 m L−1 d y Φ t X F (a) ≤ 1 m L−1 d y L j=2 L j η 4 X 2 d y θ t 1 − θ νC 0 U 0 − Y F j (1.1) j+1 m L−j 2 X 2 (b) ≤ 1.1 1 m L−1 d y L j=2 L j η 4.4 X 2 d y θ t 1 − θ νC 0 U 0 − Y F j m L−j 2 X 2 ≤ 1.1 m d y X 2 L j=2 η 4.4L X 2 md y θ t 1 − θ νC 0 U 0 − Y F j ,(91)
where (a) uses (90) and Lemma 16 for bounding a j ≥ 2 higher-order terms like 1 √ m L−1 dy βW (L:kj +1) t · (−ηM t,kj )W (kj −1:kj−1+1) t · (−ηM t,kj−1 ) · · · (−ηM t,k1 ) · W (k1−1:1) t , where 1 ≤ k 1 < · · · < k j ≤ L and (b) uses that L j ≤ L j j!
To proceed, let us bound η 4.4L X 2 √ mdy θ t 1−θ νC 0 U 0 − Y F in the sum above. We have
η 4.4L X 2 md y θ t 1 − θ νC 0 U 0 − Y F ≤ 4.4 d y m 1 X 2 θ t 1 − θ νC 0 U 0 − Y F ≤ 0.5,(92)
where the last inequality uses thatC 1
dyB 2 0 C 2 0 X 2 2 1
(1−θ) 2 ≤C 1 dyB 2 0 C 2 0 X 2 2 1 ηλ ≤C 2 dyB 2 0 κ 2 X 2 2 ≤ m, for some sufficiently large constantC 1 ,C 2 > 0. Combining the above results, we have
φ t = 1 m L−1 d y Φ t X F ≤ 1.1 m d y X 2 η 4.4L X 2 md y θ t 1 − θ νC 0 U 0 − Y F 2 L−2 j=2 (0.5) j−2 ≤ 2.2 m d y X 2 η 4.4L X 2 md y θ t 1 − θ νC 0 U 0 − Y F 2 ≤ 43 d y √ m X 2 θ t 1 − θ νC 0 U 0 − Y F 2 .(93)
A Modular Analysis of Provable Acceleration via Polyak's Momentum
The above can be written as B 0 + ηB 1 + η 2 B 2 + · · · + η L B L for some matrices B 0 , . . . , B L ∈ R dy×n . Specifically, we have
So what remains on (94) are all the higher-order terms (in terms of the power of η), i.e. those with ηM t−1,i and ηM t−1,j , ∀i = j or higher.
To continue, observe that for a fixed (i, j), i < j, the second-order term that involves ηM t−1,i and ηM t−1,j on (94) (L − 2)β. Furthermore, for a fixed (i, j, k), i < j < k, the third-order term that involves ηM t−1,i , ηM t−1,j , and ηM t−1,k on (94) By induction (see (90)), we can bound the norm of the momentum at layer l as
M t−1,l F ≤ 4 X 2 d y θ t−1 1 − θ νC 0 U 0 − Y F .(96)
Combining all the pieces together, we have
where the last inequality uses η ≤ dy L X 2 2 . Now let us switch to bound ι t . We have
ι t = η(H t − H 0 )ξ t = η m L−1 d y L l=1 W (L:l+1) t (W (L:l+1) t ) (U t − Y )(W
To bound (W
Momentum methods are very popular for training neural networks in various applications (e.g. He et al. (2016); Vaswani et al. (2017); Krizhevsky et al. (2012)). It has been widely observed that the use of momentum helps faster training in deep learning (e.g. Loshchilov & Hutter (2019); Wilson et al. (2017); Cutkosky & Orabona
; Ghadimi et al. (2015); Gitman et al. (2019); Loizou & Richtárik (2017; 2018); Can et al. (2019); Scieur & Pedregosa (2020); Flammarion & Bach (2015); Wilson et al. (2021); Franca et al. (2020); Diakonikolas & Jordan (2019); Shi et al. (2018); Hu
vergence results for vanilla (stochastic) gradient descent (e.g. Li & Liang (2018); Ji & Telgarsky (2020); Li & Yuan (2017); Du et al. (2019b;a); Allen-Zhu et al. (2019); Song & Yang (2019); Zou et al. (2019); Arora et al. (2019c); Jacot et al. (2018); Lee et al. (2019); Chizat et al. (2019); Oymak & Soltanolkotabi (2019); Brutzkus & Globerson (2017); Chen et al. (2020a); Tian (2017); Soltanolkotabi (2017); Bai & Lee (2020); Ghorbani et al. (2019); Li et al. (2020); Hanin & Nica (2020); Daniely (2017); Zou & Gu (2019); Dukler et al. (2020); Daniely (2020); Wei et al. (2019); Yehudai & Shamir (2020); Fang et al. (2019); Su & Yang (2019); Chen et al. (2020b)), as well as for other algorithms (e.g. Zhang et al. (2019); Wu et al. (2019b); Cai et al. (2019); Zhong et al. (2017); Ge et al. (2019); van den Brand et al. (2020); Lee et al. (2020); Pilanci & Ergen
Following Du et al. (2019b); Arora et al. (2019c); Song & Yang
; Saxe et al. (2014); Hu et al. (2020b)), studying the optimization landscape (e.g. Kawaguchi (2016); Laurent & von Brecht (2018)), and establishing the effect of implicit regularization (e.g. Moroshko et al. (2020); Ji & Telgarsky (2019); Li et al. (2018); Razin & Cohen (2020); Arora et al. (2019b); Gidel et al. (2019); Gunasekar et al. (2017); Lyu & Li
Theorem 2 .
2(Polyak (1964); see alsoLessard et al. (2016); Recht (2018); Mitliagkas (2019)) Gradient descent with Polyak's momentum with the step size η = 4 ( √ µ+ √ α) 2 and the momentum parameter β
Figure 1 .
1Empirical risk (Wt) vs. iteration t. Polyak's momentum accelerates the optimization process of training an overparametrized one-layer ReLU network. Experimental details are available in Appendix H.
Theorem 4 .
4(Theorem 4.1 in Hu et al. (2020b)) Assume (A1) and the use of the orthogonal initialization. Suppose the width of the deep linear network satisfies
Lemma 5 .
5[Lemma 3.1 in Du et al. (2019b) and Song & Yang
;Shi et al. (2018).
Compared with Theorem 4 of Hu et al. (2020b) for vanilla GD, our result clearly shows the acceleration via Polyak's momentum. Furthermore, the result suggests that the depth does not hurt optimization. Acceleration is achieved for any depth L and the required width m is independent of the depth L as Hu et al. (2020b); Zou et al. (2020) (of vanilla GD). The proof of Theorem 10 is in Appendix G.
A convergence theory for deep learning via overparameterization. ICML, 2019. Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by overparameterization. ICML, 2018.Arora, S., Cohen, N., Golowich, N., and Hu, W. A convergence analysis of gradient descent for deep linear neural networks. ICLR, 2019a.Arora, S., Cohen, N., Hu, W., and Luo, Y. Implicit regularization in deep matrix factorization. NerurIPS, 2019b.Arora, S., Du, S. S., Hu, W., Li, Z., and Wang, R. Finegrained analysis of optimization and generalization for overparameterized two-layer neural networks. NeurIPS, 2019c.Aujol, J.-F., Dossal, C., and Rondepierre, A. Convergence rates of the heavy-ball method with lojasiewicz property. hal-02928958, 2020.Bai, Y. and Lee, J. D. Beyond linearization: On quadratic and higher-order approximation of wide neural networks. ICLR, 2020.Bietti, A. and Mairal, J. On the inductive bias of neural tangent kernels. NeurIPS, 2019.Brutzkus, A. and Globerson, A. Globally optimal gradient descent for a convnet with gaussian inputs. ICML, 2017.Cai, T., Gao, R., Hou, J., Chen, S., Wang, D., He, D., Zhang, Z., and Wang, L. A gram-gauss-newton method learning overparameterized deep neural networks for regressionproblems. arXiv.org:1905.11675, 2019. Can, B., Gürbüzbalaban, M., and Zhu, L. Accelerated linear convergence of stochastic momentum methods in wasserstein distances. ICML, 2019. Chen, S., He, H., and Su, W. J. Label-aware neural tangent kernel: Toward better generalization and local elasticity. NeurIPS, 2020a. Chen, Z., Cao, Y., Gu, Q., and Zhang, T. A generalized neural tangent kernel analysis for two-layer neural network. NeurIPS, 2020b. Chizat, L., Oyallon, E., and Bach, F. On lazy training in differentiable programming. NeurIPS, 2019. Cutkosky, A. and Orabona, F. Momentum-based variance reduction in non-convex sgd. NeurIPS, 2019. Daniely, A. Sgd learns the conjugate kernel class of the network. NeurIPS, 2017. Daniely, A. Memorizing gaussians with no overparameterizaion via gradient decent on neural networks. arXiv:1909.11837, 2020. Danilova, M., Kulakova, A., and Polyak, B. Non-monotone behavior of the heavy ball method. arXiv:1811.00658, 2018. Diakonikolas, J. and Jordan, M. I. Generalized momentum-based methods: A hamiltonian perspective. arXiv:1906.00436, 2019. Du, S. S. and Hu, W. Width provably matters in optimization for deep linear neural networks. ICML, 2019. Du, S. S., Lee, J. D., Li, H., Wang, L., and Zhai, X. Gradient descent finds global minima of deep neural networks. ICML, 2019a. Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. ICLR, 2019b. Dukler, Y., Gu, Q., and Montufar, G. Optimization theory for relu neural networks trained with normalization layers. ICML, 2020. Fang, C., Dong, H., and Zhang, T. Over parameterized two-level neural networks can learn near optimal feature representations. arXiv:1910.11508, 2019. Flammarion, N. and Bach, F. From averaging to acceleration, there is only a step-size. COLT, 2015. Foucart, S. Matrix norms and spectral radii. Online lecture note, 2018.Franca, G., Sulam, J., Robinson, D. P., and Vidal, R. Conformal symplectic and relativistic optimization. Journal of Statistical Mechanics: Theory and Experiment, 2020. Gadat, S., Panloup, F., and Saadane, S. Stochastic heavy ball. arXiv:1609.04228, 2016. Ge, R., Kuditipudi, R., Li, Z., and Wang, X. Learning two-layer neural networks with symmetric inputs. ICLR, 2019. Gelfand, I. 
Normierte ringe. Mat. Sbornik, 1941. Ghadimi, E., Feyzmahdavian, H. R., and Johansson, M. Global convergence of the heavy-ball method for convex optimization. ECC, 2015. Ghorbani, B., Mei, S., Misiakiewicz, T., , and Montanari, A. Linearized two-layers neural networks in high dimension. arXiv:1904.12191, 2019. Gidel, G., Bach, F., and Lacoste-Julien, S. Implicit regularization of discrete gradient dynamics in linear neural networks. NeurIPS, 2019. Gitman, I., Lang, H., Zhang, P., and Xiao, L. Understanding the role of momentum in stochastic gradient methods. NeurIPS, 2019. Goh, G. Why momentum really works. Distill, 2017. Gunasekar, S., Woodworth, B., Bhojanapalli, S., Neyshabur, B., and Srebro, N. Implicit regularization in matrix factorization. NeurIPS, 2017. Hanin, B. and Nica, M. Finite depth and width corrections to the neural tangent kernel. ICLR, 2020. Hardt, M. and Ma, T. Identity matters in deep learning. ICLR, 2016. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Hu, B. Unifying the analysis in control and optimization via semidefinite programs. Lecture Note, 2020. Hu, W., Xiao, L., Adlam, B., and Pennington, J. The surprising simplicity of the early-time learning dynamics of neural networks. NeurIPS, 2020a. Hu, W., Xiao, L., and Pennington, J. Provable benefit of orthogonal initialization in optimizing deep linear networks. ICLR, 2020b. Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. NeurIPS, 2018. Ji, Z. and Telgarsky, M. Gradient descent aligns the layers of deep linear networks. ICLR, 2019. Ji, Z. and Telgarsky, M. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks. ICLR, 2020. Kawaguchi, K. Deep learning without poor local minima. NeurIPS, 2016. Kidambi, R., Netrapalli, P., Jain, P., and Kakade, S. M. On the insufficiency of existing momentum schemes for stochastic optimization. ICLR, 2018. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. ICLR, 2015. Krichene, W., Caluyay, K. F., and Halder, A. Global convergence of second-order dynamics in two-layer neural networks. arXiv:2006.07867, 2020. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. NeurIPS, 2012. Laurent, T. and von Brecht, J. Deep linear networks with arbitrary loss: All local minima are global. ICML, 2018. Lee, J., Xiao, L., Schoenholz, S. S., Bahri, Y., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. NeurIPS, 2019. Lee, J. D., Shen, R., Song, Z., Wang, M., and Yu, Z. Generalized leverage score sampling for neural networks. arXiv:2009.09829, 2020. Lessard, L., Recht, B., and Packard, A. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 2016. Li, Y. and Liang, Y. Learning overparameterized neural networks via stochastic gradient descent on structured data. NeurIPS, 2018. Li, Y. and Yuan, Y. Convergence analysis of two-layer neural networks with relu activation. NeurIPS, 2017. Li, Y., Ma, T., and Zhang, H. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. COLT, 2018. Li, Y., Ma, T., and Zhang, H. Learning over-parametrized two-layer relu neural networks beyond ntk. COLT, 2020. Liu, C. and Belkin, M. 
Parametrized accelerated methods free of condition number. arXiv:1802.10235, 2018. Liu, C., Zhu, L., and Belkin, M. On the linearity of large non-linear models: when and why the tangent kernel is constant. arXiv:2010.01092, 2020a. Liu, C., Zhu, L., and Belkin, M. Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning. arXiv:2003.00307, 2020b. Liu, Y., Gao, Y., and Yin, W. An improved analysis of stochastic gradient descent with momentum. NeurIPS, 2020c. Loizou, N. and Richtárik, P. Momentum and stochastic momentum for stochastic gradient, newton, proximal point and subspace descent methods. arXiv:1712.09677, 2017. Loizou, N. and Richtárik, P. Accelerated gossip via stochastic heavy ball method. Allerton, 2018. Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. ICLR, 2019. Lu, H. and Kawaguchi, K. Depth creates no bad local minima. arXiv:1702.08580, 2017. Luo, L., Xiong, Y., Liu, Y., and Sun, X. Adaptive gradient methods with dynamic bound of learning rate. ICLR, 2019. Lyu, K. and Li, J. Gradient descent maximizes the margin of homogeneous neural networks. ICLR, 2020. Mai, V. V. and Johansson, M. Convergence of a stochastic gradient method with momentum for non-smooth nonconvex optimization. ICML, 2020. Mitliagkas, I. Accelerated methods -polyak's momentum (heavy ball method). Online Lecture Note, 2019. Moroshko, E., Gunasekar, S., Woodworth, B., Lee, J. D., Srebro, N., and Soudry, D. Implicit bias in deep linear classification: Initialization scale vs training accuracy. NeurIPS, 2020. Nesterov, Y. Introductory lectures on convex optimization: a basic course. Springer, 2013. Oymak, S. and Soltanolkotabi, M. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv:1902.04674, 2019. Panigrahi, A., Shetty, A., and Goyal, N. Effect of activation functions on the training of overparametrized neural nets. ICLR, 2020. Pilanci, M. and Ergen, T. Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks. ICML, 2020. Polyak, B. Gradient methods for minimizing functionals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 1963. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964. Razin, N. and Cohen, N. Implicit regularization in deep learning may not be explainable by norms. NeurIPS2020, 2020. Recht, B. Lyapunov analysis and the heavy ball method. Lecture note, 2018. Reddi, S. J., Kale, S., and Kumar, S. On the convergence of adam and beyond. ICLR, 2018. Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR, 2014. Scieur, D. and Pedregosa, F. Universal average-case optimality of polyak momentum. ICML, 2020. Shamir, O. Exponential convergence time of gradient descent for one-dimensional deep linear neural networks. COLT, 2019. Shi, B., Du, S. S., Jordan, M. I., and Su, W. J. Understanding the acceleration phenomenon via high-resolution differential equations. arXiv:1810.08907, 2018. Soltanolkotabi, M. Learning relus via gradient descent. NeurIPS, 2017. Song, Z. and Yang, X. Quadratic suffices for over-parametrization via matrix chernoff bound. arXiv:1906.03593, 2019. Su, L. and Yang, P. On learning over-parameterized neural networks: A functional approximation perspective. NeurIPS, 2019. 
Sun, T., Yin, P., Li, D., Huang, C., Guan, L., and Jiang, H. Non-ergodic convergence analysis of heavy-ball algorithms. AAAI, 2019. Tian, Y. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. ICML, 2017. van den Brand, J., Peng, B., Song, Z., and Weinstein, O. Training (overparametrized) neural networks in nearlinear time. arXiv:2006.11648, 2020. Vaswani, A., Shazeer, N., Parmar, N., and et al. Attention is all you need. NeurIPS, 2017. Wang, J.-K., Lin, C.-H., and Abernethy, J. Escaping saddle points faster with stochastic momentum. ICLR, 2020. Wei, C., Lee, J. D., Liu, Q., and Ma, T. Regularization matters: Generalization and optimization of neural nets v.s. their induced kernel. NeurIPS, 2019. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., , and Recht., B. The marginal value of adaptive gradient methods in machine learning. NeurIPS, 2017. Wilson, A. C., Jordan, M., and Recht, B. A lyapunov analysis of momentum methods in optimization. JMLR, 2021. Wu, L., Wang, Q., and Ma, C. Global convergence of gradient descent for deep linear residual networks. NeurIPS, 2019a.Wu, S., Dimakis, A. G., and Sanghavi, S. Learning distributions generated by one-layer relu networks. NeurIPS, 2019b.Wu, X., Du, S. S., and Ward, R. Global convergence of adaptive gradient methods for an over-parameterized neural network. arXiv:1902.07111, 2019c.Yang, G. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv:1902.04760, 2019.Yang, T., Lin, Q., and Li, Z. Unified convergence analysis of stochastic momentum methods for convex and nonconvex optimization. IJCAI, 2018.Yehudai, G. and Shamir, O. Learning a single neuron with gradient methods. COLT, 2020.Yun, C., Sra, S., and Jadbabaie, A. Global optimality conditions for deep neural networks. ICLR, 2018.Zhang, G., Martens, J., and Grosse, R. B. Fast convergence of natural gradient descent for over-parameterized neural networks. NeurIPS, 2019.Zhong, K., Song, Z., Jain, P., Bartlett, P. L., and Dhillon, I. S. Recovery guarantees for one-hidden-layer neural networks. ICML, 2017.Zhou, Y. and Liang, Y. Critical points of linear neural networks: Analytical forms and landscape. ICLR, 2018.Zou, D. and Gu, Q. An improved analysis of training overparameterized deep neural networks. NeurIPS, 2019.Zou, D., Cao, Y., Zhou, D., and Gu, Q. Stochastic gradient descent optimizes overparameterized deep relu networks. Machine Learning, Springer, 2019.Zou, D., Long, P. M., and Gu, Q. On the global convergence of training deep linear resnets. ICLR, 2020.
contains all the high-order terms (in terms of η), e.g. those with ηM t,i and ηM t,j , i = j ∈ [L], or higher. Based on the equivalent update expression of gradient descent with Polyak's momentum −ηM t,l
C. 1 . Proof of Corollary 1 Corollary 1
111Assume that λ min (H) > 0. Denote κ := λ max (H)/λ min (H). Set η = 1/λ max (H) and set β
a) is by α-Lipschitzness of the Hessian and (b) is by the triangle inequality. By (69), (71), Lemma 2, Theorem 6, and Corollary 1, it suffices to show that given ξ s ξ s−1 ≤ θ s 2C 0 ξ 0 ξ −1 and w s − w * ≤ R := 3 64 √ κC0 hold at s = 0, 1, . . . , t − 1, one has
the Gram matrix at the initialization λ min (H) λ min (H) > 0 (by assumption) the least eigenvalue ofH. λ max (H) the largest eigenvalue ofH κ κ := λ max (H)/λ min (H) the condition number ofH λ λ := 3 4 λ min (H) (a lower bound of) the least eigenvalue of H 0 . λ max λ max := λ max (H) + λmin(H) 4 (an upper bound of) the largest eigenvalue of H 0 . κκ := λmax λ = 4 3 κ + 1 3 the condition number of H 0 .
Lemma 8 .
8Suppose .i.d. generated by N (0, I d ) initially. Then, for any set of weight vectors W t := {w
13) the largest eigenvalue of H 0 λ min (H 0 ) λ min (H 0 ) ≥ Lσ 2 min (X)/d y (Lemma 13) the least eigenvalue of H 0 X) = κ (Lemma 13) the condition number of H 0
( 83 )F
83Lemma 14. Following the setting as Theorem 10, denote θ := β * If we have (I) for any s ≤ t, the residual dynamics satisfies ξ s ξ s−1 ≤ θ s · νC 0 ξ 0 ξ −1 , for some constant ν > 0, and (II) for all l ∈ [L] and for any s ≤ t, W ≤ R L-linear := 64 X 2 √ dy Lσ 2 min (X) νC 0 B 0 , then
Now let us switch to upper-bound ψ t . It is equivalent to upper-bounding the Frobenius norm of 1 − ηM t−1,j X third term .
X 2
2, by using Lemma 15 and Lemma 16, we have
in each iteration. Theorem 5 is critical for establishing the desired result. On the other hand, it would become tricky if instead we use Gelfand's formula or other techniques in the related works that lead to a convergence rate in the form of O(tθ t ).
. This condition trivially holds for the case of the quadratic problems, since there is no such term. On the other hand, for solving the other problems, we need to carefully show that the condition holds. For example, according to Lemma 3, showing acceleration for the ReLU network will require bounding terms like (H 0 − H s )ξ s (and other terms as well), where H 0 −H s corresponds to the difference of the kernel matrix at two different time steps. By controlling the width of the network, we can guarantee that the change is not too much. A similar result can be obtained for the problem of the deep linear network. The high-level idea is simple but the analysis of the problems of the neural networks can be tedious.4.1. Non-asymptotic accelerated linear rate for solving strongly convex quadratic problems Theorem 7. Assume the momentum parameter β satisfies
Table 1 .
1Summary of the notations for proving Theorem 9.
Table 2 .
2Summary of the notations for proving Theorem 10. We will simply use κ to represent the condition number of the matrix H0 in the analysis since we haveκ ≤ κ.Lemma 13.[Lemma 4.2 in (Hu et al., 2020b)] By the orthogonal initialization, we have
In Section 2 and Appendix A, we will provide more discussions about this point.
We borrow the term "accelerated linear rate" from the convex optimization literature(Nesterov, 2013), because the result here has a resemblance to those results in convex optimization, even though the neural network training is a non-convex problem.
The authors thank Daniel Pozo for catching a typo. The authors acknowledge support of NSF IIS Award 1910077. JW also thanks IDEaS-TRIAD Research Scholarship 03GR10000818.Therefore, we have to bound ∆ (L:l+1) t 2 . We have for any 1 ≤ i ≤ j ≤ L.where ∆ i 2 ≤ W can be written as a finite sum of some terms of the formby Lemma 13. Thus, we can bound≤ m, (b) follows by the inequality (1 + x/n) n ≤ e x , ∀x ≥ 0, n > 0, (c) from Bernoulli's inequality e r ≤ 1 + (e − 1)r, ∀0 ≤ r ≤ 1, and (d) by choosing any sufficiently larger C .where in the last inequality we use κ := σ 2 max (X) σ 2 min (X) . Now let us switch to bound the second term, we havewhere the second to last inequality uses(106), Lemma 15, and Lemma 16, while the last inequality uses κ :=Now combing (100), (107), and (111), we havewhere we use λ := Lσ 2 min (X) dy . Now we have (93), (99), and(112), which leads towhere the last inequality uses that 1 ≤ 16 9 θ 2 as ηλ ≤ 1 so that θ ≥ 3 4 .Lemma 15. Following the setting as Theorem 10, denote θ := β * + 1where (a), (b) is by the update rule of momentum, which is WProof. The lemma has been proved in proof of Claim 4.4 and Claim 4.5 in(Hu et al., 2020b). For completeness, let us replicate the proof here.We have for anywhere ∆ i = W can be written as a finite sum of some terms of the form. Thus, we can boundwhere the last step uses m > C(LR L-linear ) 2 . By combining this with Lemma 13, one can obtain the result.Remark: In the proof of Lemma 14, we obtain a tighter bound of the distance WHowever, to get the upper-bound σ max (W √ ηλ. Let ν = 2 in Lemma 14, 15, and let C 1 = C 3 = C 0 and C 2 = 1 4 √ ηλ in Theorem 6. The goal is to show that ξ t ξ t−1 ≤ θ t 2C 0 ξ 0 ξ −1 for all t by induction. To achieve this, we will also use induction to show that for all iterations s,which is clearly true in the base case s = 0.By Lemma 4, 13, 14, 15, Theorem 6 and Corollary 1, it suffices to show thatwhere the matrix A and the vector ϕ t are defined in Lemma 4, and B 0 is a constant such that B 0 ≥ Y − U 0 F with probability 1 − δ by Lemma 13. The inequality(119)is the required condition for using the result of Theorem 6, while the inequality (120) helps us to show (119) through invoking Lemma 14 to bound the terms {ϕ s } as shown in the following.Let us show (119) first. We havewhere (a) uses Theorem 5 with β = β 2 * , (b) is by Lemma 14, (c) usesκ 5 ≤ m for some sufficiently large constants C , C > 0, and (f) uses that ηλ = 1 κ and C 0 ≤ 4 √ κ by Corollary 1. Hence, we have shown (119). Therefore, by Theorem 6,By Lemma 15, we have (120). Thus, we have completed the proof.H. ExperimentH.1. ReLU networkWe report a proof-of-concept experiment for training the ReLU network. We sample n = 5 points from the normal distribution, and then scale the size to the unit norm. We generate the labels uniformly random from {1, −1}. We let m = 1000 and d = 10. We compare vanilla GD and gradient descent with Polyak's momentum. Denoteλ max := λ max (H 0 ), λ min := λ min (H 0 ), andκ :=λ max /λ min . Then, for gradient descent with Polyak's momentum, we set the step size η = 1/ λ max and set the momentum parameter β = (1 − 1 2 1 √κ ) 2 . For gradient descent, we set the same step size. The result is shown onFigure 1.We also report the percentiles of pattern changes over iterations. Specifically, we report the quantityas there are mn patterns. 
For gradient descent with Polyak's momentum, the percentiles of pattern changes is approximately 0.76%; while for vanilla gradient descent, the percentiles of pattern changes is 0.55%.H.2. Deep linear networkWe let the input and output dimension d = d y = 20, the width of the intermediate layers m = 50, the depth L = 100. We sampled a X ∈ R 20×5 from the normal distribution. We let W * = I 20 + 0.1W , whereW ∈ R 20×20 is sampled from the normal distribution. Then, we have Y = W * X, η = . Vanilla GD also uses the same step size. The network is initialized by the orthogonal initialization and both algorithms start from the same initialization. The result is shown onFigure 2.
A new regret analysis for adam-type algorithms. A Alacaoglu, Y Malitsky, P Mertikopoulos, V Cevher, 2020Alacaoglu, A., Malitsky, Y., Mertikopoulos, P., and Cevher, V. A new regret analysis for adam-type algorithms. ICML, 2020.
| [] |
[
"Flows around galaxies. I. The dependence of galaxy connectivity on cosmic environments and effects on the star formation rate",
"Flows around galaxies. I. The dependence of galaxy connectivity on cosmic environments and effects on the star formation rate"
] | [
"Daniela Galárraga-Espinosa \nMax-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany\n",
"Enrico Garaldi \nMax-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany\n",
"Guinevere Kauffmann \nMax-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany\n"
] | [
"Max-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany",
"Max-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany",
"Max-Planck Institute for Astrophysics\nKarl-Schwarzschild-Str. 1D-85741GarchingGermany"
] | [] | With the aim of bringing substantial insight to the fundamental question of how galaxies acquire their material for star formation, we present the first comprehensive characterisation of the galaxy connectivity (i.e. the number of small-scale filamentary streams connected to a galaxy) in relation to the cosmic environment, and a statistical exploration of the impact of connectivity on the star formation rate (SFR) at z = 2. We detected kiloparsec-scale filaments directly connected to galaxies by applying the DisPerSE filament finder to the dark matter density around 2942 central galaxies (M * > 10 8 M /h) of the TNG50-1 simulation. Our results demonstrate that galaxy connectivity spans a broad range (from 0 to 9), with more than half of the galaxies connected to two or three streams. We examined a variety of factors that might influence the connectivity and found that it increases with mass, decreases with local density for low-mass galaxies, and does not depend on local environment, estimated by the Delaunay tessellation, for high-mass galaxies. Beyond mass and local density, we further classified galaxies according to their location in different cosmic web environments, and we highlight the influence of the large-scale structure on the number of connected streams. Our results reflect the different strengths of the cosmic tides, which can prevent the formation of coherent streams feeding the galaxies or even disconnect the galaxy from its local web. Finally, we show that at fixed local density, the SFR of low-mass galaxies is up to 5.9σ higher as a result of connectivity. This SFR boost is even higher (6.3σ) for galaxies that are embedded in cosmic filaments, where the available matter reservoirs are large. A milder impact is found for high-mass galaxies, which indicates different relative efficiencies of matter inflow via small-scale streams in galaxies of different masses. | 10.1051/0004-6361/202244935 | [
"https://export.arxiv.org/pdf/2209.05495v4.pdf"
] | 252,212,070 | 2209.05495 | 61689ff4474c688007e882306103be3298d70eba |
Flows around galaxies. I. The dependence of galaxy connectivity on cosmic environments and effects on the star formation rate
May 1, 2023
Daniela Galárraga-Espinosa
Max-Planck Institute for Astrophysics
Karl-Schwarzschild-Str. 1D-85741GarchingGermany
Enrico Garaldi
Max-Planck Institute for Astrophysics
Karl-Schwarzschild-Str. 1D-85741GarchingGermany
Guinevere Kauffmann
Max-Planck Institute for Astrophysics
Karl-Schwarzschild-Str. 1D-85741GarchingGermany
Flows around galaxies. I. The dependence of galaxy connectivity on cosmic environments and effects on the star formation rate
May 1, 2023Received XXX; accepted YYYAstronomy & Astrophysics manuscript no. main(cosmology:) large-scale structure of Universegalaxies: evolutiongalaxies: star formationgalaxies: statisticsmeth- ods: numericalmethods: statistical
With the aim of bringing substantial insight to the fundamental question of how galaxies acquire their material for star formation, we present the first comprehensive characterisation of the galaxy connectivity (i.e. the number of small-scale filamentary streams connected to a galaxy) in relation to the cosmic environment, and a statistical exploration of the impact of connectivity on the star formation rate (SFR) at z = 2. We detected kiloparsec-scale filaments directly connected to galaxies by applying the DisPerSE filament finder to the dark matter density around 2942 central galaxies (M * > 10 8 M /h) of the TNG50-1 simulation. Our results demonstrate that galaxy connectivity spans a broad range (from 0 to 9), with more than half of the galaxies connected to two or three streams. We examined a variety of factors that might influence the connectivity and found that it increases with mass, decreases with local density for low-mass galaxies, and does not depend on local environment, estimated by the Delaunay tessellation, for high-mass galaxies. Beyond mass and local density, we further classified galaxies according to their location in different cosmic web environments, and we highlight the influence of the large-scale structure on the number of connected streams. Our results reflect the different strengths of the cosmic tides, which can prevent the formation of coherent streams feeding the galaxies or even disconnect the galaxy from its local web. Finally, we show that at fixed local density, the SFR of low-mass galaxies is up to 5.9σ higher as a result of connectivity. This SFR boost is even higher (6.3σ) for galaxies that are embedded in cosmic filaments, where the available matter reservoirs are large. A milder impact is found for high-mass galaxies, which indicates different relative efficiencies of matter inflow via small-scale streams in galaxies of different masses.
Introduction
Under the action of gravity, matter on large scales in the Universe is assembled to form a gigantic network composed of nodes, filaments, walls, and voids. This is called the cosmic web (de Lapparent et al. 1986;Bond et al. 1996). Emerging from the initial density fluctuations (Zel'dovich 1970), this cosmic skeleton is mainly composed of and ruled by the dynamics of dark matter (DM). Driven by gravity, baryonic matter falls into the DM potential wells. The structure of the cosmic web is highly multiscale (Aragón-Calvo et al. 2010). While the nodes of the web, hosting the most massive galaxy clusters, are connected to largescale cosmic filaments with widths of several megaparsec (e.g. Gouin et al. 2021Gouin et al. , 2022Galárraga-Espinosa et al. 2022), small haloes are also attached to the web via smaller-scale filaments that are characterised by widths of tens of kiloparsec (e.g. Ramsøy et al. 2021). These small-scale filaments, or streams, are expected to have a strong effect on the evolution and properties of galaxies residing at the centre of these haloes.
Galaxies are thought to be formed at the intersection of these small-scale filamentary streams, which, in theory, feed the galaxies with the cold and dense material necessary for star formation (e.g. Birnboim & Dekel 2003;Kereš et al. 2005;Ocvirk et al. 2008;Dekel et al. 2009;Pichon et al. 2011;Danovich et al. 2012). The theoretical prediction is thus that these filaments act as highways of matter, from the large-scale reservoirs down to the halo centres. This picture is supported by studies in observations such as Bauermeister et al. (2010), and more recently, Prescott et al. (2015) and Zabl et al. (2019), who have clearly demonstrated the need of gas replenishment from external reservoirs. Nevertheless, other processes can also participate in the fuelling of the galaxy with the material for star formation. These are, for example, the precipitation of hot gas in virial equilibrium with the dark matter halo (Kereš et al. 2005), the recycling of gas from the circum-galactic medium (CGM), or even galaxy-galaxy mergers, which drive gas from the outskirts of galaxies into their centres, where they form stars very rapidly in a so-called starburst. While Stewart et al. (2017) has proven that gas accretion into haloes via filamentary streams is a robust prediction of Λ-CDM (because it is independent of the adopted code and feedback model), Nelson et al. (2013) has shown that gas transport inside haloes, that is, from the CGM into the galaxies, is strongly impacted by the numerical scheme of the hydrodynamical simulation (which alters the relative importance of accretion via cold streams and via cooling of shock-heated gas). Thus, the question of how galaxies acquire the material for star formation and the relative efficiency of the processes involved is yet to be understood.
Another active topic of investigation is why galaxies stop forming stars. The current picture involves a complex variety of feedback and environmental processes that regulate the balance between gas inflows and outflows around galaxies, and whose relative impact strongly depends on other parameters Article number, page 1 of 17 arXiv:2209.05495v4 [astro-ph.GA] 27 Apr 2023 A&A proofs: manuscript no. main such as galaxy mass and environment (e.g. Kauffmann et al. 2004;Baldry et al. 2004;Bamford et al. 2009;Peng et al. 2010;Moutard et al. 2018). star formation could be suppressed either by internal mechanisms such as energetic feedback from supernovae or accreting black holes, or by environmental effects such as ram-pressure stripping or tidal interactions. The latter are external processes, which according to Aragon Calvo et al. (2019), are fundamentally linked with the disconnection (or detachment) of the galaxy from its filamentary streams. This engenders a mechanical starvation either by removing gas reservoirs or by preventing gas from reaching galaxies.
In this context, it is crucial to re-evaluate the relative effect of filamentary streams on galaxy evolution in a cosmological context, that is, to take the environment in which galaxies form and evolve into account. A study in a cosmological context is crucial because it is now well established, both in observations and simulations, that beyond the trends with mass and local environment, galaxy properties also vary as a function of their location in the structures of the cosmic web. For example, galaxies located in cluster environments are more massive, form fewer stars, are redder, and their morphologies are more elliptical than those in less dense regions (see e.g. the reviews of Dressler 1980;Boselli & Gavazzi 2006. Similar trends are found in the cores of cosmic filaments with respect to regions that lie farther away from the spines (e.g. Pandey & Bharadwaj (2006) This paper is the first in a series providing an updated picture of the impact of filamentary flows on galaxy evolution. We use the recent TNG50 simulation (Pillepich et al. 2019;Nelson et al. 2019a) to perform a statistical analysis of the number of (kiloparsec-scale) streams connected to galaxies, hereafter referred to as the galaxy connectivity, as a function of the environment of the galaxy in the cosmic web (defined at megaparsec scales). While potential inflows and outflows of baryons along these streams will be studied in the second part of this project, we provide in this paper a first exploration of the impact of galaxy connectivity on the specific star formation rate (sSFR), defined as the SFR normalised by galaxy stellar mass. We emphasise that the multi-scale analysis performed in this work is different from previous studies, which have rather focused on how large-scale structures, such as groups or clusters, are connected to largescale cosmic filaments on megaparsec scales (Kraljic et al. 2020;Gouin et al. 2021Gouin et al. , 2022, yielding relevant conclusions on the properties of the cosmic environments where galaxies live, but not on the properties of galaxies themselves. Moreover, we note that this type of study has only recently been enabled through the advent of large-scale hydrodynamical simulations with more robust baryonic models and increasing resolution (e.g. Tremmel et al. 2017;Pillepich et al. 2019;Dubois et al. 2021), and is crucial in order to interpret future observations. This paper is organised as follows. Section 2 introduces the TNG50 simulation and the dataset of galaxies. We present the detection of the small-scale filamentary streams as well as the large-scale cosmic web in Sect. 3. Results about galaxy connectivity are first introduced in Sect. 4, and the impact of the large-scale environments is discussed in Sect. 5. Finally, the relation between connectivity and SFR is explored in Sect. 6, and we summarise our conclusions in Sect. 7. Throughout this pa-per, we adopt the values of the cosmological parameters given by Planck Collaboration et al. (2016), that is, Ω Λ,0 = 0.6911, Ω m,0 = 0.3089, Ω b,0 = 0.0486, σ 8 = 0.8159, n s = 0.9667, and h = 0.6774. The error bars correspond to the errors on the mean values, derived from bootstrap resampling.
Data
TNG50-1 simulation
The analysis presented in this work uses the outputs of the TNG50-1 simulation, which is the box of the gravitomagnetohydrodynamical simulation suite, IllustrisTNG 1 , with the highest resolution (Pillepich et al. 2018;Nelson et al. 2019b;Pillepich et al. 2019). With a mass resolution of m DM = 3.07 × 10 5 M /h and a volume of (35 cMpc/h) 3 , this box is adapted to study the small-scale (kiloparsec) filamentary streams in a statistical way. We note that the IllustrisTNG project was run with the moving-mesh code Arepo (Springel 2010), and the baryonic models and prescriptions were specifically calibrated on observational data to match the observed galaxy properties and statistics (Pillepich et al. 2018;Nelson et al. 2019b). All the following results are derived from the TNG50-1 snapshot at redshift z = 2. This redshift typically corresponds to the so-called cosmic noon, the epoch in which galaxies formed stars most actively, which is therefore the ideal time at which to examine galaxy connectivity and its influence on star formation.
In the future, we will build on the current work by investigating the gas content of the DM filaments identified here. Therefore, it is crucial to verify that the simulation is also suited for this objective. We verified that TNG50-1 meets the resolution criterion found by Ramsøy et al. (2021) for capturing the filament physical properties (e.g. the shocks in their temperature profile).
Galaxy selection
From the subhalo catalogue of the TNG50-1 simulation at z = 2 (produced using the Subfind code Springel (2005)), we selected the central objects with stellar masses higher than M * > 10 8 M /h. The maximum subhalo stellar mass is 4×10 11 M /h. This selection in mass chooses subhaloes at z = 2 that will most likely become systems with a typical mass of 10 9 − 10 12 M /h at z = 0 (Brinchmann et al. 2004;Taylor et al. 2011).
Importantly, we emphasise that we focus on central galaxies alone. They are identified as the subhaloes at the centre of their corresponding friends-of-fiends (FoF) halo. Satellite galaxies were excluded from this analysis because we found (visually) that they lie very close to the spine of the filaments associated with their central galaxy, that is, satellites are probably part of these streams. A more quantitative analysis of satellite galaxies and their position relative to the filamentary streams will be performed in a future work.
In addition, in order to facilitate the procedure of extracting the filamentary streams (see next section), we conservatively chose to discard the central galaxies located at distances smaller than 1.5 cMpc/h from the edges of the full simulation box. We finally note that 98.8% of the remaining galaxies in our catalogue are star forming, as shown by their main sequence in the M * − SFR plane presented in Appendix A. We discarded the few passive galaxies (35) so that the analysis presented in this work does not mix two different galaxy populations (i.e. galaxies at different evolutionary stages) at z = 2. Based on the selections presented above, the total number of galaxies analysed in this work is 2942.
Finding small-and large-scale filaments
In this section, we explain the procedure we adopted to extract the small-scale streams connected to galaxies and the large-scale (megaparsec) cosmic web skeleton. To detect these multi-scale structures with an optimal resolution, we employed the filament finder DisPerSE (Sect. 3.1) to adapted regions of the DM density field. The small-scale streams were detected from selected sub-boxes centred on the position of individual galaxies (see Sect. 3.2), and the entire simulation box was used to find the large-scale filamentary skeleton, as explained in Sect. 3.3.
Filament extractor code DisPerSE
DisPerSE (Sousbie 2011;Sousbie et al. 2011) is a publicly available code that detects the cosmic skeleton from the topology of the density field (e.g. the DM density), using the discrete Morse theory and the theory of persistence (see Sousbie 2011, and references therein). This algorithm identifies the critical points of the field, that is, the points with a vanishing density gradient. Filaments are defined as the ridges of the density field connecting maximum-density critical points (hereafter CPmax) to saddles 2 . Importantly, the minimum significance of the detected filaments with respect to the noise can be set by fixing the persistence threshold of the corresponding pairs of CPmax-saddle critical points. For density fields that are computed on regular grids (e.g. in this work), the persistence threshold needs to be set via the cut parameter. The value of this parameter should correspond to the amplitude of the noise of the input density grid, so that any CPmax-saddle critical pair with density difference lower than the adopted threshold is rejected. For further details, we refer to the DisPerSE presentation papers (Sousbie 2011;Sousbie et al. 2011) and website 3 .
Extracting the small-scale streams
We detected the small-scale (kiloparsec) filamentary streams connected to galaxies by applying DisPerSE to the local DM density field. For each individual galaxy, we selected the DM particles in sub-boxes with a side L = 3 cMpc/h centred on the position of the galaxy. This value was chosen in order to capture the galaxy environment beyond the typical scales of the CGM, thus probing the large-scale matter distribution. For reference, 3 cMpc/h is a factor of five larger than the largest virial radius of the haloes of the galaxies in our catalogue. We also verified that increasing the size of the sub-boxes did not change the galaxy connectivity estimates. This analysis is presented in Appendix B.
The DM density field was computed by projecting the particles inside the galaxy sub-box onto a regular grid of N pix = 150 pixels per side. We applied a Gaussian filter with a standard deviation equal to the size of a pixel (i.e. 3/150 = 20 ckpc/h) to the grid values, and we rescaled the resulting pixel values by the standard deviation. These steps enable the application of DisPerSE with the same parametrisation to all the 2942 density grids. Figure 1 presents some examples of DM density grids (projected along the y-axis). For each panel, the analysed galaxy (red star) is at the centre of the sub-box, and the virial radius of the host halo is indicated by the red circle. Other centrals and satellites located in the same sub-box are shown as white stars and dots, respectively.
DisPerSE was then applied to each one of the 2942 regular grids, so that each galaxy possessed its own set of small-scale filaments. We treated the non-periodic boundary conditions of each sub-box by specifying the periodicity 0 keyword in the computation of the Morse-smale complexes. The persistence threshold, which acts as a filter of the features that are likely to have been generated by noise, was determined by exploring a broad range of values of the DisPerSE cut parameter. In Appendix C we assess the impact of this parameter on the final number of streams connected to the galaxies. We show that cut values above 25 are required to efficiently remove filaments arising from the noise, and that the progressive increase in persistence beyond cut=25 only mildly impacts the connectivity estimates (by slightly lowering the connectivity normalisation). From this study, we conclude that provided the noise-induced filaments are removed, our statistical results on connectivity depend only very weakly on the exact value of the DisPerSE persistence threshold. We therefore chose to fix this parameter to cut=30 after visual inspection of several random galaxies. This threshold kept some small filament portions that visually agreed well with the underlying DM density field, but were absent in the skeletons derived with higher cut values.
The positions of the resulting streams were then smoothed using the DisPerSE skelconv function. By straightening the skeleton segments and smoothing sharp and possibly nonphysical edges between them, this final step alleviates the effect of shot noise on the geometry of the filaments. This procedure does not affect the topology of the density field (Codis et al. 2018), and thus keeps the connectivity unchanged. Figure 2 presents some examples of the resulting streams in 3D boxes. These correspond to the same galaxies as in Fig. 1.
It is worth noting that we explicitly chose not to identify the filaments from the DM particle distribution in order to avoid the inevitable contamination from small clumps (at scales < 10 kpc) and, most of all, from the high shot-noise levels provoked by the great number of DM particles. We found that skeletons detected in the particle distribution were extremely sensitive to the slightest changes of persistence threshold, causing filaments to appear and disappear, and provoking radical changes in the position of even the most prominent structures. It is therefore a more stable method to run DisPerSE on a DM density grid, but this has the drawback of setting an intrinsic resolution scale, L/N pix = 20 ckpc/h (see the orange lines in Fig. 1, which correspond to ten pixels). This means that the positions of the filament spines are determined with a precision of ±10 ckpc/h. While this precision limit might compromise the accuracy of radial density profiles (because the exact position of the filament cores is uncertain), we emphasise that it does not undermine the results on connectivity we present here.
Extracting the large-scale cosmic skeleton
With a similar method as for the small-scale streams, the largescale (megaparsec) cosmic skeleton was detected by projecting the full TNG50-1 DM particle distribution onto a regular grid of 150 pixels per side, yielding an intrinsic resolution scale of 35/150 = 0.23 cMpc/h for these large-scale cosmic filaments. The persistence threshold was set after analysing the outputs ob- Fig. 1: Examples of 2D projected DM density fields. For each sub-box with a side of 3 cMpc/h, the red star and red circle correspond to the analysed central galaxy of mass M * > 10 8 M and to the R 200 radius of its host FoF halo, respectively. The small white stars and white dots indicate other centrals and satellite galaxies located in the sub-box, respectively. The length of the orange line in the bottom left part of the panels corresponds to ten times the resolution scale of the grid chosen to project the DM density and extract the skeleton, i.e. ten times 20 ckpc/h. tained with different values of the cut parameter. For the largescale structure, a physical criterion for determining the robustness of the skeleton is that the DisPerSE CPmax points match the positions of the most massive haloes, such as those of groups and clusters of galaxies. The results of this matching is presented in Appendix D, in which the choice running of DisPerSE with a persistence threshold of 6 is also justified. Figure 3 shows the resulting cosmic filaments in the 3D box of the TNG50-1 simulation. We recall that the identification of the large-scale cosmic skeleton in this work is done solely with the aim of classifying the galaxies into different cosmic environments, as we show in Sect. 5.
Galaxy connectivity
After detecting the small-scale filamentary streams, we present in this section a statistical analysis of the galaxy connectivity, that is, the number of streams to which each galaxy is connected. Sect. 4.1 presents general results for all the galaxies, and secondary dependences on galaxy mass and local environment are analysed in Sect. 4.2 and Sect. 4.3, respectively.
General results
Figure 4 shows the distribution of the number of streams to which a galaxy is connected for all the 2942 galaxies of the dataset. This number was obtained by counting the number of filaments that cross the virial radius of the host haloes. This figure shows that the galaxy connectivity spans a broad range (from 0 to 9) and presents a long tail towards high connectivity values, indicating that high connectivity is possible but occurs quite rarely.

The highest peaks are seen for N_streams = 2 and 3, with 32.5% and 25.6% of galaxies connected to two and three streams, respectively. The mean and median values of the distribution are 2.36 and 2, respectively.
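The stream counting itself reduces to a simple geometric test; in the sketch below a stream is given as an array of points sampled along its spine (an illustrative simplification of DisPerSE's output) and is counted when it crosses the R200 sphere of the host halo:

```python
import numpy as np

def count_streams(streams, halo_centre, r200):
    """Number of streams crossing the virial radius of the host halo.

    `streams` is a list of (n_i, 3) arrays of points sampled along each
    filament; a stream is counted as connected when it has points both
    inside and outside the R200 sphere.
    """
    n_connected = 0
    for points in streams:
        r = np.linalg.norm(points - halo_centre, axis=1)
        if r.min() < r200 < r.max():
            n_connected += 1
    return n_connected
```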
The skewed shape and broad range of the distribution presented in Fig. 4 indicate that additional factors may affect the galaxy connectivity. In the next sections, we therefore distinguish secondary dependences on galaxy mass and local environment.
Trends with galaxy mass
In this section, we investigate the effects of the galaxy mass on its connectivity. Figure 5 shows the N_streams distribution for galaxies separated into five different bins of stellar mass. A clear trend emerges: the distributions of the highest-mass bins (e.g. purple) are shifted towards higher connectivity values than those of the lowest-mass bins (e.g. yellow). More massive galaxies are therefore more connected than lower-mass galaxies. This result provides an extension to lower masses of the trend that is well established in galaxy clusters on megaparsec scales (Aragón-Calvo et al. 2010; Codis et al. 2018; Darragh Ford et al. 2019; Sarron et al. 2019; Malavasi et al. 2020; Kraljic et al. 2020; Gouin et al. 2021). The mean and median values of the distributions of Fig. 5 also reflect the described trend. From the lowest to the highest masses, the mean connectivity values are 2.18 ± 0.03, 2.49 ± 0.07, 2.63 ± 0.06, 2.93 ± 0.12, and 3.13 ± 0.10. According to Codis et al. (2018), a higher connectivity is predicted for high-density peaks (massive galaxies in our context) because all the eigenvalues of the Hessian matrix (i.e. the matrix of the second derivatives of the density field) are equal in the vicinity of these peaks, thus describing a situation of local isotropy where all incoming directions become possible (see also Pichon & Bernardeau 1999).
We note that the separation into different mass bins allows us to better understand the asymmetric shape of the total N_streams distribution presented in Fig. 4. The peak at N_streams = 0 is most clearly associated with the lowest-mass galaxies, whose distributions in Fig. 5 are more skewed than those of the highest-mass bins.

Fig. 6: Relation between galaxy mass and mean connectivity, ⟨N_streams⟩ (blue curve). The dashed red line shows the resulting fit curve as presented in Eq. 1.
To proceed with the quantitative analysis, we present in Fig. 6 the relation between the mean connectivity, ⟨N_streams⟩, and galaxy mass. A simple logarithmic model was used to fit this relation, and the best-fit result is shown by the dashed red line. The resulting parameters are given by
⟨N_streams⟩ = (0.47 ± 0.02) × log(M* [M⊙/h]) − (1.69 ± 0.13).    (1)
We verified that the ∼ 0.5 slope is independent of the number and limits of the mass bins. These results show that the trends of galaxy connectivity with mass can be captured quite well by a simple relation in the N streams − log(M * ) plane. This relation echoes the theoretical results of Codis et al. (2018), using peak theory.
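A minimal illustration of this fit, assuming arrays of stellar masses (in M⊙/h) and connectivity values; the binning mirrors the five mass bins of Fig. 5, and each bin is assumed to be populated:

```python
import numpy as np

def fit_connectivity_mass(mstar, nstreams, n_bins=5):
    """Least-squares fit of <N_streams> = a * log10(M*) + b (cf. Eq. 1)."""
    logm = np.log10(mstar)
    edges = np.linspace(logm.min(), logm.max(), n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Mean connectivity in each stellar-mass bin.
    means = np.array([nstreams[(logm >= lo) & (logm < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    a, b = np.polyfit(centres, means, deg=1)  # text reports a ~ 0.47, b ~ -1.69
    return a, b
```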
In this section, we have shown that the number of streams connected to a galaxy depends on galaxy mass. We found the clear trend that more massive galaxies are more strongly connected than less massive galaxies on average. We now explore any dependences on the local environment of the galaxy, which is quantified by the local density.
Trends with local density
We used the Delaunay tessellation field estimator (DTFE; Schaap & van de Weygaert 2000; van de Weygaert & Schaap 2009) to compute the local densities of the galaxies. The DTFE was applied to the distribution of the 2942 massive centrals of our catalogue, so that each galaxy defined a vertex in the Delaunay tessellation and was attributed a density value, hereafter ρ_DTFE. In order to mitigate the effect of Poisson noise in our estimates, we smoothed the densities by averaging the value at each vertex with that of its direct neighbours in the Delaunay tessellation. After this smoothing, local over-densities were computed as

1 + δ_DTFE = ρ_DTFE / ⟨ρ_DTFE⟩,    (2)

where ⟨ρ_DTFE⟩ represents the average of all the densities. Physically, this quantity can be interpreted as a proxy for the crowding of the local environment of the galaxy. Galaxies in crowded regions (i.e. with many other neighbouring galaxies) are associated with high local over-densities, whereas low local over-densities pertain to galaxies living in more locally empty, less crowded spaces. This is clearly illustrated in the example of Fig. 7.
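The DTFE estimate, the neighbour smoothing, and Eq. (2) could be implemented along the following lines. This is a simplified sketch for a 3D point set that ignores boundary corrections:

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_overdensity(points):
    """Return the smoothed 1 + delta_DTFE of Eq. (2) for points of shape (N, 3).

    The DTFE density at a vertex is (D + 1) / V_contig, with V_contig the
    total volume of the Delaunay tetrahedra sharing that vertex.
    """
    tri = Delaunay(points)
    tets = points[tri.simplices]                          # (M, 4, 3)
    vols = np.abs(np.linalg.det(tets[:, 1:] - tets[:, :1])) / 6.0
    v_contig = np.zeros(len(points))
    for simplex, v in zip(tri.simplices, vols):
        v_contig[simplex] += v
    rho = 4.0 / v_contig                                  # D + 1 = 4 in 3D

    # Mitigate Poisson noise: average each vertex with its direct neighbours.
    indptr, indices = tri.vertex_neighbor_vertices
    rho_smooth = np.array([
        np.mean(np.append(rho[indices[indptr[i]:indptr[i + 1]]], rho[i]))
        for i in range(len(points))])
    return rho_smooth / rho_smooth.mean()                 # 1 + delta_DTFE
```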
Because mass and local density are intrinsically correlated (e.g. Aragón-Calvo et al. 2010), it is crucial to analyse these two parameters together in order to simultaneously capture their influence on galaxy connectivity. This is done in Fig. 8, where we present the variation in mean connectivity in the mass-overdensity parameter space (left panel) and the corresponding bootstrap errors (right panel). For reference, the number of galaxies contributing to each pixel of this 2D plane is shown in Fig. E.1 of Appendix E.
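The binned means and their bootstrap errors shown in Fig. 8 could be computed as in this sketch (bin edges and the number of resamples are illustrative choices):

```python
import numpy as np

def binned_mean_bootstrap(x, y, values, x_edges, y_edges, n_boot=1000, seed=0):
    """Mean of `values` in each (x, y) bin, with bootstrap errors on the mean."""
    rng = np.random.default_rng(seed)
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    mean = np.full((nx, ny), np.nan)
    err = np.full((nx, ny), np.nan)
    ix = np.digitize(x, x_edges) - 1
    iy = np.digitize(y, y_edges) - 1
    for i in range(nx):
        for j in range(ny):
            v = values[(ix == i) & (iy == j)]
            if v.size == 0:
                continue  # empty pixel
            mean[i, j] = v.mean()
            # Bootstrap: resample with replacement, take the std of the means.
            resamples = rng.choice(v, size=(n_boot, v.size), replace=True)
            err[i, j] = resamples.mean(axis=1).std()
    return mean, err
```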
In addition to the already described trends with galaxy mass, Fig. 8 shows interesting trends with 1 + δ_DTFE. For low-mass galaxies (with stellar masses lower than ∼ 10^9.5 M⊙/h), the mean number of streams strongly decreases with increasing local over-density. The least connected galaxies are located in the highest-density environments (see the yellow region in the top left corner of the plot). Galaxies in these crowded environments are subject to stronger (local) tidal effects (e.g. Hahn et al. 2009), which increase the probability of strong interactions (e.g. by mergers) with respect to galaxies in lower-density environments. Aragon Calvo et al. (2019) have shown that these interactions can lead to the disconnection of galaxies from their filamentary web, thus leading to very low connectivity values. In line with these interpretations, this figure also shows that low-mass galaxies embedded in less crowded regions (log(1 + δ_DTFE) < −1.5) have more connections to small-scale filamentary streams.
In stark contrast with low-mass galaxies, high-mass galaxies (M * > 10 9.5 M /h) do not show any significant trend with local over-density. Their mean connectivity varies between two and five (with few exceptions) regardless of the specific values of mass and density. We note that the tail at the highest M * and 1 + δ DTFE values (top right corner) is due to the intrinsic correlation between mass and local environment. High-mass galaxies are less sensitive to the tides driven by the local density, therefore their high connectivity is most probably explained by the trends with mass discussed in Sect. 4.2.
The right panel of Fig. 8 demonstrates that the results presented in this section are significant because the errors of the relevant pixels are tiny and not correlated with their position in the mass-overdensity plane. Finally, we verified the robustness of these results by repeating the same analysis using mass-weighted Delaunay densities (not shown). We found exactly the same trends of connectivity with local density as in Fig. 8.
The local density gives a first-order description of the environment of a galaxy, but it does not encode information on the location of this galaxy in the large-scale environment, set by the different structures of the cosmic web. Knowing the position of a galaxy in the large-scale structures is crucial for fully understanding the results presented in this work. This is shown in the next section.
Connectivity in different cosmic web environments
In this section, we explore the effect of the large-scale cosmic environment on galaxy connectivity. It is important to extend the study of environment beyond the first-order analysis of local densities because of the well-established influence of large-scale cosmic tides on matter assembly (Hahn et al. 2009; Musso et al. 2018; Paranjape et al. 2018). Before presenting our results, we recall that information about the local over-density of a galaxy does not allow us to unambiguously determine the position of this object in the cosmic web. This is due to the degeneracies between local and global (cosmic) environments (e.g. Cautun et al. 2014). Figure 13 of Cautun et al. (2014) clearly illustrates this point, as the 1 + δ distributions of matter in the cosmic environments of nodes, filaments, walls, and voids largely overlap.

Table 1: Numbers of galaxies in the different cosmic environments and zones of the mass-overdensity plane (from A to D, see Fig. 8).

                          Total     A     B     C     D
All cosmic environments    2942  1750   834   149   209
Voids + Walls              1211   981   172    44    14
Filament outskirts          454   281   138    15    20
Filaments                  1213   488   498    86   141
Cluster outskirts            28     0    26     0     2
Galaxy clusters              36     0     0     4    32
We associate galaxies with one of the five different cosmic environments presented in the illustration of Fig. 9. The five cosmic environments are defined below, and the number of galaxies in each is reported in the first column of Table 1.
First, galaxy clusters are spheres with a radius R200 centred on the positions of the FoF haloes with masses M200 > 10^12 M⊙/h. Second, cluster outskirts are defined as spherical shells with inner and outer radii of 1 and 3 × R200, centred on the positions of galaxy clusters. Third, cosmic filaments are cylinders aligned with the spine of the (large-scale) skeleton detected in Sect. 3.3, with a radius of 1 cMpc/h. This value was chosen in order to select the regions associated with the cores of cosmic filaments (Galárraga-Espinosa et al. 2022). Filament outskirts are the regions between 1 and 2 cMpc/h from the axis of cosmic filaments (excluding the filament cores). Finally, void and wall environments are all the other regions that do not belong to one of the four described above. We note that we here analyse galaxies in voids and walls together because only little information is available about the physical properties of these cosmic structures (e.g. wall average thickness or void size). This information would be required in order to associate galaxies with the structures of the cosmic web described by DisPerSE.

Fig. 9: 2D illustration of the five different cosmic environments. These are clusters of galaxies (red), cluster outskirts (purple), cosmic filaments (green), filament outskirts (orange), and 'other' environments (blue). The exact definitions and corresponding number of galaxies belonging to each environment are presented in the main text.
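A sketch of this classification is given below; it assumes the cluster positions with their R200 radii and a dense point sampling of the large-scale filament spines are available, and the priority applied when a galaxy satisfies several criteria (clusters first, then their outskirts, then filaments) is an assumption of the illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_environment(gal_pos, cluster_pos, cluster_r200, fil_points):
    """Assign each galaxy (positions in cMpc/h) to a cosmic environment."""
    # Distance of each galaxy to the nearest filament spine point.
    d_fil = cKDTree(fil_points).query(gal_pos)[0]
    # Distance to each cluster in units of that cluster's R200.
    d = np.linalg.norm(gal_pos[:, None, :] - cluster_pos[None, :, :], axis=2)
    d_clu = (d / cluster_r200[None, :]).min(axis=1)

    env = np.full(len(gal_pos), "voids+walls", dtype=object)
    env[d_fil < 2.0] = "filament outskirts"   # 1-2 cMpc/h from the spine
    env[d_fil < 1.0] = "filaments"            # filament cores
    env[d_clu < 3.0] = "cluster outskirts"    # 1-3 x R200
    env[d_clu < 1.0] = "clusters"             # within R200
    return env
```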
Figure 10 presents the connectivity distribution of galaxies split according to these five cosmic environments. The distributions are clearly different, demonstrating a dependence of connectivity on the location of the galaxy in the cosmic web. The corresponding trends in the mass versus 1 + δ_DTFE plane are exhibited in Fig. 11, where the mean connectivity and errors are presented in the top and bottom panels, respectively. For completeness, the number of galaxies in each bin of this 2D parameter space is shown in the 2D histogram of Fig. E.2. We observe the following trends with cosmic environment.
First, in cosmic filaments and filament outskirts, low-mass galaxies in high-density regions (zone B, top left corner) are significantly less connected than the same galaxies in voids and walls. In filaments and in voids and walls, the mean connectivity of these galaxies is ⟨N_streams⟩ = 1.43 ± 0.05 and 2.34 ± 0.09, respectively, yielding an 8.48σ difference between these cosmic environments. This result can be explained by the different strengths of the cosmic tidal flow (Kraljic et al. 2019). Due to the stronger gravitational pull, galaxies in filaments and their outskirts are subject to stronger large-scale tides than their analogues in walls and voids. For example, Jhee et al. (2022) presented a clear illustration of halo-mass tidal stripping by dense cosmic filaments. As argued by Hahn et al. (2009) and already mentioned in Sect. 4.3, strong tides (whether local or cosmic) can prevent the convergence of matter flows onto galaxies and hence the formation of coherent streams. Interestingly, because the galaxies in zone B share local over-density values, the observed decrease in connectivity in large-scale filaments is most probably the result of cosmic tides combined with strong interactions with the environment, which can strip these low-mass galaxies of their streams (Aragon Calvo et al. 2019).
The interpretation of the very low connectivity values observed in cluster outskirts is much less straightforward because the statistics in these regions is poor. We nevertheless comment on the fact that cluster outskirts are unique environments at the intersection between cosmic filaments and clusters, so that galaxies with different histories co-exist in these regions (e.g. galaxies falling through filaments, splash-back galaxies, or galaxies in groups, as studied in Kuchner et al. 2022;Borrow et al. 2023;Hough et al. 2023). In addition, the question of how galaxies are accreted into cluster cores and the physical processes they undergo during their infall is currently under active investigation (e.g. Gouin et al. 2022;Kotecha et al. 2022;Salerno et al. 2022, and references therein). At this stage we can therefore only argue that results in cluster outskirts might be a combined effect of galaxy diversity and interactions in this unique environment, but a study with a larger number of galaxies is required.
In stark contrast with the previously studied cosmic environments, Figs. 10 and 11 show that galaxy clusters host systems with the highest connectivity values of all, with a total average of 3.5 streams. Because these cosmic structures dominate the local gravitational field, they are rather insensitive to the cosmic tidal flows. The great number of streams of galaxies in clusters is therefore driven by the high galaxy masses found in these cosmic structures, following the trends presented in Sect. 4.2.
The results of this section echo the analysis of the zoom-in simulations of Borzyszkowski et al. (2017), Romano-Díaz et al. (2017), and Garaldi et al. (2018). In these papers, the authors focused on a few selected haloes and separated the accreting from the stalled ones, finding that their different mass-assembly histories are explained by the location of the halo in the cosmic web (see e.g. Fig. 10 of Borzyszkowski et al. 2017). While a careful study of accretion rates and outflows along the galactic streams will be done in a follow-up project, from Figs. 10 and 11 one can already anticipate that accreting haloes might be highly connected objects residing in cosmic environments in which the tidal field is relatively weak, whereas stalled haloes might rather be disconnected from their matter supply and be embedded in structures where the cosmic flow is strong (e.g. in large-scale filaments).
Impact on star formation
After studying the connectivity of galaxies and understanding its dependences on mass and environment, we present in this section a first exploration of the impact of galaxy connectivity on star formation. This is a crucial analysis because the material for star formation (cold and dense gas) is predicted to be accreted onto the galaxy via small-scale streams such as those detected and studied here (e.g. Kereš et al. 2005; Ocvirk et al. 2008; Dekel et al. 2009). While a more comprehensive analysis including studies of mass-accretion rates and gas properties of the filamentary streams is left for a follow-up project, we can already try to identify any possible effects solely driven by topology here, that is, by the number of connections of the galaxy to filamentary streams.
In order to break the well-known degeneracies between star formation, galaxy mass, and local density and to probe the specific effects of connectivity, we separated galaxies into the four different populations presented in the mass-overdensity plane of Fig. 8 (see the dashed grey lines). From A to D, galaxies increase in local density and mass. The limits between populations are M* = 10^9.5 M⊙/h and 1 + δ_DTFE = 10^−1.5, and the number of galaxies in each is reported in the first line of Table 1.

Fig. 10: Connectivity distribution as a function of cosmic environments. The five cosmic environments are defined in the main text. The vertical lines represent the mean values in each of the different environments. These are ⟨N_streams⟩ = 2.77 ± 0.03, 2.04 ± 0.06, 2.09 ± 0.04, 0.43 ± 0.14, and 3.50 ± 0.19 for voids and walls, filament outskirts, filaments, cluster outskirts, and clusters, respectively.

Fig. 11: Top: Mean galaxy connectivity (shown by the pixel colours) in the mass vs 1 + δ_DTFE plane as a function of cosmic environments (from left to right panels). The dashed grey lines show the limits of the four different galaxy populations studied in Sect. 6. Bottom: Corresponding bootstrap errors. The dark green pixels (error values of zero) need to be interpreted with caution as they represent bins with only one galaxy (see the number counts in Fig. E.2).

Figure 12 presents the variation in mean sSFR as a function of the galaxy connectivity for these four galaxy populations in all cosmic environments combined. For reference, the average sSFR of all the galaxies in a given population is marked by the dotted horizontal lines. For low-mass galaxies, the highest sSFR values are associated with the largest number of connections, yielding a clear positive correlation between star formation and connectivity (see populations A and B, shown in blue and green, respectively). The significance of this relation is estimated using Eq. 3 and is found to be as high as 5.84σ and 5.92σ for populations A and B, respectively. This strong sSFR enhancement driven by connectivity is in line with the so-called cold accretion mode introduced by Kereš et al. (2005). Namely, the haloes hosting low-mass galaxies may not be massive enough to support shocks, enabling the cold gas flowing along the filamentary streams to reach the centre of the halo, thus feeding the central galaxy with material for star formation. Consequently, the more streams, the higher the sSFR enhancement.
On the other hand, for high-mass galaxies (populations C and D, shown in yellow and red, respectively), the relation between the sSFR and connectivity is rather flat. This indicates that star formation in massive galaxies is less dependent on the number of connections of the galaxy to the matter reservoirs outside the halo. In line with the so-called hot accretion mode (Kereš et al. 2005), this indicates that in massive systems star formation might have little to do with potential inflows of cold gas via the filamentary streams, and might instead be regulated by internal processes, such as the recycling of gas within the halo, or the cooling of gas that has been shock-heated by accretion onto the halo. In this scenario, a more important parameter for understanding star formation in massive galaxies could be the cooling rate of gas, rather than the galaxy connectivity.

Fig. 12: Influence of galaxy connectivity on star formation. The curves show the mean galaxy sSFR as a function of connectivity for the four different galaxy populations from A to D presented in Fig. 8. The horizontal lines show the average sSFR of all the galaxies in a given population, regardless of the connectivity value. We note that the y-axis is in logarithmic scale.
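The population split and the curves of Fig. 12 could be computed as in the following sketch (array names are illustrative):

```python
import numpy as np

def population_labels(mstar, one_plus_delta):
    """Four populations of Fig. 8; limits M* = 10^9.5 Msun/h and
    1 + delta_DTFE = 10^-1.5 (A to D: increasing density and mass)."""
    high_mass = mstar > 10**9.5
    high_dens = one_plus_delta > 10**-1.5
    return np.where(high_mass,
                    np.where(high_dens, "D", "C"),
                    np.where(high_dens, "B", "A"))

def mean_ssfr_vs_connectivity(ssfr, nstreams, labels, pop):
    """Mean sSFR per connectivity value for one population."""
    sel = labels == pop
    ns = np.unique(nstreams[sel])
    means = np.array([ssfr[sel][nstreams[sel] == n].mean() for n in ns])
    return ns, means
```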
It is established that galaxy properties can be impacted by the large-scale environment (e.g. Hahn et al. 2007b,a; Laigle et al. 2015; Borzyszkowski et al. 2017; Musso et al. 2018; Paranjape et al. 2018; Malavasi et al. 2022, and references therein), therefore we further differentiated galaxies with respect to their location in the cosmic web. Figure 13 captures the specific role of the large-scale environment on the relation between star formation and galaxy connectivity for voids and walls, filament outskirts, and filaments. We refrained from performing this study in clusters and cluster outskirts due to the very low number of galaxies in these structures, as exposed in Table 1. Moreover, we note that in order to have statistically meaningful results, bins of N_streams with fewer than ten galaxies were removed from this plot (they usually correspond to extreme connectivity values). This figure shows the same qualitative results as Fig. 12, that is, the sSFR of low-mass galaxies is largely enhanced with connectivity, while that of high-mass galaxies shows a much milder relation with the number of connected streams.
Nevertheless, the strength of the observed trends strongly varies in the different cosmic structures. This is quantified in Fig. 14, which presents the significance ∆ of the sSFR enhancement due to connectivity. The ∆ values are estimated by
∆(N) = [ ⟨sSFR⟩(N) − ⟨sSFR⟩(N_min) ] / √( σ_sSFR(N)² + σ_sSFR(N_min)² ),    (3)
where ⟨sSFR⟩ and σ_sSFR denote the mean sSFR values and corresponding bootstrap errors as seen in Figs. 12 and 13, respectively, and N_min represents the lowest number of streams for galaxies in a given population and cosmic environment.

It is striking to see that cosmic filaments (dot-dashed lines with circles) are the places in which the star formation of low-mass galaxies is most enhanced (with up to 6.30σ for population B). While still significant, this enhancement is more moderate in other cosmic environments, with maximum ∆ values of 3.08σ in walls and voids, and 4.19σ in filament outskirts. These differences illustrate how the matter reservoirs of the different cosmic environments play an important role in boosting galaxy star formation. At fixed connectivity values, the small-scale streams attached to galaxies embedded in (large-scale) cosmic filaments benefit from the larger matter reservoirs proper to these environments, and are thus probably more efficiently fueled than those in the emptier environments of walls and voids, for instance. To summarise, the results in this section show that high connectivity values in matter-rich large-scale environments significantly favour the star formation activity of low-mass galaxies at z = 2.

Fig. 13: Influence of galaxy connectivity on star formation. The curves show the mean galaxy sSFR as a function of connectivity for different galaxy populations (from A to D, see Fig. 8) and cosmic environments. The horizontal lines show the average sSFR of a given galaxy population and cosmic environment, regardless of the connectivity value. We note that the y-axis is in logarithmic scale.
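Eq. (3) translates directly into code; the sketch below assumes the mean sSFR values and their bootstrap errors are stored in arrays ordered by increasing N_streams, with the first entry corresponding to N_min:

```python
import numpy as np

def significance(ssfr_mean, ssfr_err):
    """Significance Delta(N) of the sSFR enhancement (Eq. 3)."""
    base, base_err = ssfr_mean[0], ssfr_err[0]   # values at N_min
    return (ssfr_mean - base) / np.sqrt(ssfr_err**2 + base_err**2)
```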
Summary and conclusions
The question of how galaxies acquire the material from the cosmic web to fuel star formation is fundamental to galaxy evolution. We presented the first comprehensive characterisation of the galaxy connectivity (i.e. the number of filamentary streams attached to a galaxy) in relation with the cosmic environment. We also showed the first steps towards assessing the impact of this topological property on the galaxy SFR. By performing a statistical analysis of 2942 massive (M* > 10^8 M⊙/h) centrals in the TNG50-1 simulation at z = 2, we reached the main conclusions summarised below.
-(i) The total connectivity distribution (Fig. 4) spans a broad range from zero to nine streams. Most of the galaxies (> 50%) are connected to two or three streams, and fewer than 5% of them are connected to five streams or more.
-(ii) Galaxy connectivity strongly depends on galaxy mass. We found that low-mass galaxies are less connected than high-mass galaxies on average (Fig. 5). Empirically, we established the following simple relation between mean connectivity and galaxy mass: ⟨N_streams⟩ ∝ 0.5 log(M* [M⊙/h]), presented in Fig. 6.
-(iii) Galaxy connectivity also depends on local environment, with differences between low- and high-mass galaxies (Fig. 8). We found that low-mass galaxies (with stellar masses lower than ∼ 10^9.5 M⊙/h) in high local over-density environments are connected to significantly smaller numbers of streams than galaxies of the same mass that are located in less over-dense regions. This trend with local environment was interpreted through the influence of the stronger tidal forces felt by low-mass galaxies in high over-density environments (Hahn et al. 2009; Aragon Calvo et al. 2019). We showed for high-mass galaxies that their connectivity is independent of local over-density, and that their greater number of connected streams is probably driven by their mass.
-(iv) By further disentangling galaxies in different cosmic environments, we found that the average galaxy connectivity decreases from cosmic voids and walls to filament outskirts, from the latter to filament cores, and is lowest of all in cluster outskirts (Figs. 10 and 11). This decrease might be due to the increasing strength of cosmic tides in these cosmic environments (e.g. Musso et al. 2018; Paranjape et al. 2018; Kraljic et al. 2019). On the other hand, we showed that the average galaxy connectivity is highest of all in galaxy clusters, where the most massive galaxies reside.
-(v) We found that galaxy connectivity significantly enhances (up to ∼ 6σ) the star formation of low-mass galaxies, but no significant effect is seen in high-mass galaxies (Fig. 12). This indicates different dominant accretion modes in low- and high-mass galaxies.
-(vi) We showed that if they keep the connections despite the strong tides, low-mass galaxies in matter-rich regions of the cosmic web (e.g. cosmic filaments) present stronger star formation activities than their analogues in emptier large-scale environments (Fig. 14). This explicitly shows the importance of the large-scale matter reservoirs in fueling the star formation of low-mass galaxies.
These results draw a picture in which star formation is linked to an external parameter describing topology, the galaxy connectivity. Within this picture, many connected streams might favour the accretion of cold material from the large scales and thus boost the galaxy star formation, especially in the case of low-mass galaxies. As mentioned in the main body of the paper, it remains to be investigated whether galaxy connectivity is a fundamental parameter or rather a proxy for gas accretion rates, for instance. For example, it remains to be determined whether all the DM streams actively transport matter towards the galaxy, what fraction of gas is accreted via the streams with respect to an isotropic accretion, and, more fundamentally, whether mass is the result of connectivity (because of an efficient accretion of matter through the streams) or whether the connectivity is driven by mass. These questions will be answered in the next parts of this series of papers, where we will also investigate the gas properties of the streams.
Moreover, throughout this paper, we showed that cosmic filaments host galaxies with the most diverse ranges of masses, local densities, and connectivity values (see e.g. the middle panel of Fig. 11). Different galaxy populations therefore co-exist in these cosmic environments, which are also less extreme than those of clusters of galaxies, and present a rich diversity in terms of gas density and temperature (e.g. Galárraga-Espinosa et al. 2021). This diversity makes cosmic filaments an interesting environment for galaxies, in which the evolution of different populations of galaxies in the broader cosmological picture can be studied.
Appendix A: Galaxies in the M * − SFR plane
The relation between stellar mass and SFR of the TNG50-1 central galaxies studied in this work is presented in the 2D histogram of Fig. A.1. The silver line shows the main sequence, extracted from Pillepich et al. (2019). We specify that this curve was derived from the study of all the galaxies of the simulation at z = 2 (centrals and satellites of all masses). Star-forming and passive populations are identified following the method presented in Pillepich et al. (2019) (relying on the logarithmic distance to the main sequence). Almost all the galaxies we studied are star forming. Only 35 galaxies of our catalogue are identified as passive (red points in Fig. A.1), which means that the fraction of quenched central galaxies of mass M * > 10 8 M /h is negligible (1.2%) in TNG50-1 at z = 2.
Due to the lack of statistics, passive galaxies are not considered in this work (see Sect. 2.2). For reference only, we note that the connectivity distribution of these galaxies ranges from zero to five streams, with mean and median values of 2.2 and 2.0, respectively. Roughly half of them lie in clusters (17), 11 are in filaments, and the remaining galaxies are located in the outskirts of filaments and clusters.
Appendix B: Connectivity in larger sub-boxes
In this appendix we show that the size of the sub-boxes we used to detect the small-scale filaments around the central galaxies does not affect the results we presented. For a random sample of 388 galaxies, we applied the same method as presented in Sect. 3.2 to DM sub-boxes with a side of L = 4 cMpc/h centred on the galaxy positions. This new value of the box side is one megaparsec larger than the fiducial one and is the largest possible value while maintaining the pixel size (i.e. the resolution) fixed to the original value. The numerical load of larger boxes exceeds the capacity of the DisPerSE code. Figure B.1 compares the resulting connectivity distribution to that derived using the fiducial box size for galaxies of all masses (top panel) and in the low-and high-mass bins (bottom). Following the main text, these bins are defined by the mass limit of 10 9.5 M /h, and the 388 randomly selected galaxies are split into 351 and 37 low-and high-mass objects, respectively.
The connectivity distributions are essentially the same. This is confirmed by the p-value of 0.97 obtained from the two-sample Kolmogorov-Smirnov test comparing the distribution derived from the larger sub-boxes (dashed blue) to the fiducial one (grey) for galaxies of all masses.
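This comparison could be reproduced with SciPy's two-sample Kolmogorov-Smirnov test, as sketched below for the two connectivity samples (array names are illustrative):

```python
from scipy.stats import ks_2samp

# Connectivity of the same 388 galaxies measured in the fiducial
# (L = 3 cMpc/h) and enlarged (L = 4 cMpc/h) sub-boxes.
def compare_connectivity(n_fiducial, n_large):
    statistic, p_value = ks_2samp(n_fiducial, n_large)
    return p_value   # ~0.97 for the test reported above
```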
Appendix C: Setting the value of the DisPerSE persistence threshold to extract the small-scale filamentary streams

Figure C.1 shows three examples of galaxies for which seven different skeletons were extracted, each with a value of the DisPerSE persistence threshold in the range cut = [10, 45]. As expected, these visual illustrations qualitatively show that the total number of filaments decreases with increasing values of persistence because of the progressive trimming of short branches. Nevertheless, galaxy connectivity is only very mildly affected, as these short branches are only rarely connected to the central galaxy.

In Fig. C.2 we report the evolution of the mean connectivity as a function of persistence for the galaxy populations already introduced in Fig. 8. The error bars in this figure correspond to the errors on the mean, computed by bootstrap resampling. We note that, given the high numerical cost of detecting the streams with DisPerSE, we limited this analysis to a random set of 372 galaxies. This number represents ∼ 13% of the total galaxy dataset. The connectivity shows a very shallow linear decrease with increasing persistence (as a consequence of the progressive removal of low-significance branches), and in our experience, deviations from this linear trend arise when noise-induced spurious filaments contribute significantly. Therefore, examining the deviations from linearity in this figure, we show that persistence thresholds above 25 are adequate choices. An even more granular view is provided in Fig. C.3, where we show the dependence of Fig. 8 on the chosen persistence value. All panels except the leftmost two show consistent results, albeit with a different normalisation for the reason discussed above.
These results thus show that the statistical analysis we performed very weakly depends on the exact value of the DisPerSE persistence threshold, provided spurious filaments are efficiently removed. Therefore, any value of the cut parameter above 25 represents an adequate choice.
Fig. 2: Examples of the resulting small-scale filamentary streams detected with DisPerSE (blue lines) for the same galaxies as in Fig. 1. The red spheres correspond to spheres with a radius R200 of the galaxy host FoF halo. For illustration, the black points represent a random sub-sample (1/1000) of the DM particle distribution in the sub-box, but the filaments were detected from 3D grids of the DM density field, as described in Sect. 3.2.

Fig. 3: Cosmic filaments (black lines) of the full TNG50-1 simulation. The red dots show the position of clusters of galaxies, i.e. the FoF haloes with masses M200 > 10^12 M⊙/h. The blue stars correspond to the 2942 central galaxies analysed in this work. The detection of the cosmic skeleton is presented in Sect. 3.3.

Fig. 4: Histogram of the number of streams, N_streams, connected to each galaxy of the full galaxy catalogue. The vertical dotted and dashed lines show the mean and median values, respectively. The number of streams to which a galaxy is connected defines the galaxy connectivity.

Fig. 5: Connectivity distribution by bins of galaxy mass (from yellow to purple in increasing mass). The mean value of each distribution is 2.18 ± 0.03, 2.49 ± 0.07, 2.63 ± 0.06, 2.93 ± 0.12, and 3.13 ± 0.10 from the lowest to the highest masses.

Fig. 7: Example of local over-density estimates for the galaxies (coloured circles) in a 5 cMpc/h thick slice of the TNG50-1 simulation.

Fig. 8: Connectivity variations as a function of mass and local over-density. Left: Mean connectivity ⟨N_streams⟩ in the galaxy mass vs local over-density plane. The pixel colours represent the mean number of streams in a given mass and 1 + δ_DTFE bin. The dashed grey lines show the limits of the four different galaxy populations we study in Sect. 6. Right: Corresponding bootstrap errors on the means. The dark green pixels (error values of zero) need to be interpreted with caution because they represent bins with only one galaxy (see the number counts in Fig. E.1).

Fig. 14: Significance of the sSFR enhancement due to galaxy connectivity. These curves show the results of Eq. 3 for different galaxy populations (see colours) embedded in different cosmic environments (different line styles).

Fig. A.1: Galaxies in the M* − SFR plane.

Fig. B.1: Comparison of connectivity distributions derived from different galaxy box sizes. The fiducial box side is L = 3 cMpc/h (grey and solid blue histograms). The larger boxes, in dashed blue, have a side of L = 4 cMpc/h. Top: Results for galaxies of all masses. Bottom: Distributions for galaxies separated into two mass bins.

Fig. C.1: Effect of the DisPerSE persistence threshold (set by the value of the cut parameter) on the resulting small-scale skeleton around three random galaxies (from top to bottom). The black points in the background present a random sub-sample (1/1000) of the underlying DM distribution, and the central white star indicates the position of the galaxy. These boxes have a side length of 3 cMpc/h.

Fig. C.2: Mean connectivity as a function of persistence for the four galaxy populations defined in the mass-overdensity plane of Fig. 8.

Fig. E.1: 2D histogram of the number of galaxies in the mass-overdensity plane (see Sect. 4.3).

Fig. E.2: 2D histogram of the number of galaxies in the mass-overdensity plane, further segregated by cosmic environment (see Sect. 5).
Table D.1: Fraction of FoF haloes of the TNG50-1 simulation hosting DisPerSE CPmax points, and conversely.

                                   cut 4   cut 5   cut 6   cut 7
Small haloes hosting CPmax         26.7%   18.4%   13.3%   9.34%
Groups hosting CPmax               91.1%   89.4%   88.6%   78.9%
Clusters hosting CPmax              100%    100%    100%   98.4%
CPmax in small haloes              52.9%   46.4%   39.0%   33.2%
CPmax in Groups                    24.4%   30.6%   35.2%   37.9%
CPmax in Clusters                  13.7%   17.4%   20.3%   24.2%
Total fraction of matched CPmax    91.1%   94.4%   94.5%   95.3%
Acknowledgements. The authors thank the referee for their very constructive comments and suggestions. DGE would like to thank Raúl Angulo, Rüdiger Pakmor, and Nir Mandelker for useful and insightful discussions. We thank Adam L. Schaefer and Céline Gouin for providing comments on the final version of the draft. We also thank the IllustrisTNG team for making their data publicly available, and for creating a user-friendly and complete website.

Appendix D: Setting the value of the DisPerSE persistence threshold to extract the cosmic skeleton

Testing the robustness of the positions of the cosmic filaments recovered by DisPerSE is far from trivial. A true physical reference cannot be easily determined in the simulations, and extracting the skeleton with another detection technique (e.g. Cautun et al. 2013; Tempel et al. 2014; Bonnaire et al. 2020) and comparing the outputs would rather be a characterisation of the similarities and differences of the different methods (already performed by Libeskind et al. 2018), and not a study of the detection of the actual cosmic filaments. Therefore, in order to test the robustness of the cosmic skeleton and fix the value of the DisPerSE persistence threshold, we focused on the positions of the ending points of the filaments, the CPmax, which by construction correspond to the topological nodes of the skeleton. We compared the positions of the CPmax to those of the most massive FoF haloes of the TNG50-1 simulation. The latter were divided into mass bins chosen to encompass the main classes of objects: small haloes, groups, and clusters of galaxies (see Table D.1). A robust skeleton maximises the fraction of CPmax in the most massive haloes (by reducing the number of critical points in less massive objects or other less dense environments) while identifying 100% of the most massive haloes as CPmax. This is the case for the skeleton extracted with a persistence threshold of 6, which was therefore adopted for the classification of galaxies in different cosmic environments.
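The matching underlying Table D.1 could be implemented as in this sketch; associating a CPmax with a halo when it falls within the halo's R200 is an assumption of the illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_cpmax_haloes(cpmax_pos, halo_pos, halo_r200):
    """Fractions of haloes hosting a CPmax and of CPmax lying in haloes."""
    cp_tree = cKDTree(cpmax_pos)
    # Haloes hosting at least one CPmax within their R200.
    hosting = np.array([len(cp_tree.query_ball_point(p, r)) > 0
                        for p, r in zip(halo_pos, halo_r200)])
    # Each CPmax is matched when it lies within R200 of its nearest halo.
    dist, idx = cKDTree(halo_pos).query(cpmax_pos)
    matched = dist < halo_r200[idx]
    return hosting.mean(), matched.mean()
```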
References

Alpaslan, M., Grootes, M., Marcum, P. M., et al. 2016, MNRAS, 457, 2287
Aragon Calvo, M. A., Neyrinck, M. C., & Silk, J. 2019, The Open Journal of Astrophysics, 2, 7
Aragón-Calvo, M. A., van de Weygaert, R., & Jones, B. J. T. 2010, MNRAS, 408, 2163
Baldry, I. K., Glazebrook, K., Brinkmann, J., et al. 2004, ApJ, 600, 681
Bamford, S. P., Nichol, R. C., Baldry, I. K., et al. 2009, MNRAS, 393, 1324
Bauermeister, A., Blitz, L., & Ma, C.-P. 2010, ApJ, 717, 323
Birnboim, Y. & Dekel, A. 2003, MNRAS, 345, 349
Bond, J. R., Kofman, L., & Pogosyan, D. 1996, Nature, 380, 603
Bonjean, V., Aghanim, N., Douspis, M., Malavasi, N., & Tanimura, H. 2020, A&A, 638, A75
Bonnaire, T., Aghanim, N., Decelle, A., & Douspis, M. 2020, A&A, 637, A18
Borrow, J., Vogelsberger, M., O'Neil, S., McDonald, M. A., & Smith, A. 2023, MNRAS [arXiv:2205.10376]
Borzyszkowski, M., Porciani, C., Romano-Díaz, E., & Garaldi, E. 2017, MNRAS, 469, 594
Boselli, A. & Gavazzi, G. 2006, PASP, 118, 517
Boselli, A. & Gavazzi, G. 2014, A&A Rev., 22, 74
Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151
Cautun, M., van de Weygaert, R., & Jones, B. J. T. 2013, MNRAS, 429, 1286
Cautun, M., van de Weygaert, R., Jones, B. J. T., & Frenk, C. S. 2014, MNRAS, 441, 2923
Chen, Y.-C., Ho, S., Mandelbaum, R., et al. 2017, MNRAS, 466, 1880
Codis, S., Pogosyan, D., & Pichon, C. 2018, MNRAS, 479, 973
Danovich, M., Dekel, A., Hahn, O., & Teyssier, R. 2012, MNRAS, 422, 1732
Darragh Ford, E., Laigle, C., Gozaliasl, G., et al. 2019, MNRAS, 489, 5695
de Lapparent, V., Geller, M. J., & Huchra, J. P. 1986, ApJ, 302, L1
Dekel, A., Birnboim, Y., Engel, G., et al. 2009, Nature, 457, 451
Dressler, A. 1980, ApJ, 236, 351
Dubois, Y., Beckmann, R., Bournaud, F., et al. 2021, A&A, 651, A109
Faucher-Giguère, C.-A. & Kereš, D. 2011, MNRAS, 412, L118
Faucher-Giguère, C.-A., Kereš, D., & Ma, C.-P. 2011, MNRAS, 417, 2982
Galárraga-Espinosa, D., Aghanim, N., Langer, M., Gouin, C., & Malavasi, N. 2020, A&A, 641, A173
Galárraga-Espinosa, D., Aghanim, N., Langer, M., & Tanimura, H. 2021, A&A, 649, A117
Galárraga-Espinosa, D., Langer, M., & Aghanim, N. 2022, A&A, 661, A115
Ganeshaiah Veena, P., Cautun, M., van de Weygaert, R., Tempel, E., & Frenk, C. S. 2021, MNRAS, 503, 2280
Ganeshaiah Veena, P., Cautun, M., van de Weygaert, R., et al. 2018, MNRAS, 481, 414
Garaldi, E., Romano-Díaz, E., Borzyszkowski, M., & Porciani, C. 2018, MNRAS, 473, 2234
Gouin, C., Aghanim, N., Bonjean, V., & Douspis, M. 2020, A&A, 635, A195
Gouin, C., Bonnaire, T., & Aghanim, N. 2021, A&A, 651, A56
Gouin, C., Gallo, S., & Aghanim, N. 2022, A&A, 664, A198
Hahn, O., Carollo, C. M., Porciani, C., & Dekel, A. 2007a, MNRAS, 381, 41
Hahn, O., Porciani, C., Carollo, C. M., & Dekel, A. 2007b, MNRAS, 375, 489
Hahn, O., Porciani, C., Dekel, A., & Carollo, C. M. 2009, MNRAS, 398, 1742
Hough, T., Cora, S. A., Haggar, R., et al. 2023, MNRAS, 518, 2398
Jhee, H., Song, H., Smith, R., et al. 2022, ApJ, 940, 2
Kauffmann, G., White, S. D. M., Heckman, T. M., et al. 2004, MNRAS, 353, 713
Kereš, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2
Kotecha, S., Welker, C., Zhou, Z., et al. 2022, MNRAS, 512, 926
Kraljic, K., Arnouts, S., Pichon, C., et al. 2018, MNRAS, 474, 547
Kraljic, K., Pichon, C., Codis, S., et al. 2020, MNRAS, 491, 4294
Kraljic, K., Pichon, C., Dubois, Y., et al. 2019, MNRAS, 483, 3227
Kuchner, U., Haggar, R., Aragón-Salamanca, A., et al. 2022, MNRAS, 510, 581
Laigle, C., Pichon, C., Arnouts, S., et al. 2018, MNRAS, 474, 5437
Laigle, C., Pichon, C., Codis, S., et al. 2015, MNRAS, 446, 2744
Libeskind, N. I., van de Weygaert, R., Cautun, M., et al. 2018, MNRAS, 473, 1195
Malavasi, N., Aghanim, N., Douspis, M., Tanimura, H., & Bonjean, V. 2020, A&A, 642, A19
Malavasi, N., Arnouts, S., Vibert, D., et al. 2017, MNRAS, 465, 3817
Malavasi, N., Langer, M., Aghanim, N., Galárraga-Espinosa, D., & Gouin, C. 2022, A&A, 658, A113
Moutard, T., Sawicki, M., Arnouts, S., et al. 2018, MNRAS, 479, 2147
Musso, M., Cadiou, C., Pichon, C., et al. 2018, MNRAS, 476, 4877
Nelson, D., Pillepich, A., Springel, V., et al. 2019a, MNRAS, 490, 3234
Nelson, D., Springel, V., Pillepich, A., et al. 2019b, Computational Astrophysics and Cosmology, 6, 2
Nelson, D., Vogelsberger, M., Genel, S., et al. 2013, MNRAS, 429, 3353
Ocvirk, P., Pichon, C., & Teyssier, R. 2008, MNRAS, 390, 1326
Pandey, B. & Bharadwaj, S. 2006, MNRAS, 372, 827
Paranjape, A., Hahn, O., & Sheth, R. K. 2018, MNRAS, 476, 3631
Peng, Y.-j., Lilly, S. J., Kovač, K., et al. 2010, ApJ, 721, 193
Pichon, C. & Bernardeau, F. 1999, A&A, 343, 663
Pichon, C., Pogosyan, D., Kimm, T., et al. 2011, MNRAS, 418, 2493
Pillepich, A., Nelson, D., Springel, V., et al. 2019, MNRAS, 490, 3196
Pillepich, A., Springel, V., Nelson, D., et al. 2018, MNRAS, 473, 4077
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13
Prescott, M. K. M., Martin, C. L., & Dey, A. 2015, ApJ, 799, 62
Ramsøy, M., Slyz, A., Devriendt, J., Laigle, C., & Dubois, Y. 2021, MNRAS, 502, 351
Romano-Díaz, E., Garaldi, E., Borzyszkowski, M., & Porciani, C. 2017, MNRAS, 469, 1809
Rost, A., Stasyszyn, F., Pereyra, L., & Martínez, H. J. 2020, MNRAS, 307
Salerno, J. M., Muriel, H., Coenda, V., et al. 2022, arXiv e-prints, arXiv:2210.09300
Sarron, F., Adami, C., Durret, F., & Laigle, C. 2019, A&A, 632, A49
Schaap, W. E. & van de Weygaert, R. 2000, A&A, 363, L29
Sousbie, T. 2011, MNRAS, 414, 350
Sousbie, T., Pichon, C., & Kawahara, H. 2011, MNRAS, 414, 384
Springel, V. 2005, MNRAS, 364, 1105
Springel, V. 2010, MNRAS, 401, 791
Stewart, K. R., Maller, A. H., Oñorbe, J., et al. 2017, ApJ, 843, 47
Taylor, E. N., Hopkins, A. M., Baldry, I. K., et al. 2011, MNRAS, 418, 1587
Tempel, E., Stoica, R. S., Martínez, V. J., et al. 2014, MNRAS, 438, 3465
Tremmel, M., Karcher, M., Governato, F., et al. 2017, MNRAS, 470, 1121
van de Weygaert, R. & Schaap, W. 2009, The Cosmic Web: Geometric Analysis, ed. V. J. Martínez, E. Saar, E. Martínez-González, & M. J. Pons-Bordería, Vol. 665, 291-413
Welker, C., Bland-Hawthorn, J., Van de Sande, J., et al. 2020, MNRAS, 491, 2864
Winkel, N., Pasquali, A., Kraljic, K., et al. 2021, MNRAS, 505, 4920
Zabl, J., Bouché, N. F., Schroetter, I., et al. 2019, MNRAS, 485, 1961
Zel'dovich, Y. B. 1970, Astrophysics, 6, 164
Mechanical features based object recognition
Pakorn Uttayopas
Xiaoxiao Cheng
Jonathan Eden
Etienne Burdet
Index Terms—haptic exploration, interaction mechanics, feature extraction, supervised learning for classification, clustering
Current robotic haptic object recognition relies on statistical measures derived from movement-dependent interaction signals such as force, vibration or position. Mechanical properties that can be identified from these signals are intrinsic object properties that may yield a more robust object representation. Therefore, this paper proposes an object recognition framework using multiple representative mechanical properties: the coefficient of restitution, stiffness, viscosity and friction coefficient. These mechanical properties are identified in real time using a dual Kalman filter, then used to classify objects. The proposed framework was tested with a robot identifying 20 objects through haptic exploration. The results demonstrate the technique's effectiveness and efficiency, and that all four mechanical properties are required for best recognition, yielding a rate of 98.18 ± 0.424%. Clustering with Gaussian mixture models further shows that using these mechanical properties results in superior recognition as compared to using statistical parameters of the interaction signals.
I. INTRODUCTION
As robots are increasingly used in various fields such as agriculture, they have to manipulate objects of different mechanical properties skillfully. For instance, to harvest tomatoes or potatoes with similar shape, it is necessary to know their respective mechanical properties to handle them without dropping or crushing them. To recognize the objects a robot is interacting with, it is necessary to extract the unique features that characterize them [1]. While geometric features can be used to identify solid objects [2]- [4], the shape of compliant objects changes with interaction such that shape alone is not sufficient to identify them.
It is therefore necessary to characterize objects through mechanical parameters extracted during interaction. Compliant objects can be recognised by using tactile information obtained during haptic interaction such as force and vibrations. Empirical measures of these signals, such as the maximum, minimum and variance have been used for classification [5]- [7]. While these interaction features can be used for object recognition, their value depends on specific actions, and their use can be highly redundant leading to high computational cost.
The intrinsic mechanical features of objects may yield a more specific representation and thus lead to more efficient recognition. These material properties describe an object's behavior in response to a load; for example, the energy loss during impact can be characterized by the coefficient of restitution [8]. The deformation and restoration of the surface in response to a force exerted perpendicular to it is characterised by the material's viscoelasticity. Similarly, when applying a tangential force, the resistance to sliding can be characterized through the roughness. These parameters have been previously considered to estimate mechanical properties from tactile information.
The coefficient of restitution is an important property in characterising how a body reacts during impact. While this property has rarely been used for object recognition, related features have been extracted from acoustic and acceleration data by investigating signal magnitude in the frequency domain [9], [10] or applying unsupervised learning methods [11] or statistical tools [12], [13]. The consideration of acceleration peak has also been used as a similar impact related measure for object recognition, proving able to recognise five different materials [14].
Compliance-related features characterise deformation in response to continuous forces. Empirically, these features can be estimated by analyzing the normal force signal during interaction [7], [15], [16]. Such approaches have been used to estimate stiffness [17], [18] and to infer how full a bottle is when grasped [19]. Stiffness, however, only characterises the static response. To estimate both stiffness and viscosity, a recursive least-square algorithm has been used [20]- [22] as well as a Gaussian process [23]. This estimation has then been applied to object recognition in simulation [24].
To characterise the response to sliding, roughness-related features are typically used. These features are typically estimated through the force or vibration occurring in the tangential direction during sliding [25]-[27]. In addition, a constant Coulomb model is commonly used to identify a surface's friction [28], [29]. Using the surface's friction along with geometrical information, a robot could recognise 18 household objects with different shapes and materials [30]. By considering dynamic friction parameters, the robot-environment interaction can be modelled as a quasi-static LuGre model [31]. Using such dynamic friction parameters can benefit the classification of objects with different surface materials [32], [33].
These previous works show how a single mechanical property can be used to recognise objects. However, multiple objects may exhibit the same value of a specific mechanical property and thus cannot be distinguished by it. Integrating the collection of mechanical property estimates into a haptic exploration framework may improve object recognition. However, there is currently no method to estimate multiple mechanical properties simultaneously and use them together. Furthermore, the coefficient of restitution has rarely been used to recognise objects. This prompted us to develop a framework for the identification of mechanical properties and their use to recognise specific objects, which is presented in this paper. In this new approach, the coefficient of restitution, stiffness, viscosity and friction coefficient are estimated from the interaction force during haptic exploration. Our work builds upon [34], which adapted viscoelastic parameters to maintain a stable interaction. To address issues with parameter oscillation, we used a dual Kalman filter to account for sensory noise. We further added estimation of the coefficients of friction and restitution. The resulting method is first validated in simulation. The role of each mechanical parameter in object recognition is then investigated before our method is compared to representative statistical and empirical methods from the literature. Fig. 1 shows the overall recognition framework with its three components: identification, control, and object recognition. These components work together to identify and classify unknown objects based on their mechanical properties.
A robot, driven by the controller, interacts mechanically with objects to retrieve interaction force data as well as its positions. The robot's estimator first estimates the coefficient of restitution when touching the object's surface. A dual extended Kalman filter (DEKF) is then used to identify the object's stiffness, viscosity and friction coefficient online from signals of haptic sensors. These mechanical features are also used to adapt the controller's parameters so as to interact with each object properly.
Additionally, the features based on the estimated mechanical parameters are combined to form the dataset feeding object recognition algorithms, in order to identify and cluster objects. This process is performed offline after the haptic exploration.
II. ONLINE ESTIMATION AND CONTROL
This section describes the online estimation and control. First, a discrete impact model and a continuous interaction model are introduced to capture the robot-environment interaction at different stages. Using these models, the estimation of the impact property (coefficient of restitution) and of the continuous interaction properties (stiffness, viscosity and friction coefficient) is presented. Finally, the interaction controller used to drive the robot smoothly during interaction with the environment is explained.
A. Interaction model
Let the dynamics of an n-DOF robot interacting with its environment be described by
$$M(q)\,\ddot{x} + C(q,\dot{q})\,\dot{x} + G(q) = u + F + \omega \tag{1}$$
where $x$ is the coordinate of the end effector in operational space and $q$ is the vector of joint angles. $M(q)$ and $C(q,\dot{q})$ represent the inertia and Coriolis matrices, $G(q)$ is the gravitation vector, $u$ is the control input and $\omega$ motor noise. The interaction force $F$ can be modelled with a mass-spring-damper system in the normal direction and Coulomb friction in the tangential direction:
$$F = \begin{bmatrix} F_\perp \\ F_\parallel \end{bmatrix} = \begin{bmatrix} F_0 + \kappa\, x_\perp + d\,\dot{x}_\perp \\ \mu F_\perp \end{bmatrix} \tag{2}$$
where $F_0 = -\kappa x_0$ is the force corresponding to the surface rest length $x_0$ (without interaction), $\kappa$ is the surface stiffness, $d$ its viscosity, and $\mu$ its friction coefficient.
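As an illustration of the contact model (2), here is a minimal Python sketch; the function and parameter names are ours, not taken from the authors' implementation.

```python
import numpy as np

def interaction_force(x_n, xdot_n, F0, kappa, d, mu):
    """Contact force model of Eq. (2): viscoelastic response in the
    normal direction plus Coulomb friction in the tangential direction."""
    F_normal = F0 + kappa * x_n + d * xdot_n   # mass-spring-damper response
    F_tangential = mu * F_normal               # Coulomb friction
    return np.array([F_normal, F_tangential])
```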
B. Impact estimation
The initial contact of a robot with an object occurs in two phases: deformation and restoration. The deformation phase starts at the initial point of contact and continues until maximum deformation. It is followed by the restoration phase, from the time of maximum deformation until separation occurs. By investigating the impulses of these two phases, the coefficient of restitution is defined as the ratio of the normal impulse of restoration to the normal impulse of deformation [35]:
$$\psi = \frac{R}{D} = \frac{\int_{t_0}^{t_+} F_\perp \,\mathrm{d}t}{m_\perp \left[\dot{x}_\perp(t_0) + \dot{x}_\perp(t_-)\right]}. \tag{3}$$
Here, $D$ is the momentum from 0.01 s before the collision, $t_-$, to the time of maximum deformation, $t_0$, and $R$ integrates the normal force from $t_0$ to 0.01 s after the collision, $t_+$.
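A minimal numerical sketch of (3) follows, assuming uniformly sampled force and velocity signals and approximating $t_-$ as $t_0 - 0.01$ s; all names are illustrative.

```python
import numpy as np

def restitution_coefficient(t, F_n, xdot_n, m_n, window=0.01):
    """Estimate the coefficient of restitution psi of Eq. (3) from
    sampled normal force F_n(t) and normal velocity xdot_n(t).
    t0 (maximum deformation) is taken at the normal-force peak."""
    i0 = int(np.argmax(F_n))                     # index of maximum deformation
    t0 = t[i0]
    mask = (t >= t0) & (t <= t0 + window)        # restoration interval [t0, t+]
    R = np.trapz(F_n[mask], t[mask])             # restoration impulse
    i_minus = int(np.searchsorted(t, t0 - window))
    D = m_n * (xdot_n[i0] + xdot_n[i_minus])     # deformation momentum, Eq. (3)
    return R / D
```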
C. Continuous properties estimation
We assume that the robot can measure the end-effector position (e.g. from joint encoders) as well as the force normal to the surface, subject to large noise $\nu$.
For the estimation, the system dynamics become nonlinear due to the coupling of the robot's states and mechanical parameters in the interaction force model (2). In discrete state-space form, the dynamics of the robot interacting with the environment are:
$$\begin{aligned} \xi_{k+1} &= f(\xi_k, u_k, \theta_k) + \omega_k \\ \eta_k &= h(\xi_k) + \nu_k \end{aligned} \tag{4}$$
$$\xi \equiv [x_\perp,\ \dot{x}_\perp,\ x_\parallel,\ \dot{x}_\parallel,\ \mu]^T, \quad \eta \equiv [x_\perp,\ x_\parallel]^T, \quad u \equiv [u_\perp,\ u_\parallel]^T, \quad \theta \equiv [F_0,\ \kappa,\ d]^T$$
where $f$ is a nonlinear mapping obtained from (1) and $h$ is a nonlinear mapping between the states and the observation. The augmented state $\xi$ consists of the robot's states and the friction parameter, $u$ is the vector of motor commands, $\eta$ is the vector of measured robot positions and $\theta$ is the viscoelasticity vector. Due to the system's nonlinearity, the noise and the coupling between states and parameters, we employ the dual extended Kalman filter method [36] to estimate the robot's state and the interaction mechanics' parameters simultaneously. The dual Kalman filter is a recursive estimation process, which uses partial measurements to estimate the parameters in the model before integrating the updated model and measurements to estimate the hidden states. Fig. 2 depicts the two designed estimators. Estimator 1 estimates the state $\xi$, which includes the robot's states and the friction parameter. Estimator 2 then estimates the viscoelasticity parameters $\theta$ from the measured normal force. In principle, the prediction error cost is minimized when the estimated parameters $\hat\theta, \hat\mu$ converge to the real values $\theta, \mu$ while the estimated states $\hat\xi$ converge to the real states $\xi$.
1) Robot's states estimation:
The robot's states ξ and friction parameter µ will be estimated together by using the nonlinear stochastic state-space model (4), with the linearization
$$\begin{aligned} \xi_{k+1} &= A_k \xi_k + B_k u_k + \omega_k \\ \eta_k &= C_k \xi_k + \nu_k \end{aligned} \tag{5}$$
$$A_k = \left.\frac{\partial f(\xi, u, \theta)}{\partial \xi}\right|_{(\hat\xi_k, \hat\theta_k, u_k)} = \begin{bmatrix} 1 & \delta & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & \delta & 0 \\ 0 & 0 & 0 & 1 & \delta \hat{F}_{\perp k}/m_\parallel \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad B_k = \left.\frac{\partial f(\xi, u, \theta)}{\partial u}\right|_{(\hat\xi_k, \hat\theta_k, u_k)} = \begin{bmatrix} 0 & 0 \\ \delta/m_\perp & 0 \\ 0 & 0 \\ 0 & \delta/m_\parallel \\ 0 & 0 \end{bmatrix}, \quad C_k = \left.\frac{\partial h(\xi)}{\partial \xi}\right|_{(\hat\xi_k, \hat\theta_k, u_k)} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}, \tag{6}$$
where $m_\perp$ and $m_\parallel$ are the masses in the normal and tangential directions, and $\delta$ is the integration time step. $\hat{F}_\perp$ is the estimated normal force from the environment model (2). The Kalman filter to estimate $\xi$ is then designed as
$$\hat\xi_{k+1} = \hat\xi^-_{k+1} + K_{\xi,k+1}\left(\eta_k - C\,\hat\xi^-_{k+1}\right) \tag{7}$$
where $\hat\xi^-_{k+1} = f(\hat\xi_k, u_k, \hat\theta_k)$ is the predicted state obtained using the last estimated state, and $K_{\xi,k+1}$ is the filter gain for state estimation.
2) Viscoelasticity parameters estimation: The viscoelasticity parameter vector $\theta$ can be estimated by using the measured normal force, the interaction force model (2) and the estimated robot state $\hat\xi$. The EKF is used to estimate the viscoelasticity parameters by considering the following state-space model:
$$\begin{aligned} \theta_{k+1} &= \theta_k + \omega_k \\ \eta_{\theta,k} &= h_\theta(\xi_k, \theta_k) + \nu_k. \end{aligned} \tag{8}$$
The observer for the estimation of the viscoelasticity parameters is given by
$$\hat\theta_{k+1} = \hat\theta_k + K_{\theta,k+1}\left(\eta_{\theta,k} - C_{\theta,k}\hat\theta_k\right) \tag{9}$$
where $K_{\theta,k+1}$ is the Kalman filter gain for parameter estimation, $\eta_{\theta,k}$ is the measured normal force, and the output matrix is
$$C_{\theta,k} = \left.\frac{\partial h_\theta^T(\hat\xi_k, \theta)}{\partial \theta}\right|_{(\hat\xi_k, \hat\theta_k)} = \begin{bmatrix} 1 & \hat{x}_{\perp k} & \hat{\dot{x}}_{\perp k} \end{bmatrix}. \tag{10}$$
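To make the dual estimation loop of Eqs. (5)-(10) concrete, here is a minimal single-step sketch in Python. It is an illustrative reconstruction under the linearization above, not the authors' code; the noise covariances and all variable names are our own assumptions.

```python
import numpy as np

def dekf_step(xi, P_xi, theta, P_theta, u, eta, F_meas,
              m_n, m_t, dt, Q_xi, R_xi, Q_theta, R_theta):
    """One iteration of the dual extended Kalman filter (Sec. II-C).
    xi = [x_n, xdot_n, x_t, xdot_t, mu], theta = [F0, kappa, d]."""
    F0, kappa, d = theta
    F_hat = F0 + kappa * xi[0] + d * xi[1]         # model normal force, Eq. (2)

    # --- Estimator 1: robot states + friction coefficient, Eqs. (5)-(7) ---
    A = np.eye(5)
    A[0, 1] = dt
    A[2, 3] = dt
    A[3, 4] = dt * F_hat / m_t                     # linearization, Eq. (6)
    B = np.zeros((5, 2)); B[1, 0] = dt / m_n; B[3, 1] = dt / m_t
    C = np.zeros((2, 5)); C[0, 0] = 1.0; C[1, 2] = 1.0

    xi_pred = A @ xi + B @ u                       # state prediction
    P_pred = A @ P_xi @ A.T + Q_xi
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R_xi)
    xi = xi_pred + K @ (eta - C @ xi_pred)         # update, Eq. (7)
    P_xi = (np.eye(5) - K @ C) @ P_pred

    # --- Estimator 2: viscoelastic parameters, Eqs. (8)-(10) ---
    C_th = np.array([1.0, xi[0], xi[1]])           # output matrix, Eq. (10)
    P_theta = P_theta + Q_theta                    # random-walk prediction
    K_th = P_theta @ C_th / (C_th @ P_theta @ C_th + R_theta)
    theta = theta + K_th * (F_meas - C_th @ theta) # update, Eq. (9)
    P_theta = (np.eye(3) - np.outer(K_th, C_th)) @ P_theta
    return xi, P_xi, theta, P_theta
```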
D. Interaction control
To enable the robot to smoothly track a predefined trajectory $r = [x_{\perp r}, x_{\parallel r}]^T$ during the interaction with the environment, an interaction controller using the estimated mechanical parameters is defined through
$$u_k = \iota_k + \varphi_k. \tag{11}$$
The feedforward component $\iota$ compensates for the interaction force using the predictive model (2). It is updated recursively with the estimated mechanical properties according to
$$\iota_k = -\begin{bmatrix} \hat{F}_{\perp k} \\ \hat{F}_{\parallel k} \end{bmatrix}. \tag{12}$$
The feedback component to track the target trajectory is defined as
$$\varphi_k = -K_P e_k - K_D \dot{e}_k \tag{13}$$
with the error $e_k = x_k - r_k$ and control gains $K_P$ and $K_D$.
To avoid overloading while in contact with a stiff surface, the control input is saturated:
$$\tilde{u}_k = \mathrm{sat}_M(u_k) \quad \text{with} \quad \mathrm{sat}_M(s) = \begin{cases} s & |s| \le M \\ -M & s < -M < 0 \\ M & s > M > 0. \end{cases} \tag{14}$$
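A compact sketch of the controller (11)-(14) follows, combining the model-based feedforward, PD feedback and elementwise saturation; the gains and names are illustrative, not the authors' code.

```python
import numpy as np

def control_input(xi_hat, theta_hat, r, rdot, Kp, Kd, M_sat):
    """Interaction controller of Eqs. (11)-(14)."""
    F0, kappa, d = theta_hat
    x = np.array([xi_hat[0], xi_hat[2]])           # [normal, tangential] position
    xdot = np.array([xi_hat[1], xi_hat[3]])
    F_n_hat = F0 + kappa * xi_hat[0] + d * xi_hat[1]
    F_hat = np.array([F_n_hat, xi_hat[4] * F_n_hat])
    iota = -F_hat                                  # feedforward, Eq. (12)
    phi = -Kp * (x - r) - Kd * (xdot - rdot)       # PD feedback, Eq. (13)
    return np.clip(iota + phi, -M_sat, M_sat)      # saturation, Eq. (14)
```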
III. VALIDATION OF MECHANICAL PROPERTIES' ESTIMATION
A. Simulation
We first test the designed estimator by simulating a robot interacting with different environments. A desired trajectory was designed as
$$r = \begin{bmatrix} x_{\perp r} \\ x_{\parallel r} \end{bmatrix} = \begin{bmatrix} 0.012 \sin(15t) \\ 0.01\, t \end{bmatrix} \text{m}, \quad t \in [0, 20]\ \text{s}. \tag{15}$$
A sinusoidal movement was used in the normal direction; it satisfies the persistent excitation condition [37], thus ensuring that the estimator has suitable information to identify the viscoelastic parameters. The amplitude and frequency were adjusted according to the allowed surface deformation. In the tangential direction, sliding at constant speed was used to yield a homogeneous lateral contact. Two objects with different mechanical properties were considered: a stiff-and-smooth surface with $\{F_0 = 1\ \text{N},\ \kappa = 1000\ \text{N/m},\ d = 10\ \text{Ns/m},\ \mu = 0.5\}$ and a soft-and-rough surface with $\{F_0 = 1\ \text{N},\ \kappa = 500\ \text{N/m},\ d = 30\ \text{Ns/m},\ \mu = 1.25\}$. The control and estimation parameters used in the simulations were $\{K_P = 1000\ \text{kg/s}^2,\ K_D = 200\ \text{kg/s},\ P_{\xi,0} = 10\, I_5,\ P_{\theta,0} = 5\, I_3,\ \delta = 0.001\ \text{s}\}$, where $I_5, I_3$ are identity matrices, with sensory noise covariance $R = 4 \times 10^{-4}$ and process noise covariance $Q = 2.5 \times 10^{-3}\, I_5$. Fig. 3a shows that the estimator identifies position and velocity close to the real values even with large measurement noise. The estimated kinematic values were then fed back to the controller. As a result, the robot could track the target positions during the interaction in both environments. The estimated mechanical properties of the objects are shown in Fig. 3b. The estimation stabilized after an approximately 2 s settling period to values close to the ground truth (shown in Table I) for all mechanical properties in both the stiff and the compliant environment. This shows that the mechanical properties can be estimated together with the robot's states for different objects. Note that the coefficient of restitution was not estimated here, as it is not involved in the continuous interaction and is computed directly from (3).
B. Experimental validation
To validate the designed estimator with real objects and to collect data for object recognition, the HMan robot [38] shown in Fig. 4a used its finger to interact with objects while estimating their mechanical properties. A six-axis force sensor (SI-25-0.25; ATI Industrial Automation) was mounted between the tip and the base of the robot's finger, shown in Fig. 4b, to measure the interaction forces. Note that no tangential force measurement was required for real-time mechanical parameter estimation.
The robot interacted with the 20 objects shown in Fig. 4c. The estimator was implemented on the robot to identify the mechanical properties of tested objects. Three actions were implemented to haptically explore the objects:
• Tapping: The robot made a first contact with the objects in the normal direction to estimate the surface's elasticity.
• Indentation: The robot pressed its finger on the objects in the normal direction with the desired trajectory $x_{\perp r} = 0.01 \sin(8\tau) + 0.01$ m, $\tau \in (0, 20]$ s, to estimate the surface impedance.
• Sliding: The robot slid its finger in the tangential direction along the object's surface at 0.04 m/s while applying a 4 N force in the normal direction (a sketch of these reference trajectories follows below).
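The following sketch generates the corresponding reference trajectories. The tapping profile is our own simplified stand-in, since only the contact event matters for (3), and the 4 N normal force regulation during sliding is not modelled here.

```python
import numpy as np

def desired_trajectory(action, t):
    """Desired fingertip reference [x_n_r, x_t_r] in metres for the
    three exploration actions of Sec. III-B."""
    if action == "indentation":      # sinusoidal pressing, normal direction
        return np.array([0.01 * np.sin(8 * t) + 0.01, 0.0])
    if action == "sliding":          # constant-speed lateral stroke
        return np.array([0.0, 0.04 * t])
    return np.array([0.0, 0.0])      # tapping: approach handled separately
```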
The estimation was validated through 25 trials for each pair of action and object. The resulting estimates of the coefficient of restitution are listed in Table II for four representative objects, while Fig. 5 shows the stiffness and friction coefficient estimates for some example trials. These results show that the estimates converged and resulted in unique values for the different objects.
IV. OBJECTS' RECOGNITION
Object recognition was performed using the experimental data of Section III-B. The estimated coefficient of restitution was directly used as a feature. The mean values of the estimated stiffness, viscosity and friction coefficient over the last 2 s of interaction were used as steady-state values for additional features.
To compare the object recognition enabled by the mechanical property features with that of previous methods used in the literature, 35 statistical features were extracted from the raw force data. These comprised the mean and maximum values and the standard deviation of the interaction force in each direction, as well as of its magnitude. In addition, the value of the normal interaction force at first contact was used as another feature (referred to as the "tap peak"). The frequency spectrum of the force in both directions was obtained using a fast Fourier transform (FFT) and averaged over four frequency bands: [0, 35], [36, 65], [66, 100] and [101, 500] Hz, where these intervals were identified in a preliminary data examination to characterise the interaction. The mean values of the vibration amplitude in these frequency bands were also used as features.
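A minimal sketch of the band-averaged vibration features follows; the sampling rate `fs` and the signal handling are assumptions, and only the band definitions come from the text.

```python
import numpy as np

def band_features(force, fs, bands=((0, 35), (36, 65), (66, 100), (101, 500))):
    """Mean vibration amplitude of a force signal in the four frequency
    bands used for the statistical feature set (one value per band)."""
    spectrum = np.abs(np.fft.rfft(force))
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    return [spectrum[(freqs >= lo) & (freqs <= hi)].mean() for lo, hi in bands]
```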
To avoid overfitting, these statistical features were ranked using a feature selection method. The Chi-square test was applied since it is commonly used to evaluate features by testing their independence from the class label. Finally, five feature sets were formed to recognize objects, based on the mechanical property features, the statistical features and the empirical mechanical property features, as shown in Table III. These feature sets were used to evaluate their performance in object recognition using supervised and unsupervised learning methods, as described in Table IV. The Naive Bayes classifier was selected as it exhibited superior performance compared to other classifiers for supervised learning. Gaussian mixture models (GMMs) were used to investigate clustering with unknown labels. The clustering results were evaluated by comparing them with the known labels using the normalised mutual information defined as:
$$\mathrm{NMI} = \frac{2\, MI(C; L)}{H(C) + H(L)} \tag{16}$$
where $MI(C; L)$ is the mutual information between a set of clustering results $C = \{c_1, c_2, \ldots, c_N\}$ and known labels $L = \{l_1, l_2, \ldots, l_N\}$:
$$MI(C; L) = \sum_i \sum_j p(c_i \cap l_j) \log \frac{p(c_i \cap l_j)}{p(c_i) \cdot p(l_j)} \tag{17}$$
and $H(\cdot)$ is the entropy:
$$H(X) = -\sum_{x \in X} p(x) \log p(x). \tag{18}$$
NMI evaluates how random the generated clusters are with respect to the known labels, in a range of [0, 1], where 1 means the clusters are perfectly generated according to the known labels and 0 that they are generated randomly.
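For reference, Eqs. (16)-(18) can be computed from empirical frequencies as in the following sketch; natural logarithms are used, as the base cancels in the ratio.

```python
import numpy as np

def nmi(clusters, labels):
    """Normalised mutual information of Eqs. (16)-(18), from the
    empirical joint distribution of cluster and label assignments."""
    clusters, labels = np.asarray(clusters), np.asarray(labels)
    mi, hc, hl = 0.0, 0.0, 0.0
    for c in np.unique(clusters):
        pc = np.mean(clusters == c)
        hc -= pc * np.log(pc)                     # H(C), Eq. (18)
        for l in np.unique(labels):
            pj = np.mean((clusters == c) & (labels == l))
            pl = np.mean(labels == l)
            if pj > 0:
                mi += pj * np.log(pj / (pc * pl)) # MI(C;L), Eq. (17)
    for l in np.unique(labels):
        pl = np.mean(labels == l)
        hl -= pl * np.log(pl)                     # H(L), Eq. (18)
    return 2 * mi / (hc + hl)                     # Eq. (16)
```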
A. Classification with mechanical properties
To understand how each of the estimated mechanical properties impacts object classification, classification using a single feature was first performed; other features were then gradually added to the classifier. The classification was evaluated through four-fold cross-validation using a 3:1 train:test ratio with 100 repetitions. Fig. 6 shows the object recognition confusion matrices using the friction coefficient (a), stiffness and viscosity (b), the coefficient of restitution (c) and all estimated mechanical properties (d). It can be seen that by using only friction or the coefficient of restitution, the classifier cannot recognise all the objects, resulting in a recognition rate lower than 50%. These features recognise a group of hard objects (classes 1-10) better than the soft objects (classes 11-20), as shown in Figs. 6a,c. Using stiffness and viscosity increased the recognition rate to 74.45%, but could not differentiate the hard objects (Fig. 6b).
By using all four mechanical properties in the classifier, the recognition rate increased to 98.18% (Fig. 6d). The resulting confusion matrix exhibits almost perfect recognition, with a rate over 90% for each object. There still is some confusion between pairs of object classes, but for each object the misclassification rate is lower than 0.05%, which can be considered negligible. These results demonstrate the advantage of using the combination of different mechanical properties to classify various objects.
B. Objects' classification with mechanical properties vs. statistical features
To examine the role of the estimated mechanical properties in object classification compared to other sets of features, the classifier was used to find the recognition rates from the five sets of features described in Table III: mechanical property features (MP), statistical features (SF and CSSF) and empirical mechanical property features (EMP1 and EMP2). These object classifications were evaluated by four-fold cross-validation using 100 repetitions. Fig. 7a shows that using the mechanical properties as features resulted in a recognition rate of 98.18 ± 0.424%. On the other hand, the statistical features with and without feature selection resulted in recognition rates of 92.2 ± 0.60% and 89.7 ± 3.20%, respectively. Lastly, the features used in [7], corresponding to EMP1, provide 77.5 ± 5.07%, and the features used in [39] and [15], corresponding to EMP2, yield 82.9 ± 0.91%. These results show that the mechanical properties provided the highest recognition rate while using a lower number of features and without needing tangential force sensing.
C. Objects' clustering using mechanical properties or statistical features
To study the benefit of using the coefficient of restitution, stiffness, viscosity and friction coefficient together in an unsupervised learning method, GMM clustering was used to perform a clustering task with the same five sets of features as in Section IV-B. We assumed that each cluster had its own diagonal covariance matrix, and the number of clusters was set to 20, i.e. the number of tested objects. This clustering task was performed and evaluated by NMI over 40 repetitions for each set of features.
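A minimal sketch of the evaluation pipeline of Table IV using scikit-learn follows; the Gaussian Naive Bayes variant and the random seed are our assumptions, since the paper does not specify them.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score
from sklearn.metrics import normalized_mutual_info_score

def evaluate_features(X, y):
    """X: (n_trials, n_features) feature matrix, y: object labels 1..20.
    Returns 4-fold Naive Bayes accuracy and GMM clustering NMI."""
    acc = cross_val_score(GaussianNB(), X, y, cv=4).mean()
    gmm = GaussianMixture(n_components=20, covariance_type="diag",
                          random_state=0)        # diagonal covariances, 20 clusters
    clusters = gmm.fit_predict(X)
    return acc, normalized_mutual_info_score(y, clusters)
```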
The evaluation of the clustering results using NMI is shown in Fig. 7b. Using the mechanical features as input data in the clustering task gave NMI values of 0.851 ± 0.03, which is similar to what the SF and CSSF provided at 0.863 ± 0.016 and 0.856 ± 0.018, respectively (p>0.05). However, the NMI results obtained using the MP were found to be significantly higher than the NMI results obtained using EMP1 and EMP2 (p<0.05). These results suggest that using four mechanical properties could provide the same results as 35 statistical features. In addition, it could also outperform the other features representing the empirical mechanical properties used in [7], [39] and [15].

Fig. 7: Comparison of classification (a) and clustering using normalized mutual information (NMI) (b) with different feature sets described in Table III.
V. DISCUSSION
This paper introduced an object recognition framework based on the estimation of mechanical properties with a dual extended Kalman filter. This online identification extends [34] and stably estimates the coefficient of restitution, stiffness, viscosity and friction parameters. The viability of this method was demonstrated in simulations and experiments.
The classification performance was evaluated with 20 real-world objects. Using the four representative mechanical parameters, a recognition rate of 98.18 ± 0.424 % could be achieved using supervised learning, and clustering exhibited a normalized mutual information of 0.851 ± 0.03. Using only four mechanical properties resulted in a better classification and similar clustering as with 35 statistical features, suggesting that mechanical features entail a more compact and accurate representation than statistical features.
Note that the coefficient of restitution, viscoelastic and friction parameters were all required to distinguish the objects. In particular, including the coefficient of restitution largely improved the recognition rate compared to using only viscoelastic parameters. For example, stiffness could not distinguish steel from wood, as both are hard materials, but they had different impact properties as measured by the coefficient of restitution.
The intrinsic mechanical properties identified in our scheme provided better and more consistent results than the empirical mechanical properties used in previous works [7], [15], [39]. This illustrates the limitations of using empirical features to recognize objects, which may depend on the specific action used. For instance, the surface texture measure of [15] was defined as the variance of the interaction force in the normal direction while a robot's finger slid on the object's surface, which may depend on the object's pose and the robot's interaction, leading to inconsistent estimation results.
In summary, this work emphasized the role of mechanical properties in haptic exploration, and how they can be used to reliably recognise different objects. The results demonstrated the superiority of mechanical properties-based object recognition, yielding more reliable recognition than empirical properties and requiring far fewer features than approaches based on statistical features. Moreover, using this intrinsic object representation makes the framework flexible with respect to the classification algorithm. While the presented system could successfully recognize objects during haptic exploration, considering weight and inertial parameters would enable extending our framework from haptic exploration to full object manipulation with transport.
Fig. 1: Diagram of the object recognition process. The end-effector force and position measured during interaction are utilized to identify mechanical properties using an estimator. The estimated mechanical features are then used to recognize objects and adjust the motor command with the controller.

Fig. 2: Diagram of the dual extended Kalman filter combining two estimators. At every time step, Estimator 1 identifies the robot states and friction coefficient based on the measured position and estimated interaction force. Estimator 2 identifies the object's elastic-viscosity parameters and interaction force based on the measured force and estimated robot state.

Fig. 3: Estimator in simulation. (a) Filtering of the robot position in the normal (top) and tangential (bottom) directions. (b) Identified mechanical properties of the two environments. From top to bottom: feedforward force, stiffness, viscosity and friction coefficient.

Fig. 4: Experimental setup. (a) HMan robot with a sensorized finger and an object to examine. A wooden platform frame is used to attach various objects for the robot to explore. The finger is driven by two motors in the normal and tangential directions to the object's surface, where the force sensor is facing it. (b) Diagram of the robot's finger interacting with an object's surface. (c) Objects used in the experiment.

Fig. 5: Examples of estimated mechanical property values as a function of time. (a) Stiffness values obtained from three soft objects. (b) Friction coefficient values obtained from three hard objects.

Fig. 6: Confusion matrices obtained by using (a) the friction coefficient, (b) stiffness and viscosity, (c) the coefficient of restitution, (d) all mechanical properties as features. Blue corresponds to correctly classified objects, red to incorrectly classified objects.
Fig. 8 shows the classification results from all combinations of mechanical properties used as features in the Naive Bayes classifier. Starting from a single feature up to four features, the recognition rate increased as the number of mechanical properties increased. The highest value was achieved by using all four estimated mechanical properties.
Fig. 8: Classification results with different combinations of the mechanical features. ψ is the coefficient of restitution, κ is stiffness, d is viscosity and μ the friction coefficient.
This work was supported in part by the EC grants FETOPEN 829186 PH-CODING and ITN PEOPLE 861166 INTUITIVE. The authors are with the Department of Bioengineering, Imperial College of Science, Technology and Medicine, SW7 2AZ London, UK. Email: {pu18, xcheng4, j.eden, e.burdet}@imperial.ac.uk
TABLE I: Average values of the mechanical parameters in the interval [10, 20] s obtained by the estimator in simulation. Estimated values are shown with ground-truth values in parentheses.

Object surface   | κ [N/m]       | d [Ns/m]   | μ
stiff and smooth | 953.19 (1000) | 7.97 (10)  | 0.49 (0.5)
soft and rough   | 487.20 (500)  | 29.04 (30) | 1.24 (1.25)
TABLE II: Estimated coefficient of restitution.

Objects                      | ψ
Sponge, polyethylene surface | 0.118 ± 0.0026
Wool hat                     | 0.441 ± 0.0558
Acrylic                      | 0.299 ± 0.0258
Steel                        | 0.632 ± 0.0401
TABLE III: Feature sets used to recognize objects.

Denomination                                   | Features
MP: Mechanical properties                      | Estimated stiffness, viscosity, friction coefficient, coefficient of restitution.
SF: Statistical features                       | 35 statistical features.
CSSF: Chi-square test statistical features     | The first 4 statistical features ranked by the Chi-square test: tap peak, mean F∥ obtained by sliding, std of the magnitude obtained by pressing, mean F⊥ obtained by pressing.
EMP1: Empirical mechanical properties feature 1 | 1 feature for stiffness, 3 features for surface texture [7].
EMP2: Empirical mechanical properties feature 2 | 2 features for compliance and texture [15]; 2 features for surface roughness and fineness [39].
TABLE IV: Learning methods, feature selection algorithms and datasets used for classification. The abbreviations are defined in Table III.

Method       | Algorithm   | Dataset
Supervised   | Naive Bayes | MP
Supervised   | Naive Bayes | MP, SF, CSSF, EMP1, EMP2
Unsupervised | GMMs        | as in supervised
[1] S. Luo, J. Bimbo, R. Dahiya, and H. Liu, "Robotic tactile perception of object properties: A review," Mechatronics, vol. 48, pp. 54-67, 2017.
[2] P. K. Allen and K. S. Roberts, "Haptic object recognition using a multi-fingered dextrous hand," in Proceedings, 1989 International Conference on Robotics and Automation, 1989, pp. 342-347 vol. 1.
[3] U. Martinez-Hernandez, T. J. Dodd, and T. J. Prescott, "Feeling the shape: Active exploration behaviors for object recognition with a robotic hand," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 12, pp. 2339-2348, 2018.
[4] Z. Pezzementi, E. Plaku, C. Reyda, and G. D. Hager, "Tactile-object recognition from appearance information," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 473-487, 2011.
[5] J. Hoelscher, J. Peters, and T. Hermans, "Evaluation of tactile feature extraction for interactive object recognition," in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015, pp. 310-317.
[6] P. Dallaire, P. Giguère, D. Émond, and B. Chaib-draa, "Autonomous tactile perception: A combined improved sensing and bayesian nonparametric approach," Robotics and Autonomous Systems, vol. 62, no. 4, pp. 422-435, 2014.
[7] M. Kaboli, K. Yao, D. Feng, and G. Cheng, "Tactile-based active object discrimination and target object search in an unknown workspace," Auton. Robots, vol. 43, no. 1, pp. 123-152, Jan. 2019.
[8] W. J. Stronge, Impact Mechanics, 2nd ed. Cambridge University Press, 2018.
[9] A. Rebguns, D. Ford, and I. Fasel, "Infomax control for acoustic exploration of objects by a mobile robot," in Proceedings of the 15th AAAI Conference on Lifelong Learning, ser. AAAIWS'11-15, AAAI Press, 2011, pp. 22-28.
[10] M. Neumann, K. Nottensteiner, I. Kossyk, and Z.-C. Marton, "Material classification through knocking and grasping by learning of structure-borne sound under changing acoustic conditions," in 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), 2018, pp. 1269-1275.
[11] S. Luo, L. Zhu, K. Althoefer, and H. Liu, "Knock-knock: Acoustic object recognition by using stacked denoising autoencoders," Neurocomputing, vol. 267, pp. 18-24, 2017.
[12] N. Roy, G. Dudek, and P. Freedman, "Surface sensing and classification for efficient mobile robot navigation," in Proceedings of IEEE International Conference on Robotics and Automation, vol. 2, 1996, pp. 1224-1228.
[13] P. Wisanuvej, J. Liu, C.-M. Chen, and G.-Z. Yang, "Blind collision detection and obstacle characterisation using a compliant robotic arm," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 2249-2254.
[14] J. Windau and W.-M. Shen, "An inertia-based surface identification system," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 2330-2335.
[15] D. Xu, G. E. Loeb, and J. A. Fishel, "Tactile identification of objects using bayesian exploration," in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 3056-3061.
[16] A. J. Spiers, M. V. Liarokapis, B. Calli, and A. M. Dollar, "Single-grasp object classification and feature extraction with simple robot hands and tactile sensors," IEEE Transactions on Haptics, vol. 9, no. 2, pp. 207-220, 2016.
[17] Z. Su, J. Fishel, T. Yamamoto, and G. Loeb, "Use of tactile feedback to control exploratory movements to characterize object compliance," Frontiers in Neurorobotics, vol. 6, p. 7, 2012.
[18] M. Bednarek, P. Kicki, J. Bednarek, and K. Walas, "Gaining a sense of touch object stiffness estimation using a soft gripper and neural networks," Electronics, vol. 10, no. 1, 2021.
[19] S. Chitta, J. Sturm, M. Piccoli, and W. Burgard, "Tactile sensing for mobile manipulation," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 558-568, 2011.
[20] L. Love and W. Book, "Environment estimation for enhanced impedance control," in Proceedings of 1995 IEEE International Conference on Robotics and Automation, vol. 2, 1995, pp. 1854-1859.
[21] A. Ren, C. Qi, F. Gao, X. Zhao, and Q. Sun, "Contact stiffness identification with delay and structural compensation for hardware-in-the-loop contact simulator," Journal of Intelligent and Robotic Systems, vol. 86, pp. 1-9, Jun. 2017.
[22] R. Rossi, L. Fossali, A. Novazzi, L. Bascetta, and P. Rocco, "Implicit force control for an industrial robot based on stiffness estimation and compensation during motion," in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 1138-1145.
[23] P. Chalasani, L. Wang, R. Yasin, N. Simaan, and R. H. Taylor, "Preliminary evaluation of an online estimation method for organ geometry and tissue stiffness," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1816-1823, 2018.
[24] K. Yane and T. Nozaki, "Recognition of environmental impedance configuration by neural network using time-series contact state response," in 2022 IEEE 17th International Conference on Advanced Motion Control (AMC), 2022, pp. 426-431.
[25] S. Huang and H. Wu, "Texture recognition based on perception data from a bionic tactile sensor," Sensors, vol. 21, no. 15, 2021.
[26] V. Chu, I. McMahon, L. Riano, C. G. McDonald, Q. He, J. Martinez Perez-Tejada, M. Arrigo, T. Darrell, and K. J. Kuchenbecker, "Robotic learning of haptic adjectives through physical interaction," Robotics and Autonomous Systems, vol. 63, pp. 279-292, 2015.
[27] J. Sinapov, V. Sukhoy, R. Sahai, and A. Stoytchev, "Vibrotactile recognition and categorization of surfaces by a humanoid robot," IEEE Transactions on Robotics, vol. 27, no. 3, pp. 488-497, 2011.
[28] B. Sundaralingam and T. Hermans, "In-hand object-dynamics inference using tactile fingertips," IEEE Transactions on Robotics, vol. 37, no. 4, pp. 1115-1126, 2021.
[29] Z. Su, K. Hausman, Y. Chebotar, A. Molchanov, G. E. Loeb, G. S. Sukhatme, and S. Schaal, "Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor," in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015, pp. 297-303.
[30] T. Sun, J. Back, and H. Liu, "Combining contact forces and geometry to recognize objects during surface haptic exploration," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2509-2514, 2018.
[31] C. Canudas de Wit, H. Olsson, K. Astrom, and P. Lischinsky, "A new model for control of systems with friction," IEEE Transactions on Automatic Control, vol. 40, no. 3, pp. 419-425, 1995.
[32] X. Song, H. Liu, J. Bimbo, K. Althoefer, and L. D. Seneviratne, "Object surface classification based on friction properties for intelligent robotic hands," in World Automation Congress 2012, 2012, pp. 1-5.
[33] H. Liu, X. Song, J. Bimbo, L. Seneviratne, and K. Althoefer, "Surface material recognition through haptic exploration using an intelligent contact sensing finger," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 52-57.
[34] Y. Li, G. Ganesh, N. Jarrassé, S. Haddadin, A. Albu-Schaeffer, and E. Burdet, "Force, impedance, and trajectory learning for contact tooling and haptic identification," IEEE Transactions on Robotics, vol. 34, no. 5, pp. 1170-1182, 2018.
[35] S. D. Poisson, Mechanics. Longmans, London, 1817.
[36] E. A. Wan and A. T. Nelson, "Dual extended Kalman filter methods," Kalman Filtering and Neural Networks, vol. 123, 2001.
[37] P. A. Ioannou and J. Sun, Robust Adaptive Control. Courier Corporation, 2016.
[38] D. Campolo, P. Tommasino, K. Gamage, J. Klein, C. M. Hughes, and L. Masia, "H-man: A planar, h-shape cabled differential robotic manipulandum for experiments on human motor control," Journal of Neuroscience Methods, vol. 235, pp. 285-297, 2014.
[39] J. Fishel and G. Loeb, "Bayesian exploration for intelligent identification of textures," Frontiers in Neurorobotics, vol. 6, p. 4, 2012.
| [] |
[
"Random Multilinear Maps and the Erdős Box Problem",
"Random Multilinear Maps and the Erdős Box Problem"
] | [
"David Conlon ",
"Cosmin Pohoata ",
"Dmitriy Zakharov "
] | [] | [
"DISCRETE ANALYSIS"
] | By using random multilinear maps, we provide new lower bounds for the Erdős box problem, the problem of estimating the extremal number of the complete d-partite duniform hypergraph with two vertices in each part, thereby improving on work of Gunderson, Rödl and Sidorenko. | null | [
"https://arxiv.org/pdf/2011.09024v2.pdf"
] | 227,015,681 | 2011.09024 | 0b49b09c125c837a5c50e52527d8ac96c99cd88c |
Random Multilinear Maps and the Erdős Box Problem
2021
David Conlon
Cosmin Pohoata
Dmitriy Zakharov
Random Multilinear Maps and the Erdős Box Problem
DISCRETE ANALYSIS
2021:17, 8 pp. DOI: 10.19086/da.28336. Received 19 November 2020; published 28 September 2021. Key words and phrases: Erdős box problem, extremal numbers, Zarankiewicz problem, hypergraphs.
By using random multilinear maps, we provide new lower bounds for the Erdős box problem, the problem of estimating the extremal number of the complete d-partite duniform hypergraph with two vertices in each part, thereby improving on work of Gunderson, Rödl and Sidorenko.
Introduction
Writing $K^{(d)}_{s_1,\ldots,s_d}$ for the complete d-partite d-uniform hypergraph with parts of orders $s_1, \ldots, s_d$, the extremal number $\mathrm{ex}_d(n, K^{(d)}_{s_1,\ldots,s_d})$ is the maximum number of edges in a d-uniform hypergraph on n vertices containing no copy of $K^{(d)}_{s_1,\ldots,s_d}$. Already for d = 2, the problem of determining these extremal numbers is one of the most famous in combinatorics, known as the Zarankiewicz problem. The classic result on this problem, due to Kővári, Sós and Turán [12], says that $\mathrm{ex}_2(n, K_{s_1,s_2}) = O(n^{2 - 1/s_1})$ for all $s_1 \le s_2$. However, this upper bound has only been matched by a construction with $\Omega(n^{2-1/s_1})$ edges when $s_2 > (s_1 - 1)!$, a result which, in this concise form, is due to Alon, Kollár, Rónyai and Szabó [1, 11], but builds on a long history of earlier work on special cases (see, for example, the comprehensive survey [8]).
Generalizing the Kővári-Sós-Turán bound, Erdős [6] showed that
$$\mathrm{ex}_d(n, K^{(d)}_{s_1,\ldots,s_d}) = O\!\left(n^{d - \frac{1}{s_1 s_2 \cdots s_{d-1}}}\right) \tag{1}$$
for all $s_1 \le s_2 \le \ldots \le s_d$. An analogue of the Alon-Kollár-Rónyai-Szabó result, due to Ma, Yuan and Zhang [14], is also known in this context and says that (1) is tight up to the constant provided that $s_d$ is sufficiently large in terms of $s_1, \ldots, s_{d-1}$. The proof of this result is based on an application of the random algebraic method, introduced by Bukh [2] and further developed in [3] and [4].
Our concern then will be with determining the value of $\mathrm{ex}_d(n, K^{(d)}_{s_1,\ldots,s_d})$ in the particular case when $s_1 = \cdots = s_d = 2$. In the literature, this problem, originating in the work of Erdős [6], is sometimes referred to as the box problem, owing to a simple reformulation in terms of finding the largest subset of the grid $\{1, 2, \ldots, n\}^d$ which does not contain the vertices of a d-dimensional box (see also [10] for a connection to a problem in analysis). By (1), we have
$$\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = O\!\left(n^{d - \frac{1}{2^{d-1}}}\right). \tag{2}$$
While in the case d = 2 it has long been known that $\mathrm{ex}_2(n, K_{2,2}) = \Theta(n^{3/2})$, with a matching construction due to Klein [5] even predating the Kővári-Sós-Turán bound, there has been very little success in finding constructions matching (2) for d ≥ 3. Indeed, it is unclear whether they should even exist. For d = 3, the best available construction is due to Katz, Krop and Maggioni [10], who showed that $\mathrm{ex}_3(n, K^{(3)}_{2,2,2}) = \Omega(n^{8/3})$. For general d, there is a simple, but longstanding, lower bound
$$\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = \Omega\!\left(n^{d - \frac{d}{2^d - 1}}\right) \tag{3}$$
coming from an application of the probabilistic deletion method. Besides the Katz-Krop-Maggioni construction, the only improvement to this bound is an elegant construction of Gunderson, Rödl and Sidorenko [9], which amplified the deletion argument by introducing algebraic structure on one of the sides of the d-partition and using random hyperplanes to define the edges.

Theorem 1 (Gunderson-Rödl-Sidorenko) For any d ≥ 2, let s = s(d) be the smallest positive integer s (if it exists) such that $(sd - 1)/(2^d - 1)$ is an integer. Then
$$\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = \Omega\!\left(n^{d - \frac{d - 1/s}{2^d - 1}}\right).$$
It is easy to see that the number s = s(d) exists precisely when d and $2^d - 1$ are relatively prime, which holds, for instance, when d is a prime number or a power of 2, but does not hold for many other numbers, such as d = 6, 12, 18, 20, 21. In fact, their result fails to apply for a positive proportion of the positive integers, as may be seen by noting that if the condition $(d, 2^d - 1) = 1$ fails for a given d, then it also fails for all multiples of d.
In this paper, we improve on the lower bound from Theorem 1 by establishing the following result, whose proof refines the method from [9] by introducing algebraic structure on each side of the d-partition and using random multilinear maps to define the edges.
Theorem 2 For any d ≥ 2, let r and s be positive integers such that $d(s - 1) < (2^d - 1)r$. Then
$$\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = \Omega\!\left(n^{d - \frac{r}{s}}\right).$$
This not only improves the lower bound for the box problem provided by Theorem 1 for any d which is not a power of 2, but it also yields a gain over the probabilistic deletion bound (3) for all uniformities d. To see this, note that if d ≥ 2, then d never divides $2^d - 1$, so we may set r = 1 and $s = \lceil (2^d - 1)/d \rceil > (2^d - 1)/d$.

Corollary 1 For any d ≥ 2,
$$\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = \Omega\!\left(n^{d - \lceil \frac{2^d - 1}{d} \rceil^{-1}}\right).$$
For the reader's convenience, we include below a table comparing the bounds provided by the deletion bound (3), by Gunderson, Rödl and Sidorenko's Theorem 1 and by our Corollary 1. A number α in the d-th row of the table means that the corresponding method gives the lower bound $\mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2}) = \Omega(n^{d - 1/\alpha})$, while an empty cell in the GRS column means that the method does not apply for that value of d. In particular, we note that our method recovers both the fact that $\mathrm{ex}_2(n, K_{2,2}) = \Theta(n^{3/2})$ and the lower bound $\mathrm{ex}_3(n, K^{(3)}_{2,2,2}) = \Omega(n^{8/3})$ of Katz, Krop and Maggioni.

2 New lower bounds for the Erdős box problem
Linear algebra preliminaries
Let $V_1, \ldots, V_d$ be finite-dimensional vector spaces over the field $\mathbb{F}_q$. Following standard convention, we call a function $T : V_1 \times \cdots \times V_d \to \mathbb{F}_q$ multilinear if, for every $i \in \{1, \ldots, d\}$ and every fixed choice of $x_j \in V_j$ for each $j \ne i$, the function $T(x_1, \ldots, x_{i-1}, x, x_{i+1}, \ldots, x_d)$, considered as a function of $x \in V_i$, is linear over $\mathbb{F}_q$.
The vector space of all multilinear functions $T : V_1 \times \cdots \times V_d \to \mathbb{F}_q$ can be naturally identified with the space $V_1^* \otimes \cdots \otimes V_d^*$, where $V^*$ denotes the dual space of V. A uniformly random multilinear function $T : V_1 \times \cdots \times V_d \to \mathbb{F}_q$ is then a random element of the space $V_1^* \otimes \cdots \otimes V_d^*$, chosen according to the uniform distribution.

If, for each i, we have a subspace $U_i \subset V_i$, then we can define a restriction map $r : V_1^* \otimes \cdots \otimes V_d^* \to U_1^* \otimes \cdots \otimes U_d^*$.
We have the following simple, but important, claim about these restriction maps.
Claim 1 The restriction r(T ) of a uniformly random multilinear function T is again uniformly random.
Proof: The map r is linear and surjective and so all T ∈ U * 1 ⊗ · · · ⊗ U * d have the same number of preimages in V * 1 ⊗ · · · ⊗V * d . 2
It will also be useful to note the following simple consequence of multilinearity.
Proposition 1 Suppose that $T : V_1 \times \cdots \times V_d \to \mathbb{F}_q$ is multilinear and, for every $i = 1, \ldots, d$, there are vectors $v_i^0, v_i^1 \in V_i$ such that $T(v_1^{\varepsilon_1}, \ldots, v_d^{\varepsilon_d}) = 1$ for all $2^d$ choices of $\varepsilon_i \in \{0, 1\}$. Then, for any $u_i$ which lie in the affine hull of $v_i^0, v_i^1$ for each $i = 1, \ldots, d$, $T(u_1, \ldots, u_d) = 1$.

Proof: Write $u_i = \alpha_i^0 v_i^0 + \alpha_i^1 v_i^1$ for some $\alpha_i^0 + \alpha_i^1 = 1$.
Then, by multilinearity, we have
$$T(u_1, \ldots, u_d) = \sum_{\varepsilon_1, \ldots, \varepsilon_d \in \{0,1\}} \alpha_1^{\varepsilon_1} \cdots \alpha_d^{\varepsilon_d}\, T(v_1^{\varepsilon_1}, \ldots, v_d^{\varepsilon_d}) = \sum_{\varepsilon_1, \ldots, \varepsilon_d \in \{0,1\}} \alpha_1^{\varepsilon_1} \cdots \alpha_d^{\varepsilon_d} = (\alpha_1^0 + \alpha_1^1) \cdots (\alpha_d^0 + \alpha_d^1) = 1,$$
as required. □
Proof of Theorem 2
Fix positive integers d, r and s and let q be a large prime power. Let $V = \mathbb{F}_q^s$ and let $T_1, \ldots, T_r \in V^{*\otimes d}$ be independent uniformly random multilinear functions. Let H be the d-partite d-uniform hypergraph between d copies of V whose edge set E consists of all tuples $(v_1, \ldots, v_d) \in V^d$ such that $T_i(v_1, \ldots, v_d) = 1$ for all $i = 1, \ldots, r$. Let us estimate the expected number of edges in H.
Claim 2 $\mathbb{E}[|E|] = (q^s - 1)^d\, q^{-r} \sim q^{ds - r}$.

Proof: Note that if one of $v_1, \ldots, v_d$ is zero, then $T_i(v_1, \ldots, v_d) = 0$, so we may assume that $(v_1, \ldots, v_d)$ is one of the $(q^s - 1)^d$ remaining sequences of non-zero vectors and calculate the probability that it belongs to E. Let $U_i = \langle v_i \rangle \subset V$, a one-dimensional subspace of V. By Claim 1, the restriction $T'_i$ of $T_i$ to $U_1 \times \cdots \times U_d$ is uniformly distributed in $U_1^* \otimes \cdots \otimes U_d^*$. But the latter space is one-dimensional and so $T'_i(v_1, \ldots, v_d)$ takes the value 1 with probability $q^{-1}$. Since $T_1, \ldots, T_r$ are independent, the functions $T'_1, \ldots, T'_r$ are independent, so they are all equal to one at $(v_1, \ldots, v_d)$ with probability exactly $q^{-r}$.
□
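As a quick numerical sanity check of Claim 2 (not part of the original paper), the following Python sketch counts edges for small random bilinear forms (the case d = 2) and compares the empirical mean with $(q^s - 1)^d q^{-r}$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
q, s, d, r = 5, 2, 2, 1          # small bilinear example (d = 2)

def count_edges():
    """Count pairs (v1, v2) of nonzero vectors in F_q^s with
    T_i(v1, v2) = 1 for all i = 1, ..., r."""
    T = rng.integers(0, q, size=(r, s, s))      # r random bilinear forms
    vecs = [v for v in itertools.product(range(q), repeat=s) if any(v)]
    count = 0
    for v1 in vecs:
        for v2 in vecs:
            vals = np.einsum('kij,i,j->k', T, v1, v2) % q
            if np.all(vals == 1):
                count += 1
    return count

trials = [count_edges() for _ in range(20)]
print(np.mean(trials), (q**s - 1)**d / q**r)    # expectation is 115.2 here
```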
We now estimate the expected number of (appropriately ordered) copies of K (d) 2,...,2 in H.
Claim 3 Let F denote the family of all $(v_1^0, v_1^1, \ldots, v_d^0, v_d^1) \in V^{2d}$ where $v_j^0 \ne v_j^1$ for all j and $T_i(v_1^{\varepsilon_1}, \ldots, v_d^{\varepsilon_d}) = 1$ for all $i = 1, \ldots, r$ and all choices of $\varepsilon_1, \ldots, \varepsilon_d \in \{0, 1\}$. Then $\mathbb{E}[|F|] \sim q^{2ds - 2^d r}$.
Proof: If, for some $j = 1, \ldots, d$, the vectors $v_j^0$ and $v_j^1$ are collinear, say $v_j^1 = \lambda v_j^0$ for some $\lambda \ne 1$ (but allowing $\lambda = 0$), then
$$T(v_1^0, \ldots, v_j^1, \ldots, v_d^0) = \lambda\, T(v_1^0, \ldots, v_j^0, \ldots, v_d^0),$$
so these two numbers cannot be equal to 1 simultaneously. Therefore, we may restrict attention to only those tuples where $v_j^0$ and $v_j^1$ are linearly independent for all $j = 1, \ldots, d$.
Fix one of the $(q^s - 1)^d (q^s - q)^d$ remaining tuples $\bar{v} = (v_1^0, v_1^1, \ldots, v_d^0, v_d^1)$ and let us compute the probability that $\bar{v} \in F$. Let $U_j = \langle v_j^0, v_j^1 \rangle$ be the two-dimensional vector space spanned by $v_j^0$ and $v_j^1$. By Claim 1, the restriction $T'_i$ of $T_i$ to $U_1 \times \cdots \times U_d$ is uniformly distributed in $U_1^* \otimes \cdots \otimes U_d^*$. Moreover, the independence of $T_1, \ldots, T_r$ implies that $T'_1, \ldots, T'_r$ are also independent. Now observe that the set of $2^d$ tensors
$$\{v_1^{\varepsilon_1} \otimes \cdots \otimes v_d^{\varepsilon_d} : \varepsilon_j \in \{0, 1\}\}$$
forms a basis for the space $U_1 \otimes \cdots \otimes U_d$. Therefore, there exists a unique $R \in U_1^* \otimes \cdots \otimes U_d^*$ such that $R(v_1^{\varepsilon_1}, \ldots, v_d^{\varepsilon_d}) = 1$ for all $\varepsilon_j \in \{0, 1\}$. Moreover, since there are $q^{2^d}$ different choices for the values of a function in $U_1^* \otimes \cdots \otimes U_d^*$ at the $(v_1^{\varepsilon_1}, \ldots, v_d^{\varepsilon_d})$ and each such choice determines a unique function, the probability that $T'_i = R$ is $q^{-2^d}$. Since $\bar{v} \in F$ if and only if $T'_i = R$ for all $i = 1, \ldots, r$, the independence of the $T'_i$ implies that the probability that $\bar{v} \in F$ is $q^{-2^d r}$. Thus,
$$\mathbb{E}[|F|] = (q^s - 1)^d (q^s - q)^d\, q^{-2^d r} \sim q^{2ds - 2^d r},$$
as required. □
The next step is crucial.
Lemma 1 Let B be the family of all $(v_1, \ldots, v_d) \in E$ for which there exists $(v'_1, \ldots, v'_d) \in V^d$ such that $(v_1, v'_1, \ldots, v_d, v'_d) \in F$. Then $\mathbb{E}[|B|] \le (1 + o(1))\, q^{-d}\, \mathbb{E}[|F|]$.
Proof: Given a sequence of affine lines $l_1, \ldots, l_d \subset V$, denote by $P(l_1, \ldots, l_d)$ the set of all sequences $(x_1, x'_1, \ldots, x_d, x'_d) \in V^{2d}$ such that $x_j$ and $x'_j$ are distinct and lie on $l_j$ for all j. Clearly,
$$|P(l_1, \ldots, l_d)| = q^d (q - 1)^d \sim q^{2d}.$$
Note that:
1. If $(l_1, \ldots, l_d) \ne (l'_1, \ldots, l'_d)$, then $P(l_1, \ldots, l_d) \cap P(l'_1, \ldots, l'_d) = \emptyset$, since the lines $l_1, \ldots, l_d$ are uniquely determined by any member of $P(l_1, \ldots, l_d)$.
2. If $P(l_1, \ldots, l_d) \cap F \ne \emptyset$, then $P(l_1, \ldots, l_d) \subset F$ by Proposition 1.
3. Any $\bar{v} \in F$ is contained in $P(l_1, \ldots, l_d)$ for some $l_1, \ldots, l_d$.
Denote by L the family of all tuples $(l_1, \ldots, l_d)$ such that $P(l_1, \ldots, l_d) \cap F \ne \emptyset$. By the observations above, we have $|L|\, q^d (q - 1)^d = |F|$.
On the other hand, it is clear that
$$B \subset \bigcup_{(l_1, \ldots, l_d) \in L} l_1 \times l_2 \times \cdots \times l_d,$$
so that $|B| \le q^d |L| = (q - 1)^{-d} |F|$. Taking expectations, we obtain the required result. □
By definition, the subgraph H′ of H with edge set E \ B is $K^{(d)}_{2,\ldots,2}$-free. By Lemma 1 and Claim 3,
$$\mathbb{E}[|B|] \le (1 + o(1))\, q^{-d}\, \mathbb{E}[|F|] = (1 + o(1))\, q^{2ds - 2^d r - d}.$$
On the other hand, by Claim 2, $\mathbb{E}[|E|] \sim q^{ds - r}$. By the assumption on r and s from the statement of Theorem 2, we have $2ds - 2^d r - d < ds - r$, which immediately implies that $\mathbb{E}[|B|] = o(\mathbb{E}[|E|])$. Therefore, there must exist a $K^{(d)}_{2,\ldots,2}$-free hypergraph H′ on a ground set of size $n = dq^s$ with edge set E \ B satisfying $|E \setminus B| = \Omega(q^{ds - r}) = \Omega(n^{d - r/s})$, completing the proof of Theorem 2.
By a result of Ferber, McKinley and Samotij [7, Theorem 9], any polynomial gain over the deletion lower bound for the extremal number of a uniform hypergraph H implies an optimal counting result for the number of H-free graphs on n vertices. In combination with Corollary 1, this implies the following result, generalizing a celebrated theorem of Kleitman and Winston [13] on the d = 2 case.

Corollary 2 For any d ≥ 2, let $\mathcal{F}_n(K^{(d)}_{2,\ldots,2})$ be the set of all (labeled) $K^{(d)}_{2,\ldots,2}$-free d-uniform hypergraphs with vertex set $\{1, \ldots, n\}$. Then there exists a positive constant C depending only on d and an infinite sequence of positive integers n for which
$$|\mathcal{F}_n(K^{(d)}_{2,\ldots,2})| \le 2^{C \cdot \mathrm{ex}_d(n, K^{(d)}_{2,\ldots,2})}.$$

d  | Deletion  | GRS       | Corollary 1
2  | 1.50      | 2.00      | 2.00
3  | 2.33      | 2.50      | 3.00
4  | 3.75      | 4.00      | 4.00
5  | 6.20      | 6.25      | 7.00
6  | 10.50     |           | 11.00
7  | 18.14     | 18.16     | 19.00
8  | 31.87     | 32.00     | 32.00
9  | 56.77     | 56.80     | 57.00
10 | 102.30    | 102.33    | 103.00
11 | 186.09    | 186.10    | 187.00
12 | 341.25    |           | 342.00
13 | 630.07    | 630.08    | 631.00
14 | 1170.21   | 1170.22   | 1171.00
15 | 2184.46   | 2184.50   | 2185.00
16 | 4095.93   | 4096.00   | 4096.00
17 | 7710.05   | 7710.06   | 7711.00
18 | 14563.50  |           | 14564.00
19 | 27594.05  | 27594.05  | 27595.00
20 | 52428.75  |           | 52429.00
21 | 99864.33  |           | 99865.00
22 | 190650.13 | 190650.14 | 190651.00
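The table values can be reproduced with the short sketch below, which computes α for each of the three bounds; the helper name is ours, not the authors'.

```python
import math

def alpha_values(d):
    """One row of the comparison table: each method gives the bound
    ex_d(n, K_{2,...,2}) = Omega(n^{d - 1/alpha})."""
    m = 2**d - 1
    deletion = m / d                                   # deletion bound (3)
    grs = None
    if math.gcd(d, m) == 1:                            # s(d) exists
        s = next(s for s in range(1, m + 1) if (s * d - 1) % m == 0)
        grs = m / (d - 1 / s)                          # Theorem 1
    corollary = math.ceil(m / d)                       # Corollary 1
    return deletion, grs, corollary

for d in range(2, 23):
    print(d, *(round(a, 2) if a else '-' for a in alpha_values(d)))
```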
[1] N. Alon, L. Rónyai, and T. Szabó, Norm-graphs: variations and applications, J. Combin. Theory Ser. B 76 (1999), 280-290.
[2] B. Bukh, Random algebraic construction of extremal graphs, Bull. London Math. Soc. 47 (2015), 939-945.
[3] B. Bukh and D. Conlon, Rational exponents in extremal graph theory, J. Eur. Math. Soc. 20 (2018), 1747-1757.
[4] D. Conlon, Graphs with few paths of prescribed length between any two vertices, Bull. Lond. Math. Soc. 51 (2019), 1015-1021.
[5] P. Erdős, On sequences of integers no one of which divides the product of two others and on some related problems, Mitt. Forsch.-Inst. Math. Mech. Univ. Tomsk 2 (1938), 74-82.
[6] P. Erdős, On extremal problems of graphs and generalized hypergraphs, Israel J. Math. 2 (1964), 183-190.
[7] A. Ferber, G. McKinley, and W. Samotij, Supersaturated sparse graphs and hypergraphs, Int. Math. Res. Not. IMRN 2020 (2020), 378-402.
[8] Z. Füredi and M. Simonovits, The history of degenerate (bipartite) extremal graph problems, in Erdős centennial, 169-264, Bolyai Soc. Math. Stud., 25, János Bolyai Math. Soc., Budapest, 2013.
[9] D. S. Gunderson, V. Rödl, and A. Sidorenko, Extremal problems for sets forming Boolean algebras and complete partite hypergraphs, J. Combin. Theory Ser. A 88 (1999), 342-367.
[10] N. Katz, E. Krop, and M. Maggioni, Remarks on the box problem, Math. Res. Lett. 9 (2002), 515-519.
[11] J. Kollár, L. Rónyai, and T. Szabó, Norm-graphs and bipartite Turán numbers, Combinatorica 16 (1996), 399-406.
[12] T. Kővári, V. T. Sós, and P. Turán, On a problem of K. Zarankiewicz, Colloq. Math. 3 (1954), 50-57.
[13] D. Kleitman and K. Winston, On the number of graphs without 4-cycles, Discrete Math. 41 (1982), 167-172.
[14] J. Ma, X. Yuan, and M. Zhang, Some extremal results on complete degenerate hypergraphs, J. Combin. Theory Ser. A 154 (2018), 598-609.
| [] |
[
"Gravitational-wave physics with Cosmic Explorer: limits to low-frequency sensitivity",
"Gravitational-wave physics with Cosmic Explorer: limits to low-frequency sensitivity"
] | [
"Evan D Hall ",
"Kevin Kuns ",
"Joshua R Smith ",
"Yuntao Bai ",
"Christopher Wipf ",
"Sebastien Biscans ",
"Rana X Adhikari ",
"Koji Arai ",
"Stefan Ballmer ",
"Lisa Barsotti ",
"Yanbei Chen ",
"Matthew Evans ",
"Peter Fritschel ",
"Jan Harms ",
"Brittany Kamai ",
"Jameson Graef Rollins ",
"David Shoemaker ",
"Bram Slagmolen ",
"Rainer Weiss ",
"Hiro Yamamoto ",
"Walter Burke ",
"\nLIGO Laboratory\nMassachusetts Institute of Technology\nCambridgeMassachusettsUSA\n",
"\nInstitute for Theoretical Physics\nNicholas and Lee Begovich Center for Gravitational-Wave Physics and Astronomy\nCalifornia State University\nFullerton, FullertonCaliforniaUSA\n",
"\nLIGO Laboratory\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA\n",
"\nDepartment of Physics\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA\n",
"\nGran Sasso Science Institute (GSSI)\nSyracuse University\nSyracuse, AquilaNew York, I-L'USA, Italy\n",
"\nDepartment of Astronomy and Astrophysics\nINFN\nLaboratori Nazionali del Gran Sasso, I-AssergiItaly\n",
"\nDepartment of Mechanical and Civil Engineering\nUniversity of California Santa Cruz\nSanta CruzCaliforniaUSA\n",
"\nOzGrav, ANU Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA\n",
"\nThe Australian National University\nCanberraAustralia\n"
] | [
"LIGO Laboratory\nMassachusetts Institute of Technology\nCambridgeMassachusettsUSA",
"Institute for Theoretical Physics\nNicholas and Lee Begovich Center for Gravitational-Wave Physics and Astronomy\nCalifornia State University\nFullerton, FullertonCaliforniaUSA",
"LIGO Laboratory\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA",
"Department of Physics\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA",
"Gran Sasso Science Institute (GSSI)\nSyracuse University\nSyracuse, AquilaNew York, I-L'USA, Italy",
"Department of Astronomy and Astrophysics\nINFN\nLaboratori Nazionali del Gran Sasso, I-AssergiItaly",
"Department of Mechanical and Civil Engineering\nUniversity of California Santa Cruz\nSanta CruzCaliforniaUSA",
"OzGrav, ANU Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics\nCalifornia Institute of Technology\nPasadenaCaliforniaUSA",
"The Australian National University\nCanberraAustralia"
] | [] | Cosmic Explorer (CE) is a next-generation ground-based gravitational-wave observatory concept, envisioned to begin operation in the s, and expected to be capable of observing binary neutron star and black hole mergers back to the time of the first stars. Cosmic Explorer's sensitive band will extend below 10 Hz, where the design is predominantly limited by geophysical, thermal, and quantum noises. In this work, thermal, seismic, gravity-gradient, quantum, residual gas, scattered-light, and servo-control noises are analyzed in order to motivate facility and vacuum system design requirements, potential test mass suspensions, Newtonian noise reduction strategies, improved inertial sensors, and cryogenic control requirements. Our analysis shows that with improved technologies, Cosmic Explorer can deliver a strain sensitivity better than 10 −23 Hz −1/2 down to 5 Hz. Our work refines and extends previous analysis of the Cosmic Explorer concept and outlines the key research areas needed to make this observatory a reality. | 10.1103/physrevd.103.122004 | [
"https://arxiv.org/pdf/2012.03608v3.pdf"
] | 227,336,934 | 2012.03608 | 8853d53953d9e27ef9afba9bd5bb2fb62a00736c |
Gravitational-wave physics with Cosmic Explorer: limits to low-frequency sensitivity
Evan D Hall
Kevin Kuns
Joshua R Smith
Yuntao Bai
Christopher Wipf
Sebastien Biscans
Rana X Adhikari
Koji Arai
Stefan Ballmer
Lisa Barsotti
Yanbei Chen
Matthew Evans
Peter Fritschel
Jan Harms
Brittany Kamai
Jameson Graef Rollins
David Shoemaker
Bram Slagmolen
Rainer Weiss
Hiro Yamamoto
LIGO Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts, USA
Nicholas and Lee Begovich Center for Gravitational-Wave Physics and Astronomy
California State University, Fullerton
Fullerton, California, USA
LIGO Laboratory
California Institute of Technology
Pasadena, California, USA
Walter Burke Institute for Theoretical Physics
California Institute of Technology
Pasadena, California, USA
Department of Physics
California Institute of Technology
Pasadena, California, USA
Syracuse University
Syracuse, New York, USA
Gran Sasso Science Institute (GSSI)
L'Aquila, Italy
INFN, Laboratori Nazionali del Gran Sasso
Assergi, Italy
Department of Astronomy and Astrophysics
University of California Santa Cruz
Santa Cruz, California, USA
Department of Mechanical and Civil Engineering
California Institute of Technology
Pasadena, California, USA
OzGrav-ANU, Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics
The Australian National University
Canberra, Australia
Gravitational-wave physics with Cosmic Explorer: limits to low-frequency sensitivity
Cosmic Explorer (CE) is a next-generation ground-based gravitational-wave observatory concept, envisioned to begin operation in the 2030s, and expected to be capable of observing binary neutron star and black hole mergers back to the time of the first stars. Cosmic Explorer's sensitive band will extend below 10 Hz, where the design is predominantly limited by geophysical, thermal, and quantum noises. In this work, thermal, seismic, gravity-gradient, quantum, residual gas, scattered-light, and servo-control noises are analyzed in order to motivate facility and vacuum system design requirements, potential test mass suspensions, Newtonian noise reduction strategies, improved inertial sensors, and cryogenic control requirements. Our analysis shows that with improved technologies, Cosmic Explorer can deliver a strain sensitivity better than 10 −23 Hz −1/2 down to 5 Hz. Our work refines and extends previous analysis of the Cosmic Explorer concept and outlines the key research areas needed to make this observatory a reality.
I. INTRODUCTION
The second generation of laser-interferometric gravitational-wave observatories - Advanced LIGO [ ], Advanced Virgo [ ], and KAGRA [ ] - have opened a new window on the universe by observing gravitational waves from merging systems of black holes [ , ] and neutron stars [ ], and have ushered in a new era in multi-messenger astronomy [ ]. Dozens of coalescing binary systems have been observed thus far [ , ], with rapid alerts delivering sky locations and probable system types [ ], bringing the features of the underlying astrophysical populations into focus. An enhancement to Advanced LIGO, known as LIGO A+, with improved quantum noise and optical coatings, is now being implemented [ ]. Additionally, research and development is underway toward a cryogenic silicon detector, LIGO Voyager, that could be implemented in the existing LIGO facilities [ ], and a concept for a high-frequency-focused Australian observatory is being developed [ ].
A vision is developing for a global third-generation (3G) network of ground-based gravitational-wave observatories capable of observing gravitational waves across cosmic time, with nearby systems detected with incredible precision [ , ]. The European concept for a third-generation observatory is the Einstein Telescope (ET) [ ], a 10 km triangular underground observatory combining three high-power room-temperature interferometers sensitive at high frequency and three cryogenic silicon interferometers sensitive at low frequency. A United States concept for a 3G observatory is Cosmic Explorer [ - ], a 40 km L-shaped, single-interferometer observatory built on the Earth's surface.
We anticipate a staged approach to Cosmic Explorer, similar to the approach adopted by LIGO, in which the facility hosts successive generations of detectors, each exploiting the most advanced technology available at the time. We envision that the first detector, Cosmic Explorer 1 (CE1), will operate in the 2030s using LIGO A+ technology scaled up to the increased dimensions of the facility, and with a few modest improvements. For the 2040s, the state-of-the-art technology is more difficult to predict. In this work we consider two possible designs for this detector, called Cosmic Explorer 2 (CE2). One possibility is that CE2 is a further extension of LIGO A+ technology, retaining room-temperature fused silica test masses and a 1 µm laser as the working technology. Another possibility is that CE2 is an extension of the LIGO Voyager technology, employing silicon test masses at 123 K and a 2 µm laser. In the rest of the paper, we will refer to detectors based on room temperature fused silica test masses and 1 µm laser wavelength as the "1 µm technology" and those with cryogenic silicon test masses and 2 µm laser wavelength as the "2 µm technology." For both CE1 and CE2, the detector designs target observations above 5 Hz, while Einstein Telescope targets observations down to 3 Hz. This paper presents an assessment of the low-frequency sensitivity of CE1 and CE2 based on recent research and development progress. We first present the basic low-frequency observational capabilities of Cosmic Explorer (Section II) and discuss broadly the limits to the Cosmic Explorer strain sensitivities (Section III). We then describe the Cosmic Explorer facility (Section IV) and go into detail about low-frequency noise sources (Section V). Then in Section VI we take stock of the research and development that will be required to realize Cosmic Explorer, and we look forward to future work. Appendix A summarizes the different technologies used in the two stages of Cosmic Explorer and Appendix B compares the displacement and force noises of Cosmic Explorer with those of other detectors.

TABLE I (caption; columns: Observatory, early-warning time for BNS, Mmax): "BNS" refers to a 1.4 M⊙ + 1.4 M⊙ neutron-star system (tidal and post-merger effects not included), and "BBH" to a binary black hole system, in both cases nonspinning. The time before merger is given for an optimally oriented neutron star system at a redshift z = 0.03, with a threshold signal-to-noise ratio of 8. Mmax is the maximum mass for which an optimally oriented nonspinning equal-mass system could be detected at z = 1.

FIG. (caption, fragment): The early-warning times are tabulated in Table I, showing that third-generation detectors will provide early warning on the scale of hours, compared to the minutes provided by second-generation detectors. Systems this loud (or louder) should be expected roughly once per year assuming a local merger rate of ∼300 Gpc −3 yr −1 [ ]. In all cases the source is assumed to be circularly polarized. aLIGO and Voyager are shown for f ≥ 10 Hz, CE for f ≥ 5 Hz, and ET for f ≥ 3 Hz; these are the low-frequency cutoffs assumed for the signal calculations throughout this work.
II. ASTROPHYSICS
Cosmic Explorer has a range of science goals, which together take advantage of the instrument's full broadband sensitivity up to several kilohertz. Here we focus on the detection of compact binary signals. The low-frequency sensitivity affects the reach of the instrument for heavy and high-redshift signals, as well as the total signal-to-noise ratio of all compact-binary signals. The optimal signal-to-noise ratio for a particular frequency-domain signal h̃(f), measured in a detector with a strain sensitivity S_h(f) and a gravitational-wave sensitivity band extending down to a low-frequency cutoff f_low, is obtained when the signal is detected with a matched filter, yielding an amplitude signal-to-noise ratio given by ρ² = 4 ∫_{f_low}^{∞} df |h̃(f)|² / S_h(f) [ - ].
For light systems (e.g., neutron stars) which are still in their inspiral phase at frequencies ≳ 10 Hz, improving the low-frequency cutoff f_low has a modest but noticeable effect on the total signal-to-noise ratio: for the idealized case of a detector with a flat noise floor down to a lower cutoff frequency f_low, the matched-filter signal-to-noise ratio scales as ρ ∝ f_low^(-2/3), since |h̃(f)| ∝ f^(-7/6) in the stationary-phase approximation [ ]. This scaling amounts to roughly a 60 % improvement as the cutoff frequency is halved. On the other hand, the improvement in the amount of early warning for these inspiraling systems can be significant: the time t_merge before a coalescing system merges is related (again in the stationary-phase approximation) to the gravitational-wave frequency f_GW by t_merge ∝ f_GW^(-8/3) [ ]. Therefore, sufficiently loud signals will accumulate the threshold signal-to-noise ratio soon after entering the sensitivity band, leading to an early warning time t_early ∝ f_low^(-8/3). This means that halving the low-frequency cutoff increases the early warning time more than sixfold. Fig. shows the accumulation of signal-to-noise ratio found by computing ρ²(t_merge) = 4 ∫_{f_low}^{f_GW(t_merge)} df |h̃(f)|² / S_h(f), with h̃(f) in this case chosen to correspond with a 1.4 M⊙ + 1.4 M⊙ binary system at z = 0.03 (luminosity distance 0.14 Gpc). By setting a threshold SNR of 8, the corresponding early warning time can be solved for, and is given in Table I for the detector sensitivities shown in Fig. .

FIG. (caption): Detectability of nonspinning equal-mass black hole binaries as a function of mass and redshift, with detectability being defined as having a matched-filter signal-to-noise ratio (SNR) ≥ 8. The solid line indicates each detector's horizon, at which an optimally oriented system with a given mass and redshift will be detected with SNR = 8, and suboptimally oriented systems have SNR < 8. Systems lying above the solid line are limited to SNR < 8 regardless of orientation. Along the edge of the dark (light) shaded band, 10 % (50 %) of the systems will be detected with SNR ≥ 8 and the remainder will have SNR < 8 due to unfavorable orientation.

Low-frequency sensitivity is especially impactful for the detection of intermediate-mass black holes in the range 100 M⊙ ≲ M ≲ 1000 M⊙. Detecting these systems at high redshift would provide information on the oldest population of stars (Population III). Additionally, these detections could demonstrate that supermassive black holes, approaching and exceeding 10⁶ M⊙, were formed by accretion and hierarchical mergers from Population III remnants (the so-called "light seed" scenario) [ ]. Fig. shows the detectability of these systems as a function of mass and redshift, computed using the redshifted (i.e., detector-frame) gravitational waveform, which is obtained from the source-frame waveform h̃(f) by the substitutions f ↦ f/(1 + z), m_1 ↦ (1 + z)m_1, and m_2 ↦ (1 + z)m_2 [ ].
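To make these scalings concrete, the short sketch below evaluates the matched-filter integral for a Newtonian inspiral, |h̃(f)| ∝ f^(-7/6), over an idealized flat noise floor. The overall amplitude, the 4 kHz upper limit, and the grid resolution are illustrative assumptions, not detector parameters from this work; only the ratios are meaningful.

```python
import numpy as np

def snr(f_low, f_high=4000.0, n=200000):
    """Matched-filter SNR for |h(f)|**2 / S_h(f) = f**(-7/3) over a flat
    noise floor (arbitrary normalization; only ratios are meaningful)."""
    f = np.linspace(f_low, f_high, n)
    y = f ** (-7.0 / 3.0)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))  # trapezoid rule
    return np.sqrt(4.0 * integral)

# Halving the cutoff: rho scales roughly as f_low**(-2/3), a ~60 % gain.
print(snr(5.0) / snr(10.0), 2.0 ** (2.0 / 3.0))   # both ~1.59
# Early-warning time scales as f_low**(-8/3): halving f_low gives
print(2.0 ** (8.0 / 3.0))                          # ~6.35, "more than sixfold"
```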
III. STRAIN SENSITIVITY
Our Cosmic Explorer models adopt the dual-recycled Fabry-Pérot Michelson interferometer topology now employed by advanced detectors shown in Fig. . In brief, these detectors are Michelson interferometers whose arms are enhanced by the inclusion of partially transmissive input mirrors, turning the arms into Fabry-Pérot cavities. Then, a power-recycling mirror is placed between the laser and the beamsplitter to critically couple the Fabry-Pérot arms to the laser, which maximizes the circulating arm power. Additionally, a signal extraction mirror between the beamsplitter and the output port is used to broaden the bandwidth of the instrument [ ]. Squeezed vacuum states are reflected off of a filter cavity and injected into the antisymmetric port of the interferometer in order to achieve broadband quantum noise reduction.
The upper limit to the achievable bandwidth of Cosmic Explorer is defined by the free spectral range of the L = 40 km arms, given by f_FSR = c/2L = 3.75 kHz. We take the lower limit to be 5 Hz; this is not a precisely motivated cutoff, but comes from our expectation of significant noise from local gravity fluctuations at a few hertz from the atmosphere, and (if Advanced-LIGO-like suspensions are used) from thermal, seismic, and control noise. The rest of the present work is concerned primarily with the geophysical and thermal noises, leaving a detailed discussion of other noises to later works.

Footnote: These scalings are valid if the detector does not significantly move, due to the earth's rotation and orbit around the sun, while the signal is in that detector's sensitivity band. This approximation is valid for Cosmic Explorer, since f_low is sufficiently high, but not for space-based detectors such as LISA.

FIG. (caption): Estimated low-frequency spectral sensitivity limit (solid black) of Cosmic Explorer and the known noise sources that cause these limits (colored curves). The sensitivity limit from previous work [ ] is also shown (dotted black curve). From 5 to 10 Hz, the strain sensitivity is limited by seismic Newtonian noise.
Since the initial exploration of the Cosmic Explorer sensitivity [ ], many of the estimates of the fundamental noises have been refined, and some new noise sources have been considered. Figs. and show the updated low-frequency limits to the spectral sensitivity for CE1 and CE2, respectively, and some of the key sources of noise that contribute to these limits; the curves from the previous sensitivity study are also included. For CE1, updates with respect to previous work mean that the instrument attains strain noise better than 10 −23 Hz −1/2 above about 5.7 Hz, whereas the instrument presented in previous work attained this performance only above 8 Hz. For CE2, strain noise below 10 −23 Hz −1/2 is achieved around 4.8 Hz compared to 6.3 Hz in previous work; additionally, the noise performance around 10 Hz is slightly degraded for CE2. The primary differences from this initial work are as follows.
• The ground motion of the Cosmic Explorer facility is assumed to be lower than that of the LIGO facilities above 5 Hz, based on long-term seismic surveys from some promising locations around the United States (Section IV). This lowers both the seismic noise and the seismic component of the Rayleigh-wave Newtonian noise.
• CE1 assumes tenfold better seismic isolation than Advanced LIGO at 1 Hz, and CE2 assumes one hundredfold better seismic isolation than Advanced LIGO at 1 Hz (Section V B).
• The Newtonian noise estimates now include contributions from seismic body waves and atmospheric infrasound (Section V C), and CE1 assumes twofold suppression of ambient Rayleigh waves. Together with the reassessment of the ground motion, we find that suppression of Rayleigh and body waves is needed for CE2 to meet the sensitivity quoted in [ ].
• Phase noise induced by light propagation in the bulk of the input test masses is now included; this constitutes a potentially non-negligible noise source for the CE2 2 µm technology (Section V D).
• The force noise caused by the residual gas molecules in the test mass chambers striking the test masses is now included (Section V F).
• The possibility of building a room-temperature CE2 with 1 µm technology was not previously considered. Such a detector would have non-negligible coating thermal noise around 10 Hz and thus slightly worse performance than the 2 µm technology and the estimate from previous work at these frequencies (Section V D).
• The suspensions for both detectors have been enlarged to 4 m of total height (previously they were 3.2 m) and 1500 kg of total mass (previously they were 980 kg), and optimized for minimal thermal and seismic noise given updated mechanical constraints on the strength of the materials (Section V A).
• Preliminary considerations of the scattered light noise (Section V G) and control system noise (Section V H) suggest that these noises can be rendered subdominant within the gravitational-wave band.
IV. THE COSMIC EXPLORER FACILITY
Many of the limits to Cosmic Explorer sensitivity at low frequency depend on assumptions about the Cosmic Explorer facility and environment. In this section we lay down requirements for the ground motion and seismic wave content (Section IV A), the atmospheric infrasound spectrum (Section IV B), and the ultra-high vacuum system (Section IV C). This list is not exhaustive; for example, magnetic requirements are not discussed because the coupling of local magnetic fields depends primarily on technical details of the detector's electronics, which are difficult to estimate without detailed modeling.

FIG. (caption): Same as Fig. , but for Cosmic Explorer 2 realized with Left: the 2 µm technology (cryogenic silicon test masses and a 2 µm laser wavelength) and Right: the 1 µm technology (room-temperature fused silica test masses and a 1 µm laser). For both technologies, the seismic and suspension thermal noises are comparable to the infrasonic Newtonian noise background, which is taken to be a geophysical limit for the facility (Section V C).
A. Ground motion
Ground motion limits the performance of gravitational-wave interferometers both through the mechanical coupling from the ground to the suspension point of the test mass and through the direct gravitational attraction of the ground on the test mass (the so-called "Newtonian noise") [ ]. Additionally, ground motion transferred to the beam tube can cause noise from stray light.
The location of Cosmic Explorer is not yet known, but an assumption for the local ground seismicity can be made based on publicly available seismic data and on the noise environment from existing facilities. To get long-term trends that encompass diurnal and seasonal variations in ground motion, we examined noise histograms from selected USArray [ ] and ANSS [ ] seismic stations in the western United States; these stations were chosen for their proximity to promising Cosmic Explorer candidate sites which have favorable topographic properties. We also examined noise histograms from the LIGO Hanford and Livingston sites. Above a few hertz, the ground motion of the LIGO sites is dominated by on-site machinery. In particular, heating, ventilation and air conditioning systems dominate from 1 to 10 Hz [ ]. We assume that it will be possible to design the Cosmic Explorer infrastructure to better isolate the interferometer from such machinery by moving the vibration sources out of the experimental buildings, putting them on dampers or on pedestals mounted separately and deeply into the ground. The Cosmic Explorer ground noise model is shown in Fig. ; this model assumes that above 5 Hz, the ground acceleration noise is no more than 1 µm s −2 Hz −1/2 .
A complete estimate of the Newtonian noise requires a model of the seismic wave amplitude spectra and an understanding of their propagation through the ground. In general, surface seismic motion is usually assumed to be dominated by surface waves (Rayleigh and Love waves) as opposed to body waves (P and S waves), although the actual composition depends on the particular site and may additionally include higher-order surface waves [ ]. Because the Cosmic Explorer site is not known, we adopt a model in which the site is Rayleigh-wave dominated above 5 Hz, with a flat body-wave spectrum of amplitude 0.3 µm s −2 Hz −1/2 composed equally of P waves, vertically polarized S waves, and horizontally polarized S waves. Newtonian noise is generated from only the Rayleigh, P, and vertically polarized S waves, because these waves either cause a vertical displacement of the ground surface or density fluctuations of the bulk. The P-, S-, and Rayleigh-wave speeds are assumed to be v_P = 600 m/s, v_S = 300 m/s, and v_R = 250 m/s, respectively. (Love waves are not considered because they do not occur in a homogeneous and isotropic elastic half-space; moreover, Love waves do not produce Newtonian noise because their motion is a horizontal shear.) These parameters, and the assumptions on the wave content of the ground motion, will have to be revised once the future Cosmic Explorer site is selected and characterized.
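As a quick aid to interpreting these assumed wave speeds, the sketch below converts them to seismic wavelengths λ = v/f across the 5-10 Hz band; the band edges come from the text and the formula is elementary, so nothing here is detector-specific.

```python
# Seismic wavelength lambda = v / f for the wave speeds assumed above.
speeds = {"P": 600.0, "S": 300.0, "Rayleigh": 250.0}   # m/s
for f_hz in (5.0, 10.0):
    for kind, v in speeds.items():
        print(f"{kind:8s} wave at {f_hz:4.1f} Hz: wavelength {v / f_hz:5.1f} m")
# A 5 Hz Rayleigh wave spans ~50 m, setting the scale over which ground
# motion near a test mass is coherent (relevant to the sensor arrays and
# excavation discussed in Section V C).
```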
B. Atmospheric fluctuations
Newtonian noise from density fluctuations in the atmosphere is likely to impact the strain sensitivity of third-generation detectors. For Cosmic Explorer, the relevant mechanism is expected to be the propagation of infrasound (sound at frequency ≲ 20 Hz) in the vicinity of the test masses. Global infrasound surveys provide noise histograms up to slightly below 10 Hz [ ]; based on the median noise model, we take the outdoor infrasound spectrum for Cosmic Explorer to be 1 mPa Hz −1/2. The choice of the median infrasound background means that, while it may be possible to find a site with lower infrasound background, we are not reliant on finding an exceptional site in order to realize the noise performance described herein. The impact of atmospheric infrasound on the detector strain sensitivity is discussed in Section V C.
Other mechanisms of atmospheric noise generation include spatially varying temperature fields that move near the test mass due to wind, and pressure fluctuations generated by turbulent mixing (the aeroacoustic effect), but these noise sources are unlikely to be significant above 5 Hz [ ]. Finally, details of the dimension and shape of the buildings housing the test masses can alter the above noise sources (e.g., by excluding large density fluctuations close to the test masses), but have the potential to introduce extra noise due to local vortices [ ]. We do not consider details of the test mass buildings here, but note that proper design will be needed to ensure that atmospherically induced noise is kept to a minimum. Accurately modeling the Newtonian noise contribution below 5 Hz is an area of ongoing research, and we do not attempt a detailed noise analysis in this frequency band.
C. Vacuum system
The design of the Cosmic Explorer vacuum system, including the beam tube infrastructure and test mass chambers, has not been determined. However, in Section V G we determine that a beam tube diameter of 120 cm with a similar acceleration spectrum as the LIGO beam tube motion is likely sufficient to keep noise from back-scattered light well below the total Cosmic Explorer noise, though this will be reevaluated once forward-scattering effects are accounted for.
Although the beam tubes and test mass chambers are evacuated, the small amount of residual gas causes noise in the detector through two mechanisms discussed in Section V F. The first is optical path length fluctuation due to the polarizability of the molecules in the beam tubes passing through the laser beam [ , ], and the second is test mass motion due to momentum transfer from the gas molecules in the chambers [ , ]. Achieving low pressures is more challenging in the chambers than in the beam tubes because the chambers will be periodically opened in order to make modifications to the detector. We thus set the vacuum system requirements such that the total residual gas noise for the 1 µm technology is a factor of three below the CE design sensitivity at 10 Hz and a factor of five below the design sensitivity at 100 Hz.
In this work we assume that the total vacuum pressure in both the tubes and chambers is dominated by molecular hydrogen, water, molecular nitrogen, and molecular oxygen with each species contributing equally to the total gas noise. Under these assumptions, the above noise requirements translate into requirements on the partial pressures in the beam tubes of H 2 = 44 nPa, H 2 O = 4.0 nPa, N 2 = 2.5 nPa, and O 2 = 2.8 nPa, for hydrogen, water, nitrogen, and oxygen, respectively, and on the partial pressures in the test mass chambers of H 2 = 410 nPa, H 2 O = 140 nPa, N 2 = 110 nPa, and O 2 = 100 nPa. It is also important that the hydrocarbons are kept low enough that they do not contaminate the mirror surface and cause excess optical loss.
V. NOISE ESTIMATES
This section describes noise terms that contribute to the limit of the low-frequency performance of Cosmic Explorer.
A. Suspension thermal noise
The baseline Cosmic Explorer design assumes scaled-up versions of the quadruple pendulum suspensions used in LIGO [ ] and planned for Voyager [ ], along with a few modifications, to decrease the seismic and suspension thermal noises.
In order to further reduce the resonances, the test masses are suspended by a final set of blade springs attached to the PUM made from the same material as the PUM and test mass. One concept for the design of this final stage is shown in the right panel of Fig. . The stress and spring constant of the blade can be calculated with beam theory [ ] by approximating it as a rectangular cantilever of length ℓ, width , and thickness ℎ. The maximum stress max ∝ ℓ/ ℎ 2 occurs at the clamp, and the spring constant ∝ ℎ 3 /ℓ 3 is the ratio of the load suspended by the blade to its maximum deflection at the tip. The blade dimensions should be chosen to minimize while keeping the maximum stress below a safety factor of the breaking stress of the blade.
For the 1 µm technology, as with LIGO, the silica test mass is suspended from the PUM by four silica fibers welded to the test mass [ ]; in Cosmic Explorer they are welded at the top to the blade springs, while in LIGO they are welded directly to the PUM. The contribution of the loss angle φ to the imaginary part of the horizontal spring constant, Im k ∝ φ/D, is reduced by the dilution factor D ∝ I^(-1/2), where I is the cross-sectional area moment of inertia of the fiber or ribbon [ , , ]. Since I ∝ r⁴ for a fiber of radius r, it is advantageous to make the radius as small as the breaking stress of the fiber allows. Maximizing the stress in the fiber in this way has the added benefit of reducing the contribution of the fiber to the vertical spring constant and increasing the frequency of the first violin mode, which is proportional to the square root of the stress.
The thermoelastic noise of the fiber has two contributions: one from thermal expansion and one from the temperature dependence of the Young modulus. These two contributions cancel when the fiber stress is appropriately chosen. Thus, a tapered fiber is used with a larger radius at the ends (where the most bending, and therefore the most loss, occurs) chosen to give the stress necessary to cancel the thermoelastic noise, and a smaller radius along the length of the fiber chosen to maximize the stress [ ].
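To illustrate why a highly stressed fiber helps, the sketch below estimates the fundamental violin-mode frequency with the ideal-string result f₁ = (1/2L)√(σ/ρ). The 1.2 GPa stress is the figure quoted later in this section for improved silica fibers; the fiber lengths and the fused-silica density are assumed representative values, not design numbers.

```python
import math

def violin_mode(stress, density, length):
    """Fundamental transverse resonance of an ideal stretched string:
    f1 = (1 / 2L) * sqrt(stress / density)."""
    return math.sqrt(stress / density) / (2.0 * length)

sigma = 1.2e9     # Pa, working stress for improved fused silica fibers
rho = 2200.0      # kg/m^3, fused silica
for L in (0.6, 1.2):  # m, hypothetical final-stage fiber lengths
    print(f"L = {L:3.1f} m -> f1 ~ {violin_mode(sigma, rho, L):4.0f} Hz")
# ~620 Hz at 0.6 m and ~310 Hz at 1.2 m; since f1 ~ sqrt(stress), higher
# stress keeps the violin modes far above the low-frequency band of interest.
```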
For the 2 µm technology, as with Voyager, the silicon test mass is suspended by four silicon ribbons welded to the test mass at the bottom and to the blade springs at the top. Since the ribbons are held near the zero-crossing of the thermal expansion coefficient, the thermoelastic noise in the ribbons cannot be canceled by choice of stress as is done for the fused silica fibers. The ribbon dimensions are therefore chosen to maximize the stress along the entire length of the ribbon. Since I ∝ w h³ for a ribbon of width w and thickness h, a large width-to-thickness ratio is chosen to soften the pendulum in the horizontal direction and to increase the gravitational dilution.
The suspension design also determines the seismic noise, discussed below in Section V B, since the suspensions provide passive 1/f⁸ filtering of seismic noise above all of the longitudinal, vertical, and angular resonances. To reduce both seismic and suspension thermal noise, it is thus advantageous to make the suspensions as soft as possible and to lower their resonances.
To achieve this goal, the total allowable height of the suspensions for all technologies has been increased to 4 m and the total mass increased to 1500 kg. Within these constraints, in an analysis similar to that done for Voyager [ ], the lengths and masses of the silica and silicon suspension stages have been optimized to minimize the sum of these noises over the frequency band of 4 to 15 Hz. At these frequencies, the horizontal and vertical noises of the PUM and test mass also start to become important. The addition of blade springs lowers the first vertical mode, thus reducing the vertical thermal noises, most importantly from the APM.
The maximum stress that the blade springs, fibers, and ribbons can tolerate is an important material property in the design of the suspensions, and it is difficult to predict what will be possible on a timescale of decades. The maximum stress of the LIGO silica fibers is 800 MPa [ ], which provides a safety factor of about six for the breaking stress of fibers realized at the time the LIGO suspensions were designed [ ]. Recent improvements to fused silica fiber fabrication suggest that fibers can be made with stresses of 1.2 GPa, which provides a safety factor of about three [ ]. The Cosmic Explorer fused silica suspensions use this 1.2 GPa for the fibers and tentatively set the maximum blade spring stress to be 800 MPa.
The silicon studies most relevant to the suspensions discussed in this section find that the tensile strength depends on the surface treatment and edge quality, with average breaking stresses measured ranging from 100 to 400 MPa and individual samples observed as high as 700 MPa [ , ]. Cosmic Explorer tentatively sets a maximum stress of 400 MPa for both the blades and ribbons while Voyager uses a more conservative 100 MPa [ ]. Nevertheless, larger stresses have been observed in other contexts. Stresses of 3 to 5 GPa have been observed in silicon wafers [ ] and micro-scale MEMS devices have realized fracture stresses in excess of 1 GPa and stresses of up to 10 GPa have been realized in nano-scale devices [ ].
No blade springs have yet been constructed out of either silica or silicon. Developing this technology and techniques for manufacturing highly stressed materials is a critical area of research and development in realizing the low-frequency sensitivity of Cosmic Explorer. Alternatives to blade springs, such as geometric antisprings [ ], should also be developed in parallel. Additionally, no experiment on earth has ever directly measured (low) suspension thermal noise.

FIG. (caption): Horizontal motion of the Cosmic Explorer suspension point, shown for both CE1 and CE2. CE1 assumes seismic isolation that is moderately improved compared to Advanced LIGO. CE2 assumes further improvements to the seismic isolation using novel inertial sensing technology [ ]. A simplified budget of the CE2 motion is also shown, along with the CE ground motion model (Fig. ).
B. Seismic noise
Like Advanced LIGO [ , ], Cosmic Explorer will suppress seismic noise with passive and active techniques. The suspensions described in Section V A passively filter the seismic noise with a 1/f⁸ slope in amplitude above the suspension resonances, which have been reduced with the optimization described there. Even so, in order to achieve the required seismic noise suppression, the motion of the optical table supporting the suspension will be actively suppressed with a combination of inertial sensors and position sensors. The seismic isolation of the Cosmic Explorer suspension point is shown in Fig. . For CE1, we assume an isolation performance that is moderately improved compared to Advanced LIGO [ ]. At ∼10 Hz we assume a threefold improvement, and at ∼1 Hz a tenfold improvement, though to directly increase the seismic isolation the improvement is only needed down to 5 Hz; seismic isolation improvements below the gravitational-wave band will, however, lessen the requirements on the interferometer control system. The improvement could come, for example, by combining the mechanics of a conventional geophone (GS-13) with an interferometric proof mass readout [ ]. The noise below 1 Hz is residual ground motion that comes from the inclusion of a position sensor signal to lock the suspension point to the ground on long timescales (also referred to as "blending"). Additionally, the horizontal inertial sensing is susceptible to contamination from ground tilt, and should therefore be paired with low-noise tiltmeters [ ]. This is motivated by studies at LIGO Hanford that have shown significant tilt-to-interferometer strain coupling after active seismic isolation [ ].
For CE2, we assume that improvements in inertial sensing will yield another threefold noise improvement at 10 Hz and a tenfold improvement at 1 Hz, again with the improvement only needed down to 5 Hz to achieve a direct seismic isolation improvement. A variety of designs have been proposed, but common themes include a monolithic proof mass assembly to reduce thermal noise and an optical displacement sensor to reduce readout noise. van Heijningen et al. demonstrated a monolithic accelerometer combined with an interferometric readout that reached a noise floor of 8 × 10 −15 m Hz −1/2 above 30 Hz; this should reach 10 −15 m Hz −1/2 above 10 Hz with continued development [ ]. A proposed superconducting niobium upgrade to this system would reduce eddy current damping and greatly improve suspension thermal noise allowing, in principle, 10 −15 m Hz −1/2 above 1 Hz [ ]. However, such a device has yet to be demonstrated, would operate at temperatures below 9.2 K, requiring additional cooling with respect to the Cosmic Explorer cryogenic environment, and would require a low-noise tiltmeter. Development of novel six-dimensional inertial isolators with optical readouts is also progressing [ ], and their use with the existing LIGO facilities and Advanced LIGO isolation infrastructure has been explored [ ]. These sensors would provide the additional benefit of sensing tilt. Additionally, the improved low-frequency noise of the inertial sensors leads to less reliance on the low-frequency position sensor signal, thereby lessening the contamination from residual ground motion.
Lowering the tilt coupling, along with mitigating gravity gradient fluctuations from the atmosphere, is an important motivator for carefully designed buildings [ ].
C. Newtonian noise
Previous studies of Newtonian noise for Cosmic Explorer considered only the contribution from seismic Rayleigh waves, and assumed a Rayleigh-wave noise amplitude equal to that of the existing LIGO facilities [ ]. Here we refine that estimate and additionally include the contributions from seismic body waves and from atmospheric infrasound. We start with analytical formulae available in the literature for the infinite half-space, and then additionally we consider numerical simulations that account for trenches that can reduce Newtonian noise relative to the half-space solutions. The Newtonian noise estimates are shown in Fig. .

1. Seismic Newtonian noise

As described in Sec. IV A, we assume that compared to LIGO, the Cosmic Explorer facility will have a lower Rayleigh-wave noise in the anthropogenic band: 1 µm s −2 Hz −1/2 above 5 Hz. We also assume a body-wave noise amplitude equal to 0.3 µm s −2 Hz −1/2 above 5 Hz, equipartitioned among P-waves, vertically polarized S-waves, and horizontally polarized S-waves.
To compute the Newtonian noise from seismic and infrasonic density fluctuations, we employ the formulae from Harms [ ], which are valid for a test mass suspended above a homogeneous, isotropic elastic half-space. We therefore do not consider the effect of stratigraphy, other ground anisotropies, the interaction with structures, or the interconversion of different types of seismic waves. These features will need to be accounted for to get a full understanding of the behavior of the local seismic field and hence the Newtonian noise level. For CE1, we have assumed that the effect of seismic Newtonian noise can be mitigated (Section V C) with 2× amplitude suppression of Rayleigh waves. The result in Fig. shows that CE1 is limited by seismic Newtonian noise from 5-10 Hz, with a secondary contribution from infrasound. For CE2, we have assumed that seismic Newtonian noise can be further mitigated with 10× amplitude suppression for Rayleigh waves and 3× amplitude suppression for body waves; the result in Fig. shows that CE2 is then limited by atmospheric Newtonian noise, described below.
2. Atmospheric Newtonian noise
As mentioned in Section IV B, we assume the Cosmic Explorer facility has a typical infrasound spectrum of 1 mPa Hz −1/2 ; this is an extrapolation from long-term global infrasound data, available below 10 Hz [ ], and assumes no significant contribution from site infrastructure.
To compute the Newtonian noise induced by infrasound fluctuations, we use the calculation in Harms [ ], which is valid for a test mass immersed in a fluid half-space. The result is shown in Fig. . For both stages of Cosmic Explorer, no suppression is assumed.
As mentioned in Section IV B, we do not include other processes besides infrasound that produce density fluctuations in the atmosphere, such as advected temperature fluctuations or aeroacoustic noise, because we expect the Newtonian noise induced by these processes to be negligible above a few hertz.
3. Mitigation strategies
Unlike mechanically coupled seismic and acoustic noise, which can be strongly attenuated by suspending and inertially isolating the test mass inside a vacuum chamber, the Newtonian effect of seismic and acoustic fluctuations cannot be attenuated except by reducing the fluctuation amplitude, increasing the distance from the fluctuations to the test mass, or using auxiliary sensors to estimate the Newtonian contribution to the detector strain channel. Newtonian noise mitigation therefore requires a different set of techniques than for mechanical isolation, and the amount of achievable suppression will not be as great.
CE1 calls for mitigating the seismic Rayleigh-wave Newtonian noise by a factor of 2 in amplitude; CE2 calls for mitigating the seismic Rayleigh-wave Newtonian noise by a factor of 10 in amplitude, and the seismic body-wave Newtonian noise by a factor of 3 in amplitude. This mitigation could be achieved by several means, potentially used in concert:
1. Seismometer array subtraction. Arrays of seismometers can be used to estimate the seismic field in the vicinity of the test mass and thereby subtract Newtonian noise from the gravitational-wave channel [ ]. A proof-of-principle experiment to subtract ground motion from a tiltmeter signal achieved a tenfold suppression in the region 10-20 Hz [ ].
2. Excavation underneath the test masses. Nearby density and displacement fluctuations can be suppressed simply by removing earth from the vicinity of the test mass, replacing it with a lightweight fill material such as extruded polystyrene if necessary. Harms and Hild [ ] computed the suppression of Rayleigh-wave Newtonian noise from an 11 m wide and 4 m deep hemispherical recess, and here we repeat their analysis to additionally include the effect of the recess on P- and S-waves. The result is shown in Fig. , showing that moderate reduction of Rayleigh waves can be achieved near and above 10 Hz, while the reduction of body waves is less significant.
3. Topography and seismic metamaterials. Seismic metamaterials could be built to deflect or dissipate seismic waves before they arrive at the test mass, potentially suppressing surface wave amplitudes by a factor of a few [ - ]. Similarly, berms, ditches, and other nearby topographic features can affect the propagation of seismic waves, and thus the Newtonian noise level.
No mitigation of infrasound noise is assumed, and thus infrasound is considered a sensitivity limit of the Cosmic Explorer facility. Tropospheric LIDAR, which would otherwise be well-suited to three-dimensional estimation of atmospheric fluctuations, would require sensitivity improvements of several orders of magnitude in order to sense and subtract infrasound [ ]. Baffling or otherwise acoustically isolating the interior of the test mass building may be able to reduce the infrasound Newtonian noise below the outdoor value at a discrete set of frequencies [ ]. A true cutoff for infrasound noise could be engineered by burying the test mass a depth d below ground, which would suppress the noise by e^(−ωd/v_s), where ω = 2πf and v_s is the speed of sound; however, to achieve significant suppression for f ≥ 5 Hz would require d ≥ 65 m. Additionally, underground operation requires a reassessment of the Newtonian noise, since the detector would operate in the bulk of the ground rather than on the surface.
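A quick check of the burial estimate above, assuming the exponential suppression e^(−ωd/v_s) with ω = 2πf and a nominal sound speed v_s = 340 m/s; the sound speed and the choice of what counts as "significant" suppression are assumptions for illustration.

```python
import math

def infrasound_suppression(f_hz, depth_m, v_s=340.0):
    """Attenuation e**(-2*pi*f*d/v_s) of infrasound Newtonian noise for a
    test mass buried a depth d below ground."""
    return math.exp(-2.0 * math.pi * f_hz * depth_m / v_s)

for d in (10.0, 30.0, 65.0):   # m
    print(f"d = {d:4.0f} m at 5 Hz: suppression {infrasound_suppression(5.0, d):.2e}")
# d = 65 m gives ~2.5e-3 (a factor of several hundred) at 5 Hz, consistent
# with the d >= 65 m figure quoted above.
```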
D. Test mass thermal noise
Cosmic Explorer will use heavy, high-quality test masses which are turned into high-reflectivity Bragg mirrors by coating the test mass surface with multiple layers of dielectric films. By alternating between high-and low-refractive-index materials, and depositing the layers to a thickness that is the same scale as the laser wavelength, the coating creates the conditions for repeated thin-film interference of the laser beam [ ]. The performance of the coating depends on the optical, mechanical, and thermal properties of the materials, which therefore must be chosen with some care [ ].
The 1 µm coating technology will mostly be that of LIGO A+: room-temperature fused silica substrates and coating technology being developed for A+ [ ]. Current research aimed at improving the thermal noise of room-temperature coatings holds promise to result in improved coatings for A+ and thus the 1 µm CE technology [ ]. The 2 µm technology will mostly be that of LIGO Voyager: crystalline silicon substrates operated at 123 K, with coating materials that offer improved thermal noise performance over the 1 µm technology.
Estimated thermal noises associated with the Cosmic Explorer test masses and their coatings are shown in Fig. and the individual noises are discussed below. Neither the A+ nor the Voyager coating designs have been finalized, so in this work we have made assumptions about the high-and low-index material pairs. Depending on the progression of coating research in the next decade, it is possible that the coatings for CE or CE may be different from what is presented here, and could potentially use three or more materials to provide more flexibility to simultaneously optimize the optical and thermal noise properties of the mirrors [ , ].
1. Substrates
Cosmic Explorer will use 320 kg test mass substrates; this comes from the desire to make quantum radiation-pressure noise subdominant to other noise sources and the necessity of having large test masses to accommodate the large diameter beams of a nearly diffraction limited 40 km long arm cavity. There are several sources of thermal noise in test mass substrates: mechanical (Brownian) noise, thermoelastic noise, and thermorefractive noise.
Brownian fluctuation causes a displacement of the mirror surface with a power spectrum S(f) ∝ Tφ/(fw), where T is the test mass temperature, w is the spot size of the beam, and φ is the mechanical loss of the substrate material; there are order unity corrections due to the finite size of the test mass and additional loss on the test mass surface [ ].
Thermoelastic noise is driven by thermodynamic fluctuations that cause displacement of the test masses via the coefficient of thermal expansion α [ ]: the spectrum of the test mass surface displacement due to these fluctuations is S(f) ∝ κα²T²/(w³f²), where κ is the thermal conductivity of the substrate. For fused silica, the contribution of substrate thermoelastic noise to the total instrument noise is negligible. In order to prevent the substrate thermoelastic noise of silicon from making a significant contribution, the substrate temperature must be controlled to near the zero-crossing of the thermal-expansion coefficient [ , ]. The left panel of Fig. shows that |α| ≤ 4 × 10 −8 K −1 meets the requirement for thermoelastic noise to be an order of magnitude below the total design sensitivity. Based on models [ ] and measurements [ ] of the temperature dependence of α, this constraint on α translates to a temperature control requirement of ±2.3 K relative to the zero-crossing temperature of α. This temperature control accuracy is also sufficient to keep thermoelastic noise of the silicon components of the suspension from contributing significantly to the total low frequency suspension thermal noise for the 2 µm technology as shown in Fig. . To achieve ±2.3 K temperature control, it may be sufficient to control the test mass temperatures to a fixed value (for example using the frequency of the internal modes of the silicon test masses as a reference for temperature), or it may be necessary to determine the set temperature based on minimizing the observed noise or by actively measuring the substrates' α values for a signed error signal that would enable negative feedback control.

FIG. (caption, fragment): The high-index coating material for the room-temperature technology is not known, so it has been assumed to have the same properties as the titania-doped tantala used in current detectors, but with a mechanical loss such that the overall coating loss is four times lower than the current Advanced LIGO coating loss.

Typical room temperature variations achieved at the current LIGO observatories are of order ±1 K, and even better accuracy should be achievable with feedback control [ , ]. Temperature gradients due to heating from the environment and from absorbed laser power also need to be considered. If a power P_abs is absorbed on some area of the test mass and dissipates into the substrate, the resulting temperature variation ΔT is determined by Fourier's law, which reads approximately P_abs/κ ∼ ΔT·L, where L is a relevant length dimension for the test mass (both the thickness and diameter are of similar magnitude for Cosmic Explorer). This suggests that in the case of a few watts of laser power absorbed in the coating (i.e., a coating absorption of roughly 1 ppm), the temperature variation in the substrate should be of order tens of millikelvins, which is within the ±2.3 K limit set by the thermoelastic noise coupling.
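The Fourier's-law estimate above can be sketched numerically; the absorbed power and length scale follow the text, while the thermal conductivity of crystalline silicon near 123 K is a representative literature value, so the result is an order-of-magnitude illustration only.

```python
def temperature_rise(P_abs, kappa, length):
    """Order-of-magnitude Fourier's-law estimate: P_abs / kappa ~ dT * L,
    so dT ~ P_abs / (kappa * L)."""
    return P_abs / (kappa * length)

P_abs = 3.0     # W: a few watts absorbed (~1 ppm coating absorption at MW powers)
kappa = 700.0   # W/(m K): crystalline silicon near 123 K (representative)
L = 0.5         # m: test mass length scale (thickness and diameter are similar)
print(f"dT ~ {1e3 * temperature_rise(P_abs, kappa, L):.0f} mK")
# ~9 mK: tens of millikelvins, comfortably inside the +/-2.3 K window needed
# to stay near the zero-crossing of alpha.
```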
The same thermal fluctuations that drive thermoelastic noise also cause phase fluctuations in light passing through the substrates, which is relevant for the two input test masses (ITMs). For both silica and cryogenic silicon, this phase fluctuation is dominated by changes in the index of refraction via the thermorefractive coefficient β = dn/dT [ - ]. The power spectrum of this noise is S(f) ∝ β²κT²h/(F²w⁴f²), where h is the thickness of the test mass, F ≈ 2π/T_i is the finesse of the arm cavities, and T_i is the transmissivity of the input test masses. For fused silica, this noise is well below the other test mass thermal noises. For cryogenic silicon, the higher thermal conductivity and larger thermorefractive coefficient make this noise non-negligible; with the choice of F ≈ 450, the thermorefractive noise at 10 Hz dominates the total test mass thermal noise for the 2 µm technology, and is similar in magnitude to the coating Brownian thermal noise at 10 Hz for the 1 µm technology. (The finesse could be increased to decrease the thermorefractive noise and the power absorbed in the input test mass substrates; however, this value is chosen as a compromise to reduce the effects of signal extraction cavity loss on the high frequency quantum noise, which favors small F.)
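The finesse relation quoted above is easy to check numerically; the input-coupler transmissivity below is an assumed value chosen to give F ≈ 450, and the 1/F² behavior of the substrate thermorefractive term follows the proportionality stated in the text.

```python
import math

T_i = 0.014                      # assumed input test mass transmissivity
F = 2.0 * math.pi / T_i          # finesse, F ~ 2*pi / T_i
print(f"T_i = {T_i} -> F = {F:.0f}")   # ~449, i.e. the F ~ 450 of the text

# The ITM thermorefractive noise PSD scales as 1/F**2, so doubling the
# finesse would cut that noise power fourfold, at the price of stronger
# signal-extraction-cavity loss effects at high frequency.
for scale in (1.0, 2.0):
    print(f"F = {F * scale:5.0f}: relative thermorefractive PSD = {1.0 / scale**2:.2f}")
```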
Additionally, the semiconductor nature of silicon gives rise to refractive index fluctuations due to the motion of free carriers in the silicon test masses. Initial estimates of this noise source [ ] suggested that the phase noise induced by these fluctuations could be significant, but a more recent analysis that includes Debye screening indicates that this noise will lie several orders of magnitude below the total thermal noise of the substrate [ ]. We therefore do not consider this noise source.
Finally, we remark on the static birefringence effects in the test mass substrates. Cosmic Explorer, like current gravitational-wave laser interferometers, is designed to operate in a single linear polarization; interconversion of polarization inside the interferometer acts as an optical loss. The greatest potential for polarization interconversion is in the substrates of the input test masses, and consequently the optical gain of the power-recycling cavity could be impacted. Given a mass thickness h and a birefringence Δn, the power-recycling gain is limited to G < 1/sin²(πΔnh/λ) [ ]; maintaining G = 65 therefore requires Δn ≲ 10 −7. This already appears achievable in existing fused-silica interferometers, and in laboratory measurements of monocrystalline silicon [ ]; for the large-diameter masses of Cosmic Explorer, particularly for the silicon technology which has not yet been demonstrated for kilometer-scale instruments, small birefringence must be maintained over a large area, requiring good optical isotropy and control of the stresses in the substrate.
2. Coating noises
As with the test mass substrates, the thin-film coatings applied to the test masses also exhibit thermal noises that are driven by mechanical and thermodynamic fluctuations.
The 1 µm technology assumes the same target set for the LIGO A+ coatings: an effective factor of four overall reduction in mechanical loss compared to the current Advanced LIGO coatings. This will likely be achieved using silica for the low-index layers, and a yet-to-be-determined metal oxide (or set of metal oxides) for the high-index layers. Recent measurements indicate that the loss angle of thin-film silica can be as low as 2.3 × 10 −5 [ ]; to reach the 4× loss reduction target, this requires a loss angle of the high-index layers of 7.0 × 10 −5. The 2 µm technology assumes LIGO Voyager coatings, where the low-refractive-index layer is again SiO 2 , but the high-refractive-index layer is now amorphous silicon (aSi) with at most 1 ppm optical absorption [ ].
The coating Brownian noise is computed using the formalism of Hong et al. [ ], with the photoelastic effect ignored and the loss angle in bulk and shear strains assumed to be equal.
As in the substrates, thermodynamic fluctuations produce phase fluctuations of the light propagating in the coatings. The phase fluctuations are mediated by the coating's average coefficients of thermal expansion α_c and thermorefraction β_c. Because of the etalon effect, these coefficients act with opposite sign, leading to an overall thermo-optic effect that for most coatings, including the Cosmic Explorer coatings, is smaller than the thermoelastic or thermorefractive effects individually [ ].

Footnote: A formula for Brownian noise under these assumptions is given by Yam et al. [ ], but the expression for their coefficient has an error; the corrected expression using their notation is

(1/(1 − σ)) [ (1 − Y_c/(2Y_s)) (1 − σ − 2σ²)/(1 − σ_s − 2σ_s²) + (1 − σ_s − 2σ_s²) Y_s/((1 + σ) Y_c) ].
E. Quantum noise
The quantum vacuum fluctuations of the modes of the electromagnetic field that enter the antisymmetric port of the interferometer are a significant source of noise at all frequencies [ - ]. Quantum radiation pressure noise is caused by the laser light in the arm cavities beating with the vacuum fluctuations in the amplitude quadrature of these modes, producing
a fluctuating radiation pressure force acting on the test masses. Shot noise is caused by the beating of the laser with the vacuum fluctuations in the orthogonal phase quadrature, which carries the gravitational wave strain signal. It is possible to alter the correlations between the fluctuations in these two quadratures in order to modify the quantum radiation pressure and shot noises. This is a rich subject which we do not attempt to review here; see, for example, Refs. [ - ] and references therein. We summarize only those aspects strictly relevant to the Cosmic Explorer design outlined in Section III. Radiation pressure dominates at low frequencies with a strain power spectral density S(f) ∝ P_arm/(λM²f⁴). Shot noise dominates at higher frequencies; within the bandwidth of the instrument, the strain power spectral density of the shot noise is ∝ λ/P_arm, where P_arm is the power in the arm cavities, λ is the laser wavelength, and M is the mass of the test masses. The crossover occurs at a frequency ∝ (P_arm/M)^(1/2), which is about 10 Hz for Cosmic Explorer. The 1 and 2 µm technologies have arm powers of 1.5 and 3 MW, respectively, so both realizations of CE2 have the same level of quantum noise.
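The wavelength-power trade behind that last statement can be made explicit using only the proportionalities of this paragraph; the sketch below compares the two CE2 options in arbitrary units, so only the ratios between rows are meaningful.

```python
# Proportionalities from the text (arbitrary units; ratios only):
#   shot noise PSD          ~ lam / P_arm
#   radiation pressure PSD  ~ P_arm / (lam * M**2 * f**4)
def shot_psd(lam, P):              return lam / P
def pressure_psd(lam, P, M, f):    return P / (lam * M ** 2 * f ** 4)

M = 320.0                                       # kg test masses
configs = {"1 um, 1.5 MW": (1.0e-6, 1.5e6),     # the two CE2 options
           "2 um, 3.0 MW": (2.0e-6, 3.0e6)}
for name, (lam, P) in configs.items():
    print(name, shot_psd(lam, P), pressure_psd(lam, P, M, 10.0))
# Both rows are identical: doubling the wavelength while doubling the arm
# power leaves both quadratures unchanged, which is why the two CE2
# realizations share a single quantum noise level.
```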
Squeezed vacuum states [ , ] can be injected into the antisymmetric port to reduce the noise in one quadrature at the expense of increasing the noise in the orthogonal quadrature, a technique which is being used in Advanced LIGO [ ] and Advanced Virgo [ ]. This necessitates a tradeoff between reducing radiation pressure at low frequencies and shot noise at high frequencies. However, the frequency dependence necessary to achieve a broadband noise reduction can be realized by first reflecting the squeezed vacuum off of a detuned optical cavity, known as a filter cavity, before injection into the interferometer [ , , ]. The production of these frequency dependent squeezed vacuum states has been realized experimentally [ , ] and will be used in LIGO A+ and Advanced Virgo+.
Cosmic Explorer will employ a 4 km long filter cavity to achieve a broadband quantum noise reduction of 6 dB for CE1 and 10 dB for both realizations of CE2. The filter cavity is critical to achieving the low-frequency goals: without it, and with the same level of squeezing at mid to high frequencies, CE1 would be limited by radiation pressure noise down to 10 Hz and CE2 would be limited down to 5 Hz.
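For reference, the quoted squeezing levels translate into noise reductions as follows (plain dB arithmetic, not interferometer modeling):

```python
# Convert squeezing levels to power- and amplitude-spectral reductions.
for db in (6, 10):
    psd_factor = 10 ** (-db / 10)    # reduction of the noise PSD
    asd_factor = psd_factor ** 0.5   # reduction of the strain amplitude
    print(f"{db} dB squeezing: PSD x{psd_factor:.2f}, ASD x{asd_factor:.2f}")
# 6 dB -> amplitude x0.50; 10 dB -> amplitude x0.32
```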
F. Residual gas noise
The residual gas in the vacuum system is responsible for two noise sources. The first is a phase noise caused by fluctuations of the gas column density in the beam tubes. The contribution to this noise from a particular molecular species with partial pressure p_tube in the tube, mass m, and polarizability α is white up to a cutoff frequency ∝ v̄/w₀, determined by the time it takes for a molecule to cross the laser beam, with a power spectrum ∝ α² m^(1/2) p_tube / (λ^(1/2) L^(3/2) T_tube^(3/2)), where T_tube is the temperature of the tube, λ is the wavelength, w₀ is the laser beam's waist, L is the length of the arm, and v̄ is the thermal velocity of the molecule [ , ].
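As a sketch of how the species-dependent part of this scaling plays out, the snippet below compares two common residual gases; the polarizabilities and masses are textbook-level values, and the partial pressures are illustrative assumptions rather than Cosmic Explorer requirements:

```python
# Species-dependent part of the column-density phase noise,
# S ~ alpha^2 m^(1/2) p_tube / T_tube^(3/2), with the common geometric
# factors dropped. Pressures below are illustrative assumptions.
T_tube = 293.0  # K
species = {
    # name: (polarizability / 1e-30 m^3, mass / amu, partial pressure / Pa)
    "H2":  (0.8, 2.0, 1e-7),
    "H2O": (1.5, 18.0, 1e-8),
}
for name, (alpha, m, p) in species.items():
    S = alpha**2 * m**0.5 * p / T_tube**1.5
    print(name, f"{S:.2e}")  # relative contributions only
```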
The second is a force noise caused by the residual gas in the chambers exerting a damping force on the test masses. The contribution of one molecular species with partial pressure p_chamber in a chamber to this noise has a power spectrum S(f) ∝ p_chamber (m T_chamber)^(1/2) R² / (M² f⁴), where M and R are the mass and radius of the test mass, respectively, and T_chamber is the temperature of the chamber [ , ]. The magnitudes of these two noise sources determine the pressure requirements in the beam tubes and test mass chambers for each gas species described in Section IV C.
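A short numerical check of the quoted frequency dependence; all values below are placeholders chosen only to exercise the scaling:

```python
# Gas-damping strain noise shape: S(f) ~ p (m T)^(1/2) R^2 / (M^2 f^4).
p, m, T = 1e-7, 2.0, 293.0  # placeholder pressure / Pa, mass / amu, temp / K
R, M = 0.35, 320.0          # test mass radius / m and mass / kg (from text)

def S_gas(f):
    return p * (m * T) ** 0.5 * R**2 / (M**2 * f**4)

print(S_gas(10.0) / S_gas(20.0))  # 16.0: the PSD falls as f^-4
```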
G. Scattered light noise
Scattering of light within the beam tubes is a source of noise for all ground-based interferometric gravitational wave detectors, as first calculated by Thorne [ ]. Imperfections on the surface of the test masses lead to scattering of the main cavity mode, which can be broadly grouped into two classes:
• surface roughness, which are variations on the test mass surface responsible mostly for scattering at narrow angles; and
• point defects, which are "bright spots" on the mirror's surface that produce diffuse scattering and are therefore responsible mostly for scattering at wide angles.
These imperfections on the test masses cause light to scatter out of the cavity and reflect multiple times off the beam tube wall as it propagates down the tube, and eventually recombine with the main cavity mode at the opposite test mass. Seismic motion of the beam tube imposes a phase noise on the scattered light each time it reflects off the tube, and gives rise to readout noise when the light recombines. Scattering of this nature was first pointed out by Thorne as an important noise source for the LIGO beam tubes (see Section III.B. of [ ]). To address this, baffles were installed at various points along the LIGO beam tubes to deflect scattered light away from the test masses. However, the baffles give rise to back-scattering noise, whereby light that is scattered out of the cavity by one of the test masses is back-scattered off one of the baffles and subsequently recombines with the main cavity mode at the same test mass. Motion of the beam tube then imposes a phase noise on the back-scattered light, which gives rise to readout noise when the light recombines. A detailed explanation of this effect is given by Flanagan and Thorne [ ] and a detailed analysis of back-scattering specifically for Cosmic Explorer is given in a recent technical report [ ], which we summarize below. The effect of forward scattering, meaning the diffraction of the main beam whose time dependence arises primarily from seismically induced transverse motion of the baffles, is left for future work [ , ]. Additionally, the phase information of the baffle surfaces is not considered in this work, though simulations on Advanced LIGO indicate that the inclusion of this information can cause the scatter-induced strain noise power spectral density to fluctuate by an order of magnitude in either direction [ ].
The fractional power scattered per unit solid angle is quantified by the bidirectional reflectance distribution function (BRDF), and the power spectrum of the noise due to back-scattered light is proportional to the product of the BRDF of the back-scattering surface (i.e., the baffles and beam tube), a factor related to the square of the mirror BRDF, and the longitudinal displacement noise of the beam tube, which takes into account fringe wrapping as explained in [ ] and Section of [ ]. Here we use beam tube motion measured at the LIGO Livingston observatory, but the Cosmic Explorer baffles can be suspended to reduce their motion if necessary. Cosmic Explorer will likely use baffles with a black nickel coating with a BRDF of 10⁻³ sr⁻¹ [ ]; however, diamond-like carbon coatings with a BRDF of 10⁻⁴ sr⁻¹ can be used if necessary.
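Fringe wrapping arises because the phase imposed on the back-scattered field is 4πx/λ in the tube displacement x, so motion large compared to the wavelength is upconverted to higher frequencies. A minimal illustration with a single-frequency tube motion (the amplitude and frequency are arbitrary):

```python
import numpy as np

# Fringe wrapping: low-frequency tube motion x(t) >> lambda produces
# scatter noise at harmonics far above the drive frequency.
lam = 1.0e-6                               # optical wavelength / m
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
x = 5.0e-6 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz tube motion, 5 um amplitude

scatter = np.sin(4 * np.pi * x / lam)      # fringe-wrapped scatter signal
spectrum = np.abs(np.fft.rfft(scatter))    # power spread to ~tens of Hz
```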
Surface roughness at spatial frequency f gives rise to scattering at angle θ ∼ λf, where λ is the optical wavelength and θ is measured relative to the beam tube axis. The BRDF for this small-angle scattering due to surface roughness is proportional to the PSD of the mirror surface variations at spatial frequency f. The left panel of Fig. shows the noise due to surface scattering using the surface PSD shown in Fig. , assuming a 120 cm tube diameter and 100 cm baffle aperture diameter. This PSD, with functional form S(f) = (0.03 nm² mm)/(1 mm × f), is an upper limit requirement on the surface roughness over the range of spatial scales that scatter into the tube, based on Sec. of [ ], which results in noise due to surface scattering that is at least a factor of ten below the design sensitivity at all frequencies. This requirement is comparable to the surface roughness that has already been achieved with the Advanced LIGO test masses at spatial scales below a few centimeters. (Comparison is harder at larger spatial scales, where the Advanced LIGO surface roughness is not well characterized.) Point defects give rise to diffuse scattering, which has a roughly constant BRDF. The right panel of Fig. shows the scattered light noise due to point defects assuming a mirror BRDF of 10⁻⁴ sr⁻¹ and a 120 cm tube diameter. Diffuse scattering appears to be an insignificant noise source for all phases of Cosmic Explorer.
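A quick application of θ ∼ λf to the geometry quoted above; the mapping is the standard grating relation, and the numbers are taken from the text:

```python
# Smallest scattering angle that clears a 100 cm baffle aperture over a
# 40 km arm, and the mirror spatial scale responsible for it.
lam = 1.0e-6        # wavelength / m (1 um technology)
L = 40.0e3          # arm length / m
r_aperture = 0.5    # baffle aperture radius / m

theta = r_aperture / L        # ~1.3e-5 rad
f_spatial = theta / lam       # ~12.5 per meter
print(f"spatial scale ~ {1.0 / f_spatial:.2f} m")  # ~0.08 m
```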
The choice of beam tube diameter is of particular importance for the design of Cosmic Explorer. While wider tubes lead to less scattering noise, they are also substantially more expensive considering the cost of the vacuum envelope, the metal needed for the tube, and the market availability of various tube dimensions. Fig. shows that a 120 cm diameter tube is sufficient to keep the back-scattering noise below the noise requirements for both phases of the interferometer, provided that the requirements on mirror surface roughness are met. This limit on beam tube size will be reevaluated once the effects of forward scattering are considered.
H. Noise associated with controls
As a practical matter, the relative distances between the suspended optics, as well as their angular alignment, must be precisely servo controlled in order to keep the interferometer stable and operating in the linear regime. Noise from the sensors used to measure the linear and angular degrees of freedom is imposed on the optics by the control systems needed to suppress fluctuations in their relative positions and orientations.
In addition to the differential arm motion of the four test masses, there are three auxiliary length degrees of freedom of the other core optics, which are suspended from triple pendulum suspensions, that must be controlled. These degrees of freedom are limited by seismic noise below a few hertz and by sensing noise (of similar magnitude to that of Advanced LIGO) at higher frequencies. The auxiliary degree of freedom with the strongest coupling to the differential arm motion is the Michelson degree of freedom: differential motion between the beamsplitter and the input test masses also produces phase fluctuations at the antisymmetric port. The Michelson degree of freedom is suppressed by a factor of π/2F ≈ 3.5 × 10⁻³ relative to the differential arm motion, since the latter is enhanced by the Fabry-Pérot arm cavities of finesse F. The Michelson sensing noise is of order 10⁻¹⁶ m Hz⁻¹/² [ ], which gives an equivalent strain sensitivity of ∼6 × 10⁻²⁴ Hz⁻¹/². Simulations show that if this motion is sensed and subtracted from the differential arm motion, a control loop with a bandwidth of a few hertz is sufficient to suppress the Michelson noise to safely below the design sensitivities for both CE1 and CE2. Simulations also suggest that the other two auxiliary length degrees of freedom, fluctuations in the power recycling and signal extraction cavity lengths, do not couple significantly to the differential arm motion through the fundamental optical mode.
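An order-of-magnitude check of these numbers; the finesse value below is an assumption consistent with the 1.4% ITM transmissivity listed in Table II:

```python
import math

# Michelson-to-strain coupling: sensing noise x suppression / arm length.
finesse = 450                            # assumed from T_ITM ~ 1.4% (F ~ 2*pi/T)
suppression = math.pi / (2 * finesse)    # ~3.5e-3, as quoted
S_mich = 1e-16                           # Michelson sensing noise / (m Hz^-1/2)
L = 40e3                                 # arm length / m

h = S_mich * suppression / L
print(f"{h:.1e} Hz^-1/2")  # ~9e-24, same order as the quoted ~6e-24
```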
The noise from the angular control systems is one of the most challenging low frequency technical noise sources in current gravitational wave detectors, and it is expected to remain so for third generation detectors. Radiation pressure from the circulating arm power exerts a torque on the mirrors. This torque stiffens (or hardens) the torsional resonance when the cavity mirrors rotate with the same sign, and softens the resonance when the mirrors rotate with opposite sign [ - ]. The hard and soft resonances are shifted by Δf²_{h,s} ∝ κ_{h,s} P_arm L_arm/I, where I is the moment of inertia of the mirrors, κ_h > 0 is a geometric factor for the hard mode, and κ_s < 0 is a geometric factor for the soft mode. The soft mode will become unstable if the torque is large enough that the magnitude of the (negative) shift Δf²_s exceeds the free mechanical resonance f₀². In this case, the bandwidth of the angular control loop needs to be several times the frequency of this unstable mode in order to stabilize the optomechanical system. Achieving this requirement without injecting excess sensing noise is challenging.
It is thus clearly advantageous to prevent the soft mode from becoming unstable, in which case the control loop bandwidth needs to be only about three times the soft-mode frequency [ ]. One way to achieve this is to reduce the frequency shift Δf²_s. The arm power and length are set, and the geometric factor is constrained by the necessity of minimizing the beam spot sizes. However, the moment of inertia can be increased, perhaps by increasing the test mass thickness or altering the geometry in some other way. Another possibility is to increase the free torsional resonance frequency f₀. The soft mode frequency shifts Δf²_s are approximately −(0.6 Hz)² for the 1 µm technology and −(1.0 Hz)² for the 2 µm technology. The soft mode will thus be stable, requiring only a loop bandwidth of a few hertz, if f₀ ≳ 1 Hz.
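A one-line stability check of the soft mode using the quoted shifts; f₀ = 1 Hz is the free torsional frequency suggested above:

```python
# Shifted soft-mode frequency squared: f_s^2 = f0^2 + (negative shift).
f0 = 1.0  # free torsional resonance / Hz, as suggested in the text
for tech, shift in (("1 um", -(0.6**2)), ("2 um", -(1.0**2))):
    fs2 = f0**2 + shift
    print(tech, fs2, "stable" if fs2 > 0 else "marginal/unstable")
# 1 um: 0.64 (stable); 2 um: 0.0 (marginal), hence the f0 >~ 1 Hz condition
```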
Even though the frequency shift Δf²_h for the hard mode is always positive, so that the hard mode is always stable, the mode can still be excited and must be damped. Two factors make this requirement intrinsically easier for Cosmic Explorer than for Advanced LIGO. First, the typical amplitude of these excitations will be smaller due to the improved seismic isolation. Second, the geometric factor for the hard mode is, to first order, proportional to (w/w₀)⁴, where w₀ and w are the beam radii at the waist and at an optic, respectively [ ]. The ratio w/w₀ needs to be small for CE to reduce diffraction over the 40 km arms, while for Advanced LIGO it is made large to reduce coating thermal noise. This results in hard mode frequency shifts Δf²_h of approximately +(1.1 Hz)² for the 1 µm technology and +(2.1 Hz)² for the 2 µm technology.
We have only sketched the requirements for the control system and its noise performance here; while these preliminary considerations suggest that it will be possible to meet the low frequency requirements, a realistic understanding of the control noise is a significant source of uncertainty facing Cosmic Explorer and warrants a more detailed analysis.
VI. DISCUSSION AND CONCLUSION
In this work we have presented updated sensitivity curves for Cosmic Explorer and have also identified several areas of research and development that will be necessary to realize its low-frequency performance, including:
• the identification of a facility site with low seismic and acoustic noise, and other suitable environmental properties;
• the development of low-noise inertial isolators in multiple degrees of freedom;
• the continued development of mitigation techniques for Newtonian noise;

• the production of large, high-quality test mass substrates, both silica and silicon;
• the polishing and coating of large test mass substrates to a resulting spatial roughness comparable to that achieved for the Advanced LIGO test masses, but characterized at larger spatial scales;
• the development of suitable mirror coatings;
• the development of long multi-stage suspensions employing highly stressed silica and silicon blade springs and silica fibers and silicon ribbons to support 320 kg test masses;
• the development of alternatives to blade spring suspensions, such as geometric anti-springs;
• the validation and extension of the beam-tube scattering model presented here;
• the development of a robust angular control system with possible modifications to the suspensions and/or test masses to reduce the effects of radiation pressure instabilities;
• the development of vacuum technology and practice capable of achieving ultra-high vacuum in both the test mass chambers, which will be periodically vented, and the beam tubes;
• the measurement of material properties, such as mechanical loss angles, down to 5 Hz; and
• the development of laser frequency and intensity noise requirements, and of the optical topologies required to achieve them, which are not discussed here.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the support of the National Science Foundation through collaborative award numbers , , , and . EDH is supported by the MathWorks, Inc. JRS is partially supported by the Dan Black Family Trust. BK is supported by the Heising-Simons Foundation. This work has document numbers CE-P and LIGO-P .
Appendix A: Summary of Cosmic Explorer Technologies
One advantage of realizing Cosmic Explorer incrementally is that CE1 can achieve significantly higher sensitivity than the second-generation detectors mostly using the existing technology developed for LIGO A+. In addition to providing a relatively short route to increased sensitivity, this provides some risk management: significant improvements can still be made even if some advanced technologies are not realized. Nevertheless, the baseline CE1 design does rely on some technological advances beyond A+. We can also consider a more conservative detector, CE1−, which relies solely on A+ technology, with the improved sensitivity coming only from scaling up the arm length, test masses, and suspensions from the A+ design. In particular, CE1− would differ from CE1 by the following:

• No fused silica blade springs on the final suspension stage between the PUM and the test mass; the suspensions are just a scaled-up version of the A+ suspensions.
• No Newtonian Rayleigh wave suppression.
• The same level of suspension point motion as A+, which at 1 Hz is worse than that of CE1.
Fig. shows the low-frequency limit to the spectral sensitivity of CE1−. These changes only affect the low-frequency noise below about 20 Hz, leaving identical high-frequency sensitivities for CE1 and CE1−. CE1− can also be thought of as an initial detector to be implemented first while some of the above technologies are being developed for CE1 if necessary. A summary of the defining parameters of the different Cosmic Explorer detectors and technologies is given in Table IV and their sensitivities are compared in Fig. ; many of the other details common to all detectors using the same technology are given in Table II. All of the 1 µm detectors share the same basic properties: arm power, material, temperature, coatings, and beam spot sizes. The low-frequency sensitivity of CE1 is improved over CE1− by the addition of fused silica blade springs, which reduce the suspension thermal and seismic noises as described in Sections V A and V B; improved seismic isolation, as discussed in Section V B; and some suppression of Newtonian Rayleigh waves, as discussed in Section V C. The test mass thermal noises, most importantly coating Brownian noise, are the same for all detectors using the 1 µm technology since they use the same test mass substrates, coatings, beam sizes, and temperatures.
The high-frequency sensitivity of CE2 is nearly identical for the 1 µm and 2 µm technologies since it is determined by quantum shot noise. The 1 µm realization of CE2 has the same squeezing as the 2 µm realization: 10 dB, increased from 6 dB for CE1. Since the shot noise scales as ∝ λ/P_arm, the factor of two larger power stored in the arms of the 2 µm realization gives the same shot noise level as the 1 µm realization. All other technologies not dependent on test mass material or laser wavelength are the same for both realizations of CE2. In particular, the seismic isolation is improved over that of CE1 by a factor of ten at 1 Hz, Newtonian body waves are suppressed by a factor of three, and Newtonian Rayleigh waves are suppressed by an additional factor of five over that of CE1. Both realizations thus have the same Newtonian noise.
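The equality of the two shot-noise levels follows directly from the quoted scaling:

```python
# Shot noise ~ lambda / P_arm: 2 um at 3 MW versus 1 um at 1.5 MW.
ratio = (2e-6 / 3.0e6) / (1e-6 / 1.5e6)
print(ratio)  # 1.0 -> identical shot-noise-limited sensitivity
```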
To summarize, the low-frequency sensitivity is dominated by suspension thermal, seismic, and Newtonian noise. The low-frequency sensitivity of CE1 is improved over that of CE1− through improved suspensions, seismic isolation, and the addition of Newtonian noise suppression. The low-frequency sensitivity of CE2 is improved over that of CE1 through further improvements to the seismic isolation and Newtonian noise suppression, and through increased squeezing. Since the high-frequency sensitivity is determined by quantum shot noise, CE1 and CE1− have the same high-frequency sensitivity, as do both realizations of CE2.
Appendix B: Displacement and Force Sensitivity
Fig. compares the noise of Cosmic Explorer, Advanced LIGO, and Voyager in terms of gravitational wave strain and the equivalent test mass displacement and force noises. To achieve its design sensitivity above ∼20 Hz, Cosmic Explorer does not require as low displacement or force noise as does Voyager, owing to the longer arms and larger test masses. However, significant improvements in displacement and force noises are required to achieve the Cosmic Explorer strain sensitivity at lower frequencies.
FIG. Signal-to-noise ratio (SNR) accumulation of a . + . binary neutron star system at redshift z = 0.03, optimally oriented. The low-frequency cutoffs are the same as given in Fig. . Numerical early warning values for a threshold signal-to-noise ratio of are given in
FIG. Strain noise of Advanced LIGO, LIGO Voyager, the six-interferometer Einstein Telescope, and both stages of Cosmic Explorer.
FIG. Model for Cosmic Explorer ground motion, along with representative data from LIGO Hanford (LHO), LIGO Livingston (LLO), and multi-year data from selected seismic stations in the United States. The Peterson high- and low-noise seismic models are also shown [ ].
FIG. Left: schematic of the Advanced LIGO quadruple suspensions. Right: one design concept for the final two stages of a Cosmic Explorer silica suspension for a 70 cm diameter fused silica test mass. The components shown in blue are fused silica; in particular, the test masses, PUMs, and the fibers between the two are fused silica, as are the blade springs on the CE PUM. The components shown in black are maraging steel blade springs, and the components shown in silver are the other steel components on the LIGO suspensions. The silicon CE suspensions have silicon ribbons, silicon blade springs on the PUM, and an 80 cm diameter test mass. Note that only the final two stages of the CE suspensions are shown; the full suspension would be similar to LIGO's but would have 4 m total length rather than 1.65 m.

The left panel of Fig. shows a diagram of the LIGO suspensions. Suspension thermal noise is related to the mechanical response of the suspensions through the fluctuation-dissipation theorem [ - ] as S(f) ∝ Im χ(f)/f, where χ is the mechanical susceptibility.

Fig. shows the contributions of each stage to the total suspension thermal noise. The silica suspensions are dominated by the horizontal noise of the PUM and test mass above about 10 Hz, with contributions from the horizontal noise of the APM below. The silicon suspensions are dominated by the vertical noise of the APM below about 7 Hz, above which the horizontal noise of the PUM and test mass dominates.
FIG. Horizontal motion of the Cosmic Explorer suspension point, shown for both CE1 and CE2. CE1 assumes seismic isolation that is moderately improved compared to Advanced LIGO. CE2 assumes further improvements to the seismic isolation using novel inertial sensing technology [ ]. A simplified budget of the CE motion is also shown, along with the CE ground motion model (Fig. ).
FIG. Newtonian noise estimates for Cosmic Explorer. For CE1, the Rayleigh wave content is assumed to be suppressed by a factor of in amplitude below the ground motion shown in Fig. , either through offline subtraction or local mitigation (e.g., excavation as described in Section V C) in the immediate vicinity of the test mass. The P- and S-wave amplitudes are each assumed to be a factor of higher than the Peterson low-noise model [ ]. For CE2, the Rayleigh wave content is assumed to be suppressed by a factor of in amplitude, and the body wave content is suppressed by a factor of in amplitude. The infrasound amplitude is taken from the Bowman model [ ].
FIG. Seismic Newtonian-noise reduction amplitudes for P, S, and Rayleigh waves achieved by removing ground from underneath the test mass to make an 11 m wide and 4 m deep recess. This reduction estimate is computed using the Born approximation, which may affect the validity of the Rayleigh-wave reduction estimate above 15 Hz [ ]; the body-wave reduction estimate should not be significantly affected. The scatter in the curves is due to the finite number of waves simulated and the finite size of the numerical grid.
FIG. Left: amplitude spectral sensitivity of CE2 realized by the cryogenic silicon 2 µm technology compared with the estimated thermoelastic noise of the silicon test mass substrates for α = 4 × 10⁻⁸ K⁻¹. The requirement that thermoelastic noise be a factor of ten below the CE2 design curve is met when |α| ≤ 4 × 10⁻⁸ K⁻¹. Right: coefficient of thermal expansion of crystalline silicon versus temperature measured by Middelmann et al. (2015) [ ], zoomed to show the data points and error bars around the zero crossing near 123 K. The green region indicates the required |α| ≤ 4 × 10⁻⁸ K⁻¹, corresponding to a temperature accuracy of about ±2.3 K.
FIG. Requirement on surface roughness used to calculate the small-angle scattering shown in Fig. , along with the measured spectra from Advanced LIGO test masses. Due to Cosmic Explorer's large beam sizes, the relevant spatial scale (inverse spatial frequency) of the mirror roughness extends to several tens of centimeters.
FIG. Back-scattering noise for surface roughness (left) and point defects (right) for a 120 cm diameter beam tube with 100 cm diameter baffle apertures. The black dashed curve shows the facility requirement that the scattering noise be a factor of ten below the minimum of the three design noise curves shown for Cosmic Explorer. The BRDF for surface roughness scattering is proportional to the target PSD shown in Fig. , and the point-scattering BRDF is 10⁻⁴ sr⁻¹. The BRDF of the baffles and beam tube is 10⁻³ sr⁻¹. The peaks are due to beam tube resonances.
FIG. Strain sensitivities of the different Cosmic Explorer technologies and detectors.
FIG. Comparison of Cosmic Explorer strain, displacement, and force noises with those of Advanced LIGO and Voyager.
the response distance [ ] (the redshift out to which binary black hole systems can be detected) for Cosmic Explorer and other detectors. Computing the response distance for a threshold signal-to-noise ratio ρ₀ requires numerically solving

ρ₀² = 4 ∫_{f_low}^∞ df |h̃(f; z)|² / S_h(f)

for the corresponding threshold redshift z; here h̃(f; z) is the redshifted frequency-domain strain signal.
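A numerical sketch of this threshold condition is below; the waveform amplitude and noise curve are toy placeholders rather than Cosmic Explorer models:

```python
import numpy as np

# rho0^2 = 4 * integral_{f_low}^{inf} df |h(f)|^2 / S(f), on a toy model.
f = np.logspace(np.log10(5.0), 3.0, 2000)   # 5 Hz to 1 kHz
h = 1e-23 * (f / 100.0) ** (-7.0 / 6.0)     # inspiral-like |h(f)| (toy)
S = (1e-24) ** 2 * (1.0 + (10.0 / f) ** 8)  # toy one-sided noise PSD

integrand = np.abs(h) ** 2 / S
rho2 = 4.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))
print(f"SNR ~ {np.sqrt(rho2):.0f}")
```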
FIG. Simplified Cosmic Explorer interferometer topology, consisting of a dual-recycled Fabry-Pérot Michelson interferometer in addition to a squeezer and filter cavity used to achieve broadband quantum noise reduction. Labeled elements include the 40 km Fabry-Pérot arm cavities, 320 kg input and end test masses, beamsplitter, power-recycling and signal extraction mirrors, pre-stabilized laser, squeezer, 4 km filter cavity, and readout.
Table I summarizes the detection prospects for high-redshift, intermediate-mass black hole mergers.
Similar topics are being considered for the underground Einstein Telescope facility [ ].
FIG. Noise budgets of CE2 in its 2 µm and 1 µm realizations: total strain noise together with the quantum vacuum, seismic, Newtonian, suspension thermal, coating thermal, substrate thermal, residual gas, and scattered light contributions, plotted as strain noise (Hz⁻¹/²) versus frequency (Hz).
FIG. Contribution of each stage of the test mass quadruple suspension (horizontal and vertical noise of the top mass, APM, PUM, and test mass) to the total suspension thermal noise for the 1 µm technology (left), which would be common to CE1 and CE2, and for the 2 µm technology (right).
FIG. Thermal noise levels, and individual noise contributions to them (substrate Brownian, substrate thermoelastic, ITM thermorefractive, coating Brownian, and coating thermo-optic), of the test mass substrates and coatings for the 1 µm technology (left), which would be common to CE1 and CE2, and for the 2 µm technology (right).
TABLE II. Requirements for the substrate, coating, and optical properties of the Cosmic Explorer test masses. The high-index coating material

Quantity                    | 1 µm Technology     | 2 µm Technology     | Remarks
Substrate
  Material                  | Fused silica        | Crystalline silicon |
  Temperature               | 293 K               | 123 K               | To within ±2.3 K for CE2
  Diameter                  | 70 cm               | 80 cm               |
  Thickness                 | 38 cm               | 27 cm               |
  Mass                      | 320 kg              | 320 kg              |
  Thermal expansion coeff.  | 0.39 ppm K⁻¹        | 0.04 ppm K⁻¹        | See remark on
  Refractive index          | 1.45                | 3.5                 |
  Thermorefractive coeff.   | 9.6 ppm K⁻¹         | 100 ppm K⁻¹         |
  Thermal conductivity      | 1.38 W m⁻¹ K⁻¹      | 700 W m⁻¹ K⁻¹       |
Coating
  Materials                 | SiO₂ / TBD          | SiO₂ / aSi          | Low index / high index
  Refractive indices        | . / .               | . / .               |
  Loss angles               | 2.3×10⁻⁵ / 7×10⁻⁵   | 1×10⁻⁴ / 3×10⁻⁵     |
  ITM coating layers        | 16                  | 11                  |
  ETM coating layers        | 38                  | 15                  |
Optical
  Vacuum wavelength         | 1 µm                | 2 µm                |
  ITM spot size w_i         | 12 cm               | 16 cm               | 1/e² intensity radius
  ETM spot size w_e         | 12 cm               | 16 cm               |
  ITM transmissivity T_i    | 1.4 %               | 1.4 %               |
  ETM transmissivity T_e    | 5 ppm               | 5 ppm               |
TABLE III. Summary of required research and development activities. The final columns in the table indicate whether the activity involves primarily the facility, the initial Cosmic Explorer detector (CE1), or the advanced Cosmic Explorer detector (CE2); for the advanced detector, activities are presented for both the scenario in which the detector is room-temperature glass technology with 1 µm lasers and the scenario in which it is cryogenic silicon technology with 2 µm lasers.

Activity | Theme
  Partial pressures of gases (IV C) | Vacuum
  Ambient seismic field characterization, incl. surface and body wave content (IV A) | Seismic arrays
  Ambient infrasound field characterization, distinguished from wind-induced sensor noise (IV B) | Infrasonic arrays
  Reduction of seismic field near test masses (V C) | Seismic metamaterials
  Reduction of magnetic field coupling | Other environmental
  1 pm Hz⁻¹/² horiz. susp. point motion at 1 Hz (V B) | Inertial sensing
  0.1 pm Hz⁻¹/² horiz. susp. point motion at 1 Hz (V B) | Inertial sensing
  Subtraction of surface-wave NN (V C) | Seismic arrays
  Subtraction of body-wave NN (V C) | Seismic arrays
  Best effort at mitigation of infrasonic NN | Infrasonic arrays
  1.5 MW 1 µm arm power and 6 dB FD squeezing (silica) | QN, scatter
  1.5 MW 1 µm arm power and 10 dB FD squeezing (silica) | QN, scatter
  3.0 MW 2 µm arm power and 10 dB FD squeezing (silicon) | QN, scatter
  Silica test mass, 70 cm; low impurity | Silica materials science
  Highly stressed silica blade springs (V A) | Silica materials science
  Validation of silica loss mechanisms at 5 Hz | Silica materials science
  Silicon test mass, 80 cm; low impurity | Silicon materials science
  Highly stressed silicon blade springs and ribbons (V A) | Silicon materials science
  Validation of silicon loss mechanisms at 5 Hz | Silicon materials science
  A+ coatings over 70 cm (V D) | Thin-film mirror coatings
  "Voyager" coatings over 80 cm (V D) | Thin-film mirror coatings
  Radiative temperature control to ±2 K (V D) | Cryogenics
  Test mass surface polishing of large substrates (V G) | Mirror metrology
  Control noise | Optical sensing and control
Table III summarizes the research required to reach the low frequency sensitivity presented here, along with a rough timeline of when that research would need to be completed. We have also shown that both the 1 µm and 2 µm technologies can realize nearly identical low frequency sensitivities for CE2. While this is true for high frequencies as well, achieving the specified quantum and thermal noise performance for both technologies requires further research and development not discussed in this paper. Additionally, if the arm length of Cosmic Explorer were significantly shortened, the relative importance of the various noise sources may change, since they scale differently with arm length [ ].

FIG. Estimated low-frequency spectral sensitivity limit (solid black) of CE1− and the known noise sources that cause these limits (colored curves): quantum vacuum, seismic, Newtonian, suspension thermal, coating thermal, substrate thermal, residual gas, and scattered light. The sensitivity limit for Cosmic Explorer from previous work [ ] is also shown (dotted black curve).
TABLE IV. Defining parameters of the different Cosmic Explorer technologies and detectors. See Table II for more details common to all detectors using the same technology.
. J Aasi, LIGO Scientific CollaborationB P Abbott, LIGO Scientific CollaborationR Abbott, LIGO Scientific CollaborationT Abbott, LIGO Scientific CollaborationM R Abernathy, LIGO Scientific CollaborationK Ackley, LIGO Scientific CollaborationC Adams, LIGO Scientific CollaborationT Adams, LIGO Scientific CollaborationP Addesso, LIGO Scientific Collaboration10.1088/0264-9381/32/7/074001arXiv: .Classical and Quantum Gravity. gr-qcLIGO Scientific Collaboration, J. Aasi, B. P. Abbott, R. Abbott, T. Abbott, M. R. Abernathy, K. Ackley, C. Adams, T. Adams, P. Addesso, and et al., Classical and Quantum Gravity , ( ), arXiv: . [gr-qc].
. F Acernese, M Agathos, K Agatsuma, D Aisa, N Allemandou, A Allocca, J Amarni, P Astone, G Balestri, G Ballardin, 10.1088/0264-9381/32/2/024001arXiv: .Classical and Quantum Gravity. gr-qcF. Acernese, M. Agathos, K. Agatsuma, D. Aisa, N. Allemandou, A. Allocca, J. Amarni, P. Astone, G. Balestri, G. Ballardin, and et al., Classical and Quantum Gravity , ( ), arXiv: . [gr-qc].
. T Akutsu, Kagra CollaborationM Ando, Kagra CollaborationK Arai, Kagra CollaborationY Arai, Kagra CollaborationS Araki, Kagra CollaborationA Araya, Kagra CollaborationN Aritomi, Kagra CollaborationH Asada, Kagra CollaborationY Aso, Kagra CollaborationS Atsuta, Kagra CollaborationK Awai, Kagra CollaborationS Bae, Kagra CollaborationL Baiotti, Kagra CollaborationM A Barton, Kagra CollaborationK Cannon, Kagra CollaborationE Capocasa, Kagra CollaborationC S Chen, Kagra CollaborationT W Chiu, Kagra CollaborationK Cho, Kagra CollaborationY K Chu, Kagra CollaborationK Craig, Kagra CollaborationW Creus, Kagra CollaborationK Doi, Kagra CollaborationK Eda, Kagra CollaborationY Enomoto, Kagra CollaborationR Flaminio, Kagra CollaborationY Fujii, Kagra CollaborationM K Fujimoto, Kagra CollaborationM Fukunaga, Kagra CollaborationM Fukushima, Kagra CollaborationT Furuhata, Kagra CollaborationS Haino, Kagra CollaborationK Hasegawa, Kagra CollaborationK Hashino, Kagra CollaborationK Hayama, Kagra CollaborationS Hirobayashi, Kagra CollaborationE Hirose, Kagra CollaborationB H Hsieh, Kagra CollaborationC Z Huang, Kagra CollaborationB Ikenoue, Kagra CollaborationY Inoue, Kagra CollaborationK Ioka, Kagra CollaborationY Itoh, Kagra CollaborationK Izumi, Kagra CollaborationT Kaji, Kagra CollaborationT Kajita, Kagra CollaborationM Kakizaki, Kagra CollaborationM Kamiizumi, Kagra CollaborationS Kanbara, Kagra CollaborationN Kanda, Kagra CollaborationS Kanemura, Kagra CollaborationM Kaneyama, Kagra CollaborationG Kang, Kagra CollaborationJ Kasuya, Kagra CollaborationY Kataoka, Kagra CollaborationN Kawai, Kagra CollaborationS Kawamura, Kagra CollaborationT Kawasaki, Kagra CollaborationC Kim, Kagra CollaborationJ. Kim, J. C. Kim, W. SKagra Collaboration, T. Akutsu, M. Ando, K. Arai, Y. Arai, S. Araki, A. Araya, N. Aritomi, H. Asada, Y. Aso, S. At- suta, K. Awai, S. Bae, L. Baiotti, M. A. Barton, K. Cannon, E. Capocasa, C. S. Chen, T. W. Chiu, K. Cho, Y. K. Chu, K. Craig, W. Creus, K. Doi, K. Eda, Y. Enomoto, R. Flaminio, Y. Fujii, M. K. Fujimoto, M. Fukunaga, M. Fukushima, T. Fu- ruhata, S. Haino, K. Hasegawa, K. Hashino, K. Hayama, S. Hi- robayashi, E. Hirose, B. H. Hsieh, C. Z. Huang, B. Ikenoue, Y. Inoue, K. Ioka, Y. Itoh, K. Izumi, T. Kaji, T. Kajita, M. Kak- izaki, M. Kamiizumi, S. Kanbara, N. Kanda, S. Kanemura, M. Kaneyama, G. Kang, J. Kasuya, Y. Kataoka, N. Kawai, S. Kawamura, T. Kawasaki, C. Kim, J. Kim, J. C. Kim, W. S.
. Y M Kim, N Kim, T Kimura, S Kinugawa, Y Kirii, H Kitaoka, Y Kitazawa, K Kojima, K Kokeyama, A K H Komori, K Kong, R Kotake, R Kozu, H S Kumar, S Kuo, H K Kuroyanagi, H M Lee, H W Lee, M Lee, C Y Leonardi, F L Lin, G C Lin, Y Liu, E Liu, S Majorana, M Mano, T Marchio, F Matsui, Y Matsushima, N Michimura, O Mio, A Miyakawa, T Miyamoto, K Miyamoto, S Miyo, W Miyoki, S Morii, Y Morisaki, T Moriwaki, M Morozumi, K Musha, S Nagano, K Nagano, T Nakamura, H Nakamura, M Nakano, K Nakano, T Nakao, L Narikawa, L Naticchioni, W T Nguyen Quynh, A Ni, Y Nishizawa, T Obuchi, J J Ochi, S H Oh, M Oh, N Ohashi, M Ohishi, K Ohkawa, K Okutomi, K Ono, C P Oohara, S S Ooi, J Pan, F E Park, I Arellano, N Pinto, M Sago, S Saijo, Y Saitou, K Saito, Y Sakai, Y Sakai, M Sakai, M Sasai, Y Sasaki, S Sasaki, N Sato, T Sato, Y Sato, N Sekiguchi, M Seto, T Shibata, H Shimoda, T Shinkai, A Shishido, K Shoda, E J Somiya, ; F Son, K Travasso, S Tsubono, N Tsuchida, T Uchikata, T Uchiyama, S Uehara, K Ueki, F Ueno, T Uraguchi, M H P M Ushiba, H Van Putten, S Vocca, T Wada, Y Wakamatsu, W R Watanabe, T Xu, A Yamada, K Yamamoto, K Yamamoto, S Yamamoto, T Yamamoto, K Yamamoto, Yokogawa, 10.1038/s41550-018-0658-yarXiv: .Nature Astronomy. J. Yokoyama, T. Yokozawa, T. H. Yoon, T. Yoshioka, H. Yuzurihara, S. Zeidler, and Z. H. Zhugr-qcKim, Y. M. Kim, N. Kimura, T. Kinugawa, S. Kirii, Y. Kitaoka, H. Kitazawa, Y. Kojima, K. Kokeyama, K. Komori, A. K. H. Kong, K. Kotake, R. Kozu, R. Kumar, H. S. Kuo, S. Kuroyanagi, H. K. Lee, H. M. Lee, H. W. Lee, M. Leonardi, C. Y. Lin, F. L. Lin, G. C. Liu, Y. Liu, E. Majorana, S. Mano, M. Marchio, T. Matsui, F. Matsushima, Y. Michimura, N. Mio, O. Miyakawa, A. Miyamoto, T. Miyamoto, K. Miyo, S. Miyoki, W. Morii, S. Morisaki, Y. Moriwaki, T. Morozumi, M. Musha, K. Nagano, S. Nagano, K. Nakamura, T. Nakamura, H. Nakano, M. Nakano, K. Nakao, T. Narikawa, L. Naticchioni, L. Nguyen Quynh, W. T. Ni, A. Nishizawa, Y. Obuchi, T. Ochi, J. J. Oh, S. H. Oh, M. Ohashi, N. Ohishi, M. Ohkawa, K. Okutomi, K. Ono, K. Oohara, C. P. Ooi, S. S. Pan, J. Park, F. E. Peña Arellano, I. Pinto, N. Sago, M. Saijo, S. Saitou, Y. Saito, K. Sakai, Y. Sakai, Y. Sakai, M. Sasai, M. Sasaki, Y. Sasaki, S. Sato, N. Sato, T. Sato, Y. Sekiguchi, N. Seto, M. Shibata, T. Shimoda, H. Shinkai, T. Shishido, A. Shoda, K. Somiya, E. J. Son, A. Sue- masa, T. Suzuki, T. Suzuki, H. Tagoshi, H. Tahara, H. Takahashi, R. Takahashi, A. Takamori, H. Takeda, H. Tanaka, K. Tanaka, T. Tanaka, S. Tanioka, E. N. Tapia San Martin, D. Tatsumi, T. Tomaru, T. Tomura, F. Travasso, K. Tsubono, S. Tsuchida, N. Uchikata, T. Uchiyama, T. Uehara, S. Ueki, K. Ueno, F. Uraguchi, T. Ushiba, M. H. P. M. van Putten, H. Vocca, S. Wada, T. Wakamatsu, Y. Watanabe, W. R. Xu, T. Yamada, A. Yamamoto, K. Yamamoto, K. Yamamoto, S. Yamamoto, T. Yamamoto, K. Yokogawa, J. Yokoyama, T. Yokozawa, T. H. Yoon, T. Yoshioka, H. Yuzurihara, S. Zeidler, and Z. H. Zhu, Nature Astronomy , ( ), arXiv: . [gr-qc].
. B P Abbott, R Abbott, T D Abbott, M R Abernathy, F Acernese, K Ackley, C Adams, T Adams, P Addesso, R X Adhikari, 10.1103/PhysRevLett.116.061102Phys. Rev. Lett. B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, and et al., Phys. Rev. Lett.
. J Miller, L Barsotti, S Vitale, P Fritschel, M Evans, D Sigg, 10.1103/PhysRevD.91.062005arXiv: .Phys. Rev. D. gr-qcJ. Miller, L. Barsotti, S. Vitale, P. Fritschel, M. Evans, and D. Sigg, Phys. Rev. D , ( ), arXiv: . [gr-qc].
. R X Adhikari, K Arai, A F Brooks, C Wipf, O Aguiar, P Altin, B Barr, L Barsotti, R Bassiri, A Bell, G Billingsley, R Birney, D Blair, E Bonilla, J Briggs, D D Brown, R Byer, H Cao, M Constancio, S Cooper, T Corbitt, D Coyne, A Cumming, E Daw, R Derosa, G Eddolls, J Eichholz, M Evans, M Fejer, E C Ferreira, A Freise, V V Frolov, S Gras, A Green, H Grote, E Gustafson, E D Hall, G Hammond, J Harms, G Harry, K Haughian, D Heinert, M Heintze, F Hellman, J Hennig, M Hennig, S Hild, J Hough, W Johnson, B Kamai, D Kapasi, K Komori, D Koptsov, M Korobko, W Z Korth, K Kuns, B Lantz, S Leavey, F Magana-Sandoval, G Mansell, A Markosyan, A Markowitz, I Martin, R Martin, D Martynov, D E Mcclelland, G Mcghee, T Mcrae, J Mills, V Mitrofanov, M Molina-Ruiz, C Mow-Lowry, J Munch, P Murray, S Ng, M A Okada, D J Ottaway, L Prokhorov, V Quetschke, S Reid, D Reitze, J Richardson, R Robie, I Romero-Shaw, R Route, S Rowan, R Schnabel, M Schneewind, F Seifert, D Shaddock, B Shapiro, D Shoemaker, A S Silva, B Slagmolen, J Smith, N Smith, J Steinlechner, K Strain, D Taira, S Tait, D Tanner, Z Tornasi, C Torrie, M Van Veggel, J Vanheijningen, P Veitch, A Wade, G Wallace, R Ward, R Weiss, P Wessels, B Willke, H Yamamoto, M J Yap, C Zhao, 10.1088/1361-6382/ab9143arXiv: . [astro-ph.IMClassical and Quantum Gravity. R. X. Adhikari, K. Arai, A. F. Brooks, C. Wipf, O. Aguiar, P. Al- tin, B. Barr, L. Barsotti, R. Bassiri, A. Bell, G. Billingsley, R. Bir- ney, D. Blair, E. Bonilla, J. Briggs, D. D. Brown, R. Byer, H. Cao, M. Constancio, S. Cooper, T. Corbitt, D. Coyne, A. Cumming, E. Daw, R. deRosa, G. Eddolls, J. Eichholz, M. Evans, M. Fejer, E. C. Ferreira, A. Freise, V. V. Frolov, S. Gras, A. Green, H. Grote, E. Gustafson, E. D. Hall, G. Hammond, J. Harms, G. Harry, K. Haughian, D. Heinert, M. Heintze, F. Hellman, J. Hennig, M. Hennig, S. Hild, J. Hough, W. Johnson, B. Kamai, D. Kapasi, K. Komori, D. Koptsov, M. Korobko, W. Z. Korth, K. Kuns, B. Lantz, S. Leavey, F. Magana-Sandoval, G. Mansell, A. Markosyan, A. Markowitz, I. Martin, R. Martin, D. Martynov, D. E. McClelland, G. McGhee, T. McRae, J. Mills, V. Mitro- fanov, M. Molina-Ruiz, C. Mow-Lowry, J. Munch, P. Murray, S. Ng, M. A. Okada, D. J. Ottaway, L. Prokhorov, V. Quetschke, S. Reid, D. Reitze, J. Richardson, R. Robie, I. Romero-Shaw, R. Route, S. Rowan, R. Schnabel, M. Schneewind, F. Seifert, D. Shaddock, B. Shapiro, D. Shoemaker, A. S. Silva, B. Slag- molen, J. Smith, N. Smith, J. Steinlechner, K. Strain, D. Taira, S. Tait, D. Tanner, Z. Tornasi, C. Torrie, M. Van Veggel, J. Vanheijningen, P. Veitch, A. Wade, G. Wallace, R. Ward, R. Weiss, P. Wessels, B. Willke, H. Yamamoto, M. J. Yap, and C. Zhao, Classical and Quantum Gravity , ( ), arXiv: . [astro-ph.IM].
. K Ackley, V B Adya, P Agrawal, P Altin, G Ashton, M Bailes, E Baltinas, A Barbuio, D Beniwal, C Blair, D Blair, G N Bolingbroke, V Bossilkov, S Boublil, D D Brown, B J Burridge, J Calderon, J Bustillo, H Cameron, J B Cao, S Carlin, P Chang, C Charlton, D Chatterjee, X Chattopadhyay, J Chen, J Chi, Q Chow, A Chu, T Ciobanu, P Clarke, J Clearwater, D Cooke, H Coward, R J Crisp, A T Dattatri, D A Deller, L Dobie, P J Dunn, J Easter, R Eichholz, C Evans, G Flynn, P Foran, Y Forsyth, S Gai, D K Galaudage, B Galloway, B Gendre, S Goncharov, D Goode, B Gozzard, A W Grace, A Graham, F Heger, Hernandez, R Vivanco, N A Hirai, Z J Holland, E Holmes, E Howard, G Howell, M T Howitt, J Hübner, C Hurley, V Ingram, K Hamedan, L Jenner, D P Ju, T Kapasi, N Kaur, M Kijbunchoo, R Kovalam, P D Kumar Choudhary, M Y M Lasky, J Lau, J Leung, K Liu, A Loh, I Mailvagan, J J Mandel, D E Mc-Cann, K Mcclelland, D Mckenzie, T Mcmanus, A Mcrae, P Melatos, H Meyers, M T Middleton, M Miles, Y Millhouse, B Mong, J Mueller, J Munch, S Musiov, R S Muusse, Y Nathan, C Naveh, B Neijssel, S W S Neil, V Ng, D J Oloworaran, M Ottaway, J Page, M Pan, E Pathak, J Payne, J Powell, E Pritchard, A Puckridge, V Raidani, D Rallabhandi, J A Reardon, L Riley, I M Roberts, T J Romero-Shaw, G Roocke, N Rowell, N Sahu, L Sarin, H Sarre, M Sattari, S M Schiworski, R Scott, D Sengar, R Shaddock, J Shannon, P Shi, B J J Sibley, T Slagmolen, R J E Slaven-Blair, J Smith, L Spollard, L Steed, H Strang, A Sun, S Sunderland, C Suvorova, E Talbot, D Thrane, P Töyrä, A Trahanas, J V Vajpeyi, A F Van Heijningen, P J Vargas, A Veitch, A Vigna-Gomez, K Wade, Z Walker, R L Wang, K Ward, S Ward, L Webb, K Wen, R Wette, J Wilcox, Winterflood, 10.1017/pasa.2020.39arXiv: . [astro-ph.HEJet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, and X. Zhu, PASA , e (K. Ackley, V. B. Adya, P. Agrawal, P. Altin, G. Ashton, M. Bailes, E. Baltinas, A. Barbuio, D. Beniwal, C. Blair, D. Blair, G. N. Bolingbroke, V. Bossilkov, S. Shachar Boublil, D. D. Brown, B. J. Burridge, J. Calderon Bustillo, J. Cameron, H. Tuong Cao, J. B. Carlin, S. Chang, P. Charlton, C. Chatterjee, D. Chattopad- hyay, X. Chen, J. Chi, J. Chow, Q. Chu, A. Ciobanu, T. Clarke, P. Clearwater, J. Cooke, D. Coward, H. Crisp, R. J. Dattatri, A. T. Deller, D. A. Dobie, L. Dunn, P. J. Easter, J. Eichholz, R. Evans, C. Flynn, G. Foran, P. Forsyth, Y. Gai, S. Galaudage, D. K. Galloway, B. Gendre, B. Goncharov, S. Goode, D. Gozzard, B. Grace, A. W. Graham, A. Heger, F. Hernandez Vivanco, R. Hirai, N. A. Holland, Z. J. Holmes, E. Howard, E. Howell, G. Howitt, M. T. Hübner, J. Hurley, C. Ingram, V. Jaberian Hamedan, K. Jenner, L. Ju, D. P. Kapasi, T. Kaur, N. Kijbun- choo, M. Kovalam, R. Kumar Choudhary, P. D. Lasky, M. Y. M. Lau, J. Leung, J. Liu, K. Loh, A. Mailvagan, I. Mandel, J. J. Mc- Cann, D. E. McClelland, K. McKenzie, D. McManus, T. McRae, A. Melatos, P. Meyers, H. Middleton, M. T. Miles, M. Millhouse, Y. Lun Mong, B. Mueller, J. Munch, J. Musiov, S. Muusse, R. S. Nathan, Y. Naveh, C. Neijssel, B. Neil, S. W. S. Ng, V. Oloworaran, D. J. Ottaway, M. Page, J. Pan, M. Pathak, E. Payne, J. Powell, J. Pritchard, E. Puckridge, A. Raidani, V. Rallabhandi, D. Reardon, J. A. Riley, L. Roberts, I. M. Romero-Shaw, T. J. Roocke, G. Rowell, N. Sahu, N. Sarin, L. Sarre, H. Sattari, M. Schiworski, S. M. Scott, R. Sengar, D. Shaddock, R. Shannon, J. SHI, P. Sibley, B. J. J. Slagmolen, T. Slaven-Blair, R. J. E. Smith, J. Spollard, L. Steed, L. Strang, H. Sun, A. Sunderland, S. Suvorova, C. Talbot, E. Thrane, D. 
Töyrä, P. Trahanas, A. Vajpeyi, J. V. van Heijningen, A. F. Vargas, P. J. Veitch, A. Vigna-Gomez, A. Wade, K. Walker, Z. Wang, R. L. Ward, K. Ward, S. Webb, L. Wen, K. Wette, R. Wilcox, J. Winterflood, C. Wolf, B. Wu, M. Jet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, and X. Zhu, PASA , e ( ), arXiv: . [astro-ph.HE].
Science Case Team Consortium, The Next-Generation Global Gravitational-Wave Observatory: New Astrophysics with the Farthest, Oldest, and Most Violent Events in the Universe. Gwic The, Tech. Rep. GWICThe GWIC G Science Case Team Consortium, The Next- Generation Global Gravitational-Wave Observatory: New As- trophysics with the Farthest, Oldest, and Most Violent Events in the Universe, Tech. Rep. (GWIC, ).
Research and Development for the Next Generation of Groundbased Gravitational-wave Detectors. Gwic-G Gwic, G Gwic-G-R&d-Consortium, R&d, Tech. Rep. GWICGWIC, GWIC-G, GWIC-G-R&D-Consortium, G R&D: Research and Development for the Next Generation of Ground- based Gravitational-wave Detectors, Tech. Rep. (GWIC, ).
. S Hild, M Abernathy, F Acernese, P Amaro-Seoane, N Andersson, K Arun, F Barone, B Barr, M Barsuglia, M Beker, N Beveridge, S Birindelli, S Bose, L Bosi, S Braccini, C Bradaschia, T Bulik, E Calloni, G Cella, E Mottin, S Chelkowski, A Chincarini, J Clark, E Coccia, C Colacino, J Colas, A Cumming, L Cunningham, E Cuoco, S Danilishin, K Danzmann, R De, T Salvo, R Dent, L Di Rosa, A Di Fiore, M Virgilio, V Doets, P Fafone, R Falferi, J Flaminio, F Franc, A Frasconi, D Freise, P Friedrich, J Fulda, G Gair, E Gemme, A Genin, A Gennai, K Giazotto, C Glampedakis, M Gräf, H Granata, G Grote, A Guidi, G Gurkovsky, M Hammond, J Hannam, D Harms, M Heinert, I Hendry, E Heng, J Hennes, S Hough, S Husa, G Huttner, F Jones, K Khalili, K Kokeyama, B Kokkotas, T G F Krishnan, M Li, H Lorenzini, E Lück, I Majorana, V Mandel, M Mandic, I Mantovani, C Martin, Y Michel, N Minenkov, S Morgado, B Mosca, H Mours, P Müller-Ebhardt, R Murray, J Nawrodt, R Nelson, C D Oshaughnessy, C Ott, A Palomba, G Paoli, A Parguez, R Pasqualetti, D Passaquieti, L Passuello, W Pinard, R Plastino, P Poggiani, M Popolizio, M Prato, P Punturo, D Puppo, P Rabeling, J Rapagnani, T Read, H Regimbau, S Rehbein, F Reid, F Ricci, A Richard, S Rocchi, A Rowan, L Rüdiger, B Santamaría, B Sassolas, R Sathyaprakash, C Schnabel, P Schwarz, A Seidel, K Sintes, F Somiya, K Speirits, S Strain, P Strigin, S Sutton, A Tarabrin, J Thüring, M Van Den Brand, C Van Veggel, A Van Den Broeck, J Vecchio, F Veitch, A Vetrano, S Vicere, B Vyatchanin, G Willke, K Woan, Yamamoto, 10.1088/1361-6382/abd594arXiv: . [astro-ph.COClassical and Quantum Gravity. Y. Chen, D. E. Holz, J. Miller, M. Evans, S. Vitale, and J. CreightonClassical and Quantum GravityS. Hild, M. Abernathy, F. Acernese, P. Amaro-Seoane, N. An- dersson, K. Arun, F. Barone, B. Barr, M. Barsuglia, M. Beker, N. Beveridge, S. Birindelli, S. Bose, L. Bosi, S. Braccini, C. Bradaschia, T. Bulik, E. Calloni, G. Cella, E. Chassande Mottin, S. Chelkowski, A. Chincarini, J. Clark, E. Coccia, C. Colacino, J. Colas, A. Cumming, L. Cunningham, E. Cuoco, S. Danilishin, K. Danzmann, R. De Salvo, T. Dent, R. De Rosa, L. Di Fiore, A. Di Virgilio, M. Doets, V. Fafone, P. Falferi, R. Flaminio, J. Franc, F. Frasconi, A. Freise, D. Friedrich, P. Fulda, J. Gair, G. Gemme, E. Genin, A. Gennai, A. Giazotto, K. Glampedakis, C. Gräf, M. Granata, H. Grote, G. Guidi, A. Gurkovsky, G. Hammond, M. Hannam, J. Harms, D. Heinert, M. Hendry, I. Heng, E. Hennes, J. Hough, S. Husa, S. Huttner, G. Jones, F. Khalili, K. Kokeyama, K. Kokkotas, B. Krishnan, T. G. F. Li, M. Lorenzini, H. Lück, E. Majorana, I. Man- del, V. Mandic, M. Mantovani, I. Martin, C. Michel, Y. Mi- nenkov, N. Morgado, S. Mosca, B. Mours, H. Müller-Ebhardt, P. Murray, R. Nawrodt, J. Nelson, R. Oshaughnessy, C. D. Ott, C. Palomba, A. Paoli, G. Parguez, A. Pasqualetti, R. Passaquieti, D. Passuello, L. Pinard, W. Plastino, R. Poggiani, P. Popolizio, M. Prato, M. Punturo, P. Puppo, D. Rabeling, P. Rapagnani, J. Read, T. Regimbau, H. Rehbein, S. Reid, F. Ricci, F. Richard, A. Rocchi, S. Rowan, A. Rüdiger, L. Santamaría, B. Sasso- las, B. Sathyaprakash, R. Schnabel, C. Schwarz, P. Seidel, A. Sintes, K. Somiya, F. Speirits, K. Strain, S. Strigin, P. Sutton, S. Tarabrin, A. Thüring, J. van den Brand, M. van Veggel, C. van den Broeck, A. Vecchio, J. Veitch, F. Vetrano, A. Vicere, S. Vy- atchanin, B. Willke, G. Woan, and K. Yamamoto, Classical and Quantum Gravity , [ ] H.-Y. Chen, D. E. Holz, J. Miller, M. Evans, S. Vitale, and J. 
Creighton, Classical and Quantum Gravity , ( ), arXiv: . [astro-ph.CO].
. D Marković, 10.1103/PhysRevD.48.4738Phys. Rev. D. D. Marković, Phys. Rev. D , ( ).
. K Izumi, D Sigg, 10.1088/0264-9381/34/1/015001Classical and Quantum Gravity. K. Izumi and D. Sigg, Classical and Quantum Gravity , ( ).
. F Amann, F Bonsignorio, T Bulik, H J Bulten, S Cuccuru, A Dassargues, R Desalvo, E Fenyvesi, F Fidecaro, I Fiori, C Giunchi, A Grado, J Harms, S Koley, L Kovács, G Losurdo, V Mandic, P Meyers, L Naticchioni, F Nguyen, G Oggiano, M Olivieri, F Paoletti, A Paoli, W Plastino, M Razzano, P Ruggi, G Saccorotti, A M Sintes, L Somlai, P Ván, M Vasúth, 10.1063/5.0018414arXiv: .Review of Scientific Instruments. physics.ins-detF. Amann, F. Bonsignorio, T. Bulik, H. J. Bulten, S. Cuc- curu, A. Dassargues, R. DeSalvo, E. Fenyvesi, F. Fidecaro, I. Fiori, C. Giunchi, A. Grado, J. Harms, S. Koley, L. Kovács, G. Losurdo, V. Mandic, P. Meyers, L. Naticchioni, F. Nguyen, G. Oggiano, M. Olivieri, F. Paoletti, A. Paoli, W. Plastino, M. Razzano, P. Ruggi, G. Saccorotti, A. M. Sintes, L. Somlai, P. Ván, and M. Vasúth, Review of Scientific Instruments , ( ), arXiv: . [physics.ins-det].
Observations and modeling of seismic background noise. J R Peterson, Tech. Rep. (US Geological Survey. J. R. Peterson, Observations and modeling of seismic back- ground noise, Tech. Rep. (US Geological Survey, ).
. J Harms, 10.1007/s41114-019-0022-2Living Reviews in Relativity. J. Harms, Living Reviews in Relativity , ( ).
. A Meltzer, R Rudnick, P Zeitler, A Levander, G Humphreys, K Karlstrom, E Ekstrom, C Carlson, T Dixon, M Gurnis, Geological Society of America TODAY. A. Meltzer, R. Rudnick, P. Zeitler, A. Levander, G. Humphreys, K. Karlstrom, E. Ekstrom, C. Carlson, T. Dixon, M. Gurnis, et al., Geological Society of America TODAY , ( ).
. H Benz, R Buland, J Filson, A Frankel, K Shedlock, Seismological Research Letters. H. Benz, R. Buland, J. Filson, A. Frankel, and K. Shedlock, Seismological Research Letters , ( ).
. P Nguyen, R M S Schofield, A Effler, C Austin, V Adya, M Ball, S Banagiri, K Banowetz, C Billman, C D , P. Nguyen, R. M. S. Schofield, A. Effler, C. Austin, V. Adya, M. Ball, S. Banagiri, K. Banowetz, C. Billman, C. D.
. Blair, arXiv: . [astro-ph.IMarXiv e-printsBlair, and et al., arXiv e-prints , arXiv: . ( ), arXiv: . [astro-ph.IM].
. S Bonnefoy-Claudet, F Cotton, P.-Y. Bard, 10.1016/j.earscirev.2006.07.004Earth-Science Reviews. S. Bonnefoy-Claudet, F. Cotton, and P.-Y. Bard, Earth-Science Reviews , ( ).
. J R Bowman, L (G E Baker, L (M Bahavar, L (10.1029/2005GL022486Geophys. Res. Lett. J. R. Bowman, G. E. Baker, and M. Bahavar, Geophys. Res. Lett. , L ( ).
. T Creighton, 10.1088/0264-9381/25/12/125011arXiv:gr-qc/ [gr-qc]Classical and Quantum Gravity. T. Creighton, Classical and Quantum Gravity , ( ), arXiv:gr-qc/ [gr-qc].
M E Zucker, S E Whitcomb, Proceedings of the Seventh Marcel Grossman Meeting on recent developments in theoretical and experimental general relativity, gravitation, and relativistic field theories. R. T. Jantzen, G. Mac Keiser, and R. Ruffini (the Seventh Marcel Grossman Meeting on recent developments in theoretical and experimental general relativity, gravitation, and relativistic field theoriespM. E. Zucker and S. E. Whitcomb, in Proceedings of the Seventh Marcel Grossman Meeting on recent developments in theoretical and experimental general relativity, gravitation, and relativistic field theories, edited by R. T. Jantzen, G. Mac Keiser, and R. Ruffini ( ) p.
| [] |
[
"A review of heath economic evaluation practice in the Netherlands: are we moving forward?",
"A review of heath economic evaluation practice in the Netherlands: are we moving forward?"
] | [
"Andrea Gabrio *e-mail:[email protected] \nDepartment of Methodology and Statistics\nFaculty of Health Medicine and Life Science\nMaastricht University\nP. Debyeplein 16229 HAMaastrichtNL\n"
] | [
"Department of Methodology and Statistics\nFaculty of Health Medicine and Life Science\nMaastricht University\nP. Debyeplein 16229 HAMaastrichtNL"
] | [] | In 2016, the Dutch National Health Care Institute issued new guidelines that aggregated and updated previous recommendations on key elements for conducting economic evaluation. However, the impact on standard practice after the introduction of the guidelines in terms of design, methodology and reporting choices is still uncertain. To assess this impact, we examine and compare key analysis components of economic evaluations conducted in the Netherlands before (2010-2015) and after (2016-2020) the introduction of the guidelines. We specifically focus on two aspects of the analysis that are crucial in determining the plausibility of the results: statistical methodology and missing data handling. Our review shows how many components of economic evaluations have changed in accordance with the new recommendations towards more transparent and advanced analytic approaches. However, potential limitations are identified in terms of the statistical software and information provided to support the choice of missing data methods. | 10.1017/s1744133123000087 | [
"https://arxiv.org/pdf/2203.15707v1.pdf"
] | 247,779,110 | 2203.15707 | f3ed1dd2ef0c05d10deb8f41055e800997ca6393 |
A review of health economic evaluation practice in the Netherlands: are we moving forward?
29 Mar 2022
Andrea Gabrio* (e-mail: [email protected])
Department of Methodology and Statistics
Faculty of Health, Medicine and Life Sciences
Maastricht University
P. Debyeplein 1, 6229 HA Maastricht, NL
A review of health economic evaluation practice in the Netherlands: are we moving forward?
29 Mar 2022

Conflict of interest: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Keywords: economic evaluations; review; cost-effectiveness; analytic approaches; The Netherlands. Classification codes: D61, D70, D81, H51, I18.
In 2016, the Dutch National Health Care Institute issued new guidelines that aggregated and updated previous recommendations on key elements for conducting economic evaluation. However, the impact on standard practice after the introduction of the guidelines in terms of design, methodology and reporting choices is still uncertain. To assess this impact, we examine and compare key analysis components of economic evaluations conducted in the Netherlands before (2010-2015) and after (2016-2020) the introduction of the guidelines. We specifically focus on two aspects of the analysis that are crucial in determining the plausibility of the results: statistical methodology and missing data handling. Our review shows how many components of economic evaluations have changed in accordance with the new recommendations towards more transparent and advanced analytic approaches. However, potential limitations are identified in terms of the statistical software and information provided to support the choice of missing data methods.
Introduction
Health economic evaluation is a relatively new discipline whose definition and application have gradually but constantly evolved over the last decades. Nowadays, economic evaluations are primarily conducted to inform decisions about the allocation of limited resources across a pool of alternative healthcare interventions within a given health care system. The first official adoption of economic evaluation within a national public healthcare system is attributed to the Australian government [1] in the early '90s, later followed by public authorities in many other countries [17]. Although the purpose of economic evaluation remains the same across different jurisdictions, the presence of geographical and socio-cultural differences requires national pharmaceutical decision-making committees to define their own requirements and guidelines for pharmacoeconomic evaluations [19].
In the Netherlands, the Dutch National Health Care Institute (Zorginstituut Nederland or ZIN) is the body in charge of issuing recommendations and guidance on good practice in economic evaluation, not just for pharmaceutical products, but also in relation to other fields of application that include prevention, diagnostics, medical devices, long-term care and forensics. In 2016, ZIN issued an update of the guidance for economic evaluation [40], which aggregated into a single document and revised three separately published guidelines for pharmacoeconomic evaluation [27], outcomes research [10] and costing [16]. The novel aspects and future policy direction introduced by these guidelines have already been the subject of discussion, particularly with respect to the potential impact of, and concerns associated with, their implementation in standard health economic practice in the Netherlands [15,35]. Given the importance of these guidelines, an assessment of their impact on economic evaluation practice in the Netherlands would allow some conclusions to be drawn.
Our objective was to review the evolution of economic evaluation practice in the Netherlands before and after the introduction of the ZIN's 2016 guidelines. In addition, we provide an in-depth assessment of the quantitative approaches used by analysts, with a focus on the statistical methods, missing data methods and software implemented. Given the intrinsic complexity that characterises the analysis of health economic data, the choice of analytic approach to deal with these complexities, together with transparent reporting of its implementation, is crucial in determining the degree of confidence that decision-makers should place in cost-effectiveness results obtained from such analyses [28].
The rest of the article is structured as follows. Section 2 briefly outlines the key elements of the ZIN's 2016 guidelines, with a focus on the changes that were introduced with respect to previous guidance. Section 3 presents the review methodology and compares the characteristics of the studies with the recommendations from the 2016 guidelines. Section 4 reviews the analytical methods and software used, while Section 5 focuses on the choice of missing data methods and uses a structured grading scheme to evaluate the studies based on the overall level of missingness information provided. Finally, Section 6 summarises our findings and recommendations for future research.
The ZIN 2016 guidelines
The main objective of the guidelines is to ensure the comparability and quality of economic evaluations in the Netherlands, therefore facilitating the task of the decision-maker regarding the funding or reimbursement of new healthcare interventions. Following the example of guidelines issued by decision-making bodies in other countries, including the National Institute for Health and Care Excellence in the UK [26], the recommended features for economic evaluations are summarised in a "reference case", although deviations from it are allowed when properly justified (e.g. in case of non-pharmaceutical products).
Based on the structure of the reference case, four essential topics are briefly summarised: framework, analytic approach, input data and reporting. We do not review information related to cost-benefit, cost-minimisation or budget impact analyses as these do not fall within the scope of this article. For a thorough examination of the guidelines and implication on practice we refer the interested reader to, respectively, the original document [40] and two recent articles [15,35].
Framework of the economic evaluation
A series of elements form the framework and allow to identify the objective and the users of the economic evaluation. According to the reference case, the mandatory perspective to be adopted is the societal perspective, which implies that all costs and benefits, irrespective of who is the bearer/beneficiary, should be taken into account. Results from other perspectives (e.g. healthcare provider) may also be presented as additional analyses. The research question is summarised by the PICOT (Patient, Intervention, Control, Outcome and Time) criteria and should involve: a population in the Dutch setting (P); a new healthcare intervention (I) and standard of care (C) that can be applied in the Netherlands; pre-defined outcome measures (e.g. clinical, patient-reported); the expected lifetime of the target population (T). It is also recommended to "scope" the PICOT criteria beforehand with the relevant stakeholders (e.g. patient organisations) to benefit from their expertise and experience [41].
Analytic approach
The number and type of analytic techniques that should be implemented depend on the type of economic evaluation. Cost-Effectiveness Analysis (CEA) and Cost-Utility Analysis (CUA), respectively based on clinical or Quality-Adjusted Life Years (QALYs) measures, are the most popular types of analyses, with CUA being the preferred choice since it allows better comparability of results between different health conditions.
Discounting should always be applied when outcome data are analysed over a time horizon exceeding one year using a yearly discount rate of 1.5% for effects and 4% for costs. Uncertainty surrounding the economic results from the analysis should always be assessed to: 1) quantify the impact on cost-effectiveness conclusions; 2) determine if and how much additional research may reduce uncertainty. The methods and type of uncertainty analyses vary according to the type of economic evaluation, with a clear distinction between empirical (e.g. CUA alongside a trial) and model-based (e.g. simulation models) analyses since the type of input data and objective are different.
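To make the reference-case discounting concrete, the short Python sketch below applies the 1.5% and 4% yearly rates to a constant stream of outcomes. The 10-year horizon and the yearly QALY and cost values are purely hypothetical, and the first year is left undiscounted (one common convention).

def discounted_total(annual_amounts, rate):
    # Sum a stream of yearly amounts, discounting year t by 1 / (1 + rate)**t.
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(annual_amounts))

years = 10
qalys_per_year = [0.8] * years      # hypothetical yearly QALY gain
costs_per_year = [2500.0] * years   # hypothetical yearly costs in euros

total_qalys = discounted_total(qalys_per_year, rate=0.015)  # ~7.49 vs 8.0 undiscounted
total_costs = discounted_total(costs_per_year, rate=0.04)   # ~21089 vs 25000 undiscounted
print(round(total_qalys, 2), round(total_costs))

Because costs are discounted more heavily than effects under the reference case, long-horizon cost streams shrink proportionally more than the corresponding QALY streams.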
Empirical analyses should implement statistical methods to quantify the uncertainty around mean incremental costs, effects and cost-effectiveness ratios (ICERs). Bootstrapping is the standard approach used to generate a large number of resampling draws and quantify the uncertainty through the computation of confidence intervals, Cost-Effectiveness Planes [CEP; 4] and Cost-Effectiveness Acceptability Curves [CEAC; 34]. Appropriate statistical methods should also be used to quantify the impact of missing data uncertainty on the results, with Multiple Imputation [MI; 33] being the recommended approach and Expectation-Maximisation [EM; 11] a possible alternative. Regression techniques may also be used to increase precision and correct for differences between groups, while alternative approaches can be used to assess the robustness of the results in scenario and sensitivity analyses.
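As a rough illustration of this workflow, the following Python sketch resamples simulated patient-level data to approximate the joint distribution of incremental effects and costs and to trace out the CEAC. All data, sample sizes and variable names are hypothetical; in a real analysis the resampling would be applied to the observed patient records, keeping each patient's (effect, cost) pair together, as done here via a shared index.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical patient-level data for two trial arms (effects in QALYs, costs in euros).
n = 150
eff_new, cost_new = rng.normal(0.70, 0.20, n), rng.normal(11000, 3000, n)
eff_old, cost_old = rng.normal(0.62, 0.20, n), rng.normal(9000, 3000, n)

B = 5000  # a common choice among the reviewed studies
d_eff, d_cost = np.empty(B), np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)   # resample new-arm patients with replacement
    j = rng.integers(0, n, n)   # resample control-arm patients with replacement
    d_eff[b] = eff_new[i].mean() - eff_old[j].mean()
    d_cost[b] = cost_new[i].mean() - cost_old[j].mean()

icer = d_cost.mean() / d_eff.mean()
# CEAC: probability that the incremental net monetary benefit
# lambda * dE - dC is positive, for a grid of willingness-to-pay thresholds.
thresholds = np.arange(0, 80001, 1000)
ceac = [(lam * d_eff - d_cost > 0).mean() for lam in thresholds]

print(f"ICER: {icer:.0f} euro/QALY, CEAC at 50k euro/QALY: {ceac[50]:.2f}")
# Plotting the (d_eff, d_cost) pairs gives the CEP; plotting ceac against
# thresholds gives the CEAC, the two standard graphical summaries.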
Model-based analyses often consist of patient-level simulation methods which should perform Probabilistic Sensitivity Analysis [PSA; 7] by varying the assumed distributions and associated measures of variability to assess the impact of parameter uncertainty on ICERs. In addition, deterministic sensitivity analysis should be carried out on other model inputs (e.g. discount rate, cost prices), and structural uncertainty should be made transparent by presenting a clear overview of the model assumptions. Value of Information (VOI) analysis should be performed and an estimate of the Expected Value of Perfect Information (EVPI) should be produced, quantifying all consequences of the uncertainty around the model parameters [8]. Model validation is crucial and should provide information on the model structure, input data and software code.
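The PSA-EVPI logic can be expressed in a few lines. The sketch below uses hypothetical parameter distributions and an illustrative willingness-to-pay threshold; the per-patient EVPI is the expected net benefit under perfect information minus the expected net benefit of the best decision under current information.

import numpy as np

rng = np.random.default_rng(2)

# Probabilistic sensitivity analysis: draw model parameters from their assumed
# distributions and compute the net monetary benefit (NB) of each strategy.
S, lam = 10000, 50000                # number of PSA draws; threshold in euro/QALY
d_eff = rng.normal(0.08, 0.05, S)    # hypothetical incremental QALYs
d_cost = rng.normal(2000, 1500, S)   # hypothetical incremental costs

nb = np.column_stack([np.zeros(S), lam * d_eff - d_cost])  # NB relative to comparator

# EVPI = E[max over strategies of NB] - max over strategies of E[NB].
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Per-patient EVPI at lambda={lam}: {evpi:.0f} euro")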
Input data
In empirical analyses, input data on clinical effectiveness are collected and derived from the study, whereas in model-based analyses the clinical effectiveness data need to be underpinned by a systematic review of the literature, preferably using evidence from randomised studies and head-to-head comparisons. Identification, measurement and valuation of cost data should be done following the guidance in the costing manual [16]. All relevant societal costs should be identified, including those related to the healthcare system (direct and indirect medical costs), patient and family (e.g. travel, informal care), other sectors (e.g. volunteering) and productivity losses (e.g. due to absenteeism). The friction method should be used to calculate productivity losses resulting from paid work absenteeism. Costs are computed by multiplying the volume of a specific service (i.e. resource use information collected during the trial) by the corresponding standardised national unit price and adjusting for inflation via the consumer price index. Quality of life data should be collected by means of validated, generic quality-of-life self-reported questionnaires which assign to each patient a utility score valued according to the preferences of the general population of the country. The reference case identifies the EQ-5D-5L questionnaire [20] as the preferred instrument to measure quality of life, valued through Dutch reference values [36]. Alternative questionnaires and other methods to evaluate quality of life may be added next to the reference case.
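The costing rule described above amounts to a simple product-and-sum, as in the minimal sketch below; all unit prices and CPI figures are placeholders, not the official Dutch reference prices.

# Costing as prescribed by the manual: volume of resource use times the
# standardised national unit price, inflated to the index year via the CPI.
resource_use = {"GP visit": 4, "hospital day": 2, "physiotherapy": 6}
unit_price_2014 = {"GP visit": 33.0, "hospital day": 480.0, "physiotherapy": 33.0}

cpi_2014, cpi_target = 100.0, 104.5   # hypothetical consumer price indices
inflation = cpi_target / cpi_2014

total_cost = sum(resource_use[k] * unit_price_2014[k] * inflation
                 for k in resource_use)
print(f"Total cost per patient: {total_cost:.2f} euro")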
Reporting
Information related to input data (effectiveness, costs and quality of life) should be reported in a transparent way. This includes, but is not limited to, details of studies used to retrieve effectiveness data (e.g. patient characteristics), prices and volumes of all cost components, questionnaires or valuation methods for quality of life data. For economic evaluations based on empirical studies, missing data information should be clearly reported in terms of amount, whether partially-observed individuals differed from completers and whether missingness was addressed at the study design. Alternative approaches should be implemented to assess the sensitivity of the results to different methods.
The reporting of the results should be tailored to the type of analysis performed, namely either base-case or uncertainty analysis. In the base-case analysis both the total and incremental costs/effects for each intervention group should be reported alongside the ICER, and graphically represented via the CEP. In uncertainty analysis, parameter uncertainty should be reported in terms of the minimum and maximum variations of the ICER, as well as the impact on the incremental costs and effects via tabular form and graphically by means of a tornado diagram. Results of PSA (model-based) or bootstrapping (empirical) should be presented graphically via CEP and CEAC. As an alternative, results under the net benefit approach [31] for each intervention can be reported. Finally, results of VOI analysis should be presented using different reference values of the ICER.
Literature Review
We identified papers within the period 1 January 2010 to 31 December 2020. Articles were considered eligible for the review only if they were cost-effectiveness or cost-utility analyses targeting a Dutch population. Study protocols and pilot studies, as well as cost-benefit, cost-minimisation or budget impact analyses, were excluded. We relied on the search engines of two online full-text journal repositories: 1) PubMed; 2) Zorginstituut. The key words used in the search strategy were (cost-effectiveness OR cost-utility OR economic evaluation). The online databases identified 4319 articles, most of which were duplicates. After abstract review, 647 articles were considered, of which 190 fulfilled the eligibility criteria. We report the full list of reviewed studies in the online Appendix.
Review
We present and compare the articles reviewed between two separate periods (2010-2015 and 2016-2020) to assess and identify changes in standard health economic practice after the introduction of the ZIN's 2016 guidelines. We summarised key results in terms of the type of analysis and analytic approaches implemented. With regard to empirical analyses, we looked in detail at the statistical methods and software used, while also reviewing and evaluating the strategies implemented to handle missing data. Table 1 reports information about the reviewed studies, separately between the 2010-2015 and 2016-2020 periods, and compares it to the recommendations on each element of the economic evaluation as described in the reference case of the 2016 guidelines.
TABLE 1
Out of the 190 studies included, about half were published between 2010-2015 (96) and half between 2016-2020 (94), with comparable numbers in terms of both empirical (86 vs 80) and model-based analyses (10 vs 14). In the Appendix, we report a visual representation of the sample size distribution based on the 166 empirical studies included in the review.
Some considerable changes are observed between the two periods in regard to different analysis components: 1) a considerable increase in the proportion of studies adopting a societal perspective in the primary analysis and a healthcare perspective in secondary analyses (from 23% to 40%); 2) an increase in the proportion of studies performing CUAs as primary analysis (from 31% to 44%) and a decrease in the number of primary CEAs (from 30% to 17%); 3) an uptake in the number of studies including all relevant societal costs in the analysis (from 53% to 68%); 4) an increase in the proportion of CUAs which provide clear information on the EQ-5D questionnaires, for both the 5L (from 4% to 12%) and 3L (from 24% to 33%) versions.
In addition, we observe an increase in the proportion of studies following the recent guidelines in regard to the choice of the discount rates for future effects and costs (from 58% to 68%) as well as the use of the friction method to calculate productivity losses (from 39% to 59%). Limited variations are observed in the number of studies using the CEP and/or CEAC to report the results of uncertainty analysis, and in the time horizon chosen in empirical analyses. Although there is a slight decrease in the proportion of model-based analyses using a lifetime horizon, these proportions are calculated from relatively small numbers (10 studies between 2010-2015 and 14 between 2016-2020) and may therefore be misleading. Finally, we observe that only one study within each period conducted VOI analysis and provided an estimate of EVPI.
Analytic approaches
In this section we explore in more detail the information provided by the reviewed studies in relation to the type of analytical approaches used to perform the economic evaluation and assess uncertainty. We also review information concerning the specific software programs used, as this may provide insights into practitioners' implementation preferences and potential room for improvement. We specifically focus on the choice of the statistical approaches, as this represents a crucial element of any economic evaluation in determining the validity and reliability of cost-effectiveness conclusions.
Statistical methods
We begin by reviewing the type of statistical methods used to estimate mean incremental costs and effects between treatment groups (and ICERs) and to quantify the level of uncertainty around the estimated quantities. According to ZIN's 2016 guidelines and the current literature, for empirical analyses bootstrapping is the recommended approach to deal with non-normal distributions and quantify the level of uncertainty around the incremental mean cost and effect estimates [6]. Regression techniques are also important in order to obtain adjusted estimates and to control for potential imbalances in baseline variables between treatment groups [22,32].
Almost all reviewed empirical studies used bootstrapping (95%) although with different choices for the number of iterations: the mean and standard deviation of the bootstrap replications, computed over the studies which provided such information (86%), were 4321 and 5883, respectively, with the most popular choices being 5000 (55%) followed by 2000 (29%). Studies showed even more variability in the methods used in combination with bootstrapping to correct for potential sources of bias. Figure 1 shows the type of statistical techniques implemented among the 166 empirical analyses in our review.
FIGURE 1
Seven general classes of statistical approaches were identified, among which empirical analysis without any adjustment was the most popular choice across both time periods. Regression-based adjustment methods were also widely used, either in the form of: simple univariate regression adjustment [22]; bivariate regression adjustment accounting for the correlation between effects and costs, also known as seemingly unrelated regression [SUR; 39]; or linear mixed modelling to account for clustering effects, e.g. in cluster randomised trials [23,29]. Finally, delta adjustment [32,37] and simulation methods were only rarely adopted. It is apparent that between the two periods there was a shift in the use of the methods, with a considerable decrease of about 40% in the number of analyses not performing any adjustment (red bars), in contrast to an uptake in the number of analyses using SUR (from 2 to 17) or LMM (from 4 to 10) adjustments (blue bars). Although these methods are not explicitly mentioned in the 2016 guidelines, the need to perform regression adjustment was clearly indicated as an important component of empirical analyses, and both LMMs and SURs are widely used methods in the international health economics literature [38]. Bootstrapped confidence intervals for the estimated mean incremental outcomes were calculated in all analyses, although only 53 studies (32%) provided information on the specific methods used. Among those providing such information, 29 (55%) applied bias-corrected and accelerated methods [12] and 24 (45%) applied standard percentile methods.
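To fix ideas, the sketch below illustrates the SUR-type adjustment on simulated data; all coefficients and variable names are hypothetical. Note that when the two equations share the same regressors, as here, SUR point estimates coincide with equation-by-equation OLS, so the added value of the joint model lies in the cross-equation residual covariance used for joint inference on costs and effects.

import numpy as np

rng = np.random.default_rng(3)

# Regression-adjusted incremental effects and costs, in the spirit of SUR.
n = 200
treat = rng.integers(0, 2, n)            # 1 = new treatment
base_util = rng.normal(0.6, 0.1, n)      # baseline utility (covariate)
eff = 0.5 + 0.4 * base_util + 0.07 * treat + rng.normal(0, 0.1, n)
cost = 8000 - 2000 * base_util + 1500 * treat + rng.normal(0, 2000, n)

X = np.column_stack([np.ones(n), base_util, treat])
beta_eff, *_ = np.linalg.lstsq(X, eff, rcond=None)
beta_cost, *_ = np.linalg.lstsq(X, cost, rcond=None)

resid = np.column_stack([eff - X @ beta_eff, cost - X @ beta_cost])
sigma = resid.T @ resid / (n - X.shape[1])   # cross-equation residual covariance

print(f"Adjusted incremental effect: {beta_eff[2]:.3f} QALY")
print(f"Adjusted incremental cost:   {beta_cost[2]:.0f} euro")
print(f"Residual effect-cost correlation: "
      f"{sigma[0, 1] / np.sqrt(sigma[0, 0] * sigma[1, 1]):.2f}")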
For model-based analyses, Monte Carlo methods [5] are the standard algorithms used in decision-analytic models to simulate the evolution/progression of a target patient population and to aggregate over time the total quality of life and costs associated with each patient profile (e.g. via multi-state or Markov models). Among the 190 reviewed studies, only 24 were model-based analyses (see Table 1) and all used Monte Carlo simulation methods. The vast majority of the approaches were Markov models (88%), followed by a decision tree and some unclear specifications, with no considerable differences between the two time periods. Information on model implementation was provided by about 75% of the studies, with a mean number of iterations run of 3028 and standard deviation of 3089, the most popular choice being 1000. For Markov models, the number of assumed health states varied from 2 to 12, with cycle lengths ranging from 1 up to 12 months, although considerable variability was observed across the analyses.
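As an illustration of the typical model structure, the following sketch implements a minimal three-state Markov cohort model with yearly cycles. The transition matrix, utilities and costs are invented for the example; a full analysis would additionally redraw them within a PSA loop.

import numpy as np

# Three-state Markov cohort model (well -> sick -> dead), yearly cycles.
P = np.array([[0.85, 0.10, 0.05],    # transitions from "well"
              [0.00, 0.75, 0.25],    # transitions from "sick"
              [0.00, 0.00, 1.00]])   # "dead" is absorbing
utility = np.array([0.85, 0.55, 0.0])       # QALY weight per state per cycle
state_cost = np.array([500.0, 4000.0, 0.0]) # cost per state per cycle (euro)

cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in "well"
qalys = costs = 0.0
for t in range(40):                  # 40 yearly cycles
    disc_e, disc_c = 1.015 ** -t, 1.04 ** -t   # reference-case discount rates
    qalys += cohort @ utility * disc_e
    costs += cohort @ state_cost * disc_c
    cohort = cohort @ P              # advance the cohort one cycle

print(f"Expected discounted QALYs per patient: {qalys:.2f}")
print(f"Expected discounted cost per patient:  {costs:.0f} euro")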
Software
We looked at the different types and combinations of software programs used as an indication of the implementation preferences of analysts when performing economic evaluations. Since no considerable differences were observed when comparing software use over time, we present the results across all publication years from 2010 up to 2020, but divided by type of analysis (empirical and model-based). Figure 2 shows a heatmap of the type of software used among the 166 empirical studies included in the review. Software programs are distinguished into "main" and "additional" categories according to the order (e.g. first mentioned) or tasks (e.g. main analysis vs secondary analyses) for which they were used, based on the information provided by each study. The most popular software was SPSS, chosen by 87 (52%) of the studies, either in the main (33%) or additional (19%) analysis, and often used in combination with Excel or by itself. When either STATA (26%) or R (13%) was used in the main analysis, SPSS was still the most popular choice in additional analyses. Other combinations of software were less frequently chosen, even though 38 (23%) of the studies were unclear about the software implemented. Among the 24 model-based analyses, 14 (58%) did not provide any information regarding the choice of software, while Excel alone was the most frequent software choice in 9 (38%) studies, followed by TreeAge with 2 (8%), and R, Delphi and SPSS with 1 each (all < 5%).
Missing data methods
The choice of the statistical methods used to handle missing data has a potentially large impact on cost-effectiveness results and should be made so as to avoid implausible assumptions, which may lead to incorrect inferences. Since it is never possible to check assumptions about unobserved values, unless the amount of missing data is negligible (e.g. < 5%) a principled approach is typically recommended. This amounts to performing the analysis using a method associated with a benchmark missing data assumption (base-case analysis), and then assessing the robustness of the base-case results to alternative assumptions using different methods (sensitivity analysis). It is important that both base-case and sensitivity analyses implement methods that are based on "plausible" missingness assumptions to ensure that the impact of missing data uncertainty is adequately quantified [25].
By their very nature, missing data represent a crucial problem in empirical analyses but are less relevant in the context of model-based analyses. Within the second class of models, the long-term extrapolation of outcome data (e.g. survival beyond the observed time horizon) represents a similar problem and is often accomplished through parametric or non-parametric methods. However, for the purpose of this review, we exclusively focus on the standard missing data methodology implemented in empirical analyses, which represent the majority of the reviewed economic evaluations.
Base-case and sensitivity analysis
We first review the type of missing data methods implemented among empirical analyses. These are also distinguished by time period and by whether they were implemented in the base-case analysis (method used in the main analysis) or in sensitivity analysis (alternative methods used to check the robustness of base-case results). We initially planned to report missing data information separately by effects and costs but, after reviewing the analyses, we noted that only a small number of studies provided this level of detail. In the following, we will therefore provide results under the assumption that the same approaches were used to handle both missing effects and costs. In the Appendix, we provide a visual representation of the distribution of missing effect and cost rates based on the empirical studies which provided this information. Figure 3 shows, for both periods, a bubble plot for each combination of missing data methods implemented in the base-case and sensitivity analysis for empirical analyses, where the size of the bubbles indicates the frequency of use for each pairwise combination.
FIGURE 3
Overall, between the two periods, no drastic change is observed in terms of the preference for missing data methods, with MI being the most popular base-case analysis choice, followed by complete case analysis (CCA), which remains the most popular sensitivity analysis choice.
However, some changes are observed in the frequency of adoption of these methods. On the one hand, the proportion of studies using MI in the base-case analysis has increased over time (from 28% in 2010-2015 to 39% in 2016-2020). On the other hand, the proportion of studies has decreased for both CCA (from 14% in 2010-2015 to 5% in 2016-2020) and single imputation (SI) methods (from 21% in 2010-2015 to 16% in 2016-2020). The number of studies not clearly reporting the methods used to handle missing data has also decreased (from 12% in 2010-2015 to 5% in 2016-2020), while the use of other methods has not varied notably.
The observed trend between the two periods may be the result of the specific recommendations of the 2016 guidelines regarding the "optimal" missing data strategy, resulting in a more frequent adoption of MI techniques and, at the same time, a less frequent use of CCA in the base-case analysis. However, in contrast to these guidelines, a large number of studies still do not perform any sensitivity analysis to missing data assumptions (about 65% in 2010-2015 and 63% in 2016-2020).
Information was also collected across both periods about the details of MI implementation, where provided. In particular, among the 89 studies using MI: 50 (56%) used the fully conditional or chained equations version [33], while the rest of the studies did not specify the version used; 32 (36%) used predictive mean matching as the imputation technique, 10 (11%) used linear or logistic regression, 1 (1%) used predictive score matching, while the rest of the studies provided unclear information. Finally, the mean and standard deviation of the number of imputed datasets generated were 18 and 19, respectively, with the most frequent choice being 5 (23%).
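Once the m completed datasets have been analysed, the per-imputation estimates are pooled with Rubin's rules [30]. The sketch below shows the pooling step on placeholder per-imputation estimates and standard errors, standing in for, e.g., the adjusted incremental cost from each completed dataset.

import numpy as np

est = np.array([1450.0, 1520.0, 1390.0, 1610.0, 1480.0])   # m = 5 estimates
var = np.array([210.0, 230.0, 190.0, 250.0, 220.0]) ** 2   # within-imputation variances

m = len(est)
pooled = est.mean()                  # pooled point estimate
W = var.mean()                       # within-imputation variance
B = est.var(ddof=1)                  # between-imputation variance
T = W + (1 + 1 / m) * B              # total variance under Rubin's rules

print(f"Pooled estimate: {pooled:.0f}, standard error: {np.sqrt(T):.0f}")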
Quality of missing data information
We finally review the quality of the overall missing data information reported by the studies. We specifically rely on the Quality Evaluation Scheme (QES), a structured reporting and analysis system that embeds key guidelines for missing data handling in economic evaluation [13]. Detailed information about the rationale and structure of the scheme is provided in Gabrio et al. [13], while here we only provide a concise explanation for clarity.
First, a numeric score is created to reflect the amount and type of information provided on three components characterising the missing data problem: description (e.g. number and pattern of missing data), method (e.g. type of method and detail of implementation) and limitations (e.g. limitations of the assumptions). Each component is assigned a score weight (using a 3:2:1 ratio) according to its importance; the weighted component scores are then summed to obtain an overall score for each study, ranging from 0 (no information) to 12 (full information). Next, grades are created by grouping the scores into ordered categories from A (highest scores) to E (lowest scores). Finally, studies are also grouped by type of missingness method into five ordered classes, reflecting the strength of the underlying assumptions: unknown (UNK); single imputation (SI); complete case analysis (CCA); multiple imputation/expectation-maximisation (MI/EM); sensitivity analysis (SA). We note that SA represents the least restrictive method, as it requires studies to justify the assumptions explored in both the base-case and sensitivity analysis based on the available information. Figure 4 shows a graphical representation of the quality scores (expressed in grades) in combination with the strength of assumptions (expressed by type of method) for each of the 80 empirical studies in the period 2016-2020. Most of the studies lie in the middle and lower part of the plot, and are associated with limited (grades D and E) or sufficient (grade C) quality of information. However, only a few of these studies rely on very strong and unjustified missing data assumptions (red dots in the bottom part), while the majority provide either adequate justifications or use methods associated with weak assumptions (green dots in the middle part). Only 11 (14%) studies are associated with both high quality scores and less restrictive missingness assumptions (blue dots in the top-right part). No study was associated with either full information (grade A) or adequate justifications for the assumptions explored in base-case and sensitivity analysis (SA).
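A schematic implementation of this scoring logic is given below. The 0-2 per-component scale and the grade cut-offs are our own assumptions, chosen only so that the weighted maximum equals 12; they do not necessarily match the exact QES specification in [13].

WEIGHTS = {"description": 3, "method": 2, "limitations": 1}

def qes_score(component_scores):
    # component_scores: dict mapping component -> score in {0, 1, 2} (assumed scale).
    return sum(WEIGHTS[c] * s for c, s in component_scores.items())

def qes_grade(score, cutoffs=(11, 8, 5, 2)):   # illustrative A/B/C/D lower bounds
    for grade, lo in zip("ABCD", cutoffs):
        if score >= lo:
            return grade
    return "E"

study = {"description": 2, "method": 1, "limitations": 0}
s = qes_score(study)
print(f"Score {s}/12, grade {qes_grade(s)}")   # -> Score 8/12, grade B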
Discussion
The objective of this paper was to review and compare the practice of conducting economic evaluations in the Netherlands before and after the introduction of the ZIN's 2016 guidelines. We focussed on the type of analytic approaches and software used to conduct the analyses, while also examining the missing data methods used and critically appraising the studies based on the overall information provided on missingness.
Descriptive review
Descriptive information extracted from the reviewed studies (Table 1) highlights some interesting discussion points when comparing economic evaluation practice between 2010-2015 and 2016-2020. First, most of the studies in the later period are CUAs and use a societal perspective, with CEAs and alternative perspectives provided in secondary analyses. Second, studies tend to use EQ-5D instruments to measure quality of life and include all relevant types of societal costs, including productivity losses, for which the friction approach is the current reference method of calculation. Finally, the reporting of cost-effectiveness results often takes into account both uncertainty and probabilistic sensitivity analysis by providing either or both the CEP and CEAC.
Most of these changes are in accordance with the 2016 guidelines, which are likely to have played a role in guiding analysts and practitioners towards a clearer and more standardised way of reporting health economic results. However, for some components of the analysis, such as VOI analysis or the time horizon, adherence to the new guidelines seems slow (although the limited number of model-based studies makes it difficult to reach clear conclusions).
Health economic analysis
The most popular methods to quantify uncertainty around cost and effect estimates are by far bootstrapping (empirical analyses) and Monte Carlo simulation (model-based analyses). However, between the two periods a shift towards the use of statistical methods to control for potential sources of bias between treatment groups was observed, with a considerable uptake in the use of SURs and LMMs in the context of empirical analyses (Figure 1). These techniques are important in order to adjust for differences in baseline variables, handle clustered data, and formally take into account the correlation between costs and effects [38]. In addition, when further issues occur (e.g. presence of spikes in the observed data distributions), analysts should also consider the use of tailored approaches [2,3].
The complexity of the statistical framework for health economic evaluation requires the implementation of methods that can simultaneously handle multiple statistical issues to avoid biased results and misleading cost-effectiveness conclusions. However, it is equally important that the level of complexity of the analysis model is reflected in the way uncertainty surrounding the estimates is generated. For example, if clustered data are handled by means of LMMs, then clustered bootstrap methods should be used to properly generate resampling draws. Among all reviewed studies, we identified 13 cluster randomised trials but only 10 took into account clustering at the analysis stage and, among these, only 1 study implemented clustered bootstrap methods.
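For reference, a clustered bootstrap differs from the ordinary bootstrap only in the resampling unit: whole clusters (e.g. GP practices in a cluster randomised trial) are drawn with replacement, keeping all patients of a drawn cluster together. A minimal sketch on hypothetical clustered cost data is given below.

import numpy as np

rng = np.random.default_rng(4)

def clustered_bootstrap_mean(values, cluster_ids, B=2000):
    # Resample clusters, not patients, so the scheme matches a clustered model.
    clusters = np.unique(cluster_ids)
    members = {c: values[cluster_ids == c] for c in clusters}
    boot = np.empty(B)
    for b in range(B):
        drawn = rng.choice(clusters, size=len(clusters), replace=True)
        boot[b] = np.concatenate([members[c] for c in drawn]).mean()
    return boot

# Hypothetical cost data from 12 clusters of 15 patients each.
cluster_ids = np.repeat(np.arange(12), 15)
costs = rng.normal(9000 + 800 * rng.normal(size=12)[cluster_ids], 2000)
boot = clustered_bootstrap_mean(costs, cluster_ids)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Mean cost {costs.mean():.0f}, 95% percentile CI ({lo:.0f}, {hi:.0f})")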
We believe that these inconsistencies are due to either limited familiarity of practitioners with advanced statistical methods or potential limitations of the software used to conduct the analysis. This seems to be supported by the fact that a considerable number of studies still rely on non-statistical software (e.g. Excel), or a combination of these and user-friendly statistical software (e.g. SPSS), to perform the analysis (Figure 2). Although this does not represent an issue per se, it may become problematic and potentially lead to difficult-to-spot errors when performing complex analyses without the use of more advanced and flexible software programs, such as R or STATA [18].
Missing data
Multiple imputation is the default method of choice for handling missing data in economic evaluations. The transition between 2010-2015 and 2016-2020 suggests an increase in the use of MI techniques in the base-case analysis together with a decrease in the use of CCA (Figure 3). This suggests that analysts have become aware of the inherent limitations and potential bias of CCA and have shifted towards MI as the reference method. Nevertheless, improvements in the approach to dealing with missing data are still needed, given that many studies (more than 60%) performed the analysis under a single missing data assumption. This is not ideal since, by definition, missing data assumptions can never be checked, making the results obtained under a specific method (i.e. assumption) potentially biased. For example, MI is often implemented under a Missing At Random or MAR assumption (i.e. missingness only depends on observed data). However, there is no way to test whether MAR is appropriate, and it is always possible that missingness depends on some unobserved quantities, corresponding to a so-called Missing Not At Random or MNAR assumption [30]. This is why sensitivity analysis has a crucial role in assessing the robustness of the results to a range of plausible departures from the benchmark assumption chosen in the base-case, including MNAR [9]. In principle, the choice of the assumptions to explore should be justified in light of the available information. However, in all reviewed studies, no reasonable justification was provided to support the choice of the alternative methods used in sensitivity analysis (often CCA, despite its recognised strong limitations). This is reflected by the relatively small number of studies providing full information about the missing data problem at hand, with the majority of the studies providing an average quality of missingness information (Figure 4). Analysts may be able to improve current methodology through the adoption of more formal missing data strategies that take into account the complexities of CEA data as well as a range of missing data assumptions. For example, MAR could be set as the benchmark assumption and external information may be incorporated into the model to elicit a set of MNAR departures from it [14,21,24].
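One simple device for such MNAR departures is delta adjustment, in which MAR-imputed values are shifted by an elicited offset before re-analysis. The sketch below illustrates the idea on a single hypothetical utility variable, with simple mean imputation standing in for a full MI procedure purely to keep the example short.

import numpy as np

rng = np.random.default_rng(5)

y = rng.normal(0.7, 0.2, 200)           # hypothetical utilities
miss = rng.random(200) < 0.3            # 30% missing, for illustration
y_obs = np.where(miss, np.nan, y)

mar_imputed = y_obs.copy()
mar_imputed[miss] = np.nanmean(y_obs)   # stand-in for a proper MAR imputation

# Shift the imputed values by delta to encode the belief that non-responders
# fare worse (negative delta) than the MAR imputation predicts.
for delta in (0.0, -0.05, -0.10):       # delta = 0 recovers the MAR base case
    y_mnar = mar_imputed.copy()
    y_mnar[miss] += delta
    print(f"delta={delta:+.2f}: mean utility {y_mnar.mean():.3f}")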
Conclusions
Given the complexity of the economic evaluation framework, the implementation of simple but likely inadequate analytic approaches may lead to imprecise cost-effectiveness results. This is a potentially serious issue for bodies such as ZIN in the Netherlands, who use these evaluations in their decision-making, possibly leading to incorrect policy decisions about the cost-effectiveness of new healthcare interventions.
Our review shows, over time, a change in many of the analysis components of standard practice in accordance with the recent ZIN's 2016 guidelines. This is an encouraging movement towards the standardised use of more suitable and robust analytic methods in terms of statistical, uncertainty and missing data analysis. Nevertheless, improvements are still needed, particularly in the use of statistical software to implement advanced techniques as well as in the use of alternative missing data methods to explore plausible assumptions in sensitivity analysis.

Figure 2: Heatmap of the type of software programs used among the empirical analyses. Software use is distinguished between main and additional analyses, defined according to the associated order or tasks specified in the information obtained from the studies. For each pairwise (main-additional) software combination, darker-coloured squares are associated with higher frequencies of use compared to lighter-coloured squares.
Table 1: Descriptive information of the reviewed studies for the periods 2010-2015 and 2016-2020. For each component of the economic evaluation, the approaches implemented are summarised and compared with the recommended approach from the 2016 guidelines (highlighted in bold). The superscripts † and ‡ denote components for which proportions are calculated out of the total number of empirical and model-based analyses, respectively.

Figure 1: Bar chart of the number of empirical studies grouped by statistical methods implemented. Results are distinguished by time period (2010-2015 and 2016-2020) and grouped using the following method classes: simulation, delta method, unclear, linear mixed effects model (LMM), seemingly unrelated regression (SUR), regression, empirical.

Figure 3: Bubble plot of the type of missing data methods used among the empirical studies. Results are distinguished by time period (2010-2015 and 2016-2020) and grouped into 6 categories: no method (none); unclear (unclear); complete case analysis (CCA); single imputation (SI); expectation-maximisation (EM); multiple imputation (MI). Methods are distinguished according to whether they were used in the base-case or sensitivity analysis, with the size of each bubble representing the frequency of use for each combination of base-case and sensitivity analysis missing data method.

Figure 4: Jitter scatterplot for the joint assessment of the quality of missing data assumptions and information provided by empirical studies in the period 2016-2020. The strength of missing data assumptions is represented in terms of the type of methods used to handle missing values: unknown (UNK), single imputation (SI), complete case analysis (CCA), expectation-maximisation or multiple imputation (MI), sensitivity analysis (SA). The quality of the missing data information to support the method's assumptions is measured using the scores based on the QES and graded into the ordered categories: E, D, C, B, A.

Appendix

Sample size distribution of reviewed studies

Figure 5: Histogram of the sample size distribution of the 166 empirical studies included in the review.

Missing data rates of reviewed studies

Figure 6: Histograms of the distributions of missingness rates for effects (blue bars) and costs (red bars) among the empirical studies which provided the information.
[1] Australian Pharmaceutical Benefits Advisory Committee (1992). Guidelines for the pharmaceutical industry on preparation of submissions to the Pharmaceutical Benefits Advisory Committee, including submissions involving economic analyses.
[2] Baio, G. (2014). Bayesian models for cost-effectiveness analysis in the presence of structural zero costs. Statistics in Medicine, 33(11):1900-1913.
[3] Basu, A. and Manca, A. (2012). Regression estimators for generic health-related quality of life and quality-adjusted life years. Medical Decision Making, 32(1):56-69.
[4] Black, W. C. (1990). The CE plane: a graphic representation of cost-effectiveness. Medical Decision Making, 10(3):212-214.
[5] Briggs, A. (1999). Handling uncertainty in economic evaluation. BMJ, 319(7202):120.
[6] Campbell, M. K. and Torgerson, D. J. (1999). Bootstrapping: estimating confidence intervals for cost-effectiveness ratios. QJM, 92(3):177-182.
[7] Claxton, K., Sculpher, M., McCabe, C., Briggs, A., Akehurst, R., Buxton, M., Brazier, J., and O'Hagan, T. (2005). Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Economics, 14(4):339-347.
[8] Claxton, K. P. and Sculpher, M. J. (2006). Using value of information analysis to prioritise health research. Pharmacoeconomics, 24(11):1055-1068.
[9] Daniels, M. J. and Hogan, J. W. (2008). Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis. CRC Press.
[10] Delwel, G. (2008). Guidance for outcomes research for the assessment of the cost-effectiveness of in-patient medicines. Diemen, the Netherlands: Healthcare Insurance Board.
[11] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.
[12] Efron, B. and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. CRC Press.
[13] Gabrio, A., Mason, A. J., and Baio, G. (2017). Handling missing data in within-trial cost-effectiveness analysis: a review with future recommendations. PharmacoEconomics-Open, 1(2):79-97.
[14] Gabrio, A., Mason, A. J., and Baio, G. (2019). A full Bayesian model to handle structural ones and missingness in economic evaluations from individual-level data. Statistics in Medicine, 38(8):1399-1420.
[15] Garattini, L. and Padula, A. (2017). Dutch guidelines for economic evaluation: 'from good to better' in theory but further away from pharmaceuticals in practice? Journal of the Royal Society of Medicine, 110(3):98-103.
[16] Hakkaart-van Roijen, L., Van der Linden, N., Bouwmans, C., Kanters, T., and Tan, S. S. (2015). Kostenhandleiding: methodologie van kostenonderzoek en referentieprijzen voor economische evaluaties in de gezondheidszorg. Diemen: Zorginstituut Nederland.
[17] Hjelmgren, J., Berggren, F., and Andersson, F. (2001). Health economic guidelines - similarities, differences and some implications. Value in Health, 4(3):225-250.
[18] Incerti, D., Thom, H., Baio, G., and Jansen, J. P. (2019). R you still using Excel? The advantages of modern software tools for health technology assessment. Value in Health, 22(5):575-579.
[19] ISPOR (2017). Pharmacoeconomic guidelines around the world. https://tools.ispor.org/peguidelines/. Accessed: 2021-07-23.
[20] Janssen, M., Pickard, A. S., Golicki, D., Gudex, C., Niewada, M., Scalone, L., Swinburn, P., and Busschbach, J. (2013). Measurement properties of the EQ-5D-5L compared to the EQ-5D-3L across eight patient groups: a multi-country study. Quality of Life Research, 22(7):1717-1727.
[21] Leurent, B., Gomes, M., Faria, R., Morris, S., Grieve, R., and Carpenter, J. R. (2018). Sensitivity analysis for not-at-random missing data in trial-based cost-effectiveness analysis: a tutorial. Pharmacoeconomics, 36(8):889-901.
[22] Manca, A., Hawkins, N., and Sculpher, M. J. (2005a). Estimating mean QALYs in trial-based cost-effectiveness analysis: the importance of controlling for baseline utility. Health Economics, 14(5):487-496.
[23] Manca, A., Rice, N., Sculpher, M. J., and Briggs, A. H. (2005b). Assessing generalisability by location in trial-based cost-effectiveness analysis: the use of multilevel models. Health Economics, 14(5):471-485.
[24] Mason, A. J., Gomes, M., Grieve, R., and Carpenter, J. R. (2018). A Bayesian framework for health economic evaluation in studies with missing data. Health Economics, 27(11):1670-1683.
[25] Molenberghs, G. and Kenward, M. (2007). Missing Data in Clinical Studies, volume 61. John Wiley & Sons.
[26] NICE (2013). Guide to the methods of technology appraisal 2013.
[27] Postma, M. and Krabbe, P. (2006). Farmaco-economisch onderzoek: doelmatigheid van geneesmiddelen. Geneesmiddelenbulletin, pages 133-140.
[28] Ramsey, S. D., Willke, R. J., Glick, H., Reed, S. D., Augustovski, F., Jonsson, B., Briggs, A., and Sullivan, S. D. (2015). Cost-effectiveness analysis alongside clinical trials II - an ISPOR good research practices task force report. Value in Health, 18(2):161-172.
[29] Rice, N. and Jones, A. (1997). Multilevel models and health economics. Health Economics, 6(6):561-575.
[30] Rubin, D. B. (2004). Multiple Imputation for Nonresponse in Surveys, volume 81. John Wiley & Sons.
[31] Stinnett, A. A. and Mullahy, J. (1998). Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Medical Decision Making, 18(2 suppl):S68-S80.
[32] Van Asselt, A. D., Van Mastrigt, G. A., Dirksen, C. D., Arntz, A., Severens, J. L., and Kessels, A. G. (2009). How to deal with cost differences at baseline. Pharmacoeconomics, 27(6):519-528.
[33] Van Buuren, S. (2018). Flexible Imputation of Missing Data. CRC Press.
[34] Van Hout, B. A., Al, M. J., Gordon, G. S., and Rutten, F. F. (1994). Costs, effects and C/E-ratios alongside a clinical trial. Health Economics, 3(5):309-319.
[35] Versteegh, M., Knies, S., and Brouwer, W. (2016a). From good to better: new Dutch guidelines for economic evaluations in healthcare. PharmacoEconomics, 34(11):1071-1074.
[36] Versteegh, M. M., Vermeulen, K. M., Evers, S. M., De Wit, G. A., Prenger, R., and Stolk, E. A. (2016b). Dutch tariff for the five-level version of EQ-5D. Value in Health, 19(4):343-352.
[37] Vickers, A. J. and Altman, D. G. (2001). Analysing controlled trials with baseline and follow up measurements. BMJ, 323(7321):1123-1124.
[38] Willan, A. R., Briggs, A. H., and Hoch, J. S. (2004). Regression methods for covariate adjustment and subgroup analysis for non-censored cost-effectiveness data. Health Economics, 13(5):461-475.
[39] Zellner, A. and Huang, D. S. (1962). Further properties of efficient estimators for seemingly unrelated regression equations. International Economic Review, 3(3):300-313.
[40] Zorginstituut Nederland (2016). Guideline for economic evaluations in healthcare. Diemen: ZIN.
[41] Zorgverzekeraars Nederland (2015). Beoordeling stand van de wetenschap en praktijk. Diemen: Zorginstituut Nederland.
| [] |
[
"SPATIALLY ADAPTIVE ONLINE PREDICTION OF PIECEWISE REGULAR FUNCTIONS",
"SPATIALLY ADAPTIVE ONLINE PREDICTION OF PIECEWISE REGULAR FUNCTIONS"
] | [
"Sabyasachi Chatterjee \nUniversity of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research\n117 Illini Hall Champaign, [email protected] 1, Homi Bhabha Road Colaba61820, 400005MumbaiILIndia\n",
"Subhajit Goswami [email protected] \nUniversity of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research\n117 Illini Hall Champaign, [email protected] 1, Homi Bhabha Road Colaba61820, 400005MumbaiILIndia\n"
] | [
"University of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research\n117 Illini Hall Champaign, [email protected] 1, Homi Bhabha Road Colaba61820, 400005MumbaiILIndia",
"University of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research\n117 Illini Hall Champaign, [email protected] 1, Homi Bhabha Road Colaba61820, 400005MumbaiILIndia"
] | [] | We consider the problem of estimating piecewise regular functions in an online setting, i.e., the data arrive sequentially and at any round our task is to predict the value of the true function at the next revealed point using the available data from past predictions. We propose a suitably modified version of a recently developed online learning algorithm called the sleeping experts aggregation algorithm. We show that this estimator satisfies oracle risk bounds simultaneously for all local regions of the domain. As concrete instantiations of the expert aggregation algorithm proposed here, we study an online mean aggregation and an online linear regression aggregation algorithm where experts correspond to the set of dyadic subrectangles of the domain. The resulting algorithms are near linear time computable in the sample size. We specifically focus on the performance of these online algorithms in the context of estimating piecewise polynomial and bounded variation function classes in the fixed design setup. The simultaneous oracle risk bounds we obtain for these estimators in this context provide new and improved (in certain aspects) guarantees even in the batch setting and are not available for the state of the art batch learning estimators. | 10.48550/arxiv.2203.16587 | [
"https://arxiv.org/pdf/2203.16587v1.pdf"
] | 247,839,377 | 2203.16587 | 84e57f9ac7c2abb88ad6e7acded5dd4e4ac9cdfb |
SPATIALLY ADAPTIVE ONLINE PREDICTION OF PIECEWISE REGULAR FUNCTIONS
Sabyasachi Chatterjee
University of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research
117 Illini Hall, Champaign, IL 61820, USA; 1 Homi Bhabha Road, Colaba, Mumbai 400005, India
Subhajit Goswami [email protected]
University of Illinois at Urbana-Champaign and Tata Institute of Fundamental Research
117 Illini Hall, Champaign, IL 61820, USA; 1 Homi Bhabha Road, Colaba, Mumbai 400005, India
SPATIALLY ADAPTIVE ONLINE PREDICTION OF PIECEWISE REGULAR FUNCTIONS
We consider the problem of estimating piecewise regular functions in an online setting, i.e., the data arrive sequentially and at any round our task is to predict the value of the true function at the next revealed point using the available data from past predictions. We propose a suitably modified version of a recently developed online learning algorithm called the sleeping experts aggregation algorithm. We show that this estimator satisfies oracle risk bounds simultaneously for all local regions of the domain. As concrete instantiations of the expert aggregation algorithm proposed here, we study an online mean aggregation and an online linear regression aggregation algorithm where experts correspond to the set of dyadic subrectangles of the domain. The resulting algorithms are near linear time computable in the sample size. We specifically focus on the performance of these online algorithms in the context of estimating piecewise polynomial and bounded variation function classes in the fixed design setup. The simultaneous oracle risk bounds we obtain for these estimators in this context provide new and improved (in certain aspects) guarantees even in the batch setting and are not available for the state of the art batch learning estimators.
1. Introduction. In this paper we revisit the classical problem of estimating piecewise regular functions from noisy evaluations. The theory discussed here is potentially useful for a general notion of piecewise regularity, although we will specifically give attention to the problem of estimating piecewise constant or piecewise polynomial functions of a given degree m ≥ 0 and bounded variation functions, which are known to be well approximable by such piecewise constant/polynomial functions. A classical aim is to design adaptive estimators which adapt optimally to the (unknown) number of pieces of the underlying piecewise constant/polynomial function. For example, if the true signal is (exactly or close to) a piecewise constant function with unknown number of pieces k, then it is desirable that the estimator attains a near parametric $\tilde{O}(k/N)$ rate of convergence, where N is the sample size and $\tilde{O}(\cdot)$ hides factors up to a fixed power of $\log N$, for all possible values of k. This desired notion is often established by demonstrating that an estimator satisfies a so called oracle risk bound trading off a squared error approximation term and a complexity term. Several nonparametric regression estimators, such as Wavelet Shrinkage (Donoho and Johnstone (1994), Donoho and Johnstone (1998)), Trend Filtering (Mammen and van de Geer (1997), Tibshirani et al. (2014), Tibshirani (2020)), Dyadic CART (Donoho (1997)) and the Optimal Regression Tree (Chatterjee and Goswami (2021a)), are known to attain such an oracle risk bound in the context of estimating piecewise constant/polynomial functions. In this context, two natural questions arise which we address in this paper.
1. Q1: Consider the online version of the problem of estimating piecewise constant or piecewise polynomial functions, i.e., the data arrive sequentially and at any round our task is to predict the value of the true function at the next revealed point using the available data from past predictions. Does there exist an estimator which attains an oracle risk bound (similar to what is known in the batch learning setting) in the online setting? This seems a natural and perhaps an important question given applications of online learning to forecasting trends.
2. Q2: The oracle risk bounds available for batch learning estimators in the literature imply a notion of adaptivity of the estimator. This adaptivity can be thought of as a global notion of adaptivity, as the risk bound controls the entire mean sum of squared errors of the estimator. If it can be shown that an estimator satisfies an oracle risk bound locally, simultaneously over several subregions of the domain, then this will imply a local/spatial notion of adaptivity. We explain this more in Section 1.2. This gives rise to our second question. Does there exist an estimator which attains an oracle risk bound simultaneously over several subregions of the domain in the online setting? Even in the batch learning setting, it is not known whether state of the art estimators such as Wavelet Shrinkage, Trend Filtering, Dyadic CART, Optimal Regression Tree satisfy such a simultaneous oracle risk bound.
The main purpose of this paper is to recognize, prove and point out that by using a suitably modified version of an online aggregation algorithm developed in the online learning community, it is possible to answer both the above questions in the affirmative.
1.1. Problem Setting. Throughout this paper, we will work with regression or signal denoising in the fixed lattice design setup where the underlying domain is a d dimensional grid $L_{d,n} = [n]^d := \{1, \ldots, n\}^d$. Here n can be thought of as the sample size per dimension and the total sample size is $N = n^d$. All of what we do here is meaningful in the regime where d is moderately low and fixed but n is large. The specific dimensions of interest are d = 1, 2 or 3, which are relevant for sequence, image or video denoising or forecasting respectively.
We will focus on the problem of noisy online signal prediction. Let K denote the d-dimensional grid $L_{d,n}$ and abbreviate |K| = N. Suppose $\theta^* \in \mathbb{R}^K$ is the true underlying signal and

(1.1) $y = \theta^* + \sigma \epsilon,$

where $\sigma > 0$ is unknown and $\epsilon$ consists of independent, mean zero sub-Gaussian entries with unit dispersion factor.
Consider the following online prediction protocol. At round t ∈ [1 : N ],
• An adversary reveals an index $\rho(t) \in K$ such that $\rho(t) \notin \{\rho(1), \ldots, \rho(t-1)\}$.
• The learner predicts $\hat{\theta}_{\rho(t)}$.
• The adversary reveals $y_{\rho(t)} = \theta^*_{\rho(t)} + \sigma\epsilon_{\rho(t)}$, a noisy version of $\theta^*_{\rho(t)}$.
Note that ρ turns out to be a possibly adversarially chosen permutation of the entries of K (see the paragraph preceding Theorem 3.1). At the end of N rounds, the predictions of the learner are measured with the usual expected mean squared loss criterion $\mathrm{MSE}(\hat{\theta}, \theta^*) := \frac{1}{N}\,\mathbb{E}\|\hat{\theta} - \theta^*\|^2$.
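For concreteness, the following minimal Python sketch renders this protocol; the `learner` object and its predict/update interface are illustrative assumptions of the sketch, not part of our formal setup, and Gaussian noise is used as one sub-Gaussian instance.

    import numpy as np

    def run_protocol(theta_star, sigma, learner, ordering, rng):
        """Play the online prediction game on a flattened grid K with |K| = N.

        theta_star : 1-d array holding the true signal on K.
        learner    : hypothetical object with predict(index) -> float
                     and update(index, y) methods.
        ordering   : permutation of range(N), possibly adversarial.
        """
        N = theta_star.size
        theta_hat = np.zeros(N)
        for t in range(N):
            s = ordering[t]                      # adversary reveals rho(t)
            theta_hat[s] = learner.predict(s)    # learner predicts
            y_s = theta_star[s] + sigma * rng.standard_normal()  # noisy value revealed
            learner.update(s, y_s)
        mse = np.mean((theta_hat - theta_star) ** 2)
        return theta_hat, mse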
Remark 1.1. Clearly, this online setting is a harder problem than batch learning, where we get to observe the whole array y at once and need to estimate $\theta^*$ by denoising y. Therefore, any online learning algorithm can also be used in the batch learning setup.
1.2. A Definition of Spatial Adaptivity. In nonparametric function estimation, the notion of spatial/local adaptivity for an estimator is a highly desirable property. Intuitively, an estimator is spatially/locally adaptive if it adapts to a notion of complexity of the underlying true function locally on every part of the spatial domain on which the true function is defined. It is accepted that wavelet shrinkage based estimators, trend filtering estimators or optimal decision trees are spatially adaptive in some sense or the other. However, the meaning of spatial adaptivity varies quite a bit in the literature. It seems that there is no universally agreed upon definition of spatial adaptivity. In this section, we put forward one way to give a precise definition of spatial adaptivity which is inspired from the literature on strongly adaptive online algorithms (see, e.g., Daniely et al. (2015), Adamskiy et al. (2012), Hazan and Seshadhri (2007)) developed in the online learning community. One of the goals of this paper is to convince the reader that with a fairly simple analysis of the proposed online learning algorithm it is possible to establish this notion of spatial adaptivity (precisely defined below) in a very general setting.
Several batch learning estimators $\hat\theta$ satisfy so called oracle risk bounds of the following form:
\[
\mathrm{MSE}(\hat\theta, \theta^*) \le \frac{1}{N} \inf_{\theta \in \mathbb{R}^N} \Big\{\|\theta - \theta^*\|^2 + \sigma^2\, k_{\mathrm{comp}}(\theta)\, p(\log n)\Big\},
\]
where $k_{\mathrm{comp}}$ denotes a complexity function defined on vectors in $\mathbb{R}^N$ and $p(\log n)$ is some power of $\log n$. This is a risk bound which implies that the estimator $\hat\theta$ adapts to the complexity $k_{\mathrm{comp}}(\theta^*)$ of the true signal.
For example, optimal decision trees such as the Dyadic CART and the ORT estimator satisfy an oracle risk bound (see Section 7 in Donoho (1997) and Theorem 2.1 in Chatterjee and Goswami (2021a)), so does the Trend Filtering estimator (Remark 3.1 in Guntuboyina et al. (2020)) and more classically, the wavelet shrinkage based estimators (see Section 1.5 in Donoho and Johnstone (1994)). Here, the complexity function k comp is directly proportional to the number of constant/polynomial pieces for univariate functions. For multivariate functions, k comp is still proportional to the number of constant/polynomial pieces; measured with respect to an appropriate class of rectangular partitions of the domain, see Chatterjee and Goswami (2021a).
However, from our point of view, this type of oracle risk bound, while being highly desirable and guaranteeing adaptivity against the complexity function k comp , is still a global adaptivity bound as the bound is for the mean squared error of the whole signal θ * . A good notion of local/spatial adaptivity should reveal the adaptivity of the estimator to the local complexity of the underlying signal. This naturally motivates us to make the following definition of spatial adaptivity.
We say that an estimator $\hat\theta$ is spatially adaptive with respect to the complexity parameter $k_{\mathrm{comp}}$ and with respect to a class $\mathcal S$ of subregions or subsets of the domain $L_{d,n}$ if the following risk bound holds simultaneously for every $S \in \mathcal S$:
\[
\mathrm{MSE}(\hat\theta_S, \theta^*_S) \le \frac{1}{|S|} \inf_{\theta \in \mathbb{R}^S} \Big\{\|\theta - \theta^*_S\|^2 + \sigma^2\, k_{\mathrm{comp}}(\theta)\, p(\log n)\Big\}.
\]
The above definition of spatial adaptivity makes sense because, if the above holds simultaneously for every $S \in \mathcal S$, then the estimator $\hat\theta$ estimates $\theta^*_S$ locally on S with a rate of convergence that depends on the local complexity $k_{\mathrm{comp}}(\theta^*_S)$. We will prove that our proposed online learning estimator is spatially adaptive in the sense described above with respect to a large class of subregions $\mathcal S$.
1.3. Summary of Our Results.
1. We formulate a slightly modified version of the so called sleeping experts aggregation algorithm for a general class of experts and a general class of comparator signals. We then state and prove a general simultaneous oracle risk bound for the proposed online prediction algorithm; see Theorem 3.1. This is the main result of this paper and is potentially applicable to several canonical estimation/prediction settings.
2. We specifically study an online mean aggregation algorithm as a special instance of our general algorithm and show that it satisfies our notion of spatial adaptivity (see Theorem 4.2) with respect to the complexity parameter that counts the size of the minimal rectangular partition of the domain $L_{d,n}$ on which the true signal $\theta^*$ is piecewise constant. Even in the easier offline setting, natural competitor estimators like Dyadic CART and ORT are not known to satisfy our notion of spatial adaptivity. Equipped with the spatially adaptive guarantee, we proceed to demonstrate that this online mean aggregation algorithm also attains spatially adaptive minimax rate optimal bounds (see Theorem 4.3) for the bounded variation function class in general dimensions. This is achieved by combining Theorem 4.2 with known approximation theoretic results. Such a spatially adaptive guarantee as in Theorem 4.3 is not known to hold for the TV Denoising estimator, the canonical estimator used for estimating bounded variation functions.
3. We then study an online linear regression aggregation algorithm based on the Vovk-Azoury-Warmuth (VAW) forecaster (see Vovk (1998), Azoury and Warmuth (2001)) as another instantiation of our general algorithm. We show that this algorithm satisfies our notion of spatial adaptivity (see Theorem 5.2) with respect to the complexity parameter which counts the size of the minimal rectangular partition of the domain $L_{d,n}$ on which the true signal $\theta^*$ is piecewise polynomial of any given fixed degree ≥ 1. As in the case of piecewise constant signals discussed above, natural competitor estimators such as Trend Filtering or higher order versions of Dyadic CART are not known to satisfy our notion of spatial adaptivity even in the easier offline setting. We then demonstrate that this online linear regression aggregation algorithm also attains spatially adaptive minimax rate optimal bounds (see Theorem 5.3) for univariate higher order bounded variation functions. This is again achieved by combining Theorem 5.2 with known approximation theoretic results. Such a spatially adaptive guarantee as in Theorem 5.3 is not known to hold for the state of the art Trend Filtering estimator.
1.4. Closely Related Works. In a series of recent papers Baby and Wang (2019), Baby and Wang (2020), Baby et al. (2021a), Baby and Wang (2021), the authors there have studied online estimation of univariate bounded variation and piecewise polynomial signals. In particular, the paper Baby et al. (2021a) brought forward sleeping experts aggregation algorithms Daniely et al. (2015) in the context of predicting univariate bounded variation functions. These works have been a source of inspiration for this current paper.
In a previous paper, Chatterjee and Goswami (2021a), of the current authors, offline estimation of piecewise polynomial and bounded variation functions was studied with particular attention to obtaining adaptive oracle risk bounds. The estimators considered in that paper were optimal decision trees such as Dyadic CART (Donoho (1997)) and related variants. After coming across the paper Baby et al. (2021a) we realized that, by using sleeping experts aggregation algorithms, one can obtain oracle risk bounds in the online setting which would then be applicable to online estimation of piecewise polynomial and bounded variation functions in general dimensions. In this sense, this work focussing on the online problem is a natural follow up of our previous work in the offline setting.
The main point of difference of this work with the papers Baby and Wang (2019), Baby and Wang (2020), Baby et al. (2021a) is that here we formulate a general oracle risk bound that works simultaneously over a collection of subsets of the underlying domain (see Theorem 3.1). We then show that this result can be used in conjunction with some approximation theoretic results (proved in Chatterjee and Goswami (2021a)) to obtain spatially adaptive near optimal oracle risk bounds for piecewise constant/polynomial and bounded variation functions in general dimensions. To the best of our understanding, the papers Baby and Wang (2019), Baby and Wang (2020), Baby et al. (2021a) have not addressed function classes beyond the univariate case, nor do they address the oracle risk bounds for piecewise constant/polynomial functions. But most importantly, it appears that our work is the first, in the online setting, to formulate a simultaneous oracle risk bound as in Theorem 3.1 and realize that one can deduce from this near optimal risk bounds for several function classes of recent interest. We hope that Theorem 3.1 will find applications for several other function classes (see Section 6.2 below).
2. Aggregation of Experts Algorithm.
In this section, we describe our main prediction algorithm. Our algorithm is a slight modification of the so called Strongly Adaptive online algorithms discussed in Hazan and Seshadhri (2007), Adamskiy et al. (2012), Daniely et al. (2015).
In this section K could be any general finite domain like $L_{d,n}$. An expert will stand for a set $S \subset K$ equipped with an online rule defined on S where, by an online rule $r^{(S)}$ corresponding to S, we mean a collection of (measurable) maps $r^{(S)}_{U,s} : \mathbb{R}^U \to \mathbb{R}$ indexed by $U \subset S$ and $s \in S \setminus U$. Operationally, the expert corresponding to a subset $S \in \mathcal S$ containing $\rho(t)$ predicts at the revealed point $\rho(t)$ the number given by

(2.1) $\hat{y}^{(S)}_{\rho(t)} = r^{(S)}_{\rho[1:(t-1)] \cap S,\, \rho(t)}\big(y_{\rho[1:(t-1)] \cap S}\big).$

The display (2.1) defines a vector $\hat{y}^{(S)} \in \mathbb{R}^S$ containing the predictions of the expert corresponding to the subset S. A family of experts corresponds to a sub-collection $\mathcal S$ of subsets of K. For any choice of online rules for every $S \in \mathcal S$, we refer to them collectively as an online rule r associated to $\mathcal S$.
As per the protocol described in Section 1, at the beginning of any round $t \in [N]$, where |K| = N, the data index $\rho(t)$ is revealed. At this point, the experts corresponding to subsets $S \in \mathcal S$ either not containing $\rho(t)$ or not having had any data index revealed previously become inactive. All other experts provide a prediction of their own. We are now ready to describe our aggregation algorithm $\mathcal A$ for a set of experts $\mathcal S$. The input to this algorithm is the data vector $y \in \mathbb{R}^K$, which is revealed sequentially in the order given by the permutation $\rho$. We denote the output of this algorithm by $\hat{y} \in \mathbb{R}^K$. At each round $t \in [N]$, the algorithm outputs a prediction $\hat{y}_{\rho(t)}$. Below and in the rest of the article, we use $T_a(x)$ to denote the truncation map $T_a(x) = \min\{\max\{x, -a\}, a\}$.
Aggregation algorithm. Parameters: subset of experts $\mathcal S$, online rule $r = \{r^{(S)} : S \in \mathcal S\}$ and truncation parameter $\lambda > 1$.
Initialize $w_{S,1} = \frac{1}{|\mathcal S|}$ for all $S \in \mathcal S$. For $t = 1, \ldots, N$:
1. Adversary reveals $\rho(t)$.
2. Choose the set of active experts $A_t$ as $\{S \in \mathcal S : \rho(t) \in S \text{ and } \rho(t') \in S \text{ for some } t' < t\}$ if $t > 1$, and $\{S \in \mathcal S : \rho(t) \in S\}$ if $t = 1$.
3. Predict $\hat{y}_{\rho(t)} = \sum_{S \in A_t} \bar{w}_{S,t}\, T_\lambda(\hat{y}^{(S)}_{\rho(t)})$ where $\bar{w}_{S,t} := \frac{w_{S,t}}{\sum_{S' \in A_t} w_{S',t}}$.
4. Update the weights so that $w_{S,t+1} = w_{S,t}$ for $S \notin A_t$ and
\[
w_{S,t+1} = \frac{w_{S,t}\, e^{-\alpha \ell_{S,t}}}{\sum_{S' \in A_t} w_{S',t}\, e^{-\alpha \ell_{S',t}}} \sum_{S' \in A_t} w_{S',t} \quad \text{otherwise},
\]
where $\ell_{S,t} := \big(T_\lambda(y_{\rho(t)}) - T_\lambda(\hat{y}^{(S)}_{\rho(t)})\big)^2$ and $\alpha := \frac{1}{8\lambda^2}$.
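For concreteness, here is a minimal Python sketch of steps 1-4 above; the expert interface (`covers`/`predict`/`update`) is an assumption of the sketch and not part of the formal description.

    import numpy as np

    def trunc(x, lam):
        # truncation map T_lambda(x) = min(max(x, -lam), lam)
        return min(max(x, -lam), lam)

    class Aggregator:
        """Sketch of the aggregation algorithm A(r, S, lambda).

        `experts` is a list of hypothetical objects with methods
        covers(s) -> bool, predict(s) -> float and update(s, y).
        """

        def __init__(self, experts, lam):
            self.experts = experts
            self.lam = lam
            self.alpha = 1.0 / (8.0 * lam ** 2)                 # alpha = 1 / (8 lambda^2)
            self.w = np.full(len(experts), 1.0 / len(experts))  # w_{S,1} = 1/|S|
            self.seen = np.zeros(len(experts), dtype=bool)
            self.t = 0

        def predict(self, s):
            self.t += 1
            # step 2: active experts contain s and (unless t = 1) saw a past index
            self.active = [i for i, e in enumerate(self.experts)
                           if e.covers(s) and (self.t == 1 or self.seen[i])]
            w_act = self.w[self.active]
            w_bar = w_act / w_act.sum()                         # normalized weights
            preds = np.array([trunc(self.experts[i].predict(s), self.lam)
                              for i in self.active])
            self.last_preds = preds
            return float(np.dot(w_bar, preds))                  # step 3

        def update(self, s, y):
            # step 4: exponential-weights update on active experts only; the
            # total active mass is preserved, inactive weights stay untouched
            losses = (trunc(y, self.lam) - self.last_preds) ** 2
            w_act = self.w[self.active]
            new = w_act * np.exp(-self.alpha * losses)
            self.w[self.active] = new * (w_act.sum() / new.sum())
            for i, e in enumerate(self.experts):
                if e.covers(s):
                    e.update(s, y)      # every covering expert receives the data
                    self.seen[i] = True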
Remark 2.1. This algorithm is similar to the sleeping experts aggregation algorithm discussed in Daniely et al. (2015) except that we apply this algorithm after truncating the data by the map T λ . Since, we are interested in (sub-Gaussian) unbounded errors, our data vector y need not be bounded which necessitates this modification. See Remark 2.3 below.
In the sequel we will refer to our aggregation algorithm as A(r, S, λ). The following proposition guarantees that the performance of the above algorithm is not much worse as compared to the performance of any expert S ∈ S; for any possible input data y.
Proposition 2.1 (error comparison against individual experts for arbitrary data). For any ordering $\rho$ of K and $S \in \mathcal S$, we have

(2.2) $\sum_{s \in S} (\hat{y}_s - y_s)^2 \le \sum_{s \in S} (y_s - \hat{y}^{(S)}_s)^2 + 8\lambda^2 \log(e|\mathcal S|) + 2\|y_S - \Pi_\lambda y_S\|^2 + 4\lambda^2 \sum_{s \in S} \mathbf{1}(|y_s| > \lambda),$

where $\Pi_\lambda z$, for any vector $z \in \mathbb{R}^A$ with $|A| < \infty$, denotes the $\ell_2$-projection of z onto the $\ell_\infty$-ball of radius $\lambda$, i.e., $(\Pi_\lambda z)_a = T_\lambda(z_a)$ for all $a \in A$.
Remark 2.2. A remarkable aspect of Proposition 2.1 is that it holds for any input data y ∈ R L d,n . In particular, no probabilistic assumption is necessary. In the terminology of online learning, this is said to be a prediction bound for individual sequences. Usually, such an individual sequence prediction bound is stated for bounded data; see, e.g., Hazan and Seshadhri (2007), Daniely et al. (2015). On the other hand, Proposition 2.1 holds for any data because we have introduced a truncation parameter in our aggregation algorithm.
Remark 2.3. The effect of truncation is clearly reflected in the last two terms of (2.2). The issue of unbounded data points in the noisy setup was dealt earlier in the literature -see, e.g., (Baby et al., 2021b, Theorem 5) -by choosing a value of λ so that all the datapoints lie within the interval [−λ, λ] with some prescribed (high) probability 1 − δ. The comparison bounds analogous to (2.2) (without the last two terms) then hold on this high probability event. However, the issue of how to choose λ such that all the datapoints lie within the interval [−λ, λ] is not trivial to resolve unless one knows something about the data generating mechanism. There is a simple way to get around this problem in the offline version by setting λ = max j∈[N ] |y j | (see (Baby et al., 2021b, Remark 8)) which is obviously not possible in the online setting. Our version of the algorithm and the accompanying Proposition 2.1 provide an explicit bound on the error due to truncation for arbitrary y ∈ R K in the online problem.
To the best of our knowledge, such a bound was not available in the literature in the current setup. An operational implication of Proposition 2.1 is that even if a few data points exceed λ in absolute value by not too great a margin, we still get effective risk bounds.
Proof. The proof is similar to the proof of regret bounds for exponentially weighted average forecasters with exp-concave loss functions (see, e.g., Hazan and Seshadhri (2007), Cesa-Bianchi and Lugosi (2006)). However, we need to take some extra care in order to deal with our particular activation rule (see step 2) and obtain the error terms as in (2.2).
Let us begin with the observation that the function $e^{-\eta(x-z)^2}$, where $\eta > 0$, is concave in x for all $x, z \in [-1/\sqrt{8\eta},\, 1/\sqrt{8\eta}]$.
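To verify this observation, note that
\[
\frac{d^2}{dx^2}\, e^{-\eta(x-z)^2} = \big(4\eta^2(x-z)^2 - 2\eta\big)\, e^{-\eta(x-z)^2} \le 0 \iff (x-z)^2 \le \frac{1}{2\eta},
\]
and for $x, z \in [-1/\sqrt{8\eta},\, 1/\sqrt{8\eta}]$ one has $|x-z| \le 2/\sqrt{8\eta} = 1/\sqrt{2\eta}$, so the condition indeed holds.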
Clearly, this condition is satisfied for $\eta = \alpha$ and all $x, z \in [-\lambda, \lambda]$. Therefore, since $\hat{y}_{\rho(t)}$ is an average of the $T_\lambda(\hat{y}^{(S)}_{\rho(t)})$'s (which by definition lie in $[-\lambda, \lambda]$) with respective weights $\bar{w}_{S,t}$ (see step 3 of the algorithm), we can write using Jensen's inequality
\[
\exp\Big(-\alpha\big(\hat{y}_{\rho(t)} - T_\lambda(y_{\rho(t)})\big)^2\Big) \ge \sum_{S \in A_t} \bar{w}_{S,t}\, e^{-\alpha \ell_{S,t}},
\]
where we recall that $\ell_{S,t} := \big(T_\lambda(y_{\rho(t)}) - T_\lambda(\hat{y}^{(S)}_{\rho(t)})\big)^2$.
Fix a subset $S \in \mathcal S$ such that $S \in A_t$. Taking logarithms on both sides and using the particular definition of the updates in step 4 of $\mathcal A$, we get
\[
\big(\hat{y}_{\rho(t)} - T_\lambda(y_{\rho(t)})\big)^2 - \ell_{S,t} \le \alpha^{-1} \log\Bigg(\frac{e^{-\alpha \ell_{S,t}}}{\sum_{S' \in A_t} w_{S',t}\, e^{-\alpha \ell_{S',t}}} \sum_{S' \in A_t} w_{S',t}\Bigg) = \alpha^{-1} \log\Big(\frac{w_{S,t+1}}{w_{S,t}}\Big).
\]
However, since $w_{S,t+1} = w_{S,t}$, and hence the logarithm is 0, whenever $S \notin A_t$ (see step 4 of the algorithm), we can add the previous bound over all t such that $S \in A_t$ to deduce:
\[
\sum_{t:\, S \in A_t} \big(\hat{y}_{\rho(t)} - T_\lambda(y_{\rho(t)})\big)^2 \le \sum_{t:\, S \in A_t} \ell_{S,t} + \alpha^{-1} \sum_{t \in [N]} \log\Big(\frac{w_{S,t+1}}{w_{S,t}}\Big) \le \sum_{t:\, S \in A_t} \ell_{S,t} + \alpha^{-1} \log(e|\mathcal S|),
\]
where in the final step we used the facts that $w_{S,1} = \frac{1}{|\mathcal S|}$ and $w_{S,N+1} \le 1$. Now it follows from our activation rule in step 2 that $S \setminus \{\rho(t) : S \in A_t\}$ is at most a singleton, and hence
\[
\sum_{t:\, \rho(t) \in S} \big(\hat{y}_{\rho(t)} - T_\lambda(y_{\rho(t)})\big)^2 \le \sum_{t:\, S \in A_t} \ell_{S,t} + 4\lambda^2 + \alpha^{-1} \log(e|\mathcal S|),
\]
where we used the fact that both $\hat{y}_{\rho(t)}$ and $T_\lambda(y_{\rho(t)})$ lie in $[-\lambda, \lambda]$. We can now conclude the proof from the above display by plugging $x = y_{\rho(t)}$ and $z = \hat{y}_{\rho(t)}$ into
\[
(x - z)^2 \le (T_\lambda(x) - z)^2 + 2(T_\lambda(x) - x)^2 + 4\lambda^2\, \mathbf{1}(|x| > \lambda), \quad \text{when } |z| \le \lambda,
\]
and also $x = y_s$ and $z = \hat{y}^{(S)}_s$ into $(T_\lambda(x) - T_\lambda(z))^2 \le (x - z)^2$, upon recalling the fact that $|\hat{y}_s| \le \lambda$ for all $s \in K$.
3. A General Simultaneous Oracle Risk Bound. In this section we will state a general simultaneous oracle risk bound for online prediction of noisy signals. As in the last section, K here refers to an arbitrary finite domain. Recall from our setting laid out in the introduction that we observe a data vector $y \in \mathbb{R}^K$ in some order, where we can write $y = \theta^* + \sigma\epsilon$, with $\sigma > 0$ unknown and the $\epsilon_t$'s independent, mean zero sub-Gaussian variables with unit dispersion factor. For specificity, we assume in the rest of the paper that
\[
\max\big(\mathbb{P}[\epsilon_t \ge x],\, \mathbb{P}[\epsilon_t \le -x]\big) \le 2e^{-x^2/2}, \quad \text{for all } x \ge 0 \text{ and } t \in K.
\]
Let us emphasize that the constant 2 is arbitrary and changing the constant would only impact the absolute constants in our main result, i.e., Theorem 3.1 below. Our focus here is on estimating signals $\theta^*$ that are piecewise regular on certain sets, as we now explain. Let $\mathcal S$ be a family of subsets of K (cf. the family of experts $\mathcal S$ in our aggregation algorithm) and for each $S \in \mathcal S$, let $\mathcal{F}_S \subset \mathbb{R}^S$ denote a class of functions defined on S.
We now define P to be the set of all partitions of K all of whose constituent sets are elements of S. For any partition P ∈ P, define the class of signals
(3.1) $\Theta_P = \Theta_P(\mathcal{F}_S : S \in \mathcal S) = \{\theta \in \mathbb{R}^K : \theta_S \in \mathcal{F}_S\ \forall S \in P\}.$
In this section, when we mention a piecewise regular function, we mean a member of the set Θ P for a partition P ∈ P with not too many constituent sets.
For example, in this paper we will be specifically analyzing the case when K = L d,n = [n] d is the d dimensional lattice or grid, S is the set of all (dyadic) rectangles of L d,n and F S is the set of polynomial functions of a given degree m ≥ 0 on the rectangular domain S. Then, P becomes the set of all (dyadic) rectangular partitions of L d,n and Θ P becomes the set of piecewise polynomial functions on the partition P .
Coming back to the general setting, to describe our main result, we need to define an additional quantity which is a property of the set of online rules $\{r^{(S)} : S \in \mathcal S\}$. For any partition $P \in \mathbf{P}$ and any $\theta \in \Theta_P$, let $R(y, \theta, P) = R(r, y, \theta, P) > 0$ be defined as

(3.2) $R(y, \theta, P) = \sup_\rho \sum_{S \in P} \Big[\sum_{t:\, \rho(t) \in S} \big(y_{\rho(t)} - \hat{y}^{(S)}_{\rho(t)}\big)^2 - \|y_S - \theta_S\|^2\Big]$

(recall the definition of $\hat{y}^{(S)}_{\rho(t)}$ from (2.1)). Clearly $R(y, \theta, P)$ is a random variable and we denote its expected value by $R(\theta, P)$.
Here is how we can interpret $R(y, \theta, P)$. Given a partition $P \in \mathbf{P}$, consider the following prediction rule $r_P$. For concreteness, let the partition be $P = (S_1, \ldots, S_k)$. At round t, there will be exactly one index $i \in [k]$ such that $\rho(t)$ is in $S_i$; the prediction rule $r_P$ then predicts the value $\hat{y}^{(S_i)}_{\rho(t)}$. In other words, $r_P$ uses the prediction of the expert corresponding to the subset $S_i$ in this round. Also, consider the prediction rule $r_\theta$ which at round t predicts $\theta_{\rho(t)}$ for any fixed vector $\theta \in \Theta_P$. If we have the extra knowledge that the true signal $\theta^*$ indeed lies in $\Theta_P$, then it may be natural to use the online learning rule $r_P$ if the experts are good at predicting signals (locally on the domain S) which lie in $\mathcal{F}_S$. We can now interpret $R(y, \theta, P)$ as the excess squared loss or regret (when the array revealed sequentially is y) of the online rule $r_P$ as compared to the online rule $r_\theta$.
In the sequel we use $\|\theta\|_\infty$ to denote the $\ell_\infty$-norm of the vector $\theta$. We also extend the definitions of $\mathbf{P}$ and $\Theta_P$ (see around (3.3)) to include partitions of subsets of K comprising only sets from $\mathcal S$. In particular, for any $T \subset K$, define $\mathbf{P}_T$ to be the set of all partitions P of T all of whose constituent sets are elements of $\mathcal S$. For any partition $P \in \mathbf{P}_T$, define the class of signals

(3.3) $\Theta_P = \Theta_P(\mathcal{F}_S : S \in \mathcal S, S \subset T) = \{\theta \in \mathbb{R}^T : \theta_S \in \mathcal{F}_S\ \forall S \in P\}.$
Let us now say a few words about the choice of the ordering $\rho$, which we can generally think of as a stochastic process taking values in K. We call $\rho$ non-anticipating if, conditionally on $(\rho[1:t], y_{\rho[1:(t-1)]})$, $\epsilon_{\rho(t)}$ is distributed as $\epsilon_s$ on the event $\{\rho(t) = s\}$ for any $t \in [N]$. Such orderings include deterministic orderings and orderings that are independent of the data. But more generally, any ordering where $\rho(t)$ is allowed to depend on the data only through $y_{\rho[1:(t-1)]}$ is non-anticipating. In particular, an adversary can choose to reveal the next index after observing all of the past data and our actions.
We are now ready to state our general result.
Theorem 3.1 (General Simultaneous Oracle Risk Bound). Let K be a finite set. Fix a set of experts $\mathcal S$ equipped with an online learning rule r. For each $S \in \mathcal S$, fix $\mathcal{F}_S \subset \mathbb{R}^S$ to be a class of functions defined on S. Suppose y is generated from the model (1.1) and is input to the algorithm $\mathcal A(r, \mathcal S, \lambda)$, whose output we denote by $\hat\theta$. Let T be any subset of K. There exist absolute constants $c \in (0,1)$ and $C > 1$ such that for any non-anticipating ordering $\rho$ of K,

(3.4) $\mathbb{E}\|\hat{y}_T - \theta^*_T\|^2 \le \inf_{P \in \mathbf{P}_T,\, \theta \in \Theta_P} \Big\{\|\theta^*_T - \theta\|^2 + C\lambda^2 |P| \log(e|\mathcal S|) + R(\theta, P) + C\lambda^2\, \big|\{s \in T : |\theta_s| > \lambda\}\big|\Big\} + C\|\theta^*_T - \Pi_\lambda \theta^*_T\|^2 + C(\sigma^2 + \lambda^2) \sum_{s \in T} e^{-c\, \min\big(\frac{|\theta^*_s - \lambda|^2}{\sigma^2},\, \frac{|\theta^*_s + \lambda|^2}{\sigma^2}\big)}.$

In particular, there exists an absolute constant $C > 1$ such that for $\lambda \ge C(\sigma\sqrt{\log |T|} \vee \|\theta^*\|_\infty)$, one has

(3.5) $\mathrm{MSE}(\hat\theta_T, \theta^*_T) \le \inf_{P \in \mathbf{P}_T,\, \theta \in \Theta_P} \frac{1}{|T|}\Big\{\|\theta^*_T - \theta\|^2 + C\big(\lambda^2 |P| \log(e|\mathcal S|) + R(\theta, P)\big)\Big\} + \frac{\sigma^2 + \lambda^2}{|T|^2}.$

Remark 3.1. Our truncation threshold $C(\sigma\sqrt{\log |T|} \vee \|\theta^*\|_\infty)$
is comparable to the threshold given in (Baby et al., 2021b, Theorem 5) for Gaussian errors.
We now explain various features and aspects of the above theorem.
• The reader can read the bound in (3.5) as
\[
\mathrm{MSE}(\hat\theta_T, \theta^*_T) \le \inf_{P \in \mathbf{P}_T,\, \theta \in \Theta_P} \Big\{\underbrace{\frac{1}{|T|}\|\theta^*_T - \theta\|^2}_{T_1} + \underbrace{\frac{C\lambda^2 |P| \log(e|\mathcal S|)}{|T|}}_{T_2} + \underbrace{\frac{C\, R(\theta, P)}{|T|}}_{T_3}\Big\} + \text{lower order term}.
\]
Indeed, the only importance of the factor $\frac{1}{|T|^2}$ in the last term of (3.5) is that it is $o(\frac{1}{|T|})$, i.e., of lower order than the principal term. In fact, by suitably increasing the constant C, we can get any given power of |T| in the denominator.
• To understand and interpret the above bound, it helps to first consider T = K and then fix a partition P ∈ P K = P and a piecewise regular signal θ ∈ Θ P . We also keep in mind two prediction rules, the first one is the online rule r P and the second one is r θ (both described before the statement of Theorem 3.1). The bound inside the infimum is a sum of three terms, T 1 , T 2 and T 3 as in the last display.
1. The first term T 1 is simply the squared distance between θ and θ * . This term is obviously small or big depending on whether θ is close or far from θ * .
2. The second term captures the complexity of the partition P, where the complexity is simply the number of constituent sets/experts |P| multiplied by the logarithm of the total number of experts, $\log(e|\mathcal S|)$. The reader can think of this term as the ideal risk bound: nothing better is achievable when the true signal $\theta^*$ is piecewise regular on P. This term is small or big depending on whether |P| is small or big.
3. The third term T 3 is R(θ, P ) which can be interpreted as the expected excess squared loss or regret of the online rule r P as compared to the prediction rule r θ . This term is small or big depending on how good or bad is the online rule r P compared to the prediction rule r θ .
• Our bound is an infimum over the sum of three terms T 1 , T 2 , T 3 for any P ∈ P and θ ∈ Θ P which is why we can think of this bound as an oracle risk bound in the following sense. Consider the case when T = K and θ * lies in Θ P * for some P * which is of course unknown. In this case, an oracle who knows the true partition P * might naturally trust experts locally and use the online prediction rule r P * . In this case, our bound reduces (by setting P = P * , θ = θ * ) to the MSE incurred by this oracle prediction rule plus the ideal risk |P * | log e|S| term which is unavoidable. Because of the term T 1 , this argument holds even if θ * does not exactly lie in Θ P * but is very close to it. To summarize, our MSE bound ensures that we nearly perform as well as an oracle prediction rule which knows the true partition corresponding to the target signal θ * .
• The term $T_3$ in the MSE bound in (3.5) behooves us to find experts with good online prediction rules. If each expert $S \in \mathcal S$ is indeed equipped with a good prediction rule, then under the assumption that $\theta^*$ is exactly (or is close to) piecewise regular on a partition $P^* \in \mathbf{P}$, the term $T_3 = R(\theta^*, P^*)$ will be small and our bound will thus be better. This is what we do in our example applications, where we use provably good online rules such as the running mean or the online linear regression forecaster of Vovk (1998). In fact, in each of the examples that we discuss subsequently in this paper, we bound the term $R(\theta, P)$ in two stages. First, we write
\[
\mathbb{E}\, R(y, \theta, P) \le |P|\; \mathbb{E} \sup_{\rho,\, S \in P} \Big[\sum_{t:\, \rho(t) \in S} \big(y_{\rho(t)} - \hat{y}^{(S)}_{\rho(t)}\big)^2 - \|y_S - \theta_S\|^2\Big].
\]
Then in the second stage we bound the expectation on the right hand side above by a log factor. Thus there is no real harm if the reader thinks of $R(\theta, P)$ as also being of order |P| up to log factors, which is the ideal and unavoidable risk as mentioned before.
• A remarkable feature of Theorem 3.1 is that the MSE bound in (3.5) holds simultaneously for all sets T ⊂ K. Therefore, the interpretation that our prediction rule performs nearly as well as an oracle prediction rule holds locally for every subset or subregion T of the domain K. This fact makes our algorithm provably spatially adaptive to the class of all subsets of K with respect to the complexity parameter proportional to |P | in the sense described in Section 1.2. The implications of this will be further discussed when we analyze online prediction of specific function classes in the next two sections.
Proof of Theorem 3.1. Since $y = \theta^* + \sigma\epsilon$, we can write for any $S \in \mathcal S$,
\[
\|y_S - \hat{y}_S\|^2 = \|\hat{y}_S - \theta^*_S\|^2 + 2\sigma\langle \epsilon_S,\, \theta^*_S - \hat{y}_S\rangle + \sigma^2 \|\epsilon_S\|^2.
\]
However, since $\rho$ is non-anticipating, $\hat{y}_{\rho(t)}$ is measurable relative to $(\rho[1:t], y_{\rho[1:(t-1)]})$ and the $\epsilon_s$'s have mean zero, it follows from the previous display that

(3.6) $\mathbb{E}\|y_S - \hat{y}_S\|^2 = \mathbb{E}\|\hat{y}_S - \theta^*_S\|^2 + \sigma^2\, \mathbb{E}\|\epsilon_S\|^2.$
On the other hand, adding up the upper bound on $\|\hat{y}_S - y_S\|^2$ given by Proposition 2.1 over all $S \in P$ for some $P \in \mathbf{P}$, we get
\[
\|\hat{y}_T - y_T\|^2 \le \sum_{S \in P} \sum_{t:\, \rho(t) \in S} \big(y_{\rho(t)} - \hat{y}^{(S)}_{\rho(t)}\big)^2 + 2\|y_T - \Pi_\lambda y_T\|^2 + 8\lambda^2 |P| \log(e|\mathcal S|) + 4\lambda^2 \sum_{s \in T} \mathbf{1}\{|y_s| > \lambda\}.
\]
Now taking expectations on both sides and using the definition of $R(\theta, P)$ from (3.2), we can write

(3.7) $\mathbb{E}\|\hat{y}_T - y_T\|^2 \le \mathbb{E}\|y_T - \theta\|^2 + R(\theta, P) + 8\lambda^2 |P| \log(e|\mathcal S|) + 2\,\mathbb{E}\|y_T - \Pi_\lambda y_T\|^2 + 4\lambda^2 \sum_{s \in T} \mathbb{P}(|y_s| > \lambda).$
Since the $\epsilon_s$'s have mean zero, we get by expanding $\|y_T - \theta\|^2$,
\[
\mathbb{E}\|y_T - \theta\|^2 \le \|\theta^*_T - \theta\|^2 + \sigma^2\, \mathbb{E}\|\epsilon_T\|^2.
\]
Plugging this bound into the right hand side of (3.7), we obtain
\[
\mathbb{E}\|\hat{y}_T - y_T\|^2 \le \|\theta^*_T - \theta\|^2 + \sigma^2\, \mathbb{E}\|\epsilon_T\|^2 + R(\theta, P) + 8\lambda^2 |P| \log(e|\mathcal S|) + 2\,\mathbb{E}\|y_T - \Pi_\lambda y_T\|^2 + 4\lambda^2 \sum_{s \in T} \mathbb{P}(|y_s| > \lambda).
\]
Together with (3.6), this gives us
\[
\mathbb{E}\|\hat{y}_T - \theta^*_T\|^2 \le \|\theta^*_T - \theta\|^2 + R(\theta, P) + 8\lambda^2 |P| \log(e|\mathcal S|) + 2\,\mathbb{E}\|y_T - \Pi_\lambda y_T\|^2 + 4\lambda^2 \sum_{s \in T} \mathbb{P}(|y_s| > \lambda).
\]
Minimizing the right hand side in the above display over all $P \in \mathbf{P}_T$ and $\theta \in \Theta_P$, we get

(3.8) $\mathbb{E}\|\hat{y}_T - \theta^*_T\|^2 \le \inf_{P \in \mathbf{P}_T,\, \theta \in \Theta_P} \Big\{\|\theta^*_T - \theta\|^2 + R(\theta, P) + 8\lambda^2 |P| \log(e|\mathcal S|)\Big\} + 2\,\mathbb{E}\|y_T - \Pi_\lambda y_T\|^2 + 4\lambda^2 \sum_{s \in T} \mathbb{P}(|y_s| > \lambda).$
It only remains to verify the bounds on the error terms due to truncation. Since $|\hat{y}_s| \le \lambda$ by the design of our algorithm, we have

(3.9) $(\hat{y}_s - \theta^*_s)^2 \le 2(\theta^*_s - T_\lambda(\theta^*_s))^2 + 8\lambda^2$ for all $s \in T$.

We will apply this naive bound whenever $|\theta^*_s| > \lambda$. So let us assume that s is such that $|\theta^*_s| \le \lambda$. Let us start by writing
\[
\mathbb{E}(y_s - T_\lambda(y_s))^2 = I_+ + I_-,
\]
where, with $x_+ := \max(x, 0)$ and $x_- := -\min(x, 0)$,
\[
I_+ := \mathbb{E}(y_s - \lambda)_+^2 = 2\sigma^2 \int_{x > \frac{\lambda}{\sigma}} \Big(x - \frac{\lambda}{\sigma}\Big)\, \mathbb{P}\Big[\epsilon_s > x - \frac{\theta^*_s}{\sigma}\Big]\, dx, \quad \text{and}
\]
\[
I_- := \mathbb{E}(y_s + \lambda)_-^2 = -2\sigma^2 \int_{x < -\frac{\lambda}{\sigma}} \Big(x + \frac{\lambda}{\sigma}\Big)\, \mathbb{P}\Big[\epsilon_s < x - \frac{\theta^*_s}{\sigma}\Big]\, dx.
\]
In writing these expressions we used the standard fact that
\[
\mathbb{E}(X - a)_+^2 = 2\int_{x > a} (x - a)\, \mathbb{P}[X > x]\, dx.
\]
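This identity is a consequence of Tonelli's theorem:
\[
\mathbb{E}(X-a)_+^2 = \mathbb{E}\int_0^\infty 2t\, \mathbf{1}\{(X-a)_+ > t\}\, dt = 2\int_0^\infty t\, \mathbb{P}[X > a + t]\, dt = 2\int_{x > a} (x-a)\, \mathbb{P}[X > x]\, dx,
\]
where the last step is the substitution $x = a + t$.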
We deal with $I_+$ first. Since $\theta^*_s \le \lambda$ and $\epsilon_s$ has sub-Gaussian decay around 0 with unit dispersion factor, we can bound $I_+$ as follows:

(3.10) $I_+ = 2\sigma^2 \int_{x > \frac{\lambda - \theta^*_s}{\sigma}} \Big(x - \frac{\lambda - \theta^*_s}{\sigma}\Big)\, \mathbb{P}[\epsilon_s > x]\, dx \le C\sigma^2\, e^{-\frac{|\theta^*_s - \lambda|^2}{2\sigma^2}},$

where C is an absolute constant. Similarly we can deduce

(3.11) $I_- \le C\sigma^2\, e^{-\frac{|\theta^*_s + \lambda|^2}{2\sigma^2}}.$

On the other hand, for any $|\theta^*_s| \le \lambda$, we have

(3.12) $\mathbb{P}[|y_s| > \lambda] \le \mathbb{P}\Big[\epsilon_s > \frac{\lambda - \theta^*_s}{\sigma}\Big] + \mathbb{P}\Big[\epsilon_s < \frac{-\lambda - \theta^*_s}{\sigma}\Big] \le 4\, e^{-\frac{|\theta^*_s - \lambda|^2 \wedge |\theta^*_s + \lambda|^2}{2\sigma^2}}.$
Finally we plug the estimates (3.10), (3.11) and (3.12) into (3.8) when |θ * s | ≤ λ, and the estimate (3.9) and the trivial upper bound on probabilities when |θ * s | > λ to get (3.4). (3.5) then follows immediately from (3.4) by choosing C large enough.
In the next few sections we will introduce and discuss online prediction rules for several classes of functions. In each case we will apply Theorem 3.1 to derive the corresponding risk bounds. The proofs of all the results are given in the Appendix section.
4. Online Mean Aggregation over Dyadic Rectangles (OMADRE). In this section, we will specifically study a particular instantiation of our general algorithm (laid out in Section 2) which we tentatively call the Online Mean Aggregation over Dyadic Rectangles estimator/predictor (OMADRE). Here, $K = L_{d,n}$ and the set of experts corresponds to the set of all dyadic rectangles of $L_{d,n}$. Some precise definitions are given below.
An axis aligned rectangle, or simply a rectangle, R is a subset of $L_{d,n}$ which is a product of intervals, i.e., $R = \prod_{i=1}^d [a_i, b_i]$ for some $1 \le a_i \le b_i \le n$, $i \in [d]$. A sub-interval of [1, n] is called dyadic if it is of the form $((a-1)2^s, a2^s]$ for some integers $0 \le s \le k$ and $1 \le a \le 2^{k-s}$, where we assume $n = 2^k$ for simplicity of exposition. We call a rectangle dyadic if it is a product of dyadic intervals. Now we take our experts to be the dyadic sub-rectangles of $L_{d,n}$, i.e., in the terminology of Section 3, $\mathcal S$ is the set of dyadic sub-rectangles of $L_{d,n}$. We set $\mathcal{F}_S = \mathrm{span}(\{1\})$ -- the space of all constant functions on S -- for all $S \in \mathcal S$. We also let $\mathbf{P}_{dp}$ be the set of all dyadic rectangular partitions of $L_{d,n}$, where a (dyadic) rectangular partition P is a partition of $L_{d,n}$ comprising only (respectively, dyadic) rectangles. Since there are at most 2n dyadic sub-intervals of [n], we note that

(4.1) $|\mathcal S| = (2n)^d = 2^d N.$
Under this setting, for any partition P of L d,n the set Θ P refers to the set of all arrays θ ∈ R L d,n such that θ is constant on each constituent set of P.
Finally we come to the choice of our online rule r. It is very natural to consider the online averaging rule r defined as:
(4.2) $r^{(S)}_{U,s}\big((y_u : u \in U)\big) = \overline{y}_U$ for all $U \subset S \in \mathcal S$ and $s \in S \setminus U$,

where $\overline{y}_U$ denotes the average of the entries of $y_U$. By convention, we set $\overline{y}_\emptyset = 0$.

Lemma 4.1 (Computational complexity of OMADRE). There exists an absolute constant C > 0 such that the computational complexity of the OMADRE is bounded by $C N (\log_2 2n)^d$.

Remark 4.1. The above computational complexity is near linear in the sample size N but exponential in the dimension d. Therefore, the estimators we are considering here are very efficiently computable in low dimensions, which are the main cases of interest here.
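To make the dyadic bookkeeping behind Lemma 4.1 concrete, here is a small Python sketch (an illustration of ours, assuming n is a power of two): each grid index lies in exactly $(\log_2 n + 1)^d$ dyadic rectangles, and each such rectangle keeps a running mean of its past observations, which is exactly the online averaging rule (4.2).

    import itertools

    def dyadic_intervals(a, n):
        # all dyadic sub-intervals ((b - 1) * 2**s, b * 2**s] of [1, n] containing a
        out, s = [], 0
        while 2 ** s <= n:
            out.append((s, (a - 1) // 2 ** s + 1))  # unique interval at scale s
            s += 1
        return out

    def dyadic_rectangles(idx, n):
        # all dyadic rectangles of the grid [1, n]^d containing the index idx
        return list(itertools.product(*(dyadic_intervals(a, n) for a in idx)))

    class RunningMeanExperts:
        """One running-mean expert per dyadic rectangle, created lazily."""
        def __init__(self, n):
            self.n, self.stats = n, {}   # rectangle -> (sum, count)

        def predictions(self, idx):
            # prediction of each expert containing idx: mean of its past data
            out = {}
            for R in dyadic_rectangles(idx, self.n):
                s, c = self.stats.get(R, (0.0, 0))
                if c > 0:
                    out[R] = s / c
            return out

        def update(self, idx, y):
            for R in dyadic_rectangles(idx, self.n):
                s, c = self.stats.get(R, (0.0, 0))
                self.stats[R] = (s + y, c + 1)

Since each revealed index touches only $(\log_2 2n)^d$ rectangles, N rounds cost $O(N(\log_2 2n)^d)$ operations, matching Lemma 4.1.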
The OMADRE, being an instance of our general algorithm will satisfy our simultaneous oracle risk bound in Theorem 3.1. This oracle risk bound can then be used to derive risk bounds for several function classes of interest. We now discuss two function classes of interest for which the OMADRE performs near optimally.
4.1. Result for Rectangular Piecewise Constant Functions in General Dimensions. Suppose $\theta^*$ is piecewise constant on some unknown rectangular partition $P^*$ of the domain $K = L_{d,n}$. For concreteness, let the partition be $P^* = (R_1, \ldots, R_k)$. An oracle predictor $\hat\theta^{(oracle)}$ -- which knows the minimal rectangular partition $(R_1, \ldots, R_k)$ of $\theta^*$ exactly -- can simply use the online averaging prediction rule given in (4.2) separately within each of the rectangles $(R_1, \ldots, R_k)$. By a basic result about online mean prediction (see Lemma 8.2), it can be shown that the MSE of this oracle predictor is bounded by $O\big(\frac{k\, \|\theta^*\|_\infty^2 \log n}{N}\big)$. In words, the MSE of this oracle predictor scales (up to a log factor, which is necessary) like the number of constant pieces of $\theta^*$ divided by the sample size N, which is precisely the parametric rate of convergence.
A natural question is whether there exists an online prediction rule which a) adaptively achieves a MSE bound similar to the oracle prediction rule b) is computationally efficient. In the batch set up, this question is classical (especially in the univariate setting when d = 1) and has recently been studied thoroughly in general dimensions in Chatterjee and Goswami (2021a). It has been shown there that the Dyadic CART estimator achieves this near (up to log factors) oracle performance when d ≤ 2 and a more computationally intensive version called the Optimal Regression Tree estimator (ORT) can achieve this near oracle performance in all dimensions under some assumptions on the true underlying partition. However, we are not aware of this question being explicitly answered in the online setting. We now state a theorem saying that the OMADRE essentially attains this objective. Below we denote the set of all partitions of L d,n into rectangles by P all . Note that the set P dp is strictly contained in the set P all .
Theorem 4.2 (Oracle Inequality for Arbitrary Rectangular Partitions). Let T be any subset of K and let $\hat\theta^{OM}$ denote the OMADRE predictor. There exists an absolute constant C such that for any $\lambda \ge C(\sigma\sqrt{\log N} \vee \|\theta^*\|_\infty)$ and any non-anticipating ordering $\rho$ of $L_{d,n}$,

(4.3) $\mathbb{E}\, \frac{1}{|T|}\|\hat\theta^{OM}_T - \theta^*_T\|^2 \le \inf_{P \in \mathbf{P}_{all,T},\, \theta \in \Theta_P \subset \mathbb{R}^T} \Big\{\frac{1}{|T|}\|\theta^*_T - \theta\|^2 + C\lambda^2\, \frac{|P|}{|T|}\, (\log en)^d\, \log(2^d N)\Big\} + \frac{\sigma^2 + \lambda^2}{|T|^2}.$
We now discuss some noteworthy aspects of the above theorem.
1. It is worth emphasizing that the above oracle inequality holds over all subsets T of K simultaneously. Therefore, the OMADRE is a spatially adaptive estimator in the sense of Section 1.2. Such a guarantee is not available for any existing estimator, even in the batch learning setup. For example, in the batch learning setup, all available oracle risk bounds for estimators such as Dyadic CART and related variants are known only for the full sum of squared errors over the entire domain.
2. To the best of our knowledge, the above guarantee is the first of its kind explicitly stated in the online learning setup. Therefore, the above theorem shows it is possible to attain a near (up to log factors) oracle performance by a near linear time computable estimator in the online learning set up as well; thereby answering our first main question laid out in Section 1.
3. We also reiterate that the infimum on the right hand side of (4.3) is over the space of all rectangular partitions $\mathbf{P}_{all}$. This means that if the true signal $\theta^*$ is piecewise constant on an arbitrary rectangular partition with k rectangles, the OMADRE attains the desired $\tilde{O}(k/N)$ rate. Even in the batch learning set up, it is not known how to attain this rate in full generality. For example, it has been shown in Chatterjee and Goswami (2021a) that the Dyadic CART (or ORT) estimator enjoys a similar bound where the infimum is over the space of all recursive dyadic rectangular partitions (respectively decision trees) of K, which is a strict subset of $\mathbf{P}_{all}$. Thus, the bound presented here is stronger in this sense than both of these bounds known for Dyadic CART/ORT. More details about comparisons with Dyadic CART and ORT are given in Section 6.1.
4.2. Result for Functions with Bounded Total Variation in General Dimensions. Consider the function class whose total variation (defined below) is bounded by some number. This is a classical function class of interest in offline nonparametric regression since it contains functions which demonstrate spatially heterogeneous smoothness; see Section 6.2 in Tibshirani (2015) and references therein. In the offline setting, the most natural estimator for this class of functions is what is called the Total Variation Denoising (TVD) estimator. The two dimensional version of this estimator is also very popularly used for image denoising; see Rudin et al. (1992). It is known that a well tuned TVD estimator is minimax rate optimal for this class in all dimensions; see Hütter and Rigollet (2016) and Sadhanala et al. (2016).
In the online setting, to the best of our knowledge, the paper Baby and Wang (2019) gave the first online algorithm attaining the minimax optimal rate. This algorithm is based on wavelet shrinkage. Recently, the paper Baby et al. (2021a) studied a version of the OMADRE in the context of online estimation of univariate bounded variation functions. In this section we state a result showing that with our definition of the OMADRE, it is possible to predict/forecast bounded variation functions online in general dimensions at nearly the same rate as is known for the batch set up.
We can think of $K = L_{d,n}$ as the d dimensional regular lattice graph. Then, thinking of $\theta \in \mathbb{R}^{L_{d,n}}$ as a function on $L_{d,n}$, we define

(4.4) $\mathrm{TV}(\theta) = \sum_{(u,v) \in E_{d,n}} |\theta_u - \theta_v|,$

where $E_{d,n}$ is the edge set of the graph $L_{d,n}$. The above definition can be motivated via the analogy with the continuum case. If we think of $\theta[i_1, \ldots, i_d] = f(\frac{i_1}{n}, \ldots, \frac{i_d}{n})$ for a differentiable function $f : [0,1]^d \to \mathbb{R}$, then the above definition divided by $n^{d-1}$ is precisely the Riemann approximation of $\int_{[0,1]^d} \|\nabla f\|_1$. In the sequel we denote

$BV_{d,n}(V^*) := \{\theta \in \mathbb{R}^{L_{d,n}} : \mathrm{TV}(\theta) \le V^*\}.$
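As a concrete illustration (ours, not from the formal development), the discrete total variation (4.4) of an array can be computed by summing absolute differences along each axis of the grid:

    import numpy as np

    def total_variation(theta):
        # TV(theta) from (4.4): sum of |theta_u - theta_v| over grid edges
        theta = np.asarray(theta, dtype=float)
        return sum(np.abs(np.diff(theta, axis=ax)).sum() for ax in range(theta.ndim))

    # e.g. a 2-d indicator of a half-plane has TV equal to the boundary length:
    theta = np.zeros((8, 8)); theta[:, 4:] = 1.0
    print(total_variation(theta))   # 8.0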
We are now ready to state:

Theorem 4.3 (Prediction error for $BV_{d,n}(V^*)$ with online averages). Fix any $T \subset K$ that is a dyadic square and denote $V^*_T = \mathrm{TV}(\theta^*_T)$. If $\lambda \ge C(\sigma\sqrt{\log N} \vee \|\theta^*\|_\infty)$ as in Theorem 3.1, we have for some absolute constant C > 1 and any non-anticipating ordering $\rho$ of $L_{d,n}$,

(4.5) $\mathbb{E}\, \frac{1}{|T|}\|\hat\theta^{OM}_T - \theta^*_T\|^2 \le \frac{C}{|T|}\Big(\lambda^2 (\log 2^d N)^2 + \lambda V^*_T\, (\log 2^d N)^{3/2}\Big) + \frac{\sigma^2 + \lambda^2}{|T|^2}$

when d > 1. On the other hand, for d = 1 we have

(4.6) $\mathbb{E}\, \frac{1}{|T|}\|\hat\theta^{OM}_T - \theta^*_T\|^2 \le C\, \lambda^{4/3} (\log 2^d N)^{4/3} \Big(\frac{V^*_T}{|T|}\Big)^{2/3} + \frac{\sigma^2 + \lambda^2}{|T|^2}.$
Here are some noteworthy aspects of the above theorem.
1. The above theorem ensures that the OMADRE matches the known minimax rate of estimating bounded variation functions in any dimension. To the best of our knowledge, this result is new in the online setting for the multivariate (i.e., d ≥ 2) case. 2. Note that our MSE bounds hold simultaneously for all dyadic square regions. Thus, the OMADRE adapts to the unknown variation $V^*_T$ of the signal for any local dyadic square region T. In this sense, the OMADRE is spatially adaptive. Even in the batch setting, this type of simultaneous guarantee over a class of subsets of $L_{d,n}$ is not available for the canonical batch TVD estimator. 3. We require T to be a dyadic square because of a particular step in our proof where we approximate a bounded variation array by an array that is piecewise constant over a recursive dyadic partition of $L_{d,n}$ with pieces that have bounded aspect ratio. See Proposition 8.4 in the appendix.
5. Online Linear Regression Aggregation over Dyadic Rectangles (OLRADRE). In this section, we consider another instantiation of our general prediction algorithm which is based on the Vovk-Azoury-Warmuth online linear regression forecaster; see, e.g., Vovk (1998). Similar to Section 4, we take our set of experts $\mathcal S$ to be the set of all dyadic sub-rectangles of $L_{d,n}$. However, the main difference is that we now take $\mathcal{F}_S$ to be the subspace spanned by a finite set $\mathcal F$ of basis functions on $\mathbb{R}^d$ restricted to S. In the next two subsections, we will focus specifically on the case when $\mathcal F$ is the set of all monomials in d variables with maximum degree m (see (5.2) below).
Next we need to choose an online rule r, to which end the VAW linear regression forecaster leads to the rule

(5.1) $r^{(S)}_{U,s}\big((y_u : u \in U)\big) = \hat\beta_s \cdot x_s$ with $\hat\beta_s := \Big(I + \sum_{u \in U \cup \{s\}} x_u x_u^T\Big)^{-1} \sum_{u \in U} y_u x_u$

for all $U \subset S \in \mathcal S$ and $s \in S \setminus U$, where $x_u$ is the vector $(f(u) : f \in \mathcal F) \in \mathbb{R}^{\mathcal F}$ and $\cdot$ denotes the canonical inner product in $\mathbb{R}^{\mathcal F}$. By convention, we interpret an empty summation as 0.
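For concreteness, here is a minimal Python sketch (an illustration of ours) of the rule (5.1) for a single expert, maintaining the regularized Gram matrix; the univariate monomial feature map at the end is just one admissible choice of $\mathcal F$.

    import numpy as np

    class VAWExpert:
        """Vovk-Azoury-Warmuth online ridge regression; a sketch of rule (5.1)."""

        def __init__(self, phi, dim):
            self.phi = phi                 # feature map u -> R^dim
            self.A = np.eye(dim)           # I + sum over seen u of x_u x_u^T
            self.b = np.zeros(dim)         # sum over seen u of y_u x_u

        def predict(self, s):
            x = self.phi(s)
            # beta_s = (I + sum_{u in U + {s}} x_u x_u^T)^{-1} sum_{u in U} y_u x_u
            beta = np.linalg.solve(self.A + np.outer(x, x), self.b)
            return float(beta @ x)

        def update(self, s, y):
            x = self.phi(s)
            self.A += np.outer(x, x)
            self.b += y * x

    # example feature map: monomials 1, s/n, ..., (s/n)^(m-1) on [0, 1]
    phi = lambda s, n=256, m=3: np.array([(s / n) ** j for j in range(m)])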
The next lemma gives the computational complexity of the OLRADRE, which is the same as that of the OMADRE except that it scales cubically with the cardinality of the basis function class $\mathcal F$.

Lemma 5.1 (Computational complexity of OLRADRE). There exists an absolute constant C > 0 such that the computational complexity of the OLRADRE is bounded by $C|\mathcal F|^3 N (\log_2 2n)^d$.

We reiterate here that the set of basis functions can be taken to be anything (e.g., ReLU functions, a wavelet basis, etc.) and a simultaneous oracle risk bound such as Theorem 3.1 will still hold for the OLRADRE. We now move on to focus specifically on piecewise polynomial and univariate higher order bounded variation functions, where the OLRADRE performs near optimally.
5.1. Result for Rectangular Piecewise Polynomial Functions in General Dimensions. The setup for this subsection is essentially similar to that in Section 4.1, except that $\theta^*$ can now be piecewise polynomial of degree at most m on the (unknown) partition $P^*$. More precisely, we let

(5.2) $\mathcal F = \{x_1^{\alpha_1} \cdots x_d^{\alpha_d} : \alpha \in \mathbb{Z}_{\ge 0}^d,\ \alpha_1 + \cdots + \alpha_d \le m\},$

so that $\mathcal F_S$ is the space of polynomials of degree at most m on the rectangle S. As before, an oracle predictor $\hat\theta^{(oracle)}$ -- which knows the minimal rectangular partition $(R_1, \ldots, R_k)$ of $\theta^*$ exactly -- can simply use the VAW online linear regression rule given in (5.1) separately within each of the rectangles $(R_1, \ldots, R_k)$. By a basic result about VAW online linear regression (see Proposition 8.6), it can be shown that the MSE of this oracle predictor is bounded by $O_d\big(\frac{k\, \|\theta^*\|_\infty^2 \log n}{N}\big)$. In words, the MSE of this oracle predictor again scales (up to a log factor, which is necessary) like the number of polynomial pieces of $\theta^*$ divided by the sample size N, which is precisely the parametric rate of convergence. We will now state a result saying that the OLRADRE, which is computationally efficient, can attain this oracle rate of convergence up to certain additional multiplicative log factors.

Since any $\theta \in \Theta_P$, where $P \in \mathbf{P}_{all,T}$ for some $T \subset K$ (cf. the statement of Theorem 4.2), is piecewise polynomial on P, we can associate to any such $\theta$ the number

(5.3) $s_{m,\infty}(\theta) := \max_{S \in P}\, \max_{\alpha}\, |a_\alpha(S)|, \quad \text{where } \theta_s = \sum_{\alpha:\, \alpha_1 + \cdots + \alpha_d \le m} a_\alpha(S)\, \big(\tfrac{s}{n}\big)^\alpha \text{ for } s \in S.$

The reader should think of $\theta_s$ as $g(\frac{s}{n})$, where g is some piecewise polynomial function defined on the unit cube $[0,1]^d$, and hence of $s_{m,\infty}(\theta)$ as its maximum coefficient, which is a bounded number, i.e., it does not grow with n. Let us keep in mind that the OLRADRE depends on the underlying degree m, which we keep implicit in our discussions below. We can now state the analogue of Theorem 4.2 in this case.
Theorem 5.2 (Oracle Inequality for Arbitrary Rectangular Partitions). Let T be any subset of K and let $\hat\theta^{OL}$ denote the OLRADRE predictor. Then there exist an absolute constant C and a number $C_{m,d} > 1$ depending only on m and d such that for $\lambda \ge C(\sigma\sqrt{\log N} \vee \|\theta^*\|_\infty)$, one has for any non-anticipating ordering $\rho$ of $L_{d,n}$,

(5.4) $\mathbb{E}\, \frac{1}{|T|}\|\hat\theta^{OL}_T - \theta^*_T\|^2 \le \inf_{P \in \mathbf{P}_{all,T},\, \theta \in \Theta_P \subset \mathbb{R}^T} \Big\{\frac{1}{|T|}\|\theta^*_T - \theta\|^2 + C_{m,d}\, \lambda_{m,*}^2\, \frac{|P|}{|T|}\, (\log en)^d\, \log(2^d N)\Big\} + \frac{\sigma^2 + \lambda^2}{|T|^2},$

where $\lambda_{m,*} = \lambda + s_{m,\infty}(\theta)$.
We now make some remarks about this theorem.
Remark 5.1. We are not aware of such a simultaneous oracle risk bound explicitly stated before in the literature for piecewise polynomial signals in general dimensions in the online learning setting.
Remark 5.2. Even in the batch learning setting, the above oracle inequality is a stronger result than available results for higher order Dyadic CART or ORT (Chatterjee and Goswami (2021a)) in the sense that the infimum is taken over the space of all rectangular partitions P all instead of a more restricted class of partitions.
5.2. Result for Univariate Functions of Bounded Variation of Higher Orders. One can consider the univariate function class of all m times (weakly) differentiable functions whose m-th derivative is of bounded variation. This is also a canonical function class in offline nonparametric regression. A seminal result of Donoho and Johnstone (1998) shows that a wavelet thresholding estimator attains the minimax rate in this problem. Locally adaptive regression splines, proposed by Mammen and van de Geer (1997), are also known to achieve the minimax rate in this problem. Recently, Trend Filtering, proposed by Kim et al. (2009), has proved to be a popular nonparametric regression method. Trend Filtering is very closely related to locally adaptive regression splines and is also minimax rate optimal over the space of higher order bounded variation functions; see Tibshirani et al. (2014) and references therein. Moreover, it is known that Trend Filtering adapts to functions which are piecewise polynomials with regularity at the knots. If the number of pieces is not too large and the length of the pieces is not too small, a well tuned Trend Filtering estimator can attain near parametric risk, as shown in Guntuboyina et al. (2020). In the online learning setting, this function class has been studied recently by Baby and Wang (2020) using online wavelet shrinkage methods. We now state a spatially adaptive oracle risk bound attained by the OLRADRE for this function class.
Let $K = L_{1,n} = [[1, n]]$ and for any vector $\theta \in \mathbb{R}^n$, let us define its m-th order (discrete) derivative for any integer $m \ge 0$ in a recursive manner as follows. We start with $D^{(0)}(\theta) = \theta$ and $D^{(1)}(\theta) = (\theta_2 - \theta_1, \ldots, \theta_n - \theta_{n-1})$. Having defined $D^{(m-1)}(\theta)$ for some $m \ge 2$, we set $D^{(m)}(\theta) = D^{(1)}(D^{(m-1)}(\theta))$. Note that $D^{(m)}(\theta) \in \mathbb{R}^{n-m}$. For the sake of convenience, we denote the operator $D^{(1)}$ by D. For any positive integer $m \ge 1$, let us also define the m-th order variation of a vector $\theta$ as follows:

(5.5) $V^{(m)}(\theta) = n^{m-1}\, |D^{(m)}(\theta)|_1,$

where $|\cdot|_1$ denotes the usual $\ell_1$-norm of a vector. Notice that $V^{(1)}(\theta)$ is the total variation of a vector defined in (4.4). Like our definition of total variation, our definition in (5.5) is also motivated by the analogy with the continuum. If we think of $\theta$ as the evaluation of an m times differentiable function $f : [0,1] \to \mathbb{R}$ on the grid $(1/n, 2/n, \ldots, n/n)$, then the Riemann approximation to the integral $\int_{[0,1]} |f^{(m)}(t)|\, dt$ is precisely equal to $V^{(m)}(\theta)$. Here $f^{(m)}$ denotes the m-th order derivative of f. Thus, the reader should assume that $V^{(m)}(\theta)$ is of constant order for a generic $\theta$. Analogous to the class $BV_{d,n}(V^*)$, let us define for any integer $m \ge 1$,

$BV^{(m)}_n(V^*) = \{\theta \in \mathbb{R}^n : V^{(m)}(\theta) \le V^*\}.$
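The recursion defining $D^{(m)}$ and the normalization in (5.5) translate directly into code; the following numpy sketch (illustrative only) computes $V^{(m)}(\theta)$ and checks, on evaluations of a smooth function, that it is of constant order as discussed above.

    import numpy as np

    def higher_order_variation(theta, m):
        # V^(m)(theta) = n^(m-1) * ||D^(m)(theta)||_1 as in (5.5)
        theta = np.asarray(theta, dtype=float)
        n = theta.size
        d = np.diff(theta, n=m)              # D^(m)(theta), a vector of length n - m
        return n ** (m - 1) * np.abs(d).sum()

    # evaluations of a smooth function have O(1) variation of every order:
    n = 1024
    f = np.sin(2 * np.pi * np.arange(1, n + 1) / n)
    print(higher_order_variation(f, 1), higher_order_variation(f, 2))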
In the spirit of our treatment of the class BV_{d,n}(V*) in Section 4.2, we take F = {1, x, ..., x^{m−1}}.
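For concreteness, the recursion defining D^{(m)} and the variation (5.5) can be computed in a few lines. The following NumPy sketch is our own illustration (the function names are ours), not code accompanying the paper:

```python
import numpy as np

def discrete_derivative(theta, m):
    """m-th order discrete derivative D^(m)(theta); the result lies in R^(n-m)."""
    return np.diff(np.asarray(theta, dtype=float), n=m)

def higher_order_variation(theta, m):
    """V^(m)(theta) = n^(m-1) * |D^(m)(theta)|_1, as in (5.5)."""
    n = len(theta)
    return n ** (m - 1) * np.abs(discrete_derivative(theta, m)).sum()

# Evaluations of a smooth function on the grid have O(1) variation of every
# order: the quantity below approximates the integral of |f''| over [0, 1]
# and stays bounded as n grows.
n = 1024
theta = np.sin(2 * np.pi * np.arange(1, n + 1) / n)
print(higher_order_variation(theta, m=2))
```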
We now state the main result of this subsection.
Theorem 5.3 (Prediction error for BV^{(m)}_N(V*), m > 1). Fix any interval T ⊂ K and denote V*_T = V^{(m)}(θ*_T). Also let ||θ*||_{m−1,∞} := max_{0≤j<m} N^j ||D^{(j)}(θ*)||_∞. Then there exist an absolute constant C and a number C_m > 1 depending only on m such that for λ ≥ C(σ√(log N) ∨ ||θ*||_∞), we have for any non-anticipating ordering ρ of L_{1,n},

(5.6)  E (1/|T|) ||θ̂^{OL}_T − θ*_T||² ≤ C_m λ_{m,*}^{4m/(2m+1)} log(eN) ( (V*_T)^{1/m} / |T| )^{2m/(2m+1)} + (σ² + λ²)/|T|²,

where λ_{m,*} := λ + ||θ*||_{m−1,∞} (cf. the statement of Theorem 5.2).
We now make some remarks about the above theorem.
Remark 5.3. The above spatially adaptive risk bound for bounded variation functions of a general order is new even in the easier batch learning setting. State of the art batch learning estimators like Trend Filtering or Dyadic CART are not known to attain such a spatially adaptive risk bound.
6. Discussion. In this section we discuss some natural related matters.
6.1. Detailed Comparison with Dyadic CART. The Dyadic CART is a natural offline analogue of the OMADRE described in Section 4. Similarly, higher order versions of Dyadic CART and Trend Filtering are natural offline analogues of the univariate piecewise polynomial OLRADRE described in Section 5. Therefore, it makes sense to compare our oracle risk bound (notwithstanding simultaneity and the fact that OMADRE/OLRADRE are online algorithms) in Theorems 4.2, 5.2, 5.3 with the available offline oracle risk bound for Dyadic CART, see Theorem 2.1 in Chatterjee and Goswami (2021a). This result is an oracle risk bound where the infimum is over all recursive dyadic partitions (see a precise definition in Chatterjee and Goswami (2021a)) of L_{d,n}. On the other hand, our oracle risk bounds are essentially an infimum over all dyadic partitions P_dp. In dimensions d = 1, 2 these two classes of partitions coincide (see Lemma 8.2 in Chatterjee and Goswami (2021a)) but for d > 2, the class of partitions P_dp strictly contains the class of recursive dyadic partitions (see Remark 8.3 in Chatterjee and Goswami (2021a)). Therefore, the oracle risk bounds in Theorems 4.2, 5.2 are stronger in this sense.
The above fact also allows us to convert the infimum over all dyadic partitions P dp to the space of all rectangular partitions P all since any partition in P all can be refined into a partition in P dp with the number of rectangles inflated by a (log n) d factor. In dimensions d ≥ 3, such an offline oracle risk bound (where the infimum is over P all ) is not known for Dyadic CART. As far as we are aware, the state of the art result here is shown in Chatterjee and Goswami (2021a) where the authors show that a significantly more computationally intensive version of Dyadic CART, called the ORT estimator is able to adaptively estimate signals which are piecewise constant on fat partitions. In contrast, Theorems 4.2, 5.2 hold for all dimensions d, the infimum in the oracle risk bound is over the set of all rectangular partitions P all and no fatness is needed.
It should also be mentioned here that compared to batch learning bounds for Dyadic CART, our bounds have an extra log factor and some signal dependent factors which typically scale like O(1). Note that the computational complexity of our algorithm is also worse by a factor (log n) d , compare Lemma 4.1 to Lemma 1.1 in Chatterjee and Goswami (2021a). However, it should be kept in mind that we are in the online setup which is a more difficult problem setting than the batch learning setting.
6.2. Some Other Function Classes. Our simultaneous oracle risk bounds are potentially applicable to other function classes not considered in this paper as well. We now mention some of these function classes.
A similar batch learning oracle risk bound with an infimum over the set of all recursive dyadic partitions was used by Donoho (1997) to demonstrate minimax rate optimality of Dyadic CART for some anisotropically smooth bivariate function classes. Using our result, it should be possible to attain a simultaneous version of minimax rate optimal bounds for these types of function classes.
Consider the class of bounded monotone signals on L_{d,n} defined as

M_{d,n} = {θ ∈ [0, 1]^{L_{d,n}} : θ[i_1, ..., i_d] ≤ θ[j_1, ..., j_d] whenever i_1 ≤ j_1, ..., i_d ≤ j_d}.
Estimating signals within this class falls under the purview of Isotonic Regression. Isotonic Regression has been a topic of recent interest in the online learning community; see Kotlowski et al. (2016), Kotlowski et al. (2017). It can be checked that the total variation of any d-dimensional isotonic signal with range O(1) grows like O(n^{d−1}), which is of the same order as for a canonical bounded variation function. Therefore, the bound in Theorem 4.2 would give spatially adaptive minimax rate optimal bounds for Isotonic Regression as well. In the offline setup, a lot of recent papers have investigated Isotonic Regression with the aim of establishing minimax optimal rates as well as near optimal adaptivity to rectangular piecewise constant signals; see Deng and Zhang (2020), Han et al. (2019). Theorem 4.2 establishes that such adaptivity to rectangular piecewise constant signals as well as maintaining rate optimality over isotonic functions is also possible in the online setting by using the OMADRE proposed here.
Let us now consider univariate convex regression. In the offline setting, it is known that the least squares estimator (LSE) is minimax rate optimal, attaining the Õ(n^{−4/5}) rate over convex functions with bounded entries; see e.g. Guntuboyina and Sen (2013), Chatterjee et al. (2016). It is also known that the LSE attains the Õ(k/n) rate if the true signal is piecewise linear in addition to being convex. Theorem 5.2 and Theorem 5.3 imply that both these facts also hold for the OLRADRE (since a convex function automatically has finite second order bounded variation) where we fit linear functions (polynomials of degree 1) on intervals. To the best of our knowledge, such explicit guarantees for online univariate convex regression were not available in the literature before this work.

6.3. Computation Risk Tradeoff. The main reason for us considering dyadic rectangles (instead of all rectangles) as experts is to save computation. In particular, if one uses the set of all rectangles as experts, the computational complexity of the resulting algorithm would be O_d(N³). One can think of this estimator as the online analogue of the ORT estimator defined in Chatterjee and Goswami (2021a). For this estimator, the risk bounds would be better. For example, the (log n)^d term multiplying |P| in the bound in Theorems 4.2, 5.2 would now no longer be present. In particular, the exponent of log n would be 2 for all dimensions d, which is only one log factor more than a known minimax lower bound for the space of all rectangular piecewise constant functions; see Lemma 3.1 in Chatterjee and Goswami (2021a).
One can also easily interpolate and take the set of experts somewhere between the set of dyadic rectangles and the set of all rectangles, say by considering all rectangles with side lengths a multiple of some chosen integer l. Thus one can choose the set of experts by trading off computational time and the desired statistical prediction performance.

6.4. Open Problems. In our opinion, our work here raises some interesting open questions which we leave for future research.
1. It appears that if a function class is well approximable by rectangular piecewise constant/polynomial functions then the type of oracle risk bounds proved here may be used to derive some nontrivial prediction bounds. However, for many function classes, this kind of approximability may not hold. For example, we can consider the class of Hardy-Krause bounded variation functions (see Fang et al. (2021)) or its higher order versions (see Ki et al. (2021)), where the existing covering argument produces nets (to estimate metric entropy) which are not necessarily rectangular piecewise constant/linear respectively. These function classes are also known not to suffer from the curse of dimensionality in the sense that the metric entropy does not grow exponentially with the dimension d. More generally, it would be very interesting to come up with computationally efficient and statistically rate optimal online prediction algorithms for such function classes.
2. The analysis presented here relies a lot on the light-tailed nature of the noise. It can be checked that Theorem 3.1 can also be proved when the noise is mean 0 sub-exponential; we would only get an appropriate extra log factor. However, the proof would break down for heavy-tailed noise. This seems to be an open area and not much attention has been given to the noisy online prediction problem with heavy-tailed noise. Most of the existing results in the online learning community assume bounded but arbitrary data. The heavy-tailed setting we have in mind is that the data y is not arbitrary but of the form signal plus noise, except that the noise can be heavy-tailed. It would be very interesting to obtain an analogue of Theorem 3.1 in this setting. Clearly, the algorithm has to change as well in the sense that instead of aggregating means one should aggregate medians of various rectangles in some appropriate way.
3. Another important aspect that we have not discussed here is the issue of choosing the tuning/truncation parameter λ in a data-driven manner. It is possibly natural to choose a grid of candidate truncation values and run an exponentially weighted aggregation algorithm aggregating the predictions corresponding to each truncation value. This approach was already considered in Baby et al. (2021a) (see Section 4). However, since our data is unbounded, we run into the same issue of choosing an appropriate tuning parameter. It is an important research direction to investigate whether recent developments in cross validation can be applied here, or whether there are any other natural ways to address this problem.
7. Simulations.
7.1. 1D Plots. We provide plots of the OMADRE for a visual inspection of its performance. There are three plots for scenarios corresponding to different true signals θ*, where for any i ∈ [n] we have θ*_i = f(i/n) for some function f : [0, 1] → R specified below, and the errors are generated from N(0, 1). The sample size is taken to be n = 2^16 for these plots, given in Figure 1. The truncation parameter λ has been taken to be 2 max{||θ*||_∞, σ(2 log n)^{1/2}} for all our 1D simulations. It may be possible to get better predictions by choosing a smaller value of λ but we have not done any systematic search for these simulations as this particular choice seemed to work well.
The ordering of the revealed indices is taken to be the forward ordering 1, 2, 3, . . . and the backward ordering n, n − 1, n − 2, . . . . The predictions corresponding to the two orderings are then averaged in the plots.
2. Scenario 2 [Piecewise Linear Signal]: We consider a piecewise linear function and the 1D OMADRE; the corresponding plot is shown in the second diagram of Figure 1.
3. Scenario 3 [Piecewise Quadratic Signal]: We consider the piecewise quadratic function

f(x) = 18x² for x ∈ [0, 1/3],  f(x) = −36(x − 1/2 − 1/√12)(x − 1/2 + 1/√12) for x ∈ [1/3, 2/3],  f(x) = 18(x − 1)² for x ∈ [2/3, 1],

and consider the 1D OMADRE estimator. The corresponding plot is shown in the third diagram of Figure 1.
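A minimal NumPy sketch (our own illustration; the random seed is an arbitrary assumption) for generating the Scenario 3 signal and the noisy observations of Section 7.1:

```python
import numpy as np

def f_scenario3(x):
    """Piecewise quadratic signal of Scenario 3 (continuous across the knots
    at x = 1/3 and x = 2/3, where it equals 2)."""
    a = 1.0 / np.sqrt(12.0)
    return np.where(
        x <= 1 / 3, 18 * x ** 2,
        np.where(x <= 2 / 3,
                 -36 * (x - 0.5 - a) * (x - 0.5 + a),
                 18 * (x - 1) ** 2),
    )

rng = np.random.default_rng(0)            # assumed seed, for reproducibility only
n = 2 ** 16
grid = np.arange(1, n + 1) / n
theta_star = f_scenario3(grid)            # theta*_i = f(i/n)
y = theta_star + rng.standard_normal(n)   # N(0, 1) errors, as in Section 7.1
```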
7.2. 1D Comparisons. We conduct a simulation study to compare the performance of the OMADRE and the OLRADREs of orders 1 and 2, which aggregate linear and quadratic function predictions respectively. We take the ground truth signal to be the smooth sinusoidal function f(x) = sin 2πx + cos 5πx.
We considered various signal-to-noise ratios by setting the noise standard deviation σ to be 0.5, 1 or 2. We also considered sample sizes n = 2^10, 2^12, 2^14. In each case, we estimated the MSE by 50 Monte Carlo replications. Here also, the predictions corresponding to the forward and backward orderings are averaged. We report the MSEs in Tables 1, 2 and 3 respectively. It is reasonable to expect that the OLRADRE aggregating quadratic function predictions would perform no worse than the OLRADRE aggregating linear function predictions, which in turn would perform no worse than the OMADRE estimator. From Tables 1, 2 and 3 we see that when the noise variance is low, the opposite happens and the OMADRE gives a better performance. It is only when the noise variance becomes high that the OLRADRE aggregating quadratic functions starts to perform the best. We see a similar phenomenon for other ground truth functions as well. We are not sure what causes this, but we believe that in the low noise regime the weights of the local experts are high (for the OMADRE estimator), and for smooth functions these predictions would be very accurate. Since the OLRADRE has a shrinkage effect (note the presence of I in the Gram matrix), there is bias in the predictions of the local experts, which is why the local experts in this case predict slightly worse than for the OMADRE estimator. When the signal-to-noise ratio is low, the algorithms are forced to use experts corresponding to wider intervals, in which case the bias of the OLRADRE predictions becomes negligible.

7.3. 2D Plots. We conduct a simulation study to observe the performance of the proposed OMADRE estimator in three different scenarios, each corresponding to a different true signal θ*. In every case, the errors are generated from a centered normal distribution with standard deviation 0.25, the dimension is d = 2 and we take the number of pixels in each dimension to be n = 64, 128, 256. We estimate the MSE by 50 Monte Carlo replications; the results are reported in Table 4. The truncation parameter λ has been taken to be 2 max{||θ*||_∞, σ(2 log(n²))^{1/2}}.
In each of the cases, a uniformly random ordering of the vertices of L 2,n has been taken to construct the OMADRE estimator. Overall, we see that our OMADRE estimator performs pretty well.
1. Scenario 1 [Rectangular Signal]: The true signal θ * is such that for every (i 1 , i 2 ) ∈ L 2,n , we have
θ*_{(i_1,i_2)} = 1 if n/3 ≤ i_1, i_2 ≤ 2n/3, and θ*_{(i_1,i_2)} = 0 otherwise.
The corresponding plots are shown in Figure 2 when n = 256. 2. Scenario 2 [Circular Signal]: The true signal θ * is such that for every (i 1 , i 2 ) ∈ L 2,n , we have
θ*_{(i_1,i_2)} = 1 if (i_1 − n/2)² + (i_2 − n/2)² ≤ n/4, and θ*_{(i_1,i_2)} = 0 otherwise.
The corresponding plots are shown in Figure 3 when n = 256.
3. Scenario 3 [Sinusoidal Smooth Signal]: The true signal θ* is such that for every (i_1, i_2) ∈ L_{2,n}, we have θ*_{(i_1,i_2)} = f(i_1/n, i_2/n), where f(x, y) = sin(πx) sin(πy).
The corresponding plots are shown in Figure 4 when n = 256.

8. Appendix.

8.1. Proofs of Lemma 4.1 and Lemma 5.1. We only prove Lemma 5.1 since it contains the proof of Lemma 4.1. In the remainder of this subsection the constant C always stands for an absolute constant whose precise value may change from one occurrence to the next. For every s ∈ L_{d,n}, we let S(s) denote the subcollection of all dyadic rectangles S ⊂ L_{d,n} containing s.
At the outset of every round t = 1, ..., N, we maintain several objects for every S ∈ S. These include the weight w_{S,t}, the L × L matrix X_{S,t} := I + Σ_{s ∈ ρ[1:(t−1)]∩S} x_s x_s^T where L = |F|, and the vector z_{S,t} = Σ_{s ∈ ρ[1:(t−1)]∩S} y_s x_s ∈ R^L. We also store the indicator I_{S,t} ∈ {0, 1} of whether S has had any datapoint up to round t − 1, which is required to determine the set of active experts A_t (recall step 2 of A). In the beginning, w_{S,1} = 1/|S| (recall the initialization step of A), X_{S,1} = I, z_{S,1} = 0 and I_{S,1} = 0 for all S ∈ S. We first analyze the number of elementary operations necessary for computing the estimate ŷ_{ρ(t)} and updating the matrices (X_{S,t}; S ∈ S(ρ(t))) as well as the indicators I_{S,t} after the adversary reveals ρ(t).

To this end observe that we can visit all the rectangles in A_t ⊂ S(ρ(t)) by performing binary search on each coordinate of ρ(t) ∈ L_{d,n} in the lexicographic order and checking the value of I_{S,t}. This implies, firstly, that |S(ρ(t))| ≤ (log₂ 2n)^d and, secondly, that the number of operations required to update the I_{S,t}'s is bounded by (log₂ 2n)^d. Now let us recall from (5.1) that

ŷ^{(S)}_{ρ(t)} = X^{−1}_{S,t+1} z_{S,t} · x_{ρ(t)}, where X_{S,t+1} = X_{S,t} + x_{ρ(t)} x_{ρ(t)}^T.

Computing X_{S,t+1} and its inverse, and the subsequent multiplication with z_{S,t}, require at most CL³ and CL² many basic operations respectively. Evaluating the inner product with x_{ρ(t)} afterwards takes at most CL many basic steps. Thus, we incur CL³ as the total cost of computing w_{S,t} T_λ(ŷ^{(S)}_{ρ(t)}) and updating X_{S,t} for each S ∈ S(ρ(t)). Calculating ŷ_{ρ(t)} from the numbers w_{S,t} T_λ(ŷ^{(S)}_{ρ(t)}) (see step 3 of A), where S ∈ S(ρ(t)), requires C|S(ρ(t))| many additional steps. Therefore, the combined cost of computing ŷ_{ρ(t)} and updating the X_{S,t}'s for all S ∈ S(ρ(t)) is bounded by CL³|S(ρ(t))| = CL³(log₂ 2n)^d.
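The binary-search traversal of S(ρ(t)) used above amounts to enumerating, in each coordinate, the dyadic intervals containing the given index. A small sketch of this enumeration (our own illustration, assuming n is a power of 2; the function names are ours):

```python
from itertools import product

def dyadic_intervals_containing(i, n):
    """All dyadic sub-intervals of [0, n) containing index i (n a power of 2)."""
    out, lo, length = [], 0, n
    while length >= 1:
        out.append((lo, lo + length))        # current dyadic interval
        length //= 2
        if length and i >= lo + length:      # descend into the half containing i
            lo += length
    return out

def dyadic_rectangles_containing(point, n):
    """S(s): all dyadic rectangles of L_{d,n} containing the cell `point`."""
    per_axis = [dyadic_intervals_containing(c, n) for c in point]
    return list(product(*per_axis))          # at most (log2(2n))^d rectangles

print(len(dyadic_rectangles_containing((3, 5), 8)))  # (log2(16))^2 = 16
```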
After the adversary reveals y_{ρ(t)}, we need to update the weights w_{S,t}, the vectors z_{S,t} and the indicators I_{S,t} for all S ∈ S(ρ(t)) (see step 4 of A). For this we first need to compute the numbers w_{S,t} e^{−α_{S,t}} for all S ∈ A_t, and this takes C|S(ρ(t))| = C(log₂ 2n)^d many basic operations. It takes an additional C(log₂ 2n)^d many basic operations in order to compute the sums Σ_{S∈A_t} w_{S,t} and Σ_{S∈A_t} w_{S,t} e^{−α_{S,t}}. Using these numbers, we can now update the weights as

w_{S,t+1} = ( w_{S,t} e^{−α_{S,t}} / Σ_{S'∈A_t} w_{S',t} e^{−α_{S',t}} ) · Σ_{S'∈A_t} w_{S',t},

and this also involves C(log₂ n)^d many elementary operations. Updating the vector z_{S,t} to z_{S,t+1} = z_{S,t} + y_{ρ(t)} x_{ρ(t)} takes at most CL many basic steps for every S and hence CL(log₂ n)^d many steps in total.
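In code, this reweighting step is only a few lines. The sketch below (our own illustration, not the paper's code) renormalizes the active experts' weights so that their total mass is preserved, matching the display above:

```python
import numpy as np

def update_weights(w, losses, active):
    """One exponential-weighting round; `losses` holds alpha_{S,t} for S in A_t.

    Only the weights of the active experts change; their total mass is preserved.
    """
    w = w.copy()
    tilted = w[active] * np.exp(-losses)                  # w_{S,t} e^{-alpha_{S,t}}
    w[active] = tilted * w[active].sum() / tilted.sum()   # renormalize within A_t
    return w

w = np.full(6, 1.0)                  # toy example: 6 experts, 3 of them active
active = np.array([0, 2, 5])
losses = np.array([0.1, 0.5, 0.2])
w = update_weights(w, losses, active)
print(w, w[active].sum())            # inactive weights untouched; active mass = 3
```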
Putting everything together, we get that the computational complexity of OLRADRE is bounded by CL 3 N (log 2 n) d .
8.2. Proof of Theorem 4.2.
Recall the definition of R(θ, P ) for any partition P of K = L d,n and a θ ∈ Θ P given right after (3.2). It turns out that for the online averaging rule, one can give a clean bound on R(θ, P ) which is stated next as a proposition.
Proposition 8.1. Let y_t = θ*_t + σε_t for t ∈ K = L_{d,n}, where σ > 0 and the ε_t's are independent, mean zero sub-Gaussian variables with unit dispersion factor. Then we have for any partition P ∈ P_T, where T ⊂ K, and any θ ∈ Θ_P,

(8.1)  R(θ, P) ≤ C|P| ( ||θ*_T||_∞² + σ² log eN ) log eN,

where C > 1 is some absolute constant.
Proof. We have R(θ, P) = E R(y, θ, P) ≤ |P| E sup_{ρ, S∈P} [ Σ_{t: ρ(t)∈S} (y_{ρ(t)} − ŷ^{(S)}_{ρ(t)})² − ||y_S − θ_S||² ]. Now, the following deterministic lemma is going to be of use to us.
Lemma 8.2. Let z_1, ..., z_T be an arbitrary sequence of numbers and let z̄_t := (1/(t−1)) Σ_{s=1}^{t−1} z_s for t = 2, ..., T, where z̄_1 = 0. Then, writing z̄ = (z̄_1, ..., z̄_T) and letting z̃ denote the constant vector whose entries all equal the mean of z_1, ..., z_T, we have

(8.2)  ||z − z̄||² − ||z − z̃||² ≤ 4 ||z||_∞² log eT.
For a proof of the above lemma, see, e.g., Theorem 1.2 in Orabona (2019). Using the above deterministic lemma and the previous display, we can write for any partition P ∈ P_T and θ ∈ Θ_P,

(8.3)  R(θ, P) ≤ 4|P| E sup_{ρ, S∈P} [ Σ_{t: ρ(t)∈S} (y_{ρ(t)} − ŷ^{(S)}_{ρ(t)})² − ||y_S − ȳ_S||² ] ≤ 4|P| E ||y||_∞² log en ≤ C|P| ( ||θ*||_∞² + σ² log eN ) log eN,

where ȳ_S denotes the mean of the entries of y_S and we deduce the last inequality from a standard upper bound on the tail of sub-Gaussian random variables.
The following corollary is a direct implication of Theorem 3.1 and Proposition 8.1 applied to the particular setting described at the beginning of Section 4.

Corollary 8.3. Let T be any subset of K and let θ̂^{OM} denote the OMADRE predictor. There exists an absolute constant C > 1 such that for λ ≥ C(σ√(log N) ∨ ||θ*||_∞), one has for any non-anticipating ordering ρ of K,

(8.4)  E ||θ̂^{OM}_T − θ*_T||² ≤ inf_{P ∈ P_{dp,T}} inf_{θ ∈ Θ_P ⊂ R^T} [ ||θ*_T − θ||² + Cλ²|P| log(2^d N) ] + (σ² + λ²)/|T|.
Proof. The proof directly follows from (3.5) and Proposition 8.1.

We are now ready to prove Theorem 4.2.
Proof of Theorem 4.2. Fix any partition P ∈ P_{all,T} and any θ ∈ Θ_P. Consider a dyadic refinement of P which we denote by P̃. By definition, P̃ ∈ P_{dp,T} and θ ∈ Θ_{P̃}. Therefore, we can use the bound in (8.4) given in Corollary 8.3. The proof is then finished by noting that |P̃| ≤ |P|(log en)^d.
8.3. Proof of Theorem 4.3. It has been shown in Chatterjee and Goswami (2021a) that the class of functions BV d,n (V * ) is well-approximable by piecewise constant functions with dyadic rectangular level sets which makes it natural to study the OMADRE estimator for this function class.
The following result was proved in Chatterjee and Goswami (2021a) (see Proposition 8.5 in the arxiv version).
Proposition 8.4. Let θ ∈ R^{L_{d,n}} and δ > 0. Then there exists a dyadic partition P_{θ,δ} = (R_1, ..., R_k) in P_rdp such that a) k = |P_{θ,δ}| ≤ (1 + log₂ N)(1 + TV(θ)/δ), and, for all i ∈ [k], b) TV(θ_{R_i}) ≤ δ, and c) A(R_i) ≤ 2, where A(R) denotes the aspect ratio of a generic rectangle R.
Let Π_{Θ_{P_{θ,δ}}} := Π_{P_{θ,δ}} denote the orthogonal projector onto the subspace Θ_{P_{θ,δ}} of R^{L_{d,n}} comprising functions that are constant on each R_i ∈ P_{θ,δ}. It is clear that Π_{P_{θ,δ}}θ(a) = θ̄_{R_i}, the average value of θ over R_i, for all a ∈ R_i and i ∈ [k]. We will use Π_{P_{θ*,δ}}θ* as θ in our application of (8.4) in this case. In order to estimate ||θ* − Π_{Θ_{P_{θ*,δ}}}θ*||, we need the following approximation theoretic result. See Chatterjee and Goswami (2021a) (Proposition 8.7 in the arxiv version) for a proof of (8.5) and Chatterjee and Goswami (2021b) (Lemma 10.3 in the arxiv version) for (8.6).

Proposition 8.5. Let θ ∈ R^{⊗_{i∈[d]}[n_i]} and let θ̄ := Σ_{(j_1,...,j_d) ∈ ⊗_{i∈[d]}[n_i]} θ[j_1, ..., j_d] / Π_{i∈[d]} n_i be the average of the entries of θ. Then for every d > 1 we have

(8.5)  Σ_{(j_1,...,j_d) ∈ ⊗_{i∈[d]}[n_i]} |θ[j_1, ..., j_d] − θ̄|² ≤ C TV(θ)²,

whereas for d = 1,

(8.6)  Σ_{j ∈ [n]} |θ[j] − θ̄|² ≤ N TV(θ)².

Propositions 8.4 and 8.5, together with the description of Π_{P_{θ,δ}} as the operator that projects θ onto its average value on each rectangle R_i, imply that

(8.7)  ||θ − Π_{P_{θ,δ}}θ||² ≤ C|P_{θ,δ}|δ² = C log₂(2N)(δ² + δ TV(θ))  for d > 1,

whereas for d = 1,

(8.8)  ||θ − Π_{P_{θ,δ}}θ||² ≤ Nδ².
Proof of Theorem 4.3. We get by plugging the bounds from (8.7) and item (a) in Proposition 8.4 (both evaluated at θ = Π_{P_{θ*_T,δ}}θ*_T) into Corollary 8.3:

E ||θ̂^{OM}_T − θ*_T||² ≤ inf_{δ>0} C [ (δ² + δV*_T) log(2^d N) + λ² (log(2^d N))² (1 + V*_T/δ) ] + (σ² + λ²)/|T|.

Now putting δ = λ(log(2^d N))^{1/2} in the above display, we obtain (4.5).
For d = 1, we follow the exact same steps except that we now use the bound (8.8) in lieu of (8.7) to deduce
E ||θ̂^{OM}_T − θ*_T||² ≤ inf_{δ>0} C [ |T|δ² + λ² (log(2^d N))² (1 + V*_T/δ) ] + (σ² + λ²)/|T|.
This immediately leads to (4.6) upon setting δ = (V*_T)^{1/3} λ^{2/3} (log(2^d N))^{2/3} |T|^{−1/3}.

8.4. Proof of Theorem 5.2. We take a similar approach as in the proof of Theorem 4.2. Let us begin with an upper bound on the regret of the estimator (see, e.g., (Rakhlin and Sridharan, 2012, pp. 38-40) for a proof).
Proposition 8.6 (Regret bound for the Vovk-Azoury-Warmuth forecaster). Let (z_1, x_1), ..., (z_T, x_T) ∈ R × R^d and define for t = 1, ..., T (cf. (5.1)),

ẑ_t := β_t · x_t, where β_t = ( I + Σ_{s=1}^{t} x_s x_s^T )^{−1} Σ_{s=1}^{t−1} z_s x_s.
Then, we have

(8.9)  Σ_{t∈[T]} (z_t − ẑ_t)² − inf_{β∈R^d} [ Σ_{t∈[T]} (z_t − β · x_t)² + ||β||² ] ≤ d ||z||_∞² log( 1 + T max_{t∈[T]} ||x_t||² / d ).
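For concreteness, here is a compact NumPy implementation of the Vovk-Azoury-Warmuth forecaster as defined above (our own sketch, not the paper's code; following (5.1), the Gram matrix includes the current covariate while the response is revealed only after the forecast):

```python
import numpy as np

def vaw_forecast(X, z):
    """Vovk-Azoury-Warmuth predictions for rounds t = 1..T.

    X: (T, d) array of covariates; z: (T,) responses revealed after predicting.
    """
    T, d = X.shape
    A = np.eye(d)        # I + sum_{s<=t} x_s x_s^T (current covariate included)
    b = np.zeros(d)      # sum_{s<t} z_s x_s
    preds = np.zeros(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)                  # include the current covariate
        preds[t] = x @ np.linalg.solve(A, b)
        b += z[t] * x                        # response revealed after forecasting
    return preds

rng = np.random.default_rng(1)               # toy check on a noisy linear signal
X = rng.standard_normal((200, 3))
beta = np.array([1.0, -2.0, 0.5])
z = X @ beta + 0.1 * rng.standard_normal(200)
print(np.mean((vaw_forecast(X, z) - z) ** 2))
```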
Using similar arguments as in (8.3), but applying (8.9) instead of (8.2) for bounding the regret, we get for any P ∈ P T and θ ∈ Θ P , R(θ, P ) ≤ C m,d |P | (s m,∞ (θ) 2 + σ 2 log eN ) log eN where C m,d > 1 depends only on m and d. The remaining part of the proof is similar to that of Theorem 4.2.
8.5. Proof of Theorem 5.3. The proof requires, first of all, that the class BV (m) n (V * ) is well-approximable by piecewise polynomial functions with degree at most m − 1. To this end we present the following result which was proved in Chatterjee and Goswami (2021a) (see Proposition 8.9 in the arxiv version).
Proposition 8.7. Fix a positive integer m > 1 and θ ∈ R^n, and let V^{(m)}(θ) := V. For any δ > 0, there exists a partition P_{θ,m,δ} (of L_{1,n}) in P_dp and θ̃ ∈ Θ_{P_{θ,m,δ}}({F_S : S ∈ S}) such that a) |P_{θ,m,δ}| ≤ Cδ^{−1/m} for an absolute constant C, b) ||θ − θ̃||_∞ ≤ Vδ, and c) max_{S ∈ P_{θ,m,δ}, 0≤j<m} n^j |β_{j,S}| ≤ C_m max_{0≤j<m} n^j ||D^{(j)}(θ)||_∞, where θ̃ ≡ β_{0,S} + β_{1,S}x + ... + β_{m−1,S}x^{m−1} on S and C_m is a constant depending only on m.
Proposition 8.7 immediately gives us

(8.10)  ||θ − Π_{Θ_{P_{θ,m,δ}}}θ||² ≤ NV²δ².
Lemma 4.1 (Computational complexity of OMADRE). There exists an absolute constant C > 0 such that the computational complexity, i.e., the number of elementary operations involved in the computation of the OMADRE, is bounded by CN(log₂ 2n)^d.
F = {u_n^m : |m| ≤ m}, where m = (m_1, ..., m_d) is the multidegree of the monomial u_n^m and |m| = Σ_{i∈[d]} m_i is the corresponding degree.
Fig 1. The blue curve is the true signal, the grey points are data points and the green curve constitutes the OMADRE predictions. The plotted predictions are averaged over the two predictions obtained when the data are revealed in the forward and backward orders.

Fig 2. The first diagram refers to the true signal, the second one to the noisy signal and the third one to the signal estimated by the OMADRE estimator.

Fig 3. The first diagram refers to the true signal, the second one to the noisy signal and the third one to the signal estimated by the OMADRE estimator.

Fig 4. The first diagram refers to the true signal, the second one to the noisy signal and the third one to the signal estimated by the OMADRE estimator.
Table 1. MSEs of the OMADRE estimator in different scenarios.

n       σ = 0.5   σ = 1   σ = 2
2^10    0.045     0.076   0.191
2^12    0.027     0.048   0.127
2^14    0.014     0.031   0.087

Table 2. MSEs of the OLRADRE (linear) in different scenarios.

n       σ = 0.5   σ = 1   σ = 2
2^10    0.088     0.099   0.143
2^12    0.049     0.057   0.087
2^14    0.025     0.030   0.050

Table 3. MSEs of the OLRADRE (quadratic) in different scenarios.

n       σ = 0.5   σ = 1   σ = 2
2^10    0.079     0.091   0.136
2^12    0.040     0.048   0.079
2^14    0.020     0.025   0.044
Table 4. MSEs of the CV Dyadic CART estimator in different scenarios.

n × n       Scenario 1   Scenario 2   Scenario 3
64 × 64     0.035        0.037        0.014
128 × 128   0.022        0.022        0.008
256 × 256   0.012        0.013        0.005
References.

Adamskiy, D., W. M. Koolen, A. Chernov, and V. Vovk (2012). A closer look at adaptive regret. In International Conference on Algorithmic Learning Theory, pp. 290-304. Springer.
Azoury, K. S. and M. K. Warmuth (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning 43(3), 211-246.
Baby, D. and Y.-X. Wang (2019). Online forecasting of total-variation-bounded sequences. Advances in Neural Information Processing Systems 32.
Baby, D. and Y.-X. Wang (2020). Adaptive online estimation of piecewise polynomial trends. Advances in Neural Information Processing Systems 33, 20462-20472.
Baby, D. and Y.-X. Wang (2021). Optimal dynamic regret in exp-concave online learning. In Conference on Learning Theory, pp. 359-409. PMLR.
Baby, D., X. Zhao, and Y.-X. Wang (2021a). An optimal reduction of TV-denoising to adaptive online learning. In International Conference on Artificial Intelligence and Statistics, pp. 2899-2907. PMLR.
Baby, D., X. Zhao, and Y.-X. Wang (2021b). An optimal reduction of TV-denoising to adaptive online learning. In International Conference on Artificial Intelligence and Statistics, pp. 2899-2907. PMLR.
Cesa-Bianchi, N. and G. Lugosi (2006). Prediction, Learning, and Games. Cambridge University Press.
Chatterjee, S. et al. (2016). An improved global risk bound in concave regression. Electronic Journal of Statistics 10(1), 1608-1629.
Chatterjee, S. and S. Goswami (2021a). Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees. Ann. Statist. 49(5), 2531-2551.
Chatterjee, S. and S. Goswami (2021b). New risk bounds for 2D total variation denoising. IEEE Trans. Inform. Theory 67(6, part 2), 4060-4091.
Daniely, A., A. Gonen, and S. Shalev-Shwartz (2015). Strongly adaptive online learning. In International Conference on Machine Learning, pp. 1405-1411. PMLR.
Deng, H. and C.-H. Zhang (2020). Isotonic regression in multi-dimensional spaces and graphs. The Annals of Statistics 48(6), 3672-3698.
Donoho, D. L. (1997). CART and best-ortho-basis: a connection. The Annals of Statistics 25(5), 1870-1911.
Donoho, D. L. and I. M. Johnstone (1998). Minimax estimation via wavelet shrinkage. Annals of Statistics 26(3), 879-921.
Donoho, D. L. and J. M. Johnstone (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425-455.
Fang, B., A. Guntuboyina, and B. Sen (2021). Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy-Krause variation. The Annals of Statistics 49(2), 769-792.
Guntuboyina, A., D. Lieu, S. Chatterjee, and B. Sen (2020). Adaptive risk bounds in univariate total variation denoising and trend filtering. The Annals of Statistics 48(1), 205-229.
Guntuboyina, A. and B. Sen (2013). Global risk bounds and adaptation in univariate convex regression. Probab. Theory Related Fields. To appear, available at http://arxiv.org/abs/1305.1648.
Han, Q., T. Wang, S. Chatterjee, and R. J. Samworth (2019). Isotonic regression in general dimensions. The Annals of Statistics 47(5), 2440-2471.
Hazan, E. and C. Seshadhri (2007). Adaptive algorithms for online decision problems. In Electronic Colloquium on Computational Complexity (ECCC), Volume 14.
Hütter, J.-C. and P. Rigollet (2016). Optimal rates for total variation denoising. In Conference on Learning Theory, pp. 1115-1146.
Ki, D., B. Fang, and A. Guntuboyina (2021). MARS via LASSO. arXiv preprint arXiv:2111.11694.
Kim, S.-J., K. Koh, S. Boyd, and D. Gorinevsky (2009). l1 trend filtering. SIAM Rev. 51(2), 339-360.
Kotlowski, W., W. M. Koolen, and A. Malek (2016). Online isotonic regression. In Conference on Learning Theory, pp. 1165-1189. PMLR.
Kotlowski, W., W. M. Koolen, and A. Malek (2017). Random permutation online isotonic regression. Advances in Neural Information Processing Systems 30.
Mammen, E. and S. van de Geer (1997). Locally adaptive regression splines. The Annals of Statistics 25(1), 387-413.
Orabona, F. (2019). A modern introduction to online learning. arXiv preprint arXiv:1912.13213.
Rakhlin, A. and K. Sridharan (2012). Statistical learning theory and sequential prediction. Lecture notes, University of Pennsylvania.
Rudin, L. I., S. Osher, and E. Fatemi (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1), 259-268.
Sadhanala, V., Y.-X. Wang, and R. J. Tibshirani (2016). Total variation classes beyond 1d: minimax rates, and the limitations of linear smoothers. In Advances in Neural Information Processing Systems, pp. 3513-3521.
Tibshirani, R. (2015). Nonparametric regression (and classification).
Tibshirani, R. J. (2020). Divided differences, falling factorials, and discrete splines: another look at trend filtering and related problems. arXiv preprint arXiv:2003.03886.
Tibshirani, R. J. et al. (2014). Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics 42(1), 285-323.
Vovk, V. (1998). Competitive on-line linear regression. Advances in Neural Information Processing Systems, 364-370.
| [] |
[
"Topological Graph-based Analysis of Solid-State Ion Migration",
"Topological Graph-based Analysis of Solid-State Ion Migration"
] | [
"Jimmy-Xuan Shen \nDepartment of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States\n",
"Haoming Howard Li \nDepartment of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States\n",
"Ann Rutt \nDepartment of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States\n",
"Matthew K Horton \nUniversity of California\n94720BerkeleyCAU.S.A\n",
"Kristin A Persson \nUniversity of California\n94720BerkeleyCAU.S.A\n"
] | [
"Department of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States",
"Department of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States",
"Department of Material Science and Engineering\nMaterials Sciences Division\nDepartment of Material Science and Engineering\nLawrence Berkeley National Laboratory\nUniversity of California\n94720, 94720Berkeley, BerkeleyCAU.S.A., United States",
"University of California\n94720BerkeleyCAU.S.A",
"University of California\n94720BerkeleyCAU.S.A"
] | [] | To accelerate the development of novel ion conducting materials, we present a general graphtheoretic analysis framework for ion migration in any crystalline structure. The nodes of the graph represent metastable sites of the migrating ion and the edges represent discrete migration events between adjacent sites. Starting from a collection of possible metastable migration sites, the framework assigns a weight to the edges by calculating the individual migration energy barriers between those sites. Connected pathways in the periodic simulation cell corresponding to macroscopic ion migration are identified by searching for the lowest-cost cycle in the periodic migration graph. To exemplify the utility of the framework, we present the automatic analyses of Li migration in different polymorphs of VO(PO4), with the resulting identification of two distinct crystal structures with simple migration pathways demonstrating overall < 300 meV migration barriers. arXiv:2202.00222v2 [cond-mat.mtrl-sci] 5 Jul 2022 | 10.1038/s41524-023-01051-2 | [
"https://arxiv.org/pdf/2202.00222v2.pdf"
] | 250,311,850 | 2202.00222 | bf09900dce6413c9d39d2270e3823cd5f78fd2a7 |
Topological Graph-based Analysis of Solid-State Ion Migration

Jimmy-Xuan Shen, Haoming Howard Li, Ann Rutt, Matthew K. Horton, and Kristin A. Persson

Department of Material Science and Engineering, University of California, Berkeley, CA 94720, U.S.A.
Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, United States
(Dated: July 7, 2022)
To accelerate the development of novel ion conducting materials, we present a general graph-theoretic analysis framework for ion migration in any crystalline structure. The nodes of the graph represent metastable sites of the migrating ion and the edges represent discrete migration events between adjacent sites. Starting from a collection of possible metastable migration sites, the framework assigns a weight to the edges by calculating the individual migration energy barriers between those sites. Connected pathways in the periodic simulation cell corresponding to macroscopic ion migration are identified by searching for the lowest-cost cycle in the periodic migration graph. To exemplify the utility of the framework, we present the automatic analyses of Li migration in different polymorphs of VO(PO4), with the resulting identification of two distinct crystal structures with simple migration pathways demonstrating overall < 300 meV migration barriers.
INTRODUCTION
The migration of charged ions (e.g., Li, Mg, Na, O2−, etc.) through solid-state materials is the primary physical mechanism behind the operation of Li-ion batteries, solid-oxide fuel cells, and solid-state electrolytes. Rapid identification and discovery of new materials with favorable migration characteristics is key to developing all-solid-state batteries where the current state-of-the-art organic electrolytes are replaced with a solid-state alternative, leading to improved power density and safety. Traditionally, the discovery of novel electrode materials has focused on compounds that contain the migrating ion in their as-synthesized state. However, this is not a strict requirement, and many materials synthesized without the migrating species are capable ion conductors. In fact, it has been shown that for multivalent applications, materials that are synthesized without the working ion tend to exhibit a flatter migration energy landscape and hence better performance [1][2][3][4].
The established method for identifying the optimal path between two sites in a crystal is the nudged-elastic band (NEB) method [5,6]. However, NEB calculations are computationally costly and are only able to analyze short-distance migration events provided that an initial, reasonably accurate, guess for the connecting path is available. To understand the migration characteristics of a material, the motion of the ion through the entire crystal must be considered. Recent high-throughput studies have attempted to address this either by simplifying the problem to analyzing the migration of a working ion in a fictitious field [7] or by focusing on individual migration events but not how they connect over larger distances [8]. Additionally, previous work exclusively treats materials where valid sites for the working ion are known beforehand. To explore the broader class of materials, where there is no a priori knowledge of the sites and migration properties of the possible intercalants, it is of considerable interest to develop algorithms and frameworks to analyze possible ion migration behavior in any crystalline solid.
In this endeavor, we employ a recently developed methodology where the charge density analysis was shown to be a reliable descriptor for generating initial guesses of working ion sites [9], which allows us to systematically identify metastable intercalation sites in any crystalline structure. Here, we build upon this framework and present a graph theory extension to automatically identify ion migration pathways in any periodic solid. The migration is treated as a periodic graph where symmetrically equivalent copies of the metastable sites constitute the nodes and the individual migration events between these sites are the edges. Additionally, we assign a cost to the graph edges based on the migration energy barriers and showcase how optimal intercalation pathways can be discovered with a Dijkstra-inspired algorithm defined on the periodic graph. The original code provided here is distributed as an extension to the pymatgen material analysis library. We demonstrate our framework on two well-known structures of MnO2 and CoO2 to show how the migration graphs can be constructed and utilized. Finally, the methodology is applied to the different configurations of VO(PO4) in the Materials Project [10] to assess the migration characteristics of each polymorph, and we exemplify the capability to identify promising new ionic conductors within this set of materials.
RESULTS & DISCUSSIONS
Site identification
Our graph-based migration analysis is best suited for the two limits of working ion occupation, either single-ion migration in the dilute limit or vacancy migration in the fully intercalated limit. While it is possible to analyze intermediate concentrations, the large configurational space associated with the working ion ordering arrangements makes a thorough investigation computationally demanding and not suitable for high-throughput evaluation of viable intercalation pathways. For vacancy migration, a priori knowledge of the working ion sites makes the construction of migration graphs trivial. In materials where we lack knowledge of working ion sites, we utilize a recently developed, generally robust computational workflow for identifying the metastable sites of the working ion in any structure. [9] The methodology selects sites at the local minima of charge density and, for each candidate site, a working ion is inserted and the structure is allowed to relax using density-functional theory calculations. An inserted structure is considered "topotactic" if the positions of its framework atoms closely resemble the relaxed atomic positions of the host material. The metastable sites are obtained by mapping the working ion in the topotactically inserted structures onto the empty host structure and identifying all symmetry-equivalent positions in the host structure. Based on the location and connectivity of the metastable sites, we build our graph-based migration analyses.
To exemplify our approach, we use two materials: MnO2 in the λ phase [11] with cubic spinel structure and layered CoO2 with ABBA stacking [12]. After performing independent single Li insertions into the sites suggested by the charge density analysis and relaxing the new structures [9], two distinct singly-inserted structures for each material were topotactically matched to the host material as shown in Fig. 1. We denote the base structure S base and the set of relaxed inserted topotactic structures {S α} where α ∈ {A, B} for both examples. Since the host sublattice (which does not contain the working ion) of each S α can be mapped onto S base, the relaxed positions of the cations in each structure can also be mapped to position s α in S base. This mapping allows us to identify two symmetry-distinct metastable sites s A (blue) and s B (orange) for MnO2 and CoO2, respectively. Utilizing the spglib package [13] and its interface with pymatgen [14], we analyze the crystal symmetry of the structure without the inserted ion, S base, and apply the valid point group operations to each s α to generate all of the possible cation positions, designated by an integer index value i at position r i, in the unit cell.
In MnO2, the s A metastable site is represented by the fractional coordinates (1/8, 1/8, 1/8) and all space-group operations of the host material will either map the site to itself or to (7/8, 7/8, 7/8). The s B site is represented by the fractional coordinates (0, 0, 0), which has three additional symmetry-equivalent sites as shown in Tab. I. This results in a total of six metastable sites per unit cell as shown in Fig. 1 (d). We perform the same analysis for CoO2, which results in s A and s B at the face centers of the primitive cell. The space-group operations of CoO2 map the sites onto periodic images of the original; as such, no new symmetrically equivalent sites are created from symmetry operations, and the resulting two metastable sites are shown in Fig. 1 (g).
Graph Analysis
Using a distance cutoff of l max, we connect two nearby metastable sites r i and r j to represent a discrete migration event in the material, which we will call a "hop". The network formed by these hops is infinite, and the following convention ensures that we only consider hops that are inequivalent under lattice translations. Each hop between sites i and j in the periodic unit cell is labeled h K ij, where the additional index K is an integer-valued vector representing the relative periodic image displacement between the endpoints [i.e., K = (0, 0, 1) means that the hop crosses a periodic cell boundary once in the positive c-direction]. In general, we consider the migration graph to be undirected. As such, the hops h K ij and h −K ji represent the same migration event, but only one representation is needed. As a convention to prevent double-counting, we require the site indices to satisfy i ≤ j. Additionally, since there is ambiguity when j = i and K ≠ 0, we only retain the hop where the first non-zero component of K is positive.
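This canonical labeling is straightforward to implement. The helper below is our own sketch (not the released pymatgen extension itself; the function name is ours) mapping h^K_ij and h^{-K}_ji to the same key:

```python
def canonical_hop(i, j, K):
    """Return the canonical representation of an undirected periodic hop.

    The hop (i, j, K) and its reverse (j, i, -K) map to the same key:
    we require i <= j, and for i == j we require the first non-zero
    component of K to be positive.
    """
    if i > j:
        return (j, i, tuple(-k for k in K))
    if i == j:
        first = next((k for k in K if k != 0), 0)
        if first < 0:
            return (i, j, tuple(-k for k in K))
    return (i, j, tuple(K))

assert canonical_hop(2, 1, (0, 0, 1)) == canonical_hop(1, 2, (0, 0, -1))
assert canonical_hop(0, 0, (0, 0, -1)) == canonical_hop(0, 0, (0, 0, 1))
```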
Using a threshold value of l max = 3 Å, the migration graph for Li+ in MnO2 (denoted as G(MnO2)) is constructed and shown in Fig. 2 (a-b). There are 18 hops in G(MnO2) that are not equivalent under discrete lattice translations. Using the space group symmetry between the hops, we can reduce them to 2 symmetry-distinct groups, indicated by the edge color in the graph. The Li+ migration graph in CoO2, with 8 hops in 3 symmetry-distinct groups, is shown in Fig. 2 (c,d). For a complete enumeration of the migration hops in these two materials and their symmetry equivalence, see Tables S.II and S.III of the supplemental materials [15] (SI). In principle, once we have identified the symmetrically equivalent groups, we can obtain the migration barrier using nudged elastic band (NEB) calculations [6] to chart the migration energy landscape of the material.
A candidate ion-conducting material must enable a continuous migration pathway for the working ion across the unit cell, connecting to the next one. In a periodic system, continuous pathways are infinite; we term these "intercalating pathways". Since our migration graph contains only one copy of each node, periodicity manifests via the image displacement vector K. The intercalating pathways are essentially cycles in the graph where the total image displacement is non-zero. The series of hops in such a cycle will connect a metastable site to a different periodic image of itself, which constitutes a repeating unit of an infinite periodic migration pathway. Basic examples of intercalating pathways are highlighted in light green in Fig. 2 (b) and (d), which connect a node to a periodic image of itself. To identify these pathways, we used a modified Dijkstra-type algorithm on the periodic graph. The key difference between the modified algorithm and the original Dijkstra's algorithm is that the periodic image vector is tracked during graph traversal. This means that the optimal cost to reach any node during the graph traversal is defined for the combination of node index i and periodic image vector K. A detailed description of the path-finding algorithm on the periodic graph is presented in the SI (Algorithm S1). The cost function employed in the path-finding algorithm can be any positive definite function assigned to the edges of the graph. A good choice in most cases is the migration energy barrier for the ion-migration event represented by that particular edge. However, the difference between the binding energies of the endpoints, which can be computed without expensive NEB calculations, may also be used as a lower bound of the activation barrier for screening purposes.
Application to Polymorphs of VO(PO4)
We demonstrate the utility of the obtained migration graphs for Li migration in VO(PO4). Of all 18 VO(PO4) phases currently available in the Materials Project, five are distinct known, synthesized phases with the following IDs (space group symbols): mp-25265 (Pnma), mp-556459 (Cc), mp-559299 (P4/n), mp-763482 (P4/n), mp-1104567 (C2/m). While all five phases listed above have been experimentally synthesized, only some of them have readily available electrochemical analysis data. In particular, mp-25265 (β-VOPO4) demonstrates a capacity of 118.6 mAh/g against Li insertion at an average of 4 V [16], and mp-556459 (ε-VOPO4) has shown a capacity of 305 mAh/g against Li insertion over two voltage plateaus at about 4.0 and 2.5 V [17]. For each of the five structures, we performed a set of ion insertions to generate metastable sites and constructed the migration graphs, yielding the connectivity of hops to form intercalation pathways. With this connectivity, the only missing piece to a complete description of the intercalation behavior is an understanding of the ion migration energy evolution during the hops.
The energy profile for each hop can be estimated by implementing the ApproxNEB method [18] in atomate. The ApproxNEB method performs independent constrained optimizations for each image structure, which allows us to trade accuracy for speed since the independent relaxations are trivially parallelized. Each phase of VO(PO4) has 3 to 10 such hops and thus 3 to 10 ApproxNEB calculations. Due to the high computational cost involved, one might find it helpful, in general, to rank migration pathways before ApproxNEB is employed. To demonstrate testing of one possible choice of cost function for this purpose, we performed charge-density analyses on these phases and compared them to our ApproxNEB results. We examined the total charge density in a radius-1 Å cylinder between the end points of a hop, ρ cyl(h K ij). Since the background charge density can change between different structures, we will only focus on the relative charge barrier, defined as the ratio between the integrated charge, ρ cyl(h K ij), and its minimum value in that particular structure, min(ρ cyl(h K ij)). The relationship between the total charge ratio and the ApproxNEB barrier is shown in Fig. 3 (c), which indicates little correlation between the total charge in the cylinder and the energy barrier. Hence, while promising insertion sites could be identified by low charge density, it is clear that local atomic relaxations around the working ion during the migration significantly impact the energy barrier such that those effects cannot be ignored. However, since the relative charge barrier is an indicator of the amount of negative charge that the migrating ion has to move through, we are most interested in migration events with low relative charge and low ApproxNEB barriers, i.e. the bottom-left corner of FIG. 3 (c), for further analysis.
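As an illustration of the ρ_cyl descriptor, the following NumPy sketch is our own (it assumes an orthorhombic cell, a uniform charge-density grid in e−/Å³, and ignores periodic images for brevity); it integrates the charge within a radius-1 Å cylinder whose axis runs between two sites:

```python
import numpy as np

def cylinder_charge(rho, cell_lengths, r_i, r_j, radius=1.0):
    """Integrate rho (e-/A^3 on a uniform 3D grid) inside a cylinder of the
    given radius (A) whose axis runs from fractional coordinate r_i to r_j.
    Assumes an orthorhombic cell; periodic images are ignored for brevity."""
    shape = np.array(rho.shape)
    L = np.asarray(cell_lengths, dtype=float)
    # Cartesian coordinates of every grid point and of the segment endpoints.
    idx = np.indices(rho.shape).reshape(3, -1).T
    pts = idx / shape * L
    a, b = np.asarray(r_i) * L, np.asarray(r_j) * L
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)  # projection onto segment
    dist = np.linalg.norm(pts - (a + t[:, None] * ab), axis=1)
    voxel = np.prod(L / shape)                         # volume per grid point
    return rho.reshape(-1)[dist <= radius].sum() * voxel
```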
With details of the hops and their connectivity, we can now construct a complete picture of long-range migration in the system. In Fig. 3 (a)(b), we show the lowest energy barrier intercalation pathway for two of the structures (mp-25265 and mp-559299) that contain multiple low-barrier hops. In order to reach an accurate description, we performed NEB calculations when evaluating the energy landscape of each hop, the results of which show that both structures contain an intercalation pathway which has an overall energy barrier of less than 250 meV.
CONCLUSIONS
We demonstrate that the intercalation properties of cations in a solid-state material can be fully captured by a migration graph where the metastable sites represent the nodes and the migration energy barriers are the edge weights. Using a previously-developed, unbiased cation insertion algorithm, we identify the symmetry-distinct metastable sites in the structure and generate all equivalent sites by repeatedly applying the symmetry operations of the host. The migration energy is calculated for the symmetrically-distinct hops between pairs of adjacent metastable sites and the data is replicated on symmetrically equivalent hops to obtain the migration barriers on the entire graph. To identify intercalating pathways, we detect cycles in the periodic graph. Finally, we applied this analysis framework to a diverse set of polymorph structures of VO(PO4) and present several promising structures with low migration barriers. The framework and code presented here can be used to automatically obtain the migration properties of solid-state materials with essentially no a priori knowledge. Our work opens up opportunities for high-throughput studies in the future and can offer a deeper understanding of the migration properties of crystalline solids.
Supplemental Materials: Rapid discovery of cathodes, ionic conductors and solid-state electrolytes through topological migration analysis
MIGRATION GRAPHS FOR THE EXAMPLE MATERIALS
The s A/B sites are positions in the host structure that correspond to the Li positions in the inserted structure S A/B. Using the SpacegroupAnalyzer functionality within pymatgen we can apply all of the symmetry transformations of the host structure to the insertion sites. The position of the inserted site obtained via mapping from the inserted structures, the symmetry transformations of the host, as well as the resulting transformed positions of the inserted site for MnO2 are listed in Table SI. For CoO2, since all of the allowed symmetry operations of the host leave the inserted site position fixed, they are not listed here.
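A minimal pymatgen sketch of this procedure (our own illustration; `host` is assumed to be a pymatgen Structure of the empty host, and the simple tolerance check does not merge images across the 0/1 fractional-coordinate boundary):

```python
import numpy as np
from pymatgen.core import Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def equivalent_sites(host: Structure, frac_site, tol=1e-3):
    """Apply every space-group operation of `host` to `frac_site` and
    return the distinct images, wrapped back into the unit cell."""
    ops = SpacegroupAnalyzer(host).get_symmetry_operations()  # fractional ops
    images = []
    for op in ops:
        img = np.mod(op.operate(frac_site), 1.0)  # wrap into [0, 1)
        if not any(np.allclose(img, x, atol=tol) for x in images):
            images.append(img)
    return images

# e.g. for spinel MnO2, equivalent_sites(host, [1/8, 1/8, 1/8]) should return
# the two s_A images (1/8, 1/8, 1/8) and (7/8, 7/8, 7/8) listed in Table SI.
```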
For MnO2, this results in two symmetry-equivalent copies of s_A, at (1/8, 1/8, 1/8) and (7/8, 7/8, 7/8), labeled 0 and 1, respectively. The four symmetry-equivalent copies of s_B that form a tetrahedron around (1/8, 1/8, 1/8) are labeled 2 through 5.
Using a distance threshold of 3 Å, we find migration hops between the metastable sites; the hops in the migration graph for MnO2 are listed in Table SII and the hops in the migration graph for CoO2 are listed in Table SIII. The charge-density analysis developed for the approximate NEB workflow [S18] calculates an optimal pathway between two points using the electronic charge density as a "virtual" potential. We assign a charge barrier ρ_b to a given unique hop as the peak averaged charge density (in a sphere of radius 0.4 Å) along the path. Using a simple cost function of the charge barrier times the distance of the hop, we can assign a cost to each hop, or edge, of our migration graph. The listing below then finds the minimum cost from node u back to itself with a finite cumulative displacement.
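A hedged sketch of this hop enumeration and edge-cost assignment is shown below, using pymatgen's Lattice.get_distance_and_image to handle periodic images; the [-1, 1]^3 image-search range and all names are our assumptions.

```python
# Hedged sketch: connect metastable sites within 3 A (tracking the periodic
# image vector K of the destination), then weight each edge of the migration
# graph by charge barrier * distance.
import itertools
from pymatgen.core import Lattice

def build_hops(lattice: Lattice, frac_sites, cutoff=3.0):
    hops = []
    n = len(frac_sites)
    for i in range(n):
        for j in range(i, n):
            for K in itertools.product((-1, 0, 1), repeat=3):
                if i == j and K == (0, 0, 0):
                    continue                  # skip the trivial self-hop
                d, _ = lattice.get_distance_and_image(frac_sites[i],
                                                      frac_sites[j], jimage=K)
                if d <= cutoff:
                    hops.append((i, j, K, d))
    return hops

def edge_cost(charge_barrier, distance):
    # Simple screening cost assigned to each edge before running (Approx)NEB.
    return charge_barrier * distance
```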
Table SI (excerpt): symmetry operations of the host applied to the inserted site, e.g. (x, y, z), (z, −x−y−z+1/2, x), (−x−y−z+1/2, z, y), (y, x, −x−y−z+1/2), (y, z, −x−y−z+1/2), each mapping the site at (1/8, 1/8, 1/8).
FIG. 1. Illustration of identified metastable sites in (a-d) MnO2 and (e-g) CoO2. (a-b) Crystal structures indicating the different relaxed atomic structures of MnO2 after single Li insertion. (e-f) Crystal structures indicating the different relaxed atomic structures of CoO2 after single Li insertion. (c) Crystal structure indicating the metastable Li position after mapping onto S_base. (d,g) Crystal structures with the full set of possible Li sites after symmetry operations. Note that the applied symmetry operations did not result in additional sites for CoO2 (all symmetry operations of the host leave the site fixed).
FIG. 2. Graph representation of the migration hops in MnO2 (a,b) and CoO2 (c,d). (a,c) Crystal structures with the hops that fall below 3 Å shown and colored by symmetric equivalence. (b,d) The migration graphs for the two materials MnO2 and CoO2, where dashed nodes represent j-index nodes that are outside the (0,0,0) unit cell. Although there is only one periodic copy of each node, the dashed nodes are used to differentiate the multiple edges connecting the same two nodes. Two examples of intercalating pathways are highlighted in light green in (b) and (d).
FIG. 3. The NEB-calculated energy landscapes along the lowest-barrier paths for mp-25265 (a) and mp-559299 (b) are shown. These two optimal paths both contain 2 hops. The comparison between the ApproxNEB energy barriers and the relative charge barrier [ρ_cyl(h^K_ij)/min ρ_cyl(h^K_mn)] of all the hops in the five chosen phases of VO(PO4) is shown in (c).
TABLE SI. Space-group mapping of the Li positions s_i in MnO2 under the space-group operations of the host crystal structure. The positions of the s_i and their images are given in fractional coordinates.
Table: ApproxNEB energy barrier and charge barrier of mp-25265 (columns: r_i → r_j, Barrier (eV), ρ_cyl i,j (e−)).
Table: ApproxNEB energy barrier and charge barrier of mp-559299 (columns: r_i → r_j, Barrier (eV), ρ_cyl i,j (e−)).
1:  minCost[v, D] ← mapping with default value INFINITY
2:  prev[v, D] ← mapping with default value NULL
3:  minCost[u, D = (0, 0, 0)] ← 0
4:  Q ← add (v, (0, 0, 0)) to a queue
5:  while Q is not empty do
6:      (u, D) ← pop vertex in Q with lowest cost
7:      for each (v, K) neighbor of u do
8:          if u < v or (u == v and first non-zero index of K is positive) then
            ...
            newCost ← minCost[u, D] + cost[u, v, K]
14:         if D′ within some user-defined limit and newCost < minCost[v, D′] then
            ...
19: end while
20: return cost[u, D] where D ≠ (0, 0, 0)
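For clarity, here is a hedged Python re-implementation of the listing above: a Dijkstra-style search over (node, displacement) states that returns the cheapest way back to the start node with a non-zero cumulative unit-cell displacement, i.e. a percolating intercalation path. The graph encoding and the displacement limit are our assumptions.

```python
# Dijkstra over states (node, D), where D is the accumulated unit-cell
# displacement. `graph[u]` is assumed to be a list of (v, K, cost) edges.
import heapq
from math import inf

def min_periodic_cost(graph, start, max_disp=1):
    best = {(start, (0, 0, 0)): 0.0}
    heap = [(0.0, start, (0, 0, 0))]
    while heap:
        cost, u, D = heapq.heappop(heap)
        if cost > best.get((u, D), inf):
            continue                          # stale queue entry
        for v, K, w in graph[u]:
            D2 = tuple(d + k for d, k in zip(D, K))
            if any(abs(d) > max_disp for d in D2):
                continue                      # enforce the displacement limit
            new_cost = cost + w
            if new_cost < best.get((v, D2), inf):
                best[(v, D2)] = new_cost
                heapq.heappush(heap, (new_cost, v, D2))
    # Cheapest return to `start` with a finite, non-zero displacement.
    return min((c for (v, D), c in best.items()
                if v == start and D != (0, 0, 0)), default=inf)
```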
TABLE SIII. Full list of the migration events (h^K_ij) in CoO2 that are less than 3 Å and are distinct under lattice-vector translations. Hops with the same label are equivalent under some space-group operation.

i-index  j-index  j-image vector (K)  distance (Å)  label
0        1        (-1, 0, 0)          2.846         0
0        1        (-1, 1, 0)          2.846         0
0        1        (0, 0, 0)           2.846         0
0        1        (0, 1, 0)           2.846         0
0        0        (-1, 0, 0)          2.820         1
0        0        (1, 0, 0)           2.820         1
1        1        (-1, 0, 0)          2.820         2
1        1        (1, 0, 0)           2.820         2
INTERCALATING PATHWAY FINDING
TABLE IV. ApproxNEB energy barrier and charge barrier of mp-1104567 (columns: r_i → r_j, Barrier (eV), ρ_cyl i,j (e−), ρ_max i,j).
Table SI (continued): (−x−y−z+1/2, x, y), (−y, −z, x+y+z+1/2), (x+y+z+1/2, −x, −y). a) Determined by the SpacegroupAnalyzer.
S. Kim, L. Yin, M. H. Lee, P. Parajuli, L. Blanc, T. T. Fister, H. Park, B. J. Kwon, B. J. Ingram, P. Zapol, R. F. Klie, K. Kang, L. F. Nazar, S. H. Lapidus, and J. T. Vaughey, ACS Energy Lett. 5, 3203 (2020).
X. Sun, P. Bonnick, and L. F. Nazar, ACS Energy Lett. 1, 297 (2016).
Z. Rong, R. Malik, P. Canepa, G. Sai Gautam, M. Liu, A. Jain, K. Persson, and G. Ceder, Chem. Mater. 27, 6016 (2015).
E. Levi and D. Aurbach, Chem. Mater. 22, 3678 (2010).
G. Mills and H. Jónsson, Phys. Rev. Lett. 72, 1124 (1994).
H. Jónsson, G. Mills, and K. W. Jacobsen, in Classical and Quantum Dynamics in Condensed Phase Simulations, 385 (1998).
L. Kahle, A. Marcolongo, and N. Marzari, Energy Environ. Sci. (2020), 10.1039/C9EE02457C.
F. T. Bölle, N. R. Mathiesen, A. J. Nielsen, T. Vegge, J. M. Garcia-Lastra, and I. E. Castelli, Batteries & Supercaps 3, 488 (2020).
J.-X. Shen, M. Horton, and K. A. Persson, npj Comput. Mater. 6 (2020), 10.1038/s41524-020-00422-3.
A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. A. Persson, APL Mater. 1, 011002 (2013).
T. Juran, J. Young, and M. Smeu, J. Phys. Chem. C 122, 8788 (2018).
S. Laubach, S. Laubach, P. C. Schmidt, D. Ensling, S. Schmid, W. Jaegermann, A. Thißen, K. Nikolowski, and H. Ehrenberg, Phys. Chem. Chem. Phys. 11, 3278 (2009).
A. Togo and I. Tanaka, arXiv:1808.01590 (2018).
S. P. Ong, S. Cholia, A. Jain, M. Brafman, D. Gunter, G. Ceder, and K. A. Persson, Comput. Mater. Sci. 97, 209 (2015).
Supplemental Material reference.
M. M. Ren, Z. Zhou, L. W. Su, and X. P. Gao, J. Power Sources 189, 786 (2009).
C. Siu, I. D. Seymour, S. Britto, H. Zhang, J. Rana, J. Feng, F. O. Omenya, H. Zhou, N. A. Chernova, G. Zhou, C. P. Grey, L. F. J. Piper, and M. S. Whittingham, Chem. Commun. 54, 7802 (2018).
Z. Rong, D. Kitchaev, P. Canepa, W. Huang, and G. Ceder, J. Chem. Phys. 145, 074112 (2016).
| [] |
[
"Deep Rotation Correction without Angle Prior",
"Deep Rotation Correction without Angle Prior"
] | [
"Lang Nie ",
"Member, IEEEChunyu Lin ",
"Kang Liao ",
"Member, IEEEShuaicheng Liu ",
"Fellow, IEEEYao Zhao "
] | [] | [] | Not everybody can be equipped with professional photography skills and sufficient shooting time, and there can be some tilts in the captured images occasionally. In this paper, we propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity in the condition that the rotated angle is unknown. This task can be easily integrated into image editing applications, allowing users to correct the rotated images without any manual operations. To this end, we leverage a neural network to predict the optical flows that can warp the tilted images to be perceptually horizontal. Nevertheless, the pixel-wise optical flow estimation from a single image is severely unstable, especially in large-angle tilted images. To enhance its robustness, we propose a simple but effective prediction strategy to form a robust elastic warp. Particularly, we first regress the mesh deformation that can be transformed into robust initial optical flows. Then we estimate residual optical flows to facilitate our network the flexibility of pixel-wise deformation, further correcting the details of the tilted images. To establish an evaluation benchmark and train the learning framework, a comprehensive rotation correction dataset is presented with a large diversity in scenes and rotated angles. Extensive experiments demonstrate that even in the absence of the angle prior, our algorithm can outperform other state-ofthe-art solutions requiring this prior. The code and dataset are available at https://github.com/nie-lang/RotationCorrection. | 10.1109/tip.2023.3275869 | [
"https://export.arxiv.org/pdf/2207.03054v2.pdf"
] | 250,334,407 | 2207.03054 | 786006c716896c4611bcdf5e2606b8618c13a355 |
Deep Rotation Correction without Angle Prior
Lang Nie, Member, IEEE, Chunyu Lin, Kang Liao, Member, IEEE, Shuaicheng Liu, Fellow, IEEE, Yao Zhao
Index Terms: Computer vision, rotation correction, mesh deformation, optical flow
Not everybody can be equipped with professional photography skills and sufficient shooting time, and there can be some tilts in the captured images occasionally. In this paper, we propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity in the condition that the rotated angle is unknown. This task can be easily integrated into image editing applications, allowing users to correct the rotated images without any manual operations. To this end, we leverage a neural network to predict the optical flows that can warp the tilted images to be perceptually horizontal. Nevertheless, pixel-wise optical flow estimation from a single image is severely unstable, especially in large-angle tilted images. To enhance its robustness, we propose a simple but effective prediction strategy to form a robust elastic warp. Particularly, we first regress the mesh deformation that can be transformed into robust initial optical flows. Then we estimate residual optical flows to endow our network with the flexibility of pixel-wise deformation, further correcting the details of the tilted images. To establish an evaluation benchmark and train the learning framework, a comprehensive rotation correction dataset is presented with a large diversity in scenes and rotated angles. Extensive experiments demonstrate that even in the absence of the angle prior, our algorithm can outperform other state-of-the-art solutions requiring this prior. The code and dataset are available at https://github.com/nie-lang/RotationCorrection.
I. Introduction
People favor recording fascinating landscapes and objects by taking photos. To obtain a visually pleasing appearance, they have to adjust the shooting perspective carefully and firmly hold the camera to catch a perceptually horizontal photograph. However, limited by photography skills and shooting time, the captured images might occasionally exhibit some tilts (Fig. 1a). To overcome the annoying tilt, people have to rotate the images by manually fine-tuning the rotated angle. Nevertheless, this rotation operation is rigid, which destroys the rectangular boundaries (Fig. 1b).
To obtain the visually horizontal perception and retain the rectangular boundaries simultaneously, cropping and completion [1] are the common operations following the rigid rotation. As illustrated in Fig. 1c and Fig. 1d, these operations decrease or increase the image contents, damaging the authenticity of images.
To avoid this problem, image rectangling algorithms [2]-[6] can be leveraged to warp the rotated image (Fig. 1b) to a rectangular image. Fig. 1g and Fig. 1h demonstrate the results of He et al.'s traditional rectangling [2] and Nie et al.'s deep rectangling [5]. But these solutions might reintroduce some slight tilts into the rectangling results because they neglect the angle-preserving constraint. Besides that, content-aware rotation [7] is another practical solution. Given the prior angle, it directly warps the contents in a tilted image to produce a visually horizontal perception without content increasing or decreasing. But the warp optimization heavily relies on the performance of the line segment detector (LSD [8]), inevitably yielding local distortions where LSD fails (Fig. 1f). Moreover, both the rectangling-based solutions and the content-aware rotation scheme share a common limitation: the rotated angle must be known in advance. This limitation makes them non-automatic, multi-step solutions.
In this paper, we propose a new and practical task, Rotation Correction, aiming to solve the above problems in one step. To be rigorous, we define this task as automatically correcting the 2D in-plane tilt (roll) with high content fidelity (preserving contents and boundaries) without the angle prior. It can be easily integrated into image editing applications, freeing users from any manual operations to rectify the tilts.
To achieve this goal, we design a simple but effective neural network to progressively predict the optical flows that can warp the tilted image to be perceptually horizontal. Nevertheless, pixel-wise prediction in large-angle tilted scenes is highly unstable, because it requires flows that are both large and stable. To enhance the robustness, we first propose to predict the mesh deformation so that every pixel in a grid corresponds to an identical homography transformation [9]. The estimated mesh deformation can be more robust and stable due to its lower-resolution representation compared with the optical flow. Subsequently, we transform the mesh deformation into robust initial optical flows and predict the residual flows to endow our network with the flexibility of pixel-wise deformation. In this progressive manner, our predicted flows are both robust and elastic, yielding better correction in detail.
Actually, predicting such flows from a single image is an ill-posed problem, which requires a high-quality dataset to assist in learning this prediction capability. Meanwhile, to establish an evaluation benchmark, we build a comprehensive rotation correction dataset (DRC-D) with a large diversity in scenes and tilted angles. Particularly, we leverage He et al.'s rotation [7] to generate abundant sample candidates, and further filter and rectify them manually. In sum, the proposed dataset includes over 6k samples with tilted inputs, corrected labels, and rotated angles.
Experimental results show that our "no-angle prior" algorithm outperforms the existing state-of-the-art solutions requiring this prior, quantitatively and qualitatively. We conclude our contributions as follows:
• We propose a new and practical rotation correction task, aiming to automatically correct the 2D in-plane tilt with high content fidelity without the angle prior. To accomplish it, an automatic solution is proposed to rectify the tilts by predicting the optical flows for warping.
• To address the instability of monocular optical flow prediction, we propose a simple but effective strategy to form a robust elastic warp, by integrating the robustness of sparse mesh estimation into dense optical flow prediction.
• Due to the absence of a proper dataset with tilted and corrected images, we build a rotation correction dataset with a wide range of rotated angles and scenes.

The remainder of this paper is organized as follows. In Section II, we discuss the related works of rotation correction. The methodology and dataset are described in Sections III and IV, respectively. In Section V, we demonstrate extensive experiments to validate the effectiveness of the proposed solution. Finally, we conclude this work in Section VI.
II. Related Work
We review the related methods on image rotation, rectangling, and monocular optical flow estimation here.
A. Image Rotation
The image rotation process can be implemented in two steps: estimating a rotated angle and rotating.
For the first step, people can acquire the rotated angle by rotating an image clockwise or counterclockwise until it becomes perceptually horizontal. But this manner demands manual operation and time, especially for images with few horizontal/vertical straight lines. An alternative is to estimate the rotated angle from a single image. Particularly, some methods first detect vanishing points [10]-[12] and determine the horizon accordingly. Others devote themselves to directly calibrating camera parameters [13]-[15], such as camera poses, that indicate the rotated angle.
Then, the rotating operation is conducted. In addition to the rigid rotation, other relatively complex transformations (e.g., homography) [14], [15] can be adopted to upright the tilted image. However, they inevitably destroy the rectangular boundaries. To avoid this, He et al. propose content-aware rotation [7] to preserve the boundaries while exhibiting a perceptually rotated appearance. It leverages the property that human eyes are sensitive to tilted horizontal/vertical lines, optimizing a mesh warp that encourages the straight lines to rotate by the same angle.
In contrast, our purpose is to design a one-step solution to correct the tilt automatically with high content fidelity.
B. Image Rectangling
Rectangling [2]-[6] refers to the problem of correcting images with irregular boundaries (e.g., rigidly rotated images, panoramas) into a rectangle. To this end, He et al. [2] propose a two-stage warping solution that acquires an initial mesh and then optimizes a line-preserving target mesh. However, it can only protect limited perceptual properties (such as straight lines), usually failing in scenes with abundant non-linear structures. Recently, Nie et al. [5] combined this traditional warping problem with deep learning to give the algorithm the capability to perceive abundant semantic properties. It significantly reduces the distortions around non-linear objects while simplifying the rectangling process to a one-stage pipeline.
C. Monocular Optical Flow
Monocular optical flow prediction can be widely used in extensive computer vision applications, such as wide-angle portrait correction [16], rectangling [5], retargeting [17], stitching [18], etc.
Tan et al. [16] predict the optical flows to correct wide-angle portrait distortions. These predicted flows are robust and stable because wide-angle distortions are relatively slight, which does not require long optical flows. In image stitching, Kweon et al. [18] propose to estimate the flows for non-overlapping regions. This prediction is severely unstable because it requires predicting long optical flows to satisfy the long-range warping. In large-angle tilted images, long flows are also required. In this paper, we enhance the robustness of monocular optical flow prediction by integrating the robustness of sparse mesh estimation into dense optical flow prediction.
III. Methodology
In this section, we first discuss the robust elastic warp in Section III-A. Then we propose our solution to rotation correction in detail in Section III-B and Section III-C. Finally, we clarify the difference between related works and ours in Section III-D.
A. Robust Elastic Warp
Considering the flexibility of optical flows, we apply monocular optical flow estimation to realize rotation correction. At first, we adopt a simple Unet-style [19] network with 5 different resolution hierarchies to predict the optical flows in our proposed dataset. The results are shown in Fig. 2, where two instances with different tilted angles are exhibited and the red arrows highlight the discontinuous regions. From the corrected results, we can observe:
• The performance of monocular optical flow estimation is not stable. Even in a case of small-angle tilt, discontinuous content can appear.
• As the tilted angle increases, the predicted flows become increasingly unstable. More discontinuous regions could be observed in the case of large-angle tilt.

The cause of the unstable performance can be attributed to the high resolution of the optical flows (the same resolution as the input image). It is challenging to predict accurate flows for every pixel, especially accurate long flows in large-angle tilted cases.
To enhance the robustness of the optical flows, we propose a robust elastic warp that goes from low-resolution robust mesh deformation to high-resolution flexible optical flows. Particularly, the network is designed to regress the mesh deformation first. Due to the low-resolution representation of the mesh, the burden of network prediction is reduced from the flows of all pixels to the offsets of sparse mesh vertexes, thus ensuring the robustness of the mesh deformation prediction. When the mesh deformation is transformed into the corresponding high-resolution optical flows, the converted flows are naturally robust. But this sacrifices the flexibility of pixel-wise deformation. Subsequently, residual optical flows can be predicted to make up for this defect.
B. Network Architecture
To embody the effectiveness of this strategy, we accomplish it with a simple network instead of designing complex network structures. The network pipeline is illustrated in Fig. 3.
1) Mesh Prediction:
Assuming the mesh resolution is U × V, the mesh vertexes have a resolution of (U+1) × (V+1). We predefine a rigid mesh $M_{rig}$ and place it on the corrected image. Then our goal is to predict a mesh $M_{pre}$ that is placed on the tilted image. Supposing the position of a vertex in $M_{rig}$ is $(m_{ij}, n_{ij})$, we design an encoder and regressor to predict the corresponding vertex motion $(\Delta m_{ij}, \Delta n_{ij})$. The corresponding vertex position in $M_{pre}$ is denoted as $(m_{ij} + \Delta m_{ij}, n_{ij} + \Delta n_{ij})$.
For the encoder, we employ 10 convolutional layers with filter numbers set to 64, 64, 64, 64, 128, 128, 128, 128, 256, and 256. A max-pooling layer is adopted every two convolutions except at the beginning and the end. For the regressor, we stack 4 convolutional layers with max-pooling operations and 3 fully connected layers to regress the vertex motion. The filter numbers of convolutions and dimensions of fully connected layers are set to 256, 256, 512, 512, 2048, 1024, and (U + 1) × (V + 1) × 2, respectively.
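The following PyTorch sketch illustrates the regressor head described above; layer sizes follow the text, while kernel sizes, paddings, and the use of LazyLinear are our own assumptions rather than the authors' exact implementation.

```python
# PyTorch sketch of the mesh-motion regressor: 4 convolutions with max-pooling
# followed by 3 fully connected layers, as described in the text.
import torch
import torch.nn as nn

U, V = 8, 6  # mesh resolution used in the paper

class MeshRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return [nn.Conv2d(cin, cout, 3, padding=1),
                    nn.ReLU(inplace=True), nn.MaxPool2d(2)]
        self.conv = nn.Sequential(*block(256, 256), *block(256, 256),
                                  *block(256, 512), *block(512, 512))
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(2048), nn.ReLU(inplace=True),
            nn.Linear(2048, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, (U + 1) * (V + 1) * 2))  # vertex motions (dm, dn)

    def forward(self, feat):                  # feat: encoder output (B,256,h,w)
        motion = self.fc(self.conv(feat))
        return motion.view(-1, V + 1, U + 1, 2)
```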
2) Mesh to Flow: We transform the mesh deformation into optical flows in this step. Particularly, we calculate a homography transformation $H_{uv}$ (u = 1, 2, ..., U; v = 1, 2, ..., V) for every corresponding grid pair from $M_{rig}$ to $M_{pre}$ with their respective four adjacent vertexes by the direct linear transform (DLT) algorithm [20]. We represent $H_{uv}$ as a 3 × 3 matrix as follows:
$$H_{uv} = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & 1 \end{bmatrix}, \tag{1}$$
where $h_1$, $h_2$ and $h_3$ are the row vectors of $H_{uv}$. Then, for every pixel (x, y) in the corrected image at the uv-th grid of $M_{rig}$, we convert the mesh deformation into the corresponding flows $(f^{xy}_{hor}, f^{xy}_{ver})$ as Eq. (2):
$$(f^{xy}_{hor},\, f^{xy}_{ver}) = \left( \frac{h_1 \cdot (x\;y\;1)^T}{h_3 \cdot (x\;y\;1)^T} - x,\;\; \frac{h_2 \cdot (x\;y\;1)^T}{h_3 \cdot (x\;y\;1)^T} - y \right). \tag{2}$$
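A numpy/OpenCV sketch of Eqs. (1)-(2) is given below: one homography is estimated per grid cell from its four vertex correspondences and then converted into the initial flow of every pixel in that cell. It assumes the rigid mesh vertices lie on integer pixel coordinates; all names are illustrative.

```python
# Per-grid homography (DLT, Eq. 1) converted into the initial optical flow
# of every pixel of that cell (Eq. 2).
import cv2
import numpy as np

def mesh_to_flow(m_rig, m_pre, height, width, U=8, V=6):
    """m_rig, m_pre: (V+1, U+1, 2) vertex grids (x, y) on the corrected and
    tilted images, respectively."""
    flow = np.zeros((height, width, 2), np.float32)
    for v in range(V):
        for u in range(U):
            src = m_rig[v:v + 2, u:u + 2].reshape(4, 2).astype(np.float32)
            dst = m_pre[v:v + 2, u:u + 2].reshape(4, 2).astype(np.float32)
            H = cv2.getPerspectiveTransform(src, dst)   # DLT, Eq. (1)
            x0, y0 = m_rig[v, u].astype(int)
            x1, y1 = m_rig[v + 1, u + 1].astype(int)
            xs, ys = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
            pts = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
            warped = H @ pts
            warped = warped[:2] / warped[2]             # projective division, Eq. (2)
            f = (warped - pts[:2]).T.reshape(y1 - y0, x1 - x0, 2)
            flow[y0:y1, x0:x1] = f
    return flow
```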
The initial flows calculated from the mesh deformation are stable because the mesh deformation is robust due to its low-resolution characteristic. However, these flows sacrifice the flexibility of pixel-wise deformation. For example, the flows transformed from the same grid pair share the same warping function (homography transformation).

Fig. 3. The pipeline of the proposed rotation correction network. We first regress the low-resolution mesh deformation that can be transformed into a robust initial flow. Then the residual flows are predicted to make up for the flexibility of pixel-wise deformation.
3) Residual Flow Prediction:
To remedy this drawback, we design an additional encoder-decoder network to predict the residual flows. This part takes the intermediate corrected image $O_{ic}$ as input, which can be obtained by warping the tilted image I via the mesh deformation. We formulate this process as Eq. (3):
$$O_{ic} = W(\{H_{uv} \mid u = 1, 2, \dots, U;\; v = 1, 2, \dots, V\},\, I), \tag{3}$$
where W(·, ·) represents the mesh warping operation that takes a mapping function and an image as input.
The encoder-decoder network follows a structure similar to Unet [19]. Specifically, its encoder shares the same structure as that in the mesh prediction stage. The decoder adopts a symmetrical structure to the encoder, where transposed convolutions are employed to increase the feature resolution, and skip connections are used to connect features of the same resolution. At the end of the decoder, an additional convolution with 2 filters is adopted to output the residual optical flows $(\Delta f^{xy}_{hor}, \Delta f^{xy}_{ver})$. By adding the initial flows and the residual flows, we obtain the final flows that are used to get the final corrected image $O_{fc}$ as Eq. (4):

$$O_{fc}(x, y) = I\left(x + f^{xy}_{hor} + \Delta f^{xy}_{hor},\; y + f^{xy}_{ver} + \Delta f^{xy}_{ver}\right). \tag{4}$$
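Eq. (4) amounts to a backward warp of the tilted image with the final flow. A minimal PyTorch sketch (our reading of the procedure, not the released code) is:

```python
# Backward warp of the tilted image with the final flow (Eq. 4).
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """image: (B,3,H,W); flow: (B,2,H,W) holding (f_hor, f_ver) in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = xs[None].to(flow) + flow[:, 0]   # x + f_hor + delta f_hor
    grid_y = ys[None].to(flow) + flow[:, 1]   # y + f_ver + delta f_ver
    # Normalize sampling locations to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * grid_x / (w - 1) - 1,
                        2 * grid_y / (h - 1) - 1], dim=-1)
    return F.grid_sample(image, grid, align_corners=True)
```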
C. Objective Function
The objective function consists of a content term $L_{content}$ and a symmetry-equivariant term $L_{symmetry}$, formulated as follows:

$$L = L_{content} + \omega L_{symmetry}, \tag{5}$$

where ω is the weight balancing the significance of $L_{content}$ and $L_{symmetry}$.
Content Term. We design our content term following two principles:
1) Simple. We hope the proposed robust elastic warp is simple but effective. Therefore, the learning framework should be trainable effectively with a simple loss.

2) Perceptual. The network should focus on the semantic properties that embody the horizon instead of treating all pixels equally, without attention.
To satisfy the above requirements, we leverage the perceptual loss [21] as our loss function. The perceptual loss minimizes the distance between the high-level semantics of corrected images and that of labels. It encourages the objects that are more significant in semantics, instead of all pixels, to be more strictly horizontal, which conforms to human perception naturally. It can promote tilt correction from two complementary perspectives: 1) Find the semantically significant regions via the pretrained VGG19 [22]. The network is then encouraged to preserve the shape of these regions and only correct the tilt. 2) Find the semantically insignificant regions implicitly. Compared with semantically significant regions, these insignificant regions contribute quite low errors to the perceptual loss. Therefore, to keep a rectangular boundary, many warping operations such as stretching or flattening usually happen in these regions (e.g., the lake, sky, and so on), making the distortions visually unnoticeable.
We define Φ(·) as the operation of extracting semantic features. It takes an image as input and outputs the feature maps from VGG19 [22]. In our implementation, we use the features extracted after the conv4_3 layer as an effective perceptual representation. Denoting the corrected label as $\hat{O}$, the content term can be formulated as follows:
$$L_{content} = \frac{1}{N}\left( \left\lVert \Phi(\hat{O}) - \Phi(O_{ic}) \right\rVert_2^2 + \lambda \left\lVert \Phi(\hat{O}) - \Phi(O_{fc}) \right\rVert_2^2 \right), \tag{6}$$

where N and λ denote the number of elements in the feature maps and the weight for the residual flow prediction, respectively.
Symmetry-Equivariant Term. For two left-right symmetrical images, the corresponding corrected images should also be symmetrical. In other words, if we exchange the order of the symmetry and rotation correction operations, the results should be invariant. Based on this observation, we design a symmetry-equivariant loss to further give the network the capability of horizon perception. Letting $I_{sym}$ be the left-right symmetric image of I, the corresponding outputs are $O^{sym}_{ic}$ and $O^{sym}_{fc}$. Then the symmetry-equivariant term is defined as:
$$L_{symmetry} = \frac{1}{N}\left( \left\lVert \Phi(O^{sym}_{ic}) - \Phi(f_{sym}(O_{ic})) \right\rVert_2^2 + \lambda \left\lVert \Phi(O^{sym}_{fc}) - \Phi(f_{sym}(O_{fc})) \right\rVert_2^2 \right), \tag{7}$$
where $f_{sym}(\cdot)$ denotes the operation of left-right symmetry (horizontal flip).
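A hedged PyTorch sketch of Eqs. (6)-(7) follows, with Φ(·) taken as a frozen VGG19 truncated after conv4_3; the torchvision slice index and the weight-loading string are our assumptions.

```python
# Content (Eq. 6) and symmetry-equivariant (Eq. 7) terms with a frozen VGG19.
import torch
import torchvision

vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:24].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def phi(x):
    return vgg(x)

def content_loss(o_ic, o_fc, gt, lam=0.25):                   # Eq. (6)
    f_gt = phi(gt)
    n = f_gt.numel()
    return (torch.sum((f_gt - phi(o_ic)) ** 2)
            + lam * torch.sum((f_gt - phi(o_fc)) ** 2)) / n

def symmetry_loss(o_ic, o_fc, o_ic_sym, o_fc_sym, lam=0.25):  # Eq. (7)
    flip = lambda t: torch.flip(t, dims=[-1])                 # f_sym: flip
    f_ic = phi(flip(o_ic))
    n = f_ic.numel()
    return (torch.sum((phi(o_ic_sym) - f_ic) ** 2)
            + lam * torch.sum((phi(o_fc_sym) - phi(flip(o_fc))) ** 2)) / n
```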
D. Comparisons to Related Work
The proposed rotation correction method takes a tilted image as input. It rectifies the content tilt automatically without an angle prior requirement. Next, we discuss the differences between the related works and ours.
Nie et al.'s rectangling [5] vs. ours: The input of Nie et al.'s rectangling is the rigidly rotated image with a specific angle (e.g., the tilted angle). Then it is warped to produce rectangular boundaries using mesh deformation. However, the mesh representation loses the flexibility of pixel-wise deformation. In our work, we use optical flow to produce a flexible warp. To overcome the instability of monocular optical flow estimation, especially in heavily tilted images, we propose to incorporate the robustness of mesh deformation with the flexibility of optical flows. This ingenious combination of their natural advantages yields a robust elastic warp.
He et al.'s rotation [7] vs. ours: The angle prior is also required in He et al.'s rotation, which takes a tilted image and the corresponding tilted angle as input. It then optimizes an energy function that encourages the boundaries to remain rectangular and the lines to rotate by the known tilted angle. Different from it, our solution is free from the limitation of the angle prior, predicting a warp according to the different tilted degrees.
IV. Data Preparation
As there is no proper dataset for rotation correction, we build a comprehensive dataset to train the learning framework and establish an evaluation benchmark. The process can be divided into 4 steps: 1) We collect abundant perceptually horizontal images (Fig. 5a) as the ground truth of the rotation-corrected images. Specifically, we select the categories that belong to buildings and landscapes from ImageNet [23], e.g., boathouse, castle, church, lakeshore, volcano, etc., because the horizontal properties of these images are relatively easy to perceive. Then we manually pick up the perceptually horizontal images from these labeled candidates. We repeat this filtering process for 3 epochs, and fewer than 3k images remain from more than 10k images. Besides, to enrich the variety of scenes, we also collected some pictures of other categories ourselves.
2) We apply He et al.'s rotation algorithm [7] to rotate these horizontal images (Fig. 5b). Particularly, every image is rotated by 6 random angles that belong to 6 different angle intervals ([−10°, −7°), [−7°, −4°), [−4°, −1°), (1°, 4°], (4°, 7°], and (7°, 10°]). We define the maximum tilted angle as 10° because people tend to take pictures without tilt unconsciously. Even if there is a tilt, it would not be very large. Besides, He et al.'s rotation fails frequently (loses image content or produces large distortions) when the rotation angle is larger than 10°, which affects the dataset generation.
3) Then, we manually filter out the rotated images with noticeable distortions for 3 epochs. Only 6,202 images remain from over 20k samples. 4) To further enhance the quality of our dataset, we develop a mesh-based program to manually fine-tune the rotated results following He et al.'s rotation [7]. It allows the user to drag the mesh vertexes with a mouse to modify the mesh deformation interactively, as shown in Fig. 5c. This process requires huge manual labor, and we randomly select about 20% of the images from the last step for manual correction. Now we have all the samples of our dataset, with tilted images, labels for corrected images, and labels for the rotated angles that rectify the horizon. The training and testing sets are randomly divided according to a 9:1 ratio. Finally, we get 5,537 samples for training and 665 samples for testing, with the resolution set to 512 × 384. We name this dataset DRC-D, and its data distribution with respect to rotated angles is shown in Fig. 4.
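The per-image random-angle sampling of step 2 can be sketched as follows; the interval bounds follow the text, and treating every interval as uniform (ignoring whether the endpoints are open or closed) is our simplification.

```python
# One random angle (in degrees) from each of the six tilt intervals.
import random

INTERVALS = [(-10, -7), (-7, -4), (-4, -1), (1, 4), (4, 7), (7, 10)]

def sample_angles():
    return [random.uniform(lo, hi) for lo, hi in INTERVALS]
```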
V. Experiment
In this section, we conduct extensive comparative experiments and ablation studies in Section V-B and V-C. To further validate the effectiveness of our method, cross-dataset evaluation is demonstrated in Section V-B. Moreover, the analysis of the feature visualization is provided to explore the secret of rotation correction in the same section. Finally, we discuss the potential applications and future prospects in Section V-D.
A. Implementation Details

1) Training Details: We use the Adam optimizer [24] to train our network with an exponentially decaying learning rate initialized to 10e-4. The batch size is set to 4, and the training process takes 150k iterations on a single GPU (NVIDIA RTX 2080 Ti). ω and λ are set to 0.1 and 0.25, respectively. We assign 8 × 6 to U × V because high-resolution mesh regression would increase the computational burden. Moreover, the subsequent residual flow prediction can make up for the lack of pixel-wise deformation in the low-resolution mesh deformation.
2) Inference: The proposed algorithm can process images with arbitrary resolutions. For example, given a 2048 × 1536 tilted image, it would be first downsampled to 512 × 384 and the optical flows to rectify the horizon are predicted in the downsampled resolution. Then the flows would be upsampled by increasing the resolution and magnifying the values, and the corrected result can be obtained by warping the full resolution input using the upsampled flows.
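A minimal PyTorch sketch of this inference-time flow upsampling (resize the field, then magnify its values by the resolution ratio) is given below; it is our reading of the described procedure, not the released code.

```python
# Upsample a flow field predicted at the network resolution to the original
# image resolution, scaling the displacement values accordingly.
import torch.nn.functional as F

def upsample_flow(flow, out_h, out_w):
    """flow: (B,2,h,w) predicted at the network resolution, in pixels."""
    b, _, h, w = flow.shape
    up = F.interpolate(flow, size=(out_h, out_w),
                       mode="bilinear", align_corners=True)
    up[:, 0] *= out_w / w                     # magnify horizontal displacements
    up[:, 1] *= out_h / h                     # magnify vertical displacements
    return up
```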
It takes about 0.2 seconds to process such a high-resolution image on a GPU. The running time dominantly depends on warping (interpolating) the full-resolution image.

B. Comparative Experiment

1) Compared with Content-Altering Solutions: Cropping and completion are straightforward solutions for the irregular boundaries caused by rigid rotation. However, cropping decreases the image contents, so the cropped results exhibit a visual effect of FoV shrinking. Completion [1] increases extra contents that are visually reasonable but not reliable. Therefore, it is unfair to compare our solution with them quantitatively.
We demonstrate the qualitative comparisons in Fig. 6, where three instances with different tilted angles are given. As the rotated angle increases (from top to bottom), the content loss and content addition become more noticeable. Compared with them, our solution corrects the tilt naturally without content altering and without the angle prior.
2) Compared with Content-Preserving Solutions:
We also compare our solution with the content-preserving solutions as follows:
Rotation: The rotation operation takes the tilted image and the ground truth rotated angle as the input. After rigid rotation, the boundaries become irregular and the resolution is also changed according to the rotated angle. We resize the rotated image to the original resolution for comparison purposes.
Rectangling: The rectangling operation takes the rotated image as input and outputs the rectangular image without content altering. For fairness, we retrain Nie et al.'s rectangling [5] model on DRC-D by replacing the input with the rigidly rotated images. We set the mesh resolution of both He et al.'s rectangling [2] and Nie et al.'s rectangling [5] to 8 × 6 for two reasons: 1) keeping the mesh resolution consistent with that in our solution, and 2) avoiding noticeable distortions that might frequently be produced with a high-resolution mesh.

Content-aware rotation: Similar to rotation, the content-aware rotation algorithm [7] also takes the tilted image and the ground-truth rotated angle as input. But it outputs content-rotated rectangular results. The mesh resolution is also set to 8 × 6 since this resolution yields better less-distortion results compared with a high-resolution mesh.
We demonstrate the quantitative and no-reference quantitative comparisons in Table I and Table II. From Table I, He et al.'s rotation is evaluated with the best PSNR and SSIM, while ours is ranked second in these two metrics. He et al.'s rotation has an inevitable advantage on the DRC-D dataset, because the tilted images are generated from horizontal images with He et al.'s rotation and further manual correction. Besides, the accurate rotation angle (ground-truth angle) is provided in advance for He et al.'s solution, while ours requires no angle prior. Since human eyes are sensitive to the salient regions in horizontal perception, these two statistic-based metrics cannot objectively reflect the quality of rotation correction. Therefore, we add FID [25] and LPIPS [26] as perceptual measures that are popularly adopted in image generation tasks [1], [30]. On these perception-based metrics, the proposed solution reaches the best performance. Besides that, we conduct a no-reference blind image quality evaluation of these solutions. As shown in Table II, BRISQUE [27] is a natural scene statistic-based assessment, while PIQUE [28] and RankIQA [29] are perception-based assessments. Our method is ranked the best solution on these no-reference metrics.
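For reference, the LPIPS scores above can be reproduced with the public lpips package roughly as follows; this is an illustrative evaluation stand-in, not the authors' script, and it assumes inputs scaled to [-1, 1].

```python
# LPIPS (alex and vgg variants) between corrected results and ground truth.
import torch
import lpips

lpips_alex = lpips.LPIPS(net="alex")
lpips_vgg = lpips.LPIPS(net="vgg")

def perceptual_scores(pred, gt):
    """pred, gt: (B,3,H,W) tensors in [-1, 1]; returns (alex, vgg) distances."""
    with torch.no_grad():
        return (lpips_alex(pred, gt).mean().item(),
                lpips_vgg(pred, gt).mean().item())
```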
Moreover, the qualitative comparisons are exhibited in Fig. 7 and Fig. 8. The rectangling-based solutions might reintroduce a slight tilt into the rectangular results because they neglect the angle-preserving constraint. The content-aware rotation [7] occasionally gives poor details in the rotated regions due to the limitation of the mesh resolution. (Actually, a higher mesh resolution in the content-aware rotation might produce many more distortions, yielding worse performance.) On the other hand, they all require the rotated angles as additional input. Compared with them, our solution corrects the tilt naturally without this angle prior.
3) Cross-Dataset Evaluation:
In this cross-dataset evaluation, we adopt the DRC-D dataset to train our model and test it on other datasets. Here we conduct the testing experiments on images from MS-COCO [31] and demonstrate the visual appearance. As shown in Fig. 9, several examples with different resolutions and aspect ratios are given. Our solution predicts the optical flows to eliminate the visual tilt without the angle prior.
4) The Secret of Rotation Correction: To further explore how the neural network works for rotation correction, we analyze the monocular optical flow prediction process in this section. To this end, we visualize the feature maps to explore the secret of rotation correction. Specifically, we adopt an encoder-decoder network with skip connections and 5 different resolution hierarchies to predict the optical flows. The feature maps from the conv3_2 layers (both encoder and decoder) and the predicted flows are visualized in Fig. 10. The two testing samples are tilted to different degrees. By comparing the feature maps, we conclude that the encoder and decoder play different roles:

1) The role of the encoder is to extract semantic features that are beneficial to horizontal perception, such as straight lines close to the vertical/horizontal direction.
2) The decoder is responsible for distinguishing the tilted regions. For example, the heads of the tower in the third column of Fig. 10 are highlighted to different degrees. In the large-angle rotated instance, the network assigns larger values to the feature region (row 1, col 3, highlighted by the arrow) to predict longer flows that can correct it. In contrast, the feature region is relatively neglected in the small-angle rotated instance (row 2, col 3, highlighted by the arrow).
C. Ablation Study
The proposed monocular optical flow prediction strategy is simple but effective. We evaluate the effectiveness of every module on DRC-D.
1) Only Mesh: Another effective approach to correct the tilt is to predict the mesh deformation instead of the optical flows. However, the low-resolution mesh deformation might damage the content details while the high-resolution one tends to cause unnatural distortions to the mesh, such as self-intersection. Besides, only predicting the mesh can yield uneven rectangular boundaries in the corrected image as shown in Fig. 11 (left). Considering the fairness of comparing the mesh prediction method with ours, we also repeat the mesh deformation module to replace the residual optical flow prediction module ("mesh+mesh" in Table III).
2) Only Flow: Predicting pixel-wise optical flows can produce a pixel-wise warp. But this warp is extremely unstable especially in large-angle tilted scenes as shown in Fig. 11 (middle). On the other hand, it can produce perfect rectangular boundaries in the corrected result, which is opposite to the mesh warp. Also, we repeat the optical flow prediction module to replace the mesh deformation estimation module ("flow+flow" in Table III).
3) Mesh + Flow: Mesh prediction can produce a robust warp, but it lacks pixel-wise deformation capability and produces uneven boundaries. In contrast, flow prediction can produce pixel-wise flexible warp with even rectangular boundaries, but the prediction would become unstable in large-angle tilted scenes. We combine their advantages to form a robust elastic warp that can rectify the horizon effectively.
4) Symmetry-Equivariant Loss: The symmetry-equivariant loss can further improve the performance to rectify the horizon. Besides, it can enhance the generalization capability when the pre-trained model is transferred to other datasets.
D. Future Prospect
To correct the tilt, existing solutions share a two-stage pipeline, in which a single-image calibration method is first used to estimate the tilt angle and then a content-aware warping method is used to remove the content tilt. Compared with them, we propose the first one-stage baseline by constructing a robust elastic warp and a benchmark dataset. For future works, more geometric features (e.g., lines, curves, etc.) can be combined with the semantic features to reach better content preservation. To generalize to other scenes, weakly supervised and semi-supervised algorithms could be studied to decrease the demand for expensive labeled data. Besides, the proposed mesh-to-flow strategy shows the potential to be extended to other image-warping tasks, such as portrait correction [16], image rectangling [5], image retargeting [17], and so on.
VI. Conclusion
In this paper, we propose a new and practical task to automatically correct the content tilt without the angle prior, which can be easily integrated into image editing applications. To accomplish this task, we design a simple but effective warping strategy. Particularly, we combine the robustness of mesh estimation and the flexibility of optical flow prediction into a unified framework, contributing to a robust elastic warp. To establish an evaluation benchmark and train the learning framework, we build a comprehensive rotation correction dataset with a large diversity in rotated angles and scenes. Finally, we validate our method by conducting extensive experiments and ablation studies. The results show our superiority over other state-of-the-art solutions and the effectiveness of the new warping strategy.
Fig. 1. Different solutions to correct the tilted image. Our solution (e) can eliminate the tilt without the angle prior, while the others (b)(c)(d)(f)(g)(h) require an accurate rotated angle. The red square denotes the cropping region, and the arrow highlights the distorted area. The horizontal and vertical dotted lines are drawn to help observe the slight tilt.
Fig. 2. Qualitative analysis of the optical flow robustness. Top: small-angle tilt (3.1°). Bottom: large-angle tilt (9.5°).
Fig. 4. Data distribution of the training and testing sets of the proposed dataset with respect to rotated angles.
Fig. 5. The process of dataset generation. We further correct the randomly rotated result generated from He et al.'s rotation [7]. (b) The result of He et al.'s rotation (6.1°). From left to right: initial mesh, optimized mesh, rotated image, and zoomed-in region. (c) The result of further manual correction. From left to right: manual adjustment, corrected mesh, corrected image, and zoomed-in region. The red arrows in (c) indicate the manual adjustment of moving the mesh vertices. He et al.'s rotation neglects the rotation of the cross ((b) right), while our manual correction slightly rotates it to produce a more natural appearance ((c) right).
Fig. 6. Qualitative comparisons with cropping and completion. The rotated angles are 3.5°, -6.5°, and 9.0° from top to bottom. The red circles highlight the failure of completion.
Fig. 7. Qualitative comparisons with rectangling-based solutions. From left to right: tilted image, rotation, rotation & He et al.'s rectangling, rotation & Nie et al.'s rectangling, our rotation correction. The rotated angles are -5.3°, -5.5°, and 5.4° from top to bottom. The rectangling algorithms take rotated images (the second column) as input and output rectangular images at the cost of reintroducing a slight tilt in perception. We plot the horizontal or vertical dotted lines for better observation.
Fig. 8. Qualitative comparison with He et al.'s rotation [7]. The rotated angles are -9.5°, 6.5°, 8.1°, and 5.7° from left to right. We zoomed in on local regions to compare the details. Notice that our algorithm does not require the angle prior, while He et al.'s rotation [7] must take a specific angle as additional input.
Fig. 9. Cross-dataset evaluation. We train our model on DRC-D and qualitatively evaluate it on MS-COCO [31]. Several examples with different resolutions and aspect ratios are demonstrated here, where each example includes a pair of tilted image (left) and our result (right).
Fig. 10. The secret of rotation correction. The encoder and decoder are responsible for different works.
TABLE I
Quantitative comparisons with content-preserving solutions on DRC-D.

  Solution                                  Angle prior  PSNR (↑)  SSIM (↑)  FID [25] (↓)  LPIPS-vgg [26] (↓)  LPIPS-alex [26] (↓)
1 Rotation                                  w/           11.57     0.374     34.40         0.468               0.441
2 Rotation & He et al.'s rectangling [2]    w/           17.63     0.488     15.30         0.345               0.324
3 Rotation & Nie et al.'s rectangling [5]   w/           19.89     0.550     13.40         0.286               0.295
4 He et al.'s rotation [7]                  w/           21.69     0.646     8.51          0.212               0.171
5 Our rotation correction                   w/o          21.02     0.628     7.12          0.205               0.096
TABLE II
No-reference quantitative comparisons with content-preserving solutions on DRC-D.

  Solution                                  Angle prior  BRISQUE [27] (↓)  PIQUE [28] (↓)  RankIQA [29] (↓)
1 Rotation & He et al.'s rectangling [2]    w/           32.81             17.20           0.745
2 Rotation & Nie et al.'s rectangling [5]   w/           32.94             15.41           0.795
3 He et al.'s rotation [7]                  w/           28.44             9.44            0.424
4 Our rotation correction                   w/o          26.38             8.19            0.299
Fig. 11. Ablation studies on warping strategies. From left to right: predicting mesh to warp, predicting flows to warp, predicting mesh+flow to warp. Compared with only mesh/flow, mesh+flow can yield robust correction with better details and boundaries.
TABLE III
Ablation studies on DRC-D.

  Architecture  Loss                         PSNR (↑)  SSIM (↑)
1 Mesh          L_content                    19.64     0.618
2 Mesh+mesh     L_content                    19.52     0.604
3 Flow          L_content                    18.48     0.500
4 Flow+flow     L_content                    18.66     0.511
5 Mesh+flow     L_content                    20.90     0.621
6 Mesh+flow     L_content + L_symmetry       21.02     0.628
[1] Z. Yi, Q. Tang, S. Azizi, D. Jang, and Z. Xu, "Contextual residual aggregation for ultra high-resolution image inpainting," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7508-7517, 2020.
[2] K. He, H. Chang, and J. Sun, "Rectangling panoramic images via warping," ACM Transactions on Graphics, vol. 32, no. 4, pp. 1-10, 2013.
[3] D. Li, K. He, J. Sun, and K. Zhou, "A geodesic-preserving method for image warping," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 213-221, 2015.
[4] Y. Zhang, Y.-K. Lai, and F.-L. Zhang, "Content-preserving image stitching with piecewise rectangular boundary constraints," IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 7, pp. 3198-3212, 2020.
[5] L. Nie, C. Lin, K. Liao, S. Liu, and Y. Zhao, "Deep rectangling for image stitching: A learning baseline," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5740-5748, 2022.
[6] J.-L. Wu, J.-J. Shi, and L. Zhang, "Rectangling irregular videos by optimal spatio-temporal warping," Computational Visual Media, vol. 8, no. 1, pp. 93-103, 2022.
[7] K. He, H. Chang, and J. Sun, "Content-aware rotation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 553-560, 2013.
[8] R. G. von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, "LSD: A fast line segment detector with a false detection control," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722-732, 2008.
[9] L. Nie, C. Lin, K. Liao, S. Liu, and Y. Zhao, "Depth-aware multi-grid deep homography estimation with contextual correlation," IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-1, 2021.
[10] A. C. Gallagher, "Using vanishing points to correct camera rotation in images," in The 2nd Canadian Conference on Computer and Robot Vision, pp. 460-467, 2005.
[11] J. Lee, H. Go, H. Lee, S. Cho, M. Sung, and J. Kim, "CTRL-C: Camera calibration transformer with line-classification," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16228-16237, 2021.
[12] Y. Lin, R. Wiersma, S. L. Pintea, K. Hildebrandt, E. Eisemann, and J. C. van Gemert, "Deep vanishing point detection: Geometric priors make dataset variations vanish," arXiv preprint arXiv:2203.08586, 2022.
[13] W. Xian, Z. Li, M. Fisher, J. Eisenmann, E. Shechtman, and N. Snavely, "UprightNet: Geometry-aware camera orientation estimation from single images," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9974-9983, 2019.
[14] H. Lee, E. Shechtman, J. Wang, and S. Lee, "Automatic upright adjustment of photographs with robust camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 833-844, 2013.
[15] T. Do, K. Vuong, S. I. Roumeliotis, and H. S. Park, "Surface normal estimation of tilted images via spatial rectifier," in European Conference on Computer Vision, pp. 265-280, 2020.
[16] J. Tan, S. Zhao, P. Xiong, J. Liu, H. Fan, and S. Liu, "Practical wide-angle portraits correction with deep structured models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3498-3506, 2021.
[17] W. Tan, B. Yan, C. Lin, and X. Niu, "Cycle-IR: Deep cyclic image retargeting," IEEE Transactions on Multimedia, vol. 22, no. 7, pp. 1730-1743, 2019.
[18] H. Kweon, H. Kim, Y. Kang, Y. Yoon, W. Jeong, and K.-J. Yoon, "Pixel-wise deep image stitching," arXiv preprint arXiv:2112.06171, 2021.
[19] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, 2015.
[20] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[21] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision, pp. 694-711, 2016.
[22] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[23] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.
[24] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[25] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, "GANs trained by a two time-scale update rule converge to a local Nash equilibrium," Advances in Neural Information Processing Systems, vol. 30, 2017.
[26] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, "The unreasonable effectiveness of deep features as a perceptual metric," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 586-595, 2018.
[27] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
[28] N. Venkatanath, D. Praneeth, M. C. Bh, S. S. Channappayya, and S. S. Medasani, "Blind image quality evaluation using perception based features," in 2015 Twenty First National Conference on Communications, pp. 1-6, 2015.
[29] X. Liu, J. van de Weijer, and A. D. Bagdanov, "RankIQA: Learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1040-1049, 2017.
[30] P. Esser, R. Rombach, and B. Ommer, "Taming transformers for high-resolution image synthesis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12873-12883, 2021.
[31] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision, pp. 740-755, 2014.
| [
"https://github.com/nie-lang/RotationCorrection."
] |
[
"Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection",
"Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection"
] | [
"Marc Lafon ",
"Elias Ramzi ",
"Clément Rambour ",
"Nicolas Thome "
] | [] | [] | Out-of-distribution (OOD) detection is a critical requirement for the deployment of deep neural networks. This paper introduces the HEAT model, a new post-hoc OOD detection method estimating the density of in-distribution (ID) samples using hybrid energy-based models (EBM) in the feature space of a pre-trained backbone. HEAT complements prior density estimators of the ID density, e.g. parametric models like the Gaussian Mixture Model (GMM), to provide an accurate yet robust density estimation. A second contribution is to leverage the EBM framework to provide a unified density estimation and to compose several energy terms. Extensive experiments demonstrate the significance of the two contributions. HEAT sets new state-of-the-art OOD detection results on the CIFAR-10 / CIFAR-100 benchmark as well as on the large-scale Imagenet benchmark. The code is available at: github.com/MarcLafon/heatood. | 10.48550/arxiv.2305.16966 | [
"https://export.arxiv.org/pdf/2305.16966v3.pdf"
] | 258,947,192 | 2305.16966 | e75e08851675eb506ea0149b0403828b6fb24900 |
Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection
Marc Lafon
Elias Ramzi
Clément Rambour
Nicolas Thome
Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection
Out-of-distribution (OOD) detection is a critical requirement for the deployment of deep neural networks. This paper introduces the HEAT model, a new post-hoc OOD detection method estimating the density of in-distribution (ID) samples using hybrid energy-based models (EBM) in the feature space of a pre-trained backbone. HEAT complements prior density estimators of the ID density, e.g. parametric models like the Gaussian Mixture Model (GMM), to provide an accurate yet robust density estimation. A second contribution is to leverage the EBM framework to provide a unified density estimation and to compose several energy terms. Extensive experiments demonstrate the significance of the two contributions. HEAT sets new state-of-the-art OOD detection results on the CIFAR-10 / CIFAR-100 benchmark as well as on the large-scale Imagenet benchmark. The code is available at: github.com/MarcLafon/heatood.
Introduction
Out-of-distribution (OOD) detection is a major safety requirement for the deployment of deep learning models in critical applications, e.g. healthcare, autonomous steering, or defense (Bendale & Boult, 2015; Amodei et al., 2016; Janai et al., 2020). Deployed machine learning systems must successfully perform a specific task, e.g. image classification or image segmentation, while being able to distinguish in-distribution (ID) from OOD samples, in order to abstain from making an arbitrary prediction when facing the latter.
OOD detection is a challenge for state-of-the-art deep neural networks. Most recent approaches follow a post-hoc strategy (Hendrycks & Gimpel, 2017; Liang et al., 2018a; Liu et al., 2020; Sehwag et al., 2021; Sun et al., 2022; Wang et al., 2022) suitable for real-world purposes, which offers the possibility to leverage state-of-the-art models for the main prediction task and to maintain their performances. It also relaxes the need for very demanding training processes, which can be prohibitive with huge deep neural nets and foundation models (Bommasani et al., 2021; Radford et al., 2021; Rombach et al., 2022; Alayrac et al., 2022). Post-hoc methods exploit the feature space of a pre-trained network and attempt at estimating the density of ID features to address OOD detection. Existing ID density estimation methods include Gaussian Mixture Models (GMMs) (Lee et al., 2018b; Sehwag et al., 2021), the nearest neighbors distribution (Sun et al., 2022), or the distribution derived from the energy logits (EL) (Liu et al., 2020). However, these approaches tend to detect different types of OOD data: for instance, GMMs' density explicitly decreases when moving away from training data, making them effective for far-OOD¹ detection, while EL benefits from the classifier training to obtain strong results on near-OOD samples (Wang et al., 2022).
In this work, we introduce HEAT, a new density-based OOD detection method which estimates the density of ID samples using a Hybrid Energy-based model in the feATure space of a fixed pre-trained backbone, and which provides strong OOD detection performances on both near and far-OOD data. HEAT leverages the energy-based model (EBM) framework (LeCun et al., 2006) to build a powerful density estimation method relying on two main components:
1. Energy-based correction of prior OOD detectors (e.g. GMMs or EL) with a data-driven EBM, providing an accurate ID density estimation while benefiting from the strong generalization properties of the priors. The corrected model is carefully trained such that the prior and residual terms achieve optimal cooperation.

2. Hybrid density estimation grounded in a principled composition of energy functions, combining several sources to improve OOD detection. The energy composition requires a single hyper-parameter and involves no computational overhead, since it is applied at a single layer of the network.

Figure 1. Illustration of our HEAT model. HEAT leverages a) K prior density estimators, such as GMM or EL, and overcomes their modeling biases by learning a residual term with an EBM b), leading to more accurate OOD scorers, e.g. HEAT-GMM or HEAT-EL. The second contribution is to combine the different refined scorers using an EBM energy composition function. The final HEAT prediction c) can thus leverage the strengths of the different OOD scorers, and be effective for both far and near-OOD detection.
We illustrate HEAT in Fig. 1 using two prior OOD detectors from the literature: SSD+, which is based on GMMs (Sehwag et al., 2021), and EL (Liu et al., 2020), with CIFAR-10 as ID dataset and six OOD datasets, see Sec. 4. We can see in Fig. 1 that GMM is able to correctly detect far-OOD samples while struggling on near-OOD samples, whereas EL exhibits the opposite behavior. The energy-correction step enhances both priors, reducing the false positive rate (FPR) by -4.7 pts on near-OOD while being stable on far-OOD for GMM, and by -3.2 pts on near-OOD and -1.2 pts for EL. Finally, the energy-composition step produces a hybrid density estimator leading to a better ID density estimation, which further improves the OOD detection performances, both for near and far-OOD regimes.
We conduct an extensive experimental validation in Sec. 4, showing the importance of our two contributions. HEAT sets new state-of-the-art OOD detection results not only with CIFAR-10/-100 as ID data, but also on the large-scale Imagenet dataset. HEAT is also agnostic to the prediction backbone (ResNet, ViT) and remains effective in low-data regimes.
Related work
Seminal attempts at OOD detection used supervised methods based on external OOD samples (Lee et al., 2018a; Malinin & Gales, 2018) or "Outlier Exposure" (OE) (Hendrycks et al., 2019) enforcing a uniform OOD distribution. Although OOD datasets can improve OOD detection, their relevance is questionable, since collecting representative OOD datasets is arguably impossible: OOD samples can lie anywhere outside the training distribution (Charpentier et al., 2020). It can also have the undesirable effect of learning detectors biased towards certain types of OOD (Wang et al., 2022).
Density-based OOD detection. Estimating the density of ID training samples to perform OOD detection is a natural strategy that has been widely explored. In their seminal work, (Lee et al., 2018b) first proposed to approximate the ID features density with a class-conditional GMM. Subsequent works adopted the same approach by adding slight modifications. For instance, (Sehwag et al., 2021) proposed to learn the GMM density of normalized features without having access to class labels. Recently, (Sun et al., 2022) challenged the GMM distributional assumption by showing that using a deep nearest neighbors approach on normalized features has strong OOD detection performances.
Energy-Based Models (EBM) are another approach to estimating the ID density and have made impressive progress in generative modeling for images in recent years (Xie et al., 2016; Du & Mordatch, 2019; Grathwohl et al., 2020). However, their performances for OOD detection are not yet comparable with OOD methods based on the feature space (Elflein et al., 2021). (Liu et al., 2020) proposed to perform OOD detection with an energy score defined by the logsumexp of the logits (EL) of the pre-trained classifier, showing improvements over using the classifier's predicted probabilities (Hendrycks & Gimpel, 2017). Furthermore, the authors of EL propose to fine-tune the logits of the classifier using external OOD datasets. Contrarily, we do not use any OOD samples to learn HEAT but rely on proper EBM training to estimate the ID feature density.
Energy-based correction. Our method relies on energy-based correction of a reference model. This idea has been explored in noise contrastive estimation (NCE) (Gutmann & Hyvärinen, 2010), where the correction is obtained by discriminative learning. Learning an EBM in cooperation with a generator model was introduced in (Xie et al., 2018), where an EBM learns to refine generated samples; it has also been applied to cooperative learning of an EBM with a conditional generator (Xie et al., 2022a), a VAE (Pang et al., 2020; Xie et al., 2021; Xiao et al., 2021) and a normalizing flow (Nijkamp et al., 2022; Xie et al., 2022b). Contrarily to our method, which is designed for OOD detection, previous works focus on generation and cannot benefit from a fixed prior OOD detector, as they use a cooperative learning strategy.
Residual learning. Training hybrid models, where a data-driven residual complements an approximate predictor, has been proposed in several contexts, e.g. in complex dynamics forecasting (Yin et al., 2021), in NLP (Bakhtin et al., 2021), in video prediction (Le Guen & Thome, 2020; Le Guen et al., 2022), or in robotics (Zeng et al., 2020). Such residual approaches have also emerged for OOD detection. ResFlow (Zisselman & Tamar, 2020) uses a normalizing flow (NF) to learn the residual of a Gaussian density for OOD detection. The approach is related to ours, but NFs require an invertible mapping, which intrinsically limits their expressive power and makes the learned residual less accurate. Also, ViM (Wang et al., 2022) proposes to model the residual of the ID density by using the complement to a linear manifold of the ID features. With HEAT, we can learn a non-linear residual and include residuals from different energy terms to improve ID density modeling. We verify experimentally that HEAT significantly outperforms these two baselines for OOD detection.
Ensembling & composition. The question of merging several networks, also known as ensembling (Lakshminarayanan et al., 2017), has been among the first and most successful approaches for OOD detection. The ensemble can include different backbones or different training variants. For OOD detection, several post-hoc approaches also model the ID density at different layer depths of a pre-trained model, the overall density score being obtained by ensembling such predictions (Lee et al., 2018b; Sastry & Oore, 2020; Zisselman & Tamar, 2020). The main limitation of these approaches relates to their computational cost, since the inference time is proportional to the number of networks; the overhead quickly becomes prohibitive in contexts with limited resources. Several sources of prior densities are combined in (Wang et al., 2022) to refine OOD detection. Our approach is a general framework adapting the EBM composition model (Du et al., 2020) to OOD detection, and can thus include several hybrid energy terms to refine ID density estimation. In terms of computational cost, we apply our model at a single layer of the network, bringing essentially no computational overhead at inference (see Appendix B.2).
HEAT for OOD detection
In this section, we describe the proposed HEAT model to estimate the density of in-distribution (ID) features using a hybrid energy-based model (EBM). We remind that we place ourselves in the difficult but realistic case where only ID samples are available, and we do not use any OOD samples for density estimation. Also, HEAT is a post-hoc approach estimating the density of the latent space of a pre-trained prediction model, as in (Lee et al., 2018b;Wang et al., 2022;Sehwag et al., 2021;Sun et al., 2022).
Let p(x) be the probability of ID samples, where x ∈ X, and let z = ϕ(x) ∈ Z denote the network's embedding of x, with Z the latent space at the penultimate layer of a pre-trained prediction model f, e.g. a deep neural net for classification. We aim at estimating $p(z|\mathcal{D})$, with $\mathcal{D} := \{x_i\}_{i=1}^{N}$ the ID training dataset².
We illustrate the two main components at the core of HEAT in Fig. 2. Firstly, we introduce a hybrid density estimation to refine a set of prior densities {q k (z)} 1≤k≤K by complementing each of them with a residual EBM. Secondly, we propose to compose several hybrid density estimations based on different priors, which capture different facets of ID density distributions.
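To make the post-hoc setting concrete, the sketch below (our illustration, not the authors' code) extracts the penultimate-layer features z = ϕ(x) from a frozen classifier; the choice of torchvision's ResNet-34 matches the backbone used in the experiments, but the loading details are assumptions.

```python
import torch
import torchvision.models as models

# Frozen pre-trained classifier; OOD scoring never updates its weights.
backbone = models.resnet34(weights=None)  # pre-trained weights would be loaded here
backbone.fc = torch.nn.Identity()         # expose the 512-d penultimate features

@torch.no_grad()
def extract_features(x: torch.Tensor) -> torch.Tensor:
    """Return z = phi(x), the features on which all ID densities are estimated."""
    backbone.eval()
    return backbone(x)

z = extract_features(torch.randn(4, 3, 32, 32))  # -> shape (4, 512)
```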
Hybrid Energy-based density estimation
The main motivation for hybrid EBM density estimation is to leverage existing models that rely on specific assumptions on the form of the density p(z), e.g. EL (Liu et al., 2020), which captures class-specific information in the logit vector, or SSD (Sehwag et al., 2021), which uses a GMM. These approaches have appealing properties: GMM is a parametric model relying on few parameters and thus exhibits strong generalization performances, and EL benefits from the classification training. However, their underlying modeling assumptions intrinsically limit their expressiveness, which leads to coarse boundaries between ID and OOD, and they generally fail at discriminating ambiguous data.
Hybrid EBM model. Formally, let $q_k(z)$ be a density estimator inducing an OOD prior, among a set of K priors $\{q_k(z)\}_{1 \le k \le K}$. We propose to refine its estimated density by learning a residual model $p^r_{\theta_k}(z)$, such that our hybrid density estimation is performed by $p^h_{\theta_k}(z)$ as follows:

$$p^h_{\theta_k}(z) = \frac{1}{Z(\theta_k)}\, p^r_{\theta_k}(z)\, q_k(z), \qquad (1)$$
with $Z(\theta_k) = \int p^r_{\theta_k}(z)\, q_k(z)\, dz$ the normalization constant. We propose to learn the residual density $p^r_{\theta_k}(z)$ with an EBM: $p^r_{\theta_k}(z) \propto \exp(-E_{\theta_k}(z))$. From Eq. (1), we can derive a hybrid energy $E^h_{\theta_k}(z) = E_{q_k}(z) + E_{\theta_k}(z)$ and express $p^h_{\theta_k}(z)$ as follows:

$$p^h_{\theta_k}(z) = \frac{1}{Z(\theta_k)} \exp\left(-E^h_{\theta_k}(z)\right), \qquad (2)$$

with $E_{q_k}(z) = -\log q_k(z)$ the energy of the prior. The goal of the residual energy $E_{\theta_k}(z)$ is to compensate for the lack of accuracy of the energy of the prior density $q_k(z)$. We choose to parameterize it with a neural network, as shown in Fig. 2. This gives our EBM density estimation the required expressive power to approximate the residual term.

Figure 2. Schematic view of the HEAT model for OOD detection. Each selected prior density estimator $q_k$ is expressed as an EBM, $q_k(z) \propto \exp(-E_{q_k}(z))$, and is refined with its own residual EBM parameterized by a neural network: the energy of each prior $E_{q_k}$ (e.g. EL, GMM) is corrected by a residual energy $E_{\theta_k}$ to produce a hybrid energy $E^h_{\theta_k}$ (cf. Sec. 3.1). All hybrid energies are then composed to produce HEAT's energy $E^\beta_{\mathrm{HEAT}}$ (cf. Sec. 3.2), which is used as an uncertainty score for OOD detection.
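As a minimal sketch (our code, with a generic prior-energy callable and an illustrative MLP architecture, not the paper's exact one), the hybrid energy of Eq. (2) can be implemented as a frozen prior term plus a small trainable residual network:

```python
import torch
import torch.nn as nn

class HybridEnergy(nn.Module):
    """E^h_theta(z) = E_q(z) + E_theta(z): fixed prior energy + learned residual."""
    def __init__(self, prior_energy, feat_dim=512, hidden=1024, depth=6):
        super().__init__()
        self.prior_energy = prior_energy          # analytic E_q (e.g. GMM or EL), frozen
        layers, d = [], feat_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.SiLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        self.residual = nn.Sequential(*layers)    # E_theta, the learned correction

    def forward(self, z):
        return self.prior_energy(z) + self.residual(z).squeeze(-1)
```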
Hybrid EBM training. The hybrid model energy $E^h_{\theta_k}(z)$ can be learned via maximum likelihood estimation (MLE), which amounts to performing stochastic gradient descent with the following loss (cf. Appendix A.1 for details):

$$\mathcal{L}_{\mathrm{MLE}}(\theta_k) = \mathbb{E}_{z \sim p_{\mathrm{in}}}\left[E_{\theta_k}(z)\right] - \mathbb{E}_{z' \sim p^h_{\theta_k}}\left[E_{\theta_k}(z')\right], \qquad (3)$$
with $z \sim p_{\mathrm{in}}$ the true distribution of the features of the dataset. Minimizing Eq. (3) lowers the energy of real samples while raising the energy of generated ones. To learn a residual model, we must sample z′ from the hybrid model $p^h_{\theta_k}$. To do so, we follow previous works on EBM training (Du & Mordatch, 2019) and exploit stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011). SGLD sampling consists in noisy gradient descent on the energy function:
$$z_{t+1} = z_t - \frac{\eta}{2}\, \nabla_z E^h_{\theta_k}(z_t) + \sqrt{\eta}\, w_t, \quad \text{with } w_t \sim \mathcal{N}(0, I), \qquad (4)$$
where η is the step size, the chain being initialized with $z_0 \sim q_k$. The residual energy corrects the prior density by raising (resp. lowering) the energies in areas where the prior over- (resp. under-) estimates $p_{\mathrm{in}}$, and it does so with respect to the current hybrid model $E^h_{\theta_k}$. The overall hybrid EBM training scheme is summarized in Algorithm 1.
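A minimal sketch of the SGLD update of Eq. (4) (our code; the noise scale is exposed as a separate knob since the implementation details in Appendix B decouple it from the step size):

```python
import torch

def sgld_sample(energy_fn, z0, n_steps=20, step_size=1e-4, noise_scale=None):
    """Draw approximate samples from p ∝ exp(-energy_fn) starting from z0 ~ q_k."""
    z = z0.detach().clone()
    for _ in range(n_steps):
        z.requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(z).sum(), z)[0]
        sigma = step_size ** 0.5 if noise_scale is None else noise_scale
        z = (z - 0.5 * step_size * grad + sigma * torch.randn_like(z)).detach()
    return z
```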
Controlling the residual. As our goal is to learn a residual model over $q_k$, we must prevent the energy-correction term $E_{\theta_k}$ from taking too large values, which would cancel the benefit of the prior model $q_k$. Therefore, we introduce an additional loss term preventing the hybrid model from deviating too much from the prior density:

$$\mathcal{L}_C(\theta_k) = \mathbb{E}_{p_{\mathrm{in}},\, p^h_{\theta_k}}\left[\left(E^h_{\theta_k} - E_{q_k}\right)^2\right]. \qquad (5)$$
The final loss is then:

$$\mathcal{L}_{\mathrm{Tot}}(\theta_k) = \mathcal{L}_{\mathrm{MLE}}(\theta_k) + \lambda\, \mathcal{L}_C(\theta_k), \qquad (6)$$
where λ is a hyper-parameter balancing the two losses. Although $\mathcal{L}_C(\theta_k)$ in Eq. (5) rewrites as $\mathbb{E}_{p_{\mathrm{in}},\, p^h_{\theta_k}}\left[E^2_{\theta_k}\right]$, we point out that its objective goes beyond a standard $\ell_2$-regularization used to stabilize training: it has the more fundamental role of balancing the prior and the residual energy terms in order to drive a proper cooperation.
Composition of refined prior density estimators
In this section, we motivate the choice of the prior OOD scorers that we correct, and explain how to efficiently compose them within our HEAT framework.
Selected OOD-Priors. As previously stated, EL and GMM show complementary OOD detection performances, EL being useful to discriminate class ambiguities while GMM is effective on far-OOD. Additionally, they can be directly interpreted as energy-based models and thus can easily be refined and composed with HEAT; the training procedure is summarized in Algorithm 1 and a code sketch follows it.

Algorithm 1: Hybrid Energy Based Model Training
input: features D_z, ID-prior (q_k, E_{q_k}), λ, α and η
output: hybrid EBM E^h_{θ_k}(z) = E_{q_k}(z) + E_{θ_k}(z)   // cf. Eq. (2)
while not converged do
    Sample z ∈ D_z and z'_0 ∼ q_k
    for 0 ≤ t ≤ T−1 do
        w ∼ N(0, I)
        z'_{t+1} ← z'_t − (η/2) ∇_z E^h_{θ_k}(z'_t) + √η w   // SGLD, Eq. (4)
    end
    L_Tot(θ_k) = L_MLE(θ_k) + λ L_C(θ_k)   // cf. Eq. (6)
    θ_k ← θ_k − α ∇_{θ_k} L_Tot(θ_k)
end
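A minimal PyTorch-style sketch of one iteration of Algorithm 1, reusing the sgld_sample helper and HybridEnergy module sketched above; sample_prior and lam (= λ) are illustrative names, and the optimizer is assumed to be built over the residual parameters:

```python
import torch

def training_step(hybrid, optimizer, z_id, sample_prior, lam=10.0):
    """One step of Algorithm 1 on a batch of ID features z_id."""
    z_neg = sgld_sample(hybrid, sample_prior(z_id.shape[0]))   # z' ~ p^h via SGLD
    e_pos, e_neg = hybrid(z_id), hybrid(z_neg)
    loss_mle = e_pos.mean() - e_neg.mean()                     # L_MLE, Eq. (3)
    res = torch.cat([hybrid.residual(z_id), hybrid.residual(z_neg)])
    loss_c = res.pow(2).mean()                                 # L_C, Eq. (5): (E^h - E_q)^2
    loss = loss_mle + lam * loss_c                             # L_Tot, Eq. (6)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return float(loss)
```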
Based on the energy from the logits derived in (Liu et al., 2020), we express the hybrid energy of HEAT-EL as $E^h_{\theta_l}(z) = -\log \sum_c e^{f(z)[c]} + E_{\theta_l}(z)$, where $f(\cdot)[c]$ denotes the logit associated to class c.
For the GMM prior, we derive an energy from the Mahalanobis distances to each class centroid, giving the following expression for the hybrid energy of HEAT-GMM:

$$E^h_{\theta_g}(z) = -\log \sum_c e^{-\frac{1}{2}(z - \mu_c)^T \Sigma^{-1} (z - \mu_c)} + E_{\theta_g}(z),$$

with Σ and $\mu_c$ the empirical covariance matrix and mean feature for class c. HEAT-GMM's energy is computed on the z vector in Fig. 2, which is obtained by average pooling from the preceding tensor in the network. To improve HEAT's OOD detection performances, we propose to further exploit the feature volume prior to the pooling operation (e.g. average pooling), as we hypothesize that it contains more information relevant to OOD detection. To do so, we compute the vector of second-order moments of the feature volume by using a std-pooling operator and subsequently model the density of the second-order features with a GMM. This leads to a third hybrid EBM, denoted HEAT-GMM_std.
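The two prior energies, and the std-pooling used for the GMM_std variant, admit short closed-form implementations. The sketch below is ours, assuming pre-computed class means, a shared precision matrix, and classifier logits:

```python
import torch

def energy_logits(logits):
    """EL prior: E(z) = -log sum_c exp(f(z)[c])."""
    return -torch.logsumexp(logits, dim=-1)

def energy_gmm(z, mus, precision):
    """GMM prior: E(z) = -log sum_c exp(-0.5 (z - mu_c)^T Sigma^{-1} (z - mu_c))."""
    diff = z.unsqueeze(1) - mus.unsqueeze(0)              # (N, C, D)
    maha = torch.einsum('ncd,de,nce->nc', diff, precision, diff)
    return -torch.logsumexp(-0.5 * maha, dim=-1)

def std_pool(feature_map):
    """Second-order moments over spatial locations, for the GMM_std variant."""
    return feature_map.flatten(2).std(dim=-1)             # (N, C, H, W) -> (N, C)
```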
Note that our HEAT method can be extended to K prior scorers, provided that they can be written as an EBM and that they are differentiable, in order to perform SGLD sampling. Interesting extensions would include adapting the approach to other state-of-the-art OOD detectors, such as a soft KNN (Sun et al., 2022) or ViM (Wang et al., 2022). We leave these non-trivial extensions for future work.
Composition strategy. The EBM framework offers a principled way to compose energy functions (Du et al., 2020). Given K corrected energy functions $E^h_{\theta_k}$, such that $p^h_{\theta_k} \propto \exp\!\left(-(E_{\theta_k}(z) + E_{q_k}(z))\right)$, we introduce the following composition function:

$$E^\beta_{\mathrm{HEAT}} = \frac{1}{\beta} \log \sum_{k=1}^{K} e^{\beta E^h_{\theta_k}}. \qquad (7)$$
Depending on β, $E^\beta_{\mathrm{HEAT}}$ can recover a sum of energies (β = 0), i.e. a product of probabilities. For β = −1, $E^\beta_{\mathrm{HEAT}}$ is equivalent to the logsumexp operator, i.e. a sum of probabilities. More details are given in Appendix A.2. Moreover, unlike previous approaches that require learning a set of weights (Lee et al., 2018b; Zisselman & Tamar, 2020), HEAT's composition only requires tuning a single hyper-parameter, β, which has a clear interpretation.
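A minimal sketch of Eq. (7); subtracting log K (our choice, a constant shift that leaves OOD rankings unchanged) keeps the β → 0 limit finite, where it reduces to the mean of the energies:

```python
import math
import torch

def compose(energies: torch.Tensor, beta: float = 0.0) -> torch.Tensor:
    """energies: (K, N) tensor, one row per refined scorer; returns (N,) scores."""
    if abs(beta) < 1e-8:
        return energies.mean(dim=0)        # beta -> 0 limit: sum/mean of energies
    log_k = math.log(energies.shape[0])
    return (torch.logsumexp(beta * energies, dim=0) - log_k) / beta
```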
The composition strategy adopted in HEAT is also scalable since: i) we work in the feature space z = ϕ(x) ∈ Z of controlled dimension (e.g. 1024 even for the CLIP foundation model (Radford et al., 2021)); and ii) our energy-based correction uses a relatively small model (a 6-layer MLP in practice). We study the computational cost of HEAT in Appendix B.2 and show the large gain in efficiency compared to e.g. deep ensembles.
OOD detection with HEAT. Finally, we use the learned and composed energy of HEAT, $E^\beta_{\mathrm{HEAT}}$ in Eq. (7), as an uncertainty score to detect OOD samples.
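Concretely, under the same assumptions as the sketches above (and reusing the compose helper), scoring and thresholding amount to:

```python
import torch

def heat_score(z, hybrid_energies, beta=0.0):
    """Stack the K refined energies and compose them into E^beta_HEAT, Eq. (7)."""
    e = torch.stack([E(z) for E in hybrid_energies])   # (K, N)
    return compose(e, beta)

def is_ood(scores, id_scores):
    """Flag samples whose energy exceeds the threshold keeping 95% of ID below it."""
    tau = torch.quantile(id_scores, 0.95)
    return scores > tau
```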
Experiments
Datasets. We validate HEAT on several benchmarks: the two commonly used CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) benchmarks, as in (Sehwag et al., 2021; Sun et al., 2022), and the large-scale Imagenet (Deng et al., 2009) dataset. More details are given in Appendix B.
Evaluation metrics. We report the following standard metrics used in the literature (Hendrycks & Gimpel, 2017): the area under the receiver operating characteristic curve (AUC) and the false positive rate at a threshold corresponding to a true positive rate of 95% (FPR95).
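For reference, a sketch of how both metrics can be computed from energy scores (higher = more OOD); this is our illustration, not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_fpr95(id_scores: np.ndarray, ood_scores: np.ndarray):
    labels = np.concatenate([np.zeros_like(id_scores), np.ones_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    auc = roc_auc_score(labels, scores)        # OOD treated as the positive class
    tau = np.quantile(id_scores, 0.95)         # threshold at 95% TPR on ID samples
    fpr95 = float((ood_scores <= tau).mean())  # OOD wrongly accepted as ID
    return auc, fpr95
```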
Implementation details. All results on CIFAR-10 and CIFAR-100 are reported using a ResNet-34 (He et al., 2016); on Imagenet we use the pre-trained ResNet-50 from PyTorch (Paszke et al., 2019). All implementation details are given in Appendix B.
Baselines. We perform an extensive validation of HEAT vs. several recent state-of-the-art baselines, including the maximum softmax probability (MSP) (Hendrycks & Gimpel, 2017), ODIN (Liang et al., 2018b), Energy-logits (Liu et al., 2020), SSD (Sehwag et al., 2021), KNN (Sun et al., 2022) and ViM (Wang et al., 2022). We apply our energy-based correction to EL, GMM and GMM_std, which we then denote HEAT-EL, HEAT-GMM and HEAT-GMM_std. We choose these priors as they can naturally be written as energy models, as described in Sec. 3.1; furthermore, they are strong baselines, and combining them allows us to take advantage of their respective strengths (discussed in Sec. 3.2). All the baselines are compared using the same backbone trained with the standard cross-entropy loss.
Correcting prior scorers. In Tab. 1 we demonstrate the effectiveness of energy-based correction to improve different prior OOD scorers on two ID datasets: CIFAR-10 and CIFAR-100. We show that across the two ID datasets and for all prior scorers, using a residual correction always improves the aggregated results, e.g. for GMM -3.6 pts FPR95 on CIFAR-10 and -1.3 pts FPR95 on CIFAR-100. Furthermore, on near-OOD and mid-OOD, learning our correction always improves the prior scorers, e.g. on LSUN with CIFAR-10 as ID dataset the correction improves EL by -2.9 pts FPR95, GMM by -6.6 pts FPR95, and GMM_std by -4 pts FPR95. On far-OOD the corrected scorers perform at least on par with the base scorers, and can further improve them, e.g. on SVHN when CIFAR-100 is the ID dataset, the correction improves EL, GMM and GMM_std by -5.5 pts, -2.2 pts and -3.8 pts FPR95 respectively. Overall, Tab. 1 clearly validates the relevance of correcting the modeling assumptions of prior scorers with our learned energy-based residual.
Learning a residual model. In Tab. 2 we compare learning an EBM (cf. Appendix A.1) vs. our residual training using a GMM prior (HEAT-GMM) of Sec. 3, on CIFAR-10 and CIFAR-100. The EBM is a fully data-driven approach, which learns the density of ID samples without any prior distribution model. On both datasets, our residual training leads to better performances than the EBM, e.g. +2.6 pts AUC on CIFAR-100. On near-OOD, both the residual training and the EBM perform on par. On far-OOD, our residual training takes advantage of the good performances of the prior scorer, i.e. GMM, and significantly outperforms the EBM, especially on CIFAR-100, with e.g. +7.7 pts AUC on Textures. Our residual training combines the strengths of GMM and EBMs: Gaussian modelization by design penalizes samples far away from the training dataset and thus eases far-OOD detection, whereas an EBM may overfit in this case. On the other hand, near-OOD detection requires a density estimation too complex for simple parametric distribution models such as GMMs.

Composing energy-based scorers. In Tab. 3 we show that composing different energy-based scorers (see Sec. 3.2), i.e. the selected OOD prior scorers with our energy-based correction as described in Sec. 3.1, improves overall performances on CIFAR-10 and CIFAR-100. For instance, composing our HEAT-GMM and HEAT-GMM_std leads to improvements of all reported results, i.e. on CIFAR-10 -5.1 pts FPR95 and +0.8 pts AUC, and on CIFAR-100 -0.6 pts FPR95 and +0.6 pts AUC. Composing the three prior scorers leads to the best results, improving over the best single-scorer performances by large margins on CIFAR-10, with -7.1 pts FPR95 and +1.0 pts AUC, and with smaller margins on CIFAR-100, with -0.8 pts FPR95 and +1.1 pts AUC. This shows the interest of composing different scorers, as they detect different types of OOD. Note that while the composition has the best performances, our correction model (HEAT-GMM) already has competitive performances on CIFAR-10 and better performances on CIFAR-100 than the state-of-the-art methods reported in Tab. 4.
Comparison to state-of-the-art
In this section, we present the results of HEAT vs. state-of-the-art methods. In Tab. 4 we present our results with CIFAR-10 and CIFAR-100 as ID data, and in Tab. 5 we present our results on the large and complex Imagenet dataset.
Figure 4. Impact on performances (AUC↑ on CIFAR-100) vs. the number of training data for GMM density, fully data-driven EBM, and HEAT. Our hybrid approach maintains strong performances in low-data regime, in contrast to the fully data-driven EBM.
Model analysis
In this section we show how HEAT works in a wide range of settings. We show in Fig. 3 the impact of λ and β and in Fig. 4 that HEAT performs well in low data regimes.
Robustness to λ. We show in Fig. 3a the impact of λ on the FPR95 with CIFAR-10 as the ID dataset. We can observe that for a wide range of λ, e.g. [2, 50], our energy-based correction improves the OOD detection of the prior scorer, i.e. GMM, with ideal values close to ∼10. λ controls the cooperation between the prior scorer and the learned residual term, which can be observed in Fig. 3a. When setting λ to a value that is too low, there is no control over the energy: the prior density is completely disregarded, which eventually leads to optimization issues resulting in poor detection performances. On the other hand, setting λ to a value too high (e.g. 100) constrains the energy too much, resulting in performances closer to those of GMM. With CIFAR-100 as the ID dataset we observe similar trends in Appendix B.2.
Robustness to β. We show in Fig. 3b that HEAT is robust w.r.t. β in Eq. (7). We remind that β → 0 is equivalent to the mean, β → −∞ to the minimum, and β → ∞ to the maximum. We show that HEAT is stable for different values of β and performs best with values close to 0; this is also the case in Appendix B.2. Note that we used β = 0 for HEAT in Tab. 5 and Tab. 4, but using a lower value, i.e. −1, leads to better results. We hypothesize that a more advanced β selection method could further improve performances.
Figure 5. Qualitative comparison of HEAT vs. EL (Liu et al., 2020) and SSD (Sehwag et al., 2021). Samples in green are correctly detected as OOD (LSUN), samples in red are incorrectly predicted as ID (CIFAR-10).
Low data regime. We study in Fig. 4 (and Appendix B.2) the stability of HEAT in low data regimes. Specifically, we restrict the training of HEAT to a subset of the ID dataset, i.e. CIFAR-100. We compare HEAT to a fully data-driven EBM and to a GMM. The EBM is very sensitive to the lack of training data, with a gap of 12 pts AUC between 10% of the data and 100%. On the other hand, GMM is quite robust to low data regimes, with a minor gap of 0.3 pts AUC between 10% and 100%. HEAT builds on this stability and is able to improve the performance of GMM for all tested sampling ratios. HEAT is thus much more stable in low data regimes than a standard EBM, and improves GMM even when few training data are available.
Qualitative results
In Fig. 5 we display qualitative results on CIFAR-10 (ID) and show the detection results of OOD samples from LSUN. We display in red OOD samples incorrectly identified as ID samples, i.e. below the threshold set at 95% of ID samples, and in green OOD samples correctly detected, i.e. above the 95% threshold. We can see that SSD detects different OOD samples than EL. HEAT correctly predicts all OOD samples. Other qualitative results are provided in Appendix B.3.
Conclusion
We have introduced the HEAT model, which leverages the versatility of the EBM framework to provide a strong OOD detection method jointly effective on both far and near-OOD. HEAT i) corrects prior OOD detectors to boost their detection performances and ii) naturally combines the corrected detectors to take advantage of their strengths. We perform extensive experiments to validate HEAT on several benchmarks, highlighting the importance of the correction and the composition, and showing that HEAT sets new state-of-the-art performances on CIFAR-10, CIFAR-100, and the large-scale Imagenet dataset. HEAT is also applicable to different backbones and remains efficient in low-data regimes.
Future work includes extending HEAT by correcting other prior density estimators, e.g. KNN. Another interesting direction is to validate HEAT on other tasks, e.g. segmentation, or modalities, e.g. NLP.
Xie, J., Zhu, Y., Li, J., and Li, P. A tale of two flows: Cooperative learning of Langevin flow and normalizing flow toward energy-based model. 2022.

Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., Du, X., Zhou, K., Zhang, W., Hendrycks, D., Li, Y., and Liu, Z. OpenOOD: Benchmarking generalized out-of-distribution detection. 2022.
A.2. Composition function
While many composition strategies could be considered, we choose the one offering the best trade-off between detection efficiency and flexibility: combining many OOD priors is appealing, but hand-tuning many balancing hyper-parameters quickly becomes cumbersome. Our energy composition strategy $E^\beta_{\mathrm{HEAT}}$ has the advantage of a single hyper-parameter β to tune, with a clear interpretation of its different regimes. Indeed, depending on β, this composition operator generalizes several standard aggregation operators. When β → +∞, we recover the maximum operator, while when β → −∞, we recover the minimum operator. In the β → 0 case, we recover the sum, and the resulting distribution is equivalent to a product of experts. Finally, taking β = −1 amounts to using the logsumexp operator, which approximates a mixture of experts. In addition, to prevent the energy of one prior scorer from dominating the others, we normalize the energies using the train statistics (subtracting their mean and dividing by their standard deviation). This simple standardization gives good results in our setting, although more advanced normalization schemes could certainly be explored if needed with other prior scorers.
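A sketch of this standardization (our code; per-scorer statistics are assumed to be pre-computed on ID training features):

```python
import torch

def standardize(energies: torch.Tensor, train_energies: torch.Tensor) -> torch.Tensor:
    """Normalize one scorer's energies with its ID-train mean and std."""
    mu, sigma = train_energies.mean(), train_energies.std()
    return (energies - mu) / (sigma + 1e-8)
```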
B. Experiments
Datasets. We conduct experiments using the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) as in-distribution datasets. For OOD datasets, we define three categories corresponding to different levels of proximity with the ID datasets: near-OOD, mid-OOD and far-OOD. For CIFAR-10 (resp. CIFAR-100), we consider TinyImagenet³ and CIFAR-100 (resp. CIFAR-10) as near-OOD datasets. Then, for both CIFAR-10 and CIFAR-100, we use the LSUN (Yu et al., 2015) and Places (Zhou et al., 2017) datasets as mid-OOD datasets, and Textures (Cimpoi et al., 2014) and SVHN (Netzer et al., 2011) as far-OOD datasets. We use different Imagenet (Deng et al., 2009) benchmarks. In Tab. 5 we use the benchmark of (Sehwag et al., 2021; Sun et al., 2022), with iNaturalist (Van Horn et al., 2018), LSUN (Yu et al., 2015), Places (Zhou et al., 2017) and Textures (Cimpoi et al., 2014). In Appendix B.1.1 we use the Imagenet benchmark recently introduced in OpenOOD (Yang et al., 2022)⁴, and we refer the reader to the paper for details about the datasets. Finally, in Appendix B.1.2 we use the Imagenet benchmark proposed in (Wang et al., 2022), notably with the OpenImage-O dataset introduced in (Wang et al., 2022) specifically as an OOD dataset for the Imagenet benchmark; for more details we refer the reader to the paper.

Implementation details. All experiments were conducted using PyTorch (Paszke et al., 2019). We use a ResNet-34 classifier from the timm library (Wightman, 2019) for the CIFAR-10 and CIFAR-100 datasets and a ResNet-50 for the Imagenet experiments. HEAT consists of a 6-layer MLP trained for 20 epochs with Adam with learning rate 5e-6. The network input dimension is 512 (the dimension of the penultimate layer of ResNet-34) for the CIFAR-10/100 benchmarks and 2048 (the dimension of the penultimate layer of ResNet-50) for the Imagenet benchmark. The hidden dimension is 1024 for CIFAR-10/100 and 2048 for Imagenet, and the output dimension is 1. For SGLD sampling, we use 20 steps with an initial step size of 1e-4 linearly decayed to 1e-5 and an initial noise scale of 5e-3 linearly decayed to 5e-4. We add a small Gaussian noise with std 1e-4 to each input of the EBM network to stabilize training, as done in previous works (Du & Mordatch, 2019; Grathwohl et al., 2020). The L2 coefficient is set to 10. We use temperature scaling on the mixture of Gaussian distributions energy with temperature T_G = 1e3. The hyper-parameters for the CIFAR-10 and CIFAR-100 models are identical.
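A sketch of the SGLD schedule stated above (20 steps with linearly decayed step size and noise scale, plus the small input perturbation); the decay values follow the text, while the helper names are ours:

```python
import torch

def sgld_schedule(n_steps=20, step=(1e-4, 1e-5), noise=(5e-3, 5e-4)):
    """Per-step (step_size, noise_scale) pairs, linearly decayed over n_steps."""
    ts = torch.linspace(0.0, 1.0, n_steps)
    return [(float(step[0] + t * (step[1] - step[0])),
             float(noise[0] + t * (noise[1] - noise[0]))) for t in ts]

def perturb_inputs(z, std=1e-4):
    """Small Gaussian noise added to the EBM inputs during training for stability."""
    return z + std * torch.randn_like(z)
```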
B.1.2. VIT RESULTS
In Tab. 9 we compare HEAT using a Vision Transformer⁵ (ViT) on the Imagenet benchmark introduced in (Wang et al., 2022). We show that on the aggregated results HEAT outperforms the previous best method, ViM (Wang et al., 2022), by -1.7 pts FPR95. Importantly, HEAT outperforms other methods on three datasets of the benchmark, i.e. OpenImage-O, Textures and Imagenet-O, and is competitive on iNaturalist. Tab. 9 demonstrates the ability of HEAT to adapt to types of neural networks, i.e. the Vision Transformer (Dosovitskiy et al., 2020), other than the convolutional networks (ResNet-34 & ResNet-50) tested in Sec. 4.2.

Table 9. Results on Imagenet. All methods are based on an Imagenet pre-trained Vision Transformer (ViT) model. ↑ indicates larger is better and ↓ the opposite. Columns: Method | OpenImage-O | Textures | iNaturalist | Imagenet-O | Average.
B.2. Model analysis
In Fig. 6 we show the impact of λ in Eq. (6) and of β vs. FPR95 on CIFAR-100; in Fig. 7 we study how HEAT behaves in low data regimes with CIFAR-10 as the ID dataset. Finally, in Tab. 12 we study the computational requirements of HEAT.
Robustness to λ. In Fig. 6a we can see trends similar to Fig. 3a. For values of λ that are too high, i.e. when the expressivity of the energy-based correction is limited, HEAT-GMM has the same performances as GMM. For values of λ that are too low, the energy-based correction is not controlled and disregards the prior scorer, i.e. GMM. Finally, for a wide range of λ values, HEAT-GMM improves the OOD detection performances of GMM.
Robustness to β. In Fig. 6b we show that HEAT is stable w.r.t. β on CIFAR-100, similarly to Fig. 3b.

Figure 7. Impact on performances (AUC↑ on CIFAR-10) vs. the number of training data for GMM density, fully data-driven EBM, and HEAT. Our hybrid approach maintains strong performances in low-data regime, in contrast to the fully data-driven EBM.
Low data regime. Similarly to Fig. 4, we can see in Fig. 7 that training an EBM alone is very unstable when little data is available. On the other hand, HEAT-GMM is robust to the lack of data and improves GMM even with few ID samples available.
Computational cost. In Tab. 12 we report the cost of computing the different components of HEAT, e.g. the forward pass of a ResNet-50 or the energy computation of GMM. We extrapolate from these inference times the computational cost of deep-ensembles (Lakshminarayanan et al., 2017) and of HEAT. The compute time of HEAT, 519.4 ms, is largely dominated by the inference time of the ResNet-50. This is why deep-ensembles reach a compute time of about 2500 ms, which is 4.8 times larger than that of HEAT. Furthermore, correcting GMM with HEAT only brings an overhead of 1 ms, which does not scale with the size of the model but only with its embedding size; e.g. CLIP (Radford et al., 2021) has an embedding size of 1024 for its largest model.
B.3. Qualitative results
We show qualitative results of HEAT vs. EL (Liu et al., 2020) and SSD (Sehwag et al., 2021) on LSUN (Fig. 8) and Textures (Fig. 9). In Fig. 8 we can see that EL and SSD detect different OOD samples; through its correction and composition, HEAT is able to recover these mis-detected OOD samples. In Fig. 9 we can see that SSD performs well on the far-OOD dataset (Textures), yet HEAT is still able to recover a mis-detected OOD sample. Fig. 8 and Fig. 9 qualitatively show how HEAT recovers OOD samples that the prior scorers mis-detect.
Figure 8. Qualitative comparison of HEAT vs. EL (Liu et al., 2020) and SSD (Sehwag et al., 2021) on LSUN. Samples in green are correctly detected as OOD (above the 95%-of-ID threshold), samples in red are incorrectly predicted as ID, i.e. have an energy lower than the threshold.
Figure 9. Qualitative comparison of HEAT vs. EL (Liu et al., 2020) and SSD (Sehwag et al., 2021) on Textures. Samples in green are correctly detected as OOD (above the 95%-of-ID threshold), samples in red are incorrectly predicted as ID, i.e. have an energy lower than the threshold.
Figure 3. On CIFAR-10 ID: (a) impact of λ in Eq. (6) vs. FPR95 and (b) analysis of β in Eq. (7) vs. FPR95.

Figure 6. On CIFAR-100 ID: (a) impact of λ in Eq. (6) vs. FPR95 and (b) analysis of β in Eq. (7) vs. FPR95.
Table 1. Refinement of Energy-logits (Liu et al., 2020) (EL) and of GMM and GMM with std-pooling (GMM_std) with our energy-based correction, on CIFAR-10 and CIFAR-100 as in-distribution datasets. Results are reported as FPR95↓ / AUC↑.

Method        | C-100/10 (near) | TinyIN (near) | LSUN (mid)  | Places (mid) | Textures (far) | SVHN (far) | Average
CIFAR-10
EL            | 48.4 / 86.9 | 41.9 / 88.2 | 33.7 / 92.6 | 35.7 / 91.0 | 30.7 / 92.9 | 4.9 / 99.0 | 32.6 / 91.8
HEAT-EL       | 47.3 / 88.0 | 40.7 / 88.9 | 30.8 / 93.4 | 33.8 / 91.8 | 28.8 / 93.9 | 4.5 / 99.1 | 31.0 / 92.5
GMM           | 52.6 / 89.0 | 50.9 / 89.5 | 47.1 / 92.4 | 46.4 / 91.2 | 13.1 / 97.8 | 0.9 / 99.8 | 35.1 / 93.3
HEAT-GMM      | 49.0 / 89.8 | 44.8 / 90.4 | 40.5 / 93.2 | 40.4 / 92.0 | 13.4 / 97.7 | 0.8 / 99.8 | 31.5 / 93.8
GMM_std       | 58.4 / 84.9 | 50.6 / 87.9 | 32.2 / 94.5 | 38.5 / 91.8 | 13.8 / 97.6 | 2.5 / 99.5 | 32.7 / 92.7
HEAT-GMM_std  | 56.1 / 86.1 | 47.8 / 88.7 | 28.2 / 95.2 | 35.8 / 92.5 | 13.3 / 97.5 | 2.7 / 99.4 | 30.7 / 93.2
CIFAR-100
EL            | 80.6 / 76.9 | 79.4 / 76.5 | 87.6 / 71.7 | 83.1 / 74.7 | 62.4 / 85.2 | 53.0 / 88.9 | 74.3 / 79.0
HEAT-EL       | 80.1 / 77.2 | 77.6 / 77.5 | 87.2 / 72.2 | 81.8 / 75.0 | 61.5 / 85.8 | 47.5 / 90.2 | 72.6 / 79.6
GMM           | 85.6 / 73.6 | 82.5 / 77.2 | 87.8 / 73.7 | 84.5 / 74.4 | 36.7 / 92.4 | 20.0 / 96.3 | 66.2 / 81.3
HEAT-GMM      | 84.2 / 74.8 | 80.5 / 78.5 | 86.4 / 74.8 | 82.7 / 75.9 | 37.9 / 92.2 | 17.8 / 96.7 | 64.9 / 82.1
GMM_std       | 91.4 / 67.9 | 84.3 / 74.8 | 83.4 / 75.2 | 83.5 / 75.2 | 40.6 / 91.3 | 36.7 / 93.1 | 70.0 / 79.6
HEAT-GMM_std  | 89.1 / 70.3 | 82.2 / 76.2 | 82.3 / 76.1 | 81.4 / 76.7 | 42.9 / 90.7 | 32.9 / 93.8 | 68.5 / 80.6

4.1. HEAT improvements

In this section we study the different components of HEAT. In Tab. 1 we show that learning a residual correction term with HEAT improves the OOD detection performances of prior scorers. In Tab. 2 we show the interest of learning a residual model as described in Sec. 3.1 rather than a standard fully data-driven energy-based model. Finally, in Tab. 3 we show how using the energy composition improves OOD detection.
Table 2. Comparison of learning a residual model, i.e. HEAT-GMM, vs. learning an EBM, and GMM. Results reported with AUC↑.

Method    | C-100/10 | TinyIN | LSUN | Places | Textures | SVHN | Average
C-10
GMM       | 89.0 | 89.5 | 92.4 | 91.2 | 97.7 | 99.8 | 93.3
EBM       | 89.4 | 89.9 | 93.8 | 91.8 | 96.2 | 99.0 | 93.3
HEAT-GMM  | 89.8 | 90.4 | 93.2 | 92.0 | 97.7 | 99.8 | 93.8
C-100
GMM       | 73.6 | 77.0 | 73.8 | 74.5 | 92.4 | 96.4 | 81.3
EBM       | 74.8 | 79.7 | 71.9 | 75.4 | 84.5 | 91.0 | 79.5
HEAT-GMM  | 74.8 | 78.5 | 74.8 | 75.9 | 92.2 | 96.7 | 82.1
Table 3. Aggregated performances on CIFAR-10 and CIFAR-100 for the energy composition of the refined OOD scorers of Tab. 1.

HEAT-GMM | HEAT-GMM_std | HEAT-EL | CIFAR-10 (FPR95↓ / AUC↑) | CIFAR-100 (FPR95↓ / AUC↑)
✓ | ✗ | ✗ | 31.5 / 93.8 | 64.9 / 82.1
✗ | ✓ | ✗ | 30.7 / 93.2 | 68.5 / 80.6
✗ | ✗ | ✓ | 31.0 / 92.5 | 72.6 / 79.6
✓ | ✓ | ✗ | 25.6 / 94.6 | 64.3 / 82.7
✓ | ✗ | ✓ | 28.0 / 94.1 | 65.5 / 82.4
✗ | ✓ | ✓ | 23.6 / 94.6 | 66.6 / 82.1
✓ | ✓ | ✓ | 23.5 / 94.8 | 63.9 / 83.0
Table 4. Results on CIFAR-10 & CIFAR-100. All methods are based on a pre-trained ResNet-34 trained on the ID dataset only. ↑ indicates larger is better and ↓ the opposite. Results are reported with FPR95↓ / AUC↑.

Method                             | C-10/C-100 (near) | TinyIN (near) | LSUN (mid)  | Places (mid) | Textures (far) | SVHN (far)  | Average
CIFAR-10
MSP (Hendrycks & Gimpel, 2017)     | 58.0 / 87.9 | 55.9 / 88.2 | 50.5 / 91.9 | 52.7 / 90.2 | 52.3 / 91.7 | 19.7 / 97.0 | 48.2 / 91.2
ODIN (Liang et al., 2018b)         | 48.4 / 86.0 | 42.2 / 87.3 | 32.6 / 92.3 | 35.6 / 90.4 | 29.4 / 92.6 | 7.8 / 98.3  | 32.6 / 91.1
KNN (Sun et al., 2022)             | 47.9 / 90.3 | 43.1 / 90.6 | 36.1 / 94.1 | 37.9 / 92.7 | 24.9 / 96.0 | 8.1 / 98.6  | 33.0 / 93.7
ViM (Wang et al., 2022)            | 44.8 / 89.2 | 40.1 / 89.8 | 32.0 / 93.8 | 34.3 / 92.2 | 17.9 / 96.4 | 3.6 / 99.2  | 28.8 / 93.4
SSD+ (Sehwag et al., 2021)         | 52.6 / 89.0 | 50.9 / 89.5 | 47.1 / 92.4 | 46.4 / 91.2 | 13.1 / 97.8 | 0.9 / 99.8  | 35.1 / 93.3
EL (Liu et al., 2020)              | 48.4 / 86.9 | 41.9 / 88.2 | 33.7 / 92.6 | 35.7 / 91.0 | 30.7 / 92.9 | 4.9 / 99.0  | 32.6 / 91.8
DICE (Sun & Li, 2022)              | 51.0 / 85.7 | 44.3 / 87.0 | 33.3 / 92.3 | 35.6 / 90.5 | 29.3 / 92.8 | 3.6 / 99.2  | 32.8 / 91.3
HEAT (ours)                        | 43.1 / 90.2 | 35.7 / 91.3 | 22.2 / 95.8 | 27.4 / 93.9 | 11.3 / 97.9 | 1.1 / 99.8  | 23.5 / 94.8
CIFAR-100
MSP (Hendrycks & Gimpel, 2017)     | 80.0 / 76.6 | 78.3 / 77.6 | 83.5 / 74.7 | 81.0 / 76.4 | 72.1 / 81.0 | 62.0 / 86.4 | 76.1 / 78.8
ODIN (Liang et al., 2018b)         | 81.4 / 76.4 | 78.7 / 76.2 | 86.1 / 72.0 | 82.6 / 74.5 | 62.4 / 85.2 | 80.7 / 80.4 | 78.6 / 77.5
KNN (Sun et al., 2022)             | 82.1 / 74.5 | 76.7 / 80.2 | 90.1 / 74.4 | 83.2 / 75.5 | 47.2 / 90.2 | 35.6 / 93.6 | 69.2 / 81.4
ViM (Wang et al., 2022)            | 85.8 / 74.3 | 77.5 / 79.6 | 86.2 / 75.3 | 79.8 / 77.6 | 42.3 / 91.9 | 41.3 / 93.2 | 68.8 / 82.0
SSD+ (Sehwag et al., 2021)         | 85.6 / 73.6 | 82.5 / 77.2 | 87.8 / 73.7 | 84.5 / 74.4 | 36.7 / 92.4 | 20.0 / 96.3 | 66.2 / 81.3
EL (Liu et al., 2020)              | 80.6 / 76.9 | 79.4 / 76.5 | 87.6 / 71.7 | 83.1 / 74.7 | 62.4 / 85.2 | 53.0 / 88.9 | 74.3 / 79.0
DICE (Sun & Li, 2022)              | 81.2 / 75.8 | 82.4 / 74.2 | 87.8 / 70.4 | 84.5 / 73.1 | 63.0 / 83.8 | 51.9 / 88.1 | 75.2 / 77.6
HEAT (ours)                        | 83.7 / 75.8 | 77.7 / 79.5 | 83.4 / 76.3 | 80.0 / 77.8 | 37.1 / 92.7 | 21.7 / 96.0 | 63.9 / 83.0
CIFAR-10 results. In Tab. 4 we compare HEAT vs. state-of-the-art methods when using CIFAR-10 as the ID dataset. First, we show that HEAT sets a new state-of-the-art on the aggregated results. It outperforms the prior scorers it corrects, i.e. SSD+ by -11.6 pts FPR95 and Energy-logits by -9.1 pts FPR95. It also outperforms the previous state-of-the-art methods ViM by -5.3 pts FPR95 and KNN by +1.1 pts AUC. Interestingly, we can see that HEAT outperforms other methods because it improves OOD detection on near-, mid-, and far-OOD. On near-OOD, it outperforms KNN by -4.6 pts FPR95 on C-100 and Energy-logits by -6.1 pts FPR95 on TinyIN. On mid-OOD detection, it outperforms ViM by -9.8 pts FPR95 on LSUN and Energy-logits by -8.5 pts FPR95. Finally, on far-OOD, the performances are similar to SSD+, which is by far the best performing method on this regime.

CIFAR-100 results. In Tab. 4 we compare HEAT vs. state-of-the-art methods when using CIFAR-100 as the ID dataset. HEAT outperforms state-of-the-art methods on aggregated results, with -2.3 pts FPR95 and +1.7 pts AUC vs. SSD+. HEAT takes advantage of SSD+ on far-OOD and outperforms other methods (except SSD+) by large margins, -13.9 pts FPR95 and +2.4 pts AUC on SVHN vs. the best non-parametric data-driven density estimation, i.e. KNN. Also, HEAT significantly outperforms SSD+ for near-OOD and mid-OOD, e.g. -4.8 pts FPR95 on TinyIN or -4.5 pts FPR95 on Places.
Imagenet results. In Tab. 5 we compare HEAT on the recently introduced (Sun et al., 2022) Imagenet OOD benchmark. HEAT sets a new state-of-the-art on this Imagenet benchmark for the aggregated results, with 34.4 FPR95 and 92.6 AUC, which outperforms the previous best performing method, DICE, by -1.5 pts FPR95 and +1.7 pts AUC. Furthermore, HEAT improves the aggregated results because it is a competitive method on each dataset. On far-OOD, i.e. Textures, it performs on par with SSD+, i.e. 5.7 FPR95, the best performing method on this dataset. On mid-OOD, it is the second best method on SUN and on Places behind DICE. Finally, on near-OOD it performs on par with DICE. This shows that HEAT can be jointly effective on far-, mid-, and near-OOD detection, whereas state-of-the-art methods are competitive for a specific type of OOD only; for instance, the performance of DICE drops significantly on Textures. Furthermore, we show in Appendix B.1.3 that using an energy-refined version of DICE instead of EL in HEAT's composition further improves OOD detection results. This also shows that HEAT performs well on larger-scale and more complex datasets such as Imagenet. In Appendix B.1.1 we show the results of HEAT on the more recent Imagenet OpenOOD benchmark (Yang et al., 2022), where HEAT also outperforms state-of-the-art methods. In Appendix B.1.4 we show that HEAT also outperforms other methods when using a supervised contrastive backbone. Finally, in Appendix B.1.2 we show that HEAT is also state-of-the-art when using another type of neural network, i.e. the Vision Transformer (Dosovitskiy et al., 2020).
Table 5. Results on Imagenet. All methods use an Imagenet pre-trained ResNet-50. Results are reported with FPR95↓ / AUC↑.

Method                             | iNaturalist | SUN         | Places      | Textures    | Average
MSP (Hendrycks & Gimpel, 2017)     | 52.8 / 88.4 | 69.1 / 81.6 | 72.1 / 80.5 | 66.2 / 80.4 | 65.1 / 82.7
ODIN (Liang et al., 2018b)         | 41.1 / 92.3 | 56.4 / 86.8 | 64.2 / 84.0 | 46.5 / 87.9 | 52.1 / 87.8
ViM (Wang et al., 2022)            | 47.4 / 92.3 | 62.3 / 86.4 | 68.6 / 83.3 | 15.2 / 96.3 | 48.4 / 89.6
KNN (Sun et al., 2022)             | 60.0 / 86.2 | 70.3 / 80.5 | 78.6 / 74.8 | 11.1 / 97.4 | 55.0 / 84.7
SSD+ (Sehwag et al., 2021)         | 50.0 / 90.7 | 66.5 / 83.9 | 76.5 / 78.7 | 5.8 / 98.8  | 49.7 / 88.0
EL (Liu et al., 2020)              | 53.7 / 90.6 | 58.8 / 86.6 | 66.0 / 84.0 | 52.4 / 86.7 | 57.7 / 87.0
DICE (Sun & Li, 2022)              | 26.6 / 94.5 | 36.5 / 90.8 | 47.9 / 87.5 | 32.6 / 90.4 | 35.9 / 90.9
HEAT (ours)                        | 28.1 / 94.9 | 44.6 / 90.7 | 58.8 / 86.3 | 5.9 / 98.7  | 34.4 / 92.6
Table 12. Computational cost reported in ms↓. Times are reported using RGB images of size 224×224, a ResNet-50 with an output size of 2048, 1000 classes (i.e. the Imagenet setup), on a single GPU (Quadro RTX 6000 with 24576 MiB).

ResNet-50 | GMM | GMM-std | EL  | EBM | deep-ensemble | HEAT
500       | 8   | 8       | 0.4 | 1   | 2501          | 519.4
1. We denote as far (resp. near) OOD samples with classes that are semantically distant from (resp. close to) the ID classes.
2. We ignore the dependence on D in the following and denote the sought density as p(z).
3. The dataset can be found at: https://www.kaggle.com/c/tiny-imagenet
4. Datasets for the OpenOOD benchmark can be downloaded using: https://github.com/Jingkang50/OpenOOD.
5. The model used can be found at https://github.com/haoqiwang/vim
Acknowledgements

This work was done under grants from the DIAMELEX ANR program (ANR-20-CE45-0026) and the AHEAD ANR program (ANR-20-THIA-0002). It was granted access to the HPC resources of IDRIS under the allocation 2021-AD011012645R1 made by GENCI.

A.1. EBM training

Therefore, training EBMs via maximum likelihood estimation (MLE) amounts to performing stochastic gradient descent with the following loss:

$$\mathcal{L}_{\mathrm{MLE}}(\theta) = \mathbb{E}_{z \sim p_{\mathrm{data}}}\left[E_\theta(z)\right] - \mathbb{E}_{z' \sim p_\theta}\left[E_\theta(z')\right]. \qquad (11)$$

Intuitively, this loss amounts to diminishing the energy for samples from the true data distribution p(x) and to increasing the energy for synthesized examples sampled from the current model. Eventually, the gradients of the energy function will be equivalent for samples from the model and the true data distribution, and the loss term will be zero. The expectation $\mathbb{E}_{z' \sim p_\theta}\left[E_\theta(z')\right]$ can be approximated through MCMC sampling, but we need to sample z′ from the model $p_\theta$, which is an unknown moving density. To estimate the expectation under $p_\theta$ in the right-hand side of Eq. (11), we must sample according to the energy-based model $p_\theta$. To generate synthesized examples from $p_\theta$, we can use gradient-based MCMC sampling such as Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) or Hamiltonian Monte Carlo (HMC) (Neal, 2010). In this work, we use SGLD sampling following (Du & Mordatch, 2019; Grathwohl et al., 2020). In SGLD, initial features are sampled from a proposal distribution $p_0$ and are updated for T steps with the iterative rule of Eq. (4), where η is the step size. Therefore, sampling from $p_\theta$ does not require computing the normalization constant $Z_\theta$ either. Many variants of this training procedure have been proposed, including Contrastive Divergence (CD) (Hinton, 2002), where $p_0 = p_{\mathrm{data}}$, or Persistent Contrastive Divergence (PCD) (Tieleman, 2008), which uses a buffer to extend the length of the MCMC chains. We refer the reader to (Song & Kingma, 2021) for more details on EBM training with MLE as well as alternative training strategies (score matching, noise contrastive estimation, Stein discrepancy minimization, etc.).

B.1.1. Imagenet OpenOOD results

Additional metric. In Appendix B.1.1 we use an additional metric, the AUPR, which measures the area under the Precision-Recall (PR) curve, using the ID samples as positives (see (Yang et al., 2022) for details). The score corresponds to the AUPR-In metric in other works. We additionally compare to (Bendale & Boult, 2016), the Gram matrix detector (Sastry & Oore, 2020) (Gram), KL matching (Hendrycks et al., 2022) (KLM) and GradNorm (Huang et al., 2021). We show on the aggregated results in Tab. 6 that HEAT outperforms previous methods and sets new state-of-the-art performances for our considered setting. Similarly to our comparison in Sec. 4.2, we can see that HEAT performs well on far-OOD (Tab. 7) and near-OOD (Tab. 8). On far-OOD, HEAT has the best performances on each metric, i.e. 2.4 FPR95, 99.4 AUC and 100 AUPR. On near-OOD, HEAT has the best performances on AUC, i.e. +2.6 pts AUC vs. KNN, -0.8 pts FPR95 and +0.9 pts AUPR vs. ReAct.

Table 6. Aggregated results on the Imagenet OpenOOD benchmark. All methods are based on an Imagenet pre-trained ResNet-50. Columns: Method | Near-OOD | Far-OOD | Average.

B.1.3. Using DICE in HEAT's composition

In this section we compare the performance of HEAT when using DICE instead of EL as one of its components. We show in Tab. 10 that using DICE instead of EL as part of HEAT's components further improves the OOD detection performances on all OOD datasets except Textures.

B.1.4. Supervised contrastive backbone

In this section we evaluate HEAT when using a supervised contrastive backbone and compare with KNN+ and SSD+. We show in Tab. 11 that HEAT still largely outperforms the competition even with the supervised contrastive backbone.
References

Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., Ring, R., Rutherford, E., Cabi, S., Han, T., Gong, Z., Samangooei, S., Monteiro, M., Menick, J., Borgeaud, S., Brock, A., Nematzadeh, A., Sharifzadeh, S., Binkowski, M., Barreira, R., Vinyals, O., Zisserman, A., and Simonyan, K. Flamingo: a Visual Language Model for Few-Shot Learning. In Advances in Neural Information Processing Systems, 2022. URL http://arxiv.org/abs/2204.14198.
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete Problems in AI Safety. 2016. URL http://arxiv.org/abs/1606.06565.
Bakhtin, A., Deng, Y., Gross, S., Ott, M., Ranzato, M. A., and Szlam, A. Residual Energy-based Models for Text. Journal of Machine Learning Research, 22:1-18, 2021.
Bendale, A. and Boult, T. E. Towards open world recognition. In CVPR, pp. 1893-1902. IEEE Computer Society, 2015. doi: 10.1109/CVPR.2015.7298799.
Bendale, A. and Boult, T. E. Towards open set deep networks. In CVPR, pp. 1563-1572. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.173.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
Charpentier, B., Zügner, D., and Günnemann, S. Posterior network: Uncertainty estimation without OOD samples via density-based pseudo-counts. In Advances in Neural Information Processing Systems, 2020.
Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing textures in the wild. In CVPR, 2014.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Du, Y. and Mordatch, I. Implicit generation and modeling with energy-based models. In Advances in Neural Information Processing Systems, volume 32, 2019. URL https://sites.google.com/view/igebm.
Du, Y., Li, S., and Mordatch, I. Compositional Visual Generation with Energy Based Models. In Advances in Neural Information Processing Systems, 2020.
Du, Y., Li, S., Sharma, Y., Tenenbaum, J. B., and Mordatch, I. Unsupervised Learning of Compositional Energy Concepts. In Advances in Neural Information Processing Systems, 2021. URL http://arxiv.org/abs/2111.03042.
Elflein, S., Charpentier, B., Zügner, D., and Günnemann, S. On Out-of-distribution Detection with Energy-based Models. In ICML Workshop, 2021.
Grathwohl, W., Wang, K.-C., Jacobsen, J.-H., Duvenaud, D., Norouzi, M., and Swersky, K. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. In International Conference on Learning Representations, 2020. URL http://arxiv.org/abs/1912.03263.
Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, pp. 297-304, 2010.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, 2016.
Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017.
Hendrycks, D., Mazeika, M., and Dietterich, T. Deep Anomaly Detection with Outlier Exposure. In International Conference on Learning Representations, 2019.
Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., Steinhardt, J., and Song, D. Scaling out-of-distribution detection for real-world settings. In ICML, pp. 8759-8773, 2022.
Hinton, G. E. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 14:1771-1800, 2002.
Huang, R., Geng, A., and Li, Y. On the importance of gradients for detecting distributional shifts in the wild. In Advances in Neural Information Processing Systems, pp. 677-689, 2021.
Janai, J., Güney, F., Behl, A., and Geiger, A. Computer vision for autonomous vehicles: Problems, datasets and state of the art. Foundations and Trends in Computer Graphics and Vision, 12(1-3):1-308, 2020. doi: 10.1561/0600000079.
Krizhevsky, A. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, 2017.
Le Guen, V. and Thome, N. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In CVPR, 2020.
Le Guen, V., Rambour, C., and Thome, N. Complementing brightness constancy with deep networks for optical flow prediction. In European Conference on Computer Vision (ECCV), 2022.
LeCun, Y., Chopra, S., Hadsell, R., and Huang, F. J. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press, 2006.
Lee, K., Lee, H., Lee, K., and Shin, J. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations, 2018.
Lee, K., Lee, K., Lee, H., and Shin, J. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, 2018.
Liang, S., Li, Y., and Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018.
Liu, W., Wang, X., Owens, J. D., and Li, Y. Energy-based Out-of-distribution Detection. In Advances in Neural Information Processing Systems, 2020. URL https://github.com/wetliu/energy_ood.
Malinin, A. and Gales, M. Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems, 2018.
Neal, R. M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 54:113-162, 2010.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Nijkamp, E., Gao, R., Sountsov, P., Vasudevan, S., Pang, B., Zhu, S., and Wu, Y. N. MCMC should mix: Learning energy-based model with neural transport latent space MCMC. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=4C93Qvn-tz.
Pang, B., Han, T., Nijkamp, E., Zhu, S. C., and Wu, Y. N. Learning latent space energy-based prior model. In Advances in Neural Information Processing Systems, 2020.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024-8035, 2019.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning Transferable Visual Models From Natural Language Supervision. 2021. URL http://arxiv.org/abs/2103.00020.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In CVPR, pp. 10684-10695, 2022.
Sastry, C. S. and Oore, S. Detecting out-of-distribution examples with gram matrices. In ICML, pp. 8449-8459, 2020.
Sehwag, V., Chiang, M., and Mittal, P. SSD: A unified framework for self-supervised outlier detection. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=v5gjXpmR8J.
Song, Y. and Kingma, D. P. How to train your energy-based models. CoRR, abs/2101.03288, 2021. URL https://arxiv.org/abs/2101.03288.
Sun, Y. and Li, Y. DICE: Leveraging sparsification for out-of-distribution detection. In European Conference on Computer Vision, 2022.
Sun, Y., Ming, Y., Zhu, X., and Li, Y. Out-of-distribution detection with deep nearest neighbors. In ICML, pp. 20827-20840, 2022. URL https://proceedings.mlr.press/v162/sun22d.html.
Tieleman, T. Training restricted boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064-1071, 2008. doi: 10.1145/1390156.1390290.
Van Horn, G., Mac Aodha, O., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., and Belongie, S. The iNaturalist species classification and detection dataset. In CVPR, pp. 8769-8778, 2018.
Wang, H., Li, Z., Feng, L., and Zhang, W. ViM: Out-of-distribution with virtual-logit matching. In CVPR, pp. 4911-4920, 2022. doi: 10.1109/CVPR52688.2022.00487.
Welling, M. and Teh, Y. W. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pp. 681-688, 2011.
Wightman, R. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
Xiao, Z., Kreis, K., Kautz, J., and Vahdat, A. VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models. In International Conference on Learning Representations, 2021. URL http://arxiv.org/abs/2010.00654.
Xie, J., Lu, Y., Zhu, S., and Wu, Y. N. A theory of generative convnet. In ICML, pp. 2635-2644, 2016.
Xie, J., Lu, Y., Gao, R., and Wu, Y. N. Cooperative learning of energy-based model and latent variable model via MCMC teaching. In AAAI, pp. 4292-4301, 2018.
Xie, J., Zheng, Z., and Li, P. Learning energy-based model with variational auto-encoder as amortized sampler. In AAAI, pp. 10441-10451, 2021.
Xie, J., Zheng, Z., Fang, X., Zhu, S., and Wu, Y. N. Cooperative training of fast thinking initializer and slow thinking solver for conditional learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8):3957-3973, 2022. doi: 10.1109/TPAMI.2021.3069023.
| [
"https://github.com/Jingkang50/OpenOOD.",
"https://github.com/haoqiwang/vim",
"https://github.com/selflein/.",
"https://github.com/hendrycks/",
"https://github.com/wetliu/energy_ood.",
"https://github.com/."
] |
[
"Exploring Weight Balancing on Long-Tailed Recognition Problem",
"Exploring Weight Balancing on Long-Tailed Recognition Problem"
] | [
"Naoya Hasegawa [email protected] \nThe University of Tokyo\nThe University of Tokyo\n\n",
"Issei Sato [email protected] \nThe University of Tokyo\nThe University of Tokyo\n\n"
] | [
"The University of Tokyo\nThe University of Tokyo\n",
"The University of Tokyo\nThe University of Tokyo\n"
] | [] | Recognition problems in long-tailed data, where the sample size per class is heavily skewed, have recently gained importance because the distribution of the sample size per class in a dataset is generally exponential unless the sample size is intentionally adjusted. Various approaches have been devised to address these problems. Recently, weight balancing, which combines well-known classical regularization techniques with two-stage training, has been proposed. Despite its simplicity, it is known for its high performance against existing methods devised in various ways. However, there is a lack of understanding as to why this approach is effective for long-tailed data. In this study, we analyze the method focusing on neural collapse and cone effect at each training stage and find that it can be decomposed into the increase in Fisher's discriminant ratio of the feature extractor caused by weight decay and cross entropy loss and implicit logit adjustment caused by weight decay and class-balanced loss. Our analysis shows that the training method can be further simplified by reducing the number of training stages to one while increasing accuracy. | 10.48550/arxiv.2305.16573 | [
"https://export.arxiv.org/pdf/2305.16573v2.pdf"
] | 258,947,205 | 2305.16573 | 30dcfe72eacb4663ed02335bc4a14f2dd5c8173b |
Exploring Weight Balancing on Long-Tailed Recognition Problem
May 30, 2023
Naoya Hasegawa [email protected]
The University of Tokyo
The University of Tokyo
Issei Sato [email protected]
The University of Tokyo
The University of Tokyo
Recognition problems in long-tailed data, where the sample size per class is heavily skewed, have recently gained importance because the distribution of the sample size per class in a dataset is generally exponential unless the sample size is intentionally adjusted. Various approaches have been devised to address these problems. Recently, weight balancing, which combines well-known classical regularization techniques with two-stage training, has been proposed. Despite its simplicity, it is known for its high performance against existing methods devised in various ways. However, there is a lack of understanding as to why this approach is effective for long-tailed data. In this study, we analyze the method focusing on neural collapse and cone effect at each training stage and find that it can be decomposed into the increase in Fisher's discriminant ratio of the feature extractor caused by weight decay and cross entropy loss and implicit logit adjustment caused by weight decay and class-balanced loss. Our analysis shows that the training method can be further simplified by reducing the number of training stages to one while increasing accuracy.
Introduction
Datasets with an equal number of samples per class, such as MNIST [23], CIFAR10, and CIFAR100 [22], are often used when evaluating classification models and training methods in machine learning. However, it is known empirically that size distributions in the real world often follow a kind of exponential distribution called the Pareto distribution [43], and the same is true for the number of per-class samples in classification problems [26,44]. Such data are called long-tailed because of the shape of the distribution: a few classes (head classes) account for most of the samples, while many other classes (tail classes) are sampled only rarely. Long-tailed recognition (LTR) attempts to improve the accuracy of classification models when the training data follow such a distribution. A central problem in LTR is that the head classes have large sample sizes, so the model's output is biased toward them. This reduces the overall accuracy as well as the tail-class accuracy, because tail classes make up the majority of classes [57].
Various methods have been developed for LTR, such as class-balanced loss (CB) [4], augmenting the samples of tail classes [53], two-stage learning [18], and enhancing feature extractors [30,55]. Alshammari et al. [1] proposed a simple method, called weight balancing (WB), that empirically outperforms previous, more complex state-of-the-art methods. WB simply combines two classic techniques, weight decay (WD) [12] and MaxNorm (MN) [15], with two-stage learning. WD and MN are known to prevent overfitting [14]; however, it has not been known why WB significantly improves performance in LTR.
Contribution In this work, we analyze the effectiveness of WB in LTR, focusing on neural collapse (NC) [40] and the cone effect (see Appendix 7.1) [28]. We first decompose WB into five components: WD, MN, cross entropy (CE), CB, and two-stage learning. We then show that each of the components has the following useful properties.
• 1st stage: WD and CE increase the features' Fisher's discriminant ratio (FDR) [7].
-Degrade the inter-class cosine similarities (Theorem 1 in Sec. 4.2).
-Decrease the scaling parameters of BN (Sec. 4.3). This has a positive effect on feature training, but may temporarily reduce FDR.
-Facilitate improvement of FDR as they pass through layers (Sec. 4.4).
• 1st stage: WD raises the features' norms to higher values for tail classes (Sec. 4.5).
• 2nd stage: WD and CB perform implicit logit adjustment, making the classifier's norm higher for tail classes; MN facilitates this effect. This stage does not work well for datasets with a small number of classes (Theorem 2 in Sec. 5.1).
The above analysis reveals the following useful points: (1) our analysis provides a guideline for designing training methods in LTR; (2) the method can be further simplified by removing the second training stage. Specifically, it suffices to train the feature extractor with WD, feature regularization (FR), and a fixed ETF classifier as the linear layer, which yields features that are more linearly separable, and then to adjust the norm of the classifier's weights by LA after training.
Related Work
Long-tailed recognition
There are three main approaches to LTR: "Class Re-balancing, Information Augmentation, and Module Improvement" [57]. "Class Re-balancing" adjusts the imbalance in the number of samples per class at various stages to prevent accuracy deterioration; it includes logit adjustment (LA) [21,39] and balancing the loss function, as in CB. "Information Augmentation" prevents the accuracy from degrading by supplementing the information of the tail classes, which lack samples [3,29,53]. "Module Improvement" improves accuracy by increasing the performance of each module of the network individually, e.g., training feature extractors and classifiers separately [18], fixing the linear layer to an ETF classifier to obtain a better feature extractor [55], and regularizing the model so that NC occurs [30]. After Kang et al. [19] found that contrastive learning [10] is effective for imbalanced data, many methods using it have been proposed [19,25,37,48]. Ma et al. [37], Long et al. [33], and Tian et al. [48] also incorporate vision-language models such as CLIP [41] and leverage text features to improve accuracy. However, such methods usually suffer from problems such as slow convergence and complex models [30]. WB is a combination of "Class Re-balancing" and "Module Improvement". Table 1 compares our simplification of WB with typical existing methods, including those that have achieved SOTA. While many methods rely on complex machinery, we aim to improve on a simple structure.

Table 1: Comparison of existing LTR methods with our simplified method. A symbol ✓ indicates that the component is required, - indicates that it is not. A superscript * means the method needs pretrained models. Authors from left to right: Kim and Kim [21], Menon et al. [39], Yang et al. [55], Liu et al. [30], Alshammari et al. [1], Cao et al. [2], Kang et al. [19], Li et al. [25], Long et al. [33], Ma et al. [37], Tian et al. [48], and us. We regard the deferred re-balancing optimization schedule [2] as a form of two-stage training.

| Component | [21] | [39] | [55] | [30] | [1] | [2] | [19] | [25] | [33] | [37] | [48] | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Two-stage training | - | - | ✓ | - | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - |
| Resampling | - | - | - | ✓ | - | - | ✓ | ✓ | - | ✓ | - | - |
| Contrastive learning | - | - | - | - | - | - | ✓ | ✓ | - | ✓ | ✓ | - |
| Training of linear layer | ✓ | ✓ | - | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | -¹ | ✓² | - |
| Extra text encoder | - | - | - | - | - | - | - | - | ✓ | ✓ | ✓ | - |
| Extra image encoder | - | - | - | - | - | - | - | - | ✓ | - | - | - |
| Extra dataset | - | - | - | - | - | - | - | - | - | - | ✓ | - |
Neural collapse and minority collapse
Papyan et al. [40] investigated a set of phenomena that occur in the terminal phase of training (TPT), which they termed NC. NC can be briefly described as follows: feature vectors converge to their class means (NC1); the class means of the feature vectors converge to a simplex equiangular tight frame (ETF) [45] (NC2); the class means of the feature vectors and the corresponding weights of the linear classifier converge in the same direction (NC3); when making predictions, models converge to predicting the class whose mean feature vector is closest to the feature in Euclidean distance (NC4). Papyan et al. [40] claimed that NC leads to increased generalization accuracy and robustness to adversarial samples. A number of studies have investigated the conditions under which NC occurs [11,17,35,40]. Rangamani and Banburski-Fahey [42] theoretically and empirically showed that the occurrence of NC requires WD.
Studies have also examined NC when the model is trained with imbalanced data. Fang et al. [6] theoretically and experimentally proved "minority collapse", in which features corresponding to classes with small numbers of samples tend to converge in the same direction even if they belong to different classes. Yang et al. [55] attempted to solve this problem by fixing the weights of the linear layer to an ETF. Thrampoulidis et al. [47] generalized NC to imbalanced data by demonstrating that the features converge towards SELI, an extension of the ETF, in TPT. They also demonstrated that minority collapse does not occur under appropriate regularization, including WD.
WD
WD is a regularization method often used even for deep neural network models with batch normalization (BN) [16], which makes the network scale-invariant. Zhang et al. [56] revealed that WD increases the effective learning rate, which facilitates regularization in such models. Training of networks containing BN layers with WD is often studied in terms of training dynamics [27,32,52]. Summers and Dinneen [46] and Kim et al. [20] studied whether to apply WD to the scaling and shifting parameters of BN in such networks. WD is also known to have many other positive effects on deep neural networks: smoothing the loss landscape [24,36], causing NC [42], and making filters sparser [38] and of lower rank [8]. WD is also effective for imbalanced data [1,47].
Preliminaries
This section defines the notations used in the paper and presents the strategies for LTR. See Appendix 7.2 for a table of these notations. We describe details on WD, FR, MN, WB, the ETF classifier, and LA in Appendix 7.3. Suppose a multiclass classification with $C$ classes over samples $\mathcal{X} \subset \mathbb{R}^p$ and labels $\mathcal{Y} \equiv \{1, 2, \ldots, C\}$. The dataset $D = \{(x_i, y_i) \mid x_i \in \mathcal{X}, y_i \in \mathcal{Y}\}_{i=1}^{N}$ consists of the subsets $D_k \equiv \{(x_i, y_i) \in D \mid y_i = k\}$. We define $N$ as $|D|$ and $N_k$ as $|D_k|$, the numbers of samples. Without loss of generality, assume the classes are sorted in descending order by the number of samples; in other words, $\forall k \in \{1, 2, \ldots, C-1\}$, $N_k \ge N_{k+1}$ holds. The imbalance ratio $\rho = \frac{N_1}{N_C} = \frac{\max_k N_k}{\min_k N_k}$ indicates the extent to which the training dataset is imbalanced, and $\rho \gg 1$ holds in LTR. Therefore, the number of samples in each class satisfies $N_k = N_1 \rho^{-\frac{k-1}{C-1}}$. Define $\bar{N}$ as $C\left(\sum_{k=1}^{C}\frac{1}{N_k}\right)^{-1}$, the harmonic mean of the number of samples per class.
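As a quick illustration, these per-class counts can be generated directly from the formula above; the concrete values ($N_1 = 480$, $C = 100$, $\rho = 100$, matching the CIFAR100-LT setup described later) are just an example.

```python
def long_tailed_counts(n1: int, num_classes: int, rho: float) -> list:
    """Per-class sample sizes N_k = N_1 * rho^(-(k-1)/(C-1))."""
    return [round(n1 * rho ** (-(k - 1) / (num_classes - 1)))
            for k in range(1, num_classes + 1)]

counts = long_tailed_counts(n1=480, num_classes=100, rho=100)
harmonic_mean = len(counts) / sum(1.0 / n for n in counts)  # the harmonic mean \bar{N}
print(counts[:3], counts[-3:], round(harmonic_mean, 1))     # head vs. tail counts
```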
Consider a network $f(\cdot;\Theta): \mathcal{X} \to \mathbb{R}^C$ parameterized by $\Theta = \{\theta_l\}$. It outputs a logit $z_i = f(x_i;\Theta)$. The network is further divided into a feature extractor $g(\cdot;\Theta_g): \mathcal{X} \to \mathbb{R}^d$ and a classifier $h(\cdot;\Theta_h): \mathbb{R}^d \to \mathbb{R}^C$, with $d$ the number of feature dimensions; this means $f(x_i;\Theta) = h(g(x_i;\Theta_g);\Theta_h)$. We often abbreviate $g(x_i;\Theta_g)$ to $g(x_i)$. Define $\bar{g}(x_i)$ as $\frac{g(x_i)}{\|g(x_i)\|_2}$. Let $\mu_k \equiv \frac{1}{N_k}\sum_{(x_i,k)\in D_k}\bar{g}(x_i)$ be the inner-class mean of the normalized features for class $k$ and $\mu \equiv \frac{1}{C}\sum_{k=1}^{C}\mu_k$ be the mean of the inner-class means. We used a linear layer for the classifier; in other words, $h(v;\Theta_h) = W^\top v$ holds for a feature $v \in \mathbb{R}^d$, with $W \in \mathbb{R}^{d\times C}$ a weight matrix. Denote the $k$th column vector of $W$ by $w_k$; thus, $\Theta_h = \{w_k\}$. The loss function is denoted by $\ell(z_i, y_i): \mathbb{R}^C \times \mathbb{R}^C \to \mathbb{R}$. Let $\ell_{CE}$ and $\ell_{CB}$ be the loss functions of CE and CB, respectively.
Writing $F(\Theta; D) \equiv \frac{1}{N}\sum_{i=1}^{N}\ell(f(x_i;\Theta), y_i)$, the parameters $\Theta$ are optimized as follows in the absence of regularization:
$$\Theta^* = \arg\min_{\Theta} F(\Theta; D). \tag{1}$$
Denote by $F_{WB}$ the objective function in the second training stage of WB. Let $\lambda$ be the hyperparameter of WD.
Settings
We used the CIFAR10, CIFAR100 [22], and mini-ImageNet [51] datasets and followed Cui et al. [4] and Vigneswaran et al. [50] to create the long-tailed datasets CIFAR10-LT, CIFAR100-LT, and mini-ImageNet-LT. Each class is classified into one of three groups, Many, Medium, and Few, depending on the number of training samples $N_k$. We used ResNet32 [13] as the network architecture. In some of the methods used in our study, parameters were trained in two stages [18]: in the first stage, we trained the weights of the entire model $\Theta$, while in the second stage, we trained the weights of the classifier $\Theta_h$ with the weights of the feature extractor $\Theta_g$ fixed.
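A minimal sketch of this two-stage schedule; the toy model split and the optimizer settings are placeholders, not the exact training configuration used in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the feature extractor g(.; Theta_g) and classifier h(.; Theta_h).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
classifier = nn.Linear(64, 100, bias=False)
model = nn.Sequential(feature_extractor, classifier)

# Stage 1: train the entire model Theta (with weight decay, cf. Eq. (4) in the appendix).
stage1_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-3)

# Stage 2: freeze Theta_g and update only the classifier weights Theta_h.
for p in feature_extractor.parameters():
    p.requires_grad_(False)
stage2_opt = torch.optim.SGD(classifier.parameters(), lr=0.01, weight_decay=5e-3)
```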
To investigate the properties of neural network models, we also experimented with models composed of MLPs and residual blocks (ResBlock) [13]. The number of blocks is appended to the model name to distinguish them (e.g., MLP3). We trained and evaluated these models on MNIST [23].
For training losses, we used CE unless otherwise stated. See Appendix 7.5 for detailed settings.
WD and CE degrade inter-class cosine similarities
We first examined the role of WD in the first stage of training. We trained the model in two ways, without WD (Naive) and with WD (WD), and compared two types of loss, CE and CB, for a total of four training methods. We investigated the FDR, the inter-class and intra-class mean cosine similarity of the features, and the norm of the mean features per class. Due to space limitations, some results on mini-ImageNet-LT other than the FDR are presented in Appendix 7.6. Table 2 shows the FDR of the models trained with each method, and Figure 1 shows the cosine similarity of the training features per class. It can be seen that the combination of WD and CE achieves the highest FDR. This fact is consistent with the conclusion that WD is necessary for NC on balanced datasets, as shown by Rangamani and Banburski-Fahey [42], and suggests that this may also hold for long-tailed data. When WD is not applied, the cosine similarity is generally high even between features of different classes. This is natural, because neural networks exhibit the cone effect found by Liang et al. [28]. However, the methods with WD result in lower cosine similarity between features of different classes and higher cosine similarity between features of the same class, leading to a higher FDR. This shows that WD has the effect of counteracting the cone effect, and the following theorem supports it.
Theorem 1. For all $(x_i, y_i), (x_j, y_j) \in D$ s.t. $y_i \neq y_j$, if $W$ is an ETF and there exist $\epsilon$ and $L$ s.t. $\left\|\frac{\partial\ell_{CE}}{\partial g(x_i)}\right\|_2, \left\|\frac{\partial\ell_{CE}}{\partial g(x_j)}\right\|_2 \le \epsilon < \frac{1}{C}$ and $\|g(x_i)\|_2, \|g(x_j)\|_2 \le L \le 2\sqrt{2}\log(C-1)$, the following holds:
$$\cos(g(x_i), g(x_j)) \le 2\delta\sqrt{1-\delta^2}, \tag{2}$$
where $\delta \equiv \frac{1}{L}\cdot\frac{C-1}{C}\log\frac{(C-1)(1-\epsilon)}{\epsilon} \in \left(\frac{1}{\sqrt{2}}, 1\right)$ and $\cos(\cdot,\cdot)$ denotes the cosine similarity of two vectors.
This theorem states that regularizing the features reduces the cosine similarity of features between different classes. In fact, as shown in Figure 3, WD also indirectly reduces the norm of the features by regularizing the weights, which, by the theorem, causes low inter-class cosine similarity. This also suggests that explicit FR leads to better feature-extractor training; we consider this in Sec. 5.2.
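The quantities discussed in this section can be probed with a short sketch like the one below. Note that Fisher's discriminant ratio admits several formulations; the scalar variant here (between-class over within-class scatter) is one plausible choice and not necessarily the exact statistic computed in the paper, and the random features are placeholders for extracted features $g(x)$.

```python
import torch
import torch.nn.functional as F

def fisher_discriminant_ratio(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Scalar FDR as trace(S_B) / trace(S_W), between- over within-class scatter."""
    global_mean = features.mean(dim=0)
    s_b = s_w = 0.0
    for c in labels.unique():
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        s_b = s_b + fc.shape[0] * ((mu_c - global_mean) ** 2).sum()
        s_w = s_w + ((fc - mu_c) ** 2).sum()
    return (s_b / s_w).item()

def cosine_stats(features: torch.Tensor, labels: torch.Tensor):
    """Mean cosine similarity within the same class and between different classes."""
    f = F.normalize(features, dim=1)
    sims = f @ f.t()
    same = labels[:, None] == labels[None, :]
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    return sims[same & off_diag].mean().item(), sims[~same].mean().item()

feats = torch.randn(200, 64)                 # stand-in for extracted features g(x)
labs = torch.randint(0, 10, (200,))
print(fisher_discriminant_ratio(feats, labs))
print(cosine_stats(feats, labs))             # (intra-class, inter-class)
```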
WD and CE decrease scaling parameters of BN
Next, we split the effect of WD on the model into the effect on the convolution layers and the effect on the BN layers. First, to examine the impact on the convolutions, we measured the FDR of features from models trained with WD applied only to the convolution layers (WD w/o BN). The bottom halves of Table 2 show the FDR of the methods with restricted WD application. These results indicate that when WD is applied only to the convolution layers, the FDR already improves substantially, which means that enabling WD for the convolution layers is essential for improving accuracy. One reason for this may be an increase in the effective learning rate [56].
Small BN scaling parameters facilitate feature learning We also examined the effect on the BN layers and found that WD reduces the mean of the BN scaling parameters, while the standard deviation of the shifting parameters remains almost unchanged. As an example, for the top BN layer of ResNet34 trained on CIFAR100-LT, applying WD changes the mean of the scaling parameters from 1.0 to 0.19 and the variance of the shifting parameters from 0.012 to 0.009. We then investigated how the FDR changes when models are trained with the BN scaling parameters fixed to one common small value (WD fixed BN). In this method, we applied WD only to the convolution layers and fixed the BN shifting parameters to 0. We selected the optimal value for the scaling parameters from {0.01, 0.02, . . . , 0.20} using the validation dataset. Table 2 shows that a small value of the BN scaling parameter is crucial for boosting the FDR; even setting the same value for all scaling parameters in the entire model works. In this case, the scaling parameter of the BN does not directly affect the FDR, as it only multiplies the features uniformly by a constant. This suggests that the improvement in FDR caused by applying WD to the BN layers arises mainly because the smaller scaling parameters have a positive effect on the learning dynamics. Although Kim et al. [20] attribute this to the increase in the effective learning rate of the scaling parameters, the scaling parameters have a positive effect on the FDR even when they are not trained, suggesting that there are other significant effects; for example, smaller scaling parameters reduce the norm of the features, promoting the effects shown in Sec. 4.2.
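The "WD fixed BN" variant can be set up roughly as follows; the backbone here is a torchvision stand-in (the paper uses ResNet32), and the shared constant 0.1 is just one illustrative value from the searched grid.

```python
import torch.nn as nn
from torchvision.models import resnet18  # stand-in backbone; the paper uses ResNet32

model = resnet18(num_classes=100)
gamma = 0.1  # one shared scaling value for all BN layers (tuned on validation data)
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        nn.init.constant_(m.weight, gamma)  # fix every scaling parameter to gamma
        nn.init.zeros_(m.bias)              # fix every shifting parameter to 0
        m.weight.requires_grad_(False)      # exclude BN parameters from training
        m.bias.requires_grad_(False)
```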
High relative variance of BN's shifting parameters temporarily degrades FDR What then is the impact of the higher relative variance of the shifting parameters of the BN layers? Our experiments have shown that, in practice, they temporarily worsen the FDR in Figure 2. In the experiments, we trained MLPs with MNIST using three different methods: without WD (Naive), with WD (WD all), and with WD except for any shifting parameters of the BN (WD w/o shift). We retrieved the output features of untrained and these trained models for each layer, applied ReLU to them, and examined their FDR. We refer to this as ReLUFDR. Figure 2 shows the increase and decrease in the ReLUFDR of the intermediate outputs of the MLPs trained with each method, indicating that the ReLUFDR of the model trained without WD monotonically and gradually increases as the features pass through the layers, whereas this is not the case for the models trained with WD. In particular, ReLUFDR decreases drastically when one trains the shifting parameters and applies the scaling and shifting parameters of the BN in blocks close to the last layer. However, in this case, the ReLUFDR increases when passing through the linear layer more than the ReLUFDR decreases when passing through the BN later. This result suggests that the applying WD to the scaling and shifting parameters of the BN have a positive effect on the training of the linear layer.
In summary, smaller scaling parameters of BN layers promote learning of the linear layers but temporarily worsen the FDR.

WD and CE facilitate improvement of FDR as features pass through layers

Table 3 compares the FDR of two sets of features: ones from each model trained with each method (before) and ones output from the model, passed through a randomly initialized linear layer, and then with ReLU applied (after). This experiment revealed that the features obtained from the models trained with WD have a bias toward improving their FDR regardless of the class imbalance. The detailed experimental setup is described in Appendix 7.6. While training without WD does not increase the FDR in most cases, the features from models with WD applied to the convolutions improve the FDR over the before case in most cases. Moreover, the after-FDR increases significantly when WD is applied to the whole model. These results suggest that the features from models trained with WD are more likely to improve their FDR by passing through more layers.
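This before/after probe can be sketched as below: the features are passed through a freshly initialized, untrained linear layer plus ReLU, and the FDR is compared on both sides. Layer sizes and the random features are placeholders.

```python
import torch

def fdr(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Same scalar FDR variant as in the earlier sketch: trace(S_B) / trace(S_W)."""
    gm = features.mean(dim=0)
    s_b = s_w = 0.0
    for c in labels.unique():
        fc = features[labels == c]
        s_b = s_b + fc.shape[0] * ((fc.mean(dim=0) - gm) ** 2).sum()
        s_w = s_w + ((fc - fc.mean(dim=0)) ** 2).sum()
    return (s_b / s_w).item()

feats = torch.randn(500, 64)             # placeholder for features from a trained extractor
labels = torch.randint(0, 10, (500,))

probe = torch.nn.Linear(64, 64)          # randomly initialized, never trained
after = torch.relu(probe(feats)).detach()
print("FDR before:", fdr(feats, labels), "FDR after:", fdr(after, labels))
```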
WD raises features' norms to higher values for tail classes
We also investigated the properties of the norms of the training features from each model studied in Sec. 4.2; Sec. 5.1 reveals that this property is crucial to the effectiveness of WB. Figure 3 shows the norm of the training features per class. Note that this phenomenon does not occur for test features. The method without WD shows almost no relationship between the number of samples and the per-class norm. However, when WD is applied, the norms of the Many classes' features drop significantly, to between one-half and one-fifth of those of the Few classes. While this phenomenon has been observed in step-imbalanced data [47], we found that it also occurs in long-tailed data.
Role of second training stage in WB
In this section, we theoretically analyze how the second stage of training operates in Sec. 5.1, which leads to further simplification and empirical performance improvement of WB in Sec. 5.2.
WD and CB perform implicit logit adjustment
We found that WB encourages the model to output features with high FDR in the first stage of training. How, then, does the second stage of training improve the accuracy? Alshammari et al. [1] found that in the second stage WB trains the model so that the norm of the linear layer becomes larger for the tail classes, but WB does not explicitly train it in this way. We found the following theorem, which shows that the second stage of WB is equivalent to multiplicative LA under certain assumptions. In the following theorem, we assume $\mu = 0$; this is valid when NC occurs, i.e., when the per-class mean features follow an ETF or SELI [40,47]. See Appendix 7.4 for the theorem and proof when this does not hold.
Theorem 2. Assume $\mu = 0$. For any $k \in \mathcal{Y}$, if there exists $w_k^*$ s.t. $\left.\frac{\partial F_{WB}}{\partial w_k}\right|_{w_k = w_k^*} = 0$ and $\|w_k^*\|_2 \le O\left(\frac{1}{\lambda\rho C}\right)$, we have
$$\forall k \in \mathcal{Y},\quad \left\|w_k^* - \frac{\bar{N}}{\lambda N}\mu_k\right\|_2 \le O\left(\frac{1}{\lambda^2\rho^2 C^2}\right). \tag{3}$$
Note that $\frac{\bar{N}}{\lambda N}$ is independent of $k$. This theorem states that if the number of classes or the imbalance ratio is sufficiently large, and if NC has happened in the first stage, there exist linear-layer weights that are constant multiples of the corresponding class-mean features and sufficiently close to a stationary point of the optimization. In other words, if the previous conditions are satisfied, the norm of each linear-layer weight is proportional to the norm of the corresponding mean feature; thus, the linear layer is trained to be larger for the Few classes. Besides, the features and the corresponding linear-layer weights are aligned when NC occurs, so the orientation of the linear-layer weights hardly changes during this training. This is the same operation as when multiplicative LA adjusts for long-tailed data by increasing the norm of the linear layer for the tail classes [21]. Indeed, if there exist some $c_0$ and $\gamma_0$ such that $\|\mu_k\|_2 = c_0\,\mathbb{P}(Y=k)^{\gamma_0}$ holds for all $k$, then the second-stage training is equivalent to multiplicative LA. In this theorem, we do not consider MN, because we found that MN only moves the norms of the weights in the linear layer closer to a common value before training, merely facilitating convergence. For more details, see Appendix 7.6.
Note also that this theorem does not guarantee that the second stage of WB is an operation equivalent to multiplicative LA when the number of classes is small. In fact, even though the norm of the training features is larger for Few classes (Figure 3), WB fails to make the norm of the weights larger for Few classes on CIFAR10-LT; see Appendix 7.6. This suggests that replacing the second stage of training with LA would be more generic.
Is two-stage learning really necessary?
We consider a method that replaces the second stage of WB with multiplicative LA, which requires no additional training. To improve the FDR and promote NC, we use FR and an ETF classifier as the linear layer. We fix the linear layer and train the feature extractor with CE and WD; we then adjust the norm of the linear layer with multiplicative LA. To verify the effectiveness of this combination, we compared the FDR and accuracy of the following training methods.
• Training with CE or CB only (Naive, CB).
• Training with WD and CE (WD).
• Using an ETF classifier as the linear layer and training only the feature extractor with WD and CE (WD&ETF).
• Using an ETF classifier as the linear layer and training only the feature extractor with WD, FR, and CE (WD&FR&ETF).
For those using LA, we examined additive LA [39] and multiplicative LA [21]. To ensure that the model correctly classifies both the head classes' samples and the tail classes' samples, we consider the average of the per-class accuracy over all classes and over each group, i.e., Many, Medium, and Few. Table 4 lists the FDR and accuracy of each method on CIFAR100-LT; see Appendix 7.6 for the results on the other datasets (CIFAR10-LT and mini-ImageNet-LT). First, training with the ETF classifier in addition to WD increases the FDR for both training and test features, and using FR improves the FDR further. However, this alone does not boost the average accuracy much. In contrast, LA enhances the classification accuracy of the Medium and Few classes in all cases and thus also increases the final average accuracy significantly. We observed that multiplicative LA improved the accuracy to a greater extent than additive LA, and that WD&FR&ETF with multiplicative LA outperformed WB in average accuracy despite our method requiring only one training stage.
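Putting the pieces together, the winning recipe above (WD&FR&ETF followed by multiplicative LA) can be sketched roughly as follows; the architecture, hyperparameters, and γ are placeholders to be tuned, and the ETF construction follows Eq. (11) in the appendix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, d = 100, 128
counts = torch.tensor([round(480 * 100 ** (-k / (C - 1))) for k in range(C)],
                      dtype=torch.float)
prior = counts / counts.sum()                    # empirical class prior P(Y = k)

# Fixed ETF classifier (Eq. (11)); U is a random orthonormal basis, requires d >= C.
U, _ = torch.linalg.qr(torch.randn(d, C))
W_etf = (C / (C - 1)) ** 0.5 * U @ (torch.eye(C) - torch.ones(C, C) / C)

extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d), nn.ReLU())
opt = torch.optim.SGD(extractor.parameters(), lr=0.1, weight_decay=5e-3)  # WD

def training_step(x, y, zeta=0.01):
    feats = extractor(x)
    loss = F.cross_entropy(feats @ W_etf, y)                   # CE; classifier stays fixed
    loss = loss + 0.5 * zeta * (feats ** 2).sum(dim=1).mean()  # FR, Eq. (5)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(training_step(torch.randn(8, 3, 32, 32), torch.randint(0, C, (8,))))

# After training: multiplicative LA on the column-normalized classifier (Eq. (9)).
gamma = 0.25  # tuned on validation data in practice
W_adjusted = F.normalize(W_etf, dim=0) / prior[None, :] ** gamma
```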
Conclusion
We have theoretically and experimentally demonstrated how WB improves accuracy in LTR by investigating each component of WB. In the first stage of training, WD and CE cause the following three effects: a decrease in inter-class cosine similarity, a reduction in BN's scaling parameters, and an improvement in how readily the FDR of features increases through subsequent layers; together these enhance the FDR of the feature extractor. In the second stage, WD, CB, and the feature norms that are larger for the tail classes work as multiplicative LA, improving the classification accuracy of the tail classes. Our analysis also reveals a training method that achieves higher accuracy than WB by improving each stage based on its objective. This method is simple, so we recommend trying it first. A limitation is that our experiments are limited to ResNet [13], and the usefulness for other models such as ViT [5] is unknown. Future research will include experiments on, and the development of useful methods for, such models.
Appendix
Related work
Cone effect
Liang et al. [28] found a phenomenon in neural networks that they termed the "cone effect". This is the tendency of features from deep learning models with activation functions to have high cosine similarity, even when the features are computed from different data and regardless of whether the model has been trained. The cone effect is observed in various models and was reconfirmed in our experiments (see Sec. 4.2).
Notation
Table 5 summarizes the notations used in this paper.

| Notation | Meaning |
| --- | --- |
| $C$ | number of classes |
| $N$ / $N_k$ / $\bar{N}$ | number of all samples / of class $k$ / harmonic mean over classes |
| $\rho$ | imbalance ratio $N_1/N_C = \max_k N_k / \min_k N_k$ |
| $\mathcal{X}$ / $\mathcal{Y}$ | domain of samples / labels |
| $x$ / $y$ / $z$ | sample / label / logit |
| $D$ / $D_k$ | dataset of all classes / class $k$ |
| $f$ / $g$ / $h$ | neural network of whole model / feature extractor / linear classifier |
| $g(x)$ / $\bar{g}(x)$ | feature of sample $x$ / normalized feature of sample $x$ |
| $\mu$ / $\mu_k$ | mean of inner-class mean of features / inner-class mean of features for class $k$ |
| $\Theta$ / $\Theta_g$ / $\Theta_h$ | set of parameters of $f$ / $g$ / $h$ |
| $\theta$ | parameter in $\Theta$ |
| $W$ / $w_k$ | linear layer's weight matrix / weight vector of class $k$ |
| $\ell$ / $\ell_{CE}$ / $\ell_{CB}$ | loss function / of CE / of CB |
| $F$ / $F_{WB}$ | objective function / in the second training stage of WB |
| $\cos(\cdot,\cdot)$ | cosine similarity of two vectors |
| $\tau$ / $\gamma$ | hyperparameter of additive LA / multiplicative LA |
Preliminaries
WD Note that we implemented WD as L2 regularization because we used SGD for the optimizer.
Therefore, the optimization of Θ can be formulated as follows.
$$\Theta^* = \arg\min_{\Theta} F(\Theta; D) + \frac{\lambda}{2}\sum_{\theta_l\in\Theta}\|\theta_l\|_2^2. \tag{4}$$
FR We implemented FR by adding the average of the squared norm of the features to the loss; the optimization when applying FR is done in the following way.
$$\Theta^* = \arg\min_{\Theta} F(\Theta; D) + \frac{\zeta}{2}\cdot\frac{1}{N}\sum_{x_i\in D}\|g(x_i)\|_2^2, \tag{5}$$
where ζ is a hyperparameter.
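A sketch of how Eqs. (4) and (5) look in code: WD via the SGD weight_decay term (matching the L2 implementation stated above) and FR as an explicit penalty on the squared feature norms. Model sizes and coefficients are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU())
head = nn.Linear(32, 10)

lam, zeta = 5e-3, 0.01
# Eq. (4): with SGD, weight_decay adds lambda * theta to each gradient,
# i.e. the (lambda/2) * ||theta||_2^2 penalty.
opt = torch.optim.SGD(list(extractor.parameters()) + list(head.parameters()),
                      lr=0.1, weight_decay=lam)

x, y = torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,))
feats = extractor(x)
loss = F.cross_entropy(head(feats), y)
loss = loss + 0.5 * zeta * (feats ** 2).sum(dim=1).mean()  # Eq. (5): FR penalty
opt.zero_grad(); loss.backward(); opt.step()
```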
MN MN is a regularization that places a restriction on the upper bound of the norm of weights. As with Alshammari et al. [1], for the convenience of hyperparameter tuning, the weights to be constrained are only the weights belonging to the classifier Θ h ⊂ Θ, and the following constrained optimization is solved:
$$\Theta^* = \arg\min_{\Theta} F(\Theta; D) \quad \text{s.t.} \quad \forall w_k \in \Theta_h,\ \|w_k\|_2^2 \le \eta_k^2, \tag{6}$$
where η k is a hyperparameter. Since constrained optimization is difficult to solve in neural networks, we implemented it using projected gradient descent. In our implementation, this projects the weights that do not satisfy the constraint to the range where the constraint is satisfied at each batch update as follows:
$$w_k \leftarrow \min\left(1, \frac{\eta_k}{\|w_k\|_2}\right) w_k. \tag{7}$$
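The projection step of Eq. (7), applied after each batch update, can be sketched as follows (the layer sizes and η are illustrative):

```python
import torch

@torch.no_grad()
def maxnorm_project(weight: torch.Tensor, eta: float) -> None:
    """Rescale each class vector w_k back into ||w_k||_2 <= eta, as in Eq. (7).

    Assumes one class vector per row, the layout of torch.nn.Linear.weight.
    """
    norms = weight.norm(dim=1, keepdim=True)
    weight.mul_(torch.clamp(eta / norms, max=1.0))

linear = torch.nn.Linear(64, 100, bias=False)
# ... called after every optimizer step on a batch:
maxnorm_project(linear.weight, eta=0.5)
print(linear.weight.norm(dim=1).max())  # now at most 0.5
```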
WB WB adopts two-stage training [18]: in the first stage, the parameters of the entire model $\Theta$ are optimized with CE using WD; in the second stage, $\Theta_g$ is fixed and only $\Theta_h$ is optimized with CB using WD and MN. For simplicity, let $\beta$ of CB be 1 in this paper. That is, each effective number of samples is equal to the number of samples, and the following holds:
$$\ell_{CB}(z_i, y_i) = -\frac{\bar{N}}{N_{y_i}}\log\frac{\exp((z_i)_{y_i})}{\sum_{j=1}^{C}\exp((z_i)_j)}. \tag{8}$$
Therefore, in the second stage of WB, $F_{WB}(W; D) \equiv \frac{1}{N}\sum_{i=1}^{N}\ell_{CB}(z_i, y_i) + \frac{\lambda}{2}\sum_{k\in\mathcal{Y}}\|w_k\|_2^2$ is optimized with respect to $W$. Note that we do not consider MN in this paper, as we revealed in Appendix 7.6 that it only changes the initial values and does not impose any intrinsic constraint on the optimization.
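With β = 1 as in Eq. (8), the class-balanced term is simply CE reweighted by $\bar{N}/N_{y_i}$; a minimal sketch with illustrative per-class counts:

```python
import torch
import torch.nn.functional as F

# Illustrative long-tailed class counts (C = 100, rho = 100).
counts = torch.tensor([round(480 * 100 ** (-k / 99)) for k in range(100)],
                      dtype=torch.float)
n_bar = len(counts) / (1.0 / counts).sum()   # harmonic mean \bar{N}
cb_weight = n_bar / counts                   # per-class factor \bar{N} / N_k

def cb_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Class-balanced CE of Eq. (8) with beta = 1, averaged over the batch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (cb_weight[targets] * per_sample).mean()

print(cb_loss(torch.randn(8, 100), torch.randint(0, 100, (8,))))
```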
Multiplicative LA Multiplicative LA [21] changes the norm of the per-class weights of the linear layer. The weight of the linear layer corresponding to class $k$, $w_k'$, is adjusted using the hyperparameter $\gamma > 0$ as follows:
$$w_k' = \frac{1}{\mathbb{P}(Y=k)^\gamma}\cdot\frac{w_k}{\|w_k\|_2}. \tag{9}$$
Kim and Kim [21] trained using projected gradient descent so that the weights of the linear layer always satisfy $\forall k, \|w_k\|_2 = 1$; in our study, for comparison, we trained with regular gradient descent, normalized the norms post hoc, and then adjusted them.
Additive LA Additive LA [39] is a method to adjust the output to minimize the average per-class error by adding a different constant for each class to the logit at prediction. That is, the logit zi at prediction is adjusted to z ′ i as follows.
$$(z_i')_k = (z_i)_k - \tau\log\mathbb{P}(Y=k), \tag{10}$$
where τ is a hyperparameter.
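Both adjustments can be sketched in a few lines; the class prior, γ, and τ below are placeholders to be tuned on validation data:

```python
import torch
import torch.nn.functional as F

C, d = 100, 64
prior = F.normalize(torch.rand(C), p=1.0, dim=0)  # stand-in for P(Y = k)
W = torch.randn(d, C)                             # trained weights, one column per class

# Multiplicative LA, Eq. (9): normalize each w_k, then rescale by P(Y = k)^(-gamma).
gamma = 0.25
W_mult = F.normalize(W, dim=0) / prior[None, :] ** gamma

# Additive LA, Eq. (10): shift the logits at prediction time.
tau = 1.0
feats = torch.randn(8, d)
logits_adjusted = feats @ W - tau * torch.log(prior)[None, :]
```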
ETF Classifier NC indicates that the matrix of classifier weights in deep learning models converges
to an ETF when trained with CE [40]. The idea of ETF classifiers is to train the feature extractor by fixing the linear layer to an ETF from the initial step, without training it. The ETF classifier's weights $W \in \mathbb{R}^{d\times C}$ satisfy the following equation:
$$W = E_W\sqrt{\frac{C}{C-1}}\, U\left(I_C - \frac{1}{C}\mathbf{1}_C\mathbf{1}_C^\top\right), \tag{11}$$
where $U \in \mathbb{R}^{d\times C}$ is a matrix such that $U^\top U$ is an identity matrix, $I_C \in \mathbb{R}^{C\times C}$ is an identity matrix, and $\mathbf{1}_C$ is a $C$-dimensional vector whose elements are all 1. Also, $E_W$ is a hyperparameter, set to 1 in our study.
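Eq. (11) can be instantiated as below; a random orthonormal $U$ is one valid choice, and $E_W = 1$ follows the paper:

```python
import torch

def etf_classifier(d: int, C: int, e_w: float = 1.0) -> torch.Tensor:
    """W = E_W * sqrt(C/(C-1)) * U (I_C - (1/C) 1 1^T) of Eq. (11); needs d >= C."""
    assert d >= C, "feature dimension must be at least the number of classes"
    U, _ = torch.linalg.qr(torch.randn(d, C))   # orthonormal columns: U^T U = I_C
    M = torch.eye(C) - torch.ones(C, C) / C
    return e_w * (C / (C - 1)) ** 0.5 * U @ M

W = etf_classifier(d=128, C=100)
print(W.norm(dim=0)[:3])            # every column has unit norm
print((W[:, 0] @ W[:, 1]).item())   # pairwise cosine is -1/(C-1) ≈ -0.0101
```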
Proof
Proof of Theorem 1
Proof. Define $p_k(g(x_i))$ as $\frac{\exp(w_k^\top g(x_i))}{\sum_{l=1}^{C}\exp(w_l^\top g(x_i))}$. For $x \in \{x_i, x_j\}$ with label $y$, we have
$$\left\|\frac{\partial\ell_{CE}}{\partial g(x)}\right\|_2 = \left\|\frac{\partial\ell_{CE}}{\partial g(x)}\right\|_2\|w_y\|_2 \ge \left\langle -\frac{\partial\ell_{CE}}{\partial g(x)}, w_y\right\rangle = (1 - p_y(g(x)))\,w_y^\top w_y - \sum_{l\neq y} p_l(g(x))\,w_l^\top w_y = (1 - p_y(g(x))) + \frac{1}{C-1}\sum_{l\neq y} p_l(g(x)) = \frac{C}{C-1}(1 - p_y(g(x))) \ge 1 - p_y(g(x)). \tag{12}$$
Therefore,
$$(1 - p_y(g(x))) \le \left\|\frac{\partial\ell_{CE}}{\partial g(x)}\right\|_2 \le \epsilon \;\Rightarrow\; p_y(g(x)) \ge 1 - \epsilon. \tag{13}$$
For $p_y(g(x))$, using Jensen's inequality, we get
$$p_y(g(x)) = \frac{\exp(w_y^\top g(x))}{\sum_{l=1}^{C}\exp(w_l^\top g(x))} = \frac{\exp(w_y^\top g(x))}{\exp(w_y^\top g(x)) + (C-1)\sum_{l\neq y}\frac{1}{C-1}\exp(w_l^\top g(x))} \le \frac{\exp(w_y^\top g(x))}{\exp(w_y^\top g(x)) + (C-1)\exp\left(\frac{1}{C-1}\sum_{l\neq y} w_l^\top g(x)\right)} = \frac{\exp(w_y^\top g(x))}{\exp(w_y^\top g(x)) + (C-1)\exp\left(-\frac{1}{C-1} w_y^\top g(x)\right)} = \frac{1}{1 + (C-1)\exp\left(-\frac{C}{C-1} w_y^\top g(x)\right)}. \tag{14}$$
By (13) and (14), we have
(C-1)\exp\!\left(-\frac{C}{C-1}\, w_y^\top g(x)\right) \le \frac{\epsilon}{1-\epsilon}
\;\Rightarrow\; w_y^\top g(x) \ge \frac{C-1}{C}\log\frac{(C-1)(1-\epsilon)}{\epsilon}
\;\Rightarrow\; \frac{w_y^\top g(x)}{\|g(x)\|_2} \ge \frac{1}{L}\cdot\frac{C-1}{C}\log\frac{(C-1)(1-\epsilon)}{\epsilon} \equiv \delta. \tag{15}
Note that \epsilon \le \frac{1}{C} and L \le 2\sqrt{2}\,\log\!\left((C-1)^{\frac{C-1}{C}}\right) < 2\sqrt{2}\,\log(C-1), which means \delta > \frac{1}{\sqrt{2}}.
Also, δ is a lower bound on the inner product of two unit vectors. Therefore, we have

\delta \in \left(\frac{1}{\sqrt{2}},\, 1\right]. \tag{16}
Let ∠(s, t) ≡ arccos(cos(s, t)) denote the angle between vectors s and t; for example, ∠(w_{y_i}, w_{y_j}) = arccos(−1/(C−1)). Using (16), the following holds:

\angle(g(x_i), g(x_j)) \ge \angle(w_{y_i}, w_{y_j}) - \angle(w_{y_i}, g(x_i)) - \angle(w_{y_j}, g(x_j)) \ge \angle(w_{y_i}, w_{y_j}) - 2\max_{x \in \{x_i, x_j\}} \angle(w_y, g(x)) \ge 0. \tag{17}
Write α for ∠(w_{y_i}, w_{y_j}) and β for \max_{x \in \{x_i, x_j\}} \angle(w_y, g(x)). Since the cosine is monotonically decreasing on [0, π], we get

\begin{aligned}
\cos(g(x_i), g(x_j)) &\le \cos(\alpha - 2\beta) = \cos\alpha\,(2\cos^2\beta - 1) + 2\sin\alpha\,\sin\beta\,\cos\beta \\
&\le \cos\alpha\,(2\cos^2\beta - 1) + 2\sqrt{1 - \cos^2\beta}\,\cos\beta. 
\end{aligned} \tag{18}
Note that x\sqrt{1-x^2} is monotonically decreasing and 2x^2 \ge 1 holds on \left(\frac{1}{\sqrt{2}},\, 1\right]. Therefore, from (15), (16), and (18), the following holds:

\cos(g(x_i), g(x_j)) \le 2\delta\sqrt{1 - \delta^2}. \tag{19}
Proof of Theorem 2
Define \hat{w}_k \equiv \frac{\bar{N}}{\lambda N}\,\mu_k. We prove a stronger theorem, from which Theorem 2 follows easily.
Theorem 3. For any k ∈ Y, if there exists w^*_k such that \frac{\partial F_{\mathrm{WB}}}{\partial w_k}\big|_{w_k = w^*_k} = 0 and \|w^*_k\|_2 \le O\!\left(\frac{1}{\lambda \rho C}\right), then

\forall k \in \mathcal{Y}, \quad \|w^*_k - \hat{w}_k\|_2 \le \frac{\bar{N}}{\lambda N}\,\|\mu\|_2 + O\!\left(\frac{1}{\lambda^2 \rho^2 C^2}\right). \tag{20}
Before proving the theorem, we first present the following lemmas.
Lemma 1. For a dataset with imbalance ratio ρ, the following holds:

\frac{\bar{N}}{N} = O\!\left(\frac{1}{\rho C}\right). \tag{21}
Proof. Since N_k = n\,\rho^{-\frac{k-1}{C-1}}, the harmonic mean and the total number of samples are

\bar{N} = \frac{C}{\sum_{k=1}^{C} \frac{1}{N_k}} = \frac{nC}{\sum_{k=1}^{C} \rho^{\frac{k-1}{C-1}}}, \tag{22}

N = \sum_{k=1}^{C} N_k = n \sum_{k=1}^{C} \rho^{-\frac{k-1}{C-1}} = \frac{n}{\rho} \sum_{k=1}^{C} \rho^{\frac{k-1}{C-1}}. \tag{23}
Therefore,

\frac{\bar{N}}{N} = \frac{\rho C}{\left(\sum_{k=1}^{C} \rho^{\frac{k-1}{C-1}}\right)^{2}} = \frac{\rho C \left(\rho^{\frac{1}{C-1}} - 1\right)^{2}}{\left(\rho^{\frac{C}{C-1}} - 1\right)^{2}}. \tag{24}
From (24), \frac{\bar{N}}{N} = O\!\left(\frac{1}{\rho}\right) (\rho \to \infty) is obvious. In addition, the following holds:

\lim_{C \to \infty} \frac{\bar{N} C}{N} = \lim_{C \to \infty} \frac{\rho C^2}{\left(\sum_{k=1}^{C} \rho^{\frac{k-1}{C-1}}\right)^{2}} \le \lim_{C \to \infty} \frac{\rho C^2}{C^2} = \rho. \tag{25}

Thus, \frac{\bar{N}}{N} = O\!\left(\frac{1}{C}\right) (C \to \infty) is also satisfied.
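As a quick numerical sanity check of the closed form (24), one can compare it with a direct computation of N̄/N; the snippet below is purely illustrative and the constants are arbitrary.

```python
import numpy as np

def ratio_direct(n, rho, C):
    Nk = n * rho ** (-np.arange(C) / (C - 1))     # N_k = n * rho^{-(k-1)/(C-1)}
    n_bar = C / np.sum(1.0 / Nk)                  # harmonic mean of the N_k
    return n_bar / Nk.sum()

def ratio_closed(rho, C):                         # Eq. (24)
    return rho * C * (rho ** (1 / (C - 1)) - 1) ** 2 / (rho ** (C / (C - 1)) - 1) ** 2

print(ratio_direct(5000, 100, 10), ratio_closed(100, 10))  # the two values agree
```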
Lemma 2. When w_k = \frac{\bar{N}}{\lambda N}\,\mu_k is satisfied, we have

\forall k \in \mathcal{Y}, \quad \left\|\frac{\partial F_{\mathrm{WB}}}{\partial w_k}\right\| \le \frac{\bar{N}}{N}\,\|\mu\| + O\!\left(\frac{1}{\lambda \rho^2 C^2}\right). \tag{26}

Proof of Lemma 2. For any k ∈ Y, it holds that

\frac{\partial F_{\mathrm{WB}}}{\partial w_k} = \frac{\bar{N}}{N} \sum_{j=1}^{C} \frac{1}{N_j} \sum_{(x_i, j) \in D_j} \frac{\exp(w_k^\top g(x_i))}{\sum_{l=1}^{C} \exp(w_l^\top g(x_i))}\, g(x_i) \;-\; \frac{\bar{N}}{N}\,\mu_k \;+\; \lambda w_k. \tag{27}
In addition, using Lemma 1 and the assumption on \|w_k\|_2, we derive the following:

\begin{aligned}
\frac{\exp(w_k^\top g(x_i))}{\sum_{l=1}^{C} \exp(w_l^\top g(x_i))}
&= \frac{1}{C} + \left(\frac{\exp(w_k^\top g(x_i))}{\sum_{l=1}^{C} \exp(w_l^\top g(x_i))} - \frac{1}{C}\right) \\
&\le \frac{1}{C} + \frac{1 + O(w_k^\top g(x_i))}{\sum_{l=1}^{C}\left(1 + O(w_l^\top g(x_i))\right)} - \frac{1}{C} \\
&= \frac{1}{C} + \frac{1}{C}\left[\left(1 + O\!\left(\tfrac{1}{\lambda \rho C}\right)\right)\left(1 - O\!\left(\tfrac{1}{\lambda \rho C}\right)\right)^{2} - 1\right] \\
&= \frac{1}{C} + O\!\left(\frac{1}{\lambda \rho C^{2}}\right).
\end{aligned} \tag{28}
Using this, the assumption, and (27), we get

\left\|\frac{\partial F_{\mathrm{WB}}}{\partial w_k}\right\| \le \frac{\bar{N}}{N}\,\|\mu\| + O\!\left(\frac{1}{\lambda \rho^2 C^2}\right). \tag{29}
Proof of Theorem 3. Since w^*_k is a stationary point,

\begin{aligned}
\frac{\partial F_{\mathrm{WB}}}{\partial w_k}\bigg|_{w_k = \hat{w}_k}
&= \frac{\partial F_{\mathrm{WB}}}{\partial w_k}\bigg|_{w_k = \hat{w}_k} - \frac{\partial F_{\mathrm{WB}}}{\partial w_k}\bigg|_{w_k = w^*_k} \\
&= \lambda(\hat{w}_k - w^*_k) + \frac{\bar{N}}{N} \sum_{j=1}^{C} \frac{1}{N_j} \sum_{(x_i, j) \in D_j} \left(\frac{\exp(\hat{w}_k^\top g(x_i))}{\sum_{l=1}^{C} \exp(\hat{w}_l^\top g(x_i))} - \frac{\exp(w^{*\top}_k g(x_i))}{\sum_{l=1}^{C} \exp(w^{*\top}_l g(x_i))}\right) g(x_i).
\end{aligned} \tag{30}
Here, similarly to (28), the following can be derived:

\frac{\exp(\hat{w}_k^\top g(x_i))}{\sum_{l=1}^{C} \exp(\hat{w}_l^\top g(x_i))} - \frac{\exp(w^{*\top}_k g(x_i))}{\sum_{l=1}^{C} \exp(w^{*\top}_l g(x_i))} = \frac{1}{C} - \frac{1}{C} + O\!\left(\frac{1}{\lambda \rho C^{2}}\right) = O\!\left(\frac{1}{\lambda \rho C^{2}}\right). \tag{31}
Using (30) and (31),

\left\|\lambda(\hat{w}_k - w^*_k)\right\| \le \left\|\frac{\partial F_{\mathrm{WB}}}{\partial w_k}\bigg|_{w_k = \hat{w}_k}\right\| + O\!\left(\frac{1}{\lambda \rho^2 C^2}\right) \le \frac{\bar{N}}{N}\,\|\mu\| + O\!\left(\frac{1}{\lambda \rho^2 C^2}\right). \tag{32}
Therefore,

\|w^*_k - \hat{w}_k\|_2 \le \frac{\bar{N}}{\lambda N}\,\|\mu\|_2 + O\!\left(\frac{1}{\lambda^2 \rho^2 C^2}\right). \tag{33}
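A toy numerical illustration of Theorem 3 can be instructive: optimize the second-stage objective F_WB on synthetic features and compare the solution with ŵ_k = (N̄/(λN))μ_k. All constants below are arbitrary choices of ours; the snippet is not part of the reported experiments.

```python
import torch

torch.manual_seed(0)
C, d, lam, rho = 10, 16, 0.5, 100.0
exps = -torch.arange(C, dtype=torch.float) / (C - 1)
n_per_class = torch.floor(500 * rho ** exps).long()       # N_k = n * rho^{-(k-1)/(C-1)}
X = torch.cat([torch.randn(int(n), d) + 3 * torch.randn(1, d) for n in n_per_class])
y = torch.cat([torch.full((int(n),), k, dtype=torch.long) for k, n in enumerate(n_per_class)])

N = float(len(y))
n_bar = C / (1.0 / n_per_class.float()).sum()             # harmonic mean N_bar
mu = torch.stack([X[y == k].mean(0) for k in range(C)])   # class means mu_k

W = torch.zeros(C, d, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.1)
for _ in range(2000):
    opt.zero_grad()
    ce = torch.nn.functional.cross_entropy(X @ W.T, y, reduction="none")
    loss = ((n_bar / n_per_class.float()[y]) * ce).mean() + 0.5 * lam * (W ** 2).sum()
    loss.backward()
    opt.step()

w_hat = (n_bar / (lam * N)) * mu                          # \hat{w}_k from Theorem 3
# the residual stays within the (N_bar / (lam * N)) * ||mu|| slack of (20)
print((W.detach() - w_hat).norm(dim=1).max())
```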
Settings
Our implementation is based on Alshammari et al. [1] and Vigneswaran et al. [50]. For comparison, many of the experimental settings also follow Alshammari et al. [1].
Datasets
We created validation datasets from portions of the training datasets because CIFAR10 and CIFAR100 provide only training and test splits. As in Liu et al. [31], 20 samples per class were taken from the training data to form the validation dataset, and the remaining data formed the training dataset. We set N_1 to 4980 for CIFAR10 and 480 for CIFAR100. In this experiment, we set the imbalance ratio ρ to 100. For CIFAR10-LT (resp. CIFAR100-LT), we call a class k Many if the number of training samples satisfies 1000 < N_k ≤ 4980 (resp. 100 < N_k ≤ 480), Medium if 200 ≤ N_k ≤ 1000 (resp. 20 ≤ N_k ≤ 100), and Few otherwise. For mini-ImageNet-LT, the same thresholds as for CIFAR100-LT apply.
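The exponential class-size profile N_k = N_1 ρ^{−(k−1)/(C−1)} used above can be generated as follows; this is a simplified illustration of the split construction, not the exact code of [31].

```python
import numpy as np

def long_tailed_counts(n1: int, C: int, rho: float) -> np.ndarray:
    """Per-class sample counts N_k = N_1 * rho^{-(k-1)/(C-1)}."""
    k = np.arange(C)
    return np.floor(n1 * rho ** (-k / (C - 1))).astype(int)

counts = long_tailed_counts(n1=4980, C=10, rho=100)   # CIFAR10-LT profile
print(counts)  # decays from 4980 down to roughly 4980 / 100
```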
Models
For the MLP, one block consists of a linear layer outputting 1024-dimensional features, a BN layer, and a ReLU layer stacked in sequence; the blocks are combined sequentially. For ResBlock, each block has the same structure as in He et al. [13], except that the linear layers output 1024-dimensional features. These blocks are combined sequentially, with an MLP block prepended at the bottom; in other words, the input first passes through a linear layer, BN, and ReLU before flowing into the residual blocks. In both cases, a classifier consisting of a linear layer and a softmax activation sits on top and does not count as a block. For example, in MLP3 the features pass through three blocks and then the classifier.
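The following PyTorch sketch mirrors this description; the width 1024 and block counts are as stated above, while all module names are our own.

```python
import torch.nn as nn

def mlp_block(d_in: int, d_out: int = 1024) -> nn.Sequential:
    """One MLP block: Linear -> BN -> ReLU, as described above."""
    return nn.Sequential(nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU())

class ResBlock(nn.Module):
    """Residual block in the style of [13], with 1024-dim linear layers."""
    def __init__(self, d: int = 1024):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d, d), nn.BatchNorm1d(d), nn.ReLU(),
            nn.Linear(d, d), nn.BatchNorm1d(d),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

def build_mlp(n_blocks: int, d_in: int, n_classes: int) -> nn.Sequential:
    blocks = [mlp_block(d_in)] + [mlp_block(1024) for _ in range(n_blocks - 1)]
    return nn.Sequential(*blocks, nn.Linear(1024, n_classes))  # softmax applied in the loss
```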
Evaluation metrics
Unless otherwise noted, we used the following hyperparameter values for the ResNet. The optimizer was SGD with momentum 0.9, with a cosine learning rate scheduler [34] gradually decreasing the learning rate from 0.01 to 0. The batch size was 64, and the number of epochs was 320 for the first stage and 10 for the second stage. As loss functions, we used naive CE and CB; for CB, we used class-balanced CE with β = 0.9999. We set λ of WD to 0.005 in the first stage and 0.1 in the second stage; for mini-ImageNet-LT, we set λ for the first stage to 0.003. We set ζ of FR to 0.01. We calculated MN's threshold η using a method similar to Alshammari et al. [1]. We searched for the optimal τ and γ for LA by cross-validation on the validation data, choosing τ from {1.00, 1.05, . . . , 2.00} and γ from {0.00, 0.05, . . . , 1.00}.
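The class-balanced weighting with β = 0.9999 follows the effective-number formulation of Cui et al. [4]; a short sketch of the per-class weights is given below. Normalizing the weights to sum to C is a common convention we assume here.

```python
import numpy as np

def class_balanced_weights(n_per_class: np.ndarray, beta: float = 0.9999) -> np.ndarray:
    """Weights proportional to 1 / E_{N_k}, with effective number E_n = (1 - beta^n) / (1 - beta)."""
    effective_num = (1.0 - beta ** n_per_class) / (1.0 - beta)
    w = 1.0 / effective_num
    return w / w.sum() * len(n_per_class)   # normalize so the weights sum to C
```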
We trained MLP and ResBlock with λ set to 0.01 and the number of epochs set to 150; other parameters are as above. The FDR and accuracy reported in the experiments were obtained by averaging the results of five training runs with different random seeds. We conducted the experiments on an NVIDIA A100.
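Since FDR is the central metric reported below, a compact sketch of computing it from features is given here, together with the "random linear layer + ReLU" re-measurement used in the Sec. 4.4 experiments. The exact FDR normalization is not restated in this appendix, so this trace-ratio variant of Fisher's criterion [7] and the default uniform initialization of the extra layer are our assumptions.

```python
import torch

@torch.no_grad()
def fisher_discriminant_ratio(feats: torch.Tensor, labels: torch.Tensor) -> float:
    """Trace-ratio Fisher criterion tr(pinv(S_w) @ S_b) computed from per-class statistics."""
    d = feats.shape[1]
    mu = feats.mean(0)
    Sw, Sb = torch.zeros(d, d), torch.zeros(d, d)
    for c in labels.unique():
        fc = feats[labels == c]
        mc = fc.mean(0)
        Sw += (fc - mc).T @ (fc - mc) / len(feats)                 # within-class scatter
        Sb += len(fc) / len(feats) * torch.outer(mc - mu, mc - mu) # between-class scatter
    return torch.trace(torch.linalg.pinv(Sw) @ Sb).item()

@torch.no_grad()
def fdr_after_random_layer(feats: torch.Tensor, labels: torch.Tensor) -> float:
    """FDR of features passed through a freshly initialized Linear + ReLU (the 'after' value)."""
    layer = torch.nn.Linear(feats.shape[1], feats.shape[1])  # PyTorch default uniform init
    return fisher_discriminant_ratio(torch.relu(layer(feats)), labels)
```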
Why we do not use DR loss
This section explains why we use an ETF classifier but not the dot-regression (DR) loss proposed in Yang et al. [55]. Table 6 compares FDR and accuracy when using WD and an ETF classifier for the linear layer with two options, CE or DR as the loss, and shows that CE is superior in test FDR and average accuracy. Theorem 1 also claims that training with CE and WD has the effect of preventing cone effects; this does not apply to DR, which is a squared loss, so it is difficult to prove whether DR is equally effective at preventing the cone effect.
Experiments
Experiments of Sec.4.2
The right half of Figure 5 shows the results of Sec. 4.2 for mini-ImageNet-LT. We also examined the forgetting score [49] of each method in Sec. 4.2. Phenomena similar to those shown in Sec. 4.2 can be observed in the forgetting scores with CB: Figure 4 shows that the forgetting score is higher on average with CB than with CE. In the absence of WD, this phenomenon is particularly evident for the Many classes, possibly because CB gives each image in the Many classes less weight during training, which inhibits learning. These may be some of the reasons for the poor accuracy of CB compared to CE in training feature extractors reported by Kang et al. [18].
Experiments of Sec. 4.4
This subsection describes the experimental setup of Sec. 4.4. We trained models on the balanced datasets using the same parameter settings as in Sec. 4.2. First, the FDR of the "before" features is measured from models trained as in the experiments of Sec. 4.2. Next, these features are passed through an extra linear layer, and their FDR is measured as the "after" value. The weight ∈ R^{d×d} and bias ∈ R^{d×1} of this extra linear layer are initialized with values drawn from a uniform distribution.

Experiments of Sec. 5.1

First, we experimentally show that MN is not necessary for the second stage of WB. In the second training stage of WB, we observed that normalizing the norm of the weights to one before training, instead of applying MN at every epoch (WD + WD & Norm & CB), gives almost identical results, as shown in Figure 6. This phenomenon is consistent with the fact that the regularization loses its effect beyond a certain epoch [9]. Figure 6 also shows that WB has difficulty performing implicit LA when the number of classes is small: while the linear layer's weights are larger for the Few classes in datasets with a sufficiently large number of classes, they remain lower for the Few classes in CIFAR10-LT. Thus, these results are consistent with the conclusions drawn from Theorem 2.
Experiments of Sec.5.2
We show the FDR and accuracy of methods with restricted WD for CIFAR100-LT in Table 7. Tables 8 and 9 show the results for CIFAR10-LT and mini-ImageNet-LT, respectively. We set ζ for CIFAR10-LT and mini-ImageNet-LT to 0.02 and 0.001, respectively.
We also experimented with ResNeXt50 [54], as in Alshammari et al. [1], using the same parameters as for the ResNet. Table 10 presents the results for CIFAR100-LT.

Figure 6: The ratio of the norms of the mean per-class training features produced by the models trained with each method on each dataset. The vertical axis shows the ratio of the norm of the weights for each class to that of the class with the largest sample size. Models trained with both methods have almost identical linear layer norms.
Broader impacts
Our research provides theoretical and experimental evidence for the effectiveness of an existing ad-hoc method, and shows that the original method can be simplified based on this theory. The theory holds for general deep neural networks and does not touch on social issues such as privacy. Moreover, our simplification reduces the number of training stages to one while maintaining a higher level of accuracy, thus reducing the computational cost and the negative environmental impact.
Figure 1: Heatmaps showing the average cosine similarity of the training features for each pair of classes. WD maintains high cosine similarity within the same class and reduces the cosine similarity between different classes.
Figure 2: The ReLUFDR of the intermediate outputs of each MLP trained with each method. Training with WD decreases ReLUFDR significantly when scaling and shifting features in the BN of the final layer.
Figure 3: Norm of the mean per-class training features produced by the models trained with each method. Features learned by all methods with WD show the tendency that the norms of the Many classes' features are smaller than those of the Few classes.
Figure 4: Average forgetting score per class when trained with each method. The scores are higher when the models are trained with CB; this is particularly noticeable for the Many classes without WD.
Figure 5: Results for mini-ImageNet-LT. (Left) The norm of the mean per-class training features produced by the models trained with each method. (Right) Heatmaps showing the average cosine similarity of the training features for each pair of classes.
Table 2: FDR for each dataset of models trained with each method. A higher FDR indicates that features are more easily linearly separable. For all datasets, methods with WD or CE produce a higher FDR, and using both results in the highest FDR.

Method       | CIFAR10-LT Train | CIFAR10-LT Test | CIFAR100-LT Train | CIFAR100-LT Test | mini-ImageNet-LT Train | mini-ImageNet-LT Test
Naive & CE   | 8.17×10^1 | 2.17×10^1 | 1.28×10^2 | 4.16×10^1 | 7.56×10^1 | 4.28×10^1
Naive & CB   | 4.43×10^1 | 1.50×10^1 | 8.17×10^1 | 2.42×10^1 | 4.68×10^1 | 2.93×10^1
WD & CE      | 2.60×10^3 | 3.89×10^1 | 2.87×10^4 | 1.07×10^2 | 6.58×10^2 | 1.01×10^2
WD & CB      | 3.39×10^2 | 3.04×10^1 | 2.12×10^4 | 6.74×10^1 | 4.84×10^2 | 6.84×10^1
WD w/o BN    | 2.19×10^2 | 3.42×10^1 | 5.77×10^2 | 7.95×10^1 | 1.61×10^2 | 6.73×10^1
WD fixed BN  | 1.63×10^3 | 4.16×10^1 | 2.04×10^4 | 1.05×10^2 | 3.94×10^2 | 1.04×10^2
4 Role of first training stage in WB

First, we analyze the effect of WD and CE in the first training stage. In Sec. 4.1, we present the datasets and models used in the analysis. Then, we examine the cosine similarity and FDR of the features from the models trained by each training method for the first stage of training in Sec. 4.2, 4.3, and 4.4. In Sec. 4.5 we identify the properties of training features trained with WD, which is the key to the success of WB.
Table 3: FDR of features from each model trained with each method, and of the same features passed through a randomly initialized linear layer and ReLU after the model. C10, C100, and mIm abbreviate CIFAR10, CIFAR100, and mini-ImageNet, respectively. The FDR of features obtained from models trained with WD improves when the features are passed through a randomly initialized layer and ReLU.

Model     | Dataset | Naive before | Naive after | WD w/o BN before | WD w/o BN after | WD before | WD after
ResNet34  | C100    | 7.51×10^1 | 7.11×10^1 | 2.29×10^2 | 2.34×10^2 | 3.98×10^2 | 4.10×10^2
ResNet34  | C100-LT | 4.16×10^1 | 4.02×10^1 | 7.95×10^1 | 7.89×10^1 | 1.07×10^2 | 1.13×10^2
ResNet34  | C10     | 4.77×10^1 | 5.56×10^1 | 7.96×10^1 | 1.02×10^2 | 1.81×10^2 | 2.35×10^2
ResNet34  | C10-LT  | 2.17×10^1 | 2.36×10^1 | 3.42×10^1 | 3.94×10^1 | 3.89×10^1 | 4.30×10^1
ResNet34  | mIm     | 7.34×10^1 | 6.66×10^1 | 1.81×10^2 | 1.86×10^2 | 3.63×10^2 | 3.93×10^2
ResNet34  | mIm-LT  | 4.28×10^1 | 3.97×10^1 | 6.73×10^1 | 6.30×10^1 | 1.01×10^2 | 1.08×10^2
MLP3      | MNIST   | 1.58×10^2 | 1.54×10^2 | 2.48×10^2 | 3.53×10^2 | 2.36×10^2 | 7.96×10^2
MLP4      | MNIST   | 1.98×10^2 | 1.96×10^2 | 3.60×10^2 | 4.97×10^2 | 4.12×10^2 | 1.43×10^3
MLP5      | MNIST   | 2.44×10^2 | 2.37×10^2 | 4.91×10^2 | 6.76×10^2 | 6.10×10^2 | 2.47×10^3
ResBlock1 | MNIST   | 1.39×10^2 | 1.36×10^2 | 2.20×10^2 | 2.69×10^2 | 2.44×10^2 | 5.81×10^2
ResBlock2 | MNIST   | 1.66×10^2 | 1.59×10^2 | 2.97×10^2 | 3.65×10^2 | 4.44×10^2 | 9.55×10^2
Table 4: FDR and accuracy of models trained with each method for CIFAR100-LT. In addition to WD, training with FR and an ETF classifier further improves the FDR. However, these alone do not greatly improve accuracy; LA increases the accuracy of the Medium and Few classes and thereby significantly enhances the average accuracy. In all cases, multiplicative LA improves the accuracy more than additive LA.

Method    | LA   | Train FDR           | Test FDR            | Many      | Medium    | Few       | Average
Naive     | N/A  | 1.28×10^2 ±0.9×10^1 | 4.16×10^1 ±1.0×10^0 | 64.6 ±0.9 | 36.9 ±0.8 | 11.9 ±0.7 | 38.5 ±0.6
CB        | N/A  | 8.17×10^1 ±8.0×10^0 | 2.42×10^1 ±0.6×10^0 | 47.6 ±1.0 | 23.2 ±0.6 | 5.61 ±0.4 | 26.1 ±0.4
WD        | N/A  | 2.87×10^4 ±6.0×10^3 | 1.07×10^2 ±0.2×10^1 | 75.9 ±0.5 | 45.3 ±0.6 | 13.9 ±0.7 | 46.0 ±0.4
WD        | Add  | 2.87×10^4 ±6.0×10^3 | 1.07×10^2 ±0.2×10^1 | 70.7 ±0.9 | 45.7 ±0.9 | 30.9 ±0.9 | 49.6 ±0.4
WD        | Mult | 2.87×10^4 ±6.0×10^3 | 1.07×10^2 ±0.2×10^1 | 72.6 ±0.7 | 48.5 ±0.8 | 29.5 ±0.9 | 50.8 ±0.4
WB        | N/A  | 2.94×10^4 ±6.0×10^3 | 1.07×10^2 ±0.2×10^1 | 73.8 ±0.7 | 50.2 ±0.6 | 25.6 ±1.0 | 50.6 ±0.2
WD&ETF    | N/A  | 3.33×10^4 ±2.0×10^3 | 1.13×10^2 ±0.1×10^1 | 76.3 ±0.3 | 46.0 ±0.4 | 15.5 ±0.6 | 46.9 ±0.2
WD&ETF    | Add  | 3.33×10^4 ±2.0×10^3 | 1.13×10^2 ±0.1×10^1 | 73.8 ±0.4 | 48.9 ±0.3 | 25.8 ±0.4 | 50.2 ±0.2
WD&ETF    | Mult | 3.33×10^4 ±2.0×10^3 | 1.13×10^2 ±0.1×10^1 | 70.4 ±0.7 | 51.4 ±0.3 | 31.7 ±0.6 | 51.7 ±0.3
WD&FR&ETF | N/A  | 8.81×10^4 ±2.0×10^3 | 1.22×10^2 ±0.1×10^1 | 77.9 ±0.3 | 46.8 ±1.0 | 15.3 ±0.3 | 47.6 ±0.5
WD&FR&ETF | Add  | 8.85×10^4 ±2.0×10^3 | 1.22×10^2 ±0.1×10^1 | 75.1 ±0.3 | 49.3 ±1.0 | 26.2 ±0.8 | 50.9 ±0.3
WD&FR&ETF | Mult | 8.85×10^4 ±2.0×10^3 | 1.22×10^2 ±0.1×10^1 | 74.2 ±0.3 | 52.9 ±0.9 | 29.9 ±0.8 | 53.0 ±0.3
Table 5: Table of notations.

Variable | Definition
C        | number of classes
d        | number of dimensions for features
N / N_k  | number of samples / of class k
N̄        | harmonic mean of the number of samples per class
λ, ζ     | hyperparameter of weight decay / feature regularization
ρ        | imbalance ratio: max_k N_k / min_k N_k
Table 6: FDR and accuracy of models trained with an ETF classifier for CIFAR100-LT.

Method    | Train FDR           | Test FDR            | Many      | Medium    | Few       | Average
WD&ETF    | 3.33×10^4 ±2.3×10^3 | 1.13×10^2 ±0.1×10^1 | 76.3 ±0.3 | 46.0 ±0.4 | 15.5 ±0.6 | 46.9 ±0.2
WD&ETF&DR | 8.25×10^5 ±7.0×10^4 | 9.46×10^1 ±0.8×10^0 | 72.9 ±0.2 | 43.0 ±0.6 | 20.0 ±0.9 | 46.0 ±0.4
Table 7: FDR and accuracy of models trained with restricted WD and an ETF classifier for CIFAR100-LT.

Method            | LA   | Train FDR           | Test FDR            | Many      | Medium    | Few       | Average
WD w/o BN & ETF   | N/A  | 1.94×10^2 ±0.8×10^1 | 5.89×10^1 ±1.0×10^0 | 74.6 ±0.5 | 44.4 ±0.7 | 13.4 ±0.6 | 45.1 ±0.5
WD w/o BN & ETF   | Add  | 1.94×10^2 ±0.8×10^1 | 5.89×10^1 ±1.0×10^0 | 69.3 ±0.8 | 49.4 ±0.5 | 30.4 ±0.9 | 50.3 ±0.6
WD w/o BN & ETF   | Mult | 1.94×10^2 ±0.8×10^1 | 5.89×10^1 ±1.0×10^0 | 67.6 ±0.9 | 51.0 ±0.5 | 31.5 ±1.3 | 50.6 ±0.7
WD fixed BN & ETF | N/A  | 5.47×10^4 ±8.0×10^3 | 1.05×10^2 ±0.2×10^1 | 75.2 ±0.6 | 45.4 ±0.9 | 16.2 ±0.4 | 46.5 ±0.1
WD fixed BN & ETF | Add  | 5.42×10^4 ±8.0×10^3 | 1.05×10^2 ±0.2×10^1 | 71.9 ±0.6 | 44.8 ±0.6 | 26.5 ±0.5 | 48.4 ±0.3
WD fixed BN & ETF | Mult | 5.42×10^4 ±8.0×10^3 | 1.05×10^2 ±0.2×10^1 | 72.2 ±0.8 | 45.7 ±0.5 | 26.9 ±0.4 | 48.9 ±0.3
Table 8: FDR and accuracy of models trained with each method for CIFAR10-LT.

Method            | LA   | Train FDR            | Test FDR            | Many      | Medium    | Few       | Average
Naive             | N/A  | 8.17×10^1 ±1.17×10^1 | 2.17×10^1 ±0.6×10^0 | 87.7 ±0.3 | 67.2 ±3.1 | 47.4 ±2.2 | 69.4 ±1.0
CB                | N/A  | 4.43×10^1 ±1.21×10^1 | 1.50×10^1 ±0.9×10^0 | 81.7 ±1.3 | 60.6 ±2.0 | 42.0 ±4.7 | 63.5 ±1.3
WD                | N/A  | 2.97×10^3 ±3.38×10^3 | 4.21×10^1 ±1.7×10^0 | 89.1 ±1.8 | 76.6 ±1.7 | 61.5 ±5.0 | 77.1 ±1.0
WD                | Add  | 3.40×10^3 ±4.32×10^3 | 4.21×10^1 ±1.7×10^0 | 81.6 ±3.8 | 79.5 ±1.2 | 79.7 ±3.1 | 80.4 ±0.7
WD                | Mult | 3.40×10^3 ±4.32×10^3 | 4.21×10^1 ±1.7×10^0 | 86.2 ±2.9 | 80.3 ±1.0 | 74.0 ±3.9 | 80.8 ±0.5
WB                | N/A  | 1.82×10^3 ±2.58×10^3 | 3.86×10^1 ±4.5×10^0 | 87.9 ±2.4 | 77.6 ±1.5 | 67.9 ±3.2 | 78.8 ±0.6
WD&ETF            | N/A  | 3.90×10^4 ±4.57×10^4 | 4.38×10^1 ±3.2×10^0 | 89.7 ±1.9 | 73.9 ±5.6 | 59.1 ±1.8 | 75.8 ±1.5
WD&ETF            | Add  | 3.59×10^4 ±4.18×10^4 | 4.38×10^1 ±3.2×10^0 | 87.2 ±3.4 | 77.1 ±5.1 | 72.5 ±2.6 | 79.8 ±1.1
WD&ETF            | Mult | 3.59×10^4 ±4.18×10^4 | 4.38×10^1 ±3.2×10^0 | 86.9 ±4.2 | 79.0 ±4.4 | 73.2 ±2.9 | 80.4 ±0.8
WD&FR&ETF         | N/A  | 5.56×10^5 ±4.29×10^5 | 4.78×10^1 ±2.0×10^0 | 90.9 ±0.6 | 76.7 ±1.4 | 56.2 ±2.0 | 76.2 ±0.4
WD&FR&ETF         | Add  | 7.58×10^5 ±6.06×10^5 | 4.78×10^1 ±2.0×10^0 | 88.8 ±1.1 | 78.7 ±1.2 | 70.3 ±2.1 | 80.2 ±0.5
WD&FR&ETF         | Mult | 7.58×10^5 ±6.06×10^5 | 4.78×10^1 ±2.0×10^0 | 89.9 ±0.9 | 80.8 ±1.3 | 66.3 ±2.1 | 80.1 ±0.4
WD w/o BN & ETF   | N/A  | 9.17×10^1 ±3.60×10^1 | 2.95×10^1 ±2.8×10^0 | 88.3 ±1.4 | 75.6 ±3.6 | 59.1 ±1.8 | 75.7 ±1.6
WD w/o BN & ETF   | Add  | 9.26×10^1 ±3.66×10^1 | 2.95×10^1 ±2.8×10^0 | 85.9 ±2.5 | 79.0 ±3.2 | 74.3 ±1.5 | 80.3 ±1.5
WD w/o BN & ETF   | Mult | 9.26×10^1 ±3.66×10^1 | 2.95×10^1 ±2.8×10^0 | 85.5 ±2.5 | 79.0 ±3.2 | 75.8 ±1.0 | 80.7 ±1.8
WD fixed BN & ETF | N/A  | 7.02×10^3 ±4.59×10^3 | 4.29×10^1 ±2.8×10^0 | 89.5 ±1.9 | 74.3 ±3.2 | 58.9 ±5.8 | 75.7 ±1.3
WD fixed BN & ETF | Add  | 6.60×10^3 ±4.28×10^3 | 4.29×10^1 ±2.8×10^0 | 79.3 ±9.2 | 76.9 ±3.6 | 78.7 ±5.4 | 78.4 ±3.2
WD fixed BN & ETF | Mult | 6.60×10^3 ±4.28×10^3 | 4.29×10^1 ±2.8×10^0 | 84.5 ±6.9 | 78.5 ±4.1 | 77.1 ±5.6 | 80.5 ±2.6
9FDR and accuracy of models trained with each method for mini-ImageNet-LT. ±0.2×10 0 59.4 ±0.5 25.0 ±0.4 15.7 ±0.4 34.3 ±0.2 N/A 6.58×10 2 ±2.8×10 1 1.01×10 2 ±0.1×10 1 81.7 ±0.4 43.3 ±0.5 20.3 ±0.7 49.8 ±0.3 Add 6.51×10 2 ±3.3×10 1 ±0.1×10 1 80.1 ±0.2 43.9 ±0.7 38.1 ±0.5 54.8 ±0.3 N/A 8.20×10 2 ±6.5×10 1 1.09×10 2 ±0.2×10 1 81.7 ±0.7 42.9 ±0.4 21.6 ±1.2 50.1 ±0.2 Add 8.27×10 2 ±8.5×10 1 ±0.2×10 1 78.8 ±1.1 42.7 ±1.0 39.4 ±1.0 54.3 ±0.3 N/A 1.32×10 3 ±1.2×10 2 1.20×10 2 ±0.2×10 1 81.7 ±0.6 43.1 ±0.7 20.8 ±0.7 49.9 ±0.5 Add 1.31×10 3 ±1.1×10 2 1.20×10 2 ±0.2×10 1 79.6 ±0.5 43.5 ±0.6 34.9 ±0.9 53.6 ±0.6 ±0.2×10 1 78.8 ±0.3 43.5 ±0.8 37.9 ±0.8 54.2 ±0.5 N/A 1.21×10 2 ±0.1×10 1 5.85×10 1 ±0.3×10 0 78.6 ±0.4 41.7 ±0.6 21.7 ±0.7 48.6 ±0.3 Add 1.21×10 2 ±0.1×10 1 5.85×10 1 ±0.3×10 0 75.1 ±0.2 40.3 ±0.7 41.7 ±0.6 52.9 ±0.3 ±0.1×10 1 5.85×10 1 ±0.3×10 0 76.2 ±0.3 41.0 ±0.5 39.6 ±0.6 52.9 ±0.3 N/A 8.55×10 2 ±2.0×10 1 1.25×10 2 ±0.1×10 1 80.6 ±0.2 42.8 ±0.5 20.3 ±0.5 49.3 ±0.1 Add 8.56×10 2 ±2.1×10 1 1.25×10 2 ±0.1×10 1 78.7 ±0.3 41.6 ±0.4 36.0 ±0.7 52.9 ±0.3 WD fixed BN &ETF Mult 8.56×10 2 ±2.1×10 1 1.25×10 2 ±0.1×10 1 80.5 ±0.2 43.6 ±0.3 25.6 ±0.5 51.1 ±0.1FDR
Accuracy (%)
Method
LA
Train
Test
Many Medium
Few
Average
Naive
N/A 7.56×10 1
±1.6×10 0
4.28×10 1
±0.3×10 0 72.1 ±0.5 34.1 ±0.6 18.3 ±0.5 42.7 ±0.2
CB
N/A 4.68×10 1
±0.7×10 0
2.93×10 1
1.01×10 2
±0.1×10 1 80.5 ±0.3 42.1 ±0.6 39.3 ±0.2 54.7 ±0.3
WD
Mult 6.51×10 2
±3.3×10 1
1.01×10 2
±0.1×10 1 79.9 ±0.2 41.6 ±0.8 40.3 ±0.3 54.6 ±0.4
WB
N/A 6.52×10 2
±3.0×10 1
1.00×10 2
1.09×10 2
±0.2×10 1 79.6 ±0.8 43.0 ±0.6 37.5 ±0.7 54.1 ±0.2
WD&ETF
Mult 8.27×10 2
±8.5×10 1
1.09×10 2
WD&FR
&ETF
Mult 1.31×10 3
±1.1×10 2 1.20×10 2
WD w/o BN
& ETF
Mult 1.21×10 2
Table 10: FDR and accuracy of ResNeXt50 trained with each method for CIFAR100-LT.

Method    | LA   | Train FDR           | Test FDR            | Many      | Medium    | Few       | Average
Naive     | N/A  | 2.04×10^2 ±0.6×10^1 | 7.32×10^1 ±1.1×10^0 | 59.9 ±0.4 | 32.8 ±0.7 | 9.51 ±0.3 | 34.8 ±0.4
CB        | N/A  | 1.37×10^2 ±1.6×10^1 | 5.19×10^1 ±1.1×10^0 | 44.8 ±1.4 | 20.7 ±0.9 | 4.62 ±0.4 | 23.9 ±0.9
WD        | N/A  | 8.36×10^4 ±7.4×10^3 | 2.06×10^2 ±0.3×10^1 | 77.9 ±0.2 | 48.3 ±0.5 | 14.9 ±0.8 | 48.0 ±0.3
WD        | Add  | 8.41×10^4 ±7.9×10^3 | 2.06×10^2 ±0.3×10^1 | 73.4 ±0.5 | 48.6 ±0.8 | 32.9 ±0.8 | 52.2 ±0.2
WD        | Mult | 8.41×10^4 ±7.9×10^3 | 2.06×10^2 ±0.3×10^1 | 73.7 ±0.6 | 49.2 ±1.1 | 33.8 ±1.5 | 52.8 ±0.2
WB        | N/A  | 3.19×10^5 ±2.0×10^4 | 2.06×10^2 ±0.2×10^1 | 77.5 ±0.2 | 51.2 ±0.9 | 21.1 ±0.6 | 50.8 ±0.3
WD&ETF    | N/A  | 1.67×10^5 ±2.4×10^4 | 2.02×10^2 ±0.3×10^1 | 77.7 ±0.6 | 48.8 ±0.9 | 17.4 ±0.5 | 48.9 ±0.4
WD&ETF    | Add  | 1.67×10^5 ±2.4×10^4 | 2.02×10^2 ±0.3×10^1 | 72.7 ±0.9 | 50.3 ±1.0 | 31.1 ±0.5 | 52.0 ±0.6
WD&ETF    | Mult | 1.67×10^5 ±2.4×10^4 | 2.02×10^2 ±0.3×10^1 | 74.6 ±0.7 | 54.0 ±1.0 | 30.9 ±0.9 | 53.8 ±0.4
WD&FR&ETF | N/A  | 3.19×10^5 ±1.8×10^4 | 2.06×10^2 ±0.2×10^1 | 77.4 ±0.3 | 49.8 ±0.5 | 18.2 ±0.4 | 49.4 ±0.3
WD&FR&ETF | Add  | 3.17×10^5 ±1.8×10^4 | 2.06×10^2 ±0.2×10^1 | 76.3 ±0.3 | 51.9 ±0.4 | 26.2 ±0.4 | 52.2 ±0.2
WD&FR&ETF | Mult | 3.17×10^5 ±1.8×10^4 | 2.06×10^2 ±0.2×10^1 | 72.9 ±0.5 | 53.8 ±1.1 | 33.4 ±0.6 | 54.0 ±0.3
¹ This method does not require linear layers. ² This method requires the LGR head to be trained [48].
³ In the paper of Cui et al. [4], N̄ is set to 1, but in their implementation N̄ equals the harmonic mean; we adopt the latter since our experiments are based on their implementation.
Shaden Alshammari, Yu-Xiong Wang, Deva Ramanan, and Shu Kong. Long-Tailed Recognition via Weight Balancing. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6887-6897, 2022. doi: 10.1109/cvpr52688.2022.00677. URL https://arxiv.org/abs/2203.14197.
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, Tengyu Ma, Advances in Neural Information Processing Systems. Curran Associates, Inc32Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/ 2019/hash/621461af90cadfdaf0e8d4cc25129f91-Abstract.html.
Feature Space Augmentation for Long-Tailed Data. Peng Chu, Xiao Bian, Shaopeng Liu, Haibin Ling, 10.1007/978-3-030-58526-6_41Computer Vision -ECCV 2020. Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael FrahmChamSpringer International PublishingPeng Chu, Xiao Bian, Shaopeng Liu, and Haibin Ling. Feature Space Augmentation for Long-Tailed Data. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision -ECCV 2020, Lecture Notes in Computer Science, pages 694-710, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58526-6. doi: 10.1007/978-3-030-58526-6_41.
Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-Balanced Loss Based on Effective Number of Samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9260-9269, 2019. doi: 10.1109/CVPR.2019.00949. URL http://arxiv.org/abs/1901.05555.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby, International Conference on Learning Representations. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, January 2021. URL https: //openreview.net/forum?id=YicbFdNTTy.
Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Cong Fang, Hangfeng He, Qi Long, Weijie J Su, 10.1073/pnas.2103091118arXiv:2101.12699Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America118Cong Fang, Hangfeng He, Qi Long, and Weijie J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences of the United States of America, 118(43), January 2021. ISSN 10916490. doi: 10.1073/pnas.2103091118. URL https://arxiv.org/abs/2101.12699. arXiv: 2101.12699.
The Use of Multiple Measurements in Taxonomic Problems. R A Fisher, 10.1111/j.1469-1809.1936.tb02137.xAnnals of Eugenics. 72R. A. Fisher. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7 (2):179-188, 1936. doi: 10.1111/j.1469-1809.1936.tb02137.x.
SGD and Weight Decay Provably Induce a Low-Rank Bias in Neural Networks. Tomer Galanti, Zachary S Siegel, Aparna Gupte, Tomaso Poggio, arXiv:2206.05794cs, statTomer Galanti, Zachary S. Siegel, Aparna Gupte, and Tomaso Poggio. SGD and Weight Decay Provably Induce a Low-Rank Bias in Neural Networks, January 2023. URL http://arxiv.org/ abs/2206.05794. arXiv:2206.05794 [cs, stat].
Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence. Aditya Sharad Golatkar, Alessandro Achille, Stefano Soatto, Advances in Neural Information Processing Systems. Curran Associates, Inc32Aditya Sharad Golatkar, Alessandro Achille, and Stefano Soatto. Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/hash/ 87784eca6b0dea1dff92478fb786b401-Abstract.html.
Dimensionality Reduction by Learning an Invariant Mapping. R Hadsell, S Chopra, Y Lecun, 10.1109/CVPR.2006.100IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). 2R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality Reduction by Learning an Invariant Map- ping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742, June 2006. doi: 10.1109/CVPR.2006.100. ISSN: 1063-6919.
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path. X Y Han, Vardan Papyan, David L Donoho, International Conference on Learning Representations. X. Y. Han, Vardan Papyan, and David L. Donoho. Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path. In International Conference on Learning Representations, January 2022. URL https://openreview.net/forum?id=w1UbdvWH_R3.
Comparing Biases for Minimal Network Construction with Back-Propagation. Stephen José , Hanson , Lorien Pratt, Advances in neural information processing systems. 1Morgan-KaufmannStephen José Hanson and Lorien Pratt. Comparing Biases for Minimal Network Construction with Back-Propagation. Advances in neural information processing systems 1, 1:177-185, 1989. URL http://portal.acm.org/citation.cfm?id=89851.89872. Publisher: Morgan-Kaufmann ISBN: 1-558-60015-9.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016. doi: 10.1109/CVPR.2016.90. URL http://arxiv.org/abs/1512.03385.
Connectionist learning procedures. Geoffrey E Hinton, 10.1016/0004-3702(89)90049-0September 1989. 40Geoffrey E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40(1):185-234, Septem- ber 1989. ISSN 0004-3702. doi: 10.1016/0004-3702(89)90049-0. URL https://www.sciencedirect. com/science/article/pii/0004370289900490.
Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R Salakhutdinov, 10.48550/arxiv.1207.0580arXiv:1207.0580Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. July 2012. doi: 10.48550/arxiv.1207.0580. URL http://arxiv.org/abs/1207.0580. arXiv: 1207.0580.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. Sergey Ioffe, Christian Szegedy, International conference on machine learning. pmlrSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448-456. pmlr, 2015.
An Unconstrained Layer-Peeled Perspective on Neural Collapse. Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J Su, International Conference on Learning Representations. Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, and Weijie J. Su. An Unconstrained Layer-Peeled Perspective on Neural Collapse. In International Conference on Learning Representations, January 2022. URL https://openreview.net/forum?id=WZ3yjh8coDg.
Decoupling Representation and Classifier for Long-Tailed Recognition. Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis, International Conference on Learning Representations. Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling Representation and Classifier for Long-Tailed Recognition. In International Conference on Learning Representations, 2020. URL https://openreview.net/ forum?id=r1gRTCVFvB.
Exploring Balanced Feature Spaces for Representation Learning. Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, Jiashi Feng, International Conference on Learning Representations. Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, and Jiashi Feng. Exploring Balanced Feature Spaces for Representation Learning. In International Conference on Learning Representations, March 2023. URL https://openreview.net/forum?id=OqtLIabPTit.
Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks. Hyeyeon Bum Jun Kim, Hyeonah Choi, Dong Jang, Wonseok Gu Lee, Sang Woo Jeong, Kim, arXiv:2205.07260csBum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Dong Gu Lee, Wonseok Jeong, and Sang Woo Kim. Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks, May 2022. URL http://arxiv.org/abs/2205.07260. arXiv:2205.07260 [cs].
Byungju Kim and Junmo Kim. Adjusting Decision Boundary for Class Imbalanced Learning. IEEE Access, 8:81674-81685, 2020. doi: 10.1109/ACCESS.2020.2991231. URL https://arxiv.org/abs/1912.01857.
Learning Multiple Layers of Features from Tiny Images. Alex Krizhevsky, arXiv:1011.1669v3ISBN:9788578110796Tech. Science Department, University of TorontoAlex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Science Department, University of Toronto, Tech., pages 1-60, 2009. ISSN 1098-6596. doi: 10.1.1.222.9220. arXiv: 1011.1669v3 ISBN: 9788578110796.
Gradient-based learning applied to document recognition. Y Lecun, L Bottou, Y Bengio, P Haffner, 10.1109/5.726791Conference Name: Proceedings of the IEEE. 86Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, January 1998. ISSN 1558-2256. doi: 10.1109/5.726791. Conference Name: Proceedings of the IEEE.
Visualizing the Loss Landscape of Neural Nets. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein, Advances in Neural Information Processing Systems. Curran Associates, Inc31Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the Loss Landscape of Neural Nets. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/ hash/a41b3bb3e6b050b6c9067c67f663b915-Abstract.html.
Targeted Supervised Contrastive Learning for Long-Tailed Recognition. Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio S Feris, Piotr Indyk, Dina Katabi, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio S. Feris, Piotr Indyk, and Dina Katabi. Targeted Supervised Contrastive Learning for Long-Tailed Recognition. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6918-6928, 2022. URL https://openaccess.thecvf.com/content/CVPR2022/html/Li_Targeted_ Supervised_Contrastive_Learning_for_Long-Tailed_Recognition_CVPR_2022_paper.html.
WebVision Database: Visual Learning and Understanding from Web Data. Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, Luc Van Gool, arXiv:1708.02862csWen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. WebVision Database: Visual Learning and Understanding from Web Data, August 2017. URL http://arxiv.org/abs/1708. 02862. arXiv:1708.02862 [cs].
An Exponential Learning Rate Schedule for Deep Learning. Zhiyuan Li, Sanjeev Arora, International Conference on Learning Representations. Zhiyuan Li and Sanjeev Arora. An Exponential Learning Rate Schedule for Deep Learning. In International Conference on Learning Representations, March 2020. URL https://openreview. net/forum?id=rJg8TeSFDH.
Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning. Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou, arXiv:2203.02053Advances in Neural Information Processing Systems. Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou. Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning. In Advances in Neural Information Processing Systems, 2022. URL http://arxiv.org/abs/2203.02053. arXiv: 2203.02053.
Jialun Liu, Yifan Sun, Chuchu Han, Zhaopeng Dou, and Wenhui Li. Deep Representation Learning on Long-Tailed Data: A Learnable Embedding Augmentation Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2967-2976, 2020. doi: 10.1109/CVPR42600.2020.00304. URL https://arxiv.org/abs/2002.10826.
Inducing Neural Collapse in Deep Long-tailed Learning. Xuantong Liu, Jianfeng Zhang, Tianyang Hu, He Cao, Yuan Yao, Lujia Pan, International Conference on Artificial Intelligence and Statistics. PMLRXuantong Liu, Jianfeng Zhang, Tianyang Hu, He Cao, Yuan Yao, and Lujia Pan. Inducing Neural Collapse in Deep Long-tailed Learning. In International Conference on Artificial Intelligence and Statistics, pages 11534-11544. PMLR, 2023.
Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X. Yu. Large-Scale Long-Tailed Recognition in an Open World. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. doi: 10.48550/arxiv.1904.05160. URL https://arxiv.org/abs/1904.05160.
On the periodic behavior of neural network training with batch normalization and weight decay. Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry P Vetrov, Advances in Neural Information Processing Systems. 34Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, and Dmitry P. Vetrov. On the periodic behavior of neural network training with batch normalization and weight decay. Advances in Neural Information Processing Systems, 34:21545-21556, 2021.
Chunhua Shen, and Anton van den Hengel. Retrieval Augmented Classification for Long-Tail Visual Recognition. Alexander Long, Wei Yin, Thalaiyasingam Ajanthan, Pulak Vu Nguyen, Ravi Purkait, Alan Garg, Blair, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Alexander Long, Wei Yin, Thalaiyasingam Ajanthan, Vu Nguyen, Pulak Purkait, Ravi Garg, Alan Blair, Chunhua Shen, and Anton van den Hengel. Retrieval Augmented Classification for Long-Tail Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6959- 6969, 2022. URL https://openaccess.thecvf.com/content/CVPR2022/html/Long_Retrieval_ Augmented_Classification_for_Long-Tail_Visual_Recognition_CVPR_2022_paper.html.
SGDR: Stochastic gradient descent with warm restarts. Ilya Loshchilov, Frank Hutter, 10.48550/arxiv.1608.03983arXiv:1608.039835th International Conference on Learning Representations, ICLR 2017 -Conference Track Proceedings. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017 -Conference Track Proceedings, August 2017. doi: 10.48550/arxiv.1608.03983. URL https://arxiv.org/abs/1608.03983. arXiv: 1608.03983.
Neural Collapse with Cross-Entropy Loss. Jianfeng Lu, Stefan Steinerberger, arXiv:2012.08465cs, mathJianfeng Lu and Stefan Steinerberger. Neural Collapse with Cross-Entropy Loss, January 2021. URL http://arxiv.org/abs/2012.08465. arXiv:2012.08465 [cs, math].
Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora, Advances in Neural Information Processing Systems. Kaifeng Lyu, Zhiyuan Li, and Sanjeev Arora. Understanding the Generalization Benefit of Nor- malization Layers: Sharpness Reduction. In Advances in Neural Information Processing Systems, October 2022. URL https://openreview.net/forum?id=xp5VOBxTxZ.
A Simple Long-Tailed Recognition Baseline via Vision-Language Model. Teli Ma, Shijie Geng, Mengmeng Wang, Jing Shao, Jiasen Lu, Hongsheng Li, Peng Gao, Yu Qiao, Teli Ma, Shijie Geng, Mengmeng Wang, Jing Shao, Jiasen Lu, Hongsheng Li, Peng Gao, and Yu Qiao. A Simple Long-Tailed Recognition Baseline via Vision-Language Model, November 2021. URL https://arxiv.org/abs/2111.14745v1.
On implicit filter level sparsity in convolutional neural networks. Dushyant Mehta, Kwang In Kim, Christian Theobalt, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionDushyant Mehta, Kwang In Kim, and Christian Theobalt. On implicit filter level sparsity in convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 520-528, 2019.
Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-Tail Learning via Logit Adjustment. In International Conference on Learning Representations, 2020. doi: 10.48550/arxiv.2007.07314. URL http://arxiv.org/abs/2007.07314.
Vardan Papyan, X. Y. Han, and David L. Donoho. Prevalence of Neural Collapse During the Terminal Phase of Deep Learning Training. Proceedings of the National Academy of Sciences, 117(40):24652-24663, 2020. doi: 10.1073/pnas.2015509117. URL http://arxiv.org/abs/2008.08186.
Learning Transferable Visual Models From Natural Language Supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever, PMLRProceedings of the 38th International Conference on Machine Learning. the 38th International Conference on Machine LearningAlec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning, pages 8748-8763. PMLR, July 2021. URL https://proceedings.mlr.press/v139/radford21a.html. ISSN: 2640-3498.
Neural Collapse in Deep Homogeneous Classifiers and The Role of Weight Decay. Akshay Rangamani, Andrzej Banburski-Fahey, 10.1109/ICASSP43922.2022.9746778ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing -Proceedings, volume 2022-May. 15206149Akshay Rangamani and Andrzej Banburski-Fahey. Neural Collapse in Deep Homogeneous Classifiers and The Role of Weight Decay. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing -Proceedings, volume 2022-May, pages 4243-4247, 2022. ISBN 978-1-66540-540-9. doi: 10.1109/ICASSP43922.2022.9746778. ISSN: 15206149.
The Pareto, Zipf and other power laws. William J Reed, 10.1016/S0165-1765(01)00524-9Economics Letters. 741North-HollandWilliam J. Reed. The Pareto, Zipf and other power laws. Economics Letters, 74(1):15-19, December 2001. ISSN 01651765. doi: 10.1016/S0165-1765(01)00524-9. Publisher: North-Holland.
Measuring and Predicting Importance of Objects in Our Visual World. Merrielle Spain, Pietro Perona, Num Pages: 8 Place. Pasadena, CACalifornia Institute of TechnologyMerrielle Spain and Pietro Perona. Measuring and Predicting Importance of Objects in Our Visual World, November 2007. URL https://resolver.caltech.edu/CaltechAUTHORS:CNS-TR-2007-002. Num Pages: 8 Place: Pasadena, CA Publisher: California Institute of Technology.
Grassmannian frames with applications to coding and communication. Thomas Strohmer, W Robert, Heath, 10.1016/S1063-5203(03)00023-XarXiv:math/0301135Applied and Computational Harmonic Analysis. 143Thomas Strohmer and Robert W Heath. Grassmannian frames with applications to coding and communication. Applied and Computational Harmonic Analysis, 14(3):257-275, 2003. ISSN 10635203. doi: 10.1016/S1063-5203(03)00023-X. URL https://www.sciencedirect.com/science/ article/pii/S106352030300023X. arXiv: math/0301135.
Four Things Everyone Should Know to Improve Batch Normalization. Cecilia Summers, Michael J Dinneen, International Conference on Learning Representations. Cecilia Summers and Michael J. Dinneen. Four Things Everyone Should Know to Improve Batch Normalization. In International Conference on Learning Representations, March 2020. URL https://openreview.net/forum?id=HJx8HANFDH.
Imbalance Trouble: Revisiting Neural-Collapse Geometry. Christos Thrampoulidis, R Ganesh, Vala Kini, Tina Vakilian, Behnia, 10.48550/arxiv.2208.05512arXiv:2208.05512Advances in Neural Information Processing Systems. Christos Thrampoulidis, Ganesh R. Kini, Vala Vakilian, and Tina Behnia. Imbalance Trouble: Revis- iting Neural-Collapse Geometry. Advances in Neural Information Processing Systems, August 2022. doi: 10.48550/arxiv.2208.05512. URL http://arxiv.org/abs/2208.05512. arXiv: 2208.05512.
VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition. Changyao Tian, Wenhai Wang, Xizhou Zhu, Jifeng Dai, Yu Qiao, 10.1007/978-3-031-19806-9_5Computer Vision -ECCV 2022. Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal HassnerChamSpringer Nature SwitzerlandChangyao Tian, Wenhai Wang, Xizhou Zhu, Jifeng Dai, and Yu Qiao. VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition. In Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, editors, Computer Vision -ECCV 2022, Lecture Notes in Computer Science, pages 73-91, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-19806-9. doi: 10.1007/978-3-031-19806-9_5.
An empirical study of example forgetting during deep neural network learning. Mariya Toneva, Adam Trischler, Alessandro Sordoni, Yoshua Bengio, Remi Tachet Des, Geoffrey J Combes, Gordon, 10.48550/arxiv.1812.05159arXiv:1812.051597th International Conference on Learning Representations. Mariya Toneva, Adam Trischler, Alessandro Sordoni, Yoshua Bengio, Remi Tachet Des Combes, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning. 7th International Conference on Learning Representations, ICLR 2019, December 2019. doi: 10.48550/arxiv.1812.05159. URL http://arxiv.org/abs/1812.05159. arXiv: 1812.05159.
Rahul Vigneswaran, Marc T. Law, Vineeth N. Balasubramanian, and Makarand Tapaswi. Feature Generation for Long-Tail Classification. In Proceedings of the Twelfth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP '21), pages 1-9, 2021. doi: 10.1145/3490035.3490300. URL https://dl.acm.org/doi/10.1145/3490035.3490300.
Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Kavukcuoglu Koray, Daan Wierstra, Advances in Neural Information Processing Systems. Curran Associates, Inc29Oriol Vinyals, Charles Blundell, Timothy Lillicrap, koray kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://papers.nips.cc/paper_files/paper/2016/hash/ 90e1357833654983612fb05e3ec9148c-Abstract.html.
Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay. Ruosi Wan, Zhanxing Zhu, Xiangyu Zhang, Jian Sun, Advances in Neural Information Processing Systems. Curran Associates, Inc34Ruosi Wan, Zhanxing Zhu, Xiangyu Zhang, and Jian Sun. Spherical Motion Dy- namics: Learning Dynamics of Normalized Neural Network using SGD and Weight De- cay. In Advances in Neural Information Processing Systems, volume 34, pages 6380-6391. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ 326a8c055c0d04f5b06544665d8bb3ea-Abstract.html.
RSG: A Simple but Effective Module for Learning Imbalanced Datasets. Jianfeng Wang, Thomas Lukasiewicz, Xiaolin Hu, Jianfei Cai, Zhenghua Xu, 10.1109/CVPR46437.2021.003782021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Jianfeng Wang, Thomas Lukasiewicz, Xiaolin Hu, Jianfei Cai, and Zhenghua Xu. RSG: A Simple but Effective Module for Learning Imbalanced Datasets. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3783-3792, June 2021. doi: 10.1109/CVPR46437. 2021.00378. ISSN: 2575-7075.
Aggregated Residual Transformations for Deep Neural Networks. Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, Kaiming He, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionSaining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017. URL https://openaccess.thecvf.com/ content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html.
Yibo Yang, Shixiang Chen, Xiangtai Li, Liang Xie, Zhouchen Lin, and Dacheng Tao. Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network? In Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=A6EmxI3_Xc.
Three mechanisms of weight decay regularization. Guodong Zhang, Chaoqi Wang, Bowen Xu, Roger Grosse, 10.48550/arxiv.1810.12281arXiv:1810.122817th International Conference on Learning Representations, ICLR 2019. Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. 7th International Conference on Learning Representations, ICLR 2019, October 2019. doi: 10.48550/arxiv.1810.12281. URL http://arxiv.org/abs/1810.12281. arXiv: 1810.12281.
Deep Long-Tailed Learning: A Survey. Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, Jiashi Feng, 10.48550/arxiv.2110.04596arXiv:2110.04596Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep Long-Tailed Learning: A Survey. October 2021. doi: 10.48550/arxiv.2110.04596. URL http://arxiv.org/abs/2110.04596. arXiv: 2110.04596.
| [] |
[
"Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues",
"Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues"
] | [
"Yue Feng \nUniversity College London\nLondonUK\n",
"Yunlong Jiao \nAmazon\nLondonUnited Kingdom\n",
"Animesh Prasad \nAmazon\nLondonUnited Kingdom\n",
"Nikolaos Aletras \nAmazon\nLondonUnited Kingdom\n\nUniversity of Sheffield\nSheffieldUK\n",
"◇ ",
"Emine Yilmaz [email protected]‡jyunlong \nUniversity College London\nLondonUK\n\nAmazon\nLondonUnited Kingdom\n",
"Gabriella Kazai [email protected]◇[email protected] \nAmazon\nLondonUnited Kingdom\n"
] | [
"University College London\nLondonUK",
"Amazon\nLondonUnited Kingdom",
"Amazon\nLondonUnited Kingdom",
"Amazon\nLondonUnited Kingdom",
"University of Sheffield\nSheffieldUK",
"University College London\nLondonUK",
"Amazon\nLondonUnited Kingdom",
"Amazon\nLondonUnited Kingdom"
] | [] | User Satisfaction Modeling (USM) is one of the popular choices for task-oriented dialogue systems evaluation, where user satisfaction typically depends on whether the user's task goals were fulfilled by the system. Task-oriented dialogue systems use task schema, which is a set of task attributes, to encode the user's task goals. Existing studies on USM neglect explicitly modeling the user's task goals fulfillment using the task schema. In this paper, we propose SG-USM, a novel schema-guided user satisfaction modeling framework. It explicitly models the degree to which the user's preferences regarding the task attributes are fulfilled by the system for predicting the user's satisfaction level. SG-USM employs a pre-trained language model for encoding dialogue context and task attributes. Further, it employs a fulfillment representation layer for learning how many task attributes have been fulfilled in the dialogue, an importance predictor component for calculating the importance of task attributes. Finally, it predicts the user satisfaction based on task attribute fulfillment and task attribute importance. Experimental results on benchmark datasets (i.e. MWOZ, SGD, ReDial, and JDDC) show that SG-USM consistently outperforms competitive existing methods. Our extensive analysis demonstrates that SG-USM can improve the interpretability of user satisfaction modeling, has good scalability as it can effectively deal with unseen tasks and can also effectively work in low-resource settings by leveraging unlabeled data. 1 | 10.48550/arxiv.2305.16798 | [
"https://export.arxiv.org/pdf/2305.16798v1.pdf"
] | 258,947,293 | 2305.16798 | 716864897ed59877d59be830fea092c4e1bcfc8b |
Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues
Yue Feng
University College London
LondonUK
Yunlong Jiao
Amazon
LondonUnited Kingdom
Animesh Prasad
Amazon
LondonUnited Kingdom
Nikolaos Aletras
Amazon
LondonUnited Kingdom
University of Sheffield
SheffieldUK
◇
Emine Yilmaz [email protected]‡jyunlong
University College London
LondonUK
Amazon
LondonUnited Kingdom
Gabriella Kazai [email protected]◇[email protected]
Amazon
LondonUnited Kingdom
Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues
User Satisfaction Modeling (USM) is one of the popular choices for task-oriented dialogue systems evaluation, where user satisfaction typically depends on whether the user's task goals were fulfilled by the system. Task-oriented dialogue systems use task schema, which is a set of task attributes, to encode the user's task goals. Existing studies on USM neglect explicitly modeling the user's task goals fulfillment using the task schema. In this paper, we propose SG-USM, a novel schema-guided user satisfaction modeling framework. It explicitly models the degree to which the user's preferences regarding the task attributes are fulfilled by the system for predicting the user's satisfaction level. SG-USM employs a pre-trained language model for encoding dialogue context and task attributes. Further, it employs a fulfillment representation layer for learning how many task attributes have been fulfilled in the dialogue, an importance predictor component for calculating the importance of task attributes. Finally, it predicts the user satisfaction based on task attribute fulfillment and task attribute importance. Experimental results on benchmark datasets (i.e. MWOZ, SGD, ReDial, and JDDC) show that SG-USM consistently outperforms competitive existing methods. Our extensive analysis demonstrates that SG-USM can improve the interpretability of user satisfaction modeling, has good scalability as it can effectively deal with unseen tasks and can also effectively work in low-resource settings by leveraging unlabeled data. 1
Introduction
Task-oriented dialogue systems have emerged for helping users to solve specific tasks efficiently (Hosseini-Asl et al., 2020). Evaluation is a crucial part of the development process of such systems. Many of the standard automatic evaluation metrics, e.g., BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), have been shown to be ineffective in task-oriented dialogue evaluation (Deriu et al., 2021). As a consequence, User Satisfaction Modeling (USM) (Sun et al., 2021; Kachuee et al., 2021; Bodigutla et al., 2020; Song et al., 2019; Rebensburg et al., 2023) has gained momentum as the core evaluation metric for task-oriented dialogue systems. USM estimates the overall satisfaction of a user interaction with the system. In task-oriented dialogue systems, whether a user is satisfied largely depends on how well the user's task goals were fulfilled. Each task typically has an associated task schema, which is a set of task attributes (e.g., location, dates for check-in and check-out, etc. for a hotel booking task), and for the user to be satisfied, the system is expected to fulfill the user's preferences about these task attributes. Figure 1 shows an example of USM for task-oriented dialogues.

Figure 1: A task-oriented dialogue system has a predefined schema for each task, which is composed of a set of task attributes. In a dialogue, the user's task goal is encoded by the task attribute and value pairs. The user is satisfied with the service when the provided solution fulfills the user's preferences for the task attributes.
Effective USM models should have the following abilities: (1) Interpretability, by giving insights into which aspects of the task the system performs well on. For instance, this can help the system recover from an error and optimize toward an individual aspect to avoid dissatisfaction. (2) Scalability, in dealing with unseen tasks; e.g., the model does not need to be retrained when integrating new tasks. (3) Cost-efficiency, for performing well in low-resource settings where it is often hard to collect and expensive to annotate task-specific data.
Previous work in USM follows two main lines of research. First, several methods use user behavior or system actions to model user satisfaction. In this setting, it is assumed that user satisfaction can be reflected by user behaviors or system actions in task-oriented dialogue systems, such as click, pause, request, and inform (Guo et al., 2020). A second approach is to analyze semantic information in user natural language feedback to estimate user satisfaction, such as sentiment analysis (Sun et al., 2021; Song et al., 2019) or response quality assessment (Bodigutla et al., 2020; Zeng et al., 2020). However, neither of these two lines of work takes into account interpretability, scalability, or cost-efficiency.
In this paper, we propose a novel approach to USM, referred to as Schema-Guided User Satisfaction Modeling (SG-USM). We hypothesize that user satisfaction should be predicted by the fulfillment degree of the user's task goals, which are typically represented by a set of task attribute and value pairs. Therefore, we explicitly formalize this by predicting how many task attributes fulfill the user's preferences and how important these attributes are. When more important attributes are fulfilled, task-oriented dialogue systems should achieve better user satisfaction.
Specifically, SG-USM comprises a pre-trained text encoder to represent dialogue context and task attributes, a task attribute fulfillment representation layer to represent the fulfillment based on the relation between the dialogue context and task attributes, a task attribute importance predictor to calculate the importance based on the task attribute popularity in a labeled and unlabeled dialogue corpus, and a user satisfaction predictor which uses task attribute fulfillment and task attribute importance to predict user satisfaction. SG-USM uses task attribute fulfillment and task attribute importance to explicitly model the fulfillment degree of the user's task goals (interpretability). It uses a task-agnostic text encoder to create representations of task attributes from their descriptions, regardless of whether the tasks have been seen or not (scalability). Finally, it uses unlabeled dialogues in low-resource settings (cost-efficiency).
Experimental results on popular task-oriented benchmark datasets show that SG-USM substantially and consistently outperforms existing methods on user satisfaction modeling. Extensive analysis also reveals the significance of explicitly modeling the fulfillment degree of the user's task goals, the ability to deal with unseen tasks, and the effectiveness of utilizing unlabeled dialogues.
Related Work
Task-oriented Dialogue Systems. Unlike chitchat dialogue systems that aim at conversing with users without specific goals, task-oriented dialogue systems assist users to accomplish certain tasks (Feng et al., 2021; Eric et al., 2020). Task-oriented dialogue systems can be divided into module-based methods (Feng et al., 2022b; Ye et al., 2022; Su et al., 2022; Heck et al., 2020; Chen et al., 2020a; Wu et al., 2019a; Lei et al., 2018; Liu and Lane, 2016) and end-to-end methods (Feng et al., 2022a; Qin et al., 2020; Yang et al., 2020; Madotto et al., 2018; Yao et al., 2014). To measure the effectiveness of task-oriented dialogue systems, evaluation is a crucial part of the development process. Several approaches have been proposed, including automatic evaluation metrics (Rastogi et al., 2020; Mrkšić et al., 2017), human evaluation (Feng et al., 2022a; Goo et al., 2018), and user satisfaction modeling (Sun et al., 2021; Mehrotra et al., 2019). Automatic evaluation metrics, such as BLEU (Papineni et al., 2002), make a strong assumption for dialogue systems, which is that valid responses have significant word overlap with the ground truth responses. However, there is significant diversity in the space of valid responses to a given context. Human evaluation is considered to reflect the overall performance of the system in a real-world scenario, but it is intrusive, time-intensive, and does not scale (Deriu et al., 2021). Recently, user satisfaction modeling has been proposed as the main evaluation metric for task-oriented dialogue systems, which can address the issues listed above.
User Satisfaction Modeling. User satisfaction in task-oriented dialogue systems is related to whether or not, or to what degree, the user's task goals are fulfilled by the system. Some researchers study user satisfaction from temporal user behaviors, such as click, pause, etc. (Guo et al., 2020; Mehrotra et al., 2019; Wu et al., 2019b; Su et al., 2018; Mehrotra et al., 2017). Other related studies view dialogue action recognition, such as request and inform, as an important preceding step to USM (Kim and Lipani, 2022). However, sometimes the user behavior or system actions are hidden in the user's natural language feedback and the system's natural language response (Hashemi et al., 2018). To cope with this problem, a number of methods have been developed from the perspective of sentiment analysis (Sun et al., 2021; Song et al., 2019; Engelbrecht et al., 2009) and response quality assessment (Bodigutla et al., 2020; Zeng et al., 2020). However, none of the existing methods can explicitly predict user satisfaction with fine-grained explanations, deal with unseen tasks, or alleviate the low-resource learning problem. Our work is proposed to solve these issues.
Schema-guided User Satisfaction Modeling
Our SG-USM approach formalizes user satisfaction modeling by representing the user's task goals as a set of task attributes, as shown in Figure 1.
The goal is to explicitly model the degree to which task attributes are fulfilled, taking into account the importance of the attributes. As shown in Figure 2, SG-USM consists of a text encoder, a task attribute fulfillment representation layer, a task attribute importance predictor, and a user satisfaction predictor. Specifically, the text encoder transforms dialogue context and task attributes into dialogue embeddings and task attribute embeddings using BERT (Devlin et al., 2019). The task attribute fulfillment representation layer models relations between the dialogue embeddings and the task attribute embeddings via an attention mechanism to create task attribute fulfillment representations. Further, the task attribute importance predictor models the task attribute popularity in labeled and unlabeled dialogues with a ranking model to obtain task attribute importance weights. Finally, the user satisfaction predictor predicts the user satisfaction score on the basis of the task attribute fulfillment representations and task attribute importance weights using a multilayer perceptron.
Text Encoder
The text encoder takes the dialogue context (user and system utterances) and the descriptions of task attributes as input and uses BERT to obtain dialogue and task attribute embeddings, respectively. Considering the limitation of the maximum input sequence length of BERT, we encode the dialogue context turn by turn. Specifically, the BERT encoder takes as input a sequence of tokens of length $L$, denoted as $X = (x_1, \ldots, x_L)$. The first token $x_1$ is [CLS], followed by the tokens of the user utterance and the tokens of the system utterance in one dialogue turn, separated by [SEP]. The representation of [CLS] is used as the embedding of the dialogue turn. Given a dialogue with $N$ dialogue turns, the output dialogue embeddings are the concatenation of all dialogue turn embeddings: $D = [d_1; d_2; \ldots; d_N]$.
To obtain task attribute embeddings, the input is a sequence of tokens of length $K$, denoted as $Y = \{y_1, \ldots, y_K\}$. The sequence starts with [CLS], followed by the tokens of the task attribute description. The representation of [CLS] is used as the embedding of the task attribute. The set of task attribute embeddings is denoted as $T = \{t_1, t_2, \ldots, t_M\}$, where $M$ is the number of task attributes.
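To make the encoding step concrete, the following is a minimal sketch of how the turn-level and attribute-level [CLS] embeddings could be computed with the HuggingFace transformers library (the library choice and function names are our assumptions; the paper does not specify an implementation):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def encode_turns(turns):
    """Encode (user, system) utterance pairs; return one [CLS] vector per turn."""
    embeddings = []
    for user, system in turns:
        # The tokenizer inserts [CLS] user ... [SEP] system ... [SEP] for pairs.
        inputs = tokenizer(user, system, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = bert(**inputs)
        embeddings.append(out.last_hidden_state[:, 0])  # [CLS] representation
    return torch.cat(embeddings, dim=0)  # D: (N, hidden)

def encode_attributes(descriptions):
    """Encode task attribute descriptions; return one [CLS] vector each."""
    inputs = tokenizer(descriptions, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0]  # T: (M, hidden)
```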
Task Attribute Fulfillment Representation Layer
The task attribute fulfillment representation layer takes the dialogue and task attribute embeddings as input and calculates dialogue-attended task attribute fulfillment representations. This way, whether each task attribute can be fulfilled in the dialogue context is represented. Specifically, the task attribute fulfillment representation layer constructs an attention vector by a bilinear interaction, indicating the relevance between dialogue and task attribute embeddings. Given the dialogue embeddings $D$ and the $i$-th task attribute embedding $t_i$, it calculates the relevance as follows:

$$A_i = \mathrm{softmax}(\exp(D^\top W_a t_i)), \quad (1)$$

where $W_a$ is the bilinear interaction matrix to be learned. $A_i$ represents the attention weights of dialogue turns with respect to the $i$-th task attribute. Then the dialogue-attended $i$-th task attribute fulfillment representations are calculated as follows:

$$t_i^a = D A_i. \quad (2)$$
The dialogue-attended task attribute fulfillment representations for all task attributes are denoted as

$$T^a = [t_1^a, t_2^a, \ldots, t_M^a], \quad (3)$$

where $M$ is the number of task attributes.
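A minimal PyTorch sketch of this layer, as we read Equations 1-3 (we apply a plain softmax where the paper writes softmax(exp(·)); shapes and names are our assumptions):

```python
import torch
import torch.nn as nn

class FulfillmentLayer(nn.Module):
    """Dialogue-attended task attribute fulfillment representations (Eqs. 1-3)."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.W_a = nn.Parameter(torch.empty(hidden, hidden))  # bilinear matrix
        nn.init.xavier_uniform_(self.W_a)

    def forward(self, D: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
        # D: (N, hidden) dialogue turn embeddings; T: (M, hidden) attributes.
        scores = D @ self.W_a @ T.t()      # (N, M) bilinear relevance
        A = torch.softmax(scores, dim=0)   # attention over dialogue turns
        return A.t() @ D                   # T^a: (M, hidden), row i is t_i^a
```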
Task Attribute Importance Predictor
The task attribute importance predictor also takes the dialogue and task attribute embeddings as input and calculates attribute importance scores. The importance scores are obtained by considering both the task attribute presence frequency and the task attribute presence position in the dialogue. First, we use the Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) to select the top relevant task attributes for the dialogue context. The selected task attributes are then used to calculate the task attribute presence frequency in the dialogue. The MMR takes the $j$-th dialogue turn embedding $d_j$ and the task attribute embeddings $T$ as input, and picks the top $K$ relevant task attributes for the $j$-th dialogue turn:

$$R_j = \operatorname*{argmax}_{t_i \in T \setminus U} \left[ \lambda \cos(t_i, d_j) - (1 - \lambda) \max_{t_k \in U} \cos(t_i, t_k) \right] \quad (4)$$
where U is the subset of attributes already selected as top relevant task attributes, cos() is the cosine similarity between the embeddings. λ trades off between the similarity of the selected task attributes to the dialogue turn and also controls the diversity among the selected task attributes. The task attribute presence frequency vector for the j-th dialogue turn is computed as follows,
$$F_j = [f_j^1, f_j^2, f_j^3, \ldots, f_j^M] \quad (5)$$

$$f_j^i = \begin{cases} 1 & i \in R_j \\ 0 & i \notin R_j \end{cases} \quad (6)$$
where $M$ is the number of task attributes. However, the task attribute presence frequency vector does not reward task attributes that appear in the beginning of the dialogue. The premise of the task attribute importance score is that task attributes appearing near the end of the dialogue should be penalized, as the graded importance value is reduced logarithmically proportional to the position of the dialogue turn. A common effective discounting method is to divide by the natural log of the position:

$$\tilde{F}_j = \frac{F_j}{\log(j + 1)} \quad (7)$$
The task attribute importance predictor then computes the importance score on the basis of the sum of the discounted task attribute presence frequencies over all dialogues. Given a dialogue corpus (including both labeled and unlabeled dialogues) with $Z$ dialogues, $C = \{D_1, D_2, \ldots, D_Z\}$, the task attribute importance scores are calculated as follows:

$$S = \mathrm{softmax}\left( \sum_{l=1}^{Z} \sum_{j=1}^{\mathrm{Num}(D_l)} \tilde{F}_j^l \right) \quad (8)$$

where $\mathrm{Num}(\cdot)$ is the number of dialogue turns in dialogue $D_l$, and $\tilde{F}_j^l$ is the discounted task attribute presence frequency of the $j$-th dialogue turn in dialogue $D_l$.
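The importance predictor can be sketched as follows (a rough, self-contained rendering of Equations 4-8; the values of K and λ and the helper names are illustrative choices, not values from the paper):

```python
import math
import torch
import torch.nn.functional as F

def mmr_select(d_j, T, K=3, lam=0.5):
    """Pick the top-K attribute indices for one dialogue turn via MMR (Eq. 4)."""
    selected, candidates = [], list(range(T.size(0)))
    while len(selected) < K and candidates:
        def score(i):
            sim_d = F.cosine_similarity(T[i], d_j, dim=0).item()
            sim_u = max((F.cosine_similarity(T[i], T[k], dim=0).item()
                         for k in selected), default=0.0)
            return lam * sim_d - (1 - lam) * sim_u
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

def importance_scores(dialogues, T, K=3):
    """Log-discounted presence frequencies summed over a corpus (Eqs. 5-8)."""
    total = torch.zeros(T.size(0))
    for D in dialogues:                        # D: (N, hidden) turn embeddings
        for j in range(1, D.size(0) + 1):      # turns indexed from 1, as in Eq. 7
            F_j = torch.zeros(T.size(0))
            F_j[mmr_select(D[j - 1], T, K)] = 1.0
            total += F_j / math.log(j + 1)     # discount by log(j + 1)
    return torch.softmax(total, dim=0)         # S: (M,) importance weights
```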
User Satisfaction Predictor
Given the dialogue-attended task attribute fulfillment representations $T^a$ and the task attribute importance scores $S$, the user satisfaction labels are obtained by aggregating the task attribute fulfillment representations based on the task attribute importance scores. This way, the user satisfaction is explicitly modeled by the fulfillment of the task attributes and their individual importance.
Specifically, an aggregation layer integrates the dialogue-attended task attribute fulfillment representations by the task attribute importance scores as follows:
$$h = T^a S \quad (9)$$
Then the Multilayer Perceptron (MLP) (Hastie et al., 2009) with softmax normalization is employed to calculate the probability distribution of user satisfaction classes:
$$p = \mathrm{softmax}(\mathrm{MLP}(h)) \quad (10)$$
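A corresponding sketch of the predictor (Eqs. 9-10), with the MLP depth and dropout taken from the training details reported later; treat it as an illustration rather than the authors' exact code:

```python
import torch
import torch.nn as nn

class SatisfactionPredictor(nn.Module):
    """Aggregate fulfillment representations by importance and classify."""

    def __init__(self, hidden: int = 768, n_classes: int = 3):
        super().__init__()
        self.mlp = nn.Sequential(                  # two-layer MLP head
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, T_a: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
        h = S @ T_a                                # h = T^a S (Eq. 9)
        return torch.softmax(self.mlp(h), dim=-1)  # p (Eq. 10)
```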
Training
We train SG-USM in an end-to-end fashion by minimizing the cross-entropy loss between the predicted user satisfaction probabilities and the ground-truth satisfaction:
$$\mathcal{L} = -y \log(p) \quad (11)$$
where y is the ground-truth user satisfaction. Pretrained BERT encoders are used for encoding representations of utterances and schema descriptions respectively. The encoders are fine-tuned during the training process.
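A hypothetical end-to-end training step is sketched below; `model` stands for the composition of the components above and is not defined in the paper, and the learning rate is the one reported in the experimental setup:

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(dialogue, attributes, label):
    """One gradient step on the cross-entropy loss of Eq. 11."""
    optimizer.zero_grad()
    p = model(dialogue, attributes)               # class probabilities (Eq. 10)
    loss = F.nll_loss(torch.log(p).unsqueeze(0),  # L = -y log(p)
                      torch.tensor([label]))
    loss.backward()
    optimizer.step()
    return loss.item()
```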
Experimental Setup
Datasets
We conduct experiments using four benchmark datasets containing task-oriented dialogues in different domains and languages (English and Chinese): MultiWOZ 2.1 (MWOZ) (Eric et al., 2020), Schema Guided Dialogue (SGD) (Rastogi et al., 2020), ReDial (Li et al., 2018), and JDDC (Chen et al., 2020b). MWOZ and SGD are English multi-domain task-oriented dialogue datasets, covering domains such as hotel, restaurant, and flight. These datasets contain domain-slot pairs, where the slot information corresponds to the task attributes.
ReDial is an English conversational recommendation dataset for movie recommendation. Its task attributes are obtained from the Movie type on Schema.org (https://schema.org/Movie). JDDC is a Chinese customer service dialogue dataset in e-commerce. Its task attributes are obtained from the Product type on Schema.org.cn (https://schema.org.cn/Product), which provides schemas in Chinese. Specifically, we use the subsets of these datasets with user satisfaction annotations for evaluation, as provided by Sun et al. (2021). We also use subsets of these datasets without user satisfaction annotations to investigate the semi-supervised learning abilities of SG-USM. Table 1 displays the statistics of the datasets used in the experiments.
Baselines and SG-USM Variants
We compare our SG-USM approach with competitive baselines as well as state-of-the-art methods in user satisfaction modeling.
HiGRU (Jiao et al., 2019) proposes a hierarchical structure to encode each turn in the dialogue using a word-level gated recurrent unit (GRU) (Dey and Salem, 2017) and a sentence-level GRU. It uses the last hidden states of the sentence-level GRU as inputs of a multilayer perceptron (MLP) (Hastie et al., 2009) to predict the user satisfaction level. HAN (Yang et al., 2016) applies a two-level attention mechanism in the hierarchical structure of HiGRU to represent dialogues. An MLP takes the dialogue representation as input to predict the user satisfaction level. Transformer (Vaswani et al., 2017) is a simple baseline that takes the dialogue context as input and uses the standard Transformer encoder to obtain the dialogue representations. An MLP is used on top of the encoder to predict the user satisfaction level. BERT (Devlin et al., 2019) concatenates the last 512 tokens of the dialogue context into a long sequence with a [SEP] token separating dialogue turns. It uses the [CLS] token of a pre-trained BERT model to represent dialogues. An MLP is used on top of BERT to predict the user satisfaction level. USDA (Deng et al., 2022) employs a hierarchical BERT encoder to encode the whole dialogue context at the turn level and the dialogue level. It also incorporates the sequential dynamics of dialogue acts with the dialogue context in a multi-task framework for user satisfaction modeling.
We also report the performance of two simpler SG-USM variants: SG-USM(L) only uses the dialogues with ground-truth user satisfaction labels to train the model. SG-USM(L&U) uses both labeled and unlabeled dialogues in the training process. It takes the dialogues without user satisfaction annotations as inputs to the task attribute importance predictor module to obtain more general and accurate task attribute importance scores.
For a fair comparison with previous work and without loss of generality, we adopt BERT as the backbone encoder for all methods that use pretrained language models.
Evaluation Metrics
Following previous work (Cai and Chen, 2020; Choi et al., 2019; Song et al., 2019), we consider a three-class classification task for user satisfaction modeling by treating ratings below, equal to, and above 3 as "dissatisfied", "neutral", and "satisfied", respectively. Accuracy (Acc), Precision (P), Recall (R), and F1 are used as the evaluation metrics.
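For clarity, here is a small sketch of this rating-to-class mapping and of the metric computation (we assume macro-averaged P/R/F1 via scikit-learn; the averaging mode is not stated in the paper):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def to_class(rating: int) -> int:
    """Map a satisfaction rating to 0=dissatisfied, 1=neutral, 2=satisfied."""
    return 0 if rating < 3 else (1 if rating == 3 else 2)

def evaluate(gold_ratings, predicted_classes):
    gold = [to_class(r) for r in gold_ratings]
    acc = accuracy_score(gold, predicted_classes)
    p, r, f1, _ = precision_recall_fscore_support(
        gold, predicted_classes, average="macro")
    return acc, p, r, f1
```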
Training
We use BERT-Base uncased, which has 12 hidden layers of 768 units and 12 self-attention heads, to encode the utterances and schema descriptions. We apply a two-layer MLP with a hidden size of 768 on top of the text encoders. ReLU is used as the activation function. The dropout probability is 0.1. Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 1e-4. We train for up to 20 epochs with a batch size of 16, and select the best checkpoints based on the F1 score on the validation set.
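These settings can be summarized in a single configuration, shown here purely for reference (the key names are our own):

```python
CONFIG = {
    "encoder": "bert-base-uncased",   # 12 layers, 768 hidden units, 12 heads
    "mlp_layers": 2,
    "mlp_hidden": 768,
    "activation": "relu",
    "dropout": 0.1,
    "optimizer": "adam",
    "learning_rate": 1e-4,
    "max_epochs": 20,
    "batch_size": 16,
    "model_selection": "best validation F1",
}
```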
Experimental Results
Overall Performance

Table 2 shows the results of SG-USM on the MWOZ, SGD, ReDial, and JDDC datasets. Overall, we observe that SG-USM substantially and consistently outperforms all other methods across the four datasets with a noticeable margin. Specifically, SG-USM(L) improves the performance of user satisfaction modeling via explicitly modeling the degree to which the task attributes are fulfilled. SG-USM(L&U) further aids user satisfaction modeling via predicting task attribute importance based on both labeled and unlabeled dialogues. It appears that the success of SG-USM is due to its architecture design, which consists of the task attribute fulfillment representation layer and the task attribute importance predictor. In addition, SG-USM can effectively leverage unlabeled dialogues to alleviate the cost of user satisfaction score annotation.
[Figure 4: Case study of SG-USM and USDA on the SGD dataset. The yellow ★ represents the importance of task attributes; green text marks the user's preferences for the task attributes, and red text marks the attributes of the provided solutions.
(a) Example 1 (Doctor; task attributes: Type, the speciality of the doctor; City, the city where the doctor is located). Dialogue context: U: I need a doctor. S: In what city? U: In New York. S: Do you want a general practitioner, ophthalmologist, or something else? U: I'm looking for a gynecologist. S: Borodulin Tatyana MD, a general practitioner in New York, is a good option. Ground truth: Dissatisfied; SG-USM prediction: Dissatisfied; USDA prediction: Neutral.
(b) Example 2 (Travel; task attributes: Category, the category to which the attraction belongs; FreeEntry, whether entrance to the attraction is free). Dialogue context: U: Would you show me attractions to visit in Philadelphia? I prefer a museum, and someplace without an entry fee. S: Barnes Foundation is an art museum that you may like. U: Okay. Is it free? S: No. The ticket for an adult is $25. U: Sorry, I want a museum without an entry fee. Ground truth: Neutral; SG-USM prediction: Neutral; USDA prediction: Satisfied.]
Ablation Study
We also conduct an ablation study on SG-USM to study the contribution of its two main components: task attribute importance and task attribute fulfillment.
Effect of Task Attribute Importance
To investigate the effectiveness of task attribute importance in user satisfaction modeling, we eliminate the task attribute importance predictor and run the model on MWOZ, SGD, ReDial, and JDDC. As shown in Figure 3, the performance of SG-USM-w/oImp decreases substantially compared with SG-USM. This indicates that task attribute importance is essential for user satisfaction modeling. We conjecture that this is because user satisfaction relates to the importance of the fulfilled task attributes.
Effect of Task Attribute Fulfillment
To investigate the effectiveness of task attribute fulfillment in user satisfaction modeling, we compare SG-USM with SG-USM-w/oFul, which eliminates the task attribute fulfillment representation. Figure 3 shows the results on MWOZ, SGD, ReDial, and JDDC in terms of F1. From the results, we can observe that without the task attribute fulfillment representation the performance deteriorates considerably. Thus, the task attribute fulfillment representation is necessary for user satisfaction modeling.
Discussion
Case Study
We also perform a qualitative analysis on the results of SG-USM and the best baseline, USDA, on the SGD dataset to delve deeper into the differences between the two models. We first find that SG-USM can make accurate inferences about user satisfaction by explicitly modeling the fulfillment degree of task attributes. For example, in the first case in Figure 4, the user wants to find a gynecologist in New York. SG-USM correctly predicts the dissatisfied label by inferring that the most important task attribute, "Type", is not fulfilled. In the second case, the user wants to find a museum without an entry fee. SG-USM yields the correct neutral label by inferring that the second most important task attribute, "FreeEntry", is not fulfilled. From our analysis, we think that SG-USM achieves better accuracy due to its ability to explicitly model how many task attributes are fulfilled and how important the fulfilled task attributes are. In contrast, USDA does not model the fulfillment degree of task attributes, and thus cannot properly infer the overall user satisfaction.
Dealing with Unseen Task Attributes
We further analyze the zero-shot capabilities of SG-USM and the best baseline, USDA. The SGD, MWOZ, and ReDial datasets are English dialogue datasets that contain different task attributes. Therefore, we train models on SGD and test them on MWOZ and ReDial to evaluate zero-shot learning ability. Table 3 presents the Accuracy, Precision, Recall, and F1 of SG-USM and USDA on MWOZ and ReDial. From the results, we can observe that SG-USM performs significantly better than the baseline USDA on both datasets. This indicates that the task-agnostic attribute encoder of SG-USM is effective. We argue that it can learn shared knowledge between task attributes and create more accurate semantic representations for unseen task attributes, improving performance in zero-shot learning settings.
Effect of the Unlabeled Dialogues
To analyze the effect of the unlabeled dialogues for SG-USM, we test different numbers of unlabeled dialogues during the training process of SG-USM. Figure 5 shows the Accuracy and F1 of SG-USM when using 1 to 4 thousand unlabeled dialogues for training on MWOZ, SGD, ReDial, and JDDC. From the results, we can see that SG-USM can achieve higher performance with more unlabeled dialogues. This indicates that SG-USM can effectively utilize unlabeled dialogues to improve the performance of user satisfaction modeling. We reason that with a larger corpus, the model can more accurately estimate the importance of task attributes.
Conclusion
User satisfaction modeling is an important yet challenging problem for task-oriented dialogue systems evaluation. For this purpose, we proposed to explicitly model the degree to which the user's task goals are fulfilled. Our novel method, SG-USM, models user satisfaction as a function of the degree to which the attributes of the user's task goals are fulfilled, taking into account the importance of the attributes. Extensive experiments show that SG-USM significantly outperforms state-of-the-art methods in user satisfaction modeling on various benchmark datasets, i.e., MWOZ, SGD, ReDial, and JDDC. Our extensive analysis also validates the benefit of explicitly modeling the fulfillment degree of a user's task goal based on the fulfillment of its constituent task attributes. In future work, it is worth exploring the reasons for user dissatisfaction to better evaluate and improve task-oriented dialogue systems.
Limitations
Our approach builds on a task schema that characterizes a task-oriented dialogue system's domain. For example, the schema captures various attributes of the task. For some domains, when a schema is not pre-defined, it first needs to be extracted, e.g., from a corpus of dialogues. In this paper, we used BERT as our LM to be comparable with related work, but more advanced models could further improve the performance. A limitation of our task attribute importance scoring method is that it currently produces a static set of weights, reflecting the domain. In the future, the importance weights may be personalized to the current user's needs instead.
[Figure 2: The architecture of SG-USM for user satisfaction modeling on task-oriented dialogues.]
[Figure 3: Performance of SG-USM by ablating the task attribute importance and task attribute fulfillment components across datasets.]
[Figure 5: Performance of SG-USM trained with different numbers of unlabeled dialogues on the MWOZ, SGD, ReDial, and JDDC datasets.]
Table 1: Statistics of the task-oriented dialogue datasets.

| Characteristics      | MWOZ     | SGD      | ReDial   | JDDC     |
|----------------------|----------|----------|----------|----------|
| Language             | English  | English  | English  | Chinese  |
| #Dialogues           | 1,000    | 1,000    | 1,000    | 3,300    |
| #Utterances          | 12,553   | 13,833   | 11,806   | 54,517   |
| #Avg Turn            | 23.1     | 26.7     | 22.5     | 32.3     |
| #Attributes          | 37       | 215      | 128      | 13       |
| %Sat. Class          | 27:39:34 | 22:30:48 | 23:26:51 | 23:53:24 |
| #TrainSplit          | 7,648    | 8,674    | 7,372    | 38,146   |
| #ValidSplit          | 952      | 1,074    | 700      | 5,006    |
| #TestSplit           | 953      | 1,085    | 547      | 4,765    |
| #Unlabeled Dialogues | 4,000    | 4,000    | 4,000    | 4,000    |
Table 2: Performance of SG-USM and baselines on various evaluation benchmarks. Numbers in bold denote the best model performance for a given metric. Numbers with * indicate that the SG-USM model is better than the best-performing baseline method (underlined scores) with statistical significance (t-test, p < 0.05). Each cell lists Acc/P/R/F1.

| Model       | MWOZ                    | SGD                     | ReDial                  | JDDC                    |
|-------------|-------------------------|-------------------------|-------------------------|-------------------------|
| HiGRU       | 44.6/43.7/44.3/43.7     | 50.0/47.3/48.4/47.5     | 46.1/44.4/44.0/43.5     | 59.7/57.3/50.4/52.0     |
| HAN         | 39.0/37.1/37.1/36.8     | 47.7/47.1/44.8/44.9     | 46.3/40.0/40.3/40.0     | 58.4/54.2/50.1/51.2     |
| Transformer | 42.8/41.5/41.9/41.7     | 53.1/48.3/49.9/49.1     | 47.5/44.9/44.7/44.8     | 60.9/59.2/53.4/56.2     |
| BERT        | 46.1/45.5/47.4/45.9     | 56.2/55.0/53.7/53.7     | 53.6/50.5/51.3/50.0     | 60.4/59.8/58.8/59.5     |
| USDA        | 49.9/49.2/49.0/48.9     | 61.4/60.1/55.7/57.0     | 57.3/54.3/52.9/53.4     | 61.8/62.8/63.7/61.7     |
| SG-USM(L)   | 50.8*/49.3/50.2*/49.4*  | 62.6*/58.5/57.2*/57.8*  | 57.9*/54.7/53.0/53.8    | 62.5*/62.6/63.9/62.8*   |
| SG-USM(L&U) | 52.3*/50.4*/51.4*/50.9* | 64.7*/61.6*/58.8*/60.2* | 58.4*/55.8*/53.2*/54.5* | 63.3*/63.1*/64.1*/63.5* |
Table 3: Performance of SG-USM and the best baseline USDA on zero-shot learning ability. All models are trained on SGD and tested on MWOZ and ReDial. Numbers in bold denote the best results for a given metric. Numbers with * indicate that the model is better than the baseline with statistical significance (t-test, p < 0.05). Each cell lists Acc/P/R/F1.

| Model       | MWOZ                    | ReDial                  |
|-------------|-------------------------|-------------------------|
| USDA        | 32.8/34.5/32.2/33.1     | 25.4/29.5/26.4/27.3     |
| SG-USM(L)   | 40.9*/38.9*/41.3*/40.2* | 30.8*/34.6*/30.7*/32.1* |
| SG-USM(L&U) | 43.1*/40.9*/43.5*/42.8* | 32.3*/36.4*/32.8*/33.4* |
References

Praveen Kumar Bodigutla, Aditya Tiwari, Spyros Matsoukas, Josep Valls-Vargas, and Lazaros Polymenakos. 2020. Joint turn and dialogue level user satisfaction estimation on multi-domain conversations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3897-3909.
Wanling Cai and Li Chen. 2020. Predicting user intents and satisfaction with dialogue-based conversational recommendations. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pages 33-42.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335-336.
Lu Chen, Boer Lv, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020a. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7521-7528.
Meng Chen, Ruixue Liu, Lei Shen, Shaozu Yuan, Jingyan Zhou, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020b. The JDDC corpus: A large-scale multi-turn Chinese dialogue dataset for e-commerce customer service. In LREC.
Jason Ingyu Choi, Ali Ahmadvand, and Eugene Agichtein. 2019. Offline and online satisfaction prediction in open-domain conversational systems. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1281-1290.
Yang Deng, Wenxuan Zhang, Wai Lam, Hong Cheng, and Helen Meng. 2022. User satisfaction estimation with sequential dialogue act modeling in goal-oriented conversational systems. In Proceedings of the ACM Web Conference 2022, pages 2998-3008.
Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2021. Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, 54(1):755-810.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.
Rahul Dey and Fathi M Salem. 2017. Gate-variants of gated recurrent unit (GRU) neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pages 1597-1600. IEEE.
Klaus-Peter Engelbrecht, Florian Gödde, Felix Hartard, Hamed Ketabdar, and Sebastian Möller. 2009. Modeling user satisfaction with hidden Markov models. In Proceedings of the SIGDIAL 2009 Conference, pages 170-177.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Kumar Goyal, Peter Ku, and Dilek Hakkani-Tür. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In LREC.
Yue Feng, Gerasimos Lampouras, and Ignacio Iacobacci. 2022a. Topic-aware response generation in task-oriented dialogue with unstructured knowledge access. In EMNLP.
Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, and Emine Yilmaz. 2022b. Dynamic schema graph fusion network for multi-domain dialogue state tracking. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 115-126.
Yue Feng, Yang Wang, and Hang Li. 2021. A sequence-to-sequence approach to dialogue state tracking. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1714-1725.
Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, pages 753-757.
Liyi Guo, Rui Lu, Haoqi Zhang, Junqi Jin, Zhenzhe Zheng, Fan Wu, Jin Li, Haiyang Xu, Han Li, Wenkai Lu, et al. 2020. A deep prediction network for understanding advertiser intent and satisfaction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2501-2508.
Seyyed Hadi Hashemi, Kyle Williams, Ahmed El Kholy, Imed Zitouni, and Paul A Crook. 2018. Measuring user satisfaction on smart speaker intelligent assistants using intent sensitive query embeddings. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 1183-1192.
Trevor Hastie, Robert Tibshirani, and Jerome H Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, volume 2. Springer.
Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. TripPy: A triple copy strategy for value independent neural dialog state tracking. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 35-44.
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 20179-20191.
Wenxiang Jiao, Haiqin Yang, Irwin King, and Michael R Lyu. 2019. HiGRU: Hierarchical gated recurrent units for utterance-level emotion recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 397-406.
Mohammad Kachuee, Hao Yuan, Young-Bum Kim, and Sungjin Lee. 2021. Self-supervised contrastive learning for efficient user satisfaction prediction in conversational agents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4053-4064.
To Eun Kim and Aldo Lipani. 2022. A multi-task based neural model to simulate users in goal-oriented dialogue systems. In SIGIR.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR.
Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437-1447.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. Advances in Neural Information Processing Systems, 31.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.
Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech 2016, pages 685-689.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468-1478.
Rishabh Mehrotra, Mounia Lalmas, Doug Kenney, Thomas Lim-Meng, and Golli Hashemian. 2019. Jointly leveraging intent and interaction signals to predict user satisfaction with slate recommendations. In The World Wide Web Conference, pages 1256-1267.
Rishabh Mehrotra, Imed Zitouni, Ahmed Hassan Awadallah, Ahmed El Kholy, and Madian Khabsa. 2017. User interaction sequences for search satisfaction prediction. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 165-174.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777-1788.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, and Ting Liu. 2020. Dynamic fusion network for multi-domain end-to-end task-oriented dialog. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6344-6354.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8689-8696.
Mika Rebensburg, Stefan Hillmann, and Nils Feldhus. 2023. Automatic user experience evaluation of goal-oriented dialogs using pre-trained language models. In Proc. ESSV 2023 (March 1-3, Munich), TUDpress.
Kaisong Song, Lidong Bing, Wei Gao, Jun Lin, Lujun Zhao, Jiancheng Wang, Changlong Sun, Xiaozhong Liu, and Qiong Zhang. 2019. Using customer service dialogues for satisfaction analysis with context-assisted multiple instance learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 198-207.
Ning Su, Jiyin He, Yiqun Liu, Min Zhang, and Shaoping Ma. 2018. User intent, behaviour, and perceived satisfaction in product search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 547-555.
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4661-4676.
Weiwei Sun, Shuo Zhang, Krisztian Balog, Zhaochun Ren, Pengjie Ren, Zhumin Chen, and Maarten de Rijke. 2021. Simulating user satisfaction for the evaluation of task-oriented dialogue systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2499-2506.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019a. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808-819.
Zhijing Wu, Yiqun Liu, Qianfan Zhang, Kailu Wu, Min Zhang, and Shaoping Ma. 2019b. The influence of image search intents on user behavior and satisfaction. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 645-653.
Shiquan Yang, Rui Zhang, and Sarah Erfani. 2020. GraphDialog: Integrating graph knowledge into end-to-end task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1878-1888.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, pages 1480-1489.
Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Dong Yu, Xiaolong Li, and Feng Gao. 2014. Recurrent conditional random field for language understanding. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4077-4081. IEEE.
Fanghua Ye, Yue Feng, and Emine Yilmaz. 2022. ASSIST: Towards label noise-robust dialogue state tracking. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2719-2731.
Zhaohao Zeng, Sosuke Kato, Tetsuya Sakai, and Inho Kang. 2020. Overview of the NTCIR-15 dialogue evaluation (DialEval-1) task.
Don't Trust GPT When Your Question Is Not In English

Xiang Zhang, Senyu Li, Bradley Hauer, Ning Shi, and Grzegorz Kondrak
Alberta Machine Intelligence Institute (Amii), Department of Computing Science, University of Alberta, Edmonton, Canada

arXiv:2305.16339 (24 May 2023)

Abstract
Large Language Models (LLMs) have demonstrated exceptional natural language understanding abilities and have excelled in a variety of natural language processing (NLP) tasks in recent years. Despite the fact that most LLMs are trained predominantly on English, multiple studies have demonstrated their comparative performance in many other languages. However, fundamental questions persist regarding how LLMs acquire their multilingual abilities and how performance varies across different languages. These inquiries are crucial for the study of LLMs, since users and researchers often come from diverse language backgrounds, potentially influencing their utilization and interpretation of LLMs' results. In this work, we propose a systematic way of quantifying the performance disparities of LLMs under multilingual settings. We investigate the phenomenon of across-language generalization in LLMs, wherein insufficient multilingual training data leads to advanced multilingual capabilities. To accomplish this, we employ a novel back-translation-based prompting method. The results show that GPT exhibits highly translation-like behaviour in multilingual settings.
Introduction
The study of bilingualism has long been a topic of interest among linguists (Hoffmann, 2014), as it provides insight into the mechanisms of language acquisition and processing. Furthermore, research on multilingualism has contributed to the development of more effective machine learning models, such as neural translation systems (Zou et al., 2013). With the rise of large language models, researchers have discovered many emergent properties (Wei et al., 2022) in these models and have used them for a variety of purposes (Wei et al., 2023). However, the multilingual ability of these models has not been extensively studied. Previous research has shown that large language models, such as GPT, are universally capable of performing various language tasks when English is used as the descriptive language (Qin et al., 2023). However, investigations into the multilingual abilities of these models have been limited. Shi et al. (2022) first shed light on this topic, but their work simply applied the models to multilingual datasets and measured performance differences across languages, without exploring the underlying mechanisms of how different tasks are performed in the network and how such multilingual ability is acquired. Moreover, most large language models (Brown et al., 2020; Touvron et al., 2023) are trained on datasets that are heavily skewed towards English, making it interesting and important to understand how the models acquire their cross-lingual abilities, as this affects their usage in multilingual settings.
In this study, we present a systematic approach to quantitatively analyze the multilingual capabilities of LLMs. To facilitate a comprehensive analysis, we propose categorizing language-dependent abilities into three distinct categories: reasoning, knowledge access, and articulation. These categories encompass various effects that language choice can have on LLM performance.
To evaluate these abilities, we employ a novel back-translation prompting method, carefully designing tasks within each category using multiple languages. By comparing the generated answers, we not only measure the multilingual proficiency of LLMs but also determine their multilingual type, which is an indication of how the underlying network operates when switching languages. Furthermore, our study includes a non-translatable task, pun detection, which provides compelling evidence of the translation-based task-solving strategy employed by LLMs. It sheds light on the advantages of dominant languages within the LLMs, highlighting the influence of language dominance on their performance.
We apply our method to GPT-3.5, and the results demonstrate that it exhibits consistency in multilingual tasks that can be explicitly translated, and inconsistency in tasks that cannot. Strong evidence shows that GPT applies a translation-like multilingual task-solving process in its underlying network, and thus fails many tasks that possess translating variant properties. Our work provides a novel view of utilizing LLMs for multilingual inference, shedding light on their strengths and limitations in various language contexts.
Background
At the heart of the fundamental study of multilingualism, linguists have categorized bilingual (multilingual) speakers into three distinct groups: compound bilinguals, coordinate bilinguals, and subordinate bilinguals (D'Acierno, 1990), which form the basic framework for understanding how humans engage with different languages.
Compound bilingualism mostly emerges among individuals who learn two languages simultaneously from birth. In this case, both languages are equally dominant and integrated, blurring any clear distinction between them and giving the impression of a single unified language. Compound bilingualism entails a shared mental representation of lexicons across both languages the speakers acquire. Researchers have found compound bilinguals to be more flexible and efficient in cognitive control, which allows them to switch between tasks and languages more quickly and effectively. In contrast, coordinate bilinguals maintain separate mental representations for the lexicons of each language they learn. This separation leads to differences when tasks are performed under different language settings.
In contrast to the previous two types, where speakers have unrestricted access to the mental representations of each language, subordinate bilinguals exhibit a more "Translator"-like behaviour. When performing tasks in languages other than their dominant one, subordinate bilinguals tend to rely on translating their thoughts into their dominant language. This type of bilingualism is characterized by a single lexicon representation that is linked to their dominant language. Consequently, any multilingual task-solving necessitates first processing the information in their native language before translating it back into the target language. Subordinate bilingualism is frequently observed in individuals who acquire their second language later in life, and it is often accompanied by a constant translation process occurring within their minds. As a result, subordinate bilinguals may experience lower proficiency in language production.
Despite the previous study's demonstration of seemingly consistent multilingual performance in many large language models (LLMs), it remains unclear how proficiency in non-dominant languages, such as Japanese in the case of GPT-3, is acquired within the network. Specifically, it is unclear whether the LLMs exhibit a learned representation akin to compound/coordinate bilingualism or rely on translation processes resembling subordinate bilinguals. Interestingly, previous observations have revealed semantically similar responses when the same questions are posed in different languages to LLMs, suggesting a behaviour more aligned with subordinate/compound bilingualism. Our research aims to unravel the mechanisms underlying the acquisition of multilingual abilities in LLMs and examine the extent of performance variations across different languages. Such insights are crucial for ensuring the reliability and accuracy of LLMs in multilingual settings.
Language Dependent Ability Category
Language ability is a multifaceted concept encompassing various tasks and aspects (Wei et al., 2022). However, the inherent complexity of language evaluation poses challenges in assessing model or individual performance. To better understand and evaluate language ability within the broader field of Natural Language Processing (NLP), researchers have often classified sub-fields into distinct categories (Khurana et al., 2023). For instance, NLP tasks are commonly divided into two major groups: natural language generation (NLG) (Dong et al., 2022) and natural language understanding (NLU) (Wilks, 1976). Within NLG, tasks such as summarization (Nenkova and McKeown, 2012; Liu et al.), translation (Slocum, 1985), and paraphrase generation (Zhou and Bhat, 2021) are grouped together, while NLU encompasses tasks like word sense disambiguation (Zhang et al., 2022) and sememe prediction and representation learning (Cheng et al., 2020), among others.
However, the creation of such sub-tasks often lacks a systematic criterion and can lead to a fragmented landscape, particularly in the context of multilingual analysis. In this section, we propose an alternative approach to categorize tasks from two different perspectives, providing valuable insights into how different languages impact the ability separation. This categorization framework serves to facilitate the analysis of multilingual abilities, specifically within LLMs. Subsequently, we utilize these proposed categories in our experiments, enabling a comprehensive investigation of multilingual performance.
Categorization Based on Task Properties
The choice of language significantly impacts the performance of different tasks (Ahuja et al., 2023). While some tasks are relatively less language-dependent, as they heavily rely on universal language elements such as symbols and notations (e.g., mathematical question solving), others exhibit significant variations in results when the language is switched, due to cultural differences and linguistic properties. Bilingual or multilingual individuals undertaking tasks in different languages may demonstrate varying levels of consistency and performance, influenced by their fluency in the second language and the type of bilingualism they possess (coordinate or subordinate) (Diller, 1970). Therefore, a well-defined task-oriented categorization is essential for comprehending performance disparities across different languages and analyzing multilingual abilities. Figure 2 provides an overview of this categorization.
In our approach, we classify all NLP tasks into three distinct categories: reasoning, knowledge accessing, and articulating. This division is based on the extent to which each task is influenced by the language used, and on the magnitude of the language's impact on task outcomes.
Reasoning. The first category includes tasks that are minimally influenced by language. Reasoning tasks involve logical and rational thinking to solve problems based on available information and logical principles. Examples include mathematical problem-solving (Lu et al., 2022), coding (Li et al., 2023), and common sense reasoning (Sap et al., 2020). These tasks can be performed using universal language elements, such as mathematical symbols, or rely on general life experience and common sense that is not acquired through language. For example, a question like "If I drop an apple, which direction will it go?" relies more on an understanding of gravity than on language-specific knowledge, so consistent performance across languages is expected.
With LLMs, the limited explainability hinders our comprehensive understanding of how reasoning is executed within these complex networks.
To address this, we aim to unravel the underlying mechanisms and measure the cross-language consistency of reasoning tasks through prompting techniques, which will be detailed in a later section.
Knowledge Accessing. LLMs have the capability to function as Knowledge Bases (KBs) by storing knowledge extracted from extensive training data (Heinzerling and Inui, 2021). Human knowledge is encoded in natural language, which is inherently influenced by the lexicons of a specific language. While much knowledge is considered universal regardless of the language used (e.g., Paris being the capital of France), the lexicalization of knowledge impacts how it is structured and represented in various ways. For instance, the polysemy of the words used to define knowledge in one language can lead to ambiguous knowledge representations, which may not occur when another language is employed. Additionally, certain knowledge may be language-dependent and not exist in other language contexts. For example, the meaning of slang terms may only be fully defined within the language of origin. Overall, knowledge plays a vital role in the abilities of LLMs, and its representation is influenced to some extent by language, distinguishing it from the reasoning category.
Tasks that involve knowledge representation fall under this category and can be used to assess the knowledge enrichment of LLMs. Examples include factual knowledge checking (Cao et al., 2021), knowledge-focused question answering (Wang, 2022), and word sense disambiguation (Zhang et al., 2022) (sense knowledge), among others. An illustrative prompt to evaluate this ability is provided in Figure 2.

Articulating. Much of everyday human conversation is highly language-dependent, as it involves the pragmatics and cultural nuances of the spoken language. For instance, writing a cover letter in English significantly differs in format and word choice from writing one in Japanese, reflecting distinct social norms and conventions. Moreover, individuals who use specific languages often share similar backgrounds, which influences their thinking patterns and gives rise to unique styles and expressions for conveying ideas. Tasks that are heavily influenced by language choices fall under the "Articulating" category. This category encompasses tasks such as summarization (Nenkova and McKeown, 2012), dialogue generation (Ni et al., 2022), paraphrasing (Zhou and Bhat, 2021), and style writing (Jin et al., 2021), among others. These tasks require a deep understanding of cultural and linguistic contexts, as they involve capturing and reproducing the appropriate style, tone, and manner of expression specific to a given language.

Discussion. Categories in the NLP field are not mutually exclusive, as tasks can exhibit properties of multiple categories. For example, NLG tasks like summarization rely on NLU for understanding the input text. In our categorization, tasks can also rely on each other. For instance, while articulating tasks require knowledge access, their primary focus is on generating socially appropriate text rather than serving as a dictionary. Recognizing these interdependencies provides a comprehensive understanding of how LLMs handle multilingual tasks and of the complex nature of language processing.
Category by Translatability
Multilingual analysis often relies on translation, which unifies text into the same language for comparison. We propose a second way of categorizing tasks that is essential for such translation-based analysis and that facilitates the analysis of the multilingual type of LLMs.
Translating Equivariant and Translating Variant
We introduce the concepts of Translating Equivariant (TE) and Translating Variant (TV) to facilitate task categorization. An equivariant function gives the same result whether a given transformation is applied before or after the function. Formally, f(x) is said to be equivariant under g(·) if:
∀x ∈ D, g(f(x)) = f(g(x))    (1)
Here, D represents the domain of both functions f and g.
In our context, we define g(·) as a translation function that converts an input sentence x in language A to x′ in language B, with g realized by a machine translation system. A task is considered translating equivariant (TE) between languages A and B if the correct output, given by a solver system f(x), remains unchanged when the translation system g is applied to both input and output. An example is illustrated in Figure 3.
Most of the tasks in the Reasoning and Knowledge Access categories defined earlier are regarded as translating equivariant, since their core functionality does not rely on the lexicons specific to a chosen language.
Likewise, we define TV tasks to be those whose correct output changes when the input is translated. Such tasks rely heavily on the language choice, and most tasks in the Articulating category are TV, such as summarization, paraphrasing, and pun detection.
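As a minimal sketch of this distinction, the following Python check (with hypothetical `solve` and `translate` helpers standing in for a task solver and an MT system; neither is specified in the paper) tests whether Equation 1 holds on a set of instances:

```python
def is_translating_equivariant(solve, translate, instances, equal=lambda a, b: a == b):
    """Test g(f(x)) == f(g(x)) for every instance x (Equation 1).

    solve     -- the task solver f, operating on text in language A or B
    translate -- the translation function g, mapping language A to language B
    equal     -- output comparator (exact match by default; a semantic
                 comparator could be substituted for free-form outputs)
    """
    for x in instances:
        if not equal(solve(translate(x)), translate(solve(x))):
            return False  # equivariance broken: treat the task as TV on this data
    return True  # the task behaved as TE on all tested instances
```

A pun-detection solver, for instance, would typically fail this check, since translation usually destroys the pun.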
Such a categorization facilitates task analysis under multilingual settings, which we elaborate on in the Methods and Experiments sections.
Methods
In this section, we present our systematic and quantitative approach to analyzing the multilingual ability of LLMs. Our method includes a translation-based prompting approach to measure the performance differences resulting from language choice in LLMs. Additionally, we employ a back-translation-based method to assess the consistency of the network's internal reasoning process, coupled with a TV task prompting method to determine the type of bilingualism (coordinate/compound/subordinate) exhibited by the LLMs. We release our code and data at:
Prompt Translation for Measuring Differences in Multilingual Task Performance
Acquiring a multilingual dataset can be challenging due to the requirement of numerous multilingual annotators. However, with the advancements in Machine Translation (MT) systems and LLMs, we can leverage translation-based data augmentation to overcome this limitation effectively and accurately. For most tasks, we can utilize a translation system to generate the multilingual version of the same task data, preserving the semantic meaning. Exceptions may arise in translating variant tasks, which we will address in a later section. An overview of this method is presented in part (a) of Figure 4. Given an instance t = (x, y) of task T, where x represents the input and y is the corresponding label, we employ a translation function g(·) to map the instance t from its original language A to language B. The translated instance is denoted as t̃ = (x̃, ỹ). Next, we use a pre-designed prompt p in the original language A, combined with the input x, to query the LLMs, and the response is denoted as r. We then apply translation to the prompt p to obtain p̃, and repeat the query process using p̃ and t̃. The response r̃ is collected accordingly. It is important to note that prompting in language B is conducted separately from language A to prevent potential information crossover.
We then compare the pairwise difference between the answers r and r̃. Assuming that the LLM successfully acquires the representation of task T, and given that T is translating equivariant, the pairwise responses r and r̃ for each instance of T should be semantically the same. In other words, translating equivariant problems (such as mathematical problem solving) should not depend on the language used to query the LLMs, and the expected outcome should be consistent (regardless of whether it is correct or incorrect). Failure to achieve such consistency provides evidence of unsuccessful learning of the representation of task T within the LLM network, indicating potential performance differences caused by language choice. The extent of such differences can be quantitatively measured through pairwise response comparisons.
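A minimal sketch of this querying loop is shown below; `translate` and `query_llm` are assumed helpers (an MT system and a single-session LLM call, not tied to a specific API), and fresh sessions per query prevent cross-language leakage:

```python
def multilingual_prompting(instances, prompt, translate, query_llm):
    """Collect paired responses (r, r~) for each task instance, querying the
    LLM separately in language A and in language B."""
    prompt_b = translate(prompt)  # p~: the task prompt rendered in language B
    pairs = []
    for x, _label in instances:
        r = query_llm(prompt + "\n" + x)                     # response in language A
        r_tilde = query_llm(prompt_b + "\n" + translate(x))  # response in language B
        pairs.append((r, r_tilde))
    return pairs
```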
Back-Translation Prompt
Even though the final results of task T may be consistent across different languages, it is still important to determine the type of bilingualism that the network is most similar to. This is crucial for individuals who use LLMs for multilingual tasks, as it can impact the way task results are generated. For example, a network similar to subordinate bilingualism would perform tasks in a translation fashion rather than relying on the network's representation of that specific task and language (coordinate/compound). This can affect the quality and consistency of the results.
To quantitatively measure how reasoning is performed, we propose a back-translation based prompting method, as illustrated in part (b) of Figure 4. Given an instance t in task T, we perform translation-based prompting separately in language A and language B, as described in Section [reference]. After obtaining the response, we use another designed prompt u and the corresponding ũ (e.g., "Explain how you obtained this result") in language A and language B, respectively, to capture the reasoning process behind the obtained response. We denote the explanations for obtaining the response in each language as l and l̃, respectively. We then apply back-translation to l̃, from language B back to language A, and compute the pairwise N-gram difference with l.
If the LLM is performing translation-based reasoning, the reasoning process is conducted in one language and then translated into another. Since the language model's internal reasoning process can be partially observed through the output explanation (Chain of Thought prompting), back-translating such explanations to the same language allows us to compare the internal reasoning differences. High N-gram-based overlapping indicates homogeneity in using the same internal network to perform the task in both language A and language B. On the other hand, dissimilarity in the internal network during the reasoning process is indicated by a lower N-gram-based overlapping.
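The sketch below outlines this comparison; `back_translate` and `similarity` are assumed helpers (an MT call from language B back to A, and an n-gram overlap score such as the one defined under Metrics):

```python
def reasoning_similarity(x, prompt, explain_prompt, translate, back_translate,
                         query_llm, similarity):
    """Compare the language-A explanation l with the back-translated
    language-B explanation l~, as a proxy for shared internal reasoning."""
    l = query_llm(prompt + "\n" + x + "\n" + explain_prompt)  # explanation in A
    l_b = query_llm(translate(prompt) + "\n" + translate(x) + "\n"
                    + translate(explain_prompt))              # explanation in B
    l_tilde = back_translate(l_b)                             # map B back to A
    return similarity(l, l_tilde)  # high overlap suggests a shared process
```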
Translating Variant Tasks for Measuring Translation Usage in LLMs
Although back-translation prompting quantitatively measures the homogeneity of the reasoning process, the Chain-of-Thought output cannot always fully characterize the whole reasoning landscape in LLMs, and thus does not by itself constitute strong proof that the LLM translates the task first and then performs it. We provide another approach, using translating variant tasks, to further probe the use of translation in LLMs when performing multilingual tasks. As defined in Section 3.3, a TV task has a different correct output when the translation function is applied to the instances of the task. We utilize this property to examine whether the answer changes when a given prompt and TV question are translated into another language (e.g., whether the pun disappears when the pun is translated into another language). If the output stays the same before and after the translation, this is a strong indication that translation is applied to the instance within the network before the reasoning is done. On the other hand, if the answer changes accordingly, we have reason to believe the model is performing the task in its learned representation of language B instead of translating the instance into language A.
Countering Randomness in LLMs
Due to the random decoding strategy employed in most LLMs, the results can vary between different inference sessions even when the same input is used. This randomness introduces inaccuracies and inconsistencies into the statistics obtained using the above approaches. To address this issue, we employ repeated prompting to counteract the randomness caused by the decoding process.
Specifically, for each instance I with the designed prompt p, we repeat the above method n times independently. We then select the mode as the final answer (e.g., if we observe 4 instances with answer A and 1 instance with answer B, we choose A as the final answer) when calculating the consistency score. For the back-translation prompting, we obtain all n Chain-of-Thought (CoT) reasoning explanations from the LLM in each language and calculate the mutual pairwise n-gram difference between the two languages (n × n calculations in total). We then select the highest score as the most similar reasoning pair, as shown in Figure 6. This repetition and selection process helps mitigate the impact of randomness and provides more reliable and consistent results.
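A small sketch of both selection rules (assuming the per-question answers and explanations have already been collected):

```python
from collections import Counter
from itertools import product

def stable_answer(answers):
    """Mode over n repeated runs, e.g. ['A', 'A', 'B', 'A', 'A'] -> 'A'."""
    return Counter(answers).most_common(1)[0][0]

def best_pair_similarity(expls_a, expls_b_backtranslated, similarity):
    """Max similarity over all n x n explanation pairs (cf. Figure 6)."""
    return max(similarity(a, b)
               for a, b in product(expls_a, expls_b_backtranslated))
```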
Multilingual Type Deciding
Using the approach described earlier, we can effectively identify the source of multilingual capability in Language Models (LLMs) by examining their multilingual type. As shown in the flow chart in Figure 5, a subordinate LLM (part (c) of Figure 1) is one that performs cross-language tasks mainly based on translation. Consequently, it shows consistent results for both TE and TV tasks across different languages, as it simply translates the results of its "native" language's reasoning. We detect this type mainly by examining whether it fails to perform TV tasks. Similarly, compound LLMs are ones that access a joint representation of both languages in the network and thus show consistency on TE tasks. However, a compound bilingual performs reasoning steps directly in both languages through the network instead of naive translation, and is thus not "tricked" by the TV tasks, producing different results accordingly. In contrast, a coordinate LLM maintains distinct network representations for different languages. Consequently, this type of model may display noticeable discrepancies even on TE tasks, due to the independent treatment of each language.
We conduct experiments using the proposed approach. We first apply translation-based prompting and back-translation prompting to representative tasks of each category, namely Reasoning, Knowledge Access, and Articulating. We then analyze our results for TE tasks and TV tasks separately before drawing conclusions for the specific model we use, GPT-3.5. We detail our experimental process in the following subsections.
Setting
Dataset
Reasoning: We used two datasets for reasoning, the CommonsenseQA (Talmor et al., 2019) dataset and the GSM8K (Cobbe et al., 2021) dataset. The CommonsenseQA dataset is a question-answering challenge targeting commonsense knowledge, which aims to test the ability to utilize logic and common sense beyond associations to find the most appropriate subject under specific conditions. Each question consists of a question stem that illustrates the scene and the property of the target, and five choices, of which only one best describes the stem. The GSM8K dataset is a dataset of high-quality, linguistically diverse grade-school math word problems; each question consists of a question stem and an explanation of the final answer, and all final answers are integers. Knowledge Access: The WebQuestions (Bordes et al., 2014) dataset is a question-answering dataset using Freebase as the knowledge base. For example, a question could ask: "Where is the capital city of the USA?", and the answer would be "Washington DC". We used the WebQuestions dataset as a base to build a multiple-choice dataset in the same format as CommonsenseQA. The original answer to a question becomes one choice of the newly generated multiple-choice question. We took extra care to make the distractor choices realistic enough to fool the model; for example, if the question asks for a person's name, the generated distractors are names of real-world celebrities. The reason for this conversion is that the target answers of WebQuestions are Freebase entries that can potentially be expressed in multiple ways, so it is sometimes hard to judge the correctness of free-form output answers; with a multiple-choice format, the answer is unambiguous. A sketch of this conversion is given below.
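A minimal sketch of the conversion step; the distractor pool, its typing logic, and the count of four distractors (five choices total) are illustrative assumptions, since the paper does not specify the exact procedure:

```python
import random

def to_multiple_choice(question, gold_answer, distractor_pool, k=4, seed=0):
    """Convert a WebQuestions item into a CommonsenseQA-style 5-way
    multiple-choice question by sampling k realistic distractors
    (e.g. celebrity names when the gold answer is a person)."""
    rng = random.Random(seed)
    distractors = rng.sample([d for d in distractor_pool if d != gold_answer], k)
    choices = distractors + [gold_answer]
    rng.shuffle(choices)
    return {"stem": question, "choices": choices, "answer": gold_answer}
```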
Articulating: Since most datasets under this category cannot fully examine articulating properties under multilingual settings, we propose a set of prompts asking for a cover letter with randomly generated constraints. For each prompt, we first generate the name of a person and his/her background, such as hobbies, specialties, and level of education. We then randomly select one well-known real-world company as the target company for the cover letter. Finally, we select a set of topics, such as "What skills would you want to develop in this role?", as the final constraints on the content. An example of a prompt is: "You are Stewie Griffin from the university with a GPA of 3.6. You like Rapping. You are great at programming and software development. You want to join Nestle company. Write a cover letter about: If anything was possible, what would you do to improve our company? What challenges are you looking for in your next position? and What's a goal you set that you didn't meet and can you explain what happened?"
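A sketch of how such prompts can be assembled; the field names and the choice of exactly three topics mirror the example above, while the generation procedure itself is our assumption:

```python
import random

def cover_letter_prompt(name, gpa, hobby, skill, company, topic_pool, seed=0):
    """Assemble a cover-letter prompt with randomly selected constraints."""
    rng = random.Random(seed)
    t1, t2, t3 = rng.sample(topic_pool, 3)  # pick three content constraints
    return (f"You are {name} from the university with a GPA of {gpa}. "
            f"You like {hobby}. You are great at {skill}. "
            f"You want to join {company} company. "
            f"Write a cover letter about: {t1} {t2} and {t3}")
```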
Model.
Due to limited access to many LLMs in the area, we simply use the commonly available model GPT-3.5. We access this model through the official OpenAI web application.
Metrics
Since ChatGPT can give different answers to the same question, in our experiments each question is prompted five times, and the most frequent output is considered the stable final answer. For each question, if the final answer matches the gold answer, we annotate it as "1", and "0" otherwise.
Accuracy:
Accuracy measures the level of correctness of the set of answers under each language. It equals the number of correct final answers over the total number of questions.
Consistency:
Consistency measures how consistent the final answers are between pairs of different languages. If the final answers to the same question are both correct or both incorrect, we count such a pair as a consistent case. The consistency of the final answers between a pair of languages equals the number of consistent cases over the total number of questions.
Unigram similarity: Alongside each answer comes an explanation of why ChatGPT gives that answer. Unigram similarity aims to detect how similar the explanations of the answers are. In case two answers under two languages to the same question are the same, we calculate the overlapping rate of the tokens in the explanations. Thus, with five answers for each question under each language, there are at most 25 similarity scores for each question. We take the maximum of these similarity scores and consider it the final similarity score for that question. The unigram similarity between two languages is then the average of all the final similarity scores.
BERT similarity: Similar to unigram similarity; however, for each pair of explanations, we calculate the cosine similarity of the BERT (Devlin et al., 2019) embeddings of the sentences instead of the token-level overlapping rate.
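A sketch of these metrics follows; the set-based overlap formula and the `embed` helper (any sentence-level BERT encoder) are our assumptions, since the paper does not pin down the exact definitions:

```python
import numpy as np

def unigram_similarity(expl_a, expl_b):
    """Token-level overlap rate between two explanations (same language)."""
    a, b = set(expl_a.lower().split()), set(expl_b.lower().split())
    return len(a & b) / max(len(a | b), 1)

def consistency(marks_a, marks_b):
    """marks_* hold 0/1 correctness per question; a pair is consistent when
    both languages are right or both are wrong on that question."""
    return sum(x == y for x, y in zip(marks_a, marks_b)) / len(marks_a)

def bert_similarity(expl_a, expl_b, embed):
    """Cosine similarity of sentence embeddings, e.g. from a BERT encoder."""
    u, v = embed(expl_a), embed(expl_b)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```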
TE Tasks Results.
We used English as the source language and five other languages (simplified Chinese, Japanese, French, German, and Spanish) as the translation target languages for the translating equivariant task experiments. Generally speaking, the experiments in English have relatively high accuracy, yet the answers remain consistent with all other languages. As seen in Table 1, we observe higher consistency and reasoning similarity when less language-dependent tasks are performed, with math having the highest similarity in the reasoning process. Although some dissimilarities can be observed, due to possible translation variance and the model's inference variance, such high similarity still indicates homogeneity in the reasoning process. Following the proposed approach, we have reason to believe that the network's reasoning under multilingual settings is close to a compound or subordinate structure. We further investigate these differences with translating variant tasks.
TV Task Results.
We choose style writing, specifically cover letter writing, from the Articulating category for this evaluation, coupled with the pun-detection task, which is considered highly translating variant. As seen in Table 3, cover letters across different languages yield very high 1-gram overlap and BERT similarity. This indicates a high overlap in word choice and semantic representation. A case study has shown that GPT-generated cover letters are highly similar to each other under different language settings when the same instruction is provided. We then use the JOKER pun-detection dataset, which consists of English sentences with labels indicating whether each is a pun or not. Similarly, high consistency is observed when the pun is translated, even though the pun is not preserved after the translation. We provide case studies for selected pun detection cases, as shown in Table 4, where we see a clear reasoning process under the English language domain for translated puns.
Conclusion
In this work, we have proposed a systematic approach for analyzing multilingual ability in LLMs.
Our proposed method has provided evidence for a subordinate-like network structure in GPT-3.5, and examples have shown that such a translation-based reasoning process can cause problems in certain tasks. Further prompt design can be investigated in the future to counter this problem.
Figure 1: Distinction of three types of bilingualism.
Figure 2: Three categories of language dependent tasks.
Figure 3: Difference between TV and TE tasks.
Figure 4: An overview of our method.
Figure 5: Flowchart of deciding the multilingual type.
Figure 6: Counter randomness techniques.
Table 1: Results for TE tasks.

TE Task       Language  Answer    Consistency                          1-gram   BERT
                        Accuracy  En    De    Fr    Zh    Ja    Es     with En  Similarity
Common Sense  En        0.58      -     0.78  0.60  0.54  0.64  0.70   -        -
Reasoning     De        0.44      0.78  -     0.62  0.64  0.70  0.76   0.5918   0.8806
              Fr        0.54      0.60  0.62  -     0.66  0.64  0.62   0.5858   0.8574
              Zh        0.36      0.54  0.64  0.66  -     0.66  0.64   0.5589   0.8527
              Ja        0.42      0.64  0.70  0.66  0.66  -     0.62   0.5616   0.8407
              Es        0.32      0.70  0.76  0.62  0.64  0.62  -      0.6055   0.8749
Math          En        0.90      -     0.88  0.86  0.84  0.84  0.90   -        -
Reasoning     De        0.78      0.88  -     0.82  0.84  0.76  0.90   0.8051   0.9448
              Fr        0.80      0.86  0.82  -     0.78  0.82  0.92   0.8159   0.9575
              Zh        0.78      0.84  0.84  0.78  -     0.76  0.82   0.7567   0.9043
              Ja        0.82      0.84  0.76  0.82  0.76  -     0.82   0.7257   0.9140
              Es        0.80      0.90  0.90  0.92  0.82  0.82  -      0.8196   0.9540
Knowledge     En        0.94      -     0.92  0.94  0.64  0.80  0.86   -        -
Access        De        0.98      0.92  -     0.94  0.64  0.80  0.94   0.6834   0.9330
              Fr        0.92      0.94  0.94  -     0.66  0.82  0.88   0.6685   0.9221
              Zh        0.62      0.64  0.64  0.66  -     0.76  0.62   0.5968   0.8707
              Ja        0.78      0.80  0.80  0.82  0.76  -     0.82   0.6114   0.8962
              Es        0.92      0.86  0.94  0.98  0.62  0.82  -      0.6567   0.9246
Table 2: Results for the pun detection dataset.
Table 3: Results for cover letter writing.

Language  1-gram with En  Sentence-BERT Similarity
En        -               -
De        0.6426          0.8649
Fr        0.6380          0.8549
Zh        0.6209          0.8405
Ja        0.6095          0.8180
Es        0.6380          0.8499
Acknowledgements

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Alberta Machine Intelligence Institute (Amii).

Table 4: Case study for pun detection.

Prompt (EN): A bicycle can't stand on its own because it is two-tired. Is there a pun in this sentence?
English: Yes, there is a pun in this sentence. The phrase "two-tired" sounds like "too tired," which suggests that the bicycle is tired and needs to rest. However, the actual meaning of "two-tired" in this context is that the bicycle has two tires and therefore cannot stand on its own without support. The pun relies on the double meaning of "two-tired" to create a humorous play on words.

Prompt (ZH): 这句话有双关么?:自行车不能独自站立,因为它有两个轮胎 (in a completely new session)
Chinese: 是的,这句话有双关含义。在字面上,句子表达了一个事实:自行车不能独自站立,因为它有两个轮胎。然而,这句话也可以被解释为一个玩笑或谐音。"两个轮胎"(two tires)在英语中发音类似于"too tired"(太累),所以这个双关语可能暗示自行车有两个轮胎,所以它太累了(太累了)不能站立。
Prompt: translate this into English
English: Yes, this sentence has a pun. Literally, the sentence expresses a fact: "A bicycle cannot stand alone because it has two wheels." However, the sentence can also be interpreted as a joke or wordplay. "Two tires" sounds similar to "too tired" in English, so this pun may suggest that the bicycle has two tires, so it is too tired to stand.

Prompt (FR): Y a-t-il un jeu de mots dans cette phrase? "Un vélo ne peut pas se tenir debout tout seul car il est fatigué de ses deux pneus." (in a completely new session)
French: Oui, il y a un jeu de mots dans cette phrase. Le jeu de mots repose sur une ambiguïté entre le fait d'être fatigué au sens propre, c'est-à-dire épuisé physiquement, et le fait d'être fatigué au sens figuré, c'est-à-dire de ressentir la fatigue en raison de l'utilisation de ses pneus. Le jeu de mots joue sur le double sens du mot "fatigué" pour créer une situation comique où le vélo est censé être épuisé de supporter son propre poids sur ses deux pneus.
Prompt: translate this into English
English: Yes, there is a play on words in this sentence. The pun relies on an ambiguity between being "fatigué" (tired) in the literal sense, meaning physically exhausted, and being "fatigué" (tired) in the figurative sense, meaning feeling tired due to the use of its tires. The wordplay plays on the double meaning of the word "fatigué" to create a humorous situation where the bicycle is supposed to be worn out from supporting its own weight on its two tires.

Prompt (ES): ¿Hay un juego de palabras en esta oración: "Una bicicleta no puede mantenerse en pie por sí misma porque tiene dos cansadas"? (in a completely new session)
Spanish: Sí, hay un juego de palabras en esa oración. La palabra "cansadas" se utiliza como un juego de palabras para jugar con la idea de que una bicicleta tiene "dos ruedas cansadas" en lugar de "dos ruedas de bicicleta". Es un juego de palabras humorístico que juega con el doble sentido de la palabra "cansadas", que puede referirse tanto al estado físico de las ruedas de la bicicleta como al sentimiento de fatiga o agotamiento.
Prompt: translate this into English

Table 5: Example of polysemy changing in Chinese.

English: The most famous bat brand for baseball is Louisville Slugger. Louisville Slugger has been a renowned brand in the baseball industry for over 135 years and is widely recognized for producing high-quality baseball bats. They have been the official bat of Major League Baseball (MLB) for many years and are used by numerous professional baseball players. Louisville Slugger bats are known for their durability, performance, and iconic design.
Prompt: translate this into English
English: In the world of baseball, there are several well-known and highly respected bat brands. Here are some of the most famous bat brands: Louisville Slugger: Louisville Slugger bats are one of the most popular and well-known brands in baseball history. They are commonly seen in Major League and amateur baseball games.

Prompt (ZH): 棒球最出名的老虎牌子有哪些?
Chinese: 关于棒球最出名的"老虎牌子",我了解到您可能是指与底特律老虎队(Detroit Tigers)有关的品牌。然而,老虎队并没有以自己的名字推出专门的棒球品牌。
Prompt: translate this into English
English: Regarding the most famous "Tiger brand" in baseball, I understand that you may be referring to brands associated with the Detroit Tigers team. However, the Tigers team does not have its own baseball brand named after them.
Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Samuel Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, et al. 2023. Mega: Multilingual evaluation of generative AI. arXiv preprint arXiv:2303.12528.

Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. CoRR.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models.

Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, and Lawrence Carin. 2020. Improving disentangled text representation learning with information-theoretic guidance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7530-7541, Online. Association for Computational Linguistics.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR.

Maria Rosaria D'Acierno. 1990. Three types of bilingualism.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding, pages 4171-4186.

Karl Diller. 1970. "Compound" and "coordinate" bilingualism: A conceptual artifact. Word, 26:254-261.

Chenhe Dong, Yinghui Li, Haifan Gong, Miaoxin Chen, Junxin Li, Ying Shen, and Min Yang. 2022. A survey of natural language generation. ACM Computing Surveys, 55(8):1-38.

Benjamin Heinzerling and Kentaro Inui. 2021. Language models as knowledge bases: On entity representations, storage capacity, and paraphrased queries.

Charlotte Hoffmann. 2014. Introduction to bilingualism.

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2021. Deep learning for text style transfer: A survey.

Diksha Khurana, Aditya Koli, Kiran Khatter, and Sukhdev Singh. 2023. Natural language processing: state of the art, current trends and challenges. Multimedia Tools and Applications, 82(3):3713-3744.

Jia Li, Ge Li, Yongmin Li, and Zhi Jin. 2023. Enabling programming thinking in large language models toward code generation.

Puyuan Liu, Xiang Zhang, and Lili Mou. A character-level length-control algorithm for non-autoregressive sentence summarization. In Advances in Neural Information Processing Systems.

Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. 2022. A survey of deep learning for mathematical reasoning.

Ani Nenkova and Kathleen McKeown. 2012. A survey of text summarization techniques. Mining Text Data, pages 43-76.

Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, and Erik Cambria. 2022. Recent advances in deep learning based dialogue systems: A systematic survey.

Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476.

Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 27-33, Online. Association for Computational Linguistics.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. 2022. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057.

Jonathan Slocum. 1985. A survey of machine translation: Its history, current status and future prospects. Computational Linguistics, 11(1):1-17.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. ArXiv.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models.

Zhen Wang. 2022. Modern question answering datasets and benchmarks: A survey.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.

Yorick Wilks. 1976. Natural language understanding systems within the AI paradigm: a survey and some comparisons. American Journal of Computational Linguistics.

Xinyan Velocity Yu, Akari Asai, Trina Chatterjee, Junjie Hu, and Eunsol Choi. 2022. Beyond counting datasets: A survey of multilingual dataset construction and necessary resources.

Xiang Zhang, Bradley Hauer, and Grzegorz Kondrak. 2022. Improving HowNet-based Chinese word sense disambiguation with translations. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4530-4536, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Xiang Zhang, Ning Shi, Bradley Hauer, and Grzegorz Kondrak. 2023. Bridging the gap between BabelNet and HowNet: Unsupervised sense alignment and sememe prediction. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2789-2798, Dubrovnik, Croatia. Association for Computational Linguistics.

Jianing Zhou and Suma Bhat. 2021. Paraphrase generation: A survey of the state of the art. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5075-5086, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393-1398.
| [] |
[] | [
"\nASUMAN GÜVEN AKSOY\nDANIEL AKECH THIONG\nDepartment of Mathematics\nClaremont McKenna College\n850 Columbia Av-enue91711ClaremontCAUSA\n",
"\nDepartment of Mathematics\nClaremont Graduate University\n710 N. Col-lege Avenue91711ClaremontCAUSA\n"
] | [
"ASUMAN GÜVEN AKSOY\nDANIEL AKECH THIONG\nDepartment of Mathematics\nClaremont McKenna College\n850 Columbia Av-enue91711ClaremontCAUSA",
"Department of Mathematics\nClaremont Graduate University\n710 N. Col-lege Avenue91711ClaremontCAUSA"
] | [] | We investigate an extension of Schauder's theorem by studying the relationship between various s-numbers of an operator T and its adjoint T * . We have three main results. First, we present a new proof that the approximation number of T and T * are equal for compact operators. Second, for non-compact, bounded linear operators from X to Y , we obtain a relationship between certain s-numbers of T and T * under natural conditions on X and Y . Lastly, for noncompact operators that are compact with respect to certain approximation schemes, we prove results for comparing the degree of compactness of T with that of its adjoint T * . | null | [
"https://export.arxiv.org/pdf/2306.03629v1.pdf"
] | 259,089,057 | 2306.03629 | a29830f7684c4b18f425a525cb602b7fbf8229a3 |
6 Jun 2023
ASUMAN GÜVEN AKSOY
DANIEL AKECH THIONG
Department of Mathematics
Claremont McKenna College
850 Columbia Avenue91711ClaremontCAUSA
Department of Mathematics
Claremont Graduate University
710 N. College Avenue91711ClaremontCAUSA
6 Jun 2023 EQUALITY IN DEGREES OF COMPACTNESS: SCHAUDER'S THEOREM AND S-NUMBERS. 2010 Mathematics Subject Classification: Primary 47A16, 47B10; Secondary 47A68. Key words and phrases: s-numbers, approximation schemes, Schauder's theorem.
We investigate an extension of Schauder's theorem by studying the relationship between various s-numbers of an operator T and its adjoint T * . We have three main results. First, we present a new proof that the approximation number of T and T * are equal for compact operators. Second, for non-compact, bounded linear operators from X to Y , we obtain a relationship between certain s-numbers of T and T * under natural conditions on X and Y . Lastly, for noncompact operators that are compact with respect to certain approximation schemes, we prove results for comparing the degree of compactness of T with that of its adjoint T * .
Introduction
In the following, we give a brief review of the background, notation, and terminology that will be relevant to this paper. Let L(X, Y ) denote the normed vector space of all continuous operators from X to Y , X * be the dual space of X, and K(X, Y ) denote the collection of all compact operators from X to Y . Denote by T * ∈ L(Y * , X * ) the adjoint operator of T ∈ L(X, Y ). The well known theorem of Schauder states that T ∈ K(X, Y ) if and only if T * ∈ K(Y * , X * ). The proof of Schauder's theorem that uses Arzelà-Ascoli Theorem is presented in most textbooks on functional analysis (see, e.g., [19]). A new and simple proof which does not depend on Arzelà-Ascoli can be found in [20]. Recalling the fact that a class of operators A(X, Y ) ⊂ L(X, Y ) is called symmetric if T ∈ A(X, Y ) implies T * ∈ A(Y * , X * ), we note that Schauder's Theorem assures that the class K(X, Y ) of compact operators between arbitrary Banach spaces X and Y is a symmetric operator ideal in L(X, Y ).
In [18] F. Riesz proved that compact operators have an at most countable set of eigenvalues λ n (T ) which, arranged in a sequence, tends to zero. This result raises the question: what are the conditions on T ∈ L(X, Y ) such that (λ n (T )) ∈ ℓ q ? Specifically, what is the rate of convergence to zero of the sequence (λ n (T ))? To answer these questions, in [15] and [17], A. Pietsch developed s-numbers s n (T ) (closely related to singular values), which characterize the degree of compactness of T . The concept of s-numbers s n (T ) is introduced axiomatically in [15]; their relationship to eigenvalues is given in detail in [17]. Definition 1.1. A map which assigns to every operator T a scalar sequence is said to be an s-function if the following conditions are satisfied:
(1) ||T || = s 1 (T ) ≥ s 2 (T ) ≥ · · · ≥ 0 for T ∈ L(X, Y ).
(2) s m+n−1 (S + T ) ≤ s m (S) + s n (T ) for S, T ∈ L(X, Y ).
(3) s n (RT K) ≤ ||R|| s n (T ) ||K|| for K ∈ L(X 0 , X), T ∈ L(X, Y ), R ∈ L(Y, Y 0 ). (4) If rank(T ) < n, then s n (T ) = 0. (5) s n (I n ) = 1, where I n is the identity map of ℓ_2^n. We call s n (T ) the n-th s-number of the operator T . Observe that s n (T ) depends on T continuously since
|s n (S) − s n (T )| ≤ ||S − T ||.
In [15] it is shown that there is only one s-function on the class of all operators between Hilbert spaces. For example, if we let T be a diagonal operator acting on ℓ 2 such that T (x n ) = (λ n x n ), where λ 1 ≥ λ 2 ≥ · · · ≥ 0 then s n (T ) = λ n for every s-function.
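As a sketch of why this holds for the approximation numbers (a minimal argument, using the convention that the infimum runs over operators of rank < n, as in property (4) above):

```latex
% Upper bound: truncation. A_{n-1}x = (\lambda_1 x_1, \dots, \lambda_{n-1} x_{n-1}, 0, \dots)
% has rank < n, so
a_n(T) \le \|T - A_{n-1}\| = \sup_{k \ge n} \lambda_k = \lambda_n .
% Lower bound: if rank(A) < n, then \ker A meets the n-dimensional subspace
% span\{e_1, \dots, e_n\} nontrivially; for a unit vector x in this intersection,
\|T - A\| \ge \|(T - A)x\| = \|Tx\|
          = \Big( \sum_{k=1}^{n} \lambda_k^2 |x_k|^2 \Big)^{1/2} \ge \lambda_n .
% Hence a_n(T) = \lambda_n, and by the uniqueness of the s-function on Hilbert
% spaces, s_n(T) = \lambda_n for every s-function.
```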
However for Banach spaces there are several possibilities of assigning to every operator T : X → Y a certain sequence of numbers {s n (T )} which characterizes the degree of approximability or compactness of T . The main examples of s-numbers to be used in this paper are approximation numbers, Kolmogorov numbers, Gelfand numbers and symmetrized approximation numbers which are all defined below.
First, for two arbitrary normed spaces X and Y , we define the collection of the finite-rank operators as follows:
F n (X, Y ) = {A ∈ L(X, Y ) : rank(A) ≤ n}, and F (X, Y ) = ⋃_{n=0}^{∞} F n (X, Y )
which forms the smallest ideal of operators that exists. Definition 1.2. In the following we define the s-numbers we will use.
(1) The nth approximation number a n (T ) = inf{||T − A|| : A ∈ F n (X, Y )}, n = 0, 1, . . .
Note that a n (T ) provides a measure of how well T can be approximated by finite mappings whose range is at most n-dimensional. It is clear that the sequence {a n (T )} is monotone decreasing and lim n→∞ a n (T ) = 0 if and only if T is the limit of finite rank operators. It is known that the largest s-number is the approximation number. This is so because a : S → (a n (S)) is an s-function, and if we consider S ∈ L(X, Y ) and L ∈ F (X, Y ) with rank(L) < n, then
s n (S) ≤ s n (L) + ||S − L|| = ||S − L||.
Therefore s n (S) ≤ a n (S). See [7] or [15] for more details.
(2) The nth Kolmogorov diameter of T ∈ L(X) is defined by
δ n (T ) = inf{||Q G T || : dim G ≤ n}
where the infimum is over all subspaces G ⊂ X such that dim G ≤ n and Q G denotes the canonical quotient map Q G : X → X/G.
(3) The nth Gelfand number of T , c n (T ) is defined as:
c n (T ) = inf{ε > 0 : ||T x|| ≤ sup_{1≤i≤k} |⟨x, a i ⟩| + ε||x|| for all x ∈ X}
where a i ∈ X * , 1 ≤ i ≤ k with k < n. It follows that an operator T is compact if and only if c n (T ) → 0 as n → ∞. (4) The nth symmetrized approximation number τ n (T ) for any operator T defined between arbitrary Banach spaces X and Y is defined as follows:
τ n (T ) = δ n (J Y T ) where J Y : Y → ℓ ∞ (B Y * )
is an embedding map. Note that above definition is equivalent to
τ n (T ) = a n (J Y T Q X )
as well as to
τ n (T ) = c n (T Q X ), where Q X : ℓ 1 (B X ) → X is a metric surjection onto X given by Q X (ξ x ) = B X ξ x x for (ξ x ) ∈ ℓ 1 (B X ) .
It is possible to compare various s-numbers such as a n (T ), δ n (T ), c n (T ) if one imposes some mild restrictions on X and Y . With this purpose in mind we define well known concepts of lifting and extension properties. Definition 1.3. In the following we introduce two well-known important properties of Banach spaces. See [7] for details.
(1) We say that a Banach space X has the lifting property if for every T ∈ L(X, Y /F ) and every ǫ > 0 there exists an operator S ∈ L(X, Y ) such that ||S|| ≤ (1 + ǫ)||T || and T = Q F S, where F is a closed subspace of the Banach space Y and Q F : Y → Y /F denotes the canonical projection.
Example 1.4. The Banach space ℓ 1 (Γ) of summable number families {λ γ } γ∈Γ over an arbitrary index set Γ, whose elements {λ γ } γ∈Γ are characterized by γ∈Γ |λ γ | < ∞, has the metric lifting property. We mention a couple of facts to illustrate the importance of lifting and extensions properties with respect to s-numbers. If T is any map from a Banach space with metric lifting property to an arbitrary Banach space, then a n (T ) = δ n (T )
([7], Prop. 2.2.3).
It is also known that every Banach space X appears as a quotient space of an appropriate space ℓ 1 (Γ) (see [7], p.52). Furthermore, If T is any map from an arbitrary Banach space into a Banach space with metric extension property, then a n (T ) = c n (T ) ( [7], Prop. 2.3.3). Additionally, every Banach space Y can be regarded as a subspace of an appropriate space ℓ ∞ (Γ) (see [7], p.60).
For non-compact operator T ∈ L(X, Y ), we do not have too much information about the relationship between s n (T ) with s n (T * ). In this paper, by imposing certain natural conditions on X and Y we are able to obtain a relationship between s n (T ) with s n (T * ) for certain s-numbers. Moreover, using a new characterization of compactness due to Runde [20] together with the Principle of Local Reflexivity, we give a different, simpler proof of Hutton's theorem [11] establishing that for any compact map T , a n (T ) = a n (T * ) for all n.
Next we consider operators which are not compact but compact with respect to certain approximation schemes Q. We call such operators as Q-compact and prove that for any Q-compact operator T , one has τ n (T ) = τ n (T * ). This result answers the question of comparing the degree of compactness for T and its adjoint T * for non-compact operators T .
2. Comparing s n (T ) and s n (T * )
Hutton in [11] used the Principle of Local Reflexivity (PLR) to prove that for T ∈ K(X, Y ) we have a n (T ) = a n (T * ) for all n.
This result fails for non-compact operators. For example, if T = I : ℓ 1 → c 0 is the canonical injection and T * : ℓ 1 → ℓ ∞ is the natural injection, then one can show 1 = a n (T ) = a n (T * ) = 1 2 .
On the other hand by considering the ball measure of non-compactness, namely,
γ(T ) := inf{r > 0 : T (B X ) ⊂ n k=1 A k , max 1≤k≤n diam (A k ) < r, n ∈ N} Astala in [4] proved that if T ∈ L(X, Y )
, where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then
γ(T ) = γ(T * ).
Our first result is a different, simpler proof of Hutton's Theorem. We use only the characterization of compactness by Runde [20], together with the Principle of Local Reflexivity. Lindenstrass and Rosenthal [13] discovered a principle that shows that all Banach spaces are "locally reflexive" or said in another way, every bidual X * * is finitely representable in the original space X. The following is a stronger version of this property called Principle of Local Reflexivity (PLR) due to Johnson, Rosenthal and Zippin [9]: Definition 2.1. Let X be a Banach space regarded as a subspace of X * * , let E and F be finite dimensional subspaces of X * * and X * respectively and let ǫ > 0.
Then there exist a one-to-one operator T :
E → X such that (1) T (x) = x for all x ∈ X ∩ E (2) f (T e) = e(f ) for all e ∈ E and f ∈ F (3) ||T ||||T −1 || < 1 + ǫ.
PLR is an effective tool in Banach space theory. For example Oja and Silja in [14] investigated versions of the principle of local reflexivity for nets of subspaces of a Banach space and gave some applications to duality and lifting theorems. [20]). Let X be a Banach space and let T ∈ L(X).
Lemma 2.2 (Lemma 1 inThen T ∈ K(X) if and only if, for each ǫ > 0, there is a finite-dimensional subspace F ǫ of X such that ||Q Fǫ T || < ǫ, where Q Fǫ : X → X/F ǫ is the canonical projection. Theorem 2.3. Let T ∈ K(X)
. Then a n (T ) = a n (T * ) for all n.
Proof. Since one always has a n (T * ) ≤ a n (T ), if we have a n (T ) ≤ a n (T * * ), then a n (T * * ) ≤ a n (T * ) would imply a n (T ) ≤ a n (T * ). Thus we must verify a n (T ) ≤ a n (T * * ). To this end, suppose T ∈ K(X), by Schauder's theorem, T * and T * * are compact. Let ǫ > 0, then by definition, there exists A ∈ F n (X * * ) such that ||T * * − A|| < a n (T * * ) + ǫ.
By Lemma 2.2, there are finite-dimensional subspaces E ǫ of X * * and F ǫ of X * such that ||Q Eǫ T * * || < ǫ, where Q Eǫ : X * * → X * * /E ǫ and ||Q Fǫ T * || < ǫ, where Q Fǫ : X * → X * /F ǫ .
By the Principle of Local Reflexivity (PLR), there exists a one-to-one linear operator S : E ǫ → X such that ||S||||S −1 || < 1 + ǫ, y * (Sx * * ) = x * * (y * ) for all x * * ∈ E ǫ and all y * ∈ F ǫ , and S |Eǫ∩X = I.
Let J : X → X * * be the canonical map. By the Hahn-Banach theorem, since E ǫ is a subspace of X * * , S : E ǫ → X can be extended to a linear operator S : X * * → X.
We now have T ∈ L(X) and SAJ ∈ L(X) and rank (SAJ) = rank(A) < n, and therefore a n (T ) ≤ ||T − SAJ||.
To get an upper bound for ||T −SAJ|| we estimate ||T x−SAJ(x)|| for x ∈ B X using an appropriate element z j of the covering of the set T (B X ).
Indeed, the compactness of T implies that T (B X ) is relatively compact so that one can extract a finite-dimensional subset Y ǫ ⊂ T (B X ) ⊂ X and let z j = T x j be the n elements forming a basis.
Let x ∈ B X . Then we have
a n (T ) ≤ |T x − SAJ(x)|| ≤ ||T x − z j || + ||z j − SAJ(x)|| ≤ ǫ + ||z j − SAJ(x)|| = ǫ + ||Sz j − SAJ(x)|| ≤ ǫ + (1 + ǫ)||z j − AJ(x)|| < ǫ + (1 + ǫ)(a n (T * ) + ǫ) since ||z j − AJ(x)|| = ||Jz j − AJ(x)|| ≤ ||Jz j − JT x|| + ||JT x − AJ(x)|| ≤ ǫ + ||JT x − AJx|| = ǫ + ||T * * Jx − AJx|| ≤ ||T * * − A|| < a n (T * ) + ǫ.
It follows that a n (T ) ≤ a n (T * * ), as promised.
Theorem 2.4. If T ∈ L(X, Y )
, where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then δ n (T * ) = δ n (T ) for all n.
Proof. It is known that if T ∈ L(X, Y ), where X and Y are arbitrary Banach spaces, then δ n (T * ) = c n (T ) ( [7], Prop. 2.5.5). We also know that if T ∈ L(X, Y ), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then δ n (T ) = a n (T ) = c n (T ). Hence, δ n (T * ) = c n (T ) = a n (T ) = δ n (T ).
Remark 2.5. As stated before, Astala in [4] proved that if T ∈ L(X, Y ), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then γ(T ) = γ(T * ), where γ(T ) denotes the measure of noncompactness of T . In [1], it is shown that lim n→∞ δ n (T ) = γ(T ). This relationship between Kolmogorov diameters and the measure of non-compactness together with Theorem 2.4 provide an alternative proof for the result of Astala.
Theorem 2.6. If T ∈ K(X, Y ), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then c n (T * ) = c n (T ) for all n.
Proof. If T ∈ K(X, Y ), then it is known that δ n (T ) = c n (T * ) ( [7], Prop. 2.5.6). If X and Y are Banach spaces with metric lifting and extension property, respectively, then we also have δ n (T ) = a n (T ) = c n (T ). Thus, c n (T * ) = c n (T ) for all n.
Remark 2.7. In [10] it is shown that if X has the lifting property, then X * has the extension property. However, if Y has the extension property, then Y * has the lifting property if and only if Y is finite-dimensional. Therefore one can observe that if X has the lifting property and Y is finite-dimensional with the extension property, then Y * has the lifting property and X * has the extension property, so that we have δ n (T * ) = a n (T * ) = c n (T * ).
Compactness with Approximation schemes
Approximation schemes were introduced in Banach space theory by Butzer and Scherer in 1968 [5] and independently by Y. Brudnyi and N. Kruglyak under the name of "approximation families" in [6]. They were popularized by Pietsch in his 1981 paper [16], for later developments we refer the reader to [1,2,3]. The following definition is due to Aksoy and generalizes the classical concept of approximation scheme in a way that allows using families of subsets of X instead of elements of X, which is useful when we deal with n-widths. Definition 3.1 (Generalized Approximation Scheme). Let X be a Banach space. For each n ∈ N, let Q n = Q n (X) be a family of subsets of X satisfying the following conditions: (GA1) {0} = Q 0 ⊂ Q 1 ⊂ · · · ⊂ Q n ⊂ . . . . (GA2) λQ n ⊂ Q n for all n ∈ N and all scalars λ. (GA3) Q n + Q m ⊆ Q n+m for every n, m ∈ N. Then Q(X) = (Q n (X)) n∈N is called a generalized approximation scheme on X. We shall simply use Q n to denote Q n (X) if the context is clear.
We use here the term "generalized" because the elements of Q n may be subsets of X. Let us now give a few important examples of generalized approximation schemes.
Example 3.2.
(1) Q n = the set of all at-most-n-dimensional subspaces of any given Banach space X.
(2) Let E be a Banach space and X = L(E); let Q n = N n (E), where N n (E) = the set of all n-nuclear maps on E [15]. (3) Let a k = (a n ) 1+ 1 k , where (a n ) is a nuclear exponent sequence. Then Q n on X = L(E) can be defined as the set of all Λ ∞ (a k )-nuclear maps on E [8].
Definition 3.3 (Generalized Kolmogorov Number).
Let B X be the closed unit ball of X, Q = Q(X) = (Q n (X)) n∈N be a generalized approximation scheme on X, and D be a bounded subset of X. Then the n th generalized Kolmogorov number δ n (D; Q) of D with respect to Q is defined by δ n (D; Q) = inf{r > 0 : D ⊂ rB X + A for some A ∈ Q n (X)}.
(3.1)
Assume that Y is a Banach space and T ∈ L(Y, X). The n th Kolmogorov number δ n (T ; Q) of T is defined as δ n (T (B Y ); Q).
It follows that δ n (T ; Q) forms a non-increasing sequence of non-negative numbers:
T = δ 0 (T ; Q) ≥ δ 1 (T ; Q) ≥ · · · ≥ δ n (T ; Q) ≥ 0.
(3.2) We are now able to introduce Q-compact sets and operators: In the following we present two examples of Q-compact maps which are not compact. First of this examples is known (see [1]) and it involves a projection P : L p [0, 1] → R p where R p denotes the closure of the span of the space of Rademacher functions. Second example is new and illustrates the fact that if B w is a weighted backward shift on c 0 (N) with w = (w n ) n a bounded sequence not converging to 0, then B w is Q-compact operator which is not compact.
Example 3.6. Let {r n (t)} be the space spanned by the Rademacher functions. It can be seen from the Khinchin inequality [12] that
ℓ 2 ≈ {r n (t)} ⊂ L p [0, 1] for all 1 ≤ p ≤ ∞. (3.3)
We define an approximation scheme A n on L p [0, 1] as follows:
A n = L p+ 1 n . (3.4) L p+ 1 n ⊂ L p+ 1 n+1
gives us A n ⊂ A n+1 . for n = 1, 2, . . . , and it is easily seen that A n + A m ⊂ A n+m for n, m = 1, 2, . . . , and that λA n ⊂ A n for all λ. Thus {A n } is an approximation scheme.
It can be shown that for p ≥ 2 the projection P : L p [0, 1] → R p is a noncompact Q-compact map, where R p denotes the closure of the span of {r n (t)} in L p [0, 1]. (See [1] for details) Next we give another example is an Q-operator which is not compact .
B(x 1 , x 2 , x 3 , . . . ) = (w 2 x 2 , w 3 x 3 , w 4 x 4 , . . . )
where w = (w n ) n is a sequence of non-zero scalars called a weight sequence. Any weighted shift is a linear operator and is bounded if and only if w is a bounded sequence.
Let w = (w n ) n be a bounded sequence of positive real numbers. The unilateral weighted shift on c 0 (N) is defined by B w (e 1 ) = 0 and B w (e n ) = w n e n−1 for all n ≥ 2.
δ m (B w (U c 0 ), (A n ) n ) = inf{r > 0 : B w (U c 0 ) ⊆ rU c 0 + ℓ m } = inf{r > 0 : ∀x ∈ U c 0 , ∃y ∈ U c 0 , ∃z ∈ ℓ m with B w (x) = ry + z}.
Let x = (x n ) n≥1 ∈ U c 0 . Let us define y = (y n ) n≥1 ∈ U c 0 and z = (z n ) n≥1 ∈ ℓ 1 ⊆ ℓ m such that B w (x) = 1 2 m y + z. Let A := {n ≥ 1 : 2 m |x n w n | > 1}. The set A is finite, otherwise (w n ) n is unbounded. Set,
x n w n = z n−1 y n−1 = 0, ∀n ∈ A.
Observe that (w n x n ) n∈N\A ∈ c 0 , hence there exists a subsequence (n k ) k such that ∞ k=1 |w n k x n k | < ∞. Set, x n k w n k = z n k −1 y n k −1 = 0, ∀k ≥ 1.
Finally, set 2 m x n w n = y n−1 z n−1 = 0, ∀n ∈ N \ {(n k ) k ∪ A}.
Hence, x n w n = 1 2 m y n−1 + z n−1 , for all n ≥ 2. In other words, B w (x) = 1 2 m y + z. Note that y ∈ U c 0 and z ∈ ℓ 1 ⊂ ℓ m . In conclusion, δ m (B w (U c 0 ), (A n ) n ) ≤ 1 2 m . As m goes to ∞, we obtain that δ m (B w (U c 0 ), (A n ) n ) goes to 0 and B w is Qcompact.
It is well-known that B w is compact if and only if w = (w n ) n is a null-sequence. Our next objective here is to ascertain whether or not Schauder's type of theorem is true for Q-compact maps. For this purpose we use symmetrized approximation numbers of T . For our needs, we choose the closed unit ball B Z of the Banach space Z as an index set Γ. Our proof of the Schauder's theorem for Q-compact operators will depend on the fact that ℓ 1 (B Z ) has the lifting property and ℓ ∞ (B Z ) has the extension property. First we recall the following proposition. Motivated by this, we give the definition of Q-compact operators using the symmetrized approximation numbers. Remark 3.12. We need the following simple facts for our proof, for details we refer the reader to [7] Prop. 2.5.4 − 6.
a) Recall that τ n (T, Q) = c n (T Q X , Q), where Q X : ℓ 1 (B X ) → X. b) We will also abbreviate the canonical embedding
K ℓ 1 (B Y * ) : ℓ 1 (B Y * ) → ℓ ∞ (B Y * ) * by K so that Q Y * = J * Y K.
c) Denote by P 0 : ℓ ∞ (B X * * ) → ℓ ∞ (B X ) the operator which restricts any bounded function on B X * * to the subset K X (B X ) ⊂ B X * * so that Q * X = P 0 J X * . d) The relations (b) and (c) are crucial facts for the estimates of δ n (T * , Q * ) and c n (T * , Q * ). In particular, we have c n (T * , Q * ) ≤ δ n (T, Q).
We now state and prove the following theorem which states that the degree of Q-compactness of T and T * is the same in so far as it is measured by the symmetrized approximation numbers τ n . Theorem 3.13 (Schauder's theorem for Q-compact operators). Let T ∈ L(X, Y ) with X, Y are arbitrary Banach spaces, and let Q = (Q n (X)) be a generalized approximation scheme on X. Then τ n (T * , Q * ) = τ n (T, Q) for all n.
Proof. Let us show that τ n (T * , Q * ) = τ n (T, Q). By Remark 3.12 parts (a) and (b) we have the following estimates:
τ n (T * , Q * ) = c n (T * Q Y * , Q * ) = c n (T * J * Y K, Q * ) ≤ c n ((J Y T ) * , Q * ) ≤ δ n (J Y T, Q) = t n (T, Q)
Conversely, we have by using Remark 3.12 parts (c) and (d):
t n (T, Q) = c n (T Q X , Q) = δ n (T Q X ) * , Q * )
= δ n (Q * X T * , Q * ) = δ n (P 0 J X * T * , Q * ) ≤ δ n (J X * T * , Q * ) = t n (T * , Q * ).
Next we define approximation numbers with respect to a given scheme as follows: Definition 3.14. Given an approximation scheme {Q n } on X and T ∈ L(X), the n-th approximation number a n (T, Q) with respect to this approximation scheme is defined as: a n (T, Q) = inf{||T − B|| : B ∈ L(X), B(X) ⊆ Q n } Let X * and X * * be the dual and second dual of X. Note that if we let J : X → X * * be the canonical injection and let (X, Q n ) be an approximation scheme, then (X * * , J(Q n )) is an approximation scheme.
Let {Q n } and {Q * * n } := {J(Q n )} denote the subsets of X and X * * respectively. Definition 3.15. We say (X, Q n ) has the Extended Local Reflexivity Property (ELRP) if for each countable subset C of X * * , for each F ∈ Q * * n for some n and each ǫ > 0, there exists a continuous linear map P : span(F ∪ C) → X such that (1) ||P || ≤ 1 + ǫ (2) P ↾ C∩X = I(Identity) Note that ELRP is an analogue of local reflexivity principle which is possessed by all Banach spaces. Theorem 3.16. Suppose (X, Q n ) has ELRP and T ∈ L(X) has separable range. Then for each n we have a n (T, Q) = a n (T * , Q * ).
Proof. Since one always have a n (T * , Q * ) ≤ a n (T, Q) we only need to verify a n (T, Q) ≤ a n (T * * , Q * * ). Let J : X → X * * be the canonical map and U X be the unit ball of X. Given ǫ > 0, choose B ∈ L(X * * ) such that B(X * * ) ∈ Q * * n and ||B − T * * || < ǫ + a n (T * * , Q * * n ). Let {z j } be a countable dense set in T (X), thus T x j = z j where x j ∈ X. Consider the set K = span{(JT x j ) ∞ 1 ∪ B(X * * )} applying ELRP of X we obtain a map P : K → X such that ||P || ≤ 1 + ǫ and P ↾ (JT x j ) ∞ 1 ∩X = I For x ∈ U X , consider
||T x − P BJx|| ≤ ||T x − z j || + ||z j − P BJx|| ≤ ǫ + ||P JT x j − P BJx|| ≤ ǫ + (1 + ǫ)||JT x j − BJx|| ≤ ǫ + (1 + ǫ)[||JT x j − JT x|| + ||JT x − BJx||]
≤ ǫ + (1 + ǫ)[a n (T * * , Q * * n ) + 2ǫ] and thus a n (T, Q) ≤ a n (T * * , Q * * n ).
( 2 )
2A Banach space Y is said to have the extension property if for each T ∈ L(M, Y ) there exists an operator S ∈ L(X, Y ) such that T = SJ M and ||T || = ||S||, where M is a closed subspace of an arbitrary Banach space X and J M : M → Y is the canonical injection.
Example 1. 5 .
5The Banach space ℓ ∞ (Γ) of bounded number families {λ γ } γ∈Γ over an arbitrary index set Γ has the metric extension property.
Definition 3.4 (Q-compact set). Let D be a bounded subset of X. We say that D is Q-compact if lim n δ n (D; Q) = 0.
Definition 3.5 (Q-compact map). We say that T ∈ L(Y, X) is a Q-compact map if T (B Y ) is a Q-compact set,lim n δ n (T ; Q) = 0. Q-compact maps are a genuine generalization of compact maps since there are examples of Q-compact maps which are not compact in the usual sense.
Example 3 . 7 .
37Consider the weighted backward shift
Proposition 3 . 8 .
38Suppose the approximation scheme Q = (A n ) ∞ n=1 of c 0 (N) is defined as A n = ℓ n (N) for all n. Then any bounded weighted shift on c 0 is Qcompact Proof. Let B w be any bounded and linear weighted shift on c 0 , then w = (w n ) n is a bounded weight. Let m ≥ 1. Consider,
Corollary 3 . 9 .
39Let B w be a weighted backward shift on c 0 (N) with w = (w n ) n a bounded sequence not converging to 0. Consider the approximation schemes on c 0 (N) as Q = (A n ) ∞ n=1 with A n = ℓ n (N) for all n. Then, B w is a non-compact Q-compact operator.
Proposition 3 .
310 (Refined version of Schauder's theorem [7], p.84). An operator T between arbitrary Banach spaces X and Y is compact if and only if lim n→∞ τ n (T ) = 0 and moreover, τ n (T ) = τ n (T * ).
Definition 3. 11 .
11We say T is Q-symmetric compact if and only if lim n→∞ τ n (T, Q) = 0.
Q-Compact sets and Q-compact maps. A G Aksoy, Math. Japon. 361A. G. Aksoy, Q-Compact sets and Q-compact maps, Math. Japon. 36 (1991), no. 1, 1-7.
On approximation schemes and compactness. A G Aksoy, J M Almira, Proceedings of the first conference on classical and functional analysis. A. G. Aksoy, J. M. Almira, On approximation schemes and compactness, Proceedings of the first conference on classical and functional analysis, 5-24, Azuga-Romania, (2014).
Compactness and generalized approximation spaces. J M Almira, U Luther, Numer. Funct. Anal. and Optimiz. 23J. M. Almira and U. Luther,Compactness and generalized approximation spaces, Numer. Funct. Anal. and Optimiz., 23, (2002) 1-38.
On measures of non-compactness and ideal variations in Banach spaces. K Astala, Ann. Acad. Sci. Fenn. Ser. AI Math. Dissertations. 29K. Astala, On measures of non-compactness and ideal variations in Banach spaces, Ann. Acad. Sci. Fenn. Ser. AI Math. Dissertations 29, (1980), 1-42.
Approximations Prozesse und Interpolations methoden, Biliographisches Inst. P L Butzer, K Scherer, MannheimP. L. Butzer and K. Scherer, Approximations Prozesse und Interpolations methoden, Bili- ographisches Inst. Mannheim, 1968.
On a family of approximation spaces, Investigation in function theory of several real variables. Y Brudnyi, N Kruglyak, Yaroslavl State Univ., YaroslavlY. Brudnyi and N. Kruglyak, On a family of approximation spaces, Investigation in function theory of several real variables, Yaroslavl State Univ., Yaroslavl, (1978), 15-42.
Entropy, compactness and the approximation of operators. B Carl, I Stephani, Cambridge University PressB. Carl and I. Stephani, Entropy, compactness and the approximation of operators, Cam- bridge University Press, 1990.
On λ-nuclearity. E Dubinsky, M S Ramanujan, Mem. Amer. Math. Soc. 128E. Dubinsky and M. S. Ramanujan, On λ-nuclearity, Mem. Amer. Math. Soc.,128, 1972.
On bases, finite dimensional decomposition and weaker structures in Banach spaces. W B Johnson, H P Rosenthal, M Zippin, Israel J. Math. 9W. B. Johnson, H.P. Rosenthal and M. Zippin, On bases, finite dimensional decomposition and weaker structures in Banach spaces, Israel J. Math. 9, (1971), 488-506.
The extension and the lifting properties of Banach spaces. M Hasumi, G L Seever, Proc. Amer. Math. Soc. 15M. Hasumi and G. L. Seever. The extension and the lifting properties of Banach spaces, Proc. Amer. Math. Soc. 15 (1964), 773-775.
On approximation numbers and its adjoint. C V Hutton, Math. Ann. 210C. V. Hutton On approximation numbers and its adjoint. Math. Ann. 210 (1974), 277-280.
Classical Banach Spaces I, Sequence Spaces Springer-Verlag. J Lindenstrauss, L Tzafriri, Berlin, Heidelberg, New YorkJ. Lindenstrauss and L. Tzafriri, Classical Banach Spaces I, Sequence Spaces Springer- Verlag, Berlin, Heidelberg, New York, 1977.
Rosenthal The L p spaces. J Lindenstrauss, H P , Israel J. Math. 7J. Lindenstrauss and H. P. Rosenthal The L p spaces, Israel J. Math. 7, (1969), 325-340
Principle of local reflexivity respecting nests of subspaces and the nest approximation properties. E Oja, V Silja, J. Funct. Anal. 9E. Oja and V. Silja, Principle of local reflexivity respecting nests of subspaces and the nest approximation properties, J. Funct. Anal. 9, (2017), 2916-2938.
A Pietsch, Operator ideals. North-Holland, AmsterdamA. Pietsch, Operator ideals, North-Holland, Amsterdam, 1980.
Approximation spaces. P Pietsch, J. Approx. Theory. 322P. Pietsch, Approximation spaces, J. Approx. Theory, 32 (1981), no. 2, 115-134.
Eigenvalues and s-numbers, Cambridge studies in advanced mathematics 13. A Pietsch, A. Pietsch, Eigenvalues and s-numbers, Cambridge studies in advanced mathematics 13, 1985.
Uber lineare Functionalgleichungen. F Riesz, Acta Mathematica. 411F. Riesz Uber lineare Functionalgleichungen , Acta Mathematica, 41 (1), 71-98, 1918.
W Rudin, Functional Analysis. McGraw-Hill, IncSecond EditionW. Rudin, Functional Analysis (Second Edition), McGraw-Hill, Inc., 1991.
V Runde, arXiv:1010.1298v8A new and simple proof of Schauder's theorem. V. Runde, A new and simple proof of Schauder's theorem, arXiv:1010.1298v8, 9 Mar 2011.
| [] |
[
"HERMITIAN STRUCTURES ON SIX-DIMENSIONAL ALMOST NILPOTENT SOLVMANIFOLDS",
"HERMITIAN STRUCTURES ON SIX-DIMENSIONAL ALMOST NILPOTENT SOLVMANIFOLDS"
] | [
"Anna Fino ",
"Fabio Paradiso "
] | [] | [] | We complete the classification of six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures. For several cases we describe the space of complex structures up to isomorphism. As a consequence we determine the six-dimensional almost nilpotent solvmanifolds admitting an invariant complex structure and study the existence of special types of Hermitian metrics, including SKT, balanced, locally conformally Kähler, and strongly Gauduchon metrics. In particular, we determine new balanced solvmanifolds and confirm a conjecture by the first author and Vezzoni regarding SKT and balanced structures in the six-dimensional strongly unimodular almost nilpotent case. Moreover, we prove some negative results regarding complex structures tamed by symplectic forms, showing in particular that in every dimension such structures cannot exist on non-Kähler almost abelian Lie algebras.2020 Mathematics Subject Classification. 22E25, 53C15, 53C30, 53C55. | null | [
"https://export.arxiv.org/pdf/2306.03485v1.pdf"
] | 259,089,083 | 2306.03485 | 07dd4d2c1e6d9b2e7847025f615c665b17a917f6 |
HERMITIAN STRUCTURES ON SIX-DIMENSIONAL ALMOST NILPOTENT SOLVMANIFOLDS
Anna Fino
Fabio Paradiso
HERMITIAN STRUCTURES ON SIX-DIMENSIONAL ALMOST NILPOTENT SOLVMANIFOLDS
We complete the classification of six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures. For several cases we describe the space of complex structures up to isomorphism. As a consequence we determine the six-dimensional almost nilpotent solvmanifolds admitting an invariant complex structure and study the existence of special types of Hermitian metrics, including SKT, balanced, locally conformally Kähler, and strongly Gauduchon metrics. In particular, we determine new balanced solvmanifolds and confirm a conjecture by the first author and Vezzoni regarding SKT and balanced structures in the six-dimensional strongly unimodular almost nilpotent case. Moreover, we prove some negative results regarding complex structures tamed by symplectic forms, showing in particular that in every dimension such structures cannot exist on non-Kähler almost abelian Lie algebras.2020 Mathematics Subject Classification. 22E25, 53C15, 53C30, 53C55.
Introduction
A Hermitian structure on a 2n-dimensional smooth manifold M is defined as a pair (J, g), where J is an integrable almost complex structure which is compatible with a Riemannian metric g on M , namely g(J·, J·) = g(·, ·). When the fundamental form ω(·, ·) := g(J·, ·) is closed, or equivalently J is parallel with respect to the Levi-Civita connection relative to g, the Hermitian structure (J, g) is Kähler. An ever-growing interest has been placed on generalizations of the Kähler condition, whose inception stems from both the desire to equip non-Kähler complex manifolds with special (hopefully canonical) Hermitian metrics and from their applications in theoretical physics, demanding their investigation.
Two of the most studied ones are the strong Kähler with torsion (SKT for short) or pluriclosed condition, defined by the closure of H := d c ω = i(∂ − ∂)ω = Jdω, and the balanced condition, characterized by the coclosure of ω or equivalently by the closure of ω n−1 . The SKT and balanced conditions are incompatible one with the other, in the sense that a Hermitian metric which is both SKT and balanced is necessarily Kähler, and it has even been conjectured (and confirmed in some special cases) that a compact complex manifold admitting both SKT and balanced Hermitian metrics must also admit Kähler metrics.
Other important generalizations of the Kähler condition include:
• the balanced condition, characterized by the closure of ω n−1 ,
• the locally conformally balanced (LCB) condition, characterized by the closure of the Lee form θ, which is the unique 1-form satisfying dω n−1 = θ ∧ ω n−1 , • the locally conformally Kähler (LCK) condition, which holds when dω = 1 n−1 θ ∧ ω, • the locally conformally SKT (LCSKT) condition, introduced in [7] and characterized by the condition dH = µ ∧ H, with µ a non-zero closed 1-form, • the 1 st -Gauduchon condition, characterized by the condition ∂∂ω ∧ ω n−2 = 0, • the strongly Gauduchon condition, characterized by ∂ω n−1 being ∂-exact.
We recall that the SKT condition implies the 1 st -Gauduchon condition, the LCK or balanced condition implies the LCB condition, while the balanced condition implies the strongly Gauduchon condition.
Another property which has gathered interested in recent literature is the tamed condition for complex structures: given a complex structure J and a symplectic form Ω on M , J is said to be tamed by Ω if the (1, 1)-part of Ω is positive. This condition generalizes the Kähler condition, since the fundamental form of a Kähler structure (J, g) trivially tames J. In [8], it has been proven that the existence of a symplectic form Ω taming J is equivalent to the existence of a J-Hermitian metric whose fundamental form ω satisfies ∂ω = ∂β for some ∂-closed (2, 0)-form β. In particular, the Hermitian metric has to be SKT.
A natural setting in which to study the previously-mentioned types of Hermitian structures is provided by nilmanifolds or solvmanifolds, i.e. by compact quotients Γ\G of solvable or nilpotent Lie groups by cocompact lattices, since left-invariant structures on G naturally descend to locally homogeneous structures on such quotients. A solvable Lie group G is called almost nilpotent if its nilradical has codimension one. In particular, if the nilradical is abelian, G is almost abelian. Useful criteria for the existence of lattices of almost nilpotent Lie groups are given in [5].
For a solvable Lie group G a necessary condition for the existence of cocompact lattices is the strongly unimodularity of the Lie algebra g of G, i.e. that for every X ∈ g and every k ∈ N, one has tr ad X | n k /n k+1 = 0, where n is the nilradical of g (see [23]).
Real dimension six is of particular interest, due to it often being the first non-trivial dimension in which to look for certain special structures and for its theoretical physical applications. Moreover, nilpotent and solvable Lie groups are fully classified in dimension six (see [26,27,28,36,35]).
In the nilpotent case, most of the previously defined structures have been thoroughly investigated in literature during the last decades (see [7,17,29,33,34,37]), while only partial results exist thus far in the general solvable case. Recently, almost abelian Lie groups have gathered notable interest, leading to the development of several characterization and classification results for special Hermitian structures (see [24] for the Kähler case, [2,14] for the SKT case, [16,32] for the balanced case and [1,31] for the LCK and LCB case). Furthermore, in [20], the authors obtained results regarding 2-step solvable Lie groups, of which almost abelian ones are a special class. Another special class of almost nilpotent Lie groups has been studied in [15], where characterizations and classification results regarding complex, SKT and balanced structures were obtained for almost nilpotent Lie groups whose nilradical has one-dimensional commutator. In the paper we extend and refine these results, completing the classification of six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures and the aforementioned types of special Hermitian structures. This involves exploiting the technical machinery developed in the previous literature regarding almost (nilpotent) abelian Lie algebras, as well as some different kinds of computations, as we shall explain. In particular, we determine that the previous classification results in [14,15] exhaust the six-dimensional strongly unimodular almost nilpotent Lie algebras admitting SKT structures, while, with respect to [15,16], we were able to find two more Lie algebras, up to isomorphisms, admitting balanced structures. Moreover, we proved that, in the six-dimensional almost nilpotent strongly unimodular case, a Lie algebra admitting strongly Gauduchon structures always admits balanced structures: it would be interesting to see if this result holds in higher dimensions as well.
In Section 2, we start by classifying six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures, while, in Section 3, we delve deeper into the subject by providing a classification up to automorphisms of complex structures on some of these Lie algebras. In Section 4, we then provide a full classification result regarding 1 st -Gauduchon structures and the same is done for SKT and LCSKT structures in Section 5. In particular, we determine the first example of compact manifold admitting an LCSKT structure with non-degenerate torsion. The results of Section 5 are then used in Section 6 to investigate the existence of complex structures tamed by symplectic forms, showing that such structures can never exist on non-Kähler almost abelian Lie algebras and thus improving the partial results obtained in [11,12]. We then go back to classifying Lie algebras admitting special Hermitian metrics, studying LCB and LCK structures in Section 7 and strongly Gauduchon and balanced structures in Section 8.
Name Structure equations Nilradical
Step of solvability s −1 6.140
(f 35
(f 35 + f 16 , f 34 − f 26 , f 45 , −f 46 , f 56 , 0) n 5.2 = (f 35 , f 34 , f 45 , 0, 0) 3s 0 3.3 ⊕ R 3 = f 26 , −f 16 , 0, 0, 0, 0 , h 3 ⊕ s 0 3.3 = (f 23 , 0, 0, f 56 , −f 46 , 0), s − 1 2 ,− 1 2 4.3 ⊕ R 2 = f 16 , − 1 2 f 26 , − 1 2 f 36 , 0, 0, 0 , s p,− p 2 4.5 ⊕ R 2 = pf 16 , − p 2 f 26 + f 36 , −f 26 − p 2 f 36 , 0, 0, 0 , p > 0, s 4.6 ⊕ R 2 = (f 23 , f 26 , −f 36 , 0, 0, 0), s 4.7 ⊕ R 2 = (f 23 , f 36 , −f 26 , 0, 0, 0), s 0 5.4 ⊕ R = f 26 , 0, f 46 , −f 36 , 0, 0 , s 0 5.8 ⊕ R = f 26 + f 36 , −f 16 + f 46 , f 46 , −f 36 , 0, 0 , s 1,−1,−1 5.9 ⊕ R = f 16 , f 26 , −f 36 , −f 46 , 0, 0 , s p,p,−p 5.11 ⊕ R = pf 16 , pf 26 , −pf 36 + f 46 , −f 36 − pf 46 , 0, 0 , p > 0, s p,−p,r 5.13 ⊕ R = pf 16 + f 26 , −f 16 + pf 26 , −pf 36 + rf 46 , −rf 36 − pf 46 , 0, 0 , r > 0, s 5.16 ⊕ R = (f 23 + f 46 , f 36 , −f 26 , 0, 0, 0), s − 1 4 ,− 1 4 6.14 = − 1 4 f 16 + f 26 , − 1 4 f 26 , − 1 4 f 36 + f 46 , − 1 4 f 46 , f 56 , 0 , s p,−4p 6.16 = pf 16 + f 26 + f 36 , −f 16 + pf 26 + f 46 , pf 36 + f 46 , −f 36 + pf 46 , −4pf 56 , 0 , p < 0, s 1,q,q,−2(1+q) 6.17 = f 16 , f 26 , qf 36 , qf 46 , −2(1 + q)f 56 , 0 , 0 < |q| ≤ 1, q ̸ = −1, s 1,− 3 2 ,− 3 2 6.18 = f 16 + f 26 , f 26 , f 36 , − 3 2 f 46 , − 3 2 f 56 , 0 , s p,p,q,−p− q 2 6.19 = pf 16 , pf 26 , qf 36 , − p + q 2 f 46 + f 56 , −f 46 − p + q 2 f 56 , 0 , p, q ̸ = 0,= (f 35 + f 26 , f 45 − f 16 , f 46 , −f 36 , 0, 0), s 0 6.147 = (f 35 + f 26 , f 45 − f 16 + f 36 , f 46 , −f 36 , 0, 0), s 6.152 = (f 35 + f 26 , f 34 − f 16 + f 56 , f 45 , −f 56 , f 46 , 0), s 0 6.154 = (f 35 + f 26 , f 34 − f 16 , f 45 , −f 56 , f 46 , 0), s 6.158 = (f 24 + f 35 , 0, f 36 , 0, −f 56 , 0), s 6.159 = (f 24 + f 35 , 0, −f 56 , 0, f 36 , 0), s 1 6.162 = (f 24 + f 35 , f 26 , f 36 , −f 46 , −f 56 , 0), s p 6.164 = (f 24 + f 35 , pf 26 , f 56 , −pf 46 , −f 36 , 0), p > 0, s p 6.165 = (f 24 + f 35 , pf 26 + f 36 , −f 26 + pf 36 , −pf 46 + f 56 , −f 46 − pf 56 , 0), p > 0, s p 6.166 = (f 24 + f 35 , −f 46 , −pf 56 , f 26 , pf 36 , 0), 0 < |p| ≤ 1, s 6.167 = (f 24 + f 35 , −f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0).
Proof. The result concerning almost abelian Lie algebras and Lie algebras with Heisenberg-type nilradical follows from [14] and [15], after properly adapting the names of the Lie algebras involved, based on the conventions in [36]. The remaining eight isomorphism classes of Lie algebras (see Table 2.1) can be studied via a case-by-case argument.
Having provided explicit examples of complex structures on s 0 6.145 , s 0 6.147 , s 6.152 and s 0 6.154 , it suffices to prove that the remaining four Lie algebras in Table 2.1 do not admit complex structures, which can be achieved via explicit computations, considering the generic almost complex structure J = (J jk ) j,k=1,...,6 written as a matrix with respect to the basis {f 1 , . . . , f 6 } defining the structure equations. We then impose the equations dictated by the condition J 2 = −Id and N J = 0, showing that they lead to some contradiction.
We can consider the Lie algebra
g = (f 35 + f 16 + εf 36 , f 45 − f 26 − εf 46 , f 36 , −f 46 , 0, 0), ε ∈ {0, 1},
with the cases ε = 0 and ε = 1 yielding s −1 6.140 and s −1 6.146 , respectively. One computes f 6 (N J (f 2 , f 4 )) = J 62 (J 52 − εJ 62 ) .
Let us first assume J 62 ̸ = 0, obtaining J 52 = εJ 62 . Now,
f 4 (N J (f 1 , f 2 )) = −2J 41 J 62 , f 6 (N J (f 1 , f 2 )) = −2J 61 J 62 , implying J 41 = J 61 = 0. From f 3 (N J (f 1 , f 5 )) = −(J 31 ) 2 , f 5 (N J (f 1 , f 2 )) = −J 51 J 62 , we deduce J 31 = J 51 = 0. Finally, f 6 (N J (f 1 , f 6 )) = J 21 J 62 forces J 21 = 0, but now (J 2 ) 11 = (J 11 ) 2 ≥ 0, a contradiction.
We should then have J 62 = 0. One has
f 3 (N J (f 1 , f 2 )) = −2J 32 J 61 . We claim J 61 = 0. Otherwise we would have f 4 (N J (f 2 , f 5 )) = −(J 42 ) 2 , yielding J 42 = 0. Then, f 6 (N J (f i , f 6 )) = J 4i J 64 , i = 1, 2, 3, 5,
we deduce J 41 = J 42 = J 43 = J 45 = 0. Having
f 5 (N J (f 1 , f 3 )) = (J 51 ) 2 , f 3 (N J (f 1 , f 5 )) = −(J 31 ) 2 ,
we must have J 31 = J 51 = 0 and now f 2 (N J (f 1 , f 4 )) = −2J 21 J 64 forces J 21 = 0, yielding (J 2 ) 11 = (J 11 ) 2 ≥ 0. Then, J 64 = 0 necessarily holds. Now, in order for (J 2 ) 66 = (J 66 ) 2 + J 56 J 65 to be negative, we must have J 65 ̸ = 0, so that
(J 2 ) 6i = J 5i J 65 , i = 1, 2, 3, 4, implies J 51 = J 52 = J 53 = J 54 = 0. Now, f 4 (N J (f 2 , f 3 )) = −(J 42 ) 2 , f 3 (N J (f 2 , f 4 )) = (J 32 ) 2 imply J 32 = J 42 = 0, leading to f 1 (N J (f 2 , f 5 )) = 2J 12 J 65 yielding J 12 = 0 and the contradiction (J 2 ) 22 = (J 22 ) 2 ≥ 0. □ Remark 2.3.
Among the eight Lie algebras of Table 2.1, we have thus proved that exactly four of them, namely s 0 6.145 , s 0 6.147 , s 6.152 and s 0 6.154 , admit complex structures. For three of them (all but s 6.152 ), one can prove that the corresponding simply connected Lie groups admit cocompact lattices, hence they give rise to compact solvmanifolds. The claim regarding s 0 6.145 was proved in [5,Proposition 8.4.9] (we note that s 0 6.145 is isomorphic to g 0,0 6.70 in [5]). To prove an analogous result for the remaining two Lie algebras, we can exploit the same technique used in [5] (see also [13,Lemma 2.9]), by which a lattice on the Lie group corresponding to an n-dimensional almost nilpotent Lie algebra n ⋊ span ⟨X⟩ can be constructed if there exist a nonzero t and a rational basis {Y 1 , . . . , Y n−1 } of n such that the matrix associated with exp (t ad X | n ) has integer entries. Now, the Lie algebra s 0 6.147 can be regarded as the semidirect product n 5.1 ⋊ span ⟨X⟩, with X = f 6 − π−1 π f 5 . If we fix the (rational) basis {f 1 , . . . , f 5 } for n 5.1 , we have
exp (2π ad X | n5.1 ) = 1 0 2 0 0 0 1 0 2 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 ,
thus proving the claim. The result regarding s 0 6.154 = n 5.2 ⋊ span ⟨f 6 ⟩ easily follows from the fact that exp (2π ad f6 | n5.2 ) = Id n5.2 . It is not known whether the Lie group corresponding with s 6.152 admits cocompact lattices.
Moduli of complex structures
In order to obtain some of the results of the next sections, we shall examine some of the Lie algebras of the previous theorem, studying all complex structures on them up to equivalence: we say that two complex structures J 1 and J 2 on a Lie algebra g are equivalent if there exists a Lie algebra automorphism φ ∈ GL(g) such that
J 2 = φJ 1 φ −1 .
Moreover, two complex structures are equivalent if and only if they admit two respective bases of (1, 0)-forms having the same structure equations. This is the way in which we are going to tackle this problem. In the next sections, when we study Hermitian structures (J, g) on such Lie algebras, it will be enough to assume such structure equations for some basis of (1, 0)-forms {α 1 , α 2 , α 3 } and then consider the generic Hermitian metric, which, with respect to the previous frame, has fundamental form
(3.1) ω = i(λ 1 α 11 + λ 2 α 22 + λ 3 α 33 ) + w 1 α 23 − w 1 α 32 + w 2 α 13 − w 2 α 31 + w 3 α 12 − w 3 α 21 ,
with λ 1 , λ 2 , λ 3 ∈ R >0 and w 1 , w 2 , w 3 ∈ C satisfying the positivity conditions
λ 1 λ 2 > |w 3 | 2 , λ 2 λ 3 > |w 1 | 2 , λ 1 λ 3 > |w 2 | 2 , λ 1 λ 2 λ 3 + 2ℜ(iw 1 w 2 w 3 ) > λ 1 |w 1 | 2 + λ 2 |w 2 | 2 + λ 3 |w 3 | 2 .
We start by analyzing the four Lie algebras of Theorem 2.2 having nilradical n satisfying dim n 1 > 1. Similarly to the previous proof, for each Lie algebra g, we shall consider the generic endomorphism J ∈ gl(g), represented by some matrix (J jk ) j,k=1,...,6 with respect to the basis {f 1 , . . . , f 6 } used in the structure equations. We shall then look at the conditions set by the requirements N J = 0 and J 2 = −Id g and solve them to obtain the expression for the generic complex structure J, which we shall use to extract a J-adapted basis of the form {e 1 = f j , e 2 = Jf j , e 3 = f k , e 4 = Jf k , e 5 = f l , e 6 = Jf l } and then to write a suitable basis {α 1 , α 2 , α 3 } of (1, 0)-forms as
α 1 = ζ 1 (e 1 + ie 2 ), α 2 = ζ 2 (e 3 + ie 4 ), α 3 = ζ 3 (e 5 + ie 6 ),
where ζ 1 , ζ 2 , ζ 3 ∈ C − {0} will be chosen so as to simplify the associated structure equations as much as possible.
Proposition 3.1. For every complex structure J on the Lie algebra s 0 6.145 , there exists a basis of (1, 0)-forms {α 1 , α 2 , α 3 } whose structure equations are
(3.2) dα 1 = α 1 ∧ (α 3 − α 3 ) + e iθ α 23 + e −iθ α 23 + να 33 , dα 2 = α 2 ∧ (α 3 − α 3 ), dα 3 = 0,
for some θ ∈ (− π 2 , π 2 ) and ν ∈ {0, 1}.
For every complex structure J on the Lie algebra s 0 6.147 , there exists a basis of (1, 0)-forms {α 1 , α 2 , α 3 } satisfying structure equations
(3.3) dα 1 = α 1 ∧ (α 3 − α 3 ) + (1 + z)α 23 + (1 − z)α 23 + να 33 , dα 2 = α 2 ∧ (α 3 − α 3 ), dα 3 = 0,
with z ∈ C, ℜ(z) ̸ = 0, and ν ∈ {0, 1}, or
(3.4) dα 1 = α 1 ∧ (α 3 − α 3 ) + z α 2 ∧ (α 3 − α 3 ) − α 32 + α 33 , dα 2 = −α 2 ∧ (α 3 − α 3 ), dα 3 = 0, for some z ∈ C, or (3.5) dα 1 = α 1 ∧ (α 3 − α 3 ) + xα 2 ∧ (α 3 − α 3 ) − α 32 dα 2 = −α 2 ∧ (α 3 − α 3 ), dα 3 = 0, with x ∈ R ≥0 .
Proof. In order for the two parts of the statement to share part of the proof, it is easy to start by working on the Lie algebra
(3.6) g = (f 35 + f 26 , f 45 − f 16 + εf 36 , f 46 , −f 36 , 0, 0), ε ∈ {0, 1},
yielding s 0 6.145 for ε = 0 and s 0 6.147 for ε = 1.
J 56 = − (J 66 ) 2 + 1 J 65 = − (J 55 ) 2 + 1 J 65 .
Moving on,
f 1 (N J (f 1 , f 5 )) = −J 11 J 31 − J 12 J 41 + J 65 J 12 + J 65 J 21 + J 31 J 55 implies J 21 = 1 J 65 (J 11 J 31 + J 12 J 41 − J 12 J 65 − J 31 J 55 )
.
J 22 = 1 J 65 (J 11 J 32 + J 11 J 65 + J 12 J 42 − J 32 J 55 ),
at which point
f 4 (N J (f 3 , f 5 )) = −εJ 42 J 65 − J 33 J 41 − J 33 J 65 − J 41 J 55 − J 42 J 43 + J 44 J 65 forces J 44 = 1 J 65 (εJ 42 J 65 + J 33 J 41 + J 33 J 65 + J 41 J 55 + J 42 J 43 ) ,
and now
f 4 (N J (f 4 , f 5 )) = − 1 J 65 (ε(J 42 ) 2 J 65 + J 33 J 41 J 42 + J 33 J 42 J 65 + J 34 J 41 J 65 + J 34 J 2 65 + J 41 J 42 J 55 + (J 42 ) 2 J 43 + J 42 J 55 J 65 + J 43 J 2 65 ) implies J 43 = − 1 (J 42 ) 2 + (J 65 ) 2 ε(J 42 ) 2 J 65 + J 33 J 41 J 42 + J 33 J 42 J 65 + J 34 J 41 J 65 +J 34 (J 65 ) 2 + J 41 J 42 J 55 + J 42 J 55 J 65 .
Now, notice we must have J 41 ̸ = J 65 , otherwise we would find Now, we claim J 41 = 0: assuming this is not the case, consider
f 3 (N J (f 1 , f 5 )) = −(J 31 ) 2 + (J 65 ) 2 , f 4 (N J (f 1 , f 5 )) = −2J 31 J 65 ,f 2 (N J (f 1 , f 5 )) = εJ 31 J 65 − 2J 11 J 41 + 2J 12 J 31 + 2J 41 J 55 , yielding J 55 = J 11 − 1 2J 41 (εJ 31 J 65 + 2J 12 J 31 ) .
Now, in order for
f 4 (N J (f 2 , f 5 )) = − 2 (J 41 − J 65 ) 2 J 65 (J 31 ) 2 J 41 + (J 31 ) 2 J 65 + (J 41 ) 3 − (J 41 ) 2 J 65
to vanish, we must assume J 31 ̸ = ±J 41 and impose
J 65 = − J 41 ((J 31 ) 2 + (J 41 ) 2 ) (J 31 ) 2 − (J 41 ) 2 , at which point f 3 (N J (f 2 , f 5 )) = (J 31 ) 2 + (J 41 ) 2 3 2(J 31 ) 3 J 41 cannot vanish.
Now, having J 41 = 0, we can compute
f 4 (N J (f 2 , f 5 )) = −2J 2 31 ,
forcing J 31 = 0. Now, we have (J 2 ) 11 = (J 11 ) 2 − (J 2 12 ), (J 2 ) 12 = 2J 11 J 12 , (J 2 ) 33 = (J 33 ) 2 − (J 34 ) 2 , (J 2 ) 34 = 2J 33 J 34 , from which we deduce J 11 = J 33 = 0 and (J 12 ) 2 = (J 34 ) 2 = 1. Then,
f 1 (N J (f 3 , f 5 )) = −εJ 12 J 65 + J 12 J 34 + J 14 J 65 + J 23 J 65 − 1, f 3 (N J (f 5 , f 6 )) = J 34 J 35 − J 45 J 55 − J 46 J 65 , f 4 (N J (f 5 , f 6 )) = J 34 J 45 + J 35 J 55 + J 36 J 65
force to vanish, we can set J 25 = −δ(J 15 J 55 + J 16 J 65 ).
J now defines a complex structure, since an explicit computation shows that we have J 2 = −Id and N J = 0. One can then see that {e 1 = f 1 , e 2 = Jf 1 , e 3 = f 3 , e 4 = Jf 3 , e 5 = f 5 , e 6 = Jf 5 } defines a new basis for s 0 6.145 . If (J 35 ) 2 + (J 45 ) 2 ̸ = 0, we can consider the basis {α 1 , α 2 , α 3 } of (1, 0)-forms provided by
α 1 = − (J 65 ) 2 2(δJ 45 + iJ 35 ) (e 1 + ie 2 ), α 2 = δJ 65 (1 + (J 55 ) 2 ) 1 2 2 (δJ 45 + iJ 35 ) (e 3 + ie 4 ), α 3 = − δJ 65 2 (e 5 + ie 6 ),
satisfying the structure equations (3.2), with ν = 1 and
e iθ = 1 − iJ 55 1 + (J 55 ) 2 .
Instead, if J 35 = J 45 = 0, we can set
α 1 = f 1 + iJf 1 , α 2 = − 1 J 65 δ(1 + (J 55 ) 2 ) 1 2 (f 3 + iJf 3 ), α 3 = − δJ 65 2 (f 5 + iJf 5 ),
obtaining (3.2) with ν = 0 and e iθ as before.
We now turn our attention to the Lie algebra s 0 6.147 , setting ϵ = 1 in (3.6). Picking up from where we left earlier, we recall that we have (J 12 ) 2 = (J 34 ) 2 = 1 and we start by assuming J 34 = J 12 = δ ∈ {−1, 1}. Then,
(J 2 ) 14 = 2δJ 24 , f 1 (N J (f 4 , f 6 )) = 2δJ 14 − 1 force J 24 = 0, J 14 = δ 2 . Finally, from (J 2 ) 15 = δJ 25 + δ 2 J 45 + J 15 J 55 + J 16 J 65 , f 1 (N J (f 5 , f 6 )) = δJ 15 − δ 2 J 35 − J 25 J 55 − J 36 J 65 ,
we deduce
J 16 = − 1 2J 65 (2δJ 25 + δJ 45 + 2J 15 J 55 ) , J 26 = 1 2J 65 (2δJ 15 − δJ 35 − 2J 25 J 55 ) .
We now have (J 2 ) = −Id and N J = 0. A new basis for s 0 6.147 is provided by {e 1 = f 1 , e 2 = Jf 1 , e 3 = f 3 , e 4 = Jf 3 , e 5 = f 5 , e 6 = Jf 5 }, as one can check directly. In the case (J 35 ) 2 + (J 45 ) 2 ̸ = 0, the basis {α 1 , α 2 , α 3 } for (1, 0)-forms defined by
α 1 = − (J 65 ) 2 2(δJ 45 + 2iJ 35 ) (e 1 + ie 2 ), α 2 = J 65 2(δJ 45 + 2iJ 35 ) (e 3 + ie 4 ), α 3 = δ 2 J 65 (e 5 + ie 6 ) yields (3.3), with z = − δ 2 (J 65 − iJ 55
) and ν = 1, while, when J 35 = J 45 = 0, we can take
α 1 = f 1 + iJf 1 , α 2 = δ J 65 (f 3 + iJf 3 ), α 3 = δ 2 J 65 (f 5 + iJf 5 ),
giving (3.3), with z as before and ν = 0.
We can now assume
J 34 = −J 12 = δ ∈ {−1, 1}, instead. Having f 2 (N J (f 3 , f 5 )) = 4δJ 55 , we must have J 55 = 0. Now, f 2 (N J (f 3 , f 6 )) = −1 − 2 δ J 65 forces J 65 = −2δ. Finally, by (J 2 ) 15 = −2δJ 16 + δJ 25 + J 14 J 45 + J 24 J 35 , (J 2 ) 16 = 1 2 (J 14 J 35 − J 24 J 45 + δJ 15 ) + δJ 26 ,
we have
J 16 = 1 2 J 25 + δ 2 (J 14 J 45 + J 24 J 35 ), J 26 = 1 2 J 15 + δ 2 (J 14 J 35 − J 24 J 45 ).
Again, we now have (J 2 ) = −Id, N J = 0 and {e 1 = f 1 , e 2 = Jf 1 , e 3 = f 3 , e 4 = Jf 3 , e 5 = f 5 , e 6 = Jf 5 } is a basis for s 0 6.147 . If we have (J 35 ) 2 + (J 45 ) 2 ̸ = 0,
α 1 = − 2 δJ 45 + iJ 35 (e 1 + ie 2 ), α 2 = − 2 δJ 45 − iJ 35 (e 3 + ie 4 ), α 3 = −e 5 − ie 6
define a basis of (1, 0)-forms satisfying (3.4) with
z = (δJ 45 − iJ 35 ) 2 (−1 + 2δJ 14 − 2iJ 13 ) 2((J 35 ) 2 + (J 45 ) 2 ) .
In the case J 35 = J 45 = 0, we can consider the basis {α 1 , α 2 , α 3 } of (1, 0)-forms defined by
α 1 = e −i θ 2 (e 1 + ie 2 ), α 2 = e i θ 2 (e 3 + ie 4 ), α 3 = −e 5 − ie 6 , with θ = arg(−1 + 2δJ 14 − 2iJ 13 ), −1 + 2δJ 14 − 2iJ 13 ̸ = 0, 0, −1 + 2δJ 14 − 2iJ 13 = 0, yielding (3.5), with x = 1 2 | − 1 + 2δJ 14 − 2iJ 13 |. □ Proposition 3.2.
For every complex structure J on the Lie algebra s 6.152 , there exists a basis of (1, 0)-forms {α 1 , α 2 , α 3 } whose structure equations are
(3.8) dα 1 = α 12 − ℜ(z 2 )α 13 − α 12 + ℜ(z 2 )α 13 + z 1 α 23 + z 2 α 22 − (ℜ(z 2 )z 2 + iδ)α 23 +(z 1 − ℜ(z 2 )z 2 )α 32 + ℜ(z 2 ) 2 z 2 − ℜ(z 2 )z 1 + i 2 δz 2 α 33 , dα 2 = ℜ(z 2 )α 23 + ℜ(z 2 )α 32 − ℜ(z 2 ) 2 + i 2 δ α 33 , dα 3 = α 23 + α 32 − ℜ(z 2 )α 33 , for some z 1 , z 2 ∈ C, δ ∈ {−1, 1}, or (3.9) dα 1 = α 12 − ℜ(z 1 )α 13 − α 12 + ℜ(z 1 )α 13 + 1 2ℑ(z2) 2 (|z 1 | 2 − δℑ(z 2 )(z 2 + i))α 23 + z1 ℑ(z2) 2 α 22 − ℜ(z1)z1 ℑ(z2) 2 α 23 − 1 2ℑ(z2) 2 (z 2 1 + δℑ(z 2 )(z 2 − i))α 32 + 1 2ℑ(z2) 2 ℜ(z 1 )z 2 1 + δℜ(z 1 )ℑ(z 2 )z 2 − δℑ(z 1 )ℑ(z 2 ) α 33 , dα 2 = −ℜ(z 1 )α 23 − ℜ(z 1 )α 32 + ℜ(z 1 ) 2 + i 2 δℑ(z 2 ) α 33 , dα 3 = −α 23 − α 32 + ℜ(z 1 )α 33 , for some z 1 , z 2 ∈ C, ℑ(z 2 ) ̸ = 0, δ ∈ {−1, 1}.
For every complex structure J on the Lie algebra s 0 6.154 , there exists a basis of (1, 0)-forms {α 1 , α 2 , α 3 } whose structure equations are
(3.10) dα 1 = α 12 − ℜ(z)α 13 − α 12 + ℜ(z)α 13 + 1 2x 2 (|z| 2 − x(y + i))α 23 + z x 2 α 22 − ℜ(z)z x 2 α 23 − 1 2x 2 (z 2 + x(y − i))α 32 + 1 2x 2 ℜ(z)z 2 + xyℜ(z) − xℑ(z) α 33 , dα 2 = −ℜ(z)α 23 − ℜ(z)α 32 + ℜ(z 1 ) 2 + i 2 x α 33 , dα 3 = −α 23 − α 32 + ℜ(z)α 33 ,
for some z ∈ C, x, y ∈ R, x ̸ = 0.
Proof. As done in the proof of the previous theorem, it is convenient to study the Lie algebras s 6.152 and s 0 6.154 together, by considering the Lie algebra
(f 35 + f 26 , f 34 − f 16 + εf 56 , f 45 , −f 56 , f 46 , 0), ε ∈ {0, 1}.
We then write the matrix J ij representing the generic J and impose conditions J 2 = −Id and
N J = 0. First, f 6 (N J (f 1 , f 2 )) = (J 61 ) 2 + (J 62 ) 2 , forces J 61 = J 62 = 0. From f 6 (N J (f 4 , f 5 )) = − (J 64 ) 2 + (J 65 ) 2 + J 63 (J 44 + J 55 ) , (J 2 ) 66 = J 36 J 63 + J 46 J 64 + J 56 J 65 + (J 66 ) 2 we deduce J 63 ̸ = 0, so that f 6 (N J (f k , f 4 )) = J 5k J 63 , f 6 (N J (f k , f 5 )) = −J 4k J 63 , k ∈ 1, 2, yield J 41 = J 51 = J 42 = J 52 = 0.
In the same way,
f 3 (N J (f 2 , f 3 )) = −J 31 J 63 , f 6 (N J (f 3 , f 4 )) = J 63 (J 53 + J 65 ), f 6 (N J (f 3 , f 5 )) = −J 63 (J 43 + J 64 ) force J 31 = 0, J 65 = −J 53 , J 64 = −J 43 and f 3 (N J (f 2 , f 4 )) = −(J 32 ) 2 , f 1 (N J (f 1 , f 3 )) = J 63 (J 12 + J 21 ), f 2 (N J (f 1 , f 3 )) = J 63 (J 22 − J 11 ) imply J 32 = 0, J 21 = −J 12 , J 22 = J 11 . Having f 1 (N J (f 1 , f 6 )) = 2J 11 J 12 , f 2 (N J (f 1 , f 6 )) = (J 11 ) 2 − (J 12 ) 2 + 1,
we must have J 11 = 0 and J 12 = δ ∈ {−1, 1}. We continue by computing
f 4 (N J (f 3 , f 4 )) = 2J 43 J 53 + J 45 J 63 + J 54 J 63 , f 4 (N J (f 3 , f 5 )) = −(J 43 ) 2 + (J 53 ) 2 − J 44 J 63 + J 55 J 63 , f 4 (N J (f 3 , f 6 )) = −J 43 J 45 + J 44 J 53 − J 53 J 66 + J 56 J 63 ,
whose vanishing we can impose by setting
J 54 = −J 45 − 2 J 43 J 53 J 63 , J 55 = J 44 + (J 43 ) 2 − (J 53 ) 2 J 63 , J 56 = 1 J 63 (J 43 J 45 − J 44 J 53 + J 53 J 66 ) .
Then, having
f 3 (N J (f 3 , f 4 )) = J 33 J 53 − J 43 J 45 − J 44 J 53 + J 35 J 63 − 2 (J 43 ) 2 J 53 J 63 , f 6 (N J (f 4 , f 5 )) = −2 (J 43 ) 2 + J 44 J 63 ,
we can set
J 35 = − 1 J 63 (J 33 J 53 − J 43 J 45 − J 44 J 53 ) + 2 (J 43 ) 2 J 53 (J 63 ) 2 , J 44 = − (J 2 43 ) J 63 .
Now, we have
We can now consider
f 3 (N J (f 4 , f 5 )) = − 1 (J 63 ) 2 (J 43 + J 45 J 63 + J 63 )(J 43 J 53 + J 45 J 63 − J 63 ),
which we can impose to equal zero by introducing η ∈ {−1, 1} and setting
J 45 = η − J 43 J 53 J 63 . Now, (J 2 ) 15 = δJ 25 + ηJ 14 + 1 J 63 (δJ 23 J 53 + ηJ 13 J 43 ) yields J 25 = δηJ 14 − 1 J 63 (J 23 J 53 + δηJ 13 J 43 ).
Since δ, η ∈ {−1, 1}, we can either have η = δ or η = −δ. We start by discussing the former case, considering now f 2 (N J (f 3 , f 4 )) = −2 + δεJ 63 , which forces ε = 1, meaning that we are now working on the Lie algebra s 6.152 , and
J 63 = 2δ, while f 2 (N J (f 3 , f 5 )) = 2δJ 33 − δ 2 ((J 43 ) 2 + (J 53 ) 2 ) implies J 33 = δ 4 ((J 43 ) 2 + (J 53 ) 2 ).
We have now obtained both N J = 0 and J 2 = −Id. A basis of (1, 0)-forms is now provided by
α 1 = f 1 + iJf 1 , α 2 = f 3 + iJf 3 , α 3 = f 4 + iJf 4 ,
whose associated structure equations are of the form (3.8), with
z 1 = δJ 24 − 1 8 ((J 43 ) 2 + (J 53 ) 2 ) + 1 2 J 23 J 43 + i 4 (2δJ 13 J 43 + δJ 43 J 53 + 2δ + 4J 14 ) , z 2 = 1 2 (δJ 43 + iJ 53 ).
We now go back and tackle the remaining case η = −δ and consider
f 1 (N J (f 3 , f 4 )) = −2J 23 J 43 − 2J 24 J 63 + δ J 63 ((J 43 ) 2 − (J 53 ) 2 ), yielding J 24 = − J 23 J 24 J 63 + δ 2(J 63 ) 2 ((J 43 ) 2 − (J 53 ) 2 ), while f 2 (N J (f 3 , f 4 )) = −δεJ 63 + 2J 13 J 43 + 2J 14 J 63 − 2 δJ 43 J 53 J 63 forces J 14 = 1 2 δε − J 13 J 43 J 63 + δJ 43 J 53 (J 63 ) 2 .
This is enough to guarantee N J = 0 and J 2 = −Id. A basis of (1, 0)-forms is given by
α 1 = f 1 + iJf 1 , α 2 = δ 2 J 63 (f 3 + iJf 3 ), α 3 = f 4 + iJf 4 .
In the case ε = 1, we get (3.9), with
z 1 = δ 2 J 43 + i 2 J 53 , z 2 = J 33 + i 2 δJ 63 .
Instead, if ε = 0, we obtain (3.10), with
z = δ 2 J 43 + i 2 J 53 , x = J 63 2 , y = J 33 . □
Now, we are interested in obtaining analogous results for complex structures J satisfying Jn 1 ̸ ⊂ n on Lie algebras with Heisenberg-type nilradical n. As we shall see in the next sections, it will not be necessary to study complex structures satisfying Jn 1 ⊂ n in this way, as there will be workarounds based on the use of algebraic data.
Let g be a 2n-dimensional strongly unimodular almost nilpotent Lie algebra with Heisenberg-type nilradical. Following [15], we can consider the map φ : g/n → gl(n/n 1 ),
X + n → π(ad X | n ), with π(ad X | n )(Y + n 1 ) := [X, Y ] + n 1 , Y ∈ n.
Since dim g/n = 1, φ induces a well-defined endomorphism A g of n/n 1 , up to non-zero rescalings. Now, given a complex structure J on g satisfying Jn 1 ̸ ⊂ n, it is possible to find a J-adapted basis for g, {e 1 , . . . , e 2n }, such that n 1 = span ⟨e 1 ⟩, k := n ∩ Jn = span ⟨e 2 , . . . , e 2n−1 ⟩ and e 2n = Je 1 / ∈ n. By [15], with respect to such a basis, we have
ad e2n | n = diag(0, A), A ∈ gl(n 1 ), tr A = 0, [A, J| k ] = 0, de 1 = η ∈ Λ 1,1 k * − {0},
which encode the whole Lie bracket of g. Observe that A is a representative of A g , under the isomorphism n/n 1 ∼ = k. Now, the condition [A, J| k ] = 0 implies that, up to a change of basis of k, we can assume that the matrices representing A and J| k are in their respective real Jordan form, namely
(3.11) A = diag(C k1 a1,b1 , . . . , C k l a l ,b l ), J = diag J k1 ε1 , . . . , J k l ε l , with k j ∈ N, a j , b j ∈ R, ε j ∈ {−1, 1}, j = 1, . . . , l, where J k ε := diag 0 −ε ε 0 , . . . , 0 −ε ε 0 ∈ gl 2k . and (3.12) C k a,b := a −b 1 0 b a 0 1 a −b 1 0 b a 0 1 . . . . . . a −b 1 0 b a 0 1 a −b b a ∈ gl 2k , k ∈ N, a, b ∈ R,
is a real Jordan block corresponding to the conjugate pair of eigenvalues a ± ib of A, with Spec(A) = {a 1 ± ib 1 , . . . , a l ± ib l } and k 1 + . . . + k l = n − 1. Notice that C k a,b is similar to C k a,−b via the change of basis changing the sign of every even-indexed basis vector. We assume our choice of basis yielding (3.11) to have been made by fixing one of the two possible signs for each b j , at the expense of allowing J to feature some diagonal blocks with an inverted sign. To obtain a J-adapted basis for k of the form {e 2 , e 3 = Je 2 , . . . , e 2n−2 , e 2n−1 = Je 2n−2 } we may have to fix the some of some b j 's, based on the value of the corresponding ε j in the expression of J. By doing so, we now have
A = diag(C k1 a1,ε1b1 , . . . , C k l a l ,ε l b l ).
Moreover, by rescaling e 2n (and e 1 accordingly, in order to preserve Je 1 = e 2n ), we can assume the eigenvalues of A to be properly normalized in order to match an arbitrary choice of rescaling of A g , as can be read in the structure equations of each Lie algebra.
Having simplified the expression of A, we can then consider the 2-form η, writing the generic (1, 1)-form on k with respect to the basis {e 2 , . . . , e 2n−1 }, for example
η = x 1 e 23 + x 2 e 45 + x 3 (e 24 + e 35 ) + x 4 (e 25 − e 34 ), x 1 , x 2 , x 3 , x 4 ∈ R,
in dimension six, and impose A * η = 0 to ensure that the Jacobi identity holds (see [15]) and that η has the right rank for the Lie algebra g we are considering, by imposing the right power of η to be the last non-vanishing one (n ∼ = h 2k+1 ⊕ R 2(n−k) forces η k ̸ = 0, η k+1 = 0). This, together with some further requirements dictated by the isomorphism class of g (for example, focusing on the center of n, see the proof below), yield conditions on the coefficients of η.
Focusing on the six-dimensional case, we then consider the basis of (1, 0)-forms given by
α 1 = ζ 1 (e 1 + ie 6 ), α 2 = ζ 2 (e 2 + ie 3 ), α 3 = ζ 3 (e 4 + ie 5 ),
where ζ 1 , ζ 2 , ζ 3 ∈ C − {0} are chosen as to simplify the associated structure equations as much as possible.
Proposition 3.3. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra with nilradical n satisfying dim n 1 = 1. Then, every complex structure J on g satisfying Jn 1 ̸ ⊂ n admits a basis of (1, 0)-forms {α 1 , α 2 , α 3 } whose set of structure equations appears in Table 9.2, based on the isomorphism class of g.
Proof. We only prove the result for the Lie algebra g = s 0,q 6.52 , q > 0, as an explicit example of the general procedure described earlier.
Using the structure equations of the Lie algebra, we notice that, under the isomorphisms n/n 1 ∼ = R ⟨f 2 , f 3 , f 4 , f 5 ⟩ and g/n ∼ = R ⟨f 6 ⟩, the endomorphism A g is diagonalizable with spectrum a uniform rescaling of {±i, ±ib}. Let J be a complex structure on g. Then, by [15], J satisfies Jn 1 ̸ ⊂ n and, following the construction above, we can find a basis {ẽ 1 , . . . ,ẽ 6 } of g such that n 1 = R ⟨ẽ 1 ⟩,ẽ 6 = Jẽ 1 , k := n ∩ Jn = span ⟨ẽ 2 ,ẽ 3 ,ẽ 4 ,ẽ 5 ⟩ and the matrix A representing adẽ 6 | k and the one representing J| k are of the form
A = diag 0 1 −1 0 , 0 q −q 0 , J| k = diag 0 −ε 1 ε 1 0 , 0 −ε 2 ε 2 0 , ε 1 , ε 2 ∈ {−1, 1}.
With respect to the new basis {e 1 = ε 1ẽ1 , e 2 =ẽ 2 , e 3 = ε 1ẽ3 , e 4 =ẽ 4 , e 5 = ε 2ẽ5 , e 6 = ε 1ẽ6 }, the matrix A is now of the form
A = diag 0 1 −1 0 , 0 εq −εq 0 , with ε = ε 1 ε 2 and J satisfying Je 1 = e 6 , Je 2 = e 3 , Je 4 = e 5 . Now, de 1 = η, with η of the form η = x 1 e 23 + x 2 e 45 + x 3 (e 24 + e 35 ) + x 4 (e 25 − e 34 ), x 1 , x 2 , x 3 , x 4 ∈ R,
We compute
A * η = (εq − 1) x 3 (e 25 − e 34 ) − x 4 (e 24 + e 35 ) .
Now, if εq ̸ = 1, we must set x 3 = x 4 = 0 and be left with η = x 1 e 23 + x 2 e 45 , with x 1 x 2 = 0 in order for η 2 to vanish, as imposed by n ∼ = h 3 ⊕ R 2 . Instead, when εq = 1, we obtain A| k = J| k , implying that, after a proper diagonalization, we can assume η to be in diagonal form, namely x 3 = x 4 = 0, obtaining the same result as the previous case. Now, knowing that the eigenvectors of A with eigenvalue ±iq lie in the center of n, we can set x 2 = 0. Then, the basis of (1, 0)-forms
α 1 = 1 2 (e 1 + ie 6 ), α 2 = |x 1 | 1 2 2 (e 2 + ie 3 ), α 3 = 1 2 (e 4 + ie 5 ),
yields the structure equations in Table 9.2, with δ = sgn(x 1 ). □ Lastly, we consider almost abelian algebras, where we can use some of the previous strategies to simplify the expression of the Hermitian metric (3.1), as well as the complex structure equations. Let g be a 2n-dimensional almost abelian Lie algebras endowed with a Hermitian structure (J, g). Then, as described in [14], there exists a basis {e 1 , . . . , e 2n } such that n = span ⟨e 1 , . . . , e 2n−1 ⟩, n ⊥g = R ⟨e 2n ⟩, e 1 = −Je 2n , ∥e 1 ∥ g = ∥e 2n ∥ g = 1 and k := n ∩ Jn = span ⟨e 2 , . . . , e 2n−1 ⟩. Here, we are not assuming {e 2 , . . . , e 2n−1 } to be orthonormal. With respect to such a basis, the whole Lie bracket of g is encoded in ad e2n | n , which, with respect to the previous basis, takes the matrix form
ad e2n | n = a 0 v A ,
with a ∈ R, v ∈ k and A ∈ gl(k), with [A, J| k ] = 0. As before, this final condition can be exploited to say that, up to changing basis for k, we can assume Je 2j = e 2j+1 , j = 1, . . . , n − 1 and A to be in real Jordan form (see (3.12))
A = diag(C k1 a1,b1 , . . . , C k l a l ,b l ), with k j ∈ N, a j , b j ∈ R, j = 1, . . . , l, where Spec(A) = {a 1 ± ib 1 , .
. . , a l ± ib l } can be deduced, up to a uniform scaling, from the structure equations of g, and the signs of b 1 , . . . , b l in A are determined by the behavior of J and by a possible sign swap for e 2n (and, consequently, e 1 ), as in the case of Heisenberg-type nilradical. Finally, we can rescale e 2n and e 1 to apply a uniform rescaling to a and A, in order to better match the original structure equations of g.
We can consider a basis of (1, 0)-forms of the kind
α 1 = 1 2 (e 1 + ie 2n ), α j = 1 2 (e 2j−2 + ie 2j−1 ), j = 2, . . . , n,
and write the corresponding structure equations. Looking at the metric g, we can exploit the orthogonal splitting g = R ⟨e 1 ⟩ ⊕ R ⟨e 2 , . . . , e 2n−1 ⟩ ⊕ R ⟨e 2n ⟩ to deduce that the fundamental form must be of the form
ω = iλ 1 α 11 + ω k ,
with λ 1 > 0 and ω k ∈ Λ 1,1 k * the fundamental form of the restriction of g to k.
Proposition 3.4. Let g be a six-dimensional unimodular almost abelian Lie algebra. Then, every Hermitian structure (J, g) on g admits a basis of (1, 0)-forms {α 1 , α 2 , α 3 } satisfying the structure equations appearing in Table 9.3 (based on the isomorphism class of g), and such that
(3.13) ω = i(λ 1 α 11 + λ 2 α 22 + λ 3 α 33 ) + wα 23 − wα 32 ,
with λ 1 , λ 2 , λ 3 ∈ R >0 and w ∈ C satisfying the positivity condition
(3.14) λ 2 λ 3 > |w| 2 .
1 st -Gauduchon structures
We start our list of classification results regarding Hermitian structures in six dimensions with 1 st -Gauduchon structures, i.e., Hermitian structures (J, g) satisfying ∂∂ω ∧ ω = 0. We do so in order to partly simplify our later discussion regarding SKT structures, as we can exploit the trivial fact that Lie algebras not admitting 1 st -Gauduchon structures certainly do not admit SKT structures. To do so, we consider the generic Hermitian structure, which, by Proposition 3.4, always admits a basis of (1, 0)-forms satisfying the structure equations listed in Table 9.3 and making the metric of the form ω = i(λ 1 α 11 + λ 2 α 22 + λ 3 α 33 ) + wα 23 − wα 32 , with λ 1 , λ 2 , λ 3 ∈ R >0 , w ∈ C and λ 2 λ 3 > |w| 2 . By a direct computation we can show that the 6-form ∂∂ω ∧ ω cannot vanish for each of the Lie algebras in the previous list.
s 0 3.3 ⊕ R 3 , h 3 ⊕ s 0 3.3 , s − 1 2 ,− 1 2 4.3 ⊕ R 2 ,s
Now, let us focus on Lie algebras having nilradical with one-dimensional commutator. We need to prove that 1 st -Gauduchon structures do not exist on the following Lie algebras: s 6.44 , s 1 6.162 , s p 6.165 , p>0, s 1 6.166 , s 6.167 .
By [15], every complex structure on these Lie algebras satisfies Jn 1 ̸ ⊂ n, meaning we can refer to Proposition 3.3 to extract a suitable basis of (1, 0)-forms satisfying the structure equations in Table 9.5 and consider the generic Hermitian metric with fundamental form (3.1). From the computation of ∂∂ω ∧ ω in each case we can deduce the claim. Finally, we have to tackle the Lie algebras with nilradical n satisfying dim n 1 > 1, namely the ones isomorphic to s 0 6.145 , s 0 6.147 , s 6.152 , s 0 6.154 . Similarly to the previous case, if J is a complex structure on one of them, we can refer to Propositions 3.1 and 3.2 for the structure equations of a special basis {α 1 , α 2 , α 3 } of (1, 0)-forms and consider the generic Hermitian metric (3.1). For s 0 6.145 , (3.2) yields ∂∂ω ∧ ω = −2λ 2 1 α 112233 , while, on s 0 6.147 , we have
∂∂ω ∧ ω = −2(1 + |z| 2 )α 112233 , if (3.3) holds, ∂∂ω ∧ ω = −2(λ 1 ℜ(z) − 2ℑ(w 3 )) 2 − 2(λ 1 ℑ(z) − 2ℜ(w 3 )) 2 − λ 2 1 α 112233 , if (3.4) holds, ∂∂ω ∧ ω = −2(xλ 1 − 2ℑ(w 3 )) 2 − 8ℜ(w 3 ) 2 − λ 2 1 α 112233 , if (3.5) holds.
In the case of s 6.152 , one has
∂∂ω ∧ ω = −2(λ 1 ℜ(z 1 ) + 2ℑ(w 3 )ℜ(z 2 ) + 2ℑ(w 2 )) 2 −2(λ 1 ℑ(z 1 ) + 2ℜ(w 3 )ℜ(z 2 ) + 2ℜ(w 2 )) 2 − λ 2 1 α 112233 , if (3.8) holds, ∂∂ω ∧ ω = − λ 2 1 2ℑ(z 2 ) 4 (δℜ(z 2 )ℑ(z 2 ) − |z 1 | 2 ) 2 + ℑ(z 2 ) 2 (1 + ℑ(z 2 ) 2 ) α 112233 , if (3.9) holds,
and (3.10) on s 0 6.154 yields
∂∂ω ∧ ω = − λ 2 1 2x 4 1 (x 1 x 2 − |z| 2 ) 2 + x 2 1 α 112233 .
The non-vanishing of all these expressions concludes the proof of the theorem. □
SKT and LCSKT structures
As described in the Introduction, the SKT condition for a Hermitian structure (J, g) is characterized by the closure of H := d c ω, which is the torsion 3-form associated with the Bismut connection. The locally conformally SKT (LCSKT) condition was recently introduced in [7] and can be characterized by the condition dH = µ ∧ H, for some non-zero closed 1-form µ.
In the Lie algebra setting, six-dimensional Lie algebras admitting SKT structures have been classified in the nilpotent case [17], the almost abelian case [14], while some partial results were obtained in the strongly unimodular almost nilpotent case when the nilradical is of Heisenberg type [15]. Full characterizations in the almost abelian case and in the six-dimensional 2-step solvable case have been proved in [2] and [20], respectively.
Instead, [7] and [4] provide a full classification result for six-dimensional nilpotent and almost abelian Lie algebras admitting LCSKT structures.
The next theorem builds on some of these results to complete the classification of six-dimensional strongly unimodular almost nilpotent Lie algebras admitting SKT or LCSKT structures.
Theorem 5.1. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits: (i) SKT structures if and only if it is isomorphic to one among , p̸ =0, r>0, s 0 6.154 . In particular, if a six-dimensional strongly unimodular almost nilpotent Lie algebra admits SKT structures, then it is either almost abelian or its nilradical has one-dimensional commutator.
s 0 3.3 ⊕ R 3 , h 3 ⊕ s 0 3.3 , s − 1 2 ,− 1 2 4.3 ⊕ R 2 , s p,− p 2 4.5 ⊕ R 2 , p>0,s 4.
Proof. The almost abelian case was treated in [14] and [4]. For the non-almost abelian case, we start by proving that the Lie algebras not appearing in the statement do not admit twisted SKT structures, namely Hermitian structures (J, g) satisfying dd c ω = µ ∧ d c ω, with the 1-form µ being closed and being allowed to vanish: these structures encompass SKT structures and LCSKT structures as special cases (µ = 0 and µ ̸ = 0, respectively). The Lie algebras we need to work with are the following: Comparing the two lists in the statement, it is easy to prove that s 0 6.154 does not admit SKT structures, since, by Theorem 4.1, it does not admit 1 st -Gauduchon structures.
s 5.16 ⊕ R,s 6.ω − µ ∧ d c ω = 1 4 ξ jklm α jklm + ξ 123j α 123j + ξ j123 α j123 s 6.44 iy(α 1 − α 1 ) ξ 1213 = 2yλ 2 =⇒ y = 0, ξ 1313 = 4λ 2 s 0 6.145 ζα 3 + ζα 3 ξ 2323 − 2ℑ(w 3 )ξ 1323 = 4λ 1 s 0 6.147 ζα 3 + ζα 3 λ 1 ξ 2323 − 2ℑ(w 3 ξ 1323 ) = 4(1 + |z| 2 )λ 2 1 ζα 3 + ζα 3 ξ 1233 = λ 1 ζ =⇒ ζ = 0, ξ 1323 = 4λ 1 z + 8iw 3 =⇒ w 3 = i 2 λ 1 z, ξ 2323 = 2λ 1 ζα 3 + ζα 3 ξ 1233 = λ 1 ζ =⇒ ζ = 0, ξ 2323 − xℜ(ξ 1323 ) = 2λ 1 s 6.152 iy(α 2 − α 2 ) − iyℜ(z 2 )(α 3 − α 3 ) ξ 1232 = −δλ 1 y =⇒ y = 0 ξ 1223 = 4λ 1 z 1 − 8iℜ(z 2 )w 3 − 8iw 2 =⇒ w 2 = − i 2 λ 1 z 1 − ℜ(z 2 )w 3 ξ 2323 = 2λ 1 iy(α 2 − α 2 ) − iyℜ(z 1 )(α 3 − α 3 ) ℑ(ξ 1223 ) = −δλ 1 y =⇒ y = 0 ξ 2323 = λ1 ℑ(z2) 4 (δℜ(z 2 )ℑ(z 2 ) − |z 1 | 2 ) 2 + ℑ(z 2 ) 2 (1 + ℑ(z 2 ) 2 ) s 6.159 iy(α 1 − α 1 ) + ζα 3 + ζα 3 ξ 1212 = 2δλ 1 y =⇒ y = 0, ξ 1223 = −iδλ 1 ζ =⇒ ζ = 0, ξ 2323 = −4δελ 1
It remains to prove that the following Lie algebras (which admit SKT structures, with the exception of s 5.16 ⊕ R) do not admit LCSKT structures: s 4.6 ⊕ R 2 , s 4.7 ⊕ R 2 , s 5.16 ⊕ R, s 6.25 , s p,0 6.51 , p>0, s 0,q 6.52 , q>0, s 6.158 , s p 6.164 , p>0.
All these Lie algebras have nilradical with one-dimensional commutator. First, we work with complex structures satisfying Jn 1 ⊂ n, meaning we are ruling out s 0,q 6.52 , by [15]. Assume (g, J, g) is an LCSKT Lie algebra, with g isomorphic to one of the previous Lie algebras and J satisfying Jn 1 ⊂ n. By [15], g admits an orthonormal basis {e 1 , . . . , e 6 }, with n = R ⟨e 1 , . . . , e 5 ⟩, n 1 = R ⟨e 1 ⟩ and Je 1 = e 2 , Je 3 = e 4 , Je 5 = e 6 . With respect to this basis, one has
ad e6 | n = 0 0 γ 1 v 1 0 m 1 Jγ 1 + γ 2 v 2 0 0 − 1 2 (m 1 + m 2 ) q −q − 1 2 (m 1 + m 2 ) v 0 0 0 m 2 , de 1 = η = c e 34 + γ 2 ∧ e 5 + m 1 e 25 , with m 1 , m 2 , q, c, v 1 , v 2 ∈ R, v ∈ k, γ 1 , γ 2 ∈ k * ,
with k := R ⟨e 3 , e 4 ⟩, satisfying the following conditions imposed by the Jacobi identity:
m 1 (m 1 + m 2 ) = 0, (5.1) c(m 1 + m 2 ) = 0, (5.2) (m 1 + m 2 )γ 2 + m 1 Jγ 1 + A * γ 2 − cJv ♭ = 0, (5.3) where A = − 1 2 (m 1 + m 2 ) q −q − 1 2 (m 1 + m 2 )
. and (·) ♭ is the musical isomorphism between g and g * induced by the metric g. Actually, one can prove that m 1 + m 2 = 0 holds: if this were not case, (5.1) and (5.2) would imply m 1 = c = 0, so that (5.3) reads A * γ 2 = 0, which cannot occur, since the degeneracy of A would imply m 2 = q = 0.
In the end, one obtains
(5.4) ad 6 | n = 0 0 γ 1 v 1 0 m Jγ 1 + γ 2 v 2 0 0 0 q −q 0 v 0 0 0 −m
, de 1 = η = c e 34 + γ 2 ∧ e 5 + m e 25 , with (5.5) mγ 1 + qγ 2 − cv ♭ = 0, exploiting A = −qJ| k . Following the techniques in [15], imposing g to be isomorphic to a specific Lie algebra induces further conditions on the parameters involved. We do this one Lie algebra at a time, proving that the vanishing of
ξ := dd c ω − µ ∧ d c ω,
with µ a generic non-zero closed 1-form, produces a contradiction: • s 4.7 ⊕ R 2 , s 5.16 ⊕ R, s 6.25 : necessarily, m = 0, q ̸ = 0. Now, the generic closed 1-form is (5.6) µ = y 1 e 1 + y 2 e 2 + µ k + y 5 e 5 + y 6 e 6 , y 1 , y 2 , y 5 , y 6 ∈ R, µ k ∈ k * , with (5.7) cy 1 = 0, y 1 ν = 0,
y 1 v 1 + y 2 v 2 + µ k (v) = 0, µ k = 1 q (y 1 Jγ 1 − y 2 γ 1 + y 2 Jγ 2 ) .
For a generic X ∈ k, a computation yields ξ(e 1 , e 2 , e 3 , e 4 ) = cy 2 , (5.8)
ξ(e 1 , e 2 , X, e 5 ) = −y 1 (γ 1 − Jγ 2 )(X) − y 2 Jγ 1 (X), (5.9) ξ(e 1 , e 2 , X, e 6 ) = y 2 Jγ 2 (X), (5.10) ξ(e 2 , X, e 5 , e 6 ) = q(Jγ 1 + γ 2 )(X) + y 6 (γ 1 − Jγ 2 )(X) − y 2 g(v, X) + v 2 µ k (X). (5.11) First, we notice that y 1 = y 2 = 0 must hold, otherwise (5.7), (5.8), (5.9) and (5.10) would imply c = 0, γ 2 = 0, annihilating η = de 1 , a contradiction. Then, (5.7) forces µ k = 0, so that (5.11) now implies γ 2 = −Jγ 1 . We now compute ξ(e 1 , e 3 , e 4 , e 5 ) = cy 5 , ξ(e 1 , e 3 , e 4 , e 6 ) = cy 6 and, since we want µ = y 5 e 5 + y 6 e 6 not to vanish, we must set c = 0. Now, ξ(e 3 , e 4 , e 5 , e 6 ) = −2|γ 1 | 2 implies γ 1 = 0, in turn forcing η = 0; • s 4.6 ⊕ R 2 , s p,0 6.51 , s 6.158 , s p 6.164 : we need to require m ̸ = 0. We write the generic 1-form (5.6) and compute dµ(e 2 , e 5 ) = y 1 m, dµ(e 2 , e 6 ) = y 2 m, implying y 1 = y 2 = 0. Now, a computations shows ξ(e 1 , e 2 , X, e 5 ) = mµ k (X), X ∈ k, ξ(e 1 , e 2 , e 5 , e 6 ) = −my 6 , so that we must set µ k = 0 and y 6 = 0. Now, µ is reduced to µ = y 5 e 5 and it cannot be closed unless it vanishes, since dµ = −my 5 e 56 .
To end the proof, we need to study the Lie algebras s 4.7 ⊕ R 2 and s 0,q 6.52 in the case of a complex structure J satisfying Jn 1 ̸ ⊂ n. We proceed exactly as in the case of Lie algebras admitting neither SKT nor LCSKT structures, this time assuming µ ̸ = 0. The computations proving the claim are summarized in Table 5
.2. □
Lie algebra Generic real closed 1-form µ dd c ω − µ ∧ d c ω = 1 4 ξ jklm α jklm + ξ 123j α 123j + ξ j123 α j123 s 4.7 ⊕ R 2 iy(α 1 − α 1 ) + ζα 3 + ζα 3 ξ 1212 = 2εyλ 1 =⇒ y = 0, ξ 2312 = iεζλ 1 s 0,q 6.52 , q > 0 iy(α 1 − α 1 ) ξ 1212 = 2δyλ 1 The claim follows from noticing that the 2-forms ι fi H, i = 1, . . . , 6, are all linearly independent.
Existence of tamed complex structures
In this section we provide a negative result regarding the existence of symplectic forms taming complex structures on almost nilpotent Lie algebras. In the almost abelian, we can drop both the six-dimensional and the unimodular hypothesis and prove a general result. Notice that the result has already been proven in [12] under the hypothesis that the Lie algebra is not of type I, namely, that some of the eigenvalues of the adjoint action of a non-nilpotent element on the nilradical are not imaginary. Theorem 6.1. Let g be a 2n-dimensional almost abelian Lie algebra endowed with a complex structure J. Then, there do not exist symplectic structures Ω on g taming J, unless (g, J) admits Kähler metrics.
Proof. Using the characterization we have just recalled, we assume g admits an SKT structure (J, g) such that ∂ω = ∂β for some ∂-closed (2, 0)-form β.
Following [14], we now fix a codimension-one abelian ideal and observe that Jn ⊥g ⊂ n since g is J-Hermitian. We also denote by k := n ∩ Jn = (n ⊥g ⊕ Jn ⊥g ) ⊥g the maximal J-invariant subspace of n.
By [2], one can then find an orthonormal basis {e 1 , . . . , e 2n } of g adapted to the splitting g = Jn ⊥g ⊕ k ⊕ n ⊥g , such that Je 1 = e 2n , Je 2k−1 = e 2k , k = 1, . . . , n − 1 and such that, with respect to the basis {e 1 , . . . , e 2n−1 } for n, the matrix associated with the endomorphism ad e2n | n is of the form
ad e2n | n = a 0 v A ,
with a ∈ R, v ∈ k 1 and A of the form
(6.1) A = diag − a 2 −b 1 b 1 − a 2 , . . . , − a 2 −b h b h − a 2 , 0 −b h+1 b h+1 0 , . . . , 0 −b n−1 b n−1 0 ,
for some b k ∈ R, k = 1, . . . , n − 1, h ∈ {0, . . . , n − 1}, |b 1 | ≥ . . . ≥ |b h |, |b h+1 | ≥ . . . ≥ |b n−1 |. Now, denote by g 1,0 , g 0,1 ⊂ g ⊗ C the eigenspaces of J with respect to the eigenvalues ±i. Bases for g 1,0 and g 0,1 are given, respectively, by {Z 1 , . . . , Z n } and {Z 1 , . . . , Z n }, where
Z 1 = 1 2 (e 1 − ie 2n ), Z k = 1 2 (e 2k−1 − ie 2k ), k = 2, . . . , n.
Their respective dual bases for (g 1,0 ) * and (g 0,1 ) * are {α 1 , . . . , α n } and {α 1 , . . . , α n }, with α 1 = e 1 + ie 2n , α k = e 2k−1 + ie 2k , k = 2, . . . , n.
Recall that, by the integrability of J, the exterior differential d splits into the sum ∂ + ∂, with ∂ : Λ p,q g * → Λ p+1,q g * , ∂ : Λ p,q g * → Λ p,q+1 g * , for all p, q = 1, . . . , n, where Λ p,q g * := Λ p (g 1,0 ) * ⊗ Λ q (g 0,1 ) * ⊂ Λ p+q (g * ⊗ C). In the setup above, it is easy to see that dg * ⊂ g * ∧ e 2n , so that one has
∂(Λ 1,1 g * ) ⊂ Λ 1,1 g * ∧ α 1 , ∂(Λ 2,0 g * ) ⊂ Λ 2,0 g * ∧ α 1 .
Therefore, since, by hypothesis, ∂ω ∈ ∂(Λ 1,1 g * ) ∩ ∂(Λ 2,0 g * ), we must have ∂ω ∈ Λ 1,0 n * 1 ∧ α 1 ∧ α 1 . Then, ∂ω has no components inside Λ 1,1 n * 1 ∧ α 1 , so that, for all X, Y ∈ n 1 , one must have dω(Z 1 , X − iJX, Y + iJY ) = 0.
Exploiting [n, n] = {0} and [A, J| k ] = 0, we compute
dω(Z 1 , X − iJX, Y + iJY ) = −ω([Z 1 , X − iJX], Y + iJY ) + ω([Z 1 , Y + iJY ], X − iJX) = i 2 ω([e 2n , X − iJX], Y + iJY ) − i 2 ω([e 2n , Y + iJY ], X − iJX) = i 2 ω(AX − iJAX, Y + iJY ) − i 2 ω(AY + iJAY, X − iJX) = − i 2 g(AX − iJAX, J(X + iJY )) + i 2 g(AY + iJAY, J(X − iJX)) = − 1 2 g(AX − iJAX, Y + iJY ) − 1 2 g(AY + iJAY, X − iJX) = −(g(AX, Y ) + g(X, AY )) − i(g(AX, JY ) − g(JX, AY )) = −g((A + A t )X, Y ) − ig((A + A t )X, JY ),
which vanishes for all X, Y ∈ k if and only if A is skew-symmetric. In particular, we obtain h = 0 in (6.1) and the eigenvalues of A are all purely imaginary. Now, let us assume a ̸ = 0. Then, a cannot be an eigenvalue of A, so that A − aId k is non-singular, ensuring the existence of a unique X ∈ k such that k ∋ v = AX − aX.
Consider then the new J-Hermitian metric
g ′ = g| k + (e 1 ′ ) 2 + (e 2n ′ ) 2 ,
where e ′ 1 = e 1 − X, e ′ 2n = Je ′ 1 = e 2n − JX and duals are taken with respect to the splitting g = span ⟨e ′ 1 ⟩⊕k⊕span ⟨e ′ 2n ⟩. Then, with respect to the new adapted unitary basis {e ′ 1 , e 2 , . . . , e 2n−1 } for n, we have
ad e ′ 2n | n = a 0 0 A ,
with a and A as above: observe that A is still skew-symmetric with respect to g ′ , since g| k = g ′ | k . By the characterization of Kähler almost abelian Lie algebras [24], (g, J, g ′ ) is Kähler. We can then assume a = 0 and the previous method can only work if we manage to prove that v lies in the image of A, which corresponds to the orthogonal complement to ker A inside k. A is now of the form
A = diag 0 −b 1 b 1 0 , . . . , 0 −b t b t 0 , 0, . . . , 0 , for some b k ∈ R − {0}, k ∈ {1, .
. . , t}, and some t ∈ {0, . . . , n − 1}. Notice that ker A and its orthogonal complement inside k 1 are both J-invariant Now, using again ∂ω = ∂β ∈ Λ 1,0 n * 1 ∧ α 1 ∧ α 1 , we consider k ∈ {2, . . . , t + 1}, l ∈ {t + 2, . . . , n} (implying Z k ∈ Λ 1,0 (ker A) ⊥g and Z l ∈ Λ 1,0 ker A) and we have
0 = dβ(Z 1 , Z k , Z l ) = −β([Z 1 , Z k ], Z l ) + β([Z 1 , Z l ], Z k ), = iβ([e 2n , Z k ], Z l ) − iβ([e 2n , Z l ], Z k ), = b k β(Z l , Z k ).
We then must have β(Z k , Z l ), meaning
(6.2) β Λ 1,0 ker A, Λ 1,0 (ker A) ⊥g ∩ k = 0.
It is easy to see that, in our situation, given a 2-form γ ∈ Λ 2 g * , one has
ι Z 1 ι Z1 dγ | ker A = i 2 (ι v γ)| ker A .
As a consequence, the condition that ∂ω and ∂β coincide along their respective α 1 ∧α 1 ∧Λ 1,0 (ker A) *components boils down to
(6.3) (ι v β)| Λ 1,0 (ker A) = (ι v ω)| Λ 1,0 (ker A) .
To exploit this, we first write v = w +w, w ∈ ker A,w ∈ (ker A) ⊥g ∩ k = Im A, and then w = w 1,0 + w 1,0 , w 1,0 ∈ Λ 1,0 (ker A). Evaluating the left-hand side of (6.3) on w 1,0 , we obtain β v, w 1,0 = β w 1,0 , w 1,0 = 0, where we used that β is of type (2, 0) and condition (6.2). Instead, the right-hand side reads
ω v, w 1,0 = − i 2 w 1,0 2 .
It follows that w = 0, so that v ∈ (ker A) ⊥g ∩ k = Im A. The claim then follows. □
Having dealt with the almost abelian case in full generality, we restrict to the six-dimensional strongly unimodular case and analyze the remaining almost nilpotent Lie algebras. Theorem 6.2. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra endowed with a complex structure J. Then, g admits symplectic structures Ω taming J if and only if (g, J) admits Kähler metrics.
Proof. In the almost abelian case, the results follows from Theorem 6.1. Now the Lie algebra h 3 ⊕s 0 3.3 is the only Lie algebra of Theorem 2.2 admitting both SKT structures and symplectic structures, both of whose existence is necessary for the existence of a complex structure tamed by a symplectic form: this is achieved by considering the Lie algebras of Theorem 2.2 and comparing with [25], or by performing simple explicit computations, considering the generic closed 2-form on each Lie algebra and checking if it can be non-degenerate. We can then take the generic complex structure on h 3 ⊕ s 0 3.3 and consider the basis {α 1 , α 2 , α 3 } of (1, 0)-forms yielding the structure equations in Table 9.2, by Proposition 3.3. In such a basis, the generic ∂-closed (2, 0)-form is
β = z 1 α 12 + z 2 α 13 , z 1 , z 2 ∈ C and, letting {Z 1 , Z 2 , Z 3 , Z 1 , Z 2 , Z 3 } denote the basis of (h 3 ⊕ s 0 3.3 ) ⊗ C which is dual with respect to {α 1 , α 2 , α 3 , α 1 , α 2 , α 3 }, one can compute (∂ω − ∂β)(Z 1 , Z 3 , Z 3 ) = ελ 1 ̸ = 0,
negating the tamed condition and concluding the proof. □
LCB and LCK structures
In [31], the LCB condition was studied on almost abelian algebras, yielding a characterization result in all dimensions and a full classification result in dimension six.
The LCK condition has been studied more widely in literature: in [34], it was proven that the only (non-abelian) nilpotent Lie algebras admitting LCK structure are of the form h 2k+1 ⊕ R, where h 2k+1 denotes the (2k + 1)-dimensional Heisenberg Lie algebra we have defined in Section 2. In the almost abelian case, the characterization results of [1] were used in [31] to obtain a full classification in dimension six.
The following results extend the previously-mentioned results to the complete six-dimensional strongly unimodular almost nilpotent case. Explicit examples of LCB structures on the remaining Lie algebras are provided in Tables 9.4, 9.5 and 9.6.
Proof. The result in the almost abelian case follows from [31]. It remains to prove that s 6.25 does not admit LCB structures. This Lie algebra has nilradical n isomorphic to h 3 ⊕ R 2 and all its complex structures satisfy Jn 1 ⊂ n, by [15]. Following the same techniques of the proof of Theorem 5.1, let us assume s 6.25 admits an LCB structure (J, g); then, there exists an orthonormal basis {e 1 , . . . , e 6 }, with n = R ⟨e 1 , . . . , e 5 ⟩, n 1 = R ⟨e 1 ⟩ and Je 1 = e 2 , Je 3 = e 4 , Je 5 = e 6 , with respect to which one has (5.4), with the further conditions m = 0, q ̸ = 0. Now, (5.5) implies γ 2 = c q v ♭ must hold. By [15], the Lee form is
(7.1) θ = −v 2 e 1 + (c + v 1 )e 2 + Jv ♭ − me 6 ,
and a computation shows dθ(e 3 , e 4 ) = −cv 2 , meaning v 2 = 0, since c ̸ = 0 is forced by the nonvanishing of η. Moving on, we have
(7.2) dθ(X, e 6 ) = 1 q (c + v 1 )(cv ♭ + qJα) − q 2 v ♭ , X ∈ k.
If we assume v 1 = −c, (7.2) implies v = 0, and now the change of basis
f 1 = e 1 , f 2 = 1 q (γ 1 (e 4 )e 1 + γ 1 (e 3 )e 2 ) + e 3 , f 3 = 1 q (−γ 1 (e 3 )e 1 + γ 1 (e 4 )e 2 ) + e 4 , f 4 = − q c e 5 , f 5 = e 2 , f 6 = 1 q e 6
provides an explicit isomorphism with s 5.16 ⊕ R, a contradiction. This means that we must have c + v 1 ̸ = 0, instead. Equation (7.2) then forces
α = c(c + v 1 ) − q 2 q(c + v 1 ) Jv ♭ . Now, if (7.3) (c(c + v 1 ) − q 2 )|v| 2 − q 2 v 1 (c + v 1 )
vanishes, the change of basis
f 1 = e 1 , f 2 = c(c + v 1 ) − q 2 q 2 (c + v 1 ) g(v, e 3 )e 1 + g(v, e 4 ) c + v 1 e 2 + e 3 , f 3 = c(c + v 1 ) − q 2 q 2 (c + v 1 ) g(v, e 4 )e 1 − g(v, e 3 ) c + v 1 e 2 + e 4 , f 4 = − 1 q Jv + e 5 , f 5 = e 2 , f 6 = 1 q e 6
induces an isomorphism with the Lie algebra s 4.7 ⊕ R 2 , a contradiction. Instead, if (7.3) is non-zero, a similar change of basis, but with
f 4 = q 2 (c + v 1 ) (c(c + v 1 ) − q 2 )|v| 2 − q 2 v 1 (c + v 1 ) (Jv − qe 5 ) ,
does the same with respect to the Lie algebra s 5.16 ⊕ R, again a contradiction, concluding the proof. □ Theorem 7.2. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits LCK structures if and only if it is isomorphic to one among Proof. In the almost abelian case, the result follows from [31]. If the nilradical n of g has onedimensional commutator and g admits an LCK structure (J, g), we first assume that the complex structure satisfies Jn 1 ⊂ n: then, following the steps in the proof of Theorem 5.1, by [15], g admits an orthonormal basis {e 1 , . . . , e 6 }, with n = R ⟨e 1 , . . . , e 5 ⟩, n 1 = R ⟨e 1 ⟩ and Je 1 = e 2 , Je 3 = e 4 , Je 5 = e 6 , with respect to which one has (5.4), satisfying (5.5). The Lee form θ is given by (7.1) and its closure implies v 2 = 0, since a computation shows dθ(e 2 , e 5 ) = −v 2 m, dθ(e 3 , e 4 ) = −v 2 c, dθ(X, e 5 ) = −v 2 γ 2 (X), X ∈ k, and we want η not to vanish. Now, we can explicitly compute
s 0 3.3 ⊕ R 3 , s 0,0,r 5.13 ⊕ R, r>0,s 5.dω − θ 2 ∧ ω = − m 2 e 126 + c − v 1 2 e 2 ∧ (e 34 − e 56 ) − (Jγ 1 − γ 2 ) ∧ e 16 − γ 1 ∧ e 26 − γ 2 ∧ e 25 − Jv ♭ ∧ (e 12 − e 56 ),
so that the LCK condition forces
m = 0, v 1 = c, v = 0, α = ν = 0.
We also obtain c ̸ = 0, in order for η not to vanish. Now, an explicit isomorphism with s 5.16 ⊕ R is provided by the change of basis This rules out the existence of LCK structures on all the other Lie algebras of Theorem with nilradical with one-dimensional commutator and only admitting complex structures satisfying Jn 1 ⊂ n, namely s 4.6 ⊕ R 2 , s 6.25 (here, the non-existence of LCB structures was already enough), s p,0 6.51 , s 6.158 and s p 6.164 . Now, we only need to study Lie algebras with Heisenberg-type nilradical with respect to complex structures satisfying Jn 1 ̸ ⊂ n and s 0 6.145 , s 0 6.147 , s 6.152 , s 0 6.154 , whose nilradical has at least twodimensional commutator. As we have already done in the previous proofs, we do this by considering the structure equations of Table 9.2, by virtue of Proposition 3.3, and those of Propositions 3.1 and 3.2, together with the generic Hermitian metric (3.1). Instead of computing the associated Lee form θ, imposing its closure and the LCK condition dω = θ 2 ∧ ω, we first exhibit the generic real closed 1-form µ and then impose the condition dω = µ ∧ ω, proving that it always leads to a contradiction. Notice that, apart from h 3 ⊕ s 0 3.3 , a contradiction is reached as soon as µ is required to vanish, as µ = 0 would imply dω = 0 and we already know that none of these Lie algebras (again, except h 3 ⊕ s Table 7.1. □ Lie algebra Generic real closed 1-form µ dω − µ ∧ ω = 1 2 ξ jkm α jkm + 1 2 ξ jkm α mjk
h 3 ⊕ s 0 3.3 iy(α 1 − α 1 ) + ζα 3 + ζα 3 ξ 122 = yλ 2 , ξ 232 = iζλ 2 =⇒ y = ζ = 0, ξ 133 = ελ 1 s 4.7 ⊕ R 2 iy(α 1 − α 1 ) + ζα 3 + ζα 3 ξ 122 = ελ 1 + yλ 2 , ξ 131 = i(ζλ 1 + yw 2 ) =⇒ y = −ε λ1 λ2 , ζ = ε λ2 w 2 , ξ 133 = − ε λ2 (λ 1 λ 3 − |w 2 | 2 ) s 6.44 iy(α 1 − α 1 ) ξ 122 = yλ 2 s 0,q 6.52 , q > 0 iy(α 1 − α 1 ) ξ 133 = yλ 3 s 0 6.145 ζα 3 + ζα 3 ξ 131 = iζλ 1 s 0 6.147 ζα 3 + ζα 3 ξ 131 = iζλ 1 ζα 3 + ζα 3 ξ 123 = −iλ 1 ζα 3 + ζα 3 ξ 123 = −iλ 1 s 6.152 iy(α 2 − α 2 ) − iyℜ(z 2 )(α 3 − α 3 ) ξ 121 = −yλ 1 iy(α 2 − α 2 ) − iyℜ(z 1 )(α 3 − α 3 ) ξ 121 = −yλ 1 s 0 6.154 iy(α 2 − α 2 ) − iyℜ(z 1 )(α 3 − α 3 ) ξ 121 = −yλ 1 s 1 6.162 iy(α 1 − α 1 ) ξ 122 = (y − 2)λ 2 =⇒ y = 2, ξ 133 = 4λ 3 s p 6.165 , p > 0 iy(α 1 − α 1 ) ξ 122 = (y − 2p)λ 2 =⇒ y = 2p, ξ 133 = 4pλ 3 s 6.167 iy(α 1 − α 1 ) ξ 122 = yλ 2
Strongly Gauduchon and balanced structures
A Hermitian structure (J, g) on a 2n-dimensional smooth manifold is called Gauduchon when it satisfies ∂∂ω = 0. Clearly, the Gauduchon condition generalizes the balanced condition dω n−1 = 0.
In [39], the author introduced the strongly Gauduchon condition, defined by the condition ∂ω n−1 = ∂β, for some (n, n − 2)-form β. The strongly Gauduchon sits in between the Gauduchon and the balanced conditions, strengthening the former and generalizing the latter.
Speaking of balanced structures in the Lie algebra setting, we mention the classification result in the six-dimensional nilpotent case obtained in [37] and the characterization results for the almost abelian case in [16], yielding a full classification result in six dimensions. Partial results in the almost nilpotent case with Heisenberg-type nilradical were obtained in [15].
In the next theorem, we provide a full classification result for six-dimensional strongly unimodular almost nilpotent Lie algebras admitting balanced structures, showing that the generalization provided by the strongly Gauduchon condition is not reflected in this classification result. Explicit examples of balanced structures on such Lie algebras are provided in Tables 9.4, 9.5 and 9.6.
Proof. We start with the almost abelian case, proving that a six-dimensional (not necessarily unimodular) almost abelian Lie algebra g admits strongly Gauduchon structures if and only if it admits balanced structures. Assume (J, g) is a strongly Gauduchon structure on g. Then, by [15], g admits an orthonormal basis {e 1 , . . . , e 6 } such that n = R ⟨e 1 , . . . , e 5 ⟩ is an abelian ideal and J satisfies Je 1 = e 6 , Je 2 = e 3 , Je 4 = e 5 . In such a basis, the Lie bracket of g is encoded by the adjoint action of e 6 on n, which has the following matrix form, with respect to the chosen basis:
ad e6 | n = a 0 0 0 0 v 1 A 11 A 12 A 13 A 14 v 2 −A 12 A 11 −A 14 A 13 v 3 A 31 A 32 A 33 A 34 v 4 −A 32 A 31 −A 34 A 33 ,
with a, v k , A jk , k = 1, 2, 3, 4, j = 1, 3. We denote k := n∩Jn = R ⟨e 2 , e 3 , e 4 , e 5 ⟩, v :
= (v 1 , v 2 , v 3 , v 4 ) t ∈ k, A := ad e6 | k ∈ gl(k).
We consider the basis of (1, 0)-forms defined by
α 1 = 1 2 (e 1 + ie 6 ), α 2 = 1 2 (e 2 + ie 3 ), α 3 = 1 2 (e 4 + ie 5 ),
so that the fundamental form
ω = i 2 α 11 + i 2 α 22 + i 2 α 33 satisfies (8.1) ∂ω 2 = i 4 α 123 ∧ −(v 3 − iv 4 )α 12 + (v 1 − iv 2 )α 13 + (tr A)α 23 ,
while the generic (3, 1)-form
β = α 123 ∧ (z 1 α 1 + z 2 α 2 + z 3 α 3 ), z 1 , z 2 , z 3 ∈ C, satisfies (8.2) ∂β = i 2 α 1231 ∧ ((A 33 − iA 14 + 2A 11 + a)z 2 + (A 31 + iA 32 )z 3 ) α 2 + ((A 13 + iA 14 )z 2 + (A 11 − iA 12 + 2A 22 + a)z 3 ) α 3
Equating (
v = v 1 v 2 v 3 v 4 = 2(a − A 11 )ℜ(z 3 ) + 2A 12 ℑ(z 3 ) + 2A 13 ℜ(z 2 ) − 2A 14 ℑ(z 2 ) −2(a − A 11 )ℑ(z 3 ) + 2A 12 ℜ(z 3 ) − 2A 13 ℑ(z 2 ) − 2A 14 ℜ(z 2 ) −2(a + A 11 )ℜ(z 2 ) − 2A 31 ℜ(z 3 ) + 2A 32 ℑ(z 3 ) − 2A 34 ℑ(z 2 ) 2(a + A 11 )ℑ(z 2 ) + 2A 31 ℑ(z 3 ) + 2A 32 ℜ(z 3 ) − 2A 34 ℜ(z 2 ) = (A − a Id| k )X, X = −2ℜ(z 3 ) 2ℑ(z 3 ) 2ℜ(z 2 ) −2ℑ(z 2 ) ,
so that the new J-Hermitian metric for which the basis {e 1 − X, e 2 , e 3 , e 4 , e 5 , e 6 − JX} is balanced, as the matrix associated with ad e6−JX | k is of the form
adẽ 6 | k = a 0 0 A ,
with a and A being the same as before: in particular, tr A = 0. The classification of six-dimensional almost abelian Lie algebras admitting balanced structures was obtained in [16]: the unimodular ones are listed in the statement. We now turn our attention to Lie algebras with Heisenberg-type nilradical n, assuming the existence of a strongly Gauduchon structure (J, g) with complex structure satisfying Jn 1 ⊂ n: following the steps in the proof of Theorem 5.1, by [15], g admits an orthonormal basis {e 1 , . . . , e 6 }, with n = R ⟨e 1 , . . . , e 5 ⟩, n 1 = R ⟨e 1 ⟩ and Je 1 = e 2 , Je 3 = e 4 , Je 5 = e 6 , with respect to which one has (5.4), with (5.5). In what follows, we write v = v 3 e 3 + v 4 e 4 , γ 1 = γ 1,3 e 3 + γ 1,4 e 4 , γ 2 = γ 2,3 e 3 + γ 2,4 e 4 , v k , γ j,k ∈ R, j = 1, 2, k = 3, 4.
We now consider the basis of (1, 0)-forms provided by
α 1 = 1 2 (e 1 + ie 2 ), α 2 = 1 2 (e 3 + ie 4 ), α 3 = 1 2 (e 5 + ie 6 ).
We can compute
(8.3) ∂ω 2 = i 4 α 123 ∧ mα 12 + (v 3 − iv 4 )α 13 + (−v 1 + iv 2 − c)α 23 .
Taking the generic (3, 1)-form
β = α 123 ∧ (z 1 α 1 + z 2 α 2 + z 3 α 3 ), z 1 , z 2 , z 3 ∈ C,
we have (8.4) ∂β = 1 2 α 1233 qz 1 α 1 + ((γ 2,3 − γ 1,4 + i(γ 2,4 + γ 1,3 ))z 1 − imz 2 ) α 2 .
Equating (8.3) and (8.4) yields
m = 0, v 1 = −2mx 2 + 2γ 1,3 x 1 + 2γ 2,4 x 1 − c, v 2 = −2γ 1,4 x 1 + 2γ 2,3 x 1 , v 3 = 0, v 4 = −2qx 1 . m = 0, v 1 − iv 2 = −2i(γ 2,3 − γ 1,4 + i(γ 2,4 + γ 1,3 ))z 1 − c, v 3 − iv 4 = 2iqz 1 .
The condition (5.5) now reads 2cqz 1 + iq(γ 2,3 − iγ 2,4 ) = 0. We observe that q cannot vanish, otherwise g would be nilpotent, meaning we must set
γ 2,3 − iγ 2,4 = −2icz 1 .
To recap, we now have
ad e6 | n = 0 0 γ 1,3 γ 1,4 −4c|z 1 | 2 + 2γ 1,3 ℜ(z 1 ) − 2γ 1,4 ℑ(z 1 ) − c 0 0 −2cℑ(z 1 ) − γ 1,4 −2cℜ(z 1 ) + γ 1,3 −2γ 1,3 ℑ(z 1 ) − 2γ 1,4 ℜ(z 1 ) 0 0 0 q −2qℑ(z 1 ) 0 0 −q 0 −2qℜ(z 1 ) 0 0 0 0 0 , η = c e 34 − 2cℑ(z 1 )e 35 − 2cℜ(z 1 )e 45 .
Now, observing that c cannot vanish (otherwise, η = 0), we can perform a change of basis, making apparent the isomorphism between g and s 5.16 ⊕ R, which admits balanced structures, by [15]:
f 1 =e 1 , f 2 = 1 q (γ 1,4 e 1 + (γ 1,3 − 2cℜ(z 1 ))e 2 ) + e 3 , f 3 = 1 q (−γ 1,3 e 1 + (γ 1,4 + 2cℑ(z 1 )e 2 ) + e 4 , f 4 = q c(4|z 1 | 2 + 1) (2ℜ(z 1 )e 3 − 2ℑ(z 1 )e 4 − e 5 ),f 5 =e 2 , f 6 = 1 q e 6 .
It remains to examine Lie algebras with Heisenberg-type nilradical with respect to complex structures satisfying Jn 1 ̸ ⊂ n and Lie algebras with nilradical satisfying dim n 1 > 1. Following what we have done in the previous proofs, we can consider the structure equations of Table 9.2, thanks to Proposition 3.3, and those of Propositions 3.1 and 3.2, together the generic Hermitian metric of the form (3.1). Focusing on the Lie algebras not appearing in the statement, we impose the strongly Gauduchon condition ∂(ω 2 ) = ∂β for the generic (3, 1)-form
β = α 123 ∧ (z 1 α 1 + z 2 α 2 + z 3 α 3 ), z 1 , z 2 , z 3 ∈ R,
and show that we get to a contradiction.
− ∂β = 1 2 ξ 123jk α 123jk h 3 ⊕ s 0 3.3 ξ 12323 = −2iε(λ 1 λ 2 − |w 3 | 2 ) s 4.7 ⊕ R 2 ξ 12323 = −2iε(λ 1 λ 3 − |w 2 | 2 ) s 6.44 ξ 12323 = −2iε(λ 1 λ 2 − |w 3 | 2 )
s 0,q 6.52 , q > 0 ξ 12323 = −2iδ(λ 1 λ 3 − |w 2 | 2 ) [15,16] Having studied the SKT condition and the balanced condition, we can now say something about Kähler condition. Table 9.4.
s 6.152 ℜ(z 2 )ξ 12312 + ξ 12313 = −iδ(λ 1 λ 2 − |w 3 | 2 ) ℜ(z 1 )ξ 12312 + ξ 12313 = iδℑ(z 2 )(λ 1 λ 2 − |w 3 | 2 ) s 0 6.154 ℜ(z)ξ 12312 + ξ 12313 = ix(λ 1 λ 2 − |w 3 | 2 )
Proof. Comparing Theorems 5.1 and 8.1, the two Lie algebras of the statement are the only sixdimensional strongly unimodular almost nilpotent Lie algebras, up to isomorphism, which admit both SKT and balanced structures, which is a necessary condition for the existence of Kähler structures. The existence of Kähler structures on them was established in [14] and is confirmed by the explicit examples in Table 9.4, proving the claim. □
In [18], the authors formulated a conjecture, according to which a compact complex manifold admitting both SKT and balanced metrics necessarily admits Kähler metrics. So far, such conjecture has been confirmed in several cases [3,6,9,10,12,15,18,19,20,22,30,38]. Moreover, in [21], the authors provided the first examples of complex structures on non-unimodular Lie groups admitting both SKT and balanced metrics but no Kähler metrics.
We note that the previous theorem confirms this conjecture for six-dimensional almost nilpotent solvmanifolds.
Appendix
This section features the tables mentioned throughout the article. Table 9.1 summarizes the main results, providing, at a glance, a list of all six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures, up to isomorphism, along with their structure equations and the types of special Hermitian structures they admit. Parentheses indicate trivial results: for example, the Lie algebra s 0 5.8 ⊕ R admits balanced structures, so it trivially admits LCB structures. Tables 9.2 and 9.3 were referenced in Propositions 3.3 and 3.4 and provide the classification of complex structures, up to automorphisms, on some of the Lie algebras of Table 9.1 (with some further conditions on the possible Hermitian metrics in the almost abelian case). Finally, Tables 9.4, 9.5 and 9.6 provide explicit examples of special Hermitian structures on each of the Lie algebras of Table 9.1. In order to simplify these final three tables, we recall the inclusions among the different types of special Hermitian metrics mentioned in the Introduction: for instance, the balanced structure on s 0 5.8 ⊕ R is obviously also an example of LCB structure, but we omit writing it.
s 0 3.3 ⊕ R 3 (f 26 , −f 16 , 0, 0, 0, 0) R 5 ✓ (✓) (✓) (✓) (✓) (✓) (✓) s − 1 2 ,− 1 2 4.3 ⊕ R 2 f 16 , − 1 2 f 26 , − 1 2 f 36 , 0, 0, 0 R 5 - ✓ - - - ✓ (✓) s p,− p 2 4.5 ⊕ R 2 pf 16 , − p 2 f 26 + f 36 , −f 26 − p 2 f 36 , 0, 0, 0 , p > 0 R 5 - ✓ - - - ✓ (✓) s 0 5.4 ⊕ R f 26 , 0, f 46 , −f 36 , 0, 0 R 5 - ✓ - - ✓ - (✓) s 0 5.8 ⊕ R f 26 + f 36 , −f 16 + f 46 , f 46 , −f 36 , 0, 0 R 5 - - ✓ - - (✓) - s 1,−1,−1 5.9 ⊕ R f 16 , f 26 , −f 36 , −f 46 , 0, 0 R 5 - - ✓ - - (✓) - s p,p,−p 5.11 ⊕ R pf 16 , pf 26 , −pf 36 + f 46 , −f 36 − pf 46 , 0, 0 , p > 0 R 5 - - ✓ - - (✓) - s p,−p,r 5.13 ⊕ R pf 16 + f 26 , −f 16 + pf 26 , −pf 36 + rf 46 , −rf 36 − pf 46 , 0, 0 , r > 0 R 5 p = 0 (p = 0) ✓ (p = 0) (p = 0) (✓) (p = 0) s − 1 4 ,− 1 4 6.14 − 1 4 f 16 + f 26 , − 1 4 f 26 , − 1 4 f 36 + f 46 , − 1 4 f 46 , f 56 , 0 R 5 - - - - - ✓ ✓ s p,−4p
6.16
pf 16 + f 26 + f 36 , −f 16 + pf 26 + f 46 , pf 36 + f 46 , −f 36 + pf 46 , −4pf 56 , 0 , p < 0 R 5 - - - - - ✓ ✓ s 1,q,q,−2(1+q) 6.17 f 16 , f 26 , qf 36 , qf 46 , −2(1 + q)f 56 , 0 , 0 < |q| ≤ 1, q ̸ = 1 R 5 - - - q = 1 q = 1 ✓ q > 0 s 1,− 3 2 ,− 3 2
6.18
f 16 + f 26 , f 26 , f 36 , − 3 2 f 46 , − 3 2 f 56 , 0 R 5 - - - - - - - s p,p,q,−p− q 2 6.19 pf 16 , pf 26 , qf 36 , − p + q 2 f 46 + f 56 , −f 46 − p + q 2 f 56 , 0 , p, q ̸ = 0 R 5 - q = −2p - q = −4p q = −4p ✓ p(2p + q) ≤ 0 s p,p,− 3 2 p 6.20 pf 16 + f 26 , pf 26 , pf 36 , − 3 2 pf 46 + f 56 , −f 46 − 3 2 pf 56 , 0 , p > 0 R 5 - - - - - - - s p,q,r,−2(p+q) 6.21 pf 16 + f 26 , −f 16 + pf 26 , qf 36 + rf 46 , −rf 36 + qf 46 , −2(p + q)f 56 , 0 , |p| ≥ |q|, q ̸ = −p, r > 0 R 5 - q = 0 - q = p q = p ✓ pq ≥ 0 h3 ⊕ s 0 3.3 (f 23 , 0, 0, f 56 , −f 46 , 0) h3 ⊕ R 2 - ✓ - - ✓ ✓ (✓) s4.6 ⊕ R 2 (f 23 , f 26 , −f 36 , 0, 0, 0) h3 ⊕ R 2 - ✓ - - - ✓ (✓) s4.7 ⊕ R 2 (f 23 , f 36 , −f 26 , 0, 0, 0) h3 ⊕ R 2 - ✓ - - - ✓ (✓) s5.16 ⊕ R (f 23 + f 46 , f 36 , −f 26 , 0, 0, 0) h3 ⊕ R 2 - - ✓ ✓ - (✓) ✓ s6.25 (f 23 , f 36 , −f 26 , 0, f 46 , 0) h3 ⊕ R 2 - ✓ - - - - (✓) s6.44 (f 23 , f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0) h3 ⊕ R 2 - - - - - ✓ - s p,0 6.51 (f 23 , pf 26 , −pf 36 , f 56 , −f 46 , 0), p > 0 h3 ⊕ R 2 - ✓ - - - ✓ (✓) s 0,q 6.52 (f 23 , f 36 , −f 26 , qf 56 , −qf 46 , 0), q > 0 h3 ⊕ R 2 - ✓ - - - ✓ (✓) s6.158 (f 24 + f 35 , 0, f 36 , 0, −f 56 , 0) h5 - ✓ - - - ✓ (✓) s6.159 (f 24 + f 35 , 0, f 56 , 0, −f 36 , 0) h5 - - ✓ ✓ - (✓) ✓ s 1 6.162 (f 24 + f 35 , f 26 , f 36 , −f 46 , −f 56 , 0) h5 - - ✓ - - (✓) - s p 6.164 (f 24 + f 35 , pf 26 , f 56 , −pf 46 , −f 36 , 0), p > 0 h5 - ✓ - - - ✓ (✓) s p 6.165 (f 24 + f 35 , pf 26 + f 36 , −f 26 + pf 36 , −pf 46 + f 56 , −f 46 − pf 56 , 0), p > 0 h5 - - ✓ - - (✓) - s p 6.166 (f 24 + f 35 , f 46 , pf 56 , −f 26 , −pf 36 , 0), 0 < |p| ≤ 1 h5 - - ✓ ✓ - (✓) p ̸ = 1 s6.167 (f 24 + f 35 , f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0) h5 - - ✓ - - (✓) - s 0 6.145 (f 35 + f 26 , f 45 − f 16 , f 46 , −f 36 , 0, 0) n5.1 - - ✓ - - (✓) - s 0 6.147 (f 35 + f 26 + f 36 , f 45 − f 16 + f 46 , f 46 , −f 36 , 0, 0) n5.1 - - ✓ - - (✓) - s6.152 f 35 + f 26 , f 34 − f 16 + f 56 , f 45 , −f 56 , f 46 , 0 n5.2 - - - - - ✓ - s 0 6.154 f 35 + f 26 , f 34 − f 16 , f 45 , −f 56 , f 46 , 0 n5.2 - - - - ✓ ✓ -h 3 ⊕ s 0 3.3 dα 1 = iεα 33 , dα 2 = −α 2 ∧ (α 1 − α 1 ), dα 3 = 0. ε ∈ {−1, 1} s 4.7 ⊕ R 2 dα 1 = iεα 22 , dα 2 = −α 2 ∧ (α 1 − α 1 ), dα 3 = 0. ε ∈ {−1, 1} s 6.44 dα 1 = iεα 33 , dα 2 = −(α 2 + iα 3 ) ∧ (α 1 − α 1 ), dα 3 = −α 3 ∧ (α 1 − α 1 ). ε ∈ {−1, 1} s 0,q 6.52 , q > 0 dα 1 = iδα 22 , dα 2 = −α 2 ∧ (α 1 − α 1 ), dα 3 = −εqα 3 ∧ (α 1 − α 1 ). ε ∈ {−1, 1} s 6.159 dα 1 = iδα 22 + iεα 33 , dα 2 = −α 2 ∧ (α 1 − α 1 ), dα 3 = 0. δ, ε ∈ {−1, 1} s 1 6.162 dα 1 = α 23 − α 32 , dα 2 = −iα 2 ∧ (α 1 − α 1 ), dα 3 = iα 3 ∧ (α 1 − α 1 ). - s p 6.165 , p > 0 dα 1 = α 23 − α 32 , dα 2 = −(1 + ip) α 2 ∧ (α 1 − α 1 ), dα 3 = −(1 − ip) α 3 ∧ (α 1 − α 1 ). - s p 6.166 , 0 < |p| ≤ 1 dα 1 = iδα 22 + iεδα 33 , dα 2 = −α 2 ∧ (α 1 − α 1 ), dα 3 = −εpα 3 ∧ (α 1 − α 1 ). δ, ε ∈ {−1, 1} s 6.167 dα 1 = εα 23 − εα 32 + ixα 33 , dα 2 = −(α 2 + iα 3 ) ∧ (α 1 − α 1 ), dα 3 = −α 3 ∧ (α 1 − α 1 ). ε ∈ {−1, 1}, x ∈ Rµ = f 6 s − 1 2 ,− 1 2 4.3 ⊕ R 2 - f 16 , − 1 2 f 26 , − 1 2 f 36 , 0, 0, 0 SKT LCB Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 s p,− p 2 4.5 ⊕ R 2 p > 0 pf 16 , − p 2 f 26 + f 36 , −f 26 − p 2 f 36 , 0, 0, 0 SKT LCB Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 s 0 5.4 ⊕ R - f 26 , 0, f 46 , −f 36 , 0, 0 SKT LCSKT Jf1 = f5, Jf2 = f6, Jf3 = f4 ω = f 15 + f 26 + f 34 µ = f 6 s 0 5.8 ⊕ R - f 26 + f 36 , −f 16 + f 46 , f 46 , −f 36 , 0, 0 Balanced Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56
+ f 16 , f 45 − f 26 , f 36 , −f 46 , 35 + f 26 , f 45 − f 16 , f 46 , −f 36 , 0, 0) n 5.1 = (f 35 , f 45 , 35 + f 16 , f 45 − f 16 + f 36 , f 36 , −f 46 , 0, 0) n 5.1 = (f 35 , f 45 , 35 + f 26 + f 36 , f 45 − f 16 + f 46 , f 46 , −f 36 ,0, 0) n 5.1 = (f 35 , f 45 , 0, 0, 0) 2 s 6.151 (f 35 + f 16 , f 34 − f 26 − f 46 , f 45 , −f 46 , f 56 , 0) n 5.2 = (f 35 , f 34 , f 45 , 0, 0) 3 s 6.152 (f 35 + f 26 , f 34 − f 16 + f 56 , f 45 , −f 56 , f 46 , 0) n 5.2 = (f 35 , f 34 , f 45 , 35 + f 26 , f 34 − f 16 , f 45 , −f 56 , f 46 , 0) n 5.2 = (f 35 , f 34 , f 45 ,
First, f 6 (N J (f 1 , f 2 )) = (J 61 ) 2 + (J 62 ) 2 forces J 61 = J 62 = 0, and now f 6 (N J (f 3 , f 4 )) = (J 63 ) 2 + (J 64 ) 2 yields J 63 = J 64 = 0, at which point f 5 (N J (f 1 , f 3 )) = (J 51 ) 2 , f 5 (N J (f 2 , f 4 )) = (J 52 ) 2 imply J 51 = J 52 = 0. Now, we necessarily have J 65 ̸ = 0, as (3.7) −1 = (J 2 ) 66 = J 2 66 + J 56 J 65 , so that f 5 (N J (f 3 , f 5 )) = J 65 J 54 , f 5 (N J (f 4 , f 5 )) = −J 65 J 53 , (J 2 ) 65 = J 65 (J 55 + J 66 ) yield J 53 = J 54 = 0, J 66 = −J 55 . Moreover, by (3.7), we deduce
One computes f 1 (N J (f 4 , f 5 )) = −J 11 J 34 − J 12 J 44 − J 12 J 55 − J 13 J 65 + J 24 J 65 + J 34 J 55 , f 1 (N J (f 2 , f 5 )) = −J 11 J 32 − J 11 J 65 − J 12 J 42 + J 22 J 65 + J 32 J 55 , yielding J 13 = − 1 J 65 (J 11 J 34 + J 12 J 44 + J 12 J 55 − J 24 J 65 − J 34 J 55 ),
which cannot both vanish, having established J 65 ̸ = 0. Now, we can explicitly impose the vanishing off 3 (N J (f 1 , f 5 )) = −(J 31 ) 2 − J 32 J 41 + J 32 J 65 + J 41 J 65 ,f 4 (N J (f 1 , f 5 )) = −J 31 J 41 − J 31 J 65 − J 41 J 42 + J 42 J 65 , by setting J 32 = J 2 31 − J 41 J 65 J 65 − J 41 , J 42 = J 31 (J 41 + J 65 ) J 65 − J 41 .
Theorem 4. 1 .
1Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits 1 st -Gauduchon structures (J, g) if and only if it is isomorphic to one among
Theorem 7. 1 .
1Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits LCB structures if and only if it is isomorphic to one of the Lie algebras of Theorem 2.
0 3.3 ) admit Kähler structures: s 0 6.164 does not admit 1 st -Gauduchon structures, while the other ones do not admit LCSKT structures, and both are generalizations of the Kähler condition. The computations are summarized in
8.1) and (8.2) yields tr A = 0 and
Corollary 8 . 4 .
84Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits Kähler structures if and only if it admits both SKT and balanced structures if and only if it is isomorphic to one among s 0 3.3 ⊕ R 3 = f 26 , −f 16 , ⊕ R = f 26 , −f 16 , rf 46 , −rf 36 , 0, 0 , r > 0. Explicit examples of Kähler structures on these Lie algebras are exhibited in
⊕q
R -f 16 , f 26 , −f 36 , −f 46 , 0, 0 Balanced Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 16 , pf 26 , −pf 36 + f 46 , −f 36 − pf 46 , 0, 0 Balanced Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 26 , −f 16 , rf 46 , −rf 36 , 0, 0) KählerJf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 µ = f 6 p ̸ = 0, r > 0 (pf 16 + f 26 , −f 16 + pf 26 , −pf 36 + rf 46 , −rf 36 − pf 46 , 0, 0, 0) Balanced Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 16 + f 26 , − 1 4 f 26 , − 1 4 f 36 + f 46 , − 1 4 f 46 , 0, 0 LCB 1 st -Gauduchon Jf1 = f3, Jf2 = f4, Jf5 = f6 ω = f 13 + 4f 24 + f 56 16 + f 26 + f 36 , −f 16 + pf 26 + f 46 , pf 36 + f 46 , −f 36 + pf 46 , −4pf 56 , 0 LCB 1 st -Gauduchon Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = 4p 2 f 12 + f 34 + f 16 , f 26 , qf 36 , qf 46 , −2(1 + q)f 56 , 0) LCB Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 0 < q < 1 (f 16 , f 26 , qf 36 , qf 46 , −2(1 + q)f 56 , 0) LCB 1 st -Gauduchon Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + 4f 13 + 4f 24 + 4 (1+q) 16 , f 26 , f 36 , f 46 , −4f 56 , 0) LCK LCSKT 1 st -Gauduchon Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + 4f 13 + 4f 24 + 16f 34 + 2f 56 16 + f 26 , f 26 , f 36 , − 3 2 f 46 , − 3 2 f 56 , 0 Complex Jf1 = f3, Jf2 = f6, 16 , pf 26 , −2pf 36 , f 56 , −f 46 , 0) SKT LCB Jf1 = f2, Jf3 = f6, Jf4 = f5 ω = f 12 + f 36 + f 45 q = −4p ̸ = 0 (pf 16 , pf 26 , −4pf 36 , pf 46 + f 56 , −f 46 + pf 56 , 0) LCK LCSKT 1 st -Gauduchon Jf1 = f2, Jf3 = f6, Jf4 = f5 ω = f 12 + f 36 + f 45 µ = 2pf 6 p(2p + q) < 0, q ̸ = −4p (pf 16 , pf 26 , qf 36 , −(p + q 2 )f 46 + f 56 , −f 46 − (p + q 2 )f 56 , 0) LCB 1 st -Gauduchon Jf1 = f2, Jf3 = f6, Jf4 = f5 ω = f 12 + 8f 14 + 8f 25 + 4f 36 − 8 q 2 +4 p(2p+q) f 45 p(2p + q) > 0, q ̸ = 0 (pf 16 , pf 26 , qf 36 , −(p + q 2 )f 46 + f 56 , −f 46 − (p + q 2 )f 56 , 0) LCB Jf1 = f2, Jf3 = f6, Jf4 = f5 ω = f 12 + f 36 + f 16 + f 26 , pf 26 , pf 36 , − 3 2 pf 46 + f 56 , −f 56 − 3 2 pf 56 , 0 Complex Jf1 = f3, Jf2 = f6, = 0, p ̸ = 0, r > 0 (pf 16 + f 26 , −f 16 + pf 26 , rf 46 , −rf 36 , −2pf 56 , 0) SKT LCB Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 q = p ̸ = 0, r > 0 (pf 16 + f 26 , −f 16 + pf 26 , pf 36 + rf 46 , −rf 36 + pf 46 , −4pf 56 , 0) LCK LCSKT Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56 µ = 2pf 6 1 st -Gauduchon Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + 2f 13 + 2f 24 + 4p 2 +(r−1) 2 p2f 34 + f 56 pq > 0, |p| > |q|, r > 0 (pf 16 + f 26 , −f 16 + pf 26 , qf 36 + rf 46 , −rf 36 + qf 46 , −2(p + q)f 56 , 0) LCB 1 st -Gauduchon Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + 2f 13 + 2f 24 + (p+q) 2 +(r−1) 2 pq f 34 + f 56 pq < 0, |p| > |q|, r > 0 (pf 16 + f 26 , −f 16 + pf 26 , qf 36 + rf 46 , −rf 36 + qf 46 , −2(p + q)f 56 , 0) LCB Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56
f2, Jf3 = f6, Jf4 = f5 ω = f 12 + f 36 + f 45 s4.7 ⊕ R 2 -f 23 , f 36 , −f 26 , 0, 0, 0 SKT LCB Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 s5.16 ⊕ R f 23 + f 46 , f 36 , −f 26 , 0, 0, 0 Balanced Jf1 = f5, Jf2 = f3, Jf4 = −f6 ω = f 15 + f 23 − f 46 LCK Jf1 = f5, Jf2 = f3, Jf4 = f6 ω = f 15 + f 23 + f 46 1 st -Gauduchon Jf1 = f5, Jf2 = f3, Jf4 = f6 ω = f 13 + f 15 + 2f 23 + f 25 + f 46 s6.25 -f 23 , f 36 , −f 26 , 0, f 46 , 0 SKT Jf1 = f5, Jf2 = f3, Jf4 = f6 ω = f 15 + f 23 + f 46 s6.44 -f 23 , f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0 LCB Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 23 , pf 26 , −pf 36 , f 56 , −f 46 , 0 SKT LCB Jf1 = pf2, Jf3 = f6, Jf4 = f5 ω = pf 12 + f 36 + f 45 23 , f 36 , −f 26 , qf 56 , −qf 46 , 0 SKT LCB Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 s6.158 f 24 + f 35 , 0, f 36 , 0, −f 56 , 0 SKT Jf1 = f3, Jf2 = f4, Jf5 = f6 ω = f 13 + f 24 + f 56 f 13 + f 24 − f 35 + f 56 s6.159 f 24 + f 35 , 0, f 56 , 0, −f 36 , 0 Balanced Jf1 = f6, Jf2 = −f4, Jf3 = f5 ω = f 16 − f 24 + f 35 LCK Jf1 = f6,Jf2 = f4, Jf3 = f5 ω = f 16 + f 24 + f 35 1 st -Gauduchon Jf1 = f6, Jf2 = f4, Jf3 = f5 ω = f 16 − f 23 + 2f 24 + f 35 − f 24 + f 35 , f 26 , f 36 , −f 46 , −f 56 , 0 Balanced Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 24 + f 35 , pf 26 , f 56 , −pf 46 , −f 36 , 0 SKT Jf1 = pf2, Jf3 = f5, Jf4 = f6 ω = pf 12 + f 35 + f 46 pf 12 − f 24 + f 35 + f 46 24 + f 35 , pf 26 + f 36 , −f 26 + pf 36 , −pf 46 + f 56 , −f 46 − pf 56 , 0 Balanced Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45 24 + f 35 , f 46 , f 56 , −f 26 , −f 36 , 0Balanced Jf1 = f6, Jf2 = −f4, Jf3 = f5 ω = f 16 − f 24 + f 35 LCK Jf1 = f6, Jf2 = f4, Jf3 = f5 ω = f 16 + f 24 + f 35 0 < |p| ≤ 1, p ̸ = 1 f 24 + f 35 , f 46 , pf 56 , −f 26 , −pf 36 , 0 Balanced Jf1 = f6, Jf2 = −f4, Jf3 = f5 ω = f 16 − f 24 + f 35 LCK Jf1 = f6, Jf2 = f4, Jf3 = f5 ω = f 16 + f 24 + f 35 1 st -Gauduchon Jf1 = f6, Jf2 = f4,Jf3 = f5 ω = |p − 1|f 16 + f 23 + 2f 24 + f 35 + f 45 s6.167 -f 24 + f 35 , f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0 Balanced Jf1 = f6, Jf2 = f3, Jf4 = f5 ω = f 16 + f 23 + f 45
- f 35
35+ f 26 , f 45 − f 16 , f 46 , −f 36 , 0, 0 Balanced Jf 1 = f 2 , Jf 3 = f 4 , Jf 5 = f 6 ω = f 12 + f 34 + f 35 + f 26 + f 36 , f 45 − f 16 , f 46 , −f 36 , 2f 12 + f 14 + f 34 + f 56 s 6.152 -f 35 + f 26 , f 34 − f 16 + f 56 , f 45 , −f 56 , f 46 , 35 + f 26 , f 34 − f 16 , f 45 , −f 56 , f 46 , 0 LCSKT LCB Jf 1 = f 2 , Jf 3 = f 6 , Jf 4 = −f 5 ω = f 12 + f 36 − f 45 µ = −2f 6
Table 2 . 1 .
21Six-dimensional strongly unimodular almost nilpotent Lie algebras with nilradical having commutator of dimension at least two.Theorem 2.2. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits complex structures if and only if it is isomorphic to one among
s = pf 16 + f 26 , −f 16 + pf 26 , qf 36 + rf 46 , −rf 36 + qf 46 , −2(p + q)f 56 , 0 , |p| ≥ |q|, q ̸ = −p, r > 0, s 6.25 = (f 23 , f 36 , −f 26 , 0, f 46 , 0), s 6.44 = (f 23 , f 36 , −f 26 , f 26 + f 56 , f 36 − f 46 , 0),p,p,− 3
2 p
6.20
= pf 16 + f 26 , pf 26 , pf 36 , − 3
2 pf 46 + f 56 , −f 46 − 3
2 pf 56 , 0 , p > 0,
s
p,q,r,−2(p+q)
6.21
s p,0
6.51 = (f 23 , pf 26 , −pf 36 , f 56 , −f 46 , 0), p > 0,
s 0,q
6.52 = (f 23 , f 36 , −f 26 , qf 56 , −qf 46 , 0), q > 0,
s 0
6.145
Proof. We start with the almost abelian case. Comparing the Lie algebras of the statement with the ones of Theorem 2.2, we need to prove the non-existence of 1 st -Gauduchon structures on the almost abelian Lie algebrasp,− p
2
4.5
⊕ R 2 , p>0,
s 4.6 ⊕ R 2 ,
s 4.7 ⊕ R 2 ,
s 0
5.4 ⊕ R,
s 0,0,r
5.13 ⊕ R, r>0,
s 5.16 ⊕ R,
s
− 1
4 ,− 1
4
6.14
,
s p,−4p
6.16 , p<0,
s
1,q,q,−2(1+q)
6.17
, 0<q≤1,
s
p,p,q,−p− q
2
6.19
, p(2p+q)≤0, p,q̸ =0, s
p,q,r,−2(p+q)
6.21
, pq≥0, |p|≥|q|, p̸ =0, r>0, s 6.25 ,
s p,0
6.51 , p>0,
s 0,q
6.52 , q>0,
s 6.158 ,
s 6.159 ,
s p
6.164 , a>0,
s p
6.166 , 0<|p|≤1, p̸ =1.
Explicit examples of 1 st -Gauduchon structures on these Lie algebras are provided in Tables 9.4 and
9.5.
s 0
5.8 ⊕ R,
s 1,−1,−1
5.9
⊕ R,
s p,p,−p
5.11
⊕ R, p>0,
s p,−p,r
5.13
⊕ R, p̸ =0, r>0,
s
1,q,q,−2(1+q)
6.17
, −1<q<0, s
1,− 3
2 ,− 3
2
6.18
,
s
p,p,q,−p− q
2
6.19
, p(2p+q)>0, q̸ =0, s
p,p,− 3
2 p
6.20
, p>0,
s
p,q,r,−2(p+q)
6.21
, pq<0, |p|≥|q|, q̸ =−p, r>0.
LCSKT structures if and only if it is isomorphic to one among6 ⊕ R 2 ,
s 4.7 ⊕ R 2 ,
s 0
5.4 ⊕ R,
s 0,0,r
5.13 ⊕ R, r>0,
s p,p,−2p,0
6.19
, p̸ =0,
s p,0,r,−2p
6.21
, p̸ =0, r>0,
s 6.25 ,
s p,0
6.51 , p>0,
s 0,q
6.52 , q>0,
s 6.158 ,
s p
6.164 , p>0,
(ii) s 0
3.3 ⊕ R 3 ,
h 3 ⊕ s 0
3.3 ,
s 0
5.4 ⊕ R,
s 0,0,r
5.13 ⊕ R, r>0,
s 1,1,1,−4
6.17
,
s p,p,−4p,p
6.19
, p̸ =0,
s p,p,r,−4p
6.21
Table 9 .
92, exhibit the generic real closed 1-form µ, consider the generic Hermitian metric (3.1) and compute dd c ω − µ ∧ d c ω, showing that it cannot vanish. The computations proving the claim are summarized inTable 5.1.Lie algebra
Generic real closed 1-form µ
dd c
Table 5 . 1 .
51Six-dimensional strongly unimodular non-almost abelian almost nilpotent Lie algebras admitting neither SKT nor LCSKT structures (except s 5.16 ⊕ R).
Table 5 . 2 .
52Six-dimensional strongly unimodular non-almost abelian almost nilpotent Lie algebras admitting SKT structures but no LCSKT structures (excluding those with nilradical having one-dimensional commutator and only admitting complex structures satisfying Jn 1 ⊂ n). provide new examples of Lie algebras admitting LCSKT structures, with the latter being of particular interest, as it does cannot carry SKT structures. The simply connected Lie groups associated with these two Lie algebras admit cocompact lattices (see[15, Remark 4.8], for example, and Remark 2.3), yielding new examples of compact LCSKT manifolds. Moreover, in the case of s 0 6.154 , such examples can have non-degenerate torsion form H = d c ω, in the sense that the contraction ι X H of H by any non-zero tangent vector X yields a non-zero 2-form: for example, one can consider the one induced by the example of LCSKT structure on s 0 6.154 from Table 9.6, having torsion form H = f 146 − f 256 − f 345 .Remark 5.2. We note that we have not obtained any new isomorphism classes of Lie algebras
admitting SKT structures with respect to the ones already known by the classification results in
[14, 15]. Instead, h 3 ⊕ s 0
3.3 and s 0
6.154
Table 7 .
71. Six-dimensional strongly unimodular (non-almost abelian) almost
nilpotent Lie algebras not admitting LCK structures (excluding those with
Heisenberg-type nilradical and only admitting complex structures satisfying Jn 1 ⊂
n).
Theorem 8.1. Let g be a six-dimensional strongly unimodular almost nilpotent Lie algebra. Then, g admits strongly Gauduchon structures if and only if it admits balanced structures if and only if it is isomorphic to one amongs 0
3.3 ⊕ R 3 ,
s 0
5.8 ⊕ R,
s 1,−1,−1
5.9
⊕ R,
s p,p,−p
5.11
⊕ R, p>0,
s 0,0,r
5.13 ⊕ R, r>0,
s 5.16 ⊕ R,
s 0
6.145 ,
s 0
6.147 ,
s 6.159 ,
s 1
6.162 ,
s p
6.165 , p>0,
s p
6.166 , 0<|p|≤1,
s 6.167 .
Table 8.1 summarizes these computations. □Lie algebra ∂ω 2
Table 8 .
81. Six-dimensional strongly unimodular non-almost abelian almost nilpo-
tent Lie algebras not admitting strongly Gauduchon structures (excluding those
with Heisenberg-type nilradical and only admitting complex structures satisfying
Jn 1 ⊂ n).
Remark 8.2. We note that it is possible to prove that all the Lie algebras of Theorem 8.1, beside
admitting balanced structures, also admit non-balanced strongly Gauduchon structures.
Remark 8.3. The previous result provides two new classes of Lie algebras admitting balanced
structures, namely s 0
6.145 and s 0
6.147 , with the respect to the ones previously known in literature
(see
). Thanks to Remark 2.3, this yields new examples of compact solvmanifolds admitting invariant balanced structures.
Table 9 . 1 .
91Six-dimensional strongly unimodular almost nilpotent Lie algebras admitting complex structures.Lie algebra
Complex structure equations
Conditions
Table 9 . 2 .
92Complex structures satisfying Jn 1 ̸ ⊂ n, up to automorphisms, on sixdimensional strongly unimodular almost nilpotent Lie algebras with nilradical having one-dimensional commutator.Jf1 = f2, Jf3 = f4, Jf5 = f6 ω = f 12 + f 34 + f 56Lie algebra
Conditions
Structure equations
Structure type
Example
s3.3 ⊕ R 3
-
(f 26 , −f 16 , 0, 0, 0, 0)
Kähler
Table 9 . 4 .
94Examples of complex and special Hermitian structures on sixdimensional unimodular almost abelian Lie algebras.Lie algebra Conditions
Structure equations
Structure type
Example
h3 ⊕ s 0
3.3
Table 9 . 5 .
95Explicit Hermitian structures on six-dimensional strongly unimodular almost nilpotent Lie algebras with nilradical having one-dimensional commutator.Lie algebra Conditions Structure equations
Structure type
Example
s 0
6.145
Table 9 . 6 .
96Examples of special Hermitian structures on six-dimensional strongly unimodular almost nilpotent Lie algebras with nilradical having commutator of dimension at least two. (A. Fino) Dipartimento di Matematica "G. Peano", Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy & Department of Mathematics and Statistics, Florida International University, 33199 Miami, Florida, USA Email address: [email protected], [email protected] (F. Paradiso) Dipartimento di Matematica "G. Peano", Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy Email address: [email protected]
f 2 (N J (f 4 , f 6 )) = 2δJ 24 ,yielding J 24 = 0, andf 1 (N J (f 5 , f 6 )) = δJ 15 + J 14 J 35 − J 25 J 55 − J 26 J 65 ,from which we obtainJ 26 = 1 J 65 (δJ 15 + J 14 J 35 − J 25 J 55 ).Lastly, f 1 (N J (f 4 , f 6 )) = 2δJ 14 forces J 14 = 0, so that, in order for (J 2 ) 26 = −δJ 15 J 55 − δJ 16 J 65 − J 25
f 3 (N J (f 3 , f 5 )) = −J 33 J 43 − J 34 J 63 − J 45 J 63 − J 43 (J 53 ) 2 J 63 , (J 2 ) 23 = −δJ 13 + J 23 J 33 + J 24 J 43 + J 25 J 53 + J 26 J 63 ,(J 2 ) 63 = −(J 43 ) 2 − (J 53 ) 2 + J 33 J 63 + J 63 J 66 , yielding J 34 = − 1 J 63 (J 33 J 43 + J 45 J 53 ) ,
Acknowledgments. The authors are partially supported by Project PRIN 2017 "Real and complexz1, z2 ∈ C, ε ∈ {−1, 1}Table 9.3. Hermitian structures up to equivalence on six-dimensional unimodular almost abelian Lie algebras, with Hermitian metric (3.13).
A. Andrada, M. Origlia, Lattices in almost abelian Lie groups with locally conformal Kähler or symplectic structures, Manuscripta Math. 155 (2018), 389-417.
R. M. Arroyo, R. A. Lafuente, The long-time behavior of the homogeneous pluriclosed flow, Proc. Lond. Math. Soc. (3) 119 (2019), no. 1, 266-289.
R. M. Arroyo, M. Nicolini, SKT structures on nilmanifolds, Math. Z. 302 (2022), 1307-1320.
L.-B. Beaufort, A. Fino, Locally conformal SKT almost abelian Lie algebras, arXiv:2212.11539.
C. Bock, On low dimensional solvmanifolds, Asian J. Math. 20 (2016), no. 2, 199-262.
I. Chiose, Obstructions to the existence of Kähler structures on compact complex manifolds, Proc. Amer. Math. Soc. 142 (2014), no. 10, 3561-3568.
B. Djebbar, A. C. Ferreira, A. Fino, N. Z. Larbi Youcef, Locally conformal SKT structures, Int. J. Math. 33 (2022), no. 14.
N. Enrietti, A. Fino, L. Vezzoni, Tamed symplectic forms and strong Kähler with torsion metrics, J. Symplectic Geom. 10 (2012), 203-223.
T. Fei, A construction of non-Kähler Calabi-Yau manifolds and new solutions to the Strominger system, Adv. Math. 302 (2016), 529-550.
A. Fino, G. Grantcharov, M. Verbitsky, Special Hermitian structures on suspensions, arXiv:2208.12168.
A. Fino, H. Kasuya, Tamed symplectic structures on compact solvmanifolds of completely solvable type, Ann. Sc. Norm. Super. Pisa Cl. Sci. 16 (2016), no. 3, 971-980.
A. Fino, H. Kasuya, L. Vezzoni, SKT and tamed symplectic structures on solvmanifolds, Tohoku Math. J. 67 (2015), 19-37.
A. Fino, A. Otal, L. Ugarte, Six-dimensional solvmanifolds with holomorphically trivial canonical bundle, Int. Math. Res. Not. 2015 (2015), no. 24, 13757-13799.
A. Fino, F. Paradiso, Generalized Kähler almost abelian Lie groups, Ann. Mat. Pura Appl. 200 (2020), no. 4, 1781-1812.
A. Fino, F. Paradiso, Hermitian structures on a class of almost nilpotent solvmanifolds, J. Algebra 609 (2022), 861-925.
A. Fino, F. Paradiso, Balanced Hermitian structures on almost abelian Lie algebras, J. Pure Appl. Algebra 227 (2023), no. 2, 107186.
A. Fino, M. Parton, S. Salamon, Families of strong KT structures in six dimensions, Comment. Math. Helv. 79 (2004), no. 2, 317-340.
A. Fino, L. Vezzoni, Special Hermitian metrics on compact solvmanifolds, J. Geom. Phys. 91 (2015), 40-53.
A. Fino, L. Vezzoni, On the existence of balanced and SKT metrics on nilmanifolds, Proc. Amer. Math. Soc. 144 (2016), no. 6, 2455-2459.
M. Freibert, A. Swann, Two-step solvable SKT shears, Math. Z. 299 (2021), 1703-1739.
M. Freibert, A. Swann, Compatibility of balanced and SKT metrics on two-step solvable Lie groups, Transform. Groups (2023), doi:10.1007/s00031-023-09796-2.
J. Fu, J. Li, S.-T. Yau, Balanced metrics on non-Kähler Calabi-Yau threefolds, J. Diff. Geom. 90 (2012), no. 1, 81-130.
H. Garland, On the cohomology of lattices in solvable Lie groups, Ann. of Math. 84 (1966), 174-195.
J. Lauret, C. Will, On the symplectic curvature flow for locally homogeneous manifolds, J. Sympl. Geom. 15 (2017), no. 1, 1-49.
M. Macrì, Cohomological properties of unimodular six dimensional solvable Lie algebras, Diff. Geom. Appl. 31 (2013), no. 1, 112-129.
G. M. Mubarakzyanov, On solvable Lie algebras (Russian), Izv. Vyssh. Uchebn. Zaved. Mat. 1963 (1963), no. 1, 114-123.
G. M. Mubarakzyanov, Classification of real structures of Lie algebras of fifth order (Russian), Izv. Vyssh. Uchebn. Zaved. Mat. 1963 (1963), no. 3, 99-106.
G. M. Mubarakzyanov, Classification of solvable Lie algebras of sixth order with a non-nilpotent basis element (Russian), Izv. Vyssh. Uchebn. Zaved. Mat. 1963 (1963), no. 4, 104-116.
L. Ornea, A. Otiman, M. Stanciu, Compatibility between non-Kähler structures on complex (nil)manifolds, Transform. Groups (2022), doi:10.1007/s00031-022-09729-5.
A. Otiman, Special Hermitian metrics on Oeljeklaus-Toma manifolds, Bull. London Math. Soc. 54 (2022), no. 2, 655-667.
F. Paradiso, Locally conformally balanced metrics on almost abelian Lie algebras, Complex Manifolds 8 (2021), 196-207.
M. Pujia, The Hull-Strominger system and the Anomaly flow on a class of solvmanifolds, J. Geom. Phys. 170 (2021), 104352.
S. Salamon, Complex structures on nilpotent Lie algebras, J. Pure Appl. Algebra 157 (2001), no. 2-3, 311-333.
H. Sawai, Locally conformal Kähler structures on compact nilmanifolds with left-invariant complex structures, Geom. Dedicata 125 (2007), 93-101.
A. Shabanskaya, Classification of six dimensional solvable indecomposable Lie algebras with a codimension one nilradical over R, PhD thesis, University of Toledo, 2011.
L. Šnobl, P. Winternitz, Classification and Identification of Lie Algebras, CRM Monograph Series 33, American Mathematical Society, 2014.
L. Ugarte, Hermitian structures on six-dimensional nilmanifolds, Transform. Groups 12 (2007), no. 1, 175-202.
M. Verbitsky, Rational curves and special metrics on twistor spaces, Geom. Topol. 18 (2014), no. 2, 897-909.
J. Xiao, On strongly Gauduchon metrics of compact complex manifolds, J. Geom. Anal. 25 (2015), 2011-2027.
| [] |
[
"Stragglers of the thick disc",
"Stragglers of the thick disc"
] | [
"V Cerqui \nGEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance\n",
"M Haywood \nGEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance\n",
"P Di Matteo \nGEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance\n",
"D Katz \nGEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance\n",
"F Royer \nGEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance\n"
] | [
"GEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance",
"GEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance",
"GEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance",
"GEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance",
"GEPI\nObservatoire de Paris\nPSL Research University\nCNRS\nPlace Jules Janssen92195MeudonFrance"
] | [] | Young alpha-rich (YAR) stars have been detected in the past as outliers to the local age − [α/Fe] relation. These objects are enhanced in α-elements but apparently younger than typical thick disc stars. We study the global kinematics and chemical properties of YAR giant stars in APOGEE DR17 survey and show that they have properties similar to those of the standard thick disc stellar population. This leads us to conclude that YAR are rejuvenated thick disc objects, most probably evolved blue stragglers. This is confirmed by their position in the Hertzsprung-Russel diagram (HRD). Extending our selection to dwarfs allows us to obtain the first general straggler distribution in an HRD of field stars. We also compare the elemental abundances of our sample with those of standard thick disc stars, and find that our YAR stars are shifted in oxygen, magnesium, sodium, and the slow neutron-capture element cerium. Although we detect no sign of binarity for most objects, the enhancement in cerium may be the signature of a mass transfer from an asymptotic giant branch companion. The most massive YAR stars suggest that mass transfer from an evolved star may not be the only formation pathway, and that other scenarios, such as collision or coalescence should be considered. | null | [
"https://export.arxiv.org/pdf/2306.03126v1.pdf"
] | 259,089,199 | 2306.03126 | 337889a94125f1f3f045736fc44c0b309306c93f |
Stragglers of the thick disc
June 7, 2023
V Cerqui
GEPI
Observatoire de Paris
PSL Research University
CNRS
Place Jules Janssen92195MeudonFrance
M Haywood
GEPI
Observatoire de Paris
PSL Research University
CNRS
Place Jules Janssen92195MeudonFrance
P Di Matteo
GEPI
Observatoire de Paris
PSL Research University
CNRS
Place Jules Janssen92195MeudonFrance
D Katz
GEPI
Observatoire de Paris
PSL Research University
CNRS
Place Jules Janssen92195MeudonFrance
F Royer
GEPI
Observatoire de Paris
PSL Research University
CNRS
Place Jules Janssen92195MeudonFrance
Stragglers of the thick disc
June 7, 2023
Astronomy & Astrophysics manuscript no. YAR
Key words: stars: abundances - stars: kinematics and dynamics - Galaxy: solar neighbourhood - Galaxy: disk - Galaxy: evolution
Young alpha-rich (YAR) stars have been detected in the past as outliers to the local age − [α/Fe] relation. These objects are enhanced in α-elements but apparently younger than typical thick disc stars. We study the global kinematics and chemical properties of YAR giant stars in the APOGEE DR17 survey and show that they have properties similar to those of the standard thick disc stellar population. This leads us to conclude that YAR stars are rejuvenated thick disc objects, most probably evolved blue stragglers. This is confirmed by their position in the Hertzsprung-Russell diagram (HRD). Extending our selection to dwarfs allows us to obtain the first general straggler distribution in an HRD of field stars. We also compare the elemental abundances of our sample with those of standard thick disc stars, and find that our YAR stars are shifted in oxygen, magnesium, sodium, and the slow neutron-capture element cerium. Although we detect no sign of binarity for most objects, the enhancement in cerium may be the signature of a mass transfer from an asymptotic giant branch companion. The most massive YAR stars suggest that mass transfer from an evolved star may not be the only formation pathway, and that other scenarios, such as collision or coalescence, should be considered.
Introduction
The majority of disc stars present a trend in the sense that older stars have higher values of [α/Fe]. This can be explained in the context of Galactic chemical evolution: stars born at early times were formed from a gas enriched in α-elements produced by Type II supernovae (SNe), which dominated the chemical enrichment during the initial epochs. As time passed, the production of Fe by Type Ia SNe lowered the level of [α/Fe] (Tinsley 1979; Matteucci & Greggio 1986). The presence of two different stellar populations in the Galactic disc has been confirmed through the presence of two well defined and separated sequences in the [α/Fe] − [Fe/H] plane (e.g. Haywood et al. 2013; Recio-Blanco et al. 2014; Hayden et al. 2015), where the so-called high-α stars are older than the low-α ones (e.g. Haywood et al. 2013; Bensby et al. 2014). Young α-rich (YAR) stars, which have been known since at least Fuhrmann & Bernkopf (1999), who noted the abnormally young age of HR 4657 given its [α/Fe] abundance ratio (see also Fuhrmann et al. 2011, 2012), stand out as outliers to these general trends. For instance, some YAR objects were noted in Haywood et al. (2013), and discussed further in Haywood et al. (2015), as outliers of the observed [α/Fe] − age correlation of solar neighbourhood stars. Additional works made use of new asteroseismic measurements and spectroscopic observations and identified YAR objects as being part of a population of stars not explained by standard chemical evolution models. They are found to be α-enhanced (typically [α/Fe] > 0.1 dex) but younger (age < 6.0 Gyr) than the typical high-α old thick disc stars, with a typical mass of 1.5 M⊙ (e.g. Chiappini et al. 2015; Martig et al. 2015; Jofré et al. 2016; Matsuno et al. 2018; Silva Aguirre et al. 2018; Sun et al. 2020; Zhang et al. 2021; Jofre et al. 2022). The fraction of YAR stars is estimated to be ∼ 6% of the α-rich population (Martig et al. 2015).
Send offprint requests to: V. Cerqui, e-mail: [email protected]
Until now, two explanations for the origin of YAR stars have been identified. One possibility relies on star formation events in the region of the bar corotation (Chiappini et al. 2015). In this view, the gas in this region would be kept isolated for a long time, so that YAR stars formed from this cloud may show the same chemical enrichment as thick disc stars but younger ages. In support of this thesis, Chiappini et al. (2015) found the YAR stars of their sample to be located in the inner region of the Galaxy and to have kinematics dissimilar from other α-rich stars. The second explanation follows the evolved blue straggler scenario. Blue straggler stars (BSSs) are stars believed to be the result of either a stellar merger (Hills & Day 1976; Momany et al. 2007) or mass transfer in binary systems (McCrea 1964; Paczyński 1971; Webbink 1985). In both cases, they have experienced an episode of mass acquisition, and for this reason they can be identified as stars bluer and brighter than turn-off stars in clusters. The measured masses of these stars do not reflect their ages: the age expected from their high mass is lower than their true age. From this point of view, YAR stars are then thought to be straggler stars of the thick disc, i.e. rejuvenated thick disc stars (e.g. Martig et al. 2015; Yong et al. 2016; Izzard et al. 2018). A plausible approach to probe this scenario is to search for evidence of mass transfer due to binary evolution in the identified YAR sample of stars. Jofré et al. (2016) and the follow-up work (Jofre et al. 2022), for instance, carried out a radial velocity monitoring campaign using the HERMES spectrograph to evaluate whether YAR stars were in binary systems. In particular, in Jofre et al. (2022) the authors conclude that YAR stars are very likely products of mass transfer, and thus not effectively young. Moreover, if YAR objects are rejuvenated thick disc stars, their kinematics and spatial distribution in the Galaxy should be the same as those of this population. This has been verified by recent works such as Sun et al. (2020) and Zhang et al. (2021), who exploited the LAMOST dataset to compare the characteristics of the thick and thin disc samples with the identified YAR population.
In this paper we aim at providing stronger constraints on the thick disc straggler scenario, examining and comparing the global properties of the YAR objects found in the APOGEE Data Release 17 (DR17) (Abdurro'uf et al. 2022). To achieve this goal, we utilise the age estimates from the value added catalogue apogee_astroNN − DR17 (Leung & Bovy 2019) 1, hereafter the astroNN catalogue. These age estimates are derived from the [C/N] ratio-mass relationship, making them a useful tool for examining objects that are believed to have formed through an increase of the parent star's mass, such as YAR stars. It is worth noting that the astroNN catalogue does not provide direct stellar mass estimates, but these are reflected in the age estimates. In the following section we select our YAR candidates; Section 3 presents the general properties of our sample, including the Hertzsprung-Russell diagram (HRD), the metallicity and α abundances and the kinematic properties of our YAR stars; in Section 4 we look for possible accreted YAR candidates, while in Section 5 we discuss the individual chemical patterns of our stars; in Section 6 we extend our selection to all gravities; we discuss our results in Section 7 and conclude in Section 8.
Data
We use the APOGEE atmospheric parameters and stellar abundances from data release seventeen (allStarLite − dr17 − synspec_rev1). When using abundances, we select 364,605 stars flagged as ASPCAPFLAG bit 23 == 0 (no overall BAD flag), EXTRATARG == 0 (main survey stars) and SNR > 50. The resulting selection contains stars with widely different atmospheric parameters, as illustrated by the distribution of log g for the whole sample (see Fig. 1). We divide the sample into 3 groups according to log g: red giants at log g < 2.2, clump giants at 2.2 < log g < 2.7, and dwarfs at 3.5 < log g < 4.45 (top panel of Fig. 1). As illustrated in the bottom panel of Fig. 1, the stars in these 3 groups sample different ranges of mean radius R mean, defined as the mean of the Galactic pericentre distance and the Galactic apocentre distance. They also have chemical patterns that differ significantly (Fig. 2), for reasons that are possibly related to the intrinsic age and mass distributions of each of these types of stars, but also to the spectroscopic analysis of different types of stars.
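As an illustration, a minimal astropy sketch of these quality cuts is given below; the file name and bit conventions should be checked against the DR17 documentation, and the sketch is not the actual pipeline used here.

# Minimal sketch of the Section 2 quality cuts on the DR17 allStarLite file.
import numpy as np
from astropy.io import fits

with fits.open("allStarLite-dr17-synspec_rev1.fits") as hdul:
    cat = hdul[1].data

good = (
    ((cat["ASPCAPFLAG"] & (1 << 23)) == 0)  # bit 23 not set: no overall BAD flag
    & (cat["EXTRATARG"] == 0)               # main survey stars only
    & (cat["SNR"] > 50)
)
stars = cat[good]

logg = stars["LOGG"]
red_giants = stars[logg < 2.2]
red_clump = stars[(logg > 2.2) & (logg < 2.7)]
dwarfs = stars[(logg > 3.5) & (logg < 4.45)]
print(len(stars), len(red_giants), len(red_clump), len(dwarfs))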
Age estimates
We use the age and orbital parameter estimates provided by the astroNN catalogue. Stellar ages in astroNN are estimated as explained in Mackereth et al. (2019), using a Bayesian neural network trained on asteroseismic ages. The age estimates are based on the [C/N] abundance ratio. This ratio is modified during the first dredge-up, when the star starts the red giant branch (RGB) phase. The intensity of the mixing (and hence the amplitude of the change of the [C/N] ratio) is linked to the depth of the first dredge-up, which is itself linked to the mass of the star. For this correlation to appear, it is necessary that the star passes the first dredge-up, after leaving the subgiant phase; it is therefore not expected to apply to dwarfs. The [C/N] abundance ratio has been utilised in numerous works in the literature for dating giant stars, the first being Masseron & Gilmore (2015). The median uncertainty on ages reported in Mackereth et al. (2019) from applying their procedure to DR14 is 30%, while these authors also caution that ages above 10 Gyr are probably underestimated, by as much as 3.5 Gyr.
1 The astroNN python package is available at https://github.com/henrysky/astroNN.
Figure 3 shows the age − [α/Fe] distributions for the three types of stars, selected to be within 8 kpc from the Galactic centre. This figure illustrates that the age scale for the different types of stars is different, with the break between thin and thick disc stars occurring at 6, 8 and around 9-10 Gyr respectively for red clump stars, giants and dwarfs, suggesting ∼ 2 Gyr offsets between the different age scales. The top plot shows that α-rich dwarf stars are correctly detected as being old objects, because the neural network model probably learned that α-rich stars are, in general, old stars. The age − [α/Fe] relation for clump stars (bottom plot) shows two high-α sequences. This feature is caused by the possible contamination of the red clump selected sample by red giant stars. Because of the difference in the age scales for red giants and red clump stars, and the fact that the former cover a wider range in mean radii, we focus on red giants. We will comment on clump and dwarf YAR stars in Section 6.
For binaries that are stragglers, it is expected that the change in the [C/N] ratio will depend on when the mass transfer or merger occurs. If the straggler forms before the start of the ascent of the red giant branch, then the [C/N] ratio is expected to decrease as in a normal massive star. If it forms later, [C/N] may not be affected by the formation of the straggler, and not correlate with the mass of the straggler. This implies that thick disc stars detected as apparently young (and truly massive) on the basis of their [C/N] ratio will represent only a fraction of stars of the thick disc that have acquired mass due to the straggler mechanism. An unknown fraction of stars will remain undetected as stragglers because their [C/N] ratio will remain unaffected by the mass transfer or merger of the system. In Section 3.1, we show that the objects selected as YAR on the basis of the age estimates based on the [C/N] ratio are in effect more massive than standard thick disc stars.
The young α-rich (YAR) sample
We define our sample of YAR by selecting bright red giants (log g < 2.2) with:
- an age younger than 4 Gyr;
- an uncertainty on the age smaller than 3 Gyr;
- an [α/Fe] ratio above the line of Fig. 2 which divides the high-α thick disc sequence from the low-α thin disc sequence.
Table 1 notes: the second column represents the fraction of stars within 6 kpc from the Galactic centre over the stars in the range 6-9 kpc. The third column shows the analogous quantity for the 6-9 kpc and 9-20 kpc ranges. The fraction can be minimised or maximised (min and max) taking into account the Poisson uncertainties of N_0−6, N_6−9 and N_9−20.
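A minimal sketch of the selection above is given below, assuming the red-giant table has been joined row-by-row with the astroNN catalogue; the astroNN column names and the flat 0.15 dex stand-in for the dividing line of Fig. 2 are assumptions made purely for illustration.

# Sketch of the YAR selection; 'red_giants' is the red-giant sample
# joined with the astroNN catalogue (column names are assumptions).
import numpy as np

def above_dividing_line(alpha_fe, fe_h):
    # Placeholder for the [alpha/Fe]-[Fe/H] line of Fig. 2; a flat cut
    # at 0.15 dex is used here only as an illustration.
    return alpha_fe > 0.15

is_yar = (
    (red_giants["age"] < 4.0)                # Gyr
    & (red_giants["age_total_error"] < 3.0)  # Gyr
    & above_dividing_line(red_giants["ALPHA_M"], red_giants["FE_H"])
)
yar = red_giants[is_yar]
print(f"{len(yar)} YAR candidates")  # 249 in the selection of this paper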
We select in this way 249 stars. Figure 4 shows our sample divided into three different intervals of mean radius from the Galactic centre: 0-6, 6-9 and 9-20 kpc. The number of YAR stars increases sharply from R mean > 9 to R mean < 6 kpc, as is expected if these objects are dominated by thick disc stars. The precise fractional numbers of stars of both the thick disc and YAR samples for the different ranges of radii are listed in Table 1. The choice of the limit on the age error impacts the number of low metallicity stars. The selection of stars with errors on age less than 3 Gyr removes essentially all stars below a metallicity of -0.9 dex, except for the youngest objects with ages less than 4 Gyr, visible in the age - metallicity distribution plots. The limits at age less than 4 Gyr and age error less than 3 Gyr result from a compromise between stars which can reasonably be considered as bona fide young alpha-rich objects and the number of objects. The mean error on age of the YAR sample is 2.08 Gyr. We made use of the Gaia DR3 data (Gaia Collaboration et al. 2022b) in order to investigate the binarity of our YAR sample. In particular, we explored the renormalised unit weight error (RUWE) included in the gaia_source table. Indeed, this parameter gives an indication of the quality of the astrometric solution of a source, and can therefore give a good hint of a star being in an unresolved multiple system (this affecting the goodness and the uncertainty of the astrometric solution). Only 9 stars out of the 249 (about 4%) in our sample exhibit ruwe > 1.4. This is the typical value above which the parallax measurement of a source can be considered less reliable (Lindegren et al. 2021). For this reason we can expect sources characterised by ruwe > 1.4 to be likely multiple systems. In addition, just two YAR stars in the sample are catalogued as binaries (specifically, a single-lined spectroscopic binary and a combined astrometric + single-lined spectroscopic orbital model binary) by the nss_two_body_orbit catalogue of the non-single stars (NSS) tables of Gaia DR3 (Gaia Collaboration et al. 2022a). A possible reason for such a low percentage of multiple systems among our YAR sample is the possibly small orbital separation of the detected systems, which should not strongly impact the astrometric solutions (as suggested in Kounkel et al. 2021). In addition, as we will discuss in Section 7, straggler stars can be the product of a merging or coalescence phenomenon, and the YAR object formed in this case is a single star.
The source table of Gaia DR3 also provides two variability indicators based on radial velocity time series: the radial velocity renormalised goodness of fit (rv_renormalised_gof) and the P-value for constancy based on a chi-squared criterion (rv_chisq_pvalue). They are both limited to stars brighter than G RVS = 12. The selection that we adopted (suggested in the Gaia DR3 documentation) to find potential radial velocity variable stars also invokes the total number of epochs used to obtain the radial velocity (rv_nb_transits) and is:
rv_renormalised_gof > 4 AND rv_chisq_pvalue < 0.01 AND rv_nb_transits ≥ 10.
12 YAR stars out of 249 (about 5%) are classified as variable with this method, 4 of them having ruwe > 1.4.
For the most complete binarity analysis possible, we lastly explored the radial velocity scatter (the parameter vscatter) from APOGEE DR17. We catalogued stars as likely binaries when vscatter > 1 km/s, as suggested in the APOGEE DR17 documentation. In fact, the histogram of the radial velocity scatter (for the complete sample of APOGEE DR17) peaks at around 70 m/s and presents a long tail at higher scatter values, probably due to variability from stellar binaries; 1 km/s hence represents a good threshold compared to the peak of the distribution. 21 YAR stars (approximately 8%) are the outcome of this selection: 3 of them are classified as variable stars through the Gaia DR3 radial velocity and 1 of them has ruwe > 1.4. In total, from these investigations 35 of the 249 YAR stars show at least one sign of binarity among RUWE, Gaia DR3 NSS solution, Gaia DR3 radial velocity time series and APOGEE DR17 radial velocity scatter, and only 1 star shows signs of binarity in all diagnostics.
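The four diagnostics can be combined into per-star flags along the following lines; the sketch assumes the YAR table has been cross-matched with Gaia DR3, and the boolean column in_nss is a hypothetical product of that cross-match.

# Sketch: combining the binarity indicators discussed above.
flag_ruwe = yar["ruwe"] > 1.4
flag_rv_var = (
    (yar["rv_renormalised_gof"] > 4)
    & (yar["rv_chisq_pvalue"] < 0.01)
    & (yar["rv_nb_transits"] >= 10)
)
flag_vscatter = yar["VSCATTER"] > 1.0  # km/s, APOGEE DR17
flag_nss = yar["in_nss"]               # hypothetical cross-match flag

any_indicator = flag_ruwe | flag_rv_var | flag_vscatter | flag_nss
all_indicators = flag_ruwe & flag_rv_var & flag_vscatter & flag_nss
print(any_indicator.sum(), "stars with at least one indicator")  # 35 here
print(all_indicators.sum(), "stars flagged by all diagnostics")  # 1 here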
We select a sample of thick disc stars as a reference population, defined here as red giants lying above the [α/Fe] line of Fig. 2 which separates the two disc sequences, over the same metallicity interval as the YAR sample; this reference sample contains 24,357 stars.
General properties
We now study the HRD, metallicity, α-abundance and kinematics of our sample and compare them with those of the thick disc.

HRD

Fig. 5 shows the (T eff, M K0) HRD of the 249 YAR stars of our sample, colour-coded by age (left panel) and metallicity (right panel). Background grey points are thick disc red giant stars over the same metallicity interval. M K0 magnitudes have been obtained using the AK_TARG estimate of extinction given in the APOGEE catalogue to correct the K-band magnitudes, and the distance provided by the astroNN catalogue. YAR stars are located to the left of the background distribution, as expected if they represent the evolution of more massive stars. We note that, as expected (left panel, Fig. 5), within the population of YAR stars the most massive objects (as measured by their young age) are hotter than the less massive ones. The youngest (most massive) and most metal-poor objects occupy the upper left part of the sequence, as expected. Inversely, the oldest (age greater than 3 Gyr) and most metal-rich stars dominate the bottom right part of the sequence.
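For reference, the absolute magnitude used here follows the standard distance modulus with the catalogued extinction; a minimal sketch (with assumed column names) is:

# Sketch: extinction-corrected absolute K magnitude, M_K0.
import numpy as np

def abs_mag_k0(k_mag, a_k, dist_pc):
    """M_K0 = K - A_K - 5 log10(d / 10 pc)."""
    return k_mag - a_k - 5.0 * np.log10(dist_pc / 10.0)

# 'K' and 'AK_TARG' are APOGEE columns; 'weighted_dist' (in pc) is
# assumed here to be the astroNN distance column.
m_k0 = abs_mag_k0(yar["K"], yar["AK_TARG"], yar["weighted_dist"])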
In Fig. 6 we show the analogous HRD of Fig. 5, restricting the metallicity interval to −1 < [Fe/H] < −0.8 dex to focus primarily on the effect of age (or mass). The figure illustrates that YAR stars are well detached from the background population at the same metallicity, due to the increase in mass producing apparently younger objects. This becomes clear looking at the isochrones in the left panel of the figure. The blue and fuchsia lines represent respectively old (age = 12 Gyr, approximately the age of the thick disc) and young (age = 2 Gyr) MIST (MESA Isochrones and Stellar Tracks) 2 metal-poor ([Fe/H] = −0.9 dex) isochrones. The position of the two sequences reflects their age difference and coincides with the YAR and the reference samples.
To confirm these results, we searched for mass estimates of our stars. These are not provided in the astroNN catalogue, and come from the independent StarHorse 3 value added catalogue. We caution that the YAR mass estimates are made as if these stars were normal stars, and only reflect their position in the HRD. If these objects are formed through mass transfer or collision, it is possible that their mass is related a bit differently to effective temperature and luminosity compared to normal stars. We expect however this difference to be small (Glebbeek & Pols 2008; Sills 2015) and view these mass estimates as useful indicators of the true mass of these objects. In addition, we validate the StarHorse mass determinations by comparing them with asteroseismic masses. Fig. 7 shows the APOKASC-3 asteroseismic mass determinations taken from Jofre et al. (2022) versus the StarHorse estimates. The figure shows that the two datasets are consistent, with only 5 stars being severe outliers of the 1:1 relation, highlighted by the red line in the plot. Fig. 8 shows the mass distribution of the YAR stars and the thick disc sample. Mass estimates are available for 220 of our YAR stars, that is, for around 88% of the total YAR sample. The thick disc reference sample has a mass distribution peaked at 0.9 − 0.95 M⊙, while YAR stars sample a wider range of masses, with a significant number of stars with masses up to about 1 M⊙ higher than thick disc stars. This result is in agreement with Zhang et al. (2021) and, together with the location of YAR stars in the HRD, confirms that many of the stars in our sample of YAR stars are more massive than standard thick disc objects.
2 https://waps.cfa.harvard.edu/MIST/
3 https://data.sdss.org/sas/dr17/env/APOGEE_STARHORSE/
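One simple way to carry out this validation, sketched below with assumed column names for a matched StarHorse/asteroseismic table, is to look at the relative deviation from the 1:1 relation.

# Sketch: StarHorse vs asteroseismic masses for the matched stars.
import numpy as np

m_starhorse = matched["mass50"]      # StarHorse median mass (assumed name)
m_seismo = matched["mass_apokasc3"]  # asteroseismic mass (assumed name)

rel_dev = (m_starhorse - m_seismo) / m_seismo
outliers = np.abs(rel_dev) > 0.3     # arbitrary 30% threshold for this sketch
print(f"median deviation {np.median(rel_dev):+.1%}, {outliers.sum()} outliers")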
Metallicity, C-, N-and α-abundance
We now study the metallicity, C-, N- and α-abundances of our YAR objects, and compare them with our thick disc reference sample, defined in Subsection 2.2.
In Fig. 9 we display the carbon and nitrogen distributions as a function of the iron content for YAR stars and the thick disc sample. The plots in the first row show a noticeable stratification in age across the thick disc sample, with increasing age as [C/Fe] decreases and [N/Fe] increases. This trend is observed for YAR objects as well, as emphasised in the bottom panels of Fig. 9, the YAR stars being located in the region of the plane where we expect younger (more massive) stars. However, we notice two groups of outliers: YAR stars at [C/Fe] > 0.3 (left panels of Fig. 9) and YAR stars at [N/Fe] > 0.75 (right panels of Fig. 9). The latter group seems to be separated from the rest of the YAR in the [N/Fe] − [Fe/H] plane, outlining a possible bi-modality in the nitrogen distribution. Moreover, these N-enhanced objects correspond to the youngest stars in the HRD of Fig. 5 (left panel, dark blue colour). They occupy the leftmost position within the YAR sequence and do not overlap with the thick disc reference sample.
In the top left panel of Fig. 10 we present the normalised histograms in metallicity and [α/Fe] for the YAR sample and the thick disc population in our dataset at R mean < 20 kpc. The two samples present some slight differences in the metal-poor tail of the distribution (−1 < [Fe/H] < −0.9 dex), and at higher metallicities (−0.4 < [Fe/H] < −0.3 dex), where the YAR distribution decreases sharply with respect to the thick disc. Despite these dissimilarities, the YAR Metallicity Distribution Function (MDF) globally resembles the thick disc MDF, having similar characteristics, with both peaking at [Fe/H] ∼ −0.5 dex. This result is confirmed by the findings of Sun et al. (2020) and Zhang et al. (2021). Similarly, the plot on the top right of Fig. 10 highlights how the density histogram distribution in α-elements of the YAR sample clearly follows the thick disc stars distribution. Indeed, they present similar shapes and matching peaks ([α/Fe] ≈ 0.28 dex). However, we notice a small shift (possibly of about a hundredth of a dex) between the two. This shift is confirmed when looking at the distributions in the [α/Fe] − [Fe/H] plane. The bottom left plot of Fig. 10 shows the stellar density iso-contours of the entire thick disc sample together with the scatter distribution of the 249 YAR stars within 20 kpc. The yellow squares in the figure highlight the YAR stars showing at least one indicator of binarity, while the magenta square shows the only object that fulfils all indicators of binarity (see Section 2.2). The points follow globally the thick disc distribution (including the stars targeted as binaries) but are shifted to lower [α/Fe] at any given metallicity. For comparison, the bottom right plot of Fig. 10 presents the analogous plot, this time showing the distribution of a sub-sample of thick disc stars randomly selected to contain the same number of stars as the YAR sample. The white points perfectly follow the thick disc contours. We come back to this difference in the section dedicated to detailed abundance ratios (Section 5).
Fig. 9: Carbon and nitrogen distributions as a function of metallicity for thick disc and YAR stars. Top plots: thick disc stars are colour-coded according to their age, while YAR stars are shown as black empty circles. Bottom plots: YAR stars are colour-coded according to their ages, while the thick disc sample is shown as grey points.
Kinematics
We now investigate the kinematics of the YAR stars in our dataset, exploiting the kinematic and orbital parameters given in the astroNN catalogue. Fig. 11 summarises these properties of our YAR sample compared to stars of the thick disc, which is represented by iso-density contours in the figure. YAR stars present a large spread in the Toomre diagram (top panel), resembling the characteristics of thick disc stars. As shown by the plot, these objects indeed span a large range of V r and V z velocities and tend to rotate more slowly than the younger disc. The middle panel of Fig. 11 represents another useful kinematic diagnostic: the eccentricity − angular momentum L z plane. Thick disc stars tend to distribute along the entire range of eccentricity and the majority of them present low values of L z. YAR stars follow this distribution perfectly. The tail at high eccentricities is due to stars that are trapped in the bar. Finally, in the bottom panel of Fig. 11 we show the Zmax (maximum height reached from the Galactic plane) − E (energy) distribution. Once again, the kinematic properties of YAR stars match those of the thick disc. In conclusion, the overall kinematic properties of YAR stars are consistent with the thick disc. This result is in agreement with the findings of previous works (Zhang et al. 2021; Jofre et al. 2022).
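Orbital quantities of this kind can be recomputed from the 6D phase-space information with a standard orbit-integration package; the sketch below uses galpy with MWPotential2014 and a made-up example star (all numerical inputs are hypothetical).

# Sketch: eccentricity, L_z, z_max, energy and R_mean with galpy.
import numpy as np
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pmRA, pmDec (mas/yr), v_los (km/s)]
o = Orbit([229.0, -1.2, 2.3, -5.1, -7.4, 105.0], radec=True, ro=8.0, vo=220.0)
ts = np.linspace(0.0, 100.0, 10001)  # integration times (natural units)
o.integrate(ts, MWPotential2014)

ecc = o.e()                           # eccentricity
lz = o.Lz()                           # angular momentum about z
zmax = o.zmax()                       # maximum height above the plane
energy = o.E()                        # orbital energy
r_mean = 0.5 * (o.rperi() + o.rap())  # mean radius as defined in Section 2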
Accreted stars
As introduced in Section 1, BSSs are the product of a mass acquisition event, either via mass transfer or via merger. Considering that these phenomena are known to occur in other galaxies (e.g. Momany 2015), we now look for straggler candidates among the accreted stars in APOGEE DR17. The selection of a halo-like population that we now describe is done being aware of the complexity of defining a clean and uncontaminated sample of likely accreted stars. We chose red giants with eccentricity greater than 0.7, orbital apocentre (R apo) greater than 10 kpc, gravitationally bound to the Galaxy (E orb < 0 km²/s²) and with errors on distances less than 1.5 kpc, similarly to what has been done in e.g. Myeong et al. (2022). In order to focus on the most likely accreted objects, we additionally removed from the defined halo sample the heated thick disc population (the metal-rich halo-like component interpreted as the in situ constituent of the halo).
Fig. 10: Both distributions contain the same number of stars (as written in the relative legends). The potential YAR binaries are denoted by squares: yellow squares indicate stars that exhibit at least one indicator of being a binary, while the magenta square represents the single star that fulfils all the examined indicators (see Subsection 2.2). In both plots, the coloured lines represent stellar density iso-contours of the entire thick disc sample: 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10% and 5% of the peak density.
same metallicity interval. Most accreted YAR stars have a galactic rotational velocity below 50 km s−1, while most thick disc stars are above this limit and in agreement with the main thick disc population. Panel d of Fig. 12 shows the mass distribution of the young accreted sample compared to the YAR stars in the same range of metallicity ([Fe/H] < −0.8 dex). The mass determination is available for 19 and 35 stars, respectively. The distributions are not normalised and the y-axis represents the actual number of stars in each bin of mass. The mass distributions of the two populations slightly differ from each other. This is contrary to expectations, given the position of the young accreted stars on the HRD. This sample has indeed masses which peak at around 0.9 M⊙. The distribution does not extend to the higher end of the mass range, except for 2 single stars (around 1.3 M⊙ and 2.0 M⊙). It is possible that our sample is not extensive enough and that we do not have the statistics necessary to study this population of stars, especially its mass distribution. In addition, it is plausible that the [C/N] ratio exploited to determine the ages listed in the astroNN catalogue (following the method highlighted in Mackereth et al. 2019) could lose sensitivity at low metallicities, causing a less reliable mass (and consequently age) estimate.
To conclude, given the observational evidence obtained with the current dataset, we cannot claim with confidence that the accreted halo-like population contains rejuvenated objects.
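For completeness, the halo-like selection of this section can be written as a simple boolean mask; the orbital column names below are assumptions.

# Sketch of the halo-like (likely accreted) selection of Section 4.
is_halo_like = (
    (red_giants["ecc"] > 0.7)                    # eccentricity
    & (red_giants["rap"] > 10.0)                 # apocentre (kpc)
    & (red_giants["E"] < 0.0)                    # bound to the Galaxy
    & (red_giants["weighted_dist_error"] < 1500.0)  # < 1.5 kpc (pc)
)
halo_like = red_giants[is_halo_like]
# The metal-rich heated thick disc would then be removed from 'halo_like'
# before searching for accreted YAR candidates.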
Other abundance ratios
Chemical abundances can provide additional interesting clues on the possible formation pathways of YAR stars. Indeed, the formation pathways of stragglers lead to different chemical signatures, depending on how the mass was acquired (mass transfer or collision) and on the evolutionary phase of the donor star at the time of the straggler formation. Lombardi et al. (1995, 1996) and Sills et al. (2001) predict that the collision product will retain a chemical profile very similar to that of the parent stars, so little effect on the surface abundances is expected. On the contrary, if stragglers are formed by mass transfer from an evolved star to its companion, surface contamination by elements typical of the advanced stages of stellar evolution (s-process neutron-capture elements) is expected (e.g. Boffin & Jorissen 1988). Below we check whether the chemical abundances of our sample of YAR stars differ from those of the parent thick disc population.
Fig. 11: Dynamical properties of YAR stars (black dots) and thick disc sample (iso-density contours at 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10% and 1% of the peak density).
α-elements
In the APOGEE DR17 survey, the α-element abundance is defined as the combination of O, Mg, Si, S, Ca and Ti. We investigate which of these chemical elements drive the trend of the YAR distribution described in Section 3. Fig. 13 shows, indeed, the tendency of the YAR sample distributions to be shifted to lower values with respect to the thick disc sample distributions, particularly at [Fe/H] > −0.4 dex. This trend is confirmed by the measurements of O and Mg from the astroNN catalogue, which show an even larger discrepancy. Si and S, on the contrary, do not display discrepancies between the two populations, as shown in Fig. 14 (this is also confirmed by the astroNN abundances). From Fig. 15 it is noticeable that the Ca and Ti distributions exhibit a shift between the two samples, opposite to the trend for oxygen and magnesium. However, these differences are not confirmed by the astroNN abundances. In particular, the distributions of YAR and thick disc stars in Ca obtained from astroNN show no difference, while the Ti distributions exhibit a trend opposite to the APOGEE Ti abundance determination (Ti_YAR < Ti_THICK DISC). The stars that were targeted as possible binaries, indicated by yellow and magenta squares in the corresponding figures, conform to the overall YAR trend for each α-element examined. Our results are partially confirmed by Jofre et al. (2022), who found that their sample of "over-massive" stars follows the trend in the individual α-capture elements (from APOGEE DR16) of the thick disc, while the "under-massive" stars show slightly lower abundances. An additional validation of our results comes from Zhang et al. (2021), whose Fig. 8 shows shifts in the α-element distributions comparable to what we find with APOGEE DR17. Only the titanium trend in Zhang et al. (2021) presents an inconsistency with APOGEE, displaying a shift more compatible with the astroNN trend. We remark, however, that titanium is noted as problematic for giant stars in the APOGEE DR17 documentation, its measurement deviating from literature expectations. For this reason, we tend to give more credence to the astroNN and Zhang et al. (2021) titanium measurements. We note, however, that even in these cases the difference in the distribution of the two populations is small (∼0.02 dex), and possibly not significant.
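One way to quantify such per-element shifts, sketched below, is a median offset together with a two-sample Kolmogorov-Smirnov test between the YAR and thick disc distributions (ASPCAP-style column names assumed; 'yar' and 'thick_disc' are the two samples defined earlier).

# Sketch: median shifts and KS tests for the individual alpha elements.
import numpy as np
from scipy.stats import ks_2samp

for el in ["O_FE", "MG_FE", "SI_FE", "S_FE", "CA_FE", "TI_FE"]:
    a = np.asarray(yar[el], dtype=float)
    b = np.asarray(thick_disc[el], dtype=float)
    a, b = a[np.isfinite(a)], b[np.isfinite(b)]
    shift = np.median(a) - np.median(b)
    _, pval = ks_2samp(a, b)
    print(f"[{el}] median shift {shift:+.3f} dex, KS p-value {pval:.2g}")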
Al & Na
248 and 246 stars out of 249 have estimates for Na and Al, respectively. Fig. 16 shows that the YAR stars are characterised by an Al content which is consistent with the thick disc. This is in agreement with the astroNN catalogue and with the findings of Zhang et al. (2021) and Jofre et al. (2022). The plots of Fig. 16 show a difference of about ∼ 0.1 dex in sodium abundance between the YAR stars and the thick disc sample. The Na content is shifted towards higher values, although this variation is not confirmed by the astroNN data. Zhang et al. (2021) discarded the Na abundances from the LAMOST spectra in their analysis because of the potential contamination of sodium features by the interstellar medium. On the other hand, the findings of Jofre et al. (2022) show a scattered distribution, with several YAR stars falling off the overall disc distribution. This is justified by the authors by noting that the Na abundance in APOGEE DR16 is not derived with particularly high precision, being obtained from two weak lines (Jönsson et al. 2020). The BAWLAS catalogue from Hayes et al. (2022) provides an additional term of comparison for problematic chemical elements for about 120,000 giant stars in APOGEE DR17. The check done with BAWLAS confirms the presence of a shift of about 0.15/0.20 dex in Na between the YAR and thick disc distributions, suggesting that the shift between YAR and the standard thick disc population is real. It has been suggested that mixing could modify the sodium abundance depending on the mass of the star (see Luck 1994; Smiljanic et al. 2009; Smiljanic 2012). We plot, in Fig. 17, the Na abundance as a function of stellar mass for the sample with available mass determination (see Section 3.1). The standard thick disc sample is further reduced by selecting stars with the best [Na/Fe] abundances (error on [Na/Fe] smaller than 0.06 dex) and presented with grey dots. YAR stars are plotted as orange dots, while those with an error smaller than 0.06 dex on sodium abundance are emphasised with a fuchsia square symbol. The larger spread in sodium abundance at masses less than 1.2 M⊙ is an effect of the larger uncertainties in this mass range. The figure shows that thick disc stars are mostly concentrated in mass below 1 M⊙, with a small group around 1.7 M⊙. These peculiar thick disc stars exhibit a lower sodium dispersion compared to the remaining thick disc stars at lower masses. By considering their position in the HRD, we observe that they are more luminous than the typical thick disc objects. However, the exact reason behind their increased mass remains uncertain. YAR stars are much more dispersed in terms of mass, with about 54% of the sample having a mass higher than 1.2 M⊙. The plot shows that the shift to higher [Na/Fe] values is mainly due to these stars: the mean sodium content above this limit is in fact [Na/Fe] = 0.21 dex, while it is [Na/Fe] = 0.06 dex below. The lack of YAR stars at [Na/Fe] < 0 dex and masses greater than 1.5 M⊙ suggests that the enhanced sodium abundance is linked to the mass of the stars. Stellar models do not uniformly predict an increase of sodium abundance with stellar mass in the mass range of our sample (1-2 M⊙). For example, models with rotation from Lagarde et al. (2012) show a steep increase between about 1 M⊙ and 2.4 M⊙, while models without rotation remain flat to about 2 M⊙.
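The mass dependence of the sodium shift can be checked directly, as in the sketch below (the StarHorse-style mass column name is an assumption).

# Sketch: mean [Na/Fe] of YAR stars above and below 1.2 Msun.
import numpy as np

na = np.asarray(yar["NA_FE"], dtype=float)
mass = np.asarray(yar["mass50"], dtype=float)  # assumed StarHorse column

above = na[(mass > 1.2) & np.isfinite(na)]
below = na[(mass <= 1.2) & np.isfinite(na)]
print(f"<[Na/Fe]> = {above.mean():.2f} dex above 1.2 Msun")  # ~0.21 here
print(f"<[Na/Fe]> = {below.mean():.2f} dex below 1.2 Msun")  # ~0.06 here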
Ce & V
In Fig. 18 we report our results on cerium and vanadium. We note that 211 stars out of 249 have listed estimates of Ce (while every star in the YAR sample has a V estimate). The distributions show that YAR objects are more Ce enhanced with respect to the thick disc stars. This is confirmed by the BAWLAS abundances, with a similar offset, although the absolute values are different. Ce is a neutron-capture element which forms during the asymptotic giant branch (AGB) evolutionary phase. The enhancement we see in Fig. 18 could then be the result of pollution by an AGB companion through super winds or Roche lobe overflow. Vanadium is enhanced by a slightly smaller amount in the YAR sample compared to the thick disc, confirmed by a similar result from the BAWLAS catalogue.
Fig. 18: Top panels: the thick disc sample is represented by the grey dashed histogram, while the distribution of YAR stars is represented by the black solid histogram. Bottom panels: the thick disc reference sample is highlighted with iso-density contours at 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10% and 5% of the peak density, while the YAR sample is represented by black dots. The potential YAR binaries are denoted by squares: yellow squares indicate stars that exhibit at least one indicator of being a binary, while the magenta square represents the single star that fulfils all the examined indicators (see Subsection 2.2).
Extending to higher gravities
In this Section we explore the possible YAR candidates over the entire range of surface gravities. We select candidates with log g < 3.5 (red clump and red giant stars) applying the selection criterion described in Section 2.2 (age less than 4 Gyr and age uncertainty less than 3 Gyr). As mentioned in Section 2.1, the age dating method used in astroNN should not be used with dwarfs, because the [C/N] abundance variation linked to the stellar mass occurs during the first dredge-up, after the main sequence phase. For this reason, we directly identify dwarf straggler stars (3.5 < log g < 4.45) from their position relative to the main sequence turn-off (TO) point in the HRD (blue straggler stars being located to the left of and above the main sequence TO). We additionally enlarge the original APOGEE DR17 sample from which the YAR are selected (see Section 2), including stars selected for telluric lines (EXTRATARG==5).
In order to obtain the most comprehensive sample possible, the selection of dwarf stragglers is subsequently made from this augmented sample. Figure 19 (top) shows the Gaia HRD of the sample of YAR stars selected within 2 kpc from the Sun, with points colour-coded by their age. The distances utilised to obtain the absolute magnitudes are either the distance estimates provided in the catalogue for stars with log g < 3.5, or the inverse of the Gaia parallax for stars with log g > 3.5, astroNN overestimating the distances of the nearest stars. We choose to represent these nearby stars through the Gaia colour-magnitude diagram rather than the (M K0, T eff) HRD because good estimates of the extinction are available in visible bands within 2 kpc from the Sun, while the AK_TARG in the K band given in APOGEE DR17 is overestimated. Because of the limit in distance, the number of giants is reduced, but the dwarf stragglers to the left of and above the thick disc TO (located at M G = 3.6 and (G BP − G RP) = 0.77) are clearly visible. They represent around 57% of the overall solar vicinity YAR sample and around 1% of the solar vicinity thick disc sample. Note that our selection of dwarf stragglers includes stars that have a measured age of over 4 Gyr, as shown in Fig. 19 through the colour coding. The bottom plot of Fig. 19 shows the mass histogram of stars for which a mass is available in the StarHorse catalogue, illustrating the clear difference between the standard thick disc and the YAR sample. Fig. 20 shows the (T eff, M K0) HRD for the YAR candidates over the entire range of log g and distances, separated into different metallicity intervals. As for the previous figure, we rely on astroNN distances, except for stars with 3.5 < log g < 4.45, for which we use the inverse of the Gaia parallaxes (given that astroNN distances overestimate the distances of the nearest stars). We also correct the magnitudes for extinction using the AK_TARG value given in the catalogue. We select the YAR stars, colour-coded by their age in Fig. 20, following the same method adopted to obtain the YAR solar vicinity sample of Fig. 19. In this case the main sequence TO utilised to distinguish dwarf stragglers is located at (M K0 = 2.2, T eff = 5800 K). The fraction of stragglers selected in each metallicity interval relative to the total number of stars up to 1 magnitude below the turn-off is 2.4%, 1.6%, 1.8%, 1.9% and 2.0%. Among these stars, the percentage of dwarfs (3.5 < log g < 4.45) is 15.9%, 14.9%, 15.0%, 13.8% and 13.1%. These numbers of course depend on the definition adopted for selecting YAR objects and should only be taken as indicative. This is lower than the number of BSSs found in the oldest open clusters, where this fraction is an increasing function of the cluster age, reaching values above 5% for clusters older than 1 Gyr (Rain et al. 2021), but this is unsurprising given our conservative limit on the upper stellar age (4 Gyr) of our YAR stars.
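The dwarf-straggler cut relative to the turn-off can be expressed as in the following sketch, where 'sample' is the thick disc sequence table and m_k0 its extinction-corrected absolute K magnitude (both assumed available from the earlier steps).

# Sketch: dwarf stragglers bluer and brighter than the thick disc
# turn-off at (M_K0 = 2.2, Teff = 5800 K).
is_dwarf = (sample["LOGG"] > 3.5) & (sample["LOGG"] < 4.45)
is_dwarf_straggler = is_dwarf & (sample["TEFF"] > 5800.0) & (m_k0 < 2.2)
dwarf_stragglers = sample[is_dwarf_straggler]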
Discussion
We show that YAR red giant stars identified as outliers to the age − [α/Fe] relation have chemical (in terms of [α/Fe] and [Fe/H] distributions) and kinematical characteristics similar to those of the thick disc population over the same metallicity interval. We show that these YAR stars are correctly identified as more massive stars in the HRD, being systematically hotter than standard thick disc stars, and that they overlap well with a 2 Gyr isochrone. On the basis of this observational evidence, the YAR stars in our dataset are most probably stars originating from the thick disc that became stragglers after being subjected to a mass increase by either mass transfer or collision/coalescence. Chemical abundances are our best tool to investigate the distinct straggler formation scenarios, considering that different scenarios lead to different chemical signatures. Among the abundance differences we detect, the clearest is the shift in sodium. We relate this to a possible conversion of Na in more massive stars. Fig. 17 highlights, indeed, that the shift in Na between the YAR and thick disc populations (visible also in Fig. 16) is produced by the YAR stars of mass > 1.2 M⊙. Stars in this mass range can experience mixing which can modify the Na abundance at the surface of the star (Smiljanic et al. 2009; Smiljanic 2012; Luck 1994). Finally, Ce, a slow neutron-capture element formed during the AGB phase, also shows a small positive offset (∼ 0.1 dex) compared to the standard thick disc population, also found in the BAWLAS catalogue. Such an enhancement is expected if these YAR stars are components of binary systems where the more massive companion has evolved to the AGB phase and transferred some of its mass to the lower mass companion. Although these differences seem to be real, they are small and need to be confirmed.
While the mass transfer scenario (by either stellar wind or Roche lobe overflow, RLO) is in agreement with these observational indications, it is possible that some of the YAR stars have been formed through collision or coalescence mechanisms, for several reasons. First, little or no modification of stellar abundances is expected in the case of collision, as already mentioned (Lombardi et al. 1995, 1996; Sills et al. 2001), which may be the case for some of our stars. Second, the mass determinations shown in Figs. 8 and 19 suggest that the highest masses found may be difficult to achieve with mass transfer through stellar wind or RLO. Finally, recent studies of the dynamics of triple systems have shown that secular evolution of these systems may lead to collisions (He & Petrovich 2018; Toonen et al. 2020, 2022; Grishin & Perets 2022) and could provide an efficient way to form stragglers in low density environments such as the field. In this case, the evolution of the most massive component can destabilise the system and lead to the collision of the two closest components. Mergers or coalescence of close binaries can also occur and produce objects more massive than the mass transfer or wind scenarios (Sills 2015).
Conclusions
In this work we made use of the APOGEE DR17 survey and of the astroNN value added catalogue to identify a sample of 249 YAR red giant stars. We selected these objects to be α-enhanced and younger than 4 Gyr. In the first part of the paper we characterise the YAR population, comparing the global properties of the YAR stars with a selected thick disc reference sample (24,357 red giant stars). The relative position of the YAR stars in the (M K0, T eff) diagram indicates that they are more massive than the thick disc reference stars. This is also supported by the comparison of the mass distributions of the two samples.
Overall, the YAR stars present [α/Fe] and [Fe/H] patterns which resemble those of the thick disc. However, we point out the presence of a small discrepancy in the α-distribution of about ∼ 0.01 dex: the YAR stars follow the thick disc trend, but are shifted to lower values of [α/Fe], in particular at [Fe/H] > −0.4.
From the kinematical point of view, we demonstrate that the characteristics of our YAR sample stars are similar to those of the reference thick disc sample. Taking into account the described properties, we do record the YAR stars in our dataset as part of the thick disc population. From this outlook these objects are most likely straggler stars (of the thick disc), as has already been suggested (references in Section 1).
Fig. 19: Properties of 328 dwarf, red clump, and red giant straggler stars within 2 kpc from the Sun. Top: Gaia HRD. YAR stars are colour-coded by their age. Red giant and red clump YAR candidates with log g < 3.5 are selected to have an age less than 4 Gyr and an error on the age determination less than 3 Gyr, while dwarf straggler candidates with 3.5 < log g < 4.45 are selected from their position relative to the turn-off point in the HRD (see the text for details about the selection). Grey points are thick disc stars within 2 kpc from the Sun with no selection on log g nor age. Photometry has been corrected for extinction and reddening using the reddening map from Lallement et al. (2019). Bottom: mass histogram of YAR stars for which a mass determination is available from the StarHorse catalogue (270 objects, black histogram), compared with the thick disc sample of the upper panel (grey dashed histogram).
We searched for candidates of accreted straggler stars in our dataset, picking out a halo-like population and excluding the so-called heated thick disc population. We obtain 25 candidates, the analysis of which leads to inconclusive results. Presumably, this sample is not extensive enough to outline its global properties with confidence.
In the second part of the paper, we study the chemical patterns of the YAR stars compared to the thick disc reference sample. The individual α-elements investigated are O, Mg, Si, S, Ca and Ti. Mg and O seem to be the main cause of the difference in the [α/Fe] − [Fe/H] distribution between YAR and thick disc stars. Ca and Ti exhibit a clear shift between the two samples, but opposite to the trend for oxygen and magnesium. We find a clear offset in sodium of about ∼ 0.1 dex. The [Na/Fe] − mass distribution suggests that the enhanced Na abundance in the YAR stars is linked to their increased masses. The enhancement could be related to some mixing phenomena, which could themselves depend on the mass of the stars. Lastly, we notice that the YAR stars tend to be more vanadium and cerium enhanced with respect to the standard thick disc population. The case of Ce is noteworthy, considering that s-process neutron-capture elements have been proposed as clues to mass transfer between an asymptotic giant branch star and a companion.
Finally, we report the results obtained by extending the YAR selection criterion to higher gravities in the APOGEE DR17 dataset. In particular, the dwarf YAR stars are located in the HRD in the same position where standard BSSs are observed. The fraction of dwarf stragglers in our dataset is lower than the number of BSSs found in the oldest (age > 1 Gyr) open clusters, but this could be due to our conservative cut on age. In light of these investigations, the formation pathway most consistent with our results is mass acquisition via mass transfer in a binary system. It is possible, however, that the most massive straggler stars of our sample are produced by collision or coalescence.
Fig. 20: M K0 − T eff diagram of YAR candidate stars in the entire log g range in different metallicity intervals. Red giant and red clump YAR candidates with log g < 3.5 are selected on the thick disc sequence to have an age < 4 Gyr and an error on the age determination less than 3 Gyr. Dwarf straggler candidates with 3.5 < log g < 4.45 are selected to be to the left of and above the TO (M K0 = 2.2, T eff = 5800 K). The grey points are stars on the thick disc sequence with no selection on log g nor age.
Fig. 1: Distributions of surface gravity and mean radius of APOGEE DR17 flag-selected stars. Top: log g normalised histogram.
Fig. 4: The sample used in this study divided into three separate intervals of mean radius: stars at 0-6 kpc (top panels), 6-9 kpc (middle panels) and 9-20 kpc (bottom panels). The number of stars above the line at [α/Fe] = 0.15 and younger than 4 Gyr is indicated on each plot. All the stars in this figure are selected to have an error on age below 3 Gyr.
…the isochrones in the left panel of the figure. The blue and fuchsia lines represent respectively old (age = 12 Gyr, approximately the age of the thick disc) and young (age = 2 Gyr) MIST (MESA Isochrones and Stellar Tracks) metal-poor ([Fe/H] = −0.9 dex) …
Fig. 5: HRD of the YAR stars with log g < 2.2 and [Fe/H] > −1. The colour of the points codes age (left plot) and metallicity (right plot). In both panels the small grey dots in the background represent the thick disc reference sample (details of the selection are given in Subsection 2.2).
…(left panels of Fig. 9) and YAR stars at [N/Fe] > 0.75 (right panels of Fig. 9). The latter group seems to be separated from the rest of the YAR in the [N/Fe] − [Fe/H] plane, outlining a possible bi-modality in the nitrogen distribution. Moreover, these N-enhanced objects correspond to the youngest stars in the HRD of Fig. 5 (left panel, dark blue colour). They occupy the leftmost position within the YAR sequence and do not overlap with the thick disc reference sample. In the top left panel of Fig. 10 we present the normalised histograms in metallicity and [α/Fe] for the YAR sample and the thick disc population in our dataset at R_mean < 20 kpc. The two samples present some slight differences in the metal-poor tail of the distribution (−1 < [Fe/H] < −0.9 dex) and at higher metallicities (−0.4 < [Fe/H] < −0.3 dex), where the YAR distribution decreases sharply with respect to the thick disc. Despite these dissimilarities, the YAR Metallicity Distribution Function (MDF) globally resembles the thick disc MDF, having similar characteristics and both peaking at [Fe/H] ∼ −0.5 dex. This result is confirmed by the findings of Sun et al. (2020) and Zhang et al. (2021). Similarly, the plot on the top right of Fig. 10 highlights how the density histogram distribution in α-elements of the YAR sample clearly follows the thick disc distribution. Indeed, they present similar shapes and matching peaks ([α/Fe] ≈ 0.28 dex). However, we notice a small shift (possibly of about a hundredth of a dex) between the two. This shift is confirmed when looking at the distributions in the [α/Fe] − [Fe/H] plane. The bottom left plot of …
Fig. 6: HRD of the YAR stars with log g < 2.2 and −1 < [Fe/H] < −0.8 dex. The colour of the points codes age (left plot) and metallicity (right plot). MIST isochrones at 2 and 12 Gyr are also plotted on the left plot in fuchsia and dark blue, respectively. In both panels the small grey dots in the background represent the thick disc reference sample (details of the selection are given in Subsection 2.2) restricted to [Fe/H] …
Fig. 7: Comparison between asteroseismic mass measurements taken from table 1 in Jofre et al. (2022) and StarHorse. We distinguish between stars characterised by [α/Fe] ≥ 0.15 dex (gold dots) and by [α/Fe] < 0.15 dex (navy dots). The respective error bars are plotted. The red line is the bisector and represents the 1:1 relation.
…which is represented by iso-density contours in the figure. As shown by the plot, these objects indeed span a large range of V_r …
Fig. 8: Mass distribution function of the YAR (in black solid) and the thick disc (in dark grey dashed, and light grey dash-dotted) samples. f = N_i/N_tot represents the fractional number density, where N_i is the number of stars in each bin of mass and N_tot is the total number of stars of the sample. The light grey dash-dotted histogram restricts the thick disc sample to stars with mass error less than 0.05 M⊙.
Fig. 9: Carbon and nitrogen distributions as a function of metallicity for thick disc and YAR stars. Top plots: thick disc stars are colour-coded according to their age while YAR stars are shown as black empty circles. Bottom plots: YAR stars are colour-coded according to their ages while the thick disc sample is shown as grey points.
Fig. 12: Chemo-dynamical characterisation of the young accreted candidate sample, selected as having eccentricity > 0.7 and R_APO > 10 kpc (see the text for details). The sample is colour-coded by age in panels (a) and (c) and is shown in fuchsia in panels (b) and (d). YAR giant stars having [Fe/H] < −0.8 dex are represented in black in panels (b) and (d). The under-plotted distribution in the [α/Fe] − [Fe/H] plane represents the red giants reference population in APOGEE DR17, while in the HRD the underlying comparison sample is made of thick disc stars having [Fe/H] < −0.8 dex.
Fig. 13: Comparison of magnesium and oxygen abundance for stars in the YAR sample and stars in the thick disc.
Fig. 19: Properties of 328 dwarf, red clump, and red giant straggler stars within 2 kpc from the Sun. Top: Gaia HRD. YAR stars are colour-coded by their age. Red giant stars and red clump YAR candidates with log g < 3.5 are selected to have an age less than 4 Gyr and an error on age determination less than 3 Gyr, while dwarf straggler candidates with 3.35 < log g < 4.45 are selected from their position relative to the turn-off point in the HRD (see the text for details about the selection). Grey points are thick disc stars within 2 kpc from the Sun, with no selection on log g nor age. Photometry has been corrected for extinction and reddening using the reddening map from Lallement et al. (2019). Bottom: mass histogram of YAR stars for which a mass determination is available from the StarHorse catalogue (270 objects, black histogram), compared with the thick disc sample of the upper panel (grey dashed histogram).
Fig. 20: M_K0 versus Teff diagram of YAR candidate stars in the entire log g range, in different metallicity intervals. Red giant stars and red clump YAR candidates with log g < 3.5 are selected on the thick disc sequence to have an age < 4 Gyr and an error on age determination less than 3 Gyr. Dwarf straggler candidates with 3.5 < log g < 4.45 are selected to lie to the left of and above the turn-off (M_K0 = 2.2, Teff = 5800 K). The grey points are stars on the thick disc sequence with no selection on log g nor age.
Table 1: Fractional number of stars for thick disc and YAR samples in different mean radii ranges.

            N_0-6/N_6-9                    N_6-9/N_9-20
Thick disc  2.98 (max = 3.04, min = 2.92)  2.66 (max = 2.75, min = 2.56)
YAR         4.16 (max = 5.24, min = 3.35)  2.65 (max = 4.02, min = 1.81)
[α/Fe] > 0.15 at [Fe/H] > 0 and [α/Fe] > −0.075 × [Fe/H] + 0.15 below solar metallicity, and at age greater than 4 Gyr. It contains 24,357 stars.
Fig. 10: Chemical characterisation of the thick disc and YAR sample for red-giant-type stars. Top panels: MDFs and [α/Fe] distribution functions of the YAR (in solid black lines) and thick disc (in dashed grey lines) samples. f = N_i/N_tot represents the fractional number density, where N_i is the number of stars in each bin of [Fe/H] or [α/Fe] and N_tot is the total number of stars of the sample. Bottom panels: [α/Fe] − [Fe/H] distribution for the 249 YAR stars (black dots, left panel; possible binaries marked) and 249 randomly selected thick disc stars (empty dots, right panel).

…Galactic halo consisting of kinematically heated thick disc stars (Di Matteo et al. 2019; Belokurov et al. 2020), picking only stars at [α/Fe] < −0.075 × [Fe/H] + 0.15 dex. We obtain 254 stars, 25 of which are young (age < 4 Gyr and error on age < 3 Gyr). Their position in the [α/Fe] − [Fe/H] plane is highlighted in panel (a) of Fig. 12. Most of these stars are characterised by metallicity below −0.6 / −0.8 dex and [α/Fe] < 0.20 dex. We show the distribution of the young accreted candidates in the HRD of panel (c) in Fig. 12. They form a sequence located to the left of the thick disc stars in the same range of metallicity ([Fe/H] < −0.8 dex), as expected if they are more massive. The distribution in the Toomre diagram (plot b) illustrates the difference in kinematics between accreted YAR stars and those of the thick disc over the …
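For illustration, the two cuts used above can be written as simple boolean masks. This is a hedged Python sketch with mock stand-in columns; the real quantities come from APOGEE DR17 and StarHorse, and none of the variable names below are taken from the paper.

```python
import numpy as np

# mock stand-in columns, one entry per star (illustrative only)
rng = np.random.default_rng(1)
n = 10_000
fe_h = rng.uniform(-1.0, 0.3, n)        # [Fe/H]
alpha_fe = rng.uniform(0.0, 0.4, n)     # [alpha/Fe]
age = rng.uniform(0.5, 13.0, n)         # Gyr
age_err = rng.uniform(0.1, 5.0, n)      # Gyr

# thick disc reference: [alpha/Fe] > 0.15 above solar metallicity, above the
# sloped locus below it, and age > 4 Gyr
on_sequence = np.where(fe_h > 0, alpha_fe > 0.15,
                       alpha_fe > -0.075 * fe_h + 0.15)
thick_disc = on_sequence & (age > 4.0)

# halo-like (accreted) candidates lie below the same locus; young candidates
# keep the paper's age cuts (age < 4 Gyr, error on age < 3 Gyr)
young_accreted = (alpha_fe < -0.075 * fe_h + 0.15) & (age < 4.0) & (age_err < 3.0)
print(thick_disc.sum(), young_accreted.sum())
```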
Fig. 14: Same as Fig. 13 for Si and S.
Fig. 16: Same as Fig. 13 for Al and Na.
Fig. 15: Same as Fig. 13 for Ca and Ti.
Fig. 17: [Na/Fe] − mass distribution of our samples. The orange dots represent the YAR stars; the fuchsia squares show the YAR stars with an uncertainty on [Na/Fe] < 0.06 dex; the grey dots symbolize the thick disc stars, selected to have an uncertainty on mass < 0.05 M⊙ and an uncertainty on Na abundance < 0.06 dex.
Fig. 18: Same as Fig. 13 for Ce and V.

…chemical patterns depending on several factors. The chemical characteristics of these objects show a possible small systematic departure (∼0.01 dex / 0.02 dex) from the standard thick disc population for some elements such as O, Ca, Ti and Al. The difference is larger for Na (∼0.1 dex), and is confirmed by Na abundances obtained by Hayes et al. (2022) (BAWLAS catalogue).
We remark that in general we discard from the APOGEE DR17 sample the stars having [Fe/H] < −1, their age estimates being unreliable (Mackereth et al. 2019).
https://www.sdss4.org/dr17/irspec/abundances/
Acknowledgements. The authors would like to thank Ted Mackereth and Henry Leung for helpful comments on the age determination from the [C/N] ratio, and Christian Hayes for providing the BAWLAS catalogue. The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-13-BS01-0005 (project ANR-20-CE31-0004-01 MWDisc). This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2018). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
References

Abdurro'uf, Accetta, K., Aerts, C., et al. 2022, ApJS, 259, 35
Belokurov, V., Sanders, J. L., Fattahi, A., et al. 2020, MNRAS, 494, 3880
Bensby, T., Feltzing, S., & Oey, M. S. 2014, A&A, 562, A71
Boffin, H. M. J. & Jorissen, A. 1988, A&A, 205, 155
Chiappini, C., Anders, F., Rodrigues, T. S., et al. 2015, A&A, 576, L12
Di Matteo, P., Haywood, M., Lehnert, M. D., et al. 2019, A&A, 632, A4
Fuhrmann, K. 2011, MNRAS, 414, 2893
Fuhrmann, K. & Bernkopf, J. 1999, A&A, 347, 897
Fuhrmann, K., Chini, R., Haas, M., et al. 2012, ApJ, 761, 159
Fuhrmann, K., Chini, R., Hoffmeister, V. H., & Stahl, O. 2011, MNRAS, 416, 391
Gaia Collaboration, Arenou, F., Babusiaux, C., et al. 2022a, arXiv e-prints, arXiv:2206.05595
Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2022b, arXiv e-prints, arXiv:2208.00211
Glebbeek, E. & Pols, O. R. 2008, A&A, 488, 1017
Grishin, E. & Perets, H. B. 2022, MNRAS, 512, 4993
Hayden, M. R., Bovy, J., Holtzman, J. A., et al. 2015, ApJ, 808, 132
Hayes, C. R., Masseron, T., Sobeck, J., et al. 2022, ApJS, 262, 34
Haywood, M., Di Matteo, P., Lehnert, M. D., Katz, D., & Gómez, A. 2013, A&A, 560, A109
Haywood, M., Di Matteo, P., Snaith, O., & Lehnert, M. D. 2015, A&A, 579, A5
He, M. Y. & Petrovich, C. 2018, MNRAS, 474, 20
Hills, J. G. & Day, C. A. 1976, Astrophys. Lett., 17, 87
Izzard, R. G., Preece, H., Jofre, P., et al. 2018, MNRAS, 473, 2984
Jofre, P., Jorissen, A., Aguilera-Gomez, C., et al. 2022, arXiv e-prints, arXiv:2207.11084
Jofré, P., Jorissen, A., Van Eck, S., et al. 2016, A&A, 595, A60
Jönsson, H., Holtzman, J. A., Allende Prieto, C., et al. 2020, AJ, 160, 120
Kounkel, M., Covey, K. R., Stassun, K. G., et al. 2021, AJ, 162, 184
Lagarde, N., Decressin, T., Charbonnel, C., et al. 2012, VizieR Online Data Catalog, J/A+A/543/A108
Lallement, R., Babusiaux, C., Vergely, J. L., et al. 2019, A&A, 625, A135
Leung, H. W. & Bovy, J. 2019, MNRAS, 483, 3255
Lindegren, L., Bastian, U., Biermann, M., et al. 2021, A&A, 649, A4
Lombardi, James C., J., Rasio, F. A., & Shapiro, S. L. 1995, ApJ, 445, L117
Lombardi, James C., J., Rasio, F. A., & Shapiro, S. L. 1996, ApJ, 468, 797
Luck, R. E. 1994, ApJS, 91, 309
Mackereth, J. T., Bovy, J., Leung, H. W., et al. 2019, MNRAS, 489, 176
Martig, M., Rix, H.-W., Silva Aguirre, V., et al. 2015, MNRAS, 451, 2230
Masseron, T. & Gilmore, G. 2015, MNRAS, 453, 1855
Matsuno, T., Yong, D., Aoki, W., & Ishigaki, M. N. 2018, ApJ, 860, 49
Matteucci, F. & Greggio, L. 1986, A&A, 154, 279
McCrea, W. H. 1964, MNRAS, 128, 147
Momany, Y. 2015, in Astrophysics and Space Science Library, Vol. 413, ed. H. M. J. Boffin, G. Carraro, & G. Beccari, 129
Momany, Y., Held, E. V., Saviane, I., et al. 2007, A&A, 468, 973
Myeong, G. C., Belokurov, V., Aguado, D. S., et al. 2022, ApJ, 938, 21
Paczyński, B. 1971, ARA&A, 9, 183
Rain, M. J., Ahumada, J. A., & Carraro, G. 2021, A&A, 650, A67
Recio-Blanco, A., de Laverny, P., Kordopatis, G., et al. 2014, A&A, 567, A5
Sills, A. 2015, in Astrophysics and Space Science Library, Vol. 413, ed. H. M. J. Boffin, G. Carraro, & G. Beccari, 277
Sills, A., Faber, J. A., Lombardi, James C., J., Rasio, F. A., & Warren, A. R. 2001, ApJ, 548, 323
Silva Aguirre, V., Bojsen-Hansen, M., Slumstrup, D., et al. 2018, MNRAS, 475, 5487
Smiljanic, R. 2012, MNRAS, 422, 1562
Smiljanic, R., Gauderon, R., North, P., et al. 2009, A&A, 502, 267
Sun, W. X., Huang, Y., Wang, H. F., et al. 2020, ApJ, 903, 12
Tinsley, B. M. 1979, ApJ, 229, 1046
Toonen, S., Boekholt, T. C. N., & Portegies Zwart, S. 2022, A&A, 661, A61
Toonen, S., Portegies Zwart, S., Hamers, A. S., & Bandopadhyay, D. 2020, A&A, 640, A16
Webbink, R. F. 1985, in Interacting Binary Stars, ed. J. E. Pringle & R. A. Wade, 39
Yong, D., Casagrande, L., Venn, K. A., et al. 2016, MNRAS, 459, 487
Zhang, M., Xiang, M., Zhang, H.-W., et al. 2021, ApJ, 922, 145
| [] |
[
"Emergence of synchronisation in a driven-dissipative hot Rydberg vapor",
"Emergence of synchronisation in a driven-dissipative hot Rydberg vapor"
] | [
"Karen Wadenpfuhl \nDepartment of Physics\nJoint Quantum Centre (JQC) Durham-Newcastle\nDurham University\nDH1 3LEUnited Kingdom\n\nPhysikalisches Institut\nUniversität Heidelberg\nIm Neuenheimer Feld 22669120HeidelbergGermany\n",
"C Stuart Adams \nDepartment of Physics\nJoint Quantum Centre (JQC) Durham-Newcastle\nDurham University\nDH1 3LEUnited Kingdom\n"
] | [
"Department of Physics\nJoint Quantum Centre (JQC) Durham-Newcastle\nDurham University\nDH1 3LEUnited Kingdom",
"Physikalisches Institut\nUniversität Heidelberg\nIm Neuenheimer Feld 22669120HeidelbergGermany",
"Department of Physics\nJoint Quantum Centre (JQC) Durham-Newcastle\nDurham University\nDH1 3LEUnited Kingdom"
] | [] | We observe synchronisation in a thermal (35-60 °C) atomic (Rb) ensemble driven to a highly excited Rydberg state (principal quantum number n ranging from 43 to 79). Synchronisation in this system is unexpected due to the atomic motion; however, we show theoretically that sufficiently strong interactions via a global Rydberg density mean field cause frequency and phase entrainment. The emergent oscillations in the vapor's bulk quantities are detected in the transmission of the probe laser for a two-photon excitation scheme. | null | [
"https://export.arxiv.org/pdf/2306.05188v1.pdf"
] | 259,108,237 | 2306.05188 | 6e788354afafe93048f719352ce9d29892aaabe8 |
Emergence of synchronisation in a driven-dissipative hot Rydberg vapor
Karen Wadenpfuhl
Department of Physics
Joint Quantum Centre (JQC) Durham-Newcastle
Durham University
DH1 3LEUnited Kingdom
Physikalisches Institut
Universität Heidelberg
Im Neuenheimer Feld 22669120HeidelbergGermany
C Stuart Adams
Department of Physics
Joint Quantum Centre (JQC) Durham-Newcastle
Durham University
DH1 3LEUnited Kingdom
Emergence of synchronisation in a driven-dissipative hot Rydberg vapor
(Dated: June 9, 2023)
We observe synchronisation in a thermal (35-60 °C) atomic (Rb) ensemble driven to a highly excited Rydberg state (principal quantum number n ranging from 43 to 79). Synchronisation in this system is unexpected due to the atomic motion; however, we show theoretically that sufficiently strong interactions via a global Rydberg density mean field cause frequency and phase entrainment. The emergent oscillations in the vapor's bulk quantities are detected in the transmission of the probe laser for a two-photon excitation scheme.
Nonlinear systems are abundant in nature, where the nonlinearities introduce a range of rich and varied phenomena. Well known is the ability of nonlinear systems to generate multiple steady states, so that the system's state is determined by its past trajectory and hysteresis loops may form. Such multistable states have been observed in numerous biological [1][2][3][4], mechanical [5][6][7], and atomic systems [8][9][10][11]. Nonlinear dynamics and bifurcation theory provide a modelling framework for these phenomena, enabling a fundamental understanding of the underlying processes from within a generalised mathematical framework.
When adding dissipation to a conservative nonlinear system, the resulting dynamics get even richer and the system can support rather unexpected types of stable solutions. Under certain conditions, dissipative systems with nonlinearities can support chaotic behavior [12,13] or limit cycles and time-periodic solutions [14,15]. A Hopf bifurcation may cause the appearance of attractive limit cycles, which leads to self-sustained oscillations of the system parameters. This oscillatory behavior is not imprinted by an external drive but arises fundamentally from the system's dynamics. Such self-oscillating dynamics have been found to model biological processes [16][17][18][19][20] and physical systems [21][22][23][24].
A particularly intriguing question concerns the behavior of an ensemble of self-sustained oscillators experiencing a form of coupling to one another, or to an external force. First studied by Kuramoto for an ensemble of globally coupled oscillators with different natural frequencies [25], it has been found that, under certain conditions, all or a subset of the oscillators begin to lock in frequency and phase [26][27][28]. As a result, a transition towards a synchronised state occurs in the ensemble. This synchronisation transition has been used to explain, e.g., the strong lateral vibrations of the Millennium Bridge, London, on its opening day [29], though this is contested [30], or the Belousov-Zhabotinsky and other chemical reactions [31,32]. In nature, synchronisation occurs in ensembles of fireflies flashing in unison [33], the chirps of snowy tree crickets [34], and occasionally in the applause of audiences [35].
To further study the emergence of synchronisation and the resulting non-equilibrium dynamics, a simple and easily controllable system with a macroscopic number of coupled oscillators and tunable properties is highly desirable. In the following, we demonstrate that the occurrence of a synchronised phase is expected in a continuously driven, dissipative three-level system with a power-law coupling to a mean field, and report on the observation of synchronisation in a hot Rydberg vapor. A surprising, yet theoretically expected, feature of this system is that oscillations of the bulk quantities remain observable even though the individual constituents are undergoing random motion.
Rydberg atoms are known to interact strongly, with a power-law scaling in distance. This translates into a mean-field approach [36] with a power-law scaling β of the Rydberg level shift in the Rydberg density ρ_rr. A similar power-law scaling can also be used to model the level shift induced by ionization [37] or other mean-field-inducing mechanisms. Adopting this mean-field approach, the resulting equations of motion (EOMs) are formulated for a three-level basis set with coherent driving by Ω_x and dissipation Γ_yz, see figure 1 (a). For any β ≠ 0, the EOMs are nonlinear and their steady states are defined by the roots of a polynomial of order max(4β + 1, 1) in ρ^i_er, the imaginary part of the coherence ρ_er.
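As a minimal illustration, the sketch below implements such nonlinear EOMs in Python/NumPy (the supplement states the actual simulation is written in Python, but this code, its names, level ordering and sign conventions are our assumptions, not the authors' implementation). The mean-field nonlinearity enters only through the shift of the Rydberg level.

```python
import numpy as np

def obe_rhs(rho, Op, Oc, dp, dc, V, beta, Gge, Ger, Ggr):
    """Right-hand side of nonlinear OBEs for a ladder |g>, |e>, |r> with a
    Rydberg-density mean-field level shift V * rho_rr**beta (hbar = 1,
    rotating frame); an illustrative sketch, not the paper's code."""
    shift = V * rho[2, 2].real ** beta  # the nonlinearity: shift of |r>
    H = np.array([[0.0,     Op / 2,  0.0],
                  [Op / 2, -dp,      Oc / 2],
                  [0.0,     Oc / 2, -dp - dc + shift]], dtype=complex)
    drho = -1j * (H @ rho - rho @ H)
    # decay channels e->g (Gge), r->e (Ger), r->g (Ggr) as Lindblad terms
    for a, b, G in [(0, 1, Gge), (1, 2, Ger), (0, 2, Ggr)]:
        L = np.zeros((3, 3), dtype=complex)
        L[a, b] = 1.0
        drho += G * (L @ rho @ L.conj().T
                     - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return drho
```

Integrating obe_rhs with any standard ODE stepper from different initial density matrices can be used to look for the limit-cycle attraction discussed below.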
The resulting steady-state solutions of the nonlinear EOMs reveal regions of multistability where an odd number of equilibria exist for one set of parameters Ω_x, Δ_x, Γ_yz, V, β. To extract the stability of the solutions, the spectrum of eigenvalues λ_j of the linearisation (the Jacobian) is evaluated at the steady state [38]. Stability is guaranteed if Re(λ_j) < 0 for the eight non-constant eigenvalues. Consequently, the repulsive branch marked in red in figure 1 (b) is detected by spectral analysis. However, the steady states indicated in green are also unstable. Here, a Hopf bifurcation occurs where a complex-conjugate pair of eigenvalues λ_j crosses the imaginary axis and renders the steady state unstable. As a result, the system is attracted towards a limit cycle, which leads to robust self-sustained oscillations of the system parameters in time. Figure 1 (c) and (d) show that the system is attracted to the same limit cycle for different initial states, but each initial state leads to a different phase in the limit cycle at any fixed time t. This freedom of phase in the limit cycle is indicative of a self-oscillating system and fundamentally distinguishes it from a driven periodic system, where the phase in the limit cycle is locked to that of the drive. The freedom of phase in the resulting limit cycle has also been described using the language of continuous time crystals [39,40].
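This spectral stability test can be sketched numerically by finite-differencing the flow around a steady state. The helper below is again our illustrative reconstruction (it reuses obe_rhs from the previous block; the real-vector packing and the step eps are our choices):

```python
import numpy as np

def jacobian_eigs(rho_ss, rhs, eps=1e-7, **params):
    """Eigenvalues of a finite-difference Jacobian of the OBE flow at a
    steady state rho_ss (3x3 complex). Stability requires Re(lambda) < 0 for
    the non-trivial eigenvalues; a Hopf bifurcation is signalled by a
    complex-conjugate pair crossing the imaginary axis."""
    def flow(x):
        rho = (x[:9] + 1j * x[9:]).reshape(3, 3)
        d = rhs(rho, **params).ravel()
        return np.concatenate([d.real, d.imag])
    x0 = np.concatenate([rho_ss.ravel().real, rho_ss.ravel().imag])
    f0 = flow(x0)
    J = np.empty((18, 18))
    for k in range(18):
        xk = x0.copy()
        xk[k] += eps
        J[:, k] = (flow(xk) - f0) / eps
    return np.linalg.eigvals(J)

# e.g. jacobian_eigs(rho_ss, obe_rhs, Op=3.8, Oc=2.0, dp=0.0, dc=-1.0,
#                    V=-12.0, beta=3, Gge=1.0, Ger=1e-5, Ggr=1e-2)
```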
Although optical bistability has been found experimentally in driven-dissipative hot Rydberg vapors [10], one would intuitively expect any oscillations in this system to average out due to atomic motion. The motion-induced dephasing for different atomic velocities results in a spread of the natural frequencies of the limit cycles and the phases therein. Although about half of the velocity classes are attracted towards a limit cycle, no macroscopic oscillations can be seen, as shown by the black line in figure 2 (a).
The crucial point which has been neglected in the above argumentation is that the different velocity classes in the vapor do not develop independently of one another. Each atom experiences a Rydberg level shift determined by its surroundings. The atoms, or velocity classes, interact with their local environment and their individual dynamics are, in turn, directly influenced by the environment through the shared Rydberg atom density. When taking this global coupling into account, the resulting dynamics of the vapor are very different, as shown in figure 2 (b). After an initial transient phase, synchronisation sets in, where the velocity classes begin to oscillate in lockstep with a single frequency and fixed phase relation. This is possible because the phase of a velocity class within its limit cycle is free and therefore easily adjusted by the mean field. With a growing number of velocity classes oscillating in phase lock, the mean-field strength increases, which forces even more velocity classes to align their oscillations until eventually a partially or completely synchronised state is reached.
This transition towards a synchronised state of globally coupled oscillators has been known since Christiaan Huygens' time [41] and has since been studied extensively from a mathematical perspective. After the initial work by Winfree [42] and Kuramoto [25], the study of synchronisation has been extended to more general forms of the global coupling force [27,28] and other situations. Famous examples where synchronisation is experimentally demonstrated for a few oscillators are the synchronisation of pendulum clocks [41] or metronomes [43] fixed to a common support, which provides the coupling. However, large numbers of globally coupled oscillators with widely tunable properties are not so easily available. Therefore, a hot Rydberg vapor with ∼O(10⁹) atoms in the beam volume, and a somewhat lower number of oscillators, provides an ideal testbed for an experimental study of the synchronisation transition for large numbers of constituent oscillators.
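For intuition, a minimal Kuramoto-type simulation (our own illustrative sketch, not a model from this paper) shows the entrainment: above a critical coupling K, the order parameter r = |⟨exp(iθ)⟩| grows towards one.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 1000, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)           # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)    # random initial phases

for _ in range(steps):
    z = np.exp(1j * theta).mean()         # complex order parameter r*exp(i*psi)
    # mean-field form of the coupling (K/N) * sum_j sin(theta_j - theta_i)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

print(f"order parameter r = {abs(np.exp(1j * theta).mean()):.2f}")
```

With a unit-width frequency distribution, K = 2.0 lies above the classic mean-field threshold, so a macroscopic fraction of the oscillators phase-locks.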
In our experiment, we use 87Rb number densities of ρ_87Rb ∈ [0.1, 6.1] · 10¹¹ cm⁻³, which corresponds to temperatures from 35 °C to 60 °C for a vapor of rubidium with natural abundance. The probe laser was locked to a detuning of Δ_p/2π = −140 MHz below the 87Rb resonance with the intermediate state |5S_1/2, F = 2⟩ → |5P_3/2, F = 3⟩.
The counterpropagating coupling laser was set to scan through two-photon resonance with a |nS_1/2⟩ or |nD_5/2⟩ Rydberg state at typical scan speeds of up to 10 MHz/ms. Typical Rabi frequencies were in the range Ω_p/2π ∈ [100, 330] MHz and Ω_c/2π ≤ 35 MHz for Rydberg states with principal quantum numbers n ranging from 43 to 79. Different beam waists of up to w ≤ 1 mm and beam waist ratios of w_p/w_c ≈ 2, 0.9, 0.5 have been tried, but no direct dependence on the beam waists has been observed. The data presented here was obtained for w_p = 390 µm and w_c = 440 µm. Setup and relevant level scheme are shown in figure 3 (a) and (b), respectively. Figure 3 (c) shows a typical series of scans for fixed probe and increasing coupling Rabi frequency. After an onset of bistability in the optical response, a window featuring oscillations in the vapor transmission opens. This synchronisation window widens for a further increase in coupling Rabi frequency. When instead setting the coupling Rabi frequency to a fixed value, the width of the oscillation region decreases with increasing probe Rabi frequency. In the various parameter regimes that were explored experimentally, the synchronisation regime is often preceded by bistability, but not necessarily so. We find a strong dependence of the onset of oscillations on the Rydberg state and vapor density. Higher atom number densities require lower Rabi frequencies for the oscillations to set in. This behavior is expected from a synchronisation perspective, since larger global coupling strengths require lower mean-field strengths to initiate entrainment.
We observe an onset of synchronisation for coupling to both nS and nD Rydberg states, though it is easier to explore the behavior and scaling when coupling to D states due to the stronger dipole coupling at similar n. The oscillations were also observed when coupling to a fourth P or F state with an additional rf field, in both the weak and strong driving limit, respectively. In the fully Autler-Townes-split regime, oscillations occurred as long as the Rydberg population was high enough. The presence of synchronisation is therefore neither a purely three-level phenomenon, nor does it depend on the orbital angular momentum of the Rydberg state.
With all system parameters held constant and fixed laser detunings, the synchronised state persists for > 10 minutes and the oscillations maintain their shape. Analysis of a time trace reveals a narrow frequency peak with a spectrum of weaker, higher harmonics. The oscillation frequency ν_osc of the first peak was observed to lie between 10 kHz and 25 kHz, though persistent oscillations of up to 43 kHz were measured. In Fig. 3 (c) one can see that the oscillation frequency varies along the coupling laser scan. As a general trend, an increase in oscillation frequency ν_osc with increasing Rabi frequencies was observed. Additionally, the formation of several separate synchronisation regions, typically with a different range of ν_osc but similar shapes of the oscillations along the region, has been found. This is also visible in figure 3 (c), where the two regions share a boundary at −Δ_c/2π ≈ 26, 36, 48 MHz for Ω_c/2π = 29, 33, 38 MHz, respectively. Figure 4 (a) shows the change in oscillation shape and frequency with increasing Δ_c. The rightmost zoom-in, marked in red, belongs to the next synchronisation region beginning at Δ_c/2π ≈ −45 MHz. It again shows the sawtooth-like shape at its lower-frequency end that can also be seen in the two leftmost insets. Figure 4 (b)-(d) show results obtained with the thermal vapor simulation. Two limit cycle regions appear in the spectrum (b), though a cross-section of phase space shows that the case Δ_c/Γ_ge = −3 is not a limit cycle but resembles a system near a strange attractor. Generally, the thermal vapor model shows regions of multistability, which implies that the pointwise integration technique in (b) cannot accurately model a laser scan. This is because the thermal vapor system's trajectory depends on its past state and the attractor it is drawn to, which pointwise integration does not account for. The thermal vapor model reproduces the observed experimental behavior phenomenologically. This includes the scaling of the oscillation region with changes in Ω_p, Ω_c and V, as well as the expected shape of the oscillations. Therefore, we attribute the emergence of macroscopic oscillations in the bulk response of a hot Rydberg vapor to a Kuramoto-like synchronisation transition for sufficiently large global coupling strengths. Possible mechanisms causing the power-law scaling of the Rydberg density mean field are Rydberg interactions [36] or charge-induced Stark shifts due to ionisation [37], though other effects could possibly lead to similar power-law scaling behaviors.
In summary, we observe the transition towards synchronisation in a strongly driven, dissipative, hot Rydberg vapor. The observed changes of the synchronised region with variation of the Rabi frequency, vapor density, and interaction strength are reproduced by a theoretical model extended to a thermal vapor simulation. The model's nonlinearity leads to the emergence of attractive limit cycles for individual velocity classes through a Hopf bifurcation. Under the influence of global coupling through the shared Rydberg density, the constituent oscillating velocity classes synchronise in a thermal vapor, which leads to periodic oscillations of the vapor's bulk quantities. The resulting synchronised phase is robust and stable, and therefore ideally suited for an experimental investigation of this emergent non-equilibrium phase of matter. It provides a simple platform for the study of synchronisation in a nonlinear system with a truly macroscopic number of oscillators.
Author's note: During completion of this work, two other observations of oscillations in a continuously driven hot Rydberg vapor were reported. In [44], the oscillations are of a transient nature and the probe Rabi frequency is significantly lower than in this work. The authors attribute the origin of the limit cycles to spatial inhomogeneities and clustering of Rydberg atoms. In [45], the experimental parameter regime is similar to this work. The limit cycles are there attributed to a competition for Rydberg population between energetically closely spaced Rydberg states.
ACKNOWLEDGMENTS
K.W. acknowledges insightful discussions with Finn Münnich and Matt Jamieson, and thanks Matthias Weidemüller. C.S.A. acknowledges fruitful discussions with Dong-Sheng Ding. The authors furthermore thank Lucy Downes, Max Festenstein, Oliver Hughes, and Kevin Weatherill. Financial support was provided by the UKRI, EPSRC grant reference number EP/V030280/1 ("Quantum optics using Rydberg polaritons").
SUPPLEMENTARY MATERIAL FOR EMERGENCE OF SYNCHRONISATION IN A DRIVEN-DISSIPATIVE HOT RYDBERG VAPOR
In the following supplementary material, we spell out the non-linear equations of motion for the three-level system in A. In B, we show that no Hopf bifurcation can occur in an effective two-level system after elimination of the intermediate state. The thermal vapor integration scheme is described in C, and supplementary experimental results are presented in D.
Appendix B: no Hopf bifurcation in the effective two-level model

The Routh-Hurwitz criterion [46] for a cubic P(x) = x^3 + a_2 x^2 + a_1 x + a_0 states that all roots of P have a negative real part except for one purely imaginary pair iff a_2 > 0, a_0 > 0 and a_2 a_1 − a_0 = 0 hold. If these conditions are satisfied, a Hopf bifurcation occurs.
From B3 one sees immediately that a_2 = 2Γ > 0 is satisfied for any Γ > 0, as we assume in a dissipative system with non-vanishing population loss from state |r⟩. Turning to the last of the three conditions, the equality, one finds that a_2 a_1 − a_0 = a_0 + Γ(Ω^2 + 2Γ^2) = 0 cannot be satisfied if the condition a_0 > 0 holds. Therefore, no complex-conjugate pair of eigenvalues crosses the imaginary axis and no Hopf bifurcation occurs in the effective two-level model.
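The criterion itself is straightforward to test numerically. The helper below is a generic illustration that we add here; the explicit coefficient expressions a_0, a_1 of the two-level cubic are not reproduced in this supplement.

```python
import numpy as np

def hopf_conditions(a2, a1, a0, tol=1e-12):
    """Routh-Hurwitz test for P(x) = x^3 + a2*x^2 + a1*x + a0: one purely
    imaginary root pair (a Hopf point) requires a2 > 0, a0 > 0 and
    a2*a1 - a0 = 0."""
    return a2 > 0 and a0 > 0 and abs(a2 * a1 - a0) < tol

# sanity check with roots {-2, +i, -i}: (x + 2)(x^2 + 1) = x^3 + 2x^2 + x + 2
assert hopf_conditions(2.0, 1.0, 2.0)
print(np.roots([1.0, 2.0, 1.0, 2.0]))  # -> [-2, +1j, -1j] up to rounding
```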
Appendix C: thermal vapor integration scheme
In a thermal vapor model, the different velocity classes v_j do not evolve independently of one another. This is because the Rydberg atom density generates an effective mean field that all velocity classes couple to and are influenced by. Simple integration of the individual velocity classes therefore cannot account for the Rydberg density produced by the other velocity classes, and the integration scheme has to be adapted accordingly. The mean-field Hamiltonian for a given velocity class v_j changes from H_{v_j} ∝ (ρ^{v_j}_rr)^β |r_{v_j}⟩⟨r_{v_j}| to H^{tot}_{v_j} ∝ (ρ^{tot}_rr)^β |r_{v_j}⟩⟨r_{v_j}|. To this end, an adapted RK4 integration scheme [47] has been implemented. After performing a single RK4 time step for all velocity classes, the Rydberg population of the vapor is computed as the weighted sum over the Rydberg populations of all velocity classes. Then, the density-dependent level shift V(ρ^{tot}_rr)^β is adjusted in the EOMs for all velocity classes and the next integration step is performed. A matrix-based implementation of the integration scheme in Python allows for a simple parallel computation of one time step for all velocity classes at once.
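A minimal sketch of such a scheme is given below. It is our reconstruction of the procedure described above; the per-class rhs signature (here taking the global Rydberg density as an argument), the Doppler handling and all names are assumptions rather than the authors' code.

```python
import numpy as np

def rk4_step_vapor(rhos, dets_c, weights, rho_tot, dt, rhs, **params):
    """One RK4 step for all velocity classes at once. rhos has shape
    (Nvel, 3, 3); dets_c are the Doppler-shifted coupling detunings and
    weights the normalised velocity-class populations. The shared Rydberg
    density rho_tot is held fixed during the step and only recomputed
    afterwards, as described in the text."""
    def f(r):
        # every class sees the *global* mean-field shift via rho_tot
        return np.stack([rhs(r[j], dc=dets_c[j], rho_mf=rho_tot, **params)
                         for j in range(len(r))])
    k1 = f(rhos)
    k2 = f(rhos + 0.5 * dt * k1)
    k3 = f(rhos + 0.5 * dt * k2)
    k4 = f(rhos + dt * k3)
    rhos = rhos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    rho_tot = float(np.real(weights @ rhos[:, 2, 2]))  # weighted Rydberg density
    return rhos, rho_tot
```

A fully vectorised version would replace the per-class loop in f by batched matrix products over the leading axis, which is what a matrix-based implementation amounts to.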
Appendix D: supplementary experimental data

Figure 5 shows the observed behavior of the synchronisation window, as well as the change in oscillation frequency, when keeping the coupling (a) or probe (b) Rabi frequency constant and changing the respective other. The observed closing and opening of the synchronisation window, as well as the change in oscillation frequency, are reproduced by the thermal vapor model. The scaling behavior in the thermal vapor model is indirectly inherited from the three-level single-velocity-class model, which initially seeds the oscillations.
An example time trace with all system parameters held constant is shown in (c) and the corresponding spectral density in (d). This Fourier transform analysis was used to determine the oscillation frequency ν_osc.
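A short sketch of that analysis (our illustration, assuming a uniformly sampled photodetector trace):

```python
import numpy as np

def oscillation_frequency(signal, dt):
    """Estimate nu_osc as the peak of the one-sided power spectrum of a
    uniformly sampled transmission trace (DC bin excluded)."""
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[1:][np.argmax(power[1:])]

# example: a 10.4 kHz tone sampled at 1 MHz is recovered
t = np.arange(0, 0.05, 1e-6)
print(oscillation_frequency(np.sin(2 * np.pi * 10.4e3 * t), 1e-6))
```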
The upper row of figure 6 shows the synchronisation window for three different vapor densities but otherwise identical system parameters, with coupling to the |79D⟩ states. The vapor temperatures were varied from T_1 = 40.5 ± 0.5 °C (blue) to T_2 = 51.0 ± 0.5 °C (yellow) and T_3 = 52.0 ± 0.5 °C (red). By varying the vapor temperature, the critical coupling Rabi frequency Ω_c, where synchronisation sets in, is shifted for fixed probe Rabi frequencies and Rydberg state. Therefore, a direct dependence of the coupling strength V on the vapor density can be inferred. A higher vapor density leads to stronger effective Rydberg interactions and ion densities at the same Rabi frequencies due to the reduced interatomic spacing. For higher vapor densities, the onset of synchronisation is therefore shifted to lower Rabi frequencies, where the critical Rydberg atom density is then located.

FIG. 6. Dependence of the synchronisation region on vapor density and Rydberg state. The upper row shows the width of the synchronisation region for the |79D_5/2⟩ state at three different temperatures and therefore vapor densities. The other parameters were held constant across the measurements. The width of the region leading to synchronisation, and the minimum coupling Rabi frequency Ω_c, depend on the vapor density ρ and probe Rabi frequency Ω_p. The lower row shows the oscillation region for three different Rydberg states at similar vapor densities.
In the lower row of figure 6, the synchronisation window is shown for different combinations of Rydberg state but similar vapor densities. At the various probe Rabi frequencies Ω_p, the minimum coupling Rabi frequency required for an onset of synchronisation differs for all three Rydberg states. Higher principal quantum numbers n need lower Rabi frequencies to meet the synchronisation threshold. The states were selected by Rydberg interaction strength: the |43D_5/2⟩ state has a stronger Rydberg interaction than the |50D_5/2⟩ state, since C_6(43D_5/2)/C_6(50D_5/2) = 3.7. The ratio C_6(63D_5/2)/C_6(50D_5/2) = 10.1 is even higher [48].
In the green |63D 5/2 ⟩ data, the occurrence of spectrally separate synchronisation regions is clearly visible.
FIG. 1. Single velocity class model. The basic model with the relevant parameters is shown in (a). An example steady-state solution of the resulting nonlinear OBEs is shown in (b), where the dark-red steady-state branch is repulsive and green indicates the limit cycle region. For a fixed detuning Δ_c/Γ_ge = −1, indicated by the dashed line, the time evolution from an initial state |Ψ⟩_t=0 = (1 − x)|g⟩ + x|r⟩ with x ∈ [0, 1] towards a limit cycle is shown in (c). For the same time traces, a phase-space projection of the limit cycle in the ρ_ge-plane is shown in (d). The other model parameters were set to: Δ_p = 0, Ω_p/Γ_ge = 3.8, Ω_c/Γ_ge = 2, V/Γ_ge = −12, Γ_er/Γ_ge = 10⁻⁵, Γ_gr/Γ_ge = 10⁻² and β = 3.
FIG. 2. Thermal vapor simulation showing the emergence of synchronisation. A thermal vapor simulation for uncoupled (a) and coupled (b) velocity classes shows the emergence of synchronisation via the Rydberg-density-induced mean field. The time evolution and the corresponding steady-state spectrum are shown on the left and right, respectively. Simulation parameters were Ω_p/Γ_ge = 6, Ω_c/Γ_ge = 4, Δ_p = 0, Δ_c/Γ_ge = −11, Γ_er/Γ_ge = 10⁻⁵, Γ_gr/Γ_ge = 10⁻², V/Γ_ge = −800, β = 2 and N_vel = 101 velocity classes with equal populations. The atomic velocity distribution corresponds to that of a rubidium vapor at 48 °C.
FIG. 3. Setup and example onset of oscillations. (a) The counterpropagating probe and coupling lasers are polarisation cleaned with a polarising beamsplitter (PBS) after exiting the fibers. The subsequent acousto-optic modulator (AOM) and aperture are used to remote-control the laser powers incident on the heated, 4 cm long rubidium cell. The probe light is detected by a photodetector (PD). (b) shows the relevant level scheme for two-photon spectroscopy in rubidium. In (c), an example set of traces obtained for fixed probe Rabi frequency Ω_p/2π = 191 MHz and increasing coupling Rabi frequencies is shown. The Rydberg laser is coupled to the |43D_5/2⟩ state, and the number density is ρ_87Rb = (4.7 ± 0.2) · 10¹⁰ cm⁻³. In this example, the oscillatory regime is preceded by an onset of bistability.
FIG. 4. Change in oscillation shape and frequency along the coupling laser scan. (a) shows the oscillation region for a scan across resonance with the |43D_5/2⟩ state at a temperature T = (52.0 ± 0.5) °C with Ω_p/2π = 191 MHz and Ω_c/2π = 37 MHz. The colored insets show a zoom-in of the trace in the color-shaded regions, each of width 2π × 4.8 MHz. Different shapes of the oscillations can be distinguished. (b) Pointwise integrated spectrum, with the error bars denoting the amplitude of the oscillations. The time evolution towards a limit cycle is shown in (c), with the inset showing only the limit cycles approached after an integration time of t = 5000 Γ_ge⁻¹. In (d), the oscillations in the Rydberg population ρ_rr (solid) and ρ^i_ge (dashed) are shown. The case Δ = −3Γ_ge did not approach a limit cycle within the maximum integration time but behaves similarly to a system near a strange attractor. The simulation assumes a thermal vapor with N_vel = 101 velocity classes with equal populations, Ω_p = 1.5, Ω_c = 1, Δ_p = 0, Γ_er = 10⁻⁶, Γ_gr = 10⁻³ and V = −300, in units of Γ_ge, and β = 2.
FIG. 5. Change in width of the synchronisation window and spectral analysis of the oscillations. (a) and (b) show example traces of the synchronisation region with coupling to the |50D_3/2⟩ state at Δ_c/2π ≈ 0 MHz and the |50D_5/2⟩ state at Δ_c/2π ≈ 93 MHz. (a) For the coupling Rabi frequency held constant at Ω_c = 2π × 18 MHz, the width of the synchronisation window decreases with increasing probe Rabi frequency and the oscillation frequency reduces. When instead keeping the probe Rabi frequency Ω_p = 2π × 160 MHz constant (b), the width of the synchronisation region increases and so does the oscillation frequency. In these examples, the synchronisation region is not preceded by or overlaid with an optical bistability. When keeping all system parameters constant, the oscillations persist and maintain their shape, as shown in (c). The resulting spectral intensity (d), plotted in log space, shows a pronounced frequency peak at ν_osc = 10.4 kHz and weaker higher harmonics.
Appendix A: 3-level model

The three-level model with a Rydberg-density-dependent power-law level shift V ρ_rr^β leads to nonlinear equations of motion of the form

$\dot{\rho}_{gg} = -\Omega_p\,\mathrm{Im}(\rho_{ge}) + \Gamma_{ge}\,\rho_{ee} + \Gamma_{gr}\,\rho_{rr}$  (A1a)

… The steady-state solutions are defined via a polynomial of order max(4β + 1, 1) in ρ^i_er. The steady state is stable if Re(λ_j) < 0 holds for all eight non-constant eigenvalues of the Jacobian defined by A1. Attractive limit cycles may form at a Hopf bifurcation, i.e. when a complex-conjugate pair λ_k, λ̄_k crosses the imaginary axis. Such a Hopf bifurcation occurs in the three-level system for certain parameter regimes. The effective two-level system of Appendix B can be solved for the steady state by setting the left-hand side of its equations of motion, B1, to zero; the steady-state solutions are then defined via the roots of a polynomial. The characteristic polynomial of the Jacobian J of the system defined by equations B1 has three constant eigenvalues {0, 0, −Γ} and a cubic term.
[1] E. M. Ozbudak, M. Thattai, H. N. Lim, B. I. Shraiman, and A. van Oudenaarden, Multistability in the lactose utilization network of Escherichia coli, Nature 427, 737 (2004).
[2] D. Angeli, J. E. Ferrell, and E. D. Sontag, Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems, Proceedings of the National Academy of Sciences 101, 1822 (2004).
[3] H. A. Harrington, E. Feliu, C. Wiuf, and M. P. Stumpf, Cellular compartments cause multistability and allow cells to process more information, Biophysical Journal 104, 1824 (2013).
[4] J. P. Newman and R. J. Butera, Mechanism, dynamics, and biological existence of multistability in a large class of bursting neurons, Chaos: An Interdisciplinary Journal of Nonlinear Science 20, 023118 (2010).
[5] H. B. Chan, V. A. Aksyuk, R. N. Kleiman, D. J. Bishop, and F. Capasso, Nonlinear micromechanical Casimir oscillator, Physical Review Letters 87, 211801 (2001).
[6] R. L. Badzey, G. Zolfagharkhani, A. Gaidarzhy, and P. Mohanty, A controllable nanomechanical memory element, Applied Physics Letters 85, 3587 (2004).
[7] D. N. Guerra, A. R. Bulsara, W. L. Ditto, S. Sinha, K. Murali, and P. Mohanty, A noise-assisted reprogrammable nanomechanical logic gate, Nano Letters 10, 1168 (2010).
[8] H. M. Gibbs, S. L. McCall, and T. N. C. Venkatesan, Differential gain and bistability using a sodium-filled Fabry-Perot interferometer, Physical Review Letters 36, 1135 (1976).
[9] M. P. Hehlen, H. U. Güdel, Q. Shu, J. Rai, S. Rai, and S. C. Rand, Cooperative bistability in dense, excited atomic systems, Physical Review Letters 73, 1103 (1994).
[10] C. Carr, R. Ritter, C. G. Wade, C. S. Adams, and K. J. Weatherill, Nonequilibrium phase transition in a dilute Rydberg ensemble, Physical Review Letters 111, 113901 (2013).
[11] C. G. Wade, M. Marcuzzi, E. Levi, J. M. Kondo, I. Lesanovsky, C. S. Adams, and K. J. Weatherill, A terahertz-driven non-equilibrium phase transition in a room temperature atomic vapour, Nature Communications 9 (2018), doi:10.1038/s41467-018-05597-4.
[12] E. Lorenz, Deterministic nonperiodic flow, Journal of the Atmospheric Sciences 20, 130 (1963).
[13] M. Marek and I. Schreiber, Chaotic Behaviour of Deterministic Dissipative Systems (Cambridge University Press, 1991).
[14] D. Ruelle, Elements of Differentiable Dynamics and Bifurcation Theory (Elsevier Science & Technology Books, 2014).
[15] J. E. Marsden and M. McCracken, The Hopf Bifurcation and Its Applications (Springer New York, 1976).
[16] R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophysical Journal 1, 445 (1961).
[17] A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, The Journal of Physiology 117, 500 (1952).
[18] T. Zhang and W. Wang, Hopf bifurcation and bistability of a nutrient-phytoplankton-zooplankton model, Applied Mathematical Modelling 36, 6225 (2012).
[19] C. Colijn and M. C. Mackey, Bifurcation and bistability in a model of hematopoietic regulation, SIAM Journal on Applied Dynamical Systems 6, 378 (2007).
[20] S. Pankavich, N. Neri, and D. S., Bistable dynamics and Hopf bifurcation in a refined model of early stage HIV infection, Discrete & Continuous Dynamical Systems - B 25, 2867 (2020).
[21] T. E. Lee, H. Häffner, and M. C. Cross, Antiferromagnetic phase transition in a nonequilibrium lattice of Rydberg atoms, Physical Review A 84, 031402 (2011).
[22] K. Wiesenfeld, P. Colet, and S. H. Strogatz, Synchronization transitions in a disordered Josephson series array, Physical Review Letters 76, 404 (1996).
[23] D. Dreon, A. Baumgärtner, X. Li, S. Hertlein, T. Esslinger, and T. Donner, Self-oscillating pump in a topological dissipative atom-cavity system, Nature 608, 494 (2022).
[24] B. van der Pol, On relaxation-oscillations, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, 978 (1926).
[25] Y. Kuramoto, Self-entrainment of a population of coupled non-linear oscillators, in International Symposium on Mathematical Problems in Theoretical Physics (Springer-Verlag), pp. 420-422.
[26] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, 2001).
[27] S. H. Strogatz, From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators, Physica D 143, 1 (2000).
[28] J. A. Acebrón, L. L. Bonilla, C. J. P. Vicente, F. Ritort, and R. Spigler, The Kuramoto model: A simple paradigm for synchronization phenomena, Reviews of Modern Physics 77, 137 (2005).
[29] P. Dallard, T. Fitzpatrick, A. Flint, A. Low, R. R. Smith, M. Willford, and M. Roche, London Millennium Bridge: Pedestrian-induced lateral vibration, Journal of Bridge Engineering 6, 412 (2001).
[30] I. Belykh, M. Bocian, A. R. Champneys, K. Daley, R. Jeter, J. H. G. Macdonald, and A. McRobie, Emergence of the London Millennium Bridge instability without synchronisation, Nature Communications 12 (2021), doi:10.1038/s41467-021-27568-y.
[31] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Springer Berlin Heidelberg, 1984), pp. 111-140.
[32] G. Ertl, Oscillatory kinetics and spatio-temporal self-organization in reactions at solid surfaces, Science 254, 1750 (1991).
[33] J. Buck, Synchronous rhythmic flashing of fireflies. II., The Quarterly Review of Biology 63, 265 (1988).
[34] T. J. Walker, Acoustic synchrony: Two mechanisms in the snowy tree cricket, Science 166, 891 (1969).
The sound of many hands clapping. Z Néda, E Ravasz, Y Brechet, T Vicsek, A.-L Barabási, 10.1038/35002660Nature. 403849Z. Néda, E. Ravasz, Y. Brechet, T. Vicsek, and A.-L. Barabási, The sound of many hands clapping, Nature 403, 849 (2000).
. N R De Melo, C G Wade, N Šibalić, J M Kondo, C S , N. R. de Melo, C. G. Wade, N.Šibalić, J. M. Kondo, C. S.
Intrinsic optical bistability in a strongly driven rydberg ensemble. K J Adams, Weatherill, 10.1103/PhysRevA.93.063863Physical Review A. 9363863Adams, and K. J. Weatherill, Intrinsic optical bistability in a strongly driven rydberg ensemble, Physical Review A 93, 063863 (2016).
D Weller, J P Shaffer, T Pfau, R Löw, H Kübler, 10.1103/PhysRevA.99.043418Interplay between thermal rydberg gases and plasmas. 9943418D. Weller, J. P. Shaffer, T. Pfau, R. Löw, and H. Kübler, Interplay between thermal rydberg gases and plasmas, Physical Review A 99, 043418 (2019).
P Hartman, 10.1137/1.9780898719222Ordinary Differential Equations. Society for Industrial and Applied MathematicsP. Hartman, Ordinary Differential Equations (Society for Industrial and Applied Mathematics, 2002).
Observation of a continuous time crystal. P Kongkhambut, J Skulte, L Mathey, J G Cosme, A Hemmerich, H Keßler, 10.1126/science.abo3382Science. 377670P. Kongkhambut, J. Skulte, L. Mathey, J. G. Cosme, A. Hemmerich, and H. Keßler, Observation of a continu- ous time crystal, Science 377, 670 (2022).
Zheludev, Photonic metamaterial analogue of a continuous time crystal. T Liu, J.-Y Ou, K F Macdonald, N I , 10.1038/s41567-023-02023-5Nature Physics. T. Liu, J.-Y. Ou, K. F. MacDonald, and N. I. Zhe- ludev, Photonic metamaterial analogue of a continuous time crystal, Nature Physics 10.1038/s41567-023-02023-5 (2023).
Huygens' clocks revisited. A R Willms, P M Kitanov, W F Langford, 10.1098/rsos.170777Royal Society Open Science. 4170777A. R. Willms, P. M. Kitanov, and W. F. Langford, Huy- gens' clocks revisited, Royal Society Open Science 4, 170777 (2017).
Biological rhythms and the behavior of populations of coupled oscillators. A T Winfree, 10.1016/0022-5193(67)90051-3Journal of Theoretical Biology. 1615A. T. Winfree, Biological rhythms and the behavior of populations of coupled oscillators, Journal of Theoretical Biology 16, 15 (1967).
Synchronization of metronomes. J Pantaleone, 10.1119/1.1501118American Journal of Physics. 70992J. Pantaleone, Synchronization of metronomes, American Journal of Physics 70, 992 (2002).
Ergodicity breaking from rydberg clusters in a driven-dissipative many-body system. D.-S Ding, Z Bai, Z.-K Liu, B.-S Shi, G.-C Guo, W Li, C S Adams, 10.48550/ARXIV.2305.07032D.-S. Ding, Z. Bai, Z.-K. Liu, B.-S. Shi, G.-C. Guo, W. Li, and C. S. Adams, Ergodicity breaking from ry- dberg clusters in a driven-dissipative many-body system 10.48550/ARXIV.2305.07032 (2023).
X Wu, Z Wang, F Yang, R Gao, C Liang, M K Tey, X Li, T Pohl, L You, 10.48550/ARXIV.2305.20070Observation of a dissipative time crystal in a strongly interacting rydberg gas. X. Wu, Z. Wang, F. Yang, R. Gao, C. Liang, M. K. Tey, X. Li, T. Pohl, and L. You, Observation of a dissipa- tive time crystal in a strongly interacting rydberg gas 10.48550/ARXIV.2305.20070 (2023).
A new proof of the routh-hurwitz stability criterion using the second method of liapunov. P C Parks, 10.1017/S030500410004072XMathematical Proceedings of the Cambridge Philosophical Society. 58694P. C. Parks, A new proof of the routh-hurwitz stability criterion using the second method of liapunov, Mathe- matical Proceedings of the Cambridge Philosophical So- ciety 58, 694 (1962).
W H Press, S A Teukolsky, W T Vetterling, B P Flannery, Numerical Recipes. Cambridge University Press12563rd EditionW. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes 3rd Edition (Cam- bridge University Press, 2007) p. 1256.
ARC: An open-source library for calculating properties of alkali rydberg atoms. N Šibalić, J Pritchard, C Adams, K Weatherill, 10.1016/j.cpc.2017.06.015Computer Physics Communications. 220319N.Šibalić, J. Pritchard, C. Adams, and K. Weatherill, ARC: An open-source library for calculating properties of alkali rydberg atoms, Computer Physics Communica- tions 220, 319 (2017).
| [] |
[
"R-MAE: Regions Meet Masked Autoencoders",
"R-MAE: Regions Meet Masked Autoencoders"
] | [
"Duy-Kien Nguyen \nUniversity of Amsterdam\n\n",
"Vaibhav Aggarwal \nUniversity of Amsterdam\n\n",
"Yanghao Li \nUniversity of Amsterdam\n\n",
"Martin R Oswald \nUniversity of Amsterdam\n\n",
"Alexander Kirillov \nUniversity of Amsterdam\n\n",
"Cees G M Snoek \nUniversity of Amsterdam\n\n",
"Xinlei Chen \nUniversity of Amsterdam\n\n",
"Meta Fair \nUniversity of Amsterdam\n\n",
"Ai \nUniversity of Amsterdam\n\n"
] | [
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n"
] | [] | Vision-specific concepts such as 'region' have played a key role in extending general machine learning frameworks to tasks like object detection. Given the success of region-based detectors for supervised learning and the progress of intra-image methods for contrastive learning, we explore the use of regions for reconstructive pre-training. Starting from Masked Autoencoding (MAE) both as a baseline and an inspiration, we propose a parallel pre-text task tailored to address the one-to-many mapping between images and regions. Since such regions can be generated in an unsupervised way, our approach (R-MAE) inherits the wide applicability from MAE, while being more 'region-aware'. We conduct thorough analyses during the development of R-MAE, and converge on a variant that is both effective and efficient (1.3% overhead over MAE). Moreover, it shows consistent quantitative improvements when generalized to various pre-training data and downstream detection and segmentation benchmarks. Finally, we provide extensive qualitative visualizations to enhance the understanding of R-MAE's behavior and potential. Code will be made available. 1 | null | [
"https://export.arxiv.org/pdf/2306.05411v1.pdf"
] | 259,108,250 | 2306.05411 | 77be9d602da311d8ab4be9a52bd92a9f44d6e7a5 |
R-MAE: Regions Meet Masked Autoencoders
Duy-Kien Nguyen
University of Amsterdam
Vaibhav Aggarwal
University of Amsterdam
Yanghao Li
University of Amsterdam
Martin R Oswald
University of Amsterdam
Alexander Kirillov
University of Amsterdam
Cees G M Snoek
University of Amsterdam
Xinlei Chen
University of Amsterdam
R-MAE: Regions Meet Masked Autoencoders
Vision-specific concepts such as 'region' have played a key role in extending general machine learning frameworks to tasks like object detection. Given the success of region-based detectors for supervised learning and the progress of intra-image methods for contrastive learning, we explore the use of regions for reconstructive pre-training. Starting from Masked Autoencoding (MAE) both as a baseline and an inspiration, we propose a parallel pre-text task tailored to address the one-to-many mapping between images and regions. Since such regions can be generated in an unsupervised way, our approach (R-MAE) inherits the wide applicability from MAE, while being more 'region-aware'. We conduct thorough analyses during the development of R-MAE, and converge on a variant that is both effective and efficient (1.3% overhead over MAE). Moreover, it shows consistent quantitative improvements when generalized to various pre-training data and downstream detection and segmentation benchmarks. Finally, we provide extensive qualitative visualizations to enhance the understanding of R-MAE's behavior and potential. Code will be made available. 1
Introduction
General machine learning paradigms can often benefit from key concepts when applied to specific domains. For computer vision, and especially for localization-geared tasks like object detection, one of these concepts is 'region': widely-accepted physiological theories [40] suggest that human perception groups similar elements and parts together to parse complex scenes and objects. This hypothesis is empirically validated by the R-CNN series [29] (note that the 'R' stands for 'region', which can be pre-computed [54] or jointly learned [50]). R-CNN successfully bridged the gap between the general supervised learning framework [41] that pre-trains the backbone, and the specific downstream task of finding objects (Fig. 1, left). Even today, region refinement still remains an essential component for top-performing detectors [12,16,46,43] trained on human annotations.
* Work done during an internship at FAIR.
1 https://github.com/facebookresearch/r-mae.
Besides supervised classification, un- or self-supervised learning methods [23,10,18,32] have recently emerged as powerful alternatives for pre-training representations. For computer vision, contrastive learning [18] shows solid gains in training efficiency against supervised baselines for object detection [33]. Meanwhile, reconstructive pre-training such as Masked Autoencoding (MAE) [32] has proven even more effective, improving the upper bound of detection accuracy beyond faster convergence [44,64].
Although both paradigms are general, more efforts have been directed towards adapting contrastive methods to vision. In particular, since the standard formulation [18] represents each image with a single vector, it neglects the rich spatial structure of images and may not transfer as well to tasks that require accurate localization. Again, region as a key concept that allows for 'intra-image contrast' has been extensively researched to close this gap [51,36,61,60,26,62,65,59,6,37] (Fig. 1, middle). Nevertheless, while reconstructive methods are more powerful [44] and underlie many state-of-the-art detectors [43,64], it is unclear how regions can be introduced to such frameworks, and whether they can further help downstream performance (Fig. 1, right).
We aim to fill in this blank. We begin with MAE [32] as a representative baseline, and explore the use of pre-computed regions [25] in an MAE-style. Specifically, we propose a pre-text task called 'masked Region Autoencoding' (RAE). Similar to MAE, RAE is also reconstructive. But different from MAE, RAE focuses on regions, or 'region maps' that represent regions as binary-valued maps indicating if a pixel belongs to a region. Moreover, as our goal is to pre-train image backbones like Vision Transformers (ViTs) [24], the corresponding masked image is also fed through these as 'pixel encoders' to compute additional inputs for RAE.
One distinctive challenge we face with RAE is a potential one-to-many mapping, since each image may contain an unknown number of regions. This makes RAE akin to object detection [45], where multiple instances can appear in one scene. The solution in R-CNN [29] essentially stacks regions in the batch axis and processes each of them separately. This ensures permutation equivariance among regions, but can be less efficient. Therefore, we extend our investigation to the remaining two axes within ViT, channel and length [24], and show that by treating pooled region embeddings as queries, the length-based variant offers the best trade-off between speed and accuracy for RAE.
RAE as a task is fully compatible with MAE, which can be optimized in parallel by simply restoring the pixel decoder [32]. From the standpoint of MAE, the addition of RAE makes the pre-trained pixel encoder more region-aware. Therefore, we name our joint approach R-MAE, short for 'Region-aware Masked Autoencoding'. By default, R-MAE uses unsupervised, image-computable regions [25], giving it the same range of applicability as MAE.
Different from prior practices [7,32], we develop R-MAE by pre-training on COCO train2017 [45], for its scene-centric images and ground-truth regions as potential oracles. Evaluation is again focused on localization tasks, transferring to COCO object detection and ADE20K semantic segmentation [67]. The development is carefully devised in two stages with extensive analyses: we first show RAE alone works well; then we show RAE fares well with MAE. Our default setup merely adds 1.3% FLOPs on top of MAE.
Further, we generalize by pre-training with more COCO data and on ImageNet [22], and by evaluating on long-tail object detection (LVIS [31]). Consistent gains are observed. 2 To highlight what is learned in R-MAE, we visualize both the output and the attention map of the pre-trained models, and find R-MAE is indeed more region-, or instance-aware. Finally, as a side application, we show RAE itself has the potential for interactive segmentation [53], thanks to its ability to generate high-quality region maps from just a few visible patches. All this evidence suggests R-MAE/RAE learns useful and meaningful representations for downstream tasks, especially ones like detection and segmentation.
2 We also examined larger backbones and better regions in Appendix B.
Related Work
We first review two intrinsic properties of regions, which have driven their popularity in computer vision:
Local. Images are typically treated as holistic entities in machine learning algorithms [41,18], but real-world photos have rich spatial structures and local contents can vary across the same scene [3]. This becomes a strong motivation for the well-known R-CNN series [29,28,50,34], especially with Region-of-Interest (RoI) operations on local feature maps [28]. The same goes for contrastive or Siamese learning [18,33,49,30,20,14], where 2D signals are generally suppressed into global vectors for inter-image contrast. Realizing its potential downside on localization, many follow-up works [63,48,51,60,61,62,65,26,59,37] have shifted focus on intra-image contrast, which use features from local geometric entities (e.g. points [57], regions [36] or both [6]). On the other hand, reconstructive methods [32,7,58,19] as denoising autoencoders [56] preserve the 2D structure. It is therefore unclear how regions can further help in this regard.
Object-centric. Perhaps this is a more motivating reason for regions to meet MAE. Reconstructive learning is the dominating paradigm in pre-training natural language representations [23,10], and while steady progress is made [17,32], computer vision models are still lagging behind. One crucial difference between the two is that language consists of semantically meaningful words, while images are raw signals recorded in pixels. Meanwhile, in vision, objects can serve as a natural counterpart to words -they are constantly referred and manipulated as we interact with the visual world [40,66], and they can often be captured by regions [54,2]. By enhancing MAE's region awareness, we hope to uncover novel ways to bridge the gap between the two fields.
Then we discuss how regions are generated and utilized:
Source of regions. Regions can come from various sources (e.g. human annotations [45], spatial heuristics [36], clustering/segmentation [9,25,1], object proposals [54,2], motion segmentation [47]). As an initial exploration, we use pre-computed, clustering-based regions [25]. However, regions can also be jointly discovered [37] or updated [6] with representation learning, which is left for future work.
Use of regions. There are at least three other ways to leverage regions in MAE. One is to bias the random masking strategy [42], which is less general and can be sensitive to region qualities [42]. Second is to revisit the RoI operation [50] and contrastive learning, which is costly with Siamese encoders [33,20], and has been extensively studied [36,60,62,59] even with MAE [4]. Third is to view regions as an extra modality, and treat the task as a multimodal learning one (e.g. with text [27,52], depth map [5]). This is closest to our work, yet the lightweight design of R-MAE makes it especially well-suited to handle regions.
Figure 2: The pre-training pipeline of R-MAE. RAE as a standalone task takes a region encoder-decoder pair and the pixel encoder to reconstruct masked region maps. The MAE pixel decoder is optional (de-highlighted) but fully compatible with RAE, and we call our joint pipeline R-MAE. We default RAE to the variant that concatenates multiple pooled regions in the length axis, as it effectively balances speed and accuracy. But other variants also offer similar region-awareness to MAE.
Approach
MAE [32] is the foundation and baseline of our RAE and R-MAE. So we summarize it first as background knowledge.
Background: MAE
Task. As the name suggests, MAE uniformly masks out a portion of the image and learns to reconstruct by directly predicting raw pixel values. To provide a meaningful and challenging task for images, a high mask ratio β_I (e.g. 75%) is used by default. The reconstruction is compared against the ground truth with a simple ℓ_2 loss, L_I.
Architecture. As an autoencoder [56], MAE instantiates its encoder and decoder with ViTs [24]. ViTs directly 'tokenize' images as sequences of patches, which paves the way for MAE's efficient encoder pre-training that removes (rather than replaces) masked tokens. Only the fixed-size (8-block, 512-dimensional) pixel decoder processes the full sequence length. After pre-training, the pixel encoder is transferred as a visual backbone for downstream tasks [43].
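For concreteness, the sketch below shows the uniform random masking that underlies this design; the function name and tensor layout are ours, not from the released code.

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens, MAE-style.

    tokens: (B, N, C) patch embeddings. Returns the visible tokens, a
    binary mask (1 = masked) in the original order, and the indices
    needed to restore that order after decoding.
    """
    B, N, C = tokens.shape
    n_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)                    # one uniform score per token
    ids_shuffle = noise.argsort(dim=1)          # random permutation
    ids_restore = ids_shuffle.argsort(dim=1)    # its inverse
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, C))
    mask = torch.ones(B, N, device=tokens.device)
    mask[:, :n_keep] = 0                        # 0 = kept, in shuffled order
    mask = torch.gather(mask, 1, ids_restore)   # back to the original order
    return visible, mask, ids_restore
```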
RAE: Masked Region Autoencoding
Motivation. Before introducing RAE, let us first provide our high-level thoughts on using regions, or any extra information x, to pre-train representations. There are three ways:
(i) Feeding x as an input. Because of the additional signal, this can make existing tasks too easy to learn meaningful representations;
(ii) Predicting x as a target. This way the model can learn from x as a supervisory signal, but the task can be too challenging to accomplish and lead to overfitting [8];
(iii) And lastly, MAE-style usage of x. It stands between the two above extremes (100% input or 100% output): with (1 − β)×x as the input and β×x as the essential target, the mask ratio β serves as a flexible control of the difficulty level for the pre-text task. Thus, an MAE-style approach is more flexible and powerful here.
Region maps. To adapt MAE to regions, i.e., sets of location points, we first prepare them to be 'image-like'. Specifically, each region can be represented by a binary-valued region map similar in size to the image. Each element on the map, with a value of either 0 or 1, indicates whether the corresponding location belongs to the region or not. Now, given any partially visible region map (mask ratio β_R), we can ask the model to complete it, the same as MAE does for pixels.
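As a minimal sketch of this construction (the helper and its signature are our own), a segmentation label map can be turned into k binary region maps as follows:

```python
import numpy as np

def regions_to_maps(label_map: np.ndarray, k: int, rng: np.random.Generator):
    """Sample k regions (with repetition) from a (H, W) label map and
    return their binary region maps, stacked as (k, H, W)."""
    ids = rng.choice(np.unique(label_map), size=k, replace=True)
    return np.stack([(label_map == i).astype(np.float32) for i in ids])

maps = regions_to_maps(np.random.randint(0, 5, (224, 224)), k=8,
                       rng=np.random.default_rng(0))
```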
Architecture. Similar to MAE, RAE has an encoder and decoder for region autoencoding. We follow MAE and simply use ViT [24] blocks for both: an m_E-block, p_E-dimensional encoder and an m_D-block, p_D-dimensional decoder. However, just a region encoder-decoder pair is insufficient, as our ultimate goal is to obtain a pre-trained pixel encoder. Therefore, we maintain the encoder from MAE in RAE, and use a neck of m_N ViT blocks to match dimensions and (optionally) propagate information before feeding into the region decoder. Such a configuration also makes effective use of the abundant contextual information available in the pixels to pre-train the encoder. Please see Fig. 2 for the overview.
One-to-many mapping. While regions can be considered as an additional modality to pixel-based MAE, the problem addressed here presents a distinctive challenge that cannot be fully captured by this view alone. Compared to other modalities (e.g. depth or semantic maps [5]) for which there is a one-to-one correspondence to pixels, the mapping between images and regions is one-to-many: One pixel can belong to an unknown number of regions.
Fortunately, this happens to be the very problem encountered in object detection. The mainstream solution, as promoted by R-CNN [29], is to sample and stack regions in the batch axis, and process each of them separately. In RAE, this means each region map will go through the encoder-decoder in isolation: if there are b images and k regions per image, the network must be applied b×k times. This is expensive, so how can we reduce the cost?
One naïve alternative is to merge the k regions in the channel axis. In this way, they can be viewed as a single image for encoding and decoding, and the computations are shared in the intermediate blocks. But unlike natural images which have fixed channel orders (e.g., RGB), randomly sampled regions can appear in any order. It would be ideal if the solution still preserves permutation equivariance.
Regions as queries: the length variant. Our final idea is inspired by DETR [13], which uses 'object queries' as substrates to decode objects. In a nutshell, each region is first encoded and pooled into a 1D embedding; then multiple region embeddings are concatenated along the sequence-length [24] axis to form 'region queries'; and finally, these region queries will decode region maps from the output of the pixel encoder (through the neck, see Fig. 2 for details). Since ViT blocks are set operations w.r.t. the input [55], this solution is permutation equivariant by design.
The last decoder block is responsible for expanding the region queries spatially. Note that because the decoder has two sets of inputs, its blocks follow the three-layer design [13], with an extra cross-attention layer that uses outputs from the neck to generate keys and values. Different from standard attention layers that compute a weighted sum (with keys) over values to produce the output (Fig. 4, left), we expand the query by directly adding it to all the values (Fig. 4, right). A small MLP head is attached afterwards to predict region maps on these spatially expanded features.
Since this variant alleviates the linear complexity w.r.t. number of regions k, and still maintains the desired property w.r.t. permutation, we choose it as the default for RAE.
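The sketch below captures the core of the length variant under our reading of the paper: pooled region queries attend to the pixel/neck features, and the last cross-attention is replaced by the additive spatial expansion. All names are ours, and the block is deliberately simplified (no feed-forward sub-layer or normalization).

```python
import torch
import torch.nn as nn

class RegionQueryDecoder(nn.Module):
    """Simplified region decoder for the length variant (a sketch)."""

    def __init__(self, dim: int, patch_area: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.expand = nn.Linear(dim, dim, bias=False)   # plays the role of W
        self.head = nn.Sequential(                      # 3-layer MLP predictor
            nn.Linear(dim, dim), nn.GELU(),
            nn.Linear(dim, dim), nn.GELU(),
            nn.Linear(dim, patch_area))

    def forward(self, queries: torch.Tensor, context: torch.Tensor):
        # queries: (B, k, C) pooled region embeddings; context: (B, N, C)
        q, _ = self.self_attn(queries, queries, queries)
        q, _ = self.cross_attn(q, context, context)
        # spatial expansion: add each query to every projected context token
        grid = self.expand(context).unsqueeze(1) + q.unsqueeze(2)  # (B, k, N, C)
        return self.head(grid)                          # (B, k, N, p*p) logits

dec = RegionQueryDecoder(dim=128, patch_area=256)
out = dec(torch.randn(2, 8, 128), torch.randn(2, 196, 128))  # (2, 8, 196, 256)
```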
Loss. While an ℓ_2 loss fits real-valued pixel predictions, by default we use a cross-entropy loss for binary-valued regions (L_R). Modeling it as a classification task allows easy balancing of the weights between foreground and background (w_b).
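A sketch of such a background-weighted cross-entropy, computed only on masked patches; the plumbing around w_b is our assumption:

```python
import torch
import torch.nn.functional as F

def region_loss(logits, target, mask, w_b: float = 1.0):
    """logits/target: (B, k, N, P) patch-wise region predictions/labels;
    mask: (B, N) with 1 = masked patch. Background pixels (target == 0)
    are weighted by w_b."""
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    loss = torch.where(target > 0.5, loss, w_b * loss).mean(dim=-1)  # (B, k, N)
    m = mask[:, None, :].expand_as(loss)
    return (loss * m).sum() / m.sum().clamp(min=1)

l = region_loss(torch.randn(2, 8, 196, 256),
                torch.randint(0, 2, (2, 8, 196, 256)).float(),
                torch.ones(2, 196))
```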
Figure 4: How the region query is spatially expanded. Left: standard cross-attention. Right: modified for spatial expansion. We modify the standard cross-attention layer [13] and, given a region query, it is summed with all the value vectors to expand its spatial axes. A small MLP head is attached afterwards.
R-MAE: Regions Meet MAE
Finally, as RAE is fully compatible with MAE, they can be trained in conjunction by simply restoring the pixel decoder and applying a joint loss: L_I + λL_R (λ defaults to 1).
In Fig. 2 we illustrate the default pre-training pipeline (including the de-highlighted parts). Note that: (i) the pixel branch feeds the region branch, but not vice versa; (ii) the mask is shared between the two branches. We name this pipeline R-MAE, short for Region-aware Masked Autoencoding.
Experiments
In this section, we first develop RAE and R-MAE in two stages: (i) We verify RAE works well as a standalone task; (ii) We bring back MAE and show RAE also fares well with it. Extensive analyses are provided for both stages. Then, we extend our experiments to more data, more tasks, and compare against state-of-the-art. More results are found in Appendix B. Finally, we provide visualizations to better understand R-MAE's behavior and potential.
Default Setup
Source of regions. We use regions generated from the unsupervised Felzenszwalb-Huttenlocher (FH) algorithm [25]. It is efficient and covers the whole image, and underlies classic object proposal methods (e.g. selective search [54]).
Pre-training data. Deviating from prior practices [7,32], we develop RAE and R-MAE by pre-training on COCO train2017 [45]. This default choice is due to the scene-centric nature of the images in COCO and the presence of ground-truth regions, which can serve as useful oracles. Following [36], FH is run at three scales: {500, 1000, 1500}, which also set the minimum cluster sizes. Since this dataset (118k images) is significantly smaller than ImageNet (1.4m), we pre-train for 4k epochs instead of 800 [32]; this is about half the number of iterations of the MAE default.
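The regions themselves can be reproduced with the scikit-image implementation of the FH algorithm; following the text, we pass each scale as both the scale and the minimum cluster size (other parameters left at their defaults, which is our assumption):

```python
import numpy as np
from skimage.segmentation import felzenszwalb

image = np.random.rand(224, 224, 3)   # stand-in for a COCO image
segments = [felzenszwalb(image, scale=s, min_size=s) for s in (500, 1000, 1500)]
```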
Other pre-training details. Unless otherwise specified, we follow MAE [32] for hyper-parameters. Our base learning rate is set to 1e-4, which offers better stability during training and maintains the baseline performance (see Appendix B). The length variant is used. ViT-B [24] is set as the pixel backbone, and a 1-block, 128-dimensional ViT is used for the neck, the region encoder and the region decoder. A 3-layer MLP acts as the region predictor after the decoder block. k=8 regions are sampled (with repetition) per image, with a mask ratio of β_R=0.75. Both λ and the background loss weight w_b are set to 1. When MAE is enabled, the pixel branch feeds the region branch, and the random masks are shared.
Downstream transfer. We use the recipe from ViTDet [43] for object evaluation on COCO, and report mean Average Precision (AP) for both box detection (AP_b) and instance segmentation (AP_m). For semantic segmentation, we evaluate on ADE20K and report mean Intersection-over-Union (mIoU) as the main metric. All details follow MAE [32] (e.g., run each setting 3 times and take the mean).
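Collecting the defaults from this section in one place (a convenience of ours, not an official config file):

```python
RMAE_DEFAULTS = dict(
    backbone="ViT-B",                 # pixel encoder
    base_lr=1e-4,
    rae_variant="length",
    neck=dict(blocks=1, dim=128),
    region_encoder=dict(blocks=1, dim=128),
    region_decoder=dict(blocks=1, dim=128),
    region_predictor="3-layer MLP",
    num_regions_k=8,                  # sampled with repetition
    region_mask_ratio=0.75,           # beta_R, shared with the image mask
    loss_lambda=1.0,
    background_weight=1.0,            # w_b
)
```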
Main Comparisons
We develop R-MAE in two stages: first we show RAE itself is effective; then we show it fares well with MAE. This leads to two main comparisons:
RAE vs. scratch. Tab. 1 mainly compares our RAE with no pre-training (i.e., from scratch). The improvement is significant: 47.2 vs. 41.2 in AP_b, and 42.1 vs. 24.4 in mIoU with the MAE recipe. While MAE is still more effective, we show RAE is lightweight and compatible (next).
R-MAE vs. MAE. In Tab. 2, we jointly optimize the RAE and MAE objectives in R-MAE. While the improvement at the outset (2k COCO epochs) is less evident, it becomes more salient when the algorithm converges (4k and 8k epochs). On the other hand, MAE saturates around 4k epochs. And thanks to the lightweight design of our RAE, the improvement comes with a minimal computation cost: the region branch only adds 1.3% FLOPs to the MAE baseline (9.8b vs. 9.7b).
Analyses of RAE and R-MAE
We present a full-page analysis for RAE and R-MAE, as shown in Tab. 3 and Fig. 5 (RAE); and Tab. 4 (R-MAE). Due to the space limit, we put our observations in the respective captions, and summarize our main findings below:
• RAE variants matter. The batch variant offers the best accuracy (Tab. 3a, Tab. 4a) but can be expensive in FLOPs (Fig. 5a); the channel variant is efficient, but lags behind especially when operating alone (Tab. 3a); the length variant strikes a trade-off between the two.
• Number of regions matters. As shown in Fig. 5b, more regions help for both tasks.
• Mask ratio matters. 75% with a shared mask works best.
• Cross-feeding patterns: the asymmetric design from pixels to regions achieves the best results (Tab. 4c).
• Surprisingly, object instance annotations on COCO do not help, which we suspect is due to the sparse coverage of regions as opposed to FH [25]. As a partial validation, we switch to panoptic segments [38], which also include 'stuff' categories [11], and indeed see an improvement, achieving the best mIoU and helping the detection scores (Tab. 3b).
Next, we generalize to more data and more tasks; extensions to larger backbones and better regions are found in Appendix B.
Table 3: Analysis of RAE on detection and segmentation. We cover: a) variants of RAE, including all three axes of ViT activation maps; b) ground-truth instance and panoptic segmentation on COCO as oracles; c,d) loss type, where ℓ_2 also works, and the background weight in cross-entropy; e-j) architecture changes (e.g., (j) region decoder width), where we find a larger encoder/decoder/neck generally helps accuracy but can hurt speed, and a 3-layer plain MLP works best as a predictor compared to the inverted MLP layer (in ViT) and none. Default settings are shaded in gray.
Figure 6: Attention map visualization on COCO val2017. In each group from left to right we show the original image with the selected query (denoted by a red square) and three attention maps corresponding to the query, generated from i) MoCo v3 [21]; ii) MAE [32]; and iii) R-MAE. All of these methods are pre-trained on the COCO train2017 split. In every row from top to bottom, we show 3 types of query: i) rigid objects, ii) non-rigid objects, iii) multiple objects. Regions with darker red colors in the attention map denote larger attention weights. Compared to the baselines, the attention map from R-MAE is more local and focused.
More Pre-Training Data on COCO
Next we generalize our findings. The first axis is pre-training data scale: does adding more data change our observations? To this end, we add COCO unlabeled2017 to our pre-training set, and again train for 4k epochs following [35].
Results are summarized in Tab. 5. With no change of hyper-parameters, R-MAE continues to outperform MAE.
Table 6: Comparison on LVIS between MAE and R-MAE. We also include LVIS-specific metrics for long-tail recognition.
Evaluation on LVIS Detection
The second generalization is on the downstream task. We directly evaluate the COCO pre-trained MAE baseline and R-MAE on LVIS object detection [31] as another benchmark. LVIS builds on COCO images, but its key focus is on long-tail recognition. The results are presented in Tab. 6, where we observe a similar gain to COCO detection.
ImageNet Pre-Training
The third generalization is to further pre-train on ImageNet [22]. We make the following adjustments from our default setting: (i) setting the epoch number to 800/1600, following MAE [32]; (ii) extracting FH regions with a single scale of 1000, following [36]. As ImageNet is a standard pre-training benchmark [18,32], this allows a fair comparison to state-of-the-art methods.
Table 7: State-of-the-art comparison among MAE variants pre-trained on ImageNet [22]. R-MAE, with a negligible 1.3% overhead, outperforms all the methods compared († : 8×8 patch size; ‡: 448×448 input; both will increase pre-training cost by ∼4×).
Tab. 7 summarizes our comparison among the latest MAE variants on detection and segmentation using the same transfer recipe [43,32]. For MultiMAE [5], we also implemented our own version by treating k region maps as a semantic map of k channels. Across all the methods compared, R-MAE achieves the best results on all three metrics. Note that R-MAE even shows an advantage over Long-Seq MAE [35] and SemMAE [42], both of which increase pre-training cost by ∼4× with a longer sequence length.
Qualitative Results
R-MAE generates reasonable predictions of regions, as shown in Fig. 3. In order to gain a better understanding of the behaviour of our pre-trained pixel encoders, we visualize the attention maps of the ViT after R-MAE in Fig. 6.
We randomly pick images from the COCO val2017 split for the qualitative examination. Given an input image, we first pick a patch as our query (denoted by a red square), and visualize its averaged attention map from the last ViT block. Regions with larger attention weights are shown in a darker red color. For better comparison, we also include the visualizations of MoCo v3 [21] and MAE [32] on the same image. To be fair, all three models are pre-trained on the COCO train2017 split until full convergence.
Observations of attention maps. Given the query patch, MoCo v3 attends to a very large area. In particular, its attention map is hard to interpret when the scene becomes more complicated (shown in the last row). This may be because MoCo v3 is pre-trained with contrastive learning of holistic image representations, so it largely fails to capture local information. On the other hand, MAE gives a more reasonable attention map per query, which largely covers the object of interest. However, for objects of similar colors or shapes, MAE struggles to differentiate them. Finally, R-MAE demonstrates its localization capabilities through the attention map, with a strong focus on objects across different locations. The last row of Fig. 6 shows the extreme case of a crowded scene with similar objects. It is impressive to see that our method can distinguish the corresponding object instance from others.
Figure 8: RAE for interactive segmentation (columns: image, GT mask, 90% mask, 85% mask, 80% mask). Here we show RAE's region predictions on the COCO val2017 set, given images and only masked region maps serving as a proxy to a potential user's input. Going from left to right, the user is supplying more 'annotations'. The model is pre-trained with a fixed region masking ratio (75%) but generates high-quality masks even when the inference ratio is significantly higher (90%).
Failure cases. We also show some failure cases from our visualization in Fig. 7. A query pointing to the background is shown in the left group. While MoCo v3 and MAE produce a wide attention map that covers the whole background, the attention map from our R-MAE tends to focus on very local regions around the query. The same is observed for an image of a very large object (e.g., the bear on the right). The R-MAE attention map only covers parts of the object.
RAE as an interactive segmentor. Finally, we find a pre-trained RAE can act as an 'interactive segmentor' [53]. Specifically, it takes the image along with some patches-of-interest as its inputs at inference time. In an interactive segmentation setting, these patches can be provided by user clicks or eye gazing. The model then predicts the object corresponding to the given patches. From Fig. 8, we can see that RAE can predict high-quality regions even with 90% of the patches masked, and it continues to refine the prediction when more hints are supplied (from left to right).
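A toy wrapper for this use case might look as follows; 'predict_region' stands in for a pre-trained RAE forward pass and is a hypothetical interface:

```python
import torch

def interactive_rae(predict_region, image, click_patch_ids, n_patches=196):
    """Treat user clicks as the visible patches of a region map and let
    RAE complete the rest; more clicks = a lower effective mask ratio."""
    visible = torch.zeros(n_patches, dtype=torch.bool)
    visible[click_patch_ids] = True
    effective_mask_ratio = 1.0 - visible.float().mean().item()
    return predict_region(image, visible), effective_mask_ratio
```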
Conclusion
In this work, we present a simple yet effective pre-training approach (R-MAE) to explore the important vision concept of 'region' in MAE [32]. Through extensive quantitative and qualitative experiments, we show R-MAE is indeed more 'region-aware', and can consistently help downstream performance on localization-related tasks (e.g. detection and segmentation). By treating regions as queries, its region branch is designed to be highly efficient (1.3% overhead), yet it serves as a key for R-MAE to achieve state-of-the-art results among ImageNet pre-trained MAE variants. We hope our work will inspire more future efforts along this direction, and truly close the gap to natural language processing by learning the visual analogue of words in computer vision.
A. Implementation Details
Masking strategy. Different from [42] which deploys a biased sampling strategy using semantic parts, we aim to verify the effectiveness of RAE and R-MAE without changing the distribution of masked images. Therefore, during the pre-training stage, we simply follow the random uniform masking strategy as used in MAE [32]. To ensure the task on the region side is meaningful, we first sample the mask applied to the image, then sample from region maps that have at least one visible foreground patch.
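Under our reading, the region sampling given a pre-sampled image mask could look like this (the tensor layout is our assumption):

```python
import torch

def sample_regions(patch_fg: torch.Tensor, mask: torch.Tensor, k: int):
    """patch_fg: (R, N) bool, True if region r has foreground in patch n;
    mask: (N,) with 1 = masked. Returns k region indices (with repetition)
    among regions that keep at least one visible foreground patch."""
    visible_fg = (patch_fg & (mask == 0)).any(dim=1)        # (R,)
    candidates = torch.nonzero(visible_fg).flatten()
    return candidates[torch.randint(len(candidates), (k,))]
```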
To best describe our implemented model of RAE and R-MAE in detail, we resort to a more mathematical formulation of the problem and our solutions below.
Basic notations. We denote R ∈ ℝ^{H×W×k} as the region maps corresponding to the input image, where k is the number of regions, and H, W are the dimensions of the input. RAE first patchifies R, and then masks R with a ratio of β_R. The patch size p used for the regions is the same as for the input image. The full sequence length is denoted by N = (H/p)·(W/p).
RAE batch variant. The RAE batch variant processes each region independently in the batch dimension. Note that the image features are shared among all k different regions.
Given R = {R_i}_{i=1}^{k}, R_i ∈ ℝ^{H×W}, our region encoder projects each visible patch of R_i into a region embedding:
v_{renc_i} = R-Encoder(v_{R_i}),    (1)
where v_{R_i} ∈ ℝ^{N·(1−β_R)×(p·p)} are the visible patches of R_i, and v_{renc_i} ∈ ℝ^{N·(1−β_R)×p_E} is the output of the region encoder. We then take the sum of the image features v′_{penc} and v′_{renc_i}, and feed it to the region decoder for prediction:
v′_{renc_i} = MaskFill(f(v_{renc_i}), [mask]),    (2)
v_{rdec_i} = R-Decoder(v′_{renc_i} + v′_{penc}),    (3)
where v′_{penc} ∈ ℝ^{N×p_D} are the image features from the pixel encoder filled with the [mask] token. Similarly, v′_{renc_i} ∈ ℝ^{N×p_D} are the region embeddings filled with the [mask] token. Here, f : p_E → p_D denotes the linear projection, and v_{rdec_i} ∈ ℝ^{N×p_D} is the region decoder output, which is then used to predict the masked patches of R_i.
While preserving the permutation equivariance 4 of the k region maps, the RAE batch variant can be computationally expensive and resource-intensive (i.e., the total number of FLOPs increases linearly w.r.t. k).
4 If one permutes the order of the k input regions, the output will be shuffled in exactly the same order.
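A shape-level sketch of this variant, with linear layers standing in for the ViT encoder/decoder (everything below is illustrative, including the simplified mask filling):

```python
import torch
import torch.nn as nn

B, N, k, p, p_E, p_D = 2, 196, 8, 16, 128, 128
n_vis = N // 4                                   # beta_R = 0.75 kept tokens
r_encoder = nn.Linear(p * p, p_E)                # stand-in for R-Encoder
f = nn.Linear(p_E, p_D)                          # projection f: p_E -> p_D
r_decoder = nn.Linear(p_D, p_D)                  # stand-in for R-Decoder
mask_token = torch.zeros(p_D)
v_penc_full = torch.randn(B, N, p_D)             # v'_penc from the pixel side

outputs = []
for i in range(k):                               # the b x k cost lives here
    v_Ri = torch.randn(B, n_vis, p * p)          # visible patches of R_i
    v = f(r_encoder(v_Ri))                       # Eq. (1) plus projection
    v_full = mask_token.repeat(B, N, 1)          # MaskFill with [mask]
    v_full[:, :n_vis] = v                        # (real code restores by index)
    outputs.append(r_decoder(v_full + v_penc_full))   # Eqs. (2)-(3)
```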
RAE channel variant.
Here, we merge the k region maps in the channel dimension, resulting in an input sequence of visible patches v_R ∈ ℝ^{N·(1−β_R)×(k·p·p)}. This can be seen as converting the region maps R ∈ ℝ^{H×W×k} into an image of k channels. The region encoder takes v_R as its input to generate region embeddings:
v_{renc} = R-Encoder(v_R),    (4)
where v_{renc} ∈ ℝ^{N·(1−β_R)×p_E} is the region encoder's output. We then add the image features from the pixel encoder to the region embeddings from the region encoder. The augmented visual features are passed into the region decoder in order to make predictions for the masked region patches:
v′_{renc} = MaskFill(f(v_{renc}), [mask]),    (5)
v_{rdec} = R-Decoder(v′_{renc} + v′_{penc}),    (6)
where v′_{renc} ∈ ℝ^{N×p_D} are the region embeddings filled with the [mask] token and v_{rdec} ∈ ℝ^{N×p_D} is the output of the region decoder.
By treating R as an image of k channels, the channel variant demonstrates great efficiency during the pre-training process. This variant, however, fails to deal with the permutation equivariance between the k regions: the shuffling of the outputs is not guaranteed given shuffled inputs. We also use this variant for our approximation of MultiMAE [5], which treats additional modalities as a single spatial entity.
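The channel merge amounts to one reshape: k region maps become a k-channel image, so each visible patch token carries k·p·p values (shapes below are ours):

```python
import torch

B, H, W, k, p = 2, 224, 224, 8, 16
R = torch.randint(0, 2, (B, k, H, W)).float()       # k binary region maps
tokens = R.unfold(2, p, p).unfold(3, p, p)          # (B, k, H/p, W/p, p, p)
tokens = tokens.permute(0, 2, 3, 1, 4, 5)           # group channels per patch
tokens = tokens.reshape(B, (H // p) * (W // p), k * p * p)  # (B, N, k*p*p)
```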
RAE length variant.
Inspired by the design of object queries in the DETR decoder [13], the RAE length variant encodes each region map into a single vector using the region encoder. The region queries are then concatenated along the sequence length dimension as follows:
v_{renc_i} = AvgPool(R-Encoder(v_{R_i})),    (7)
v_{emb} = Concat(v_{renc_1}, ..., v_{renc_k}),    (8)
where v_{R_i} ∈ ℝ^{N·(1−β_R)×(p·p)} are the visible patches of R_i, v_{renc_i} ∈ ℝ^{p_E} is the region embedding of the i-th region, v_{emb} ∈ ℝ^{k×p_E} denotes the region queries, and AvgPool is the average pooling operation. Different from the pixel decoder, the region decoder contains three sub-layers in each block: self-attention, cross-attention, and feed-forward [55]. In addition, we use a Neck module to provide the cross-attention with information from the pixels as context. The blocks in the Neck share the same design as the ones in the pixel decoder:
v_{context} = Neck(v′_{penc}),    (9)
where v′_{penc} are the image features filled with [mask] tokens and v_{context} ∈ ℝ^{N×p_D} is the output of the Neck. The region decoder then decodes the region queries with the context information:
v_{query} = R-Decoder(f(v_{emb}), v_{context}),    (10)
where v_{query} ∈ ℝ^{k×p_D} is the output of the query decoder. Since masked region autoencoding predicts R ∈ ℝ^{k×H×W} during the pre-training, we modify the cross-attention sub-layer of the last region decoder layer to expand each region embedding in v_{query} into a region map as follows (see Fig. 4):
v_{rdec} = W^⊤ v_{context} + v_{query}[:, None],    (11)
where W ∈ ℝ^{p_D×p_D} is a learnable weight, v_{query}[:, None] ∈ ℝ^{k×1×p_D} 5, and v_{rdec} ∈ ℝ^{k×N×p_D}. The expansion in our cross-attention sub-layer can be viewed as the attention operation on each feature vector of v_{context} (i.e., the attention score of a single feature over itself is equal to 1). A 3-layer MLP projection, g : ℝ^{p_D} → ℝ^{p·p}, is then applied onto v_{rdec} with a binary cross-entropy loss to reconstruct R. We compare different projection layers after the cross-attention with expansion: a 3-layer MLP, a linear projection as in MAE [32], and an inverted MLP with residual connection [55] followed by a linear projection. Interestingly, the 3-layer MLP projection works the best in our case.
5 [:, None] indicates the dimension expansion of v_{query}, as in numpy.
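A quick numeric check of the length-variant shapes, assuming our own toy dimensions: Eqs. (7)-(8) pool each encoded region into one query, and Eq. (11) broadcasts each query over all context tokens.

```python
import torch

B, k, n_vis, N, p_E, p_D = 2, 8, 49, 196, 128, 128
enc_out = torch.randn(B, k, n_vis, p_E)          # R-Encoder output per region
v_emb = enc_out.mean(dim=2)                      # AvgPool -> (B, k, p_E) queries

W = torch.randn(p_D, p_D)                        # learnable weight in Eq. (11)
v_context = torch.randn(B, N, p_D)               # Neck output
v_query = torch.randn(B, k, p_D)                 # region decoder output
v_rdec = (v_context @ W)[:, None] + v_query[:, :, None]   # broadcast sum
assert v_rdec.shape == (B, k, N, p_D)
```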
Since there can be an extreme foreground-background pixel imbalance for small regions, we explore the use of a different loss weight for the background pixels. To do so, we simply multiply the binary cross-entropy loss of each background pixel by a weight w_b.
Cross-feeding. Let v′_{penc} denote the output of the pixel encoder filled with the [mask] token and v′_{renc} denote the output of the region encoder filled with the [mask] token before the pooling function. We examine three different cross-feeding styles between regions and pixels: RAE → MAE (region-to-pixel), RAE ← MAE (pixel-to-region), and RAE ↔ MAE (bidirectional). The default design in R-MAE follows RAE ← MAE (e.g. see Fig. 2), and we detail the other two below.
In RAE → MAE (region-to-pixel), we add region features to the pixel features and feed it as the input to the pixel decoder in order to regress the masked image patches:
v′_{renc_i} = MaskFill(h(v_{renc_i}), [mask]),    (12)
v′_{renc} = Concat(v′_{renc_1}, ..., v′_{renc_k}),    (13)
v_{pdec} = P-Decoder(v′_{penc} + AvgPool(v′_{renc})),    (14)
where d_D is the dimension of the pixel decoder, v′_{renc_i} ∈ ℝ^{N×d_D} is the output of the region encoder filled with the [mask] token for the i-th region, and h : p_E → d_D is a linear projection layer.
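In tensor terms, Eqs. (12)-(14) average the per-region features over k before adding them to the pixel features (a shape-level sketch with our own stand-in tensors):

```python
import torch

B, k, N, d_D = 2, 8, 196, 512
v_renc_full = torch.randn(B, k, N, d_D)      # per-region features, mask-filled
v_penc_full = torch.randn(B, N, d_D)         # pixel features, mask-filled
pixel_dec_in = v_penc_full + v_renc_full.mean(dim=1)   # input to P-Decoder
```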
RAE ↔ MAE (bidirectional) feeds in both directions.
Table 9: Larger backbones pre-trained on ImageNet. The gains from R-MAE can hold despite even smaller relative computation overheads.
B. More Comparisons
MAE baselines. We first show the comparison of MAE with different base learning rates: 1.5e-4 in [32] and 1e-4 in our study. Here, models are pre-trained either on ImageNet [22] with 1600 epochs, or on COCO (train2017)/COCO++ (train2017 + unlabeled2017) with 4k epochs. All other settings are set as default. Tab. 8 shows that MAE with the 1e-4 rate is able to reproduce ViTDet [43]. The only reason for this change is better pre-training stability, which allows us to incorporate the additional loss from RAE. Our R-MAE shows further improvements beyond Tab. 8.
Larger backbones. Tab. 9 shows the scaling trend of model size when pre-trained on ImageNet. Overall, the gains can hold at ViT-L [24], despite even more negligible computational overheads from RAE with larger backbones.
Better regions. To further validate the design of RAE, we explore better regions beyond the ground-truths in COCO.
To this end, we simply use the off-the-shelf segmentation model from SAM [39] to generate regions that replace FH. With a larger region decoder and mask ratio 0.6, RAE can achieve better results than MAE with less compute in FLOPs (Tab. 10). While it is still unclear why COCO ground-truths fail, this shows better regions can indeed be leveraged in RAE.
ImageNet classification. To give a more complete assessment, we also evaluate our pre-trained models on ImageNet classification. To be consistent with MAE [32], we train R-MAE on ImageNet for 1600 epochs. It can be seen from Tab. 11 that our R-MAE achieves the same performance as MAE when fine-tuned end-to-end. Interestingly, the linear probing performance of R-MAE lags behind MAE by a large margin. This observation indicates that our R-MAE is more focused on local patterns rather than the global average features suited for image classification.
Additional visualizations. We provide extra qualitative results of our pre-trained models in Fig. 9 and Fig. 10.
Figure 1: Regions are a key concept in adapting general machine learning paradigms to important vision tasks like object detection. Left: from supervised classification to region-based learning in the R-CNN series [29]. Middle: from inter-image contrast to region-level, intra-image contrast as explored in self-supervised pre-training [36]. Right: while being more effective [44], how to use region information in reconstructive pre-training remains under-explored. We aim to close this gap.
Figure 3: Qualitative results on COCO val2017 images, using R-MAE pre-trained with unsupervised region maps [25], and then applied on either COCO ground-truth regions (left column) or regions similar to the ones used during pre-training (right column). From left to right, each group contains: 1) the masked image, 2) the image reconstruction, 3) the original image; 4) the masked region, 5) the region reconstruction, 6) the original region, and 7) all regions in that image. Besides results, the figure also gives a sense of the differences between ground-truth regions and the regions used in R-MAE. It is interesting that the algorithm generalizes well from unsupervised regions to ground-truth ones.
Figure 5: Analysis of RAE in figures. Top-left: FLOPs w.r.t. number of regions. With our lightweight design, RAE does not contribute significant FLOPs on top of MAE (9.7b); among the variants, the channel variant is the most efficient, length being second, and batch being the most expensive, especially as the number of regions goes up. Top-right: performance w.r.t. number of regions. More regions help performance, and even with 16 regions it still does not stop growing. Bottom: mask ratio matters. We either change the region mask ratio (β_R) alone (left), or jointly change it with the image mask ratio (β_R=β_I, right). In both cases, a high mask ratio (∼0.75) is required. ADE20K numbers are averaged over three runs to reduce variance, following prior practice [32].
Figure 7: Attention map (failure cases) on COCO val2017. We show the attention with the query pointing to the background or a very large object in the image. The same visualization technique and ordering are used as in Fig. 6. Our R-MAE tends to focus on a very local region instead of the whole background or object, and this can sometimes cause failures.
Figure 9: Additional visualizations of attention maps on COCO val2017. In each group from left to right we show the original image with the selected query (denoted by a red square) and three attention maps corresponding to the query, generated from i) MoCo v3 [21]; ii) MAE [32]; and iii) R-MAE; all pre-trained on COCO train2017. Darker red colors in the attention map denote larger attention weights.
Figure 10: Additional qualitative results on COCO val2017, following the same format as Fig. 3. See there for explanations.
Table 1: RAE alone works well. Using the default fine-tuning recipe from ViTDet [43] and MAE [32], our RAE shows significant improvement (47.2 vs. 41.2 in COCO AP_b, and 42.1 vs. 24.4 in ADE20K mIoU) over training from scratch, suggesting RAE itself can serve as a pre-text task. MAE is better, but RAE is lightweight and compatible, see Tab. 2. (* uses the longer, optimal recipe 3).

pre-train    params (m)  FLOPs (b)  AP_b  AP_m  mIoU
/            -           -          41.2  37.1  24.4
RAE          86.3        4.7        47.2  41.8  42.1
MAE          111.9       9.7        50.1  44.6  45.9
/ * [43]     -           -          48.1  42.6  -
Table 2: MAE vs. R-MAE across COCO pre-training epochs.
Table 4: Analysis of R-MAE. We cover: a) variants of the design, where all 3 perform similarly well, different from RAE; b) loss weight of RAE; c) cross-feeding between RAE and MAE, where we find our asymmetric design that only feeds pixels to regions works best, potentially because only the pixel encoder is used; d) whether to share masks within R-MAE (separate masks can incur cheating). Defaults are shaded in gray.
Table 5: COCO pre-training with more data (unlabeled2017).
Table 8: MAE with different base learning rates. For ImageNet w/ 1.5e-4, we directly cite the results from ViTDet [43], while others are from our own experiments. Our default setting (w/ 1e-4), chosen due to better stability, can reproduce all the MAE results.

Table 9: Larger backbones pre-trained on ImageNet.

pre-train    ViT-Base                 ViT-Large
             AP_b   AP_m   mIoU      AP_b   AP_m   mIoU
MAE          51.8   46.1   47.9      55.6   49.3   52.3
R-MAE        52.3   46.4   47.5      55.8   49.7   52.5
Table 10: Exploring better regions from SAM [39] to validate RAE. We simply swap FH regions with off-the-shelf SAM ones, and with a larger decoder and changes in mask ratios, we find RAE alone can achieve better results with less compute.

pre-train settings           region  FLOPs  AP_b  AP_m  mIoU
MAE                          -       9.7b   50.1  44.6  45.9
RAE, default                 FH      4.7b   47.2  41.8  42.1
RAE, p_D=256                         4.8b   47.6  42.2  42.9
RAE, p_D=256                 SAM     4.8b   49.9  44.2  46.0
RAE, p_D=256, β_I=β_R=.6     SAM     7.3b   50.6  45.1  46.8
Table 11: ImageNet classification as a downstream task for MAE and R-MAE. The representation from R-MAE is more locally focused and less fit for linear evaluation, but fine-tuning closes the gap.

pre-train    fine-tune           linear-eval
             Acc@1   Acc@5       Acc@1   Acc@5
MAE          83.6    96.6        68.0    87.3
R-MAE        83.6    96.6        60.6    82.4
3 With efficiency in mind, we also did not use an optimal RAE configuration. With a larger region autoencoder, we can reach 48.9 AP_b. So unlike supervised or contrastive learning [44,43], we find RAE pre-training improves the upper bound of detection accuracy.
References
[1] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels. Technical report, EPFL, 2010.
[2] Pablo Arbeláez, Jordi Pont-Tuset, Jonathan T. Barron, Ferran Marques, and Jitendra Malik. Multiscale combinatorial grouping. In CVPR, 2014.
[3] Yuki M. Asano, Christian Rupprecht, and Andrea Vedaldi. A critical analysis of self-supervision, or what we can learn from a single image. arXiv preprint arXiv:1904.13132, 2019.
[4] Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In ECCV, 2022.
[5] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. MultiMAE: Multi-modal multi-task masked autoencoders. In ECCV, 2022.
[6] Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, and Alexander C. Berg. Point-level region contrast for object detection pre-training. In CVPR, 2022.
[7] Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In ICLR, 2022.
[8] Christopher M. Bishop and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning. Springer, 2006.
[9] Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. TPAMI, 2001.
[10] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020.
[11] Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. COCO-Stuff: Thing and stuff classes in context. In CVPR, 2018.
[12] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In CVPR, 2018.
[13] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
[14] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021.
[15] Jun Chen, Ming Hu, Boyang Li, and Mohamed Elhoseiny. Efficient self-supervised vision pretraining with local masked reconstruction. arXiv preprint arXiv:2206.00790, 2022.
[16] Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, et al. Hybrid task cascade for instance segmentation. In CVPR, 2019.
[17] Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In ICML, 2020.
[18] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020.
[19] Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, and Jingdong Wang. Context autoencoder for self-supervised representation learning. arXiv preprint arXiv:2202.03026, 2022.
[20] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In CVPR, 2021.
[21] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, 2021.
[22] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Imagenet: A large-scale hierarchical image database. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei, CVPR. 811Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, , and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 2, 7, 8, 11
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, ACL. 1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional trans- formers for language understanding. In ACL, 2019. 1, 2
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, arXiv:2010.11929Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. 511arXiv preprintAlexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, et al. An image is worth 16x16 words: Trans- formers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 2, 3, 4, 5, 11
Efficient graph-based image segmentation. F Pedro, Felzenszwalb, P Daniel, Huttenlocher, IJCV. 67Pedro F Felzenszwalb and Daniel P Huttenlocher. Efficient graph-based image segmentation. IJCV, 2004. 2, 4, 5, 6, 7
Unsupervised semantic segmentation by contrasting object mask proposals. Simon Wouter Van Gansbeke, Stamatios Vandenhende, Luc Georgoulis, Van Gool, ICCV, 2021. 1Wouter Van Gansbeke, Simon Vandenhende, Stamatios Geor- goulis, and Luc Van Gool. Unsupervised semantic segmenta- tion by contrasting object mask proposals. In ICCV, 2021. 1, 2
Multimodal masked autoencoders learn transferable representations. Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurams, Sergey Levine, Pieter Abbeel, arXiv:2205.142042022arXiv preprintXinyang Geng, Hao Liu, Lisa Lee, Dale Schuurams, Sergey Levine, and Pieter Abbeel. Multimodal masked autoen- coders learn transferable representations. arXiv preprint arXiv:2205.14204, 2022. 2
Fast r-cnn. Ross Girshick, ICCV. Ross Girshick. Fast r-cnn. In ICCV, 2015. 2
Rich feature hierarchies for accurate object detection and semantic segmentation. Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, CVPR. 13Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. 1, 2, 3
Bootstrap your own latent-a new approach to self-supervised learning. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, NeurIPS. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doer- sch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Ghesh- laghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. In NeurIPS, 2020. 2
LVIS: A dataset for large vocabulary instance segmentation. Agrim Gupta, Piotr Dollar, Ross Girshick, CVPR. 27Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In CVPR, 2019. 2, 7
Masked autoencoders are scalable vision learners. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick, CVPR. 1011Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, 2022. 1, 2, 3, 5, 6, 7, 8, 9, 10, 11
Momentum contrast for unsupervised visual representation learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick, CVPR, 2020. 1Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual repre- sentation learning. In CVPR, 2020. 1, 2
Piotr Dollár, and Ross Girshick. Mask r-cnn. Kaiming He, Georgia Gkioxari, ICCV. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Gir- shick. Mask r-cnn. In ICCV, 2017. 2
Exploring long-sequence masked autoencoders. Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen, arXiv:2210.072247arXiv preprintRonghang Hu, Shoubhik Debnath, Saining Xie, and Xinlei Chen. Exploring long-sequence masked autoencoders. arXiv preprint arXiv:2210.07224, 2022. 7, 8
Efficient visual pretraining with contrastive detection. J Olivier, Skanda Hénaff, Jean-Baptiste Koppula, Aaron Alayrac, Oriol Van Den Oord, João Vinyals, Carreira, ICCV. 7Olivier J. Hénaff, Skanda Koppula, Jean-Baptiste Alayrac, Aaron van den Oord, Oriol Vinyals, and João Carreira. Effi- cient visual pretraining with contrastive detection. In ICCV, 2021. 1, 2, 5, 7
João Carreira, and Relja Arandjelović. Object discovery and representation networks. J Olivier, Skanda Hénaff, Evan Koppula, Daniel Shelhamer, Andrew Zoran, Andrew Jaegle, Zisserman, ECCV, 2022. 1Olivier J. Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, and Relja Arandjelović. Object discovery and representation networks. In ECCV, 2022. 1, 2
Panoptic segmentation. Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollár, CVPR. Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, and Piotr Dollár. Panoptic segmentation. In CVPR, 2019. 7
. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, arXiv:2304.0264311Segment anything. arXiv preprintAlexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer White- head, Alexander C Berg, Wan-Yen Lo, et al. Segment any- thing. arXiv preprint arXiv:2304.02643, 2023. 11
Principles of Gestalt psychology. Routledge. Kurt Koffka, 1Kurt Koffka. Principles of Gestalt psychology. Routledge, 2013. 1, 2
Imagenet classification with deep convolutional neural networks. CACM. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, 1Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Im- agenet classification with deep convolutional neural networks. CACM, 2017. 1, 2
Semmae: Semantic-guided masking for learning masked autoencoders. Gang Li, Heliang Zheng, Daqing Liu, Chaoyue Wang, Bing Su, Changwen Zheng, NeurIPS, 2022. 89Gang Li, Heliang Zheng, Daqing Liu, Chaoyue Wang, Bing Su, and Changwen Zheng. Semmae: Semantic-guided mask- ing for learning masked autoencoders. In NeurIPS, 2022. 2, 8, 9
Exploring plain vision transformer backbones for object detection. Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He, ECCV. 811Yanghao Li, Hanzi Mao, Ross Girshick, and Kaiming He. Exploring plain vision transformer backbones for object de- tection. In ECCV, 2022. 1, 3, 5, 8, 11
Benchmarking detection transfer learning with vision transformers. Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollár, Kaiming He, Ross Girshick, arXiv:2111.1142915arXiv preprintYanghao Li, Saining Xie, Xinlei Chen, Piotr Dollár, Kaim- ing He, and Ross Girshick. Benchmarking detection transfer learning with vision transformers. arXiv preprint arXiv:2111.11429, 2021. 1, 5
Microsoft COCO: common objects in context. Tsung-Yi Lin, Michael Maire, Serge J Belongie, Lubomir D Bourdev, Ross B Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, ECCV. 25Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In ECCV, 2014. 2, 5
Boxer: Box-attention for 2d and 3d transformers. Jihong Duy-Kien Nguyen, Olaf Ju, Booij, R Martin, Oswald, G M Cees, Snoek, CVPR. 2022Duy-Kien Nguyen, Jihong Ju, Olaf Booij, Martin R Oswald, and Cees GM Snoek. Boxer: Box-attention for 2d and 3d transformers. In CVPR, 2022. 1
Learning features by watching objects move. Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan, CVPR. Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In CVPR, 2017. 2
Unsupervised learning of dense visual representations. Pedro O Pinheiro, Amjad Almahairi, Ryan Y Benmalek, Florian Golemo, Aaron Courville, NeurIPS. 2020Pedro O. Pinheiro, Amjad Almahairi, Ryan Y. Benmalek, Florian Golemo, and Aaron Courville. Unsupervised learning of dense visual representations. In NeurIPS, 2020. 2
Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever, ICML. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021. 2
Faster R-CNN: Towards real-time object detection with region proposal networks. Kaiming Shaoqing Ren, Ross He, Jian Girshick, Sun, NeurIPS. 1Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015. 1, 2
Spatilly consistent representation learning. Byungseok Roh, Wuhyun Shin, Ildoo Kim, Sungwoong Kim, CVPR, 2021. 1Byungseok Roh, Wuhyun Shin, Ildoo Kim, and Sungwoong Kim. Spatilly consistent representation learning. In CVPR, 2021. 1, 2
Flava: A foundational language and vision alignment model. Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, Douwe Kiela, CVPR. 2022Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guil- laume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In CVPR, 2022. 2
f-brs: Rethinking backpropagating refinement for interactive segmentation. Konstantin Sofiiuk, Ilia Petrov, Olga Barinova, Anton Konushin, CVPR, 2020. 2Konstantin Sofiiuk, Ilia Petrov, Olga Barinova, and Anton Konushin. f-brs: Rethinking backpropagating refinement for interactive segmentation. In CVPR, 2020. 2, 8
Selective search for object recognition. IJCV. Jasper Rr Uijlings, E A Koen, Theo Van De Sande, Arnold Wm Gevers, Smeulders, 25Jasper RR Uijlings, Koen EA Van De Sande, Theo Gevers, and Arnold WM Smeulders. Selective search for object recog- nition. IJCV, 2013. 1, 2, 5
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, NeurIPS. 410Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 4, 9, 10
Extracting and composing robust features with denoising autoencoders. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol, ICML. 23Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre- Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. 2, 3
Dense contrastive learning for self-supervised visual pre-training. Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei Li, CVPR. Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In CVPR, 2021. 2
Masked feature prediction for self-supervised visual pre-training. Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, Christoph Feichtenhofer, CVPR. 2022Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature predic- tion for self-supervised visual pre-training. In CVPR, 2022. 2
Aligning pretraining for detection via object-level contrastive learning. Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, Stephen Lin, NeurIPS. 1Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, and Stephen Lin. Aligning pretraining for detection via object-level con- trastive learning. In NeurIPS, 2021. 1, 2
Region similarity representation learning. Tete Xiao, J Colorado, Xiaolong Reed, Kurt Wang, Trevor Keutzer, Darrell, ICCV, 2021. 1Tete Xiao, Colorado J Reed, Xiaolong Wang, Kurt Keutzer, and Trevor Darrell. Region similarity representation learning. In ICCV, 2021. 1, 2
Detco: Unsupervised contrastive learning for object detection. Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, Ping Luo, ICCV. 1Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, and Ping Luo. Detco: Unsuper- vised contrastive learning for object detection. In ICCV, 2021. 1, 2
Unsupervised object-level representation learning from scene images. Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, Ping Luo, NeurIPS. 1Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, and Ping Luo. Unsupervised object-level representation learning from scene images. In NeurIPS, 2021. 1, 2
Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, Han Hu, CVPR. Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. In CVPR, 2021. 2
Simmim: A simple framework for masked image modeling. Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu, CVPR. 2022Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In CVPR, 2022. 1
Instance localization for self-supervised detection pretraining. Ceyuan Yang, Zhirong Wu, Bolei Zhou, Stephen Lin, CVPR, 2021. 1Ceyuan Yang, Zhirong Wu, Bolei Zhou, and Stephen Lin. Instance localization for self-supervised detection pretraining. In CVPR, 2021. 1, 2
Self-supervised visual representation learning from hierarchical grouping. Xiao Zhang, Michael Maire, NeurIPS. Xiao Zhang and Michael Maire. Self-supervised visual rep- resentation learning from hierarchical grouping. In NeurIPS, 2020. 2
Semantic understanding of scenes through the ade20k dataset. Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba, IJCV. 2Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understand- ing of scenes through the ade20k dataset. IJCV, 2019. 2
| [
"https://github.com/facebookresearch/r-mae."
] |
[
"Toward integrated tantalum pentoxide optical parametric oscillators",
"Toward integrated tantalum pentoxide optical parametric oscillators"
] | [
"Maximilian Timmerkamp \nInstitute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany\n",
"Niklas M Lüpken \nInstitute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany\n",
"Shqiprim Adrian Abazi \nInstitute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany\n",
"Julian Rasmus Bankwitz \nInstitute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany\n",
"Carsten Schuck \nInstitute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany\n\nCenter for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany\n",
"Carsten Fallnich \nInstitute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany\n"
] | [
"Institute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany",
"Institute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany",
"Institute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany",
"Center for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany",
"Center for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany",
"Institute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany",
"Center for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany",
"Center for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany",
"Institute of Physics\nUniversity of Münster\nHeisenbergstr. 1148149MünsterGermany",
"Center for NanoTechnology (CeNTech)\nHeisenbergstr. 1148149MünsterGermany",
"Center for Soft Nanoscience (SoN)\nBusso-Peus-Str. 1048149MünsterGermany",
"Institute of Applied Physics\nUniversity of Münster\nCorrensstr. 248149MünsterGermany"
] | [] We present a hybrid waveguide-fiber optical parametric oscillator (OPO) exploiting degenerate four-wave mixing in tantalum pentoxide. The OPO, pumped with ultrashort pulses at 1.55 µm wavelength, generated idler pulses with up to 4.1 pJ energy, tunable between 1.63 µm and 1.68 µm center wavelength. An upper bound for the total tolerable cavity loss of 32 dB was found, rendering a chip-integrated OPO feasible as a compact and robust light source. | null | [
"https://export.arxiv.org/pdf/2306.05073v1.pdf"
] | 259,108,366 | 2306.05073 | 8fb8537e20d4fa700c8c010780c23cad947c3465 |
Toward integrated tantalum pentoxide optical parametric oscillators
Maximilian Timmerkamp
Institute of Applied Physics
University of Münster
Corrensstr. 248149MünsterGermany
Niklas M Lüpken
Institute of Applied Physics
University of Münster
Corrensstr. 248149MünsterGermany
Shqiprim Adrian Abazi
Institute of Physics
University of Münster
Heisenbergstr. 1148149MünsterGermany
Center for NanoTechnology (CeNTech)
Heisenbergstr. 1148149MünsterGermany
Center for Soft Nanoscience (SoN)
Busso-Peus-Str. 1048149MünsterGermany
Julian Rasmus Bankwitz
Institute of Physics
University of Münster
Heisenbergstr. 1148149MünsterGermany
Center for NanoTechnology (CeNTech)
Heisenbergstr. 1148149MünsterGermany
Center for Soft Nanoscience (SoN)
Busso-Peus-Str. 1048149MünsterGermany
Carsten Schuck
Institute of Physics
University of Münster
Heisenbergstr. 1148149MünsterGermany
Center for NanoTechnology (CeNTech)
Heisenbergstr. 1148149MünsterGermany
Center for Soft Nanoscience (SoN)
Busso-Peus-Str. 1048149MünsterGermany
Carsten Fallnich
Institute of Applied Physics
University of Münster
Corrensstr. 248149MünsterGermany
Toward integrated tantalum pentoxide optical parametric oscillators
(Dated: June 5, 2023)
We present a hybrid waveguide-fiber optical parametric oscillator (OPO) exploiting degenerate four-wave mixing in tantalum pentoxide. The OPO, pumped with ultrashort pulses at 1.55 µm wavelength, generated idler pulses with up to 4.1 pJ energy, tunable between 1.63 µm and 1.68 µm center wavelength. An upper bound for the total tolerable cavity loss of 32 dB was found, rendering a chip-integrated OPO feasible as a compact and robust light source.
I. INTRODUCTION
Optical parametric oscillators (OPOs) are versatile light sources for a variety of applications such as nonlinear microscopy [1,2], spectroscopy [3], and generation of squeezed light [4]. In recent years, fiber-based OPOs have been developed, exploiting four-wave mixing (FWM) for optical parametric gain and providing compact and robust light sources [1,2]. To reduce the size and the required pump power as well as to simplify thermal stabilization of the light source, fibers can be replaced by integrated optical waveguides.
Waveguide-based OPOs (WOPOs) have been demonstrated as hybrid waveguide-fiber devices using silicon [5,6] or silicon nitride [7] (Si 3 N 4 ) waveguides to provide parametric gain and fibers to build the feedback cavity. Although silicon [8] provides a 100 times higher nonlinearity than Si 3 N 4 [9], it suffers from significant linear and nonlinear losses, which render integration of the cavity on a chip unfeasible at repetition rates below a few 100 MHz. In contrast, Si 3 N 4 allows for ultra-low loss propagation [10], such that integration of the cavity has been proposed to be feasible at repetition rates as low as 66 MHz [7].
Recently, tantalum pentoxide (Ta 2 O 5 ) has gained attention in integrated nonlinear photonics [11][12][13][14], offering optical properties competitive with Si 3 N 4 . Although the fabrication of Ta 2 O 5 waveguides is less mature, both materials share the potential for ultra-low loss propagation [11,15]. Ta 2 O 5 shows an approximately three times higher nonlinear refractive index of about 6 · 10 −19 m 2 W −1 [9,11,16], which enables decreased pump energies, and a ten times smaller thermo-optic coefficient of approximately 6 · 10 −6 K −1 [11,17], allowing for light sources with higher thermal stability. Moreover, Ta 2 O 5 allows for sustaining at least double the optical power compared to Si 3 N 4 [18] and offers more design freedom for satisfying phase-matching conditions in nonlinear optical frequency mixing processes because of its low internal stress, which facilitates the deposition of thicker films [11]. As such, Ta 2 O 5 waveguides have been used to generate supercontinua [12] and Kerr combs in microring resonators [13], to demonstrate parametric amplification [14], and have also been investigated for integrated quantum optics [19].
* [email protected]
In this work, we present a proof-of-concept WOPO exploiting degenerate FWM in Ta 2 O 5 waveguides, with a feedback cavity implemented with an optical fiber. The WOPO was pumped with ultrashort pulses, and its output wavelength was dispersively tuned by changing the cavity length. The oscillation threshold and the maximum acceptable cavity loss were investigated to estimate the feasibility of integrating the whole cavity on a chip.
II. EXPERIMENTAL SETUP
The experimental setup of the WOPO (see Fig. 1(a)) comprised a Ta 2 O 5 waveguide as the parametric gain medium and a fiber to build the feedback ring cavity. The WOPO was pumped with pulses at 1.55 µm center wavelength with a repetition rate of 80 MHz. The duration of the pump pulses was adjusted with a Fourier-filter (not shown) between 300 and 1000 fs. The pump pulses were coupled into an air-cladded waveguide (H×W×L = 0.6 µm×1.7 µm×10 mm, schematically shown in Fig. 1(b)) with an aspheric lens (AL, f = 1.873 mm, NA = 0.85), exciting the fundamental TE mode.
The waveguides were fabricated from a 0.6 µm thin Ta 2 O 5 film on thermally grown silicon dioxide on silicon to achieve strong mode confinement due to the high refractive index contrast. Waveguide layouts were patterned in a resist mask using a 100 kV electron-beam lithography system and transferred into the Ta 2 O 5 thin film via subsequent reactive-ion etching in fluorine chemistry (Ar, CF 4 and CHF 3 ). The desired group velocity dispersion (GVD) was achieved with high reproducibility in fully etched waveguides of suitable geometry, and the facets were cleaved after inscribing a kerf on the backside of the handle wafer with a dicing system. The output from the waveguide was collected with an off-axis parabolic mirror (OAPM, f eff = 6.35 mm) to minimize chromatic aberrations. The total transmission (including input AL and output OAPM) at the pump wavelength was 2.6% due to several contributions: mode mismatch (≥7 dB loss) caused by residual higher-order transverse modes in the pump beam, absorption of the aspheric lens (1.1 dB loss), truncation loss at the OAPM (3.0 dB), Fresnel reflections at the facets (each 0.4 dB loss), and ≤4 dB/cm propagation loss along the waveguide resulting mainly from fabrication imperfections, including stitching of writing fields.
Inside the waveguide, signal and idler sidebands were generated via spontaneous degenerate FWM, of which the signal sideband was used to implement a feedback for oscillation. The beam coming from the OAPM was adjusted in diameter with a telescope for coupling into a single-mode fiber SMF-28. In order to match the repetition rates of pump and WOPO for harmonic pumping as well as to implement an amount of group delay dispersion (GDD) similar to a chip-integrated feedback path (-0.7 ps 2 to 3.3 ps 2 , depending on waveguide geometry), a feedback fiber length of 38 m was chosen, introducing a GDD of -0.59 ps 2 at the signal wavelength. A fiber polarization controller was used to adjust the polarization of the fed-back signal pulse to match the TE-polarization of the pump. The signal pulses were stretched by the fiber's GVD of -16 ps 2 km −1 , enabling dispersive tuning of the signal wavelength due to temporal gain-narrowing [1] by adjustment of the temporal overlap of signal and pump pulse via a free-space delay line (∆t in Fig. 1). Then, the signal beam was spatially overlapped with the pump beam at the dichroic mirror (DM) and coupled into the waveguide to provide a seed for stimulated FWM, leading to parametric oscillation when the gain was larger than the cavity loss. Since the signal pulse provided feedback, the idler as well as the residual pump pulse could have been completely extracted from the cavity with the addition of a second dichroic mirror. However, to analyze the whole spectrum instead of only the pump and idler waves, an uncoated glass substrate (BS) was used to extract a fraction of the intra-cavity field for detection with an optical spectrum analyzer (OSA). Unless noted otherwise, all pump energies in the following parts refer to (measurable) waveguide-external energies incident on the input aspheric lens in front of the waveguide. Output energies of the signal and idler waves refer to cavity-internal values, externally measured and accordingly rescaled.
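As a quick consistency sketch of the feedback-path numbers quoted above (fiber length, GVD, harmonic pumping), the following Python lines reproduce them; the group index n_g is an assumed typical value for SMF-28, everything else is taken from the text.

```python
# Consistency sketch for the 38 m feedback fiber quoted above. The group
# index n_g is an assumption; all other numbers are taken from the text.
c = 299792458.0
L_fiber = 38.0            # fiber length (m)
gvd = -16e-3              # GVD in ps^2/m (-16 ps^2/km at the signal wavelength)
n_g = 1.468               # assumed group index of SMF-28

gdd = gvd * L_fiber       # ~ -0.61 ps^2, close to the quoted -0.59 ps^2
t_fiber = L_fiber * n_g / c          # fiber transit time, ~186 ns
harmonic = t_fiber * 80e6            # ~14.9 pump periods; the free-space
print(gdd, t_fiber * 1e9, harmonic)  # path closes the gap to an integer
```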
III. OPERATION CHARACTERISTICS
When pumping the WOPO with pulses of 500 fs duration and 2.2 nJ energy, an amplification of 40 dB was achieved at the signal wavelength of 1.46 µm after the feedback was implemented. In addition to the first-order signal and idler peaks at 1.46 µm and 1.67 µm wavelength, respectively, the output spectra showed additional peaks due to cascaded FWM (e.g., at 1.37 µm, see Fig. 2). Although the duration of the idler pulse was not measured, due to insufficient power at the OSA port, a potential Fourier-limited pulse duration of (130±10) fs was calculated from the (first-order) idler spectrum. The actual pulse duration was expected to be approximately 20% longer, estimated by comparing the actual and the Fourier-limited idler pulse durations in simulations of the WOPO obtained by solving the corresponding nonlinear Schrödinger equation.
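The Fourier-limit estimate quoted above can be reproduced from a measured spectrum by assuming a flat spectral phase. The following is a minimal sketch of that calculation; the file name, its two-column format and the padding factor are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: Fourier-limited pulse duration from a measured spectrum,
# assuming a flat spectral phase. File name/format are illustrative.
import numpy as np

wl_nm, psd = np.loadtxt("idler_spectrum.txt", unpack=True)  # wavelength (nm), PSD (a.u.)

c = 299792458.0
nu = c / (wl_nm * 1e-9)                 # optical frequency (Hz)
order = np.argsort(nu)
nu, psd = nu[order], psd[order]

# Uniform, zero-padded frequency grid (padding refines the time resolution).
span = nu[-1] - nu[0]
nu_u = np.linspace(nu[0] - 10 * span, nu[-1] + 10 * span, 1 << 15)
amp = np.sqrt(np.interp(nu_u, nu, psd, left=0.0, right=0.0))

# Flat phase -> transform-limited temporal intensity profile.
intensity = np.abs(np.fft.fftshift(np.fft.ifft(amp)))**2
t = np.fft.fftshift(np.fft.fftfreq(nu_u.size, d=nu_u[1] - nu_u[0]))

above = t[intensity > 0.5 * intensity.max()]
print(f"Fourier-limited FWHM ~ {(above[-1] - above[0]) * 1e15:.0f} fs")
```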
In order to determine the oscillation threshold, the output spectrum was measured for increasing pump pulse energies, showing a steep increase of energy in the sidebands once the threshold was exceeded. Oscillation was achieved at a pump energy of approximately 2 nJ (i.e. ≤280 pJ waveguide-internally) for a pump pulse duration of 500 fs (see Fig. 3(a)). The threshold values were identified with the roots of linear least-square fits near the threshold. At pump pulse energies significantly higher than the threshold, saturation was observed for the idler energy as well as the pump depletion (see Fig. 3(a) and 3(b)). In the saturated regime, the clear FWM peaks gradually merged and vanished (compare dash-dotted line in Fig. 2), indicating a transition into supercontinuum generation (SCG) induced by modulation instability (MI) seeded by the fed-back signal pulse. The average waveguide-external slope efficiency just above the threshold was (1.9 ± 0.8)% and the maximum waveguide-external conversion efficiency of -27.3 dB (0.19%) was observed at 2.25 nJ pump energy, when the idler sideband showed a total energy of 4.1 pJ. Considering the combined coupling and collection efficiency of about 10%, the waveguide-internal slope and conversion efficiencies amounted to (19 ± 8)% and -17.3 dB (1.9%), respectively. In comparison to other works on WOPOs [5][6][7], the efficiency values observed here were comparable but at the lower end due to present fabrication imperfections. However, the residual pump at the output in Fig. 3(b) showed up to about 20% pump depletion, such that up to 9% conversion efficiency to the idler can be expected with improved coupling and collection efficiencies.
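The threshold extraction described above (roots of linear least-square fits near onset) amounts to only a few lines; the data arrays below are illustrative placeholders, not the measured values.

```python
# Sketch of the threshold extraction: fit a line to the idler energy just
# above onset and take its root. Data arrays are illustrative placeholders.
import numpy as np

pump_nJ  = np.array([2.00, 2.05, 2.10, 2.15, 2.20])  # pump pulse energy (nJ)
idler_pJ = np.array([0.3, 1.1, 1.9, 2.6, 3.4])       # idler sideband energy (pJ)

slope, intercept = np.polyfit(pump_nJ, idler_pJ, 1)  # linear least-squares fit
print(f"oscillation threshold ~ {-intercept / slope:.2f} nJ")
```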
The operation of the WOPO was investigated for pump pulse durations between 450 fs and 700 fs, as lower pulse durations led to soliton-driven SCG and insufficient pump energy inhibited oscillation for higher pulse durations. The oscillation threshold increased linearly with the pump pulse duration (see Fig. 3(c)), indicating an approximately constant peak power value at threshold. This observation is in compliance with the proportionality between pump peak power and (maximum) parametric FWM gain, which, at threshold, exactly compensates the constant cavity loss.
Figure 4. Power spectral density (PSD) of the WOPO output for different temporal delays between pump and fed-back signal pulses, using pump pulses with 2.2 nJ energy and 500 fs duration.
IV. WAVELENGTH TUNABILITY
The center wavelength of the idler (signal) sideband was dispersively tuned by changing the cavity length via the free-space delay line, covering a tuning range of 50 nm (40 nm). To operate the WOPO outside the SCG regime over the whole tuning range, it was pumped by pulses of 2.2 nJ energy and 500 fs duration.
By changing the relative delay between pump and feedback pulses by 8 ps, the center wavelength of the idler and signal waves was tuned from 1.63 µm to 1.68 µm and from 1.44 µm to 1.48 µm, respectively (see Fig. 4). Since the FWM gain, indicated by the spontaneous FWM in Fig. 2, must exceed the cavity roundtrip loss (98.7%) for oscillation, the tuning range was smaller than the potential gain bandwidth. Furthermore, the tuning range was limited by chromatic aberrations caused by the lenses, and the bandwidth of the effective wave plate induced by the polarization controller, which was optimized only once for 1.46 µm signal wavelength. In the future, lower cavity round-trip loss and a polarization-maintaining feedback path should allow for an increased accessible tuning range, while using a 2 µm wide waveguide should extend the potential wavelength tuning range even to 250 nm (i.e., 46 THz bandwidth from 1.15 µm to 1.4 µm wavelength) for the signal wave or 640 nm (i.e., 46 THz bandwidth from 1.74 µm to 2.38 µm wavelength) for the idler wave.
V. ACCEPTABLE CAVITY LOSS
Additional loss was introduced into the cavity in order to estimate the feasibility of integrating the cavity on a chip with respect to propagation loss. Additionally, this information can be used to determine the fraction of the signal pulse that could be extracted from the cavity with an additional output coupler. By placing a combination of a half-wave plate and a polarizing beam splitter between the delay stage and the dichroic mirror, up to 30 dB additional loss was added to the cavity. Without additional loss the cavity showed a round-trip transmission of 1.3% (-19 dB), which was estimated from transmission values measured at the pump wavelength: 2.6% through the waveguide, 65% through the fiber, and 79% through all other components. The pump pulse duration (500 fs) and the delay stage were fixed, such that the threshold was minimized. Then, the oscillation threshold was determined by measuring the energy of the signal sideband for a set of additional losses from 0 dB to 30 dB, corresponding to -19 dB to -49 dB feedback. The experimental data in Fig. 5(a) shows that pumping at a higher pump pulse energy allowed the WOPO to run with lower feedback. This behaviour is expected since pumping with a higher energy provides higher gain that can compensate the additional loss. For the same reason, the background signal, caused by spontaneous FWM, also increased. Therefore, to characterize the WOPO loss tolerance, the critical feedback (i.e., the minimum feedback for oscillation, see Fig. 5(b)) for each pump energy was identified with the intersection between the background and linear least-square fits to the experimental data.
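The critical-feedback extraction is then the intersection of the fitted rising flank with the spontaneous-FWM background level; a sketch with placeholder numbers:

```python
# Sketch of the critical-feedback extraction: intersect the fitted rising
# flank with the spontaneous-FWM background level. Numbers are placeholders.
import numpy as np

feedback_dB = np.array([-28.0, -26.0, -24.0, -22.0])  # feedback on rising flank
signal_pJ   = np.array([0.8, 2.0, 3.1, 4.3])          # signal sideband energy
background_pJ = 0.4                                   # spontaneous-FWM level

m, b = np.polyfit(feedback_dB, signal_pJ, 1)
print(f"critical feedback ~ {(background_pJ - b) / m:.1f} dB")
```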
From Fig. 5(b), it is apparent that the critical feedback exponentially decreased for increasing pump energies. This observation can be explained by the fact that the gain increases exponentially with increasing pump energy, compensating the reduced feedback. When the feedback was increased beyond the critical value, the energy in the signal sideband first exhibited a maximum and then decreased due to excess feedback causing transition to MI-induced SCG. Hence, the feedback at which the maximum signal energy was reached (yellow dots in Fig. 5(b)) can be considered as an upper limit for stable operation.
Although higher pump energies reduced the critical feedback, fluctuations of the signal energy were observed for pump energies larger than 2.38 nJ, indicating less stable operation. The reason for this unexpected observation is not yet fully understood, however, the measured fluctuations in the signal energy of the WOPO appear to be caused by the high parametric gain resulting in an increased sensitivity to slightly fluctuating round-trip transmission due to optomechanical influences and a limited achievable extinction ratio behind the fiber polarization controller. Therefore, for reliable, stable operation, a pump energy of less than 2.38 nJ was used, corresponding to a minimum critical feedback of -32 dB or an additional loss of 13 dB, so that up to 95% of the energy of the seed pulse could be extracted within the cavity.
VI. FEASIBILITY OF ON-CHIP INTEGRATION
In order to implement the oscillator on a chip, both the nonlinear waveguide and the linear feedback path, whose minimum geometrical length of about 2 m is related to the repetition rate of 80 MHz and the effective refractive index of the TE-mode of about 1.9, must fit onto the chip. The feedback path should introduce a sufficient amount of GDD to allow for a tunable output while keeping the propagation loss low enough for oscillation.
In order to allow for dispersive tuning, the seed pulse has to be chirped by the feedback path. As already noted, the fiber introduced a GDD of -0.59 ps 2 at 1.46 µm signal wavelength, which allowed for dispersive tuning. Assuming the minimum chip-integrated cavity length of 2 m, a suitable GDD at the signal wavelength in the range from -0.7 ps 2 to 3.3 ps 2 can be implemented on-chip by choosing a waveguide width between 1 µm and 0.6 µm, respectively. Instead of using a free-space delay line, the WOPO could be tuned by changing the pump repetition rate or the pump center wavelength [2].
As the minimum critical feedback required for oscillation was found to be -32 dB at 2.38 nJ pump energy, the total cavity loss at the seed wavelength must not exceed 32 dB in order to operate the WOPO. Thus, given the minimum cavity length of 2 m, a realistic upper boundary for propagation loss of 0.16 dB/cm can be estimated. Since lower propagation losses have already been demonstrated in Ta 2 O 5 waveguides [11,15], integration of the oscillator is principally possible concerning material selection and waveguide fabrication.
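The cavity-length and loss-budget arithmetic of this section can be summarised in a few lines, using only the quantities quoted in the text:

```python
# Loss-budget arithmetic for a fully integrated cavity, using only numbers
# quoted in the text.
c = 299792458.0
f_rep = 80e6                               # repetition rate (Hz)
n_eff = 1.9                                # effective index of the TE mode

L_cavity = c / (f_rep * n_eff)             # minimum on-chip path, ~2.0 m
alpha_max = 32.0 / (L_cavity * 100.0)      # 32 dB budget -> ~0.16 dB/cm

extractable = 1.0 - 10.0 ** (-13.0 / 10.0) # 13 dB margin -> ~95% extractable
print(f"L = {L_cavity:.2f} m, alpha_max = {alpha_max:.2f} dB/cm, "
      f"extractable = {extractable:.0%}")
```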
VII. CONCLUSION
To sum up, a hybrid waveguide-fiber OPO (WOPO) exploiting four-wave mixing in a Ta 2 O 5 waveguide was investigated in terms of its operation as well as the feasibility of chip integration. Reliable oscillation was achieved when the WOPO was pumped by pulses with 500 fs duration and 2 nJ waveguide-external energy, generating idler pulses with 4.1 pJ waveguide-external energy and potentially down to 130 fs duration. While the external conversion efficiency was only 0.19%, the pump depletion indicated potential for up to 9%. By adjusting the cavity length, the idler wavelength was dispersively tuned by 50 nm from 1.63 µm to 1.68 µm, which can potentially be extended to up to 640 nm (equiv. to 250 nm for the signal wave) via waveguide dispersion engineering. By introducing loss to the cavity, the principal feasibility of a chip-integrated cavity, supporting 80 MHz repetition rate, was confirmed by measurements, yielding an upper bound of 0.16 dB/cm for the propagation loss, a value that has already been demonstrated in Ta 2 O 5 waveguides [11,15].
Disclosures. The authors declare no conflicts of interest.
Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Figure 1. (a) Schematic experimental setup. DM: dichroic mirror, WG: waveguide, AL: aspheric lens, OAPM: off-axis parabolic mirror, L: lens, BS: beam splitter, OSA: optical spectrum analyzer, SMF: single mode fiber, PC: polarization controller. (b) Schematic waveguide geometry with sidewall angle α ≈ 95°.
Figure 3. (a) Energy in the idler sideband (dots) versus external pump pulse energy. Linear functions (dotted) were fitted to the data. The roots of these functions mark the threshold values (dashed lines). (b) Residual pump energy at the waveguide output (dots) with pulse parameters corresponding to (a). (c) Oscillation threshold (crosses) versus pump pulse duration with linear fit (dashed).
Figure 5. (a) Energy in the signal sideband versus feedback of the signal pulse for different pump pulse energies (dots). Additional linear fits (solid) of the rising flank and the feedback threshold for oscillation (dashed) are plotted. (b) Critical feedback for oscillation (blue crosses) with linear fit (blue, dashed) and feedback at which the maximum signal energy occurred (yellow dots) with linear fit (dash-dotted). Also denoted are the different operation regimes of the WOPO.
[1] T. Gottschall, T. Meyer, M. Schmitt, J. Popp, J. Limpert, and A. Tünnermann, Four-wave-mixing-based optical parametric oscillator delivering energetic, tunable, chirped femtosecond pulses for non-linear biomedical applications, Opt. Express 23, 23968 (2015). doi:10.1364/OE.23.023968
[2] M. Brinkmann, A. Fast, T. Hellwig, I. Pence, C. L. Evans, and C. Fallnich, Portable all-fiber dual-output widely tunable light source for coherent Raman imaging, Biomedical Optics Express 10, 4437 (2019). doi:10.1364/boe.10.004437
[3] F. Adler, P. Masłowski, A. Foltynowicz, K. C. Cossel, T. C. Briles, I. Hartl, and J. Ye, Mid-infrared Fourier transform spectroscopy with a broadband frequency comb, Opt. Express 18, 21861 (2010). doi:10.1364/OE.18.021861
[4] T. Tanimura, D. Akamatsu, Y. Yokoi, A. Furusawa, and M. Kozuma, Generation of a squeezed vacuum resonant on a rubidium D1 line with periodically poled KTiOPO4, Opt. Lett. 31, 2344 (2006). doi:10.1364/OL.31.002344
[5] K.-Y. Wang, M. A. Foster, and A. C. Foster, Wavelength-agile near-IR optical parametric oscillator using a deposited silicon waveguide, Optics Express 23, 15431 (2015). doi:10.1364/oe.23.015431
[6] B. Kuyken, X. Liu, R. M. Osgood, R. Baets, G. Roelkens, and W. M. J. Green, A silicon-based widely tunable short-wave infrared optical parametric oscillator, Optics Express 21, 5931 (2013). doi:10.1364/oe.21.005931
[7] N. M. Lüpken, D. Becker, T. Würthwein, K.-J. Boller, and C. Fallnich, Toward integrated synchronously pumped optical parametric oscillators in silicon nitride, Optics Express 29, 39895 (2021). doi:10.1364/OE.438910; arXiv:2108.06219
[8] A. D. Bristow, N. Rotenberg, and H. M. van Driel, Two-photon absorption and Kerr coefficients of silicon for 850-2200 nm, Applied Physics Letters 90, 191104 (2007). doi:10.1063/1.2737359
[9] K. Ikeda, R. E. Saperstein, N. Alic, and Y. Fainman, Thermal and Kerr nonlinear properties of plasma-deposited silicon nitride/silicon dioxide waveguides, Optics Express 16, 12987 (2008). doi:10.1364/oe.16.012987
[10] J. Liu, G. Huang, R. N. Wang, J. He, A. S. Raja, T. Liu, N. J. Engelsen, and T. J. Kippenberg, High-yield, wafer-scale fabrication of ultralow-loss, dispersion-engineered silicon nitride photonic circuits, Nature Communications 12, 1 (2021). doi:10.1038/s41467-021-21973-z; arXiv:2005.13949
[11] H. Jung, S.-P. Yu, D. R. Carlson, T. E. Drake, T. C. Briles, and S. B. Papp, Tantala Kerr nonlinear integrated photonics, Optica 8, 811 (2021). doi:10.1364/OPTICA.411968; arXiv:2007.12958
[12] J. R. C. Woods, J. Daykin, A. S. K. Tong, C. Lacava, P. Petropoulos, A. C. Tropper, P. Horak, J. S. Wilkinson, and V. Apostolopoulos, Supercontinuum generation in tantalum pentoxide waveguides for pump wavelengths in the 900 nm to 1500 nm spectral region, Optics Express 28, 32173 (2020). doi:10.1364/oe.403089
[13] J. A. Black, R. Streater, K. F. Lamee, D. R. Carlson, S.-P. Yu, and S. B. Papp, Group-velocity-dispersion engineering of tantala integrated photonics, Optics Letters 46, 817 (2021). doi:10.1364/OL.414095; arXiv:2009.14190
[14] R. Y. Chen, M. D. B. Charlton, and P. G. Lagoudakis, Broadband stimulated four-wave parametric conversion on a tantalum pentoxide photonic chip, Optics Express 19, 26343 (2011). doi:10.1364/oe.19.026343
[15] M. Belt, M. L. Davenport, J. E. Bowers, and D. J. Blumenthal, Ultra-low-loss Ta2O5-core/SiO2-clad planar waveguides on Si substrates, Optica 4, 532 (2017). doi:10.1364/OPTICA.4.000532
[16] C.-Y. Tai, J. S. Wilkinson, N. M. B. Perney, M. C. Netti, F. Cattaneo, C. E. Finlayson, and J. J. Baumberg, Determination of nonlinear refractive index in a Ta2O5 rib waveguide using self-phase modulation, Optics Express 12, 5110 (2004). doi:10.1364/opex.12.005110
[17] C.-L. Wu, Y.-J. Hung, R. Fan, D.-H. Ou, J.-Y. Huang, T.-H. Yen, Y.-J. Chiu, M.-H. Shih, Y.-Y. Lin, A.-K. Chu, and C.-K. Lee, Tantalum pentoxide (Ta2O5) based athermal micro-ring resonator, OSA Continuum 2, 1198 (2019). doi:10.1364/osac.2.001198
[18] B. Ahluwalia, A. Subramanian, O. Hellso, N. Perney, N. Sessions, and J. Wilkinson, Fabrication of submicrometer high refractive index tantalum pentoxide waveguides for optical propulsion of microparticles, IEEE Photonics Technology Letters 21, 1408 (2009). doi:10.1109/lpt.2009.2027025
[19] L. Splitthoff, M. A. Wolff, T. Grottke, and C. Schuck, Tantalum pentoxide nanophotonic circuits for integrated quantum technology, Optics Express 28, 11921 (2020). doi:10.1364/OE.388080
| [] |
[
"Identifying Subcascades From The Primary Damage State Of Collision Cascades",
"Identifying Subcascades From The Primary Damage State Of Collision Cascades",
"Identifying Subcascades From The Primary Damage State Of Collision Cascades",
"Identifying Subcascades From The Primary Damage State Of Collision Cascades"
] | [
"Utkarsh Bhardwaj \nComputational Analysis Division\nBARC\n530 012VizagAPIndia\n",
"Manoj Warrier \nComputational Analysis Division\nBARC\n530 012VizagAPIndia\n\nHomi Bhabha National Institute\n400 094Anushaktinagar, MumbaiMaharashtraIndia\n",
"Utkarsh Bhardwaj \nComputational Analysis Division\nBARC\n530 012VizagAPIndia\n",
"Manoj Warrier \nComputational Analysis Division\nBARC\n530 012VizagAPIndia\n\nHomi Bhabha National Institute\n400 094Anushaktinagar, MumbaiMaharashtraIndia\n"
] | [
"Computational Analysis Division\nBARC\n530 012VizagAPIndia",
"Computational Analysis Division\nBARC\n530 012VizagAPIndia",
"Homi Bhabha National Institute\n400 094Anushaktinagar, MumbaiMaharashtraIndia",
"Computational Analysis Division\nBARC\n530 012VizagAPIndia",
"Computational Analysis Division\nBARC\n530 012VizagAPIndia",
"Homi Bhabha National Institute\n400 094Anushaktinagar, MumbaiMaharashtraIndia"
] | [] The morphology of a collision cascade is an important aspect in understanding the formation of defects and their distribution. While the number of subcascades is an essential parameter to describe the cascade morphology, the methods to compute this parameter are limited. We present a method to compute the number of subcascades from the primary damage state of the collision cascade. Existing methods analyse the peak damage state or the end of the ballistic phase to compute the number of subcascades, which is not always available in collision cascade databases. We use a density based clustering algorithm from the unsupervised machine learning domain to identify the subcascades from the primary damage state. To validate the results of our method we first carry out a parameter sensitivity study of the existing algorithms. The study shows that the results are sensitive to the input parameters and the choice of the time-frame analyzed. On a database of 100 collision cascades in W, we show that the method we propose, which analyzes the primary damage state to predict the number of subcascades, is in good agreement with the existing method that works on the peak state. We also show that the number of subcascades found with different parameters can be used to classify and group together the cascades that have similar time-evolution and fragmentation. | null | [
"https://export.arxiv.org/pdf/2306.04975v1.pdf"
] | 259,108,381 | 2306.04975 | 756f42078a7eca531643bb2e7de0c5bd10770d95 |
Identifying Subcascades From The Primary Damage State Of Collision Cascades
8 Jun 2023
Utkarsh Bhardwaj
Computational Analysis Division
BARC
530 012VizagAPIndia
Manoj Warrier
Computational Analysis Division
BARC
530 012VizagAPIndia
Homi Bhabha National Institute
400 094Anushaktinagar, MumbaiMaharashtraIndia
Identifying Subcascades From The Primary Damage State Of Collision Cascades
8 Jun 2023. Preprint submitted to Elsevier, June 9, 2023. Keywords: Cascade morphology, Subcascades, Collision cascades, Radiation damage, Molecular dynamics, Machine learning applications.
The morphology of a collision cascade is an important aspect in understanding the formation of defects and their distribution. While the number of subcascades is an essential parameter to describe the cascade morphology, the methods to compute this parameter are limited. We present a method to compute the number of subcascades from the primary damage state of the collision cascade. Existing methods analyse the peak damage state or the end of the ballistic phase to compute the number of subcascades, which is not always available in collision cascade databases. We use a density based clustering algorithm from the unsupervised machine learning domain to identify the subcascades from the primary damage state. To validate the results of our method we first carry out a parameter sensitivity study of the existing algorithms. The study shows that the results are sensitive to the input parameters and the choice of the time-frame analyzed. On a database of 100 collision cascades in W, we show that the method we propose, which analyzes the primary damage state to predict the number of subcascades, is in good agreement with the existing method that works on the peak state. We also show that the number of subcascades found with different parameters can be used to classify and group together the cascades that have similar time-evolution and fragmentation.
Introduction
The collision cascades that are caused by high energy irradiation can have different cascade morphologies. One of the standard parameters to understand the cascade morphology is the number of subcascades. Subcascades are damage spots with a high density of atomic displacements, separated from each other by less dense regions of damage [1,2]. Collision cascades initiated by a low energy primary knock-on atom (PKA) have a single damage spot. The PKA energy at which the average number of subcascades is greater than one is called the subcascade threshold energy and differs for different materials. At higher energies a collision cascade may get fragmented into smaller subcascades.
A subcascade can be either well-separated from other subcascades or it may overlap with one or more subcascades [3].
The cascade morphology and the formation of the subcascades affect the defect formation, defect sizes, defect morphologies and the spatial distribution of the defects [4,5]. Vacancies occupy the central region in a subcascade and interstitials form the surroundings in a roughly spherical fashion. This arrangement is most strictly found in low energy disconnected subcascades or low energy collision cascades. In the case of connected subcascades, large SIA clusters are formed at the locations of the connections of the subcascades [6]. A cascade with a higher number of subcascades will also have a relatively higher fraction of smaller clusters of vacancies and interstitials [5], except for the case of connected subcascades, where large interstitial clusters get formed at the overlapping region.
The damage from the disconnected subcascades, caused by secondary knock-on atoms carrying a fraction of the energy of the PKA, resembles a lower energy collision cascade. Above a certain energy threshold, cascade fragmentation becomes very probable and the damage regions do not show a linear increase in size and properties but rather become a combination of lower energy cascades [7].
The time evolution of a cascade can be divided into three phases, initial recoil phase, peak displacement phase and the primary damage state at the end of the recombinations. The damage regions of all the phases have been shown to spatially correlate well with each other [3].
Subcascades simulated with the Binary Collision Approximation Monte-Carlo (BCA-MC) method have been extensively studied for decades using various methods such as fuzzy clustering [8], the fractal method [9,10,11], identifying vacancy rich regions [10], etc. A relatively recent BCA-MC study of subcascades uses a different method of decomposing the complete domain into cubes, named elementary cubes (ECs), and finding connected regions of cubes that are above an energy threshold to calculate the subcascades. A similar approach has recently been used for analysing collision cascades in MD based on their peak damage state [12]. The method requires the position and energy of all the atoms at the peak damage state of the collision cascade for analysis. The other important parameters of interest in collision cascades, such as the number of defects, defect sizes and defect morphologies [13], can all be calculated from the primary damage state. In databases of collision cascades such as the open CascadesDB database [14], only the primary damage state is provided. Outputting the intermediate steps also increases the simulation output data size and impacts the efficiency due to the high amount of data that is required to be written to disk. Moreover, it is the distribution of defects and damage regions at the primary damage state which is of interest from the point of view of initializing higher scale models of radiation damage with a better spatial distribution. The higher scale models that incorporate spatial distribution, such as Kinetic Monte Carlo, account for reaction rates more accurately compared to models such as Mean Field Rate Theory, which lack spatial correlation [15].
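As an illustration of the EC idea described above, the following Python sketch deposits per-atom kinetic energies on a cubic grid and counts connected regions of cubes above an energy threshold; the cube size and threshold are illustrative assumptions, not the parameters used in the cited studies.

```python
# Illustrative sketch of elementary-cubes (EC) counting: per-atom kinetic
# energies are binned onto a cubic grid and connected regions of cubes
# above an energy threshold are counted.
import numpy as np
from scipy import ndimage

def count_subcascades_ec(positions, energies, box, cube=1.0, e_thresh=0.5):
    """positions: (N, 3) coordinates in [0, box) (nm); energies: per-atom
    kinetic energies (eV); cube: EC edge length (nm)."""
    nbins = int(np.ceil(box / cube))
    grid, _ = np.histogramdd(positions, bins=(nbins,) * 3,
                             range=[(0.0, box)] * 3, weights=energies)
    _, n_regions = ndimage.label(grid > e_thresh)  # face-connected regions
    return n_regions
```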
We present a method to calculate the number of subcascades by applying the density based clustering algorithm DBSCAN to the vacancies found in the primary damage state. DBSCAN is a well established unsupervised machine learning algorithm [16] for classifying dense regions into different clusters. The results of the DBSCAN algorithm are compared with those from the EC method. We study the EC method applied at the peak damage state and extend it for better modelling of the subcascade count. We show that the stages near the peak of the cascade evolution sometimes give a better idea of the number of subcascades. Moreover, the hyper-parameters of the EC method, such as the threshold energy, cannot be fixed strictly. We propose a time averaged and parameter averaged way to deduce the number of subcascades and present a comparison of the EC method with DBSCAN on a database of 100 collision cascades in W. We show that there is good agreement between the two methods.
A classification of the collision cascades based on the subcascade volumes around the peak, using unsupervised machine learning, is presented. It is shown that the different cascade morphologies, such as channelling cascades, connected subcascades, unconnected subcascades and single cascades, form their own classes. These can be used to obtain important statistics for initialization in higher scale models that study the evolution of the defects. We explore the relationship of the classes with the properties of interest at the primary damage state, such as the number of defects, the maximum vacancy cluster size and the number of point defect clusters. A definite relationship between the various classes and the properties of a collision cascade is observed, and insights relating the morphology to these parameters are presented.
Methods
Molecular Dynamics Simulation Dataset
We carry out subcascade analysis on MD simulations of collision cascades in W. A cubic bcc W crystal of 200 unit cells per side is first equilibrated at 0 bar pressure and a temperature of 300 K. The collision cascades are initiated with a PKA energy of 150 keV. The simulations were carried out at an initial temperature of 300 K and evolved for 40 ps. Electronic stopping was applied to atoms with energies above 10 eV [17]. A PKA was selected from the lattice atoms near the center of the cubic simulation cell, and the desired kinetic energy was given in a random initial direction. Periodic boundaries were used for each cascade.
It was ensured that the simulation cell size and the chosen PKA were such that no atom reached the boundaries, except occasionally in the case of channelling.
Temperature control at 300 K was applied to all atoms within three atomic layers from the cell borders using a Berendsen thermostat [18]. The MD simulations of the high energy collision cascades were carried out with the Derlet potential [19] stiffened by Björkas [20]. The software used for the MD simulations is LAMMPS [21]. The atomic coordinates are dumped every 500 time steps, which corresponds to at most 0.5 picoseconds.
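For illustration, the sketch below reproduces the main ingredients of such a cascade run through the LAMMPS Python module. This is a minimal sketch, not the exact input behind this dataset: the potential file name, the stopping-power table, the thermostat damping constant, the PKA atom id and the run length are all placeholders, and the initial zero-pressure equilibration is omitted.

```python
from lammps import lammps

lmp = lammps()
lmp.commands_string("""
units metal
boundary p p p
lattice bcc 3.165
region box block 0 200 0 200 0 200
create_box 1 box
create_atoms 1 box

# placeholder file name for the stiffened Derlet W potential
pair_style eam/fs
pair_coeff * * W_derlet.eam.fs W

velocity all create 300.0 12345

# Berendsen thermostat on a border shell roughly 3 layers thick
region inner block 3 197 3 197 3 197
group bulk region inner
group border subtract all bulk
fix integrate all nve
fix thermo border temp/berendsen 300.0 300.0 0.1

# electronic stopping above 10 eV from a tabulated stopping power
fix es all electron/stopping 10.0 elstop_W.dat

# adaptive timestep so fast atoms move less than 0.1 A per step
fix dtr all dt/reset 10 NULL 0.001 0.1

# 150 keV PKA along (1,1,1): |v| ~ 3968 A/ps for W; the id is a placeholder
group pka id 8000001
velocity pka set 2291.0 2291.0 2291.0 units box

dump snaps all custom 500 cascade.*.dump id type x y z
run 100000
""")
```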
Method for identifying subcascades from the primary damage state
The algorithm to find the number of subcascades at the final primary damage state is based on the following facts:
• The defect densities in the three main stages of the collision cascade evolution, viz. the initial recoil phase, the peak, and the final primary damage state, are strongly spatially correlated [3].
• Subcascades can be defined as regions of high damage separated by regions of lower damage [12].
• The core of a subcascade contains mostly vacancies, surrounded by the interstitials [6].
• Density based clustering using the DBSCAN algorithm can group the high density regions into different clusters and ignore the low density regions as noise [22].
We identify high density regions of vacancies using the DBSCAN clustering algorithm. DBSCAN is a well established density based clustering algorithm used for unsupervised classification in machine learning. It can ignore the regions of low vacancy density that might appear between two connected subcascades. It requires two input parameters, viz. the maximum distance and the minimum number of points. The first parameter denotes the threshold distance between two points (here vacancies) beyond which they are not considered neighbours. The minimum number of points denotes the number of neighbours a point should have to be considered a core point. The core points grow the cluster by including the neighbouring points. Points in the cluster that are non-core are not checked further for neighbours to include in the cluster.
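As an illustration, the following sketch counts subcascades with scikit-learn's DBSCAN implementation. The parameter values and the synthetic vacancy positions are placeholders; in our analysis the vacancies come from the primary damage state of a cascade.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_subcascades(vacancies, eps=25.0, min_samples=5):
    """Count subcascades as dense clusters of vacancy positions.

    vacancies: (N, 3) array of vacancy coordinates (Angstrom) at the
    primary damage state. Vacancies labelled -1 by DBSCAN are noise,
    i.e. sparse damage between subcascades, and are ignored.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(vacancies)
    return len(set(labels) - {-1})

# Example with two artificial damage pockets 100 Angstrom apart.
rng = np.random.default_rng(0)
cloud_a = rng.normal(0.0, 8.0, size=(40, 3))
cloud_b = rng.normal(100.0, 8.0, size=(40, 3))
print(count_subcascades(np.vstack([cloud_a, cloud_b])))  # -> 2
```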
Overview of the subcascade analysis at and around the peak
The subcascade analysis at the peak is done using the decomposition method described in [4]. The method has been applied to the subcascade analysis of BCA-MC [12] and MD simulations [23]. The steps of the algorithm are given below; a minimal code sketch of these steps follows the list.
1. Decompose the simulation box into equal sized cubic domains (ECs) of volume V_c and length l_c.
2. Filter out the atoms that have kinetic energy below a threshold value T_k.
3. Find the total energy of the atoms in each EC.
4. Filter out the ECs whose total kinetic energy is lower than a threshold value T_c.
5. The volume of the cascade is the volume of the remaining ECs.
6. Find the peak frame as the frame at which the volume of the cascade is maximum.
7. At the peak frame, join the ECs that share an edge or corner together.
8. Each mutually connected group of ECs that is disconnected from the others represents a different subcascade.
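The sketch below illustrates these steps for a single frame using NumPy and SciPy; the function and variable names are ours, and the units (Angstrom for positions and l_c, eV for energies, eV/nm^3 for T_c) are assumptions consistent with the text. Sweeping it over frames and picking the frame with the maximum returned volume implements steps 5 and 6.

```python
import numpy as np
from scipy.ndimage import label

def ec_analysis(positions, energies, box, l_c=15.0, T_k=0.1, T_c=27.0):
    """EC cascade volume and subcascade count at one frame (steps 1-8).

    positions: (N, 3) atom coordinates in Angstrom within [0, box),
    energies: (N,) kinetic energies in eV, box: cubic cell edge length.
    Returns (cascade volume in Angstrom^3, number of subcascades).
    """
    hot = energies > T_k                       # step 2: atom energy filter
    n = int(np.ceil(box / l_c))
    idx = np.minimum((positions[hot] / l_c).astype(int), n - 1)
    grid = np.zeros((n, n, n))                 # steps 1 and 3: energy per EC
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), energies[hot])
    density = grid / (l_c / 10.0) ** 3         # eV per nm^3 in each cube
    occupied = density > T_c                   # step 4: cube density filter
    # steps 7-8: cubes sharing a face, edge or corner are connected
    structure = np.ones((3, 3, 3), dtype=int)
    _, n_subcascades = label(occupied, structure=structure)
    return occupied.sum() * l_c ** 3, n_subcascades
```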
Values for the hyper-parameters l_c, T_k and T_c have been suggested in earlier publications [12,4,23]. We use values in the same range in our analysis.
The value of l_c is chosen to be 1.5 nm. It has been shown that different values of l_c give slight differences in the total volume as well as in the number of subcascades [4]. In previous studies [12,23], the value of T_k has been chosen from 0.1 to 0.4 eV. With a higher value of the threshold, the subcascade count in most cases increases slightly, because the cells falling in the connection between two overlapping subcascades get filtered out. However, a very high value may decrease the count by not accounting for smaller subcascades, especially in the case of channelling. A value of 77 eV/nm^3 has been suggested for T_c in W, calculated from the temperature needed to melt W [12]. However, the exact energy required to melt W in a non-equilibrium state may vary from at least half of this value to 20% higher [4]. The value used for T_c varies from 27 eV/nm^3 [23] to 77 eV/nm^3 [12].
The number of subcascades may change depending on the values of the hyper-parameters l_c, T_k and T_c [23]. The exact values for these thresholds in the algorithms are ambiguous. Moreover, to understand the cascade morphology it is not sufficient to analyze just one frame at the peak [23]; in the results section we show cases where it is difficult to ascertain the number of subcascades as ground truth. We use t-SNE [24] for dimensionality reduction. It is a neighbour graph based dimensionality reduction technique that specifically focuses on keeping the neighbourhood of the points in the reduced dimensional space similar to their neighbourhood in the higher dimensional space. For classification, HDBSCAN [16] is used, which is a density based classification algorithm very similar to DBSCAN.
Results
The database analyzed contains 100 collision cascades simulated at 150 keV PKA energy in bcc W. Figure 1 shows the subcascades found using the DBSCAN algorithm together with the high energy atoms at the peak and at the end of the recoil phase of the cascade.
Number of subcascades found at the primary damage state using DBSCAN
The marked subcascades can be seen as regions with a high density core of vacancies. The damage regions in all three phases overlap.

Comparison of the EC method for different parameters and time frames around the peak

1. The slightly connected damage regions that may appear as one overlapping region at the peak may become disconnected if we look at the adjacent frames, especially when T_c is low, as shown in Figure 3. This may also happen as we raise the value of T_c at the peak.

2. Some of the small, low energy disconnected subcascades may fade away before the peak volume of the whole cascade is reached, depending on the volume of the bigger subcascades. This is especially true when the T_c value is high, as shown in Figure 3. In spite of all these differences, the overall distribution of the number of subcascades is not significantly different for small changes in the hyper-parameters.
However, if we take any one value as the ground truth for the number of subcascades, it may not be suggestive of the morphology and evolution of the cascade: e.g., a cascade with a value of one subcascade at the peak might actually be a single spherical cascade, an overlapping connected pair of subcascades, or it might have a small off-shooting subcascade that subsides before the peak is reached. It has been shown that the subcascade count itself is not a good indicator of cascade morphology [23]. For this reason we take the feature vector of all the values found with different parameters and around the peak frame as the basis for classifying the cascade morphology, for which the results are shown in Section 3.4.

Figure 3: Four example collision cascades showing subcascades at the peak and its two adjacent frames for the two T_c values of 27 eV/nm^3 (left three plots) and 77 eV/nm^3 (right three plots). The different subcascades identified with the EC method are coloured with different colours. It can be noted that at the peak, for the lower T_c value the nearby subcascades merge into one, while for the higher T_c value the smaller damage regions might disappear. The adjacent frames are 500 simulation steps apart, which is less than 0.5 ps.

Figure 4 shows the comparison of the subcascade counts found around the peak and the subcascade counts found at the primary damage state with different parameters. The mean value is shown with the line, while the shaded region shows the 95% confidence region. The values are shown for all the hundred collision cascades in the dataset, sorted by the value found using DBSCAN. We see that the two methods almost always have overlapping shaded regions. The average difference between the mean subcascade counts found for each cascade using the two methods is 0.37. For 95% of the cascades the difference is below 1.0, and for 75% of the data the difference is below 0.5.
Comparison of EC method and DBSCAN
The maximum difference between the mean number of subcascades found using the two methods is 1.4. The cases with larger differences also show greater variability with respect to changes in the parameters. The rounded mean values of the number of subcascades found using the EC method and using DBSCAN are shown in Figure 5. The difference in most cases is limited to one; however, at higher numbers of subcascades the difference can be slightly larger. This variation is similar to the variation observed between the subcascade counts found using different parameters at the peak (Figure 2).
For 64% of the cascades the values match exactly, while in 35% the values differ by one. Only one cascade shows a difference of two. The small differences arise in cases where the ambiguity is inherent, as described earlier.
Classification of cascade morphologies
A single value of the subcascade count does not fully define the cascade morphology. We use the subcascade counts and subcascade volumes found with the different parameters and around the peak to represent the cascade morphology. To view the different classes of morphologies, we take the feature vector consisting of these values and apply the dimensionality reduction algorithm t-SNE to reduce it to two dimensions for plotting. Figure 6 shows a plot where each point represents a collision cascade. The two axes represent the first and second coordinate values found using the dimensionality reduction. We then classify the different dense regions, which signify prevalent morphologies, into classes using HDBSCAN. We obtained six classes on the current dataset. Figure 7 shows the distribution of various properties in the different morphological classes.
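A minimal sketch of this classification pipeline is given below, assuming the scikit-learn and hdbscan packages; the feature values, the perplexity and the minimum cluster size are placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE
import hdbscan

# Feature matrix: one row per cascade; columns are the subcascade counts
# and volumes over the 45 (parameter, frame) combinations. Random values
# stand in for the real features here.
features = np.random.rand(100, 90)

# Reduce to 2D while preserving neighbourhoods, then cluster densities.
embedding = TSNE(n_components=2, perplexity=15,
                 random_state=0).fit_transform(features)
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embedding)
print(sorted(set(labels)))  # -1 marks cascades left unclassified
```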
The class label-1 represents collision cascades with only a primary knock-on, and the mean number of subcascades slowly increases with the class label. The fractional value of the subcascade count, obtained by taking the mean of the subcascade counts over the different parameters, is used for plotting. The collision cascades from label-1 to label-4 have subcascade counts between 1 and 3, but all other properties, such as the defect count, the cluster sizes and the percentage of defects in clusters, show a decreasing trend. The properties of label-5 only show typical trends in the cluster sizes and, to some degree, in the defect count as well. In Figure 6 label-5 appears separate from all the other morphologies. Figure 8 shows typical cascade morphologies for each label, to illustrate what the labels mean qualitatively.
The class label-1 represents collision cascades with only a single major subcascade, which may also have a small, slightly connected off-shoot. This small subcascade would rarely be counted. In the 2nd class the single subcascade starts to appear more fragmented and the off-shooting subcascade becomes more prominent in some cases. In the 3rd and 4th classes the fragmentation increases further and more distinct subcascades also appear. In label-5 the subcascades are further apart and there is sometimes a small energy channelling in addition to the bigger subcascades, similar to label-4. Also, the vacancy regions appear smaller and more sparsely distributed. Label-6 contains the cascades that show long range channelling, sometimes piercing through the boundaries of the box, which is large enough for all the other cases.
Conclusion
The method developed in this study identifies the number of subcascades using just the primary damage state. This opens up the door to analyzing vast databases, such as the CascadesDB database of collision cascades, that only contain the primary damage data. The number of subcascades can thus be estimated from the final frame alone, just like the other important parameters such as the number of defects formed and the defect morphologies. The dataset used in this work covers a single material and uses the same interatomic potential. The distribution of collision cascade morphologies may change depending on the material and the interatomic potential. A comparative study of cascade morphologies across different interatomic potentials is out of the scope of this paper.
Data availability
The raw data required to reproduce these findings are partially available for download from the open database https://cascadesdb.org/. The code required to reproduce these findings is available for download from https://github.com/haptork/csaransh/ as part of the open source Csaransh software [25].
The different values of the maximum distance give results at different levels of detail. We choose three different values of 20, 25 and 30 Angstroms, which are slightly larger at the primary damage state than the values chosen for the length of the elementary cubes in the EC algorithm at the peak. Similarly, for the minimum number of points we choose three different values, 2, 5 and 10, to get results at different levels of detail. The rounded value of the mean of the subcascade counts found with the different parameters is taken as the number of subcascades. The number of subcascades does not differ for the different parameters in the case of well separated subcascades of substantial size. However, for the cases where an overlap may or may not appear depending on the level of detail, the results differ, as we show later in the results section.
The increase in the volume of the cascade with decreasing l_c is due to the fractal nature of the collision cascade. With lower values of l_c the number of subcascades increases, separating out more and more subcascades due to smaller unconnected regions. For values between 1.0 and 2.0 nm the number of subcascades does not change much.
The values at different time-steps around the peak and with different hyper-parameters all look reasonable and acceptable. For comparison with the results of the algorithm at the primary damage state we use T_k values of 0.1, 0.5 and 1.0 eV and T_c values of 18, 27 and 54 eV/nm^3. We take all nine combinations of these two variables. The nine combinations of these values, in a window from two time frames before to two time frames after the peak, make a total of 9 * (2 + 2 + 1) = 45 values. All these values are considered for the classification. The mean of these 45 values is taken as the fragmentation factor. The rounded fragmentation factor is the estimate of the number of subcascades. The mean of the inter-quartile range could also be taken as the fragmentation factor to remove the effect of outliers; however, in the present study we have taken the mean of all the values.
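A minimal sketch of this averaging is shown below, with `count_fn` standing in for any per-frame subcascade counter, such as the EC sketch given earlier.

```python
import numpy as np

def fragmentation_factor(frames, count_fn):
    """Mean subcascade count over the 45 (parameter, frame) combinations.

    frames: the five frames centred on the peak (peak-2 ... peak+2).
    count_fn(frame, T_k, T_c): returns the subcascade count of one frame
    for the given thresholds, e.g. via the EC method.
    """
    T_ks = [0.1, 0.5, 1.0]       # eV
    T_cs = [18.0, 27.0, 54.0]    # eV/nm^3
    counts = [count_fn(f, tk, tc)
              for f in frames for tk in T_ks for tc in T_cs]
    return np.mean(counts)       # round() of this is the estimate
```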
Classification of cascade morphology

Collision cascades are grouped into different classes based on their evolution and subcascade distribution. The number and volumes of subcascades at different time-frames around the peak are used as the feature vector for dimensionality reduction and unsupervised classification. The dimensionality reduction helps in visualizing and exploring the different classes having similar morphology.
Figure 1: The damage regions at three different phases, viz. the end of the recoil phase, the peak damage and the primary damage state, of six different collision cascades. The subcascades found using the DBSCAN algorithm at the primary damage state overlap with the damage regions at all the phases.

Figure 2 shows a comparison of the number of subcascades found for the different cascades with different hyper-parameters and time-frames using ECs. The x-axis represents all the 100 cascades sorted by the number of subcascades found at the peak frame with T_k = 0.1 eV and T_c = 27 eV/nm^3. The y-axis shows the number of subcascades for the different parameters, represented by the different colors and line-styles. We see that there are slight differences in the exact value of the number of subcascades both for the adjacent frames around the peak and if we decrease the value of T_c by half. The reasons why the numbers of subcascades differ are listed below.
Figure 2: Number of subcascades found with different parameters. The variations are small, especially for the lower subcascade counts.
3. In the case of channelling, when many subcascades are created far from each other, the peak of one region of damage might not be the peak of the other region; in these cases the concept of a peak is weak and might give a high or low number of subcascades. This is the reason why we see more disagreements in the region where the subcascade count is particularly high due to channelling of the knock-on atoms.

4. The connected subcascades may show overlap when the threshold energies (T_c, T_k) are low, while if we increase the threshold energies the less dense regions of connectivity may disappear (Figure 3).

5. Some of the smaller subcascades, which are often less dense, might also disappear if the threshold energies are increased (Figure 3).
Figure 4: Number of subcascades found with the EC method and the DBSCAN method with different parameters. The lines show the mean and the shaded regions show the 95% confidence region. The mean values are similar and the shaded regions generally overlap.
Figure 5: Number of subcascades found with the EC method and the DBSCAN method after rounding the mean value over the different parameters. The methods show good agreement and the variations are within the same range as found with different parameters for the same method.
Figure 6: The similarity relationship and classes of all the collision cascade morphologies. Each point represents a collision cascade. The points representing collision cascades with similar morphologies are placed close to each other using t-SNE dimensionality reduction applied on the feature vector based on subcascade volumes. Different colours represent different classes of morphologies found using the HDBSCAN density based clustering algorithm.
Figure 7: Distribution of various cascade properties in the different morphological classes.
Figure 8: Sample cascade morphologies for each class label.

The application of well-established techniques from machine learning, such as DBSCAN and t-SNE, has enabled us to provide an efficient and easy to understand method for estimating the number of subcascades and classifying the cascade morphologies. We have also shown a classification of the cascade morphologies based on the subcascade volumes throughout the evolution and at different levels of detail. The classes show regular trends with the other parameters of interest, which provides useful insights. The classes also show qualitative morphological characteristics that are not captured by the subcascade count alone. Morphologies such as channelling and single cascades, which may or may not have nearby small subcascades, fall into different classes. The classification can be used in an exploratory way, or it may be used to classify new cascades into one of the known categories. In the future, more feature vectors or representations of collision cascades can be tried for such a categorization and can be used to study morphological correlations with defect morphologies. This information can be used to initialize the collision cascades in a higher scale model like KMC.
[1] H. Heinisch, B. Singh, On the structure of irradiation-induced collision cascades in metals as a function of recoil energy and crystal structure, Philosophical Magazine A 67 (2) (1993) 407-424.
[2] R. Dierckx, The importance of the pka-energy spectrum for radiation damage simulation, Journal of Nuclear Materials 144 (3) (1987) 214-227.
[3] E. Antoshchenkova, L. Luneville, D. Simeone, R. E. Stoller, M. Hayoun, Fragmentation of displacement cascades into subcascades: A molecular dynamics study, Journal of Nuclear Materials 458 (2015) 168-175.
[4] A. De Backer, C. Domain, C. Becquart, L. Luneville, D. Simeone, A. E. Sand, K. Nordlund, A model of defect cluster creation in fragmented cascades in metals based on morphological analysis, Journal of Physics: Condensed Matter 30 (40) (2018) 405701.
[5] A. E. Sand, D. Mason, A. De Backer, X. Yi, S. Dudarev, K. Nordlund, Cascade fragmentation: deviation from power law in primary radiation damage, Materials Research Letters 5 (5) (2017) 357-363.
[6] A. Calder, D. J. Bacon, A. V. Barashev, Y. N. Osetsky, On the origin of large interstitial clusters in displacement cascades, Philosophical Magazine 90 (7-8) (2010) 863-884.
[7] R. E. Stoller, Primary radiation damage formation, in: R. J. M. Konings (Ed.), Comprehensive Nuclear Materials, Elsevier, 2012.
[8] M. Hou, Fuzzy clustering methods: an application to atomic displacement cascades in solids, Physical Review A 39 (6) (1989) 2817.
[9] D. Simeone, L. Luneville, Y. Serruys, Cascade fragmentation under ion beam irradiation: A fractal approach, Physical Review E 82 (1) (2010) 011122.
[10] Y. Satoh, S. Kojima, T. Yoshiie, M. Kiritani, Criterion of subcascade formation in metals from atomic collision calculation, Journal of Nuclear Materials 179 (1991) 901-904.
[11] Y.-T. Cheng, M.-A. Nicolet, W. Johnson, From cascade to spike: a fractal-geometry approach, Physical Review Letters 58 (20) (1987) 2083.
[12] A. De Backer, A. E. Sand, K. Nordlund, L. Luneville, D. Simeone, S. Dudarev, Subcascade formation and defect cluster size scaling in high-energy collision events in metals, Europhysics Letters 115 (2) (2016) 26001.
[13] U. Bhardwaj, A. E. Sand, M. Warrier, Graph theory based approach to characterize self interstitial defect morphology, Computational Materials Science 195 (2021) 110474.
[14] Open database of cascade damage configurations hosted by IAEA. URL https://cascadesdb.iaea.org/
[15] R. E. Stoller, S. I. Golubov, C. Domain, C. Becquart, Mean field rate theory and object kinetic monte carlo: A comparison of kinetic models, Journal of Nuclear Materials 382 (2-3) (2008) 77-90.
[16] L. McInnes, J. Healy, S. Astels, hdbscan: Hierarchical density based clustering, J. Open Source Softw. 2 (11) (2017) 205.
[17] H. Hemani, A. Majalee, U. Bhardwaj, A. Arya, K. Nordlund, M. Warrier, Inclusion and validation of electronic stopping in the open source lammps code, arXiv preprint arXiv:2005.11940 (2020).
[18] H. J. C. Berendsen, J. P. M. Postma, W. F. van Gunsteren, A. DiNola, J. R. Haak, Molecular dynamics with coupling to an external bath, The Journal of Chemical Physics 81 (8) (1984) 3684-3690. doi:10.1063/1.448118.
[19] P. M. Derlet, D. Nguyen-Manh, S. L. Dudarev, Multiscale modeling of crowdion and vacancy defects in body-centered-cubic transition metals, Phys. Rev. B 76 (2007) 054107. doi:10.1103/PhysRevB.76.054107.
[20] C. Björkas, K. Nordlund, S. Dudarev, Modelling radiation effects using the ab-initio based tungsten and vanadium potentials, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 267 (18) (2009) 3204-3208, proceedings of the Ninth International Conference on Computer Simulation of Radiation Effects in Solids. doi:10.1016/j.nimb.2009.06.123.
[21] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, et al., LAMMPS: a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales, Computer Physics Communications 271 (2022) 108171.
[22] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al., A density-based algorithm for discovering clusters in large spatial databases with noise, in: KDD, Vol. 96, 1996, pp. 226-231.
[23] A. De Backer, C. S. Becquart, P. Olsson, C. Domain, Modelling the primary damage in Fe and W: influence of the short-range interactions on the cascade properties: Part 2, multivariate multiple linear regression analysis of displacement cascades, Journal of Nuclear Materials 549 (2021) 152887.
[24] L. Van der Maaten, G. Hinton, Visualizing data using t-SNE, Journal of Machine Learning Research 9 (11) (2008).
[25] U. Bhardwaj, H. Hemani, M. Warrier, N. Semwal, K. Ali, A. Arya, Csaransh: Software suite to study molecular dynamics simulations of collision cascades, Journal of Open Source Software (Sep 2019). doi:10.21105/joss.01461.
| [
"https://github.com/haptork/csaransh/"
] |
[
"Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints",
"Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints"
] | [
"Huazi Cao ",
"Jiahao Shen ",
"Cunjia Liu ",
"Bo Zhu ",
"Shiyu Zhao "
] | [] | [] | This paper studies the motion planning problem of the pick-and-place of an aerial manipulator that consists of a quadcopter flying base and a Delta arm. We propose a novel partially decoupled motion planning framework to solve this problem. Compared to the state-of-the-art approaches, the proposed one has two novel features. First, it does not suffer from increased computation in high-dimensional configuration spaces. That is because it calculates the trajectories of the quadcopter base and the end-effector separately in the Cartesian space based on proposed geometric feasibility constraints. The geometric feasibility constraints can ensure the resulting trajectories satisfy the aerial manipulator's geometry. Second, collision avoidance for the Delta arm is achieved through an iterative approach based on a pinhole mapping method, so that the feasible trajectory can be found in an efficient manner. The proposed approach is verified by three experiments on a real aerial manipulation platform. The experimental results show the effectiveness of the proposed method for the aerial pickand-place task.Note to Practitioners-Aerial manipulators have attracted increasing research interest in recent years due to their potential applications in various domains. In this paper, we particularly focus on the motion planning problem of the pick-and-place of aerial manipulators. We propose a novel partially decoupled motion planning framework, which calculates the trajectories of the quadcopter base and the end-effector in Cartesian space, respectively. Geometric feasibility constraints are proposed to coordinate the trajectories to ensure successful execution. Three experiments on a real aerial manipulator platform demonstrate the effectiveness of the approach. In future research, we will address the motion planning problem of aerial manipulators in complex environments. | null | [
"https://export.arxiv.org/pdf/2306.04970v1.pdf"
] | 259,108,485 | 2306.04970 | 0144ac309dd43bd60f8dc3263ede56e2cacb4f9a |
Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints
Huazi Cao
Jiahao Shen
Cunjia Liu
Bo Zhu
Shiyu Zhao
Abstract-This paper studies the motion planning problem of the pick-and-place of an aerial manipulator that consists of a quadcopter flying base and a Delta arm. We propose a novel partially decoupled motion planning framework to solve this problem. Compared to the state-of-the-art approaches, the proposed one has two novel features. First, it does not suffer from increased computation in high-dimensional configuration spaces. That is because it calculates the trajectories of the quadcopter base and the end-effector separately in the Cartesian space based on proposed geometric feasibility constraints. The geometric feasibility constraints can ensure the resulting trajectories satisfy the aerial manipulator's geometry. Second, collision avoidance for the Delta arm is achieved through an iterative approach based on a pinhole mapping method, so that the feasible trajectory can be found in an efficient manner. The proposed approach is verified by three experiments on a real aerial manipulation platform. The experimental results show the effectiveness of the proposed method for the aerial pick-and-place task.
Note to Practitioners-Aerial manipulators have attracted increasing research interest in recent years due to their potential applications in various domains. In this paper, we particularly focus on the motion planning problem of the pick-and-place of aerial manipulators. We propose a novel partially decoupled motion planning framework, which calculates the trajectories of the quadcopter base and the end-effector in Cartesian space, respectively. Geometric feasibility constraints are proposed to coordinate the trajectories to ensure successful execution. Three experiments on a real aerial manipulator platform demonstrate the effectiveness of the approach. In future research, we will address the motion planning problem of aerial manipulators in complex environments.
Index Terms-Aerial manipulator, Delta arm, Aerial pick-and-place, Motion planning, Collision avoidance
I. INTRODUCTION
An aerial manipulator is a novel type of flying robot that consists of a multirotor and a robotic arm. Due to their ability to move quickly and operate precisely in high-altitude and complex workspaces, aerial manipulators have potential applications in various domains, including transportation, inspection, and maintenance (see [1]-[4] for recent surveys).
Aerial manipulation has been studied from various aspects such as platform design [5]-[7], motion control [8]-[10], motion planning [11]-[13] and visual servoing [14]-[17]. Our work focuses on the motion planning problem of aerial pick-and-place tasks, where the aerial manipulator is required to grasp and move objects in the environment (see Fig. 1). It is noted that safety and high efficiency are important for the aerial pick-and-place task. This motivates our study to focus on an effective motion planning scheme that ensures collision-free trajectories.
Different from the motion planning of multirotors, the motion planning of an aerial manipulator is more challenging since the aerial manipulator has more degrees of freedom and is required to manipulate the objects. Different from the motion planning of a ground mobile manipulator, the motion planning of an aerial manipulator is more challenging since the aerial manipulator flies in a 3D environment rather than a 2D environment. In addition, the robotic arm and the multirotor base are dynamically coupled, which means their movements mutually affect each other. Existing approaches for motion planning for aerial manipulation can be classified into two categories based on the space in which planners calculate trajectories.
The first category is to plan the motion of the aerial manipulator in the configuration space. In early works, the RRT* method has been used to plan the path of the aerial manipulator in the configuration space without considering the dynamics [11], [12]. As a consequence, the resulting trajectory may not be executable for the aerial manipulator when its movement is fast. To address this issue, the dynamics of the aerial manipulator must be considered in motion planning. The existing methods that consider the dynamics in motion planning can be classified into three types.
The first type uses a kinematics controller as a local plan-ner in the sampling-based global planner [13]. It guarantees the feasibility of the trajectory for the real system and also enables searching for a solution directly in the reduced and more relevant task space. However, collision avoidance is not inherently embedded in the local planning, which may cause its result is not collision-free. The second type uses the differential flatness principle to ensure the dynamical feasibility [18]. In particular, motion planning methods for a special long-reach aerial manipulator have been proposed in [19], [20] based on this point of view. The platform in these works consists of a multirotor with a long bar extension that incorporates a lightweight dual arm in the tip. Since the dynamical feasibility constraints represented by the differential flatness are nonlinear, this type of method may be computationally expensive. The third type uses trajectory generation to ensure the dynamical feasibility [21].
In the trajectory generation, the trajectories are represented by spline curves. The dynamical feasibility constraints are considered in the trajectory generation problem by utilizing the derivative property of the spline curves. Planning in the configuration space, however, suffers high computation costs when the dimension of the space is high [22]. Unfortunately, aerial manipulators generally have high degrees of freedom (DoFs), which therefore motivates researchers to study other approaches to solve the motion planning problem. The second category of approaches directly plans the trajectory of the end-effector in the Cartesian space. The motion planning of the whole aerial manipulator is often solved practically by decoupling the flying base and the manipulator [23]. Firstly, the trajectory of the flying base approaching the manipulation position is calculated. Then, the motion of the end-effector is planned by assuming that the flying base stays in the same pose during manipulation. However, this method is conservative and inefficient in terms of energy and execution time [1]. To address this issue, the dynamic feasibility constraint must be considered in the trajectory planning of the end-effector. Therefore, a dynamically feasible task space planning method for underactuated aerial manipulators based on the differential flatness principle has been proposed in [24]. However, this method does not consider obstacle avoidance which is generally required in real scenarios.
The above analysis reveals the limitations of the existing motion planning approaches for aerial manipulators. Planning in the configuration space incurs high computational costs due to the high DoF of aerial manipulators, while the existing methods of planning in Cartesian space do not consider obstacle avoidance, a crucial factor in real-world scenarios. To address these limitations, this paper proposes a novel framework that integrates the motion planning of both the flying base and the manipulator in a constrained workspace. The proposed algorithm is designed for an aerial manipulator consisting of a quadcopter and a Delta arm. The novelty of our approach is outlined below: 1) We propose a novel partially decoupled motion planning method for the aerial pick-and-place task. This method calculates the dynamically feasible and collision-free trajectories of the flying base and the manipulator in Cartesian space, respectively. The resulting trajectories are coordinated for successful execution. By solving the motion planning problem in Cartesian space, the high DoF of the aerial manipulator can be handled more efficiently than planning in the configuration space with a much lower computational load. Compared with the existing methods that plan trajectories in the configuration space, this method does not suffer from the problem of increased computation in high-dimensional configuration spaces. Compared with the existing methods that plan trajectories in Cartesian space, the proposed method ensures that the trajectories are collision-free.
2) We propose novel geometric feasibility constraints to ensure the trajectories of the quadcopter and the end-effector can be successfully executed. Our proposed constraints are linearly represented by the positions of the quadcopter and the end-effector, whereas the original geometry constraints are nonlinearly represented by the configuration of the aerial manipulator. By using the constraints, our method ensures that the resulting trajectories satisfy the geometry of the aerial manipulator. This is particularly important for motion planning of the aerial manipulator in Cartesian space.
3) Collision avoidance for the Delta arm is achieved through an efficient iterative approach based on a pinhole mapping method. At each iteration, a quadratic programming (QP) problem is solved to determine the collision-free trajectory for the end-effector. A collision avoidance term, designed based on the pinhole mapping method and collision check results, is formulated into the QP problem, so that the aerial manipulator is driven away from the obstacles in the local environment. Compared to collision avoidance in the configuration space [18], [21], the proposed iterative approach is faster as it is computed in Cartesian space.
The proposed algorithms are verified by three experiments on an aerial manipulator platform in the real aerial pick-and-place task. Unlike the traditional Delta arm, the Delta arm used in this paper drives the joint angles by three four-bar linkages to magnify the control forces [25]. Experiments including collision avoidance, aerial retrieval, and aerial transport are conducted to validate the novelties.
The rest of this paper is structured as follows. The problem statement and preliminaries are given in Section II. Kinematics and geometric feasibility constraints of the aerial manipulator are presented in Section III. The motion planning of the quadcopter base is proposed in Section IV. Section V gives the motion planning of the Delta arm. Then, the experimental verification is given in Section VI. Conclusions are drawn in Section VII.
II. PROBLEM STATEMENT AND PRELIMINARIES
A. Problem statement
The platform is an aerial manipulator that consists of a quadcopter and a Delta arm (see Fig. 2(a)). The base of the Delta arm is attached underneath the quadcopter. The endeffector used in this paper is a gripper and it is mounted on the end of the Delta arm, whose position can be controlled by the three actuators attached to the base of the Delta arm. The orientation of the end-effector is set the same as the orientation of the quadcopter [26].
The aerial manipulator has three reference frames: the inertial frame Σ_I, the quadcopter body-fixed frame Σ_B, and the Delta arm frame Σ_D (see Fig. 2(a)). Σ_I is an inertial frame whose z-axis is in the direction of the gravity vector. Σ_B is rigidly attached to the quadcopter base; its origin coincides with the center of gravity of the quadcopter. Σ_D is rigidly attached to the Delta arm base at its geometric center p_C.
Let p_B ∈ R^3 and R_B ∈ SO(3) denote the position of the quadcopter in Σ_I and the rotation matrix from Σ_B to Σ_I, respectively. Let p_E ∈ R^3 denote the position of the end-effector in Σ_I. Then, the geometric relationship between p_E and p_B can be represented as
$$ p_E - p_B = R_B \, p_E^B \qquad (1) $$
where p_E^B ∈ R^3 is a function of the Delta arm's actuated joint angles q_1, q_2, q_3.
For a pick-and-place task, denote by p_O ∈ R^3 and ψ_O ∈ R the position and the orientation of the target object in Σ_I, respectively, whereas ψ_E ∈ R denotes the orientation angle of the end-effector. Let t_G denote the time at which p_E arrives at p_O and t_grip the closing time of the gripper.
The goal of the motion planning for the aerial pick-and-place is to calculate collision-free trajectories for the quadcopter and the Delta arm to move from a starting position to a feasible grasping configuration and from that grasping configuration to the end position. Given the geometric relationship, the dynamical feasibility constraints, the obstacles in the environment, the start position p_B,start, and the end position p_B,end, the resulting trajectories must be collision-free and satisfy p_E(t) = p_O and ψ_E(t) = ψ_O for t ∈ [t_G, t_G + t_grip].
B. Overview of the proposed motion planning method
The proposed motion planning method is partially decoupled, which calculates the trajectories of the quadcopter base p B (t) and the end-effector p E (t) in Cartesian space, respectively. The geometric feasibility constraints are proposed to coordinate the trajectories to ensure successful execution (see Section III-B for details). The overall architecture of the motion planning and control system is shown in Fig. 3. The system is decomposed into three components.
1) The first component is the motion planning of the quadcopter base. Its inputs are the positions of the object and the obstacles. Its output is the trajectory of the quadcopter base p_B,ref(t). The motion planning of the quadcopter base can be further decomposed into four steps. The first step is the feasible grasping position calculation. Its role is to find a suitable position for the quadcopter base that allows the aerial manipulator to grasp the object. The details of this step can be seen in Section IV-A. The second step is path planning. Its role is to find a path for the quadcopter base to move from a given starting position to a feasible grasping position and from that grasping position to a given end position. In this paper, we use the A* method to calculate the path [27, Section 12.1.1]. The third step is flight corridor generation. Its role is to generate a safe flight corridor for the quadcopter base, which constrains the motion of the quadcopter base to avoid collisions. The details of this step can be seen in Section IV-B. The fourth step is trajectory generation. Its role is to calculate the trajectory of the quadcopter base based on the piecewise Bézier curve. We use the method proposed in [28] to ensure the resulting trajectory satisfies the safety, dynamical feasibility, and waypoint constraints. Compared with the existing methods for aerial manipulators (e.g., [21]), the proposed method calculates the trajectory of the quadcopter base in Cartesian space. Compared with the existing methods for the standard quadcopter (e.g., [28]), the proposed method guarantees that the aerial manipulator arrives at the feasible grasping configuration without collisions.
2) The second component is the motion planning of the Delta arm. This motion planning method can be further decomposed into three steps. The first step is the initial condition calculation. Its role is to calculate the position, velocity, and acceleration of the end-effector at the beginning of the manipulation stage. The second step is the optimal trajectory planning of the end-effector based on the Bézier curve. Its role is to calculate the trajectory of the end-effector from the initial position to the object under several constraints. The trajectory planning of the end-effector is represented in a QP problem form. In particular, we propose geometric feasibility constraints of the aerial manipulator and encode these constraints into the QP problem to ensure the trajectories satisfy the geometry of the aerial manipulator. The third step is collision avoidance; its role is to ensure the trajectory of the end-effector is collision-free. In this step, the collisions between the aerial manipulator and the obstacles in a local map are detected based on the GJK method. The second and third steps are run iteratively: if there is a collision, the objective function of the QP problem in the second step is updated by a pinhole mapping method, and the two steps are repeated until no collision occurs. All the corresponding sections introducing these steps are listed in Fig. 3. Compared with the existing methods [21], [29], the proposed method requires lower computational power since the collision avoidance of the Delta arm is achieved by an iterative approach in Cartesian space.
3) The third component is the controller of the aerial manipulator. Its inputs are the trajectories of the quadcopter base and the end-effector. Its outputs are the total force f ∈ R of the rotors, the torque vector τ ∈ R^3 of the rotors, the torque τ_G ∈ R of the gripper, and the torque vector τ_M ∈ R^3 that each actuator should generate. The controller consists of three subcomponents. The first is an extended state observer (ESO)-based flight controller. It was proposed in our previous work [30] and uses ESOs to estimate the dynamic coupling between the quadcopter base and the Delta arm. Its role is to generate the force f and torque vector τ for the quadcopter base so that the trajectory of the quadcopter can be tracked. The second is the end-effector controller. Its role is to control the gripper to grasp or release objects. The third is the Delta arm controller. Its role is to generate the torque vector τ_M for the Delta arm so that the trajectory of the end-effector can be tracked. The details of the Delta arm controller can be seen in our previous work [30].
The steps can be classified into offboard processes and onboard processes. In Fig. 3, steps in the small grey rectangle are done on an offboard computer, while processes in the white rectangle run onboard the aerial manipulator during flights.
C. Preliminaries to Bézier curves
An n-th degree Bézier curve is defined by a set of control points and Bernstein polynomial bases. Let c_i ∈ R^3 and b_{i,n}(τ) ∈ R denote the i-th control point and Bernstein polynomial basis, respectively. Then, the n-th degree 3D Bézier curve is written as
$$ B(\tau) = \sum_{i=0}^{n} c_i^T \, b_{i,n}(\tau), \quad \text{where} \quad b_{i,n}(\tau) = \binom{n}{i} \tau^i (1-\tau)^{n-i} \qquad (2) $$
where τ ∈ [0, 1] and $\binom{n}{i}$ is the binomial coefficient. According to [31, Section 2.4], the derivative of the Bézier curve can be obtained by Lemma 1. In addition, the Bézier curve B(τ) is entirely confined within the convex hull defined by all its control points, which is referred to as the convex hull property (see Lemma 2).
Lemma 1 (Derivative [31]): Let $B^{(k)}(\tau) = \sum_{i=0}^{n-k} c_i^{(k)} b_{i,n-k}(\tau)$ denote the k-th derivative of B(τ). Then the control points of $B^{(k)}(\tau)$ can be calculated iteratively by $c_i^{(k)} = (n-k+1)\,(c_{i+1}^{(k-1)} - c_i^{(k-1)})$, where i = 0, 1, ..., n-k.
Lemma 2 (Convex hull property [31]): Let H = {a_0 c_0 + a_1 c_1 + ... + a_n c_n | a_0 + a_1 + ... + a_n = 1, a_i ≥ 0} denote the convex hull defined by all the control points. Then B(τ) ∈ H for all τ ∈ [0, 1].
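As a concrete illustration of eq. (2) and Lemma 1, the following sketch evaluates a cubic 3D Bézier curve and builds the control points of its first derivative; the control point values are arbitrary placeholders.

```python
import numpy as np
from math import comb

def bezier(ctrl, tau):
    """Evaluate an n-th degree Bezier curve as in eq. (2)."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * tau**i * (1 - tau)**(n - i)
                      for i in range(n + 1)])
    return basis @ ctrl

def derivative_ctrl(ctrl):
    """Control points of the first derivative (Lemma 1 with k = 1)."""
    n = len(ctrl) - 1
    return n * (ctrl[1:] - ctrl[:-1])

ctrl = np.array([[0., 0., 0.], [1., 2., 0.], [3., 2., 1.], [4., 0., 1.]])
print(bezier(ctrl, 0.5))                   # point on the curve
print(bezier(derivative_ctrl(ctrl), 0.5))  # velocity at tau = 0.5
```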
III. KINEMATICS AND GEOMETRIC FEASIBILITY CONSTRAINTS
This section proposes the kinematics and geometric feasibility constraints of the aerial manipulator.
A. Kinematics of the aerial manipulator
According to (1), the time derivative of p_E is
$$ \dot{p}_E = \dot{p}_B + \dot{R}_B p_E^B + R_B \dot{p}_E^B = \dot{p}_B + R_B R_D^B \dot{p}_E^D - [R_B p_E^B]_\times \, \omega \qquad (3) $$
where ω ∈ R^3 is the angular velocity vector of the quadcopter expressed in Σ_B, and [·]_× denotes the skew-symmetric matrix.
Let p_C^B ∈ R^3 denote the position of the center of the Delta arm base in Σ_B. Let p_E^B ∈ R^3 and p_E^D ∈ R^3 denote the positions of the end-effector in Σ_B and Σ_D, respectively. The relationship between p_E^B and p_E^D is
$$ p_E^B = R_D^B p_E^D + p_C^B \qquad (4) $$
where R_D^B ∈ SO(3) is the rotation matrix from Σ_D to Σ_B. The lengths of the upper and lower arms are denoted by l_U and l_L, as illustrated in Fig. 2(a). The circumradii of the top base and of the bottom end-effector base are denoted by r_F and r_M, respectively. The length of the gripper is denoted by l_g. The relationship between the end-effector position p_E^D and the joint vector q = [q_1, q_2, q_3]^T ∈ R^3 is
$$ \left\| p_E^D + l_G - h_i \right\|^2 = l_L^2, \quad i = 1, 2, 3 \qquad (5) $$
where l_G = [0, 0, l_g]^T and
$$ h_i = \begin{bmatrix} -(r_F - r_M + l_U \cos q_i) \cos[(i-1)\pi/3] \\ (r_F - r_M + l_U \cos q_i) \sin[(i-1)\pi/3] \\ l_U \sin q_i \end{bmatrix} \qquad (6) $$
On the one hand, given a joint vector q, the position p_E^D can be solved from (5) by the forward kinematics. On the other hand, given a position p_E^D, the joint vector q can be solved from (5) by the inverse kinematics. Details can be found in [32], [33].
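As an illustration of the inverse kinematics implied by (5)-(6), the sketch below solves the three residual equations numerically. The geometry values and frame conventions are placeholders rather than the platform's actual parameters, and in practice the closed-form solutions referenced in [32], [33] would be used instead of a generic root finder.

```python
import numpy as np
from scipy.optimize import fsolve

# placeholder geometry (metres), not the platform's actual sizes
l_U, l_L, r_F, r_M, l_g = 0.10, 0.20, 0.06, 0.03, 0.05

def h(i, q_i):
    """h_i from eq. (6) for arm i = 1, 2, 3."""
    a = r_F - r_M + l_U * np.cos(q_i)
    ang = (i - 1) * np.pi / 3
    return np.array([-a * np.cos(ang), a * np.sin(ang), l_U * np.sin(q_i)])

def ik_residual(q, p_E_D):
    """Residuals of eq. (5) for the three actuated joints."""
    l_G = np.array([0.0, 0.0, l_g])
    return [np.sum((p_E_D + l_G - h(i + 1, q[i]))**2) - l_L**2
            for i in range(3)]

p_E_D = np.array([0.0, 0.0, -0.22])   # desired end-effector position
q = fsolve(ik_residual, x0=np.zeros(3), args=(p_E_D,))
print(np.degrees(q))                  # joint angles in degrees
```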
As can be seen from Fig. 2(a), the joint angles of the Delta arm are driven by planar four-bar linkages. The relationship between the joint angles and the crank position angles can be calculated by the kinematics of the planar four-bar linkage [34, Section 3.6].
B. Geometric feasibility constraints
Combining (1) and (4), the geometric relationship between the end-effector and the quadcopter is
$$ p_E - p_B = R_B (R_D^B p_E^D + p_C^B), \quad p_E^D \in W \qquad (7) $$
where W is the workspace of the Delta arm and can be calculated by the forward kinematics of the Delta arm. The workspace is approximated as a convex polyhedron [35]. Therefore, the expression of the workspace is
$$ W = \{ p \,|\, A_D p \le b_D \} \qquad (8) $$
According to (7), the range of p_E − p_B is determined by W and R_B. We define R_B = R_ψ R_{θ,ϕ}, where R_ψ is the rotation matrix determined by the yaw angle ψ, and R_{θ,ϕ} is the rotation matrix determined by the pitch angle θ and the roll angle ϕ. The yaw angle of the quadcopter is constant, i.e., ψ = ψ_O, when the aerial manipulator is grasping or placing an object. Then, (7) is rewritten as
$$ R_{\psi_O}^T (p_E - p_B) = R_{\theta,\phi} (R_D^B p_E^D + p_C^B), \quad p_E^D \in W \qquad (9) $$
To make the above equation more concise, we define
$$ W_{\theta,\phi} = \{ R_{\theta,\phi} (R_D^B p_E^D + p_C^B) \,|\, p_E^D \in W \} \qquad (10) $$
Therefore, (9) is rewritten as
R T ψ O (p E − p B ) ∈ W θ,ϕ .
To linearize the geometric relationship (9), we define W_R = {p | w_min ≤ p ≤ w_max} as the revised workspace, which satisfies W_R ⊂ W_{θ,ϕ}. Since the roll and pitch angles of the quadcopter are small when the aerial manipulator is manipulating, the bounds of θ and ϕ can be determined from several experiments. Let θ_min, θ_max denote the minimum and maximum of θ, and let ϕ_min, ϕ_max denote the minimum and maximum of ϕ. We calculate W_R in two steps.
The first step is to calculate the boundaries of W_{θ,ϕ}. Combining (8) and (10), the expression of W_{θ,ϕ} can be rewritten as

W_{θ,ϕ} = {p | A_D (R_{θ,ϕ} R_D^B)^T p ≤ b_D + A_D R_D^{B,T} p_C^B}.   (11)

According to (11), we obtain the boundary sets W_{θ=θ_min,ϕ=0}, W_{θ=θ_max,ϕ=0}, W_{θ=0,ϕ=ϕ_min}, W_{θ=0,ϕ=ϕ_max}. The second step is to calculate the intersection W_I of these sets. According to the definition of the intersection, we have

W_I = {p | A_D (R_{θ_min,0} R_D^B)^T p ≤ b_D + A_D R_D^{B,T} p_C^B,
          A_D (R_{θ_max,0} R_D^B)^T p ≤ b_D + A_D R_D^{B,T} p_C^B,
          A_D (R_{0,ϕ_min} R_D^B)^T p ≤ b_D + A_D R_D^{B,T} p_C^B,
          A_D (R_{0,ϕ_max} R_D^B)^T p ≤ b_D + A_D R_D^{B,T} p_C^B}.   (12)
Since the expression of the intersection (12) is complicated, it may be inconvenient when applied to real systems. We therefore take the largest cuboid that can be inscribed within the intersection as W_R. The cuboid can be calculated by the method proposed in [36]. Then, w_min = [w_{x,min}, w_{y,min}, w_{z,min}]^T and w_max = [w_{x,max}, w_{y,max}, w_{z,max}]^T are determined by the size of the cuboid. Fig. 2(b) gives an illustration of calculating the revised workspace in the pitch direction. The geometric feasibility constraints are then
w_min ≤ R_{ψ_O}^T (p_E − p_B) ≤ w_max,   (13)

where

R_{ψ_O} = [cos ψ_O, −sin ψ_O, 0; sin ψ_O, cos ψ_O, 0; 0, 0, 1].   (14)
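As a concrete reading of (13)-(14) (a sketch, not from the original text), checking the feasibility of a candidate pair (p_E, p_B) amounts to one rotation and a componentwise box test; the bounds below are the experimental values reported in Section VI.

import numpy as np

w_min = np.array([-0.06, -0.06, -0.60])   # revised-workspace bounds from Section VI
w_max = np.array([ 0.06,  0.06, -0.40])

def R_psi(psi):
    # Yaw rotation matrix of Eq. (14).
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geometrically_feasible(p_E, p_B, psi_O):
    # Constraint (13): w_min <= R_{psi_O}^T (p_E - p_B) <= w_max componentwise.
    d = R_psi(psi_O).T @ (p_E - p_B)
    return bool(np.all(w_min <= d) and np.all(d <= w_max))

print(geometrically_feasible(np.array([0.0, 0.0, -1.5]),
                             np.array([0.0, 0.0, -1.0]), 0.0))   # True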
IV. MOTION PLANNING FOR THE QUADCOPTER BASE

This section presents a method to generate the trajectory of the quadcopter for the aerial pick-and-place task. This method consists of four steps: feasible grasping position, path planning, flight corridor generation, and Bézier curve-based trajectory generation. The purposes and relationships of these steps are given in Section II-B. In the algorithm, the path planning can be achieved by an existing method; in our work, we use the A* method to obtain the path in the 3D grid map that represents the environment of the task. The Bézier curve-based trajectory generation is achieved by an existing method proposed in [28]. It bounds the positions and higher-order dynamics of the trajectory entirely within safe regions by using the Bernstein polynomial basis and formulating the trajectory generation problem as typical convex programs.
Compared to the existing methods for standard quadcopters [28], the proposed method for the quadcopter base of the aerial manipulator has two novelties. First, the feasible grasping position is calculated to ensure the aerial manipulator can manipulate the object. Second, the volume of the aerial manipulator changes with the movement of the Delta arm. To address this issue, the aerial pick-and-place task is divided into two stages: moving and manipulation stages. The flight corridors in the two stages are obtained, respectively. The details are shown as follows.
A. Feasible grasping position
To grasp the object, the position of the end-effector must arrive at p O with an orientation angle of ψ O . The feasible grasping position of the quadcopter is constrained by the geometric shape of the aerial manipulator. Let p B,f ∈ R 3 denote the feasible grasping position. Let R B,f ∈ SO(3) denote the desired rotation matrix of the quadcopter base at the feasible grasping position. According to (1), the feasible grasping position of the quadcopter is
p_{B,f} = p_O − R_{B,f} p_E^B.   (15)

According to (15), one can conclude that R_{B,f} and p_E^B need to be determined before calculating p_{B,f}.
In the manipulation stage, the yaw angle of the quadcopter is set as ψ_O to satisfy the grasp angle constraint of the end-effector. We assume that the roll and pitch angles of the quadcopter are small when the quadcopter base is around p_{B,f}. This assumption is reasonable since the motion of the quadcopter is conservative. According to the assumption, we have R_{B,f} = R_{ψ_O}. To ensure the manipulability of the Delta arm, we let the end-effector stay at the center of W_R when the aerial manipulator picks up the object. Then, (15) is rewritten as

p_{B,f} = p_O − 0.5 R_{ψ_O} (w_min + w_max).   (16)
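A direct sketch of (16) (not from the original text); the object pose is the one used in the aerial retrieval experiment and the bounds are the Section VI values, used here only as illustrative inputs.

import numpy as np

def feasible_grasp_position(p_O, psi_O, w_min, w_max):
    # Eq. (16): hover so that the object sits at the center of W_R.
    c, s = np.cos(psi_O), np.sin(psi_O)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return p_O - 0.5 * R @ (w_min + w_max)

p_O = np.array([0.0, -2.00, -1.24])
w_min = np.array([-0.06, -0.06, -0.60])
w_max = np.array([ 0.06,  0.06, -0.40])
print(feasible_grasp_position(p_O, 0.0, w_min, w_max))   # [ 0., -2., -0.74]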
For an aerial pick-and-place task, we already have the start position p_{B,start}, the feasible grasping position p_{B,f}, and the end position p_{B,end}. Then, the path of the quadcopter can be obtained by the A* method.
B. Flight corridor generation
The flight corridor is a collection of convex overlapping polyhedra that models free space and provides a connected corridor containing the resulting path. A convex decomposition method proposed in [37] is adopted to generate the flight corridor by inflating the resulting path. However, this method was originally designed for a traditional quadcopter with a fixed volume, while the volume of the aerial manipulator changes with the movement of the Delta arm. Therefore, the method cannot be directly used for the aerial manipulator. To address this issue, we calculate the flight corridors in the moving and the manipulation stages, respectively. In the moving stage, the position of the end-effector is set to stay at the top point p B top ∈ R 3 of the Delta arm's workspace W R . From the definition of W R , we have
p_top^B = [0.5(w_{x,min} + w_{x,max}), 0.5(w_{y,min} + w_{y,max}), w_{z,min}]^T.   (17)
The shape of the aerial manipulator now can be approximated as a sphere with a radius r S . Then, we can use the convex decomposition method to generate the flight corridor in the moving stage.
In the manipulation stage, we use a designed polyhedron as the flight corridor to ensure the object is reachable for the aerial manipulator (see Fig. 4). The designed polyhedron is designed based on the geometric feasibility constraints (13) and it is represented as
w_min ≤ R_{ψ_O}(p − p_{B,d}) ≤ w_max.   (18)
The duration time in this polyhedron is determined by the mechanical behavior of the gripper. We set the duration time as the closing time of the gripper t grip .
V. MOTION PLANNING FOR THE DELTA ARM
In this section, we calculate the collision-free trajectory of the Delta arm in Cartesian coordinates. The proposed method for the Delta arm utilizes the resulting trajectory of the quadcopter. In the moving stage, the Delta arm stays at an initial state and its end-effector stays at a fixed position p_top^B relative to the quadcopter base. The position p_top^B can be calculated by (17). Therefore, the Delta arm does not require additional motion planning calculations in the moving stage.
Let t_B denote the time at which the quadcopter base enters the designed polyhedron; it is also the beginning time of the manipulation stage. The time t_B can be determined by t_B = t_G − 2v_{E,max}/a_{E,max}, where t_G is the time at which the end-effector reaches the object, and v_{E,max} and a_{E,max} are the maximum velocity and acceleration of the end-effector, respectively. The procedure of the manipulation stage is as follows. From t_B to t_G, the end-effector moves to the object. Then, the aerial manipulator keeps the position of the end-effector for the duration time t_grip to pick up or place the object. After picking up or placing the object, the Delta arm returns to its initial state. According to this procedure, the trajectory of the end-effector from t_B to t_G needs to be calculated. The details of calculating this trajectory are given as follows.
A. Initial condition
The initial condition for the end-effector consists of the initial position p_{E,t_B}, the initial velocity ṗ_{E,t_B}, and the initial acceleration p̈_{E,t_B}. They are calculated as follows.
1) Initial position: According to (1), the initial position p_{E,t_B} is calculated by

p_{E,t_B} = p_{B,t_B} + R_{B,t_B} p_top^B,   (19)

where p_{B,t_B} and R_{B,t_B} = [r_{1,t_B}, r_{2,t_B}, r_{3,t_B}] denote p_B and R_B at the time t_B, respectively. According to (19), we calculate p_{B,t_B} and R_{B,t_B} to obtain p_{E,t_B}. p_{B,t_B} can be directly obtained from the trajectory of the quadcopter. The matrix R_{B,t_B} is calculated based on the differential flatness of the quadcopter. At the time t_B, the yaw angle of the quadcopter base is ψ_O, which ensures the orientation angle of the end-effector equals ψ_O. The unit orientation vector in the ground plane is r_g = [cos ψ_O, sin ψ_O, 0]^T. According to [38], we have

r_{3,t_B} = (p̈_{t_B} + g e_3) / ‖p̈_{t_B} + g e_3‖,   (20)
and the vectors r_{1,t_B} and r_{2,t_B} can be determined by

r_{2,t_B} = (r_{3,t_B} × r_g) / ‖r_{3,t_B} × r_g‖,  r_{1,t_B} = r_{2,t_B} × r_{3,t_B}.   (21)
2) Initial velocity and acceleration: Let δ_I denote a small time step. We can calculate p_{E,t_B−δ_I} and p_{E,t_B+δ_I} according to the above method. Then, the derivatives are approximated by differences. The initial velocity and acceleration are

ṗ_{E,t_B} = (p_{E,t_B} − p_{E,t_B−δ_I})/δ_I,  p̈_{E,t_B} = (p_{E,t_B+δ_I} − 2p_{E,t_B} + p_{E,t_B−δ_I})/δ_I².   (22)
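The difference scheme (22) is simple to reproduce; the sketch below (not from the original text) uses a toy trajectory in place of the flatness-based computation (19)-(21), with the callable p_E_of standing in for that mapping.

import numpy as np

def initial_derivatives(p_E_of, t_B, delta=1e-3):
    # Eq. (22): backward difference for the velocity, central second
    # difference for the acceleration, with a small time step delta_I.
    p_m, p_0, p_p = p_E_of(t_B - delta), p_E_of(t_B), p_E_of(t_B + delta)
    v = (p_0 - p_m) / delta
    a = (p_p - 2.0 * p_0 + p_m) / delta**2
    return v, a

# Toy trajectory p(t) = (t, t^2, -2): expect v ~ (1, 2 t_B, 0) and a ~ (0, 2, 0).
v, a = initial_derivatives(lambda t: np.array([t, t**2, -2.0]), t_B=1.0)
print(v, a)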
B. Optimal trajectory planning
The trajectory of the end-effector is calculated by an iterative approach. The trajectory planning of the end-effector is formulated as a QP problem. At each iteration, the objective function of the QP problem is updated and the QP problem is solved to calculate the collision-free trajectory of the end-effector. Let p_{E,ref}(t), t ∈ [t_B, t_G], denote the trajectory.
An n_E-th order Bézier curve is adopted to represent the trajectory:

p_{E,ref}(τ_M) = Σ_{i=0}^{n_E} c_{E,i} b_{i,n_E}(τ_M),   (23)

where c_{E,i} = [c_{E,x,i}, c_{E,y,i}, c_{E,z,i}]^T ∈ R³ and b_{i,n_E}(τ_M) are the i-th control point and Bernstein polynomial basis of the Bézier curve, respectively, and τ_M = (t − t_B)/(t_G − t_B). We denote the parameter vector of the trajectory as c_E = [c_{E,x,0}, ..., c_{E,x,n_E}, c_{E,y,0}, ..., c_{E,y,n_E}, c_{E,z,0}, ..., c_{E,z,n_E}]^T. The QP problem is then formulated as

min J = c_E^T Q_{O,E} c_E + q_{O,E}^T c_E
s.t. A_{E,eq} c_E = b_{E,eq},  A_{E,ie} c_E ≤ b_{E,ie},   (24)

where Q_{O,E} ∈ R^{3(n_E+1)×3(n_E+1)} is the Hessian matrix of the objective function and is semidefinite, q_{O,E} ∈ R^{3(n_E+1)} is a vector, A_{E,eq} ∈ R^{18×3(n_E+1)} and A_{E,ie} ∈ R^{(9n_E+4)×3(n_E+1)} are constraint matrices, and b_{E,eq} ∈ R^{18} and b_{E,ie} ∈ R^{9n_E+4} are constraint vectors. The linear equality constraint (A_{E,eq} c_E = b_{E,eq}) represents the endpoint constraints. The linear inequality constraint (A_{E,ie} c_E ≤ b_{E,ie}) consists of the dynamical feasibility, geometric feasibility, and grasp constraints. These constraints are adopted to ensure that the solution of problem (24) is collision-free and can be executed successfully.
Definitions and roles of the objective and the constraints are as follows.
1) Objective function:
The objective function is denoted as J = J_J + J_O, where J_J is the cost function that minimizes the jerk along the trajectory, and J_O is a penalty function for collision. The details of the two terms are

J_J = Σ_{i=1}^{m} ∫_{T_{i−1}}^{T_i} (j_x²(t) + j_y²(t) + j_z²(t)) dt,   (25)

J_O = Σ_{k=1}^{n_O} λ_k Σ_{i=0}^{n_E} [(c_{i,x} − x_{M,k})² + (c_{i,y} − y_{M,k})² + (c_{i,z} − z_{M,k})²],   (26)

where j_x, j_y, j_z denote the jerks of the trajectory in the corresponding three dimensions, respectively, λ_k is a changing weighting factor, and x_{M,k}, y_{M,k}, z_{M,k} are the corresponding elements of the obstacle mirror position p_{M,k}. We define the obstacle mirror set as O_M = {p_{M,1}, p_{M,2}, ..., p_{M,n_O}}, where n_O is the number of obstacles that collide with the aerial manipulator during the whole iteration process. The obstacle mirror position p_{M,k} can be obtained through a pinhole mapping of the corresponding obstacle position. As the iterations progress, the algorithm can guide the trajectory of the end-effector towards the obstacle mirror positions while ensuring that it is collision-free with respect to the obstacles. See Section V-C for details of calculating the changing weighting factors λ_1, λ_2, ..., λ_{n_O} and the obstacle mirror set O_M. By applying Lemma 1 to (25), we can obtain J = c_E^T Q_{O,E} c_E + q_{O,E}^T c_E. We omit the details of Q_{O,E} and q_{O,E} for brevity.
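To make the structure of (24) concrete, the following sketch (not from the original text) solves a toy instance with SciPy; Q, q, and the constraint data below are stand-ins for Q_{O,E}, q_{O,E}, A_{E,eq}, b_{E,eq}, A_{E,ie}, b_{E,ie}, not the matrices derived in this section.

import numpy as np
from scipy.optimize import minimize, LinearConstraint

n_var = 6
Q = np.eye(n_var)                                   # stand-in for Q_{O,E}
q = np.zeros(n_var)                                 # stand-in for q_{O,E}
A_eq = np.array([[1.0] + [0.0] * (n_var - 1)])      # e.g., pin the first parameter
b_eq = np.array([0.5])
A_ie = np.vstack([np.eye(n_var), -np.eye(n_var)])   # box bounds as A_ie c <= b_ie
b_ie = np.full(2 * n_var, 1.0)

res = minimize(lambda c: c @ Q @ c + q @ c, np.zeros(n_var),
               jac=lambda c: 2.0 * Q @ c + q,
               constraints=[LinearConstraint(A_eq, b_eq, b_eq),
                            LinearConstraint(A_ie, -np.inf, b_ie)],
               method="trust-constr")
print(np.round(res.x, 3))   # first component pinned to 0.5, the rest near 0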
2) Constraints: The constraints for the trajectory planning problem of the end-effector consist of endpoint, dynamical feasibility, geometric feasibility, and grasp constraints. The details of the constraints are given as follows:
The endpoint constraints are introduced to ensure the trajectory of the end-effector starts at p_{E,t_B} and ends at p_O with the desired velocities and accelerations. The endpoint constraints are given as
c_{0,μ,E} = μ_{E,t_B},  s_E^{−1} c_{0,μ,E}^{(1)} = μ̇_{E,t_B},  s_E^{−2} c_{0,μ,E}^{(2)} = μ̈_{E,t_B},
c_{n_E,μ,E} = μ_O,  s_E^{−1} c_{n_E,μ,E}^{(1)} = μ̇_{E,t_G},  s_E^{−2} c_{n_E,μ,E}^{(2)} = μ̈_{E,t_G},   (27)

where c_{i,μ,E}^{(k)} denotes the i-th control point of d^k f_μ(τ)/dτ^k and can be calculated by Lemma 1, μ ∈ {x, y, z}, and s_E = t_G − t_B. The values p_{E,t_B}, ṗ_{E,t_B}, p̈_{E,t_B} can be obtained from Section V-A, and the terminal conditions are set as ṗ_{E,t_G} = 0 and p̈_{E,t_G} = 0.
The dynamical feasibility constraints consist of velocity and acceleration constraints to ensure the generated trajectory is dynamically feasible. The dynamical feasibility constraints are

μ̇_min ≤ s_E^{−1} c_{i,μ,E}^{(1)} ≤ μ̇_max,  i = 0, 1, ..., n_E − 1,
μ̈_min ≤ s_E^{−2} c_{i,μ,E}^{(2)} ≤ μ̈_max,  i = 0, 1, ..., n_E − 2,   (28)

where μ ∈ {x, y, z}, the subscript min denotes the lower bound of the corresponding variable, and the subscript max denotes the upper bound of the corresponding variable. The geometric feasibility constraints are introduced to ensure the trajectories of the quadcopter and the end-effector are geometrically feasible for the Delta arm. According to (13), this can be described as

R_{ψ_O} w_min ≤ p_{E,ref}(t) − p_{B,ref}(t) ≤ R_{ψ_O} w_max.   (29)
As stated above, the trajectory of the end-effector is represented by an n_E-th order Bézier curve, while the trajectory of the quadcopter is part of an n_B-th order Bézier curve. To reveal the geometric feasibility constraints on the parameters, we use an n_E-th order Bézier curve to fit the trajectory of the quadcopter from t_B to t_G. Then, the geometric feasibility constraints on the parameters can be formulated as linear algebraic equations. Let p_{B,0}, p_{B,1}, ..., p_{B,n_E} denote n_E + 1 points of the trajectory p_{B,ref}(t), t ∈ [t_B, t_G]. These points divide the trajectory into n_E segments, and the time interval between any two adjacent points is the same. The n_E-th order Bézier curve is denoted as h(t) = Σ_{i=0}^{n_E} c_{B,i} b_{i,n_E}(τ_M), where c_{B,i} = [c_{B,x,i}, c_{B,y,i}, c_{B,z,i}]^T is the i-th control point of h(t). The control points can be obtained by fitting h(t) to the points p_{B,0}, p_{B,1}, ..., p_{B,n_E}. Then, the geometric feasibility constraints can be rewritten as

R_{ψ_O} w_min ≤ Σ_{i=0}^{n_E} (c_{E,i} − c_{B,i}) b_{i,n_E}(τ_M) ≤ R_{ψ_O} w_max.   (30)

According to the convex hull property (see Lemma 2), the geometric feasibility constraints on the parameters are

w_{r,μ,min} + c_{B,μ,i} ≤ c_{E,μ,i} ≤ w_{r,μ,max} + c_{B,μ,i},  i = 0, 1, ..., n_E,   (31)

where w_{r,μ,min} and w_{r,μ,max} are the corresponding elements of R_{ψ_O} w_min and R_{ψ_O} w_max, respectively, and μ ∈ {x, y, z}.
The grasp constraints are introduced to ensure the gripper does not collide with the object. To avoid such a collision, we require the end of the trajectory to lie in a cone. Let t_C denote the time of entering the cone, and let p_{E,ref}(t_C) = [x_{E,t_C}, y_{E,t_C}, z_{E,t_C}]^T denote the position of the end-effector at time t_C. Then, we have

−tan γ ≤ (x_{E,t_C} − x_O)/(z_{E,t_C} − z_O) ≤ tan γ,
−tan γ ≤ (y_{E,t_C} − y_O)/(z_{E,t_C} − z_O) ≤ tan γ,   (32)

where γ is the angle of the cone. By substituting (23) into (32), the grasp constraints (32) can be rewritten in the linear form

Σ_{i=0}^{n_E} (c_{E,x,i} + c_{E,z,i} tan γ) b_{i,n_E}(τ_C) ≤ x_O + z_O tan γ,
Σ_{i=0}^{n_E} (c_{E,x,i} − c_{E,z,i} tan γ) b_{i,n_E}(τ_C) ≥ x_O − z_O tan γ,
Σ_{i=0}^{n_E} (c_{E,y,i} + c_{E,z,i} tan γ) b_{i,n_E}(τ_C) ≤ y_O + z_O tan γ,
Σ_{i=0}^{n_E} (c_{E,y,i} − c_{E,z,i} tan γ) b_{i,n_E}(τ_C) ≥ y_O − z_O tan γ,   (33)

where τ_C = (t_C − t_B)/(t_G − t_B).
C. Collision avoidance
This subsection proposes a method to detect collisions and to calculate the changing weighting factors λ_1, λ_2, ..., λ_{n_O} and the obstacle mirror set O_M in (26). Before the iteration process, O_M is set as an empty set, i.e., O_M = ∅, and n_O is set to zero. At each iteration, collisions are detected using the solution of the QP problem (24). If the solution is collision-free, the iteration process is terminated and the collision-free solution is output as the trajectory of the end-effector. If there are collisions between the aerial manipulator and the obstacles in the environment, we calculate the changing weighting factors and O_M for the next iteration.
The collision detection method determines whether the solution of the QP problem (24) is collision-free. The method considers the collision of the Delta arm and the end-effector. The trajectory of the quadcopter is collision-free, which is ensured by the flight corridor; therefore, we do not consider the collision of the quadcopter. The proposed collision detection method consists of three steps.
We first use a shape polyhedron to represent the Delta arm and the end-effector in collision detection. The vertices of the shape polyhedron are p_{U,i}, p_{L,i}, i = 1, 2, 3 (see the blue points in Fig. 2(c)). Then, we have

p_{U,i} = p_B + R_ψ R_D^B p̄_{U,i},  i = 1, 2, 3,   (34)

where

p̄_{U,i} = [r_S cos((1+2i)π/3), r_S sin((1+2i)π/3), 0]^T.   (35)

In addition, the lower vertices can be calculated by

p_{L,i} = p_E + R_ψ R_D^B p̄_{L,i},  i = 1, 2, 3,   (36)

where

p̄_{L,i} = [l_C cos((1+2i)π/6), l_C sin((1+2i)π/6), l_C]^T,   (37)
where l_C is a constant safety parameter determined by the size of the gripper when the gripper is open. Second, we introduce a local map to reduce the computational cost of the collision detection. This is because detecting collisions in the entire environment can be computationally expensive, especially if the environment is large or if there are many obstacles. We decrease the number of obstacles to be checked by adding a box around the end-effector and the object and detecting collisions only inside it. The size of the box is determined by p_{B,t_B} and p_O. We let M_local = {p ∈ R³ | l_min ≤ p ≤ l_max} denote the box, where l_min and l_max can be calculated by

l_{μ,min} = min{μ_{B,t_B}, μ_O} − l_s,  l_{μ,max} = max{μ_{B,t_B}, μ_O} + l_s,   (38)

where μ ∈ {x, y, z}; l_{μ,min}, l_{μ,max}, μ_{B,t_B}, μ_O are the corresponding elements of l_min, l_max, p_{B,t_B}, p_O, respectively; and l_s is a constant safety parameter. Third, we use the GJK method proposed in [39] to detect collisions. If the QP problem solution reveals a collision between the aerial manipulator and obstacle i, then the two endpoints T_{i,L} and T_{i,R} of the solution within the collision area with respect to the obstacle can be determined (see Fig. 4). Let O_i denote the center of obstacle i. Then, we calculate the obstacle mirror position p_{M,i} by a pinhole mapping method. The position of the pinhole is p_{P,i} = 0.5(T_{i,L} + T_{i,R}). Then, we have
p_{M,i} = 2p_{P,i} − O_i.   (39)
The values of the weighting factors are updated by λ_i = λ_i + αΔλ_i, where i = 1, 2, ..., n_O, and α > 0 is a constant gain. The parameter Δλ_i is the step size used for updating the i-th weighting factor and is a critical factor that affects the computation time of the method. The expression for Δλ_i is

Δλ_i = ‖T_{i,L} − T_{i,R}‖.   (40)
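Equations (39)-(40) reduce to a few vector operations; the sketch below (not from the original text) uses hypothetical collision endpoints and an obstacle center, with α set to the experimental value of Section VI.

import numpy as np

def obstacle_mirror(T_L, T_R, O_i):
    # Eq. (39): pinhole mapping of the obstacle center through the midpoint
    # of the in-collision trajectory segment.
    p_P = 0.5 * (T_L + T_R)
    return 2.0 * p_P - O_i

def updated_weight(lam, T_L, T_R, alpha=3.0):
    # Weight update lambda_i <- lambda_i + alpha * ||T_L - T_R||, using Eq. (40).
    return lam + alpha * np.linalg.norm(T_L - T_R)

T_L, T_R = np.array([0.0, 0.10, -1.0]), np.array([0.0, 0.35, -1.0])  # hypothetical
O_i = np.array([0.0, 0.20, -1.1])                                    # hypothetical
print(obstacle_mirror(T_L, T_R, O_i), updated_weight(0.0, T_L, T_R))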
VI. EXPERIMENTAL VERIFICATION
This section presents experimental results to verify the effectiveness of the proposed motion planning algorithms. The experimental video is available at https://youtu.be/q7O9v7l2Oho.
First of all, we describe the experimental setup. The aerial manipulator platform used in the experiments consists of a quadcopter and a Delta arm. The wheelbase of the quadcopter is 0.65 m. The mass of the quadcopter (including a battery) is 3.60 kg. The Delta arm consists of a mounting base (0.56 kg), a movable robotic arm (0.44 kg), and a gripper (0.32 kg). A flight controller proposed in our previous work [30] runs on a Pixhawk 4 autopilot. This controller uses extended state observers (ESOs) to estimate dynamic coupling between the aerial manipulator and the Delta arm. The proposed motion planning method runs on an onboard Intel NUC i7 computer with ROS (an open-source robotics middleware suite). The experiments are conducted in a Vicon system, which provides accurate position measurements of the quadcopter base and the end-effector. The measurement data of the Vicon system is sent to a ground control station through an ethernet switch. Then, the ground control station sends the measurement data to the aerial manipulator with a frequency of 100 Hz through a 5 GHz wireless router.
The perception of the aerial manipulator is not surveyed in this paper. We assume that the obstacles in the environment are already known. In particular, the locations of the obstacles and the object can be obtained by the Vicon system. Then, the environment can be previously built as a grid map which consists of a set of cubes. The size of each cube is set as 0.1 m. This map is used for the path planning of the quadcopter base. The description of the controllers is provided in Section II-B.
In all the examples, we use the same set of parameters for the motion planner: α = 3.0, r_S = 0.50 m, l_C = 0.06 m, l_S = 0.20 m. The velocity and acceleration constraints for the quadcopter base are set as 0.5 m/s and 1.0 m/s², respectively. The velocity and acceleration constraints for the end-effector are set as 0.5 m/s and 2.0 m/s², respectively. The bounds of the geometric feasibility constraints are set as w_min = [−0.06, −0.06, −0.60]^T and w_max = [0.06, 0.06, −0.40]^T.
A. Example 1: Collision avoidance
We validate the effectiveness of the proposed method in the collision avoidance task. The environment of this example is illustrated in Fig. 5. There are two types of obstacles in the environment. The first type of obstacles restrict the motion of the quadcopter base and the size of the flight corridor. The collision avoidance for this type of obstacles is achieved by the flight corridor. The second type of obstacles restrict the motion of the Delta arm and must be avoided through the motion planning of the Delta arm. In Section V-C, we propose an iterative collision avoidance method to avoid the second type of obstacles. In order to show its effectiveness, the result of the motion planning with the collision avoidance method is calculated. Fig. 5 shows the results of the motion planning with the collision avoidance method and without the collision avoidance method. The generated flight corridor is shown in Fig. 5(a). As shown in Fig. 5(b), there are four obstacles near the object. In particular, the aerial manipulator collides with one of these obstacles in the resulting trajectory without the collision avoidance method. The collision area is shown as a red dot line in Fig. 5 and its length is 0.26 m. The resulting trajectory with the collision avoidance method is shown in Fig. 5 (see the blue line). As can be seen, the result of the proposed method is collision-free. The total computational time for calculating the path, flight corridor, and trajectory of the quadcopter in the collision avoidance task is 46.3 ms, while the computational time for calculating the trajectory of the end-effector is 36.4 ms.
B. Example 2: Aerial retrieval
The goal of this experiment is to retrieve an object by the aerial manipulator. In the task, the aerial manipulator moves to and picks up the object. Then, the aerial manipulator returns to the start position. The start position is set as [0, 0, −2.00]. The position and the orientation angle of the object are set as [0, −2.00, −1.24] and 0 • (see Fig. 6(a)), respectively. As shown in Fig. 6(b), screens are in the flight environment, which needs to be avoided by the aerial manipulator in the moving stage. The aerial manipulator is set to fly around the screens. As shown in Fig. 6(a), there are three obstacles near the object. The aerial manipulator has to avoid colliding with these obstacles. Fig. 6(b)-(d) shows the result of the aerial retrieval experiment. The generated flight corridor is shown in Fig. 6(c). The span time of the experiment is 58 s. The mean tracking error of the quadcopter base in the moving stage is 0.05 m, while that in the manipulation stage is 0.01 m. The quadcopter flies faster in the moving stage than in the manipulation stage. However, the higher velocity also causes a larger tracking error. The computational time for calculating the path, flight corridor, and trajectory of the quadcopter in the aerial retrieval task is 43.9 ms, while the computational time for calculating the trajectory of the end-effector is 25.7 ms.
C. Example 3: Aerial transport
The goal of this experiment is to grasp an object and place it at the target location. In the task, the aerial manipulator first flies to and picks up the object. Then, the aerial manipulator flies to the target location and places the object there. Finally, the aerial manipulator returns to the start position. The position and the orientation angle of the object are set as [0, 2.00, −1.22] and 0°, respectively. The position of the target location is set as [0, −2.00, −1.24]. The whole process of the experiment is shown in Fig. 7(c). In the experiment, the aerial manipulator is also set to fly around the screens to utilize the experiment field. As shown in Fig. 7(a) and (b), there are two obstacles near the object and three obstacles near the target location. The aerial manipulator has to avoid colliding with these obstacles. Fig. 7(c)-(e) show the result of the aerial transport experiment. The generated flight corridor is shown in Fig. 7(d). The span time of the experiment is 88 s. The mean tracking error of the quadcopter base in the moving stage is 0.06 m, while that in the manipulation stage is 0.01 m. The computational time for calculating the trajectory of the quadcopter in the aerial transport task is 45.6 ms, while the computational time for calculating the trajectory of the end-effector is 32.8 ms. The experiment result validates the effectiveness of the proposed motion planning method in the aerial transport task.
VII. CONCLUSION
This paper proposed a novel partially decoupled motion planning method of the aerial manipulator for the aerial pick-and-place task. The method calculates dynamically feasible and collision-free trajectories of the flying base and the manipulator respectively in Cartesian space. The proposed geometric feasibility constraints ensure that the resulting trajectories are coordinated to complete tasks. The proposed method is verified by three experiments. These experiments confirm that the geometric feasibility constraints ensure that the trajectories of the quadcopter base and the end-effector satisfy the geometry of the aerial manipulator. The results also illustrate the ability of the proposed method to avoid obstacles. This ability is limited by the partially decoupled structure, since the obstacles near the object are avoided by the Delta arm rather than by the whole aerial manipulator. To avoid large obstacles near the object, however, both the quadcopter base and the Delta arm must be used. This will be an important direction for future research.
Fig. 1: Aerial pick-and-place by an aerial manipulator. The experimental video is available at https://youtu.be/q7O9v7l2Oho.
Fig. 2: Coordinates, revised workspace, and shape polyhedron of the aerial manipulator.
Fig. 3: Structure of the proposed motion planning method for aerial pick-and-place.
Fig. 4: An illustration for the motion planning of the aerial pick-and-place.
Fig. 5: Results of the collision avoidance experiment.
Fig. 6: Results of the aerial retrieval experiment.
Fig. 7: Results of the aerial transport experiment.
Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints
Huazi Cao, Jiahao Shen, Cunjia Liu, Bo Zhu, Shiyu Zhao

H. Cao, J. Shen, and S. Zhao are with the School of Engineering at Westlake University, Hangzhou, China. {caohuazi,shenjiahao,zhaoshiyu}@westlake.edu.cn
C. Liu is with the Department of Aeronautical and Automotive Engineering at Loughborough University, Loughborough, UK. [email protected]
B. Zhu is with the School of Aeronautics and Astronautics at Sun Yat-sen University, Guangzhou, China. [email protected]
REFERENCES

[1] A. Ollero, M. Tognon, A. Suarez, D. Lee, and A. Franchi, "Past, present, and future of aerial robotic manipulators," IEEE Transactions on Robotics, vol. 38, no. 1, pp. 626-645, 2021.
[2] D. Xilun, G. Pin, X. Kun, and Y. Yushu, "A review of aerial manipulation of small-scale rotorcraft unmanned robotic systems," Chinese Journal of Aeronautics, vol. 32, no. 1, pp. 200-214, 2019.
[3] A. Mohiuddin, T. Tarek, Y. Zweiri, and D. Gan, "A survey of single and multi-UAV aerial manipulation," Unmanned Systems, vol. 8, no. 2, pp. 119-147, 2020.
[4] F. Ruggiero, V. Lippiello, and A. Ollero, "Aerial manipulation: A literature review," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1957-1964, 2018.
[5] K. Bodie, M. Brunner, M. Pantic, S. Walser, P. Pfändler, U. Angst, R. Siegwart, and J. Nieto, "Active interaction force control for contact-based inspection with a fully actuated aerial vehicle," IEEE Transactions on Robotics, vol. 37, no. 3, pp. 709-722, 2020.
[6] K. Zhang, P. Chermprayong, F. Xiao, D. Tzoumanikas, B. Dams, S. Kay, B. B. Kocer, A. Burns, L. Orr, C. Choi, et al., "Aerial additive manufacturing with multiple autonomous robots," Nature, vol. 609, no. 7928, pp. 709-717, 2022.
[7] H.-N. Nguyen, S. Park, J. Park, and D. Lee, "A novel robotic platform for aerial manipulation using quadrotors as rotating thrust generators," IEEE Transactions on Robotics, vol. 34, no. 2, pp. 353-369, 2018.
[8] D. Mellinger, Q. Lindsey, M. Shomin, and V. Kumar, "Design, modeling, estimation and control for aerial grasping and manipulation," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2668-2673, 2011.
[9] Y. Chen, J. Liang, Y. Wu, Z. Miao, H. Zhang, and Y. Wang, "Adaptive sliding-mode disturbance observer-based finite-time control for unmanned aerial manipulator with prescribed performance," IEEE Transactions on Cybernetics, 2022.
[10] T. G. Chen, K. A. Hoffmann, J. E. Low, K. Nagami, D. Lentink, and M. R. Cutkosky, "Aerial grasping and the velocity sufficiency region," IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10009-10016, 2022.
[11] H. Lee, H. Kim, and H. J. Kim, "Planning and control for collision-free cooperative aerial transportation," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 1, pp. 189-201, 2016.
[12] K. Alexis, G. Darivianakis, M. Burri, and R. Siegwart, "Aerial robotic contact-based inspection: planning and control," Autonomous Robots, vol. 40, pp. 631-655, 2016.
[13] M. Tognon, E. Cataldi, H. A. T. Chavez, G. Antonelli, J. Cortés, and A. Franchi, "Control-aware motion planning for task-constrained aerial manipulation," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2478-2484, 2018.
[14] J. Thomas, G. Loianno, K. Sreenath, and V. Kumar, "Toward image based visual servoing for aerial grasping and perching," in 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2113-2118, 2014.
[15] H. Seo, S. Kim, and H. J. Kim, "Aerial grasping of cylindrical object using visual servoing based on stochastic model predictive control," in 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 6362-6368, 2017.
[16] Y. Chen, Y. Wu, Z. Zhang, Z. Miao, H. Zhong, H. Zhang, and Y. Wang, "Image-based visual servoing of unmanned aerial manipulators for tracking and grasping a moving target," IEEE Transactions on Industrial Informatics, 2022.
[17] P. Ramon-Soria, B. C. Arrue, and A. Ollero, "Grasp planning and visual servoing for an outdoors aerial dual manipulator," Engineering, vol. 6, no. 1, pp. 77-88, 2020.
[18] A. Ivanovic, M. Car, M. Orsag, and S. Bogdan, "Exploiting null space in aerial manipulation through model-in-the-loop motion planning," in 2020 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 686-693, 2020.
[19] A. Caballero, A. Suarez, F. Real, V. M. Vega, M. Bejar, A. Rodriguez-Castaño, and A. Ollero, "First experimental results on motion planning for transportation in aerial long-reach manipulators with two arms," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8471-8477, 2018.
[20] A. Caballero, P. J. Sanchez-Cuevas, M. Bejar, G. Heredia, M. A. Trujillo, and A. Ollero, "An aerodynamic extension for motion planning with dynamics awareness in aerial long-reach manipulators," International Journal of Aerospace Engineering, vol. 2020, pp. 1-17, 2020.
[21] H. Kim, H. Seo, J. Kim, and H. J. Kim, "Sampling-based motion planning for aerial pick-and-place," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7402-7408, 2019.
[22] S. R. Lindemann and S. M. LaValle, "Current issues in sampling-based motion planning," in Robotics Research: The Eleventh International Symposium, pp. 36-54, 2005.
[23] G. Garimella, M. Sheckells, S. Kim, G. Baraban, and M. Kobilarov, "Improving the reliability of pick-and-place with aerial vehicles through fault-tolerant software and a custom magnetic end-effector," IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7501-7508, 2021.
[24] J. Welde, J. Paulos, and V. Kumar, "Dynamically feasible task space planning for underactuated aerial manipulators," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3232-3239, 2021.
[25] M. Muller, "A novel classification of planar four-bar linkages and its application to the mechanical analysis of animal systems," Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, vol. 351, no. 1340, pp. 689-720, 1996.
[26] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control. Springer Science & Business Media, 2010.
[27] N. J. Nilsson, The Quest for Artificial Intelligence. Cambridge University Press, 2009.
[28] F. Gao, W. Wu, Y. Lin, and S. Shen, "Online safe trajectory generation for quadrotors using fast marching method and Bernstein basis polynomial," in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 344-351, 2018.
[29] A. Caballero, M. Bejar, A. Rodriguez-Castano, and A. Ollero, "Motion planning with dynamics awareness for long reach manipulation in aerial robotic systems with two arms," International Journal of Advanced Robotic Systems, vol. 15, no. 3, p. 1729881418770525, 2018.
[30] H. Cao, Y. Li, C. Liu, and S. Zhao, "ESO-based robust and high-precision tracking control for aerial manipulation," IEEE Transactions on Automation Science and Engineering, 2023.
[31] H. Prautzsch, W. Boehm, and M. Paluszny, Bézier and B-Spline Techniques, vol. 6. Springer, 2002.
[32] M. López, E. Castillo, G. García, and A. Bashir, "Delta robot: inverse, direct, and intermediate Jacobians," Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 220, no. 1, pp. 103-109, 2006.
[33] X. Yang, Z. Feng, C. Liu, and X. Ren, "A geometric method for kinematics of Delta robot and its path tracking control," in 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), pp. 509-514, 2014.
[34] J. E. Shigley, C. R. Mischke, and T. H. Brown Jr, Standard Handbook of Machine Design. McGraw-Hill Education, 2004.
[35] M. Laribi, L. Romdhane, and S. Zeghloul, "Analysis and dimensional synthesis of the delta robot for a prescribed workspace," Mechanism and Machine Theory, vol. 42, no. 7, pp. 859-870, 2007.
[36] S. Mondal, A. Biswas, and A. Sarkar, "Finding the largest empty cuboid inside a 3D digital object," Multimedia Tools and Applications, pp. 1-21, 2021.
[37] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, "Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3D complex environments," IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688-1695, 2017.
[38] D. Mellinger and V. Kumar, "Minimum snap trajectory generation and control for quadrotors," in 2011 IEEE International Conference on Robotics and Automation, pp. 2520-2525, 2011.
[39] C. J. Ong and E. G. Gilbert, "Fast versions of the Gilbert-Johnson-Keerthi distance algorithm: Additional results and comparisons," IEEE Transactions on Robotics and Automation, vol. 17, no. 4, pp. 531-539, 2001.
[
"Chi-square approximation for the distribution of individual eigenvalues of a singular Wishart matrix",
"Chi-square approximation for the distribution of individual eigenvalues of a singular Wishart matrix"
] | [
"Koki Shimizu \nTokyo University of Science\n1-3 Kagurazaka, Shinjuku-ku162-8601TokyoJapan\n",
"Hiroki Hashiguchi \nTokyo University of Science\n1-3 Kagurazaka, Shinjuku-ku162-8601TokyoJapan\n"
] | [
"Tokyo University of Science\n1-3 Kagurazaka, Shinjuku-ku162-8601TokyoJapan",
"Tokyo University of Science\n1-3 Kagurazaka, Shinjuku-ku162-8601TokyoJapan"
] | [] | This paper discusses the approximate distributions of eigenvalues of a singular Wishart matrix. We give the approximate joint density of eigenvalues by Laplace approximation for the hypergeometric functions of matrix arguments. Furthermore, we show that the distribution of each eigenvalue can be approximated by the chi-square distribution with varying degrees of freedom when the population eigenvalues are infinitely dispersed. The derived result is applied to testing the equality of eigenvalues in two populations. | null | [
"https://export.arxiv.org/pdf/2306.05160v1.pdf"
] | 259,108,621 | 2306.05160 | 7d0ae86bc4423ba628c37cc4ddf1ded5274a9b75 |
Chi-square approximation for the distribution of individual eigenvalues of a singular Wishart matrix
8 Jun 2023
Koki Shimizu
Tokyo University of Science
1-3 Kagurazaka, Shinjuku-ku, 162-8601, Tokyo, Japan
Hiroki Hashiguchi
Tokyo University of Science
1-3 Kagurazaka, Shinjuku-ku, 162-8601, Tokyo, Japan
Keywords: Hypergeometric functions; Laplace approximation; Spiked covariance model
2010 MSC: 62E15, 62H10
This paper discusses the approximate distributions of eigenvalues of a singular Wishart matrix. We give the approximate joint density of eigenvalues by Laplace approximation for the hypergeometric functions of matrix arguments. Furthermore, we show that the distribution of each eigenvalue can be approximated by the chi-square distribution with varying degrees of freedom when the population eigenvalues are infinitely dispersed. The derived result is applied to testing the equality of eigenvalues in two populations.
Introduction
In multivariate analysis, some exact distributions of eigenvalues for a Wishart matrix are represented by hypergeometric functions of matrix arguments. James (1964) classified multivariate statistics problems into five categories based on hypergeometric functions. However, the convergence of these functions is slow, and their numerical computation is cumbersome when sample sizes or dimensions are large. Consequently, the derivation of approximate distributions of eigenvalues has received a great deal of attention. Sugiyama (1972) derived the approximate distribution of the largest eigenvalue through the integral representation of the confluent hypergeometric function. Sugiura (1973) showed that the asymptotic distribution of the individual eigenvalues is expressed by a normal distribution for a large sample size. The chi-square approximation, for the case where the population eigenvalues are infinitely dispersed, was discussed in Takemura and Sheena (2005) and Kato and Hashiguchi (2014). Approximations for hypergeometric functions have been developed and applied to multivariate distribution theory in Butler and Wood (2002, 2005). Butler and Wood (2002) provided the Laplace approximation for the hypergeometric functions of a single matrix argument. The numerical accuracies of that approximation were shown in the computation of noncentral moments of Wilk's lambda statistic and the likelihood ratio statistic for testing block independence. This approximation was extended to the case of two matrix arguments in Butler and Wood (2005). All the results addressed above were obtained for eigenvalue distributions of a non-singular Wishart matrix.
Recently, the distribution of eigenvalues for the non-singular case has been extended to the singular case; see Shimizu and Hashiguchi (2021, 2022) and Shinozaki et al. (2022). Shimizu and Hashiguchi (2021) showed that the exact distribution of the largest eigenvalue for a singular case is represented in terms of the confluent hypergeometric function, as in the non-singular case. A generalized representation for the non-singular and singular cases under the elliptical model was provided by Shinozaki et al. (2022).
The rest of this paper is organized as follows. In Section 2, we apply the Laplace approximation introduced by Butler and Wood (2005) to the joint density of eigenvalues of a singular Wishart matrix. Furthermore, we show that the approximation for the distribution of the individual eigenvalues can be expressed by the chi-square distribution with varying degrees of freedom when the population covariance matrix has spiked eigenvalues. Section 3 discusses the equality of the individual eigenvalues in two populations. Finally, we evaluate the precision of the chi-square approximation by comparing it to the empirical distribution through Monte Carlo simulation in Section 4.
Approximate distributions of eigenvalues of a singular Wishart matrix
Suppose that an m × n real Gaussian random matrix X is distributed as X ∼ N_{m,n}(O, Σ ⊗ I_n), where O is the m × n zero matrix, Σ is an m × m positive definite symmetric matrix, and ⊗ is the Kronecker product. This means that the column vectors of X are independently and identically distributed (i.i.d.) as N_m(0, Σ) with sample size n, where 0 is the m-dimensional zero vector. The eigenvalues of Σ are denoted by λ_1, λ_2, ..., λ_m with λ_1 ≥ λ_2 ≥ · · · ≥ λ_m > 0. Subsequently, we define the singular Wishart matrix as W = XX^⊤, where m > n, and denote its distribution by W(n, Σ). The spectral decomposition of W is represented as W = H_1 L_1 H_1^⊤, where L_1 = diag(ℓ_1, ..., ℓ_n) with ℓ_1 > ℓ_2 > · · · > ℓ_n > 0, and the m × n matrix H_1 satisfies H_1^⊤ H_1 = I_n, so that H_1 lies on the Stiefel manifold V_{n,m}. For the definition of the exterior product (H_1^⊤ dH_1), see page 63 of Muirhead (1982). If m = n, the Stiefel manifold V_{m,m} coincides with the orthogonal group O(m).
Uhlig (1994) gave the density of W as

f(W) = [π^{(−mn+n²)/2} / (2^{mn/2} Γ_n(n/2) |Σ|^{n/2})] |L_1|^{(n−m−1)/2} etr(−Σ^{−1}W/2),

where Γ_m(a) = π^{m(m−1)/4} ∏_{i=1}^{m} Γ{a − (i−1)/2} and etr(·) = exp(tr(·)). Srivastava (2003) represented the joint density of the eigenvalues of W in a form that includes an integral over the Stiefel manifold:
f(ℓ_1, ..., ℓ_n) = [2^{−nm/2} π^{n²/2} / (|Σ|^{n/2} Γ_n(n/2) Γ_n(m/2))] ∏_{i=1}^{n} ℓ_i^{(m−n−1)/2} ∏_{i<j}^{n} (ℓ_i − ℓ_j)
× ∫_{H_1∈V_{n,m}} etr(−(1/2) Σ^{−1} H_1 L_1 H_1^⊤) (dH_1),   (1)

where (dH_1) = (H_1^⊤ dH_1)/Vol(V_{n,m}) and ∫_{H_1∈V_{n,m}} (dH_1) = 1.
The above integral over the Stiefel manifold was evaluated by Shimizu and Hashiguchi (2021) as the hypergeometric functions of the matrix arguments. We approximate (1) by Laplace approximation for the hypergeometric functions of two matrix arguments provided in Butler and Wood (2005).
For a positive integer k, let κ = (κ_1, κ_2, ..., κ_m) denote a partition of k with κ_1 ≥ κ_2 ≥ · · · ≥ κ_m ≥ 0 and κ_1 + · · · + κ_m = k. The set of all partitions with at most m parts is denoted by P_m^k = {κ = (κ_1, ..., κ_m) | κ_1 + · · · + κ_m = k, κ_1 ≥ κ_2 ≥ · · · ≥ κ_m ≥ 0}. The Pochhammer symbol for a partition κ is defined as (α)_κ = ∏_{i=1}^{m} {α − (i−1)/2}_{κ_i}, where (α)_k = α(α+1) · · · (α+k−1) and (α)_0 = 1.
For integers p, q ≥ 0 and m × m real symmetric matrices A and B, we define the hypergeometric function of two matrix arguments as

pFq^{(m)}(α; β; A, B) = Σ_{k=0}^{∞} Σ_{κ∈P_m^k} [(α_1)_κ · · · (α_p)_κ / ((β_1)_κ · · · (β_q)_κ)] · [C_κ(A) C_κ(B) / (k! C_κ(I_m))],   (2)
where α = (α 1 , . . . , α p ) ⊤ , β = (β 1 , . . . , β q ) ⊤ and C κ (A) is the zonal polynomial indexed by κ with the symmetric matrix A, see the details given in Chapter 7 of Muirhead (1982). The hypergeometric functions with a single matrix are defined as
pFq(α; β; A) = pFq^{(m)}(α; β; A, I_m).   (3)
The special cases 1F1 and 2F1 are called the confluent and Gauss hypergeometric functions, respectively. Butler and Wood (2002) proposed a Laplace approximation of 1F1 and 2F1 through their integral expressions. They showed that the accuracy of that approximation is greater than that of previous results. This approximation was extended to the complex case in Butler and Wood (2022). The important property of (2) is the integral representation over the orthogonal group
pFq^{(m)}(α; β; A, B) = ∫_{H∈O(m)} pFq(α; β; AHBH^⊤)(dH),   (4)
where (dH) is the invariant measure on the m × m orthogonal group O(m). The integral representation (4) is a useful tool for obtaining approximations of pFq^{(m)}. Asymptotic expansions of 0F0^{(m)} were given in Anderson (1965) for the case where both positive definite matrix arguments are widely spaced. Constantine and Muirhead (1976) gave the asymptotic behavior of 0F0^{(m)} when the population eigenvalues are multiple. From the integral expression (4), Butler and Wood (2005) provided Laplace approximations for pFq^{(m)}.
Lemma 1. Let the two diagonal matrices be A = diag(a_1, ..., a_m) and B = diag(b_1, ..., b_1, b_2, ..., b_r, ..., b_r), where a_1 > a_2 > · · · > a_m > 0, b_1 > b_2 > · · · > b_r ≥ 0, and b_j has multiplicity m_j with m = Σ_{j=1}^{r} m_j. Let Ω(m_1, ..., m_r) = Vol(O(m))^{−1} ∏_{j=1}^{r} Vol(O(m_j)). Then the Laplace approximation of pFq^{(m)} is given as

p̂Fq^{(m)}(α; β; A, B) = (2π)^{s/2} Ω(m_1, ..., m_r) J^{−1/2} pFq(α; β; AB),

where s = Σ_{i=1}^{r−1} Σ_{j=i+1}^{r} m_i m_j and the Hessian J is defined in Butler and Wood (2005). Shimizu and Hashiguchi (2021) showed the following relationship
∫_{H_1∈V_{n,m}} pFq(AH_1B_1H_1^⊤)(dH_1) = ∫_{H∈O(m)} pFq(AHBH^⊤)(dH)   (5)

for the m × m matrix B = [[B_1, O], [O, O]], where B_1 is an n × n symmetric matrix and O is the zero matrix. From (5), the joint density (1) can be rewritten as
f(ℓ_1, ..., ℓ_n) ∝ ∫_{H∈O(m)} etr(−(1/2) Σ^{−1} H L H^⊤)(dH) = 0F0^{(m)}(−(1/2)Σ^{−1}, L),   (6)

where L = diag(ℓ_1, ..., ℓ_n, 0, ..., 0) is an m × m diagonal matrix and the symbol "∝" means that a constant required for scaling has been removed. Applying Laplace's method to the above joint density, we obtain the approximation for the joint density of the eigenvalues.
Proposition 1. The joint density of the eigenvalues of a singular Wishart matrix by Laplace approximation is expressed as

f(ℓ_1, ..., ℓ_n) ≈ [π^{n(n−m)/2} / (2^{nm/2} |Σ|^{n/2} Γ_n(n/2))] ∏_{i=1}^{n} ℓ_i^{(m−n−1)/2} ∏_{i<j}^{n} (ℓ_i − ℓ_j) exp(−(1/2) Σ_{i=1}^{n} ℓ_i/λ_i) ∏_{i<j}^{n} (2π/c_{ij})^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (2π/d_{ij})^{1/2},   (7)

where c_{ij} = (ℓ_i − ℓ_j)(λ_i − λ_j)/(λ_i λ_j) and d_{ij} = ℓ_i(λ_i − λ_j)/(λ_i λ_j).
Proof. Applying Lemma 1 to the hypergeometric function in (6), the integral over the Stiefel manifold in (1) is approximated by

2^n Vol(V_{n,m}) exp(−(1/2) Σ_{i=1}^{n} ℓ_i/λ_i) ∏_{i<j}^{n} (2π/c_{ij})^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (2π/d_{ij})^{1/2}.   (8)

Substituting (8) into (1), we have the desired result.
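For numerical work, the approximate density (7) is best evaluated on the log scale; the sketch below (not from the original text) assumes ℓ_1 > · · · > ℓ_n and distinct, well-separated leading population eigenvalues so that all c_{ij} and d_{ij} are positive.

import numpy as np
from scipy.special import multigammaln

def approx_joint_density(ell, lam, n, m):
    # Laplace-approximated joint density (7), computed on the log scale.
    ell, lam = np.asarray(ell, float), np.asarray(lam, float)
    log_f = (n * (n - m) / 2) * np.log(np.pi) - (n * m / 2) * np.log(2.0) \
            - (n / 2) * np.sum(np.log(lam)) - multigammaln(n / 2, n) \
            + ((m - n - 1) / 2) * np.sum(np.log(ell)) \
            - 0.5 * np.sum(ell / lam[:n])
    for i in range(n):
        for j in range(i + 1, n):
            c_ij = (ell[i] - ell[j]) * (lam[i] - lam[j]) / (lam[i] * lam[j])
            log_f += np.log(ell[i] - ell[j]) + 0.5 * np.log(2 * np.pi / c_ij)
        for j in range(n, m):
            d_ij = ell[i] * (lam[i] - lam[j]) / (lam[i] * lam[j])
            log_f += 0.5 * np.log(2 * np.pi / d_ij)
    return np.exp(log_f)

print(approx_joint_density([40.0, 3.0], [20.0, 2.0, 0.1, 0.1], n=2, m=4))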
In order to derive the approximate distributions of the individual eigenvalues, we define the spiked covariance model through ρ_k, which requires the first k eigenvalues of Σ > 0 to be infinitely dispersed, namely

ρ_k = max{λ_2/λ_1, λ_3/λ_2, ..., λ_{k+1}/λ_k} → 0   (9)
for k = 1, ..., n. Under the condition of (9), Takemura and Sheena (2005) proved that the distribution of the individual eigenvalues of a non-singular Wishart matrix is approximated by a chi-square distribution. An improvement of that approximation, for the case where the condition in (9) cannot be assumed, was discussed in Tsukada and Sugiyama (2021).

Lemma 2 (Takemura and Sheena (2005)). If ρ_k → 0, then

r_k = max{ℓ_2/ℓ_1, ℓ_3/ℓ_2, ..., ℓ_{k+1}/ℓ_k} →_p 0,  (k = 1, ..., n),

in the sense that ∀ǫ > 0, ∃δ > 0, ρ_k < δ → Pr(r_k > ǫ) < ǫ.
From Proposition 1 and Lemma 2, we obtain the chi-square approximation that is the main result of this paper. Σ), where m > n and ℓ 1 , ℓ 2 , . . . , ℓ n be the eigenvalues of W. If ρ n → 0, it holds that
Theorem 1. Let W ∼ W_m(n, Σ), where m > n, and let ℓ_1, ℓ_2, ..., ℓ_n be the eigenvalues of W. If ρ_n → 0, it holds that

ℓ_i/λ_i →_d χ²_{n−i+1},  1 ≤ i ≤ n,

where χ²_{n−i+1} is the chi-square distribution with n − i + 1 degrees of freedom and the symbol "→_d" means convergence in distribution.
Proof. First, we rewrite the approximate joint density (7) as

f(ℓ_1, ..., ℓ_n) = [1 / (2^{n(n+1)/4} |Σ|^{n/2} ∏_{i=1}^{n} Γ((n−i+1)/2))] ∏_{i=1}^{n} ℓ_i^{(m−n−1)/2} exp(−(1/2) Σ_{i=1}^{n} ℓ_i/λ_i)
× ∏_{i<j}^{n} {(ℓ_i − ℓ_j) λ_i λ_j / (λ_i − λ_j)}^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (1/d_{ij})^{1/2}.

From Lemma 2, we have

∏_{i<j}^{n} (ℓ_i − ℓ_j)^{1/2} = ∏_{i<j}^{n} ℓ_i^{1/2} (1 − ℓ_j/ℓ_i)^{1/2} ≈ ∏_{i=1}^{n} ℓ_i^{(n−i)/2},
∏_{i<j}^{n} {λ_i λ_j / (λ_i − λ_j)}^{1/2} = ∏_{i<j}^{n} {λ_j / (1 − λ_j/λ_i)}^{1/2} ≈ ∏_{i=1}^{n} λ_i^{(i−1)/2}.

Furthermore, noting that |Σ|^{n/2} = ∏_{i=1}^{n} λ_i^{n/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} λ_j^{1/2}, the joint density (7) can be written as

f(ℓ_1, ..., ℓ_n) ≈ ∏_{i=1}^{n} [ℓ_i^{(n−i−1)/2} / {(2λ_i)^{(n−i+1)/2} Γ((n−i+1)/2)}] exp(−ℓ_i/(2λ_i)) ∏_{i=1}^{n} ℓ_i^{(m−n)/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (1/λ_j)^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (1/d_{ij})^{1/2}
= ∏_{i=1}^{n} [ℓ_i^{(n−i−1)/2} / {(2λ_i)^{(n−i+1)/2} Γ((n−i+1)/2)}] exp(−ℓ_i/(2λ_i)) ∏_{i=1}^{n} ∏_{j=n+1}^{m} ℓ_i^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (1/λ_j)^{1/2} ∏_{i=1}^{n} ∏_{j=n+1}^{m} (1/d_{ij})^{1/2}
≈ ∏_{i=1}^{n} [ℓ_i^{(n−i−1)/2} / {(2λ_i)^{(n−i+1)/2} Γ((n−i+1)/2)}] exp(−ℓ_i/(2λ_i)) = ∏_{i=1}^{n} g_{n−i+1}(ℓ_i/λ_i),

where g_{n−i+1}(·) is the density function of the chi-square distribution with n − i + 1 degrees of freedom.
Remark 1. Note that if only the first k population eigenvalues are infinitely dispersed, that is, ρ_k → 0, then ℓ_k/λ_k can be approximated by g_{n−k+1} in the same manner as in the proof of Theorem 1.
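Theorem 1 and Remark 1 are easy to probe by simulation; the sketch below (not part of the original text) draws singular Wishart matrices with a widely spaced spiked spectrum and compares the scaled eigenvalues ℓ_i/λ_i with χ²_{n−i+1}; the spectrum and sizes are illustrative choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, n, reps = 10, 3, 20000
lam = np.array([1e4, 1e2, 1e0] + [1e-2] * (m - n))   # widely dispersed spikes

scaled = np.empty((reps, n))
for r in range(reps):
    # Columns of X are i.i.d. N_m(0, diag(lam)); W = X X^T is singular (m > n).
    X = rng.standard_normal((m, n)) * np.sqrt(lam)[:, None]
    ell = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]  # nonzero eigenvalues of W
    scaled[r] = ell / lam[:n]

for i in range(n):
    # For the (i+1)-th eigenvalue (1-indexed), the limit law is chi2 with
    # n - (i+1) + 1 = n - i degrees of freedom in this 0-indexed loop.
    ks = stats.kstest(scaled[:, i], stats.chi2(df=n - i).cdf)
    print(f"ell_{i+1}/lambda_{i+1}: mean = {scaled[:, i].mean():.3f}, KS = {ks.statistic:.3f}")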
In the context of the High Dimension-Low Sample Size (HDLSS) setting, the asymptotic behavior of the eigenvalue distribution of a sample covariance matrix was discussed in Ahn et al. (2007), Jung and Marron (2009), and Bolivar-Cime and Perez-Abreu (2014). Jung and Marron (2009) showed that the spiked sample eigenvalues are approximated by the chi-square distribution with a degree of freedom n. In contrast, Theorem 1 gives the approximation of the distribution of ℓ k by a chi-square distribution with varying degrees of freedom.
Application to testing the equality of individual eigenvalues
This section discusses testing the equality of individual eigenvalues of the covariance matrices of two populations. For this testing problem, we give the approximate distribution of the test statistic based on the results derived in the previous section.
Let an m × n_i Gaussian random matrix X⁽ⁱ⁾ be distributed as X⁽ⁱ⁾ ∼ N_{m,n_i}(O, Σ⁽ⁱ⁾ ⊗ I_{n_i}), where Σ⁽ⁱ⁾ > 0 and i = 1, 2. The eigenvalues of Σ⁽ⁱ⁾ are denoted by λ⁽ⁱ⁾_1 ≥ λ⁽ⁱ⁾_2 ≥ ⋯ ≥ λ⁽ⁱ⁾_m > 0. We denote the eigenvalues of W⁽ⁱ⁾ = X⁽ⁱ⁾X⁽ⁱ⁾ᵀ by ℓ⁽ⁱ⁾_1 > ℓ⁽ⁱ⁾_2 > ⋯ > ℓ⁽ⁱ⁾_m ≥ 0.
For fixed k, we consider the test of the equality of the individual eigenvalues in the two populations,

\[
H_0:\lambda^{(1)}_k=\lambda^{(2)}_k\quad\text{vs.}\quad H_1:\lambda^{(1)}_k\neq\lambda^{(2)}_k.
\tag{10}
\]
Sugiyama and Ushizawa (1998) reduced (10) to an equality-of-variance test for the principal components and proposed a testing procedure using the Ansari-Bradley test. Takeda (2001) proposed the test statistic ℓ⁽¹⁾_k/ℓ⁽²⁾_k for (10) with n ≥ m and derived the exact distribution of ℓ⁽¹⁾_1/ℓ⁽²⁾_1. Since Johnstone (2001) indicated that the first few eigenvalues are very large compared to the others in the large-dimensional setting, it is essential to understand how the distribution of the first few eigenvalues is constructed. We provide the exact density function of ℓ⁽¹⁾_1/ℓ⁽²⁾_1 with n < m in the same way as Takeda (2001).
Theorem 2. Let W⁽¹⁾ and W⁽²⁾ be two independent Wishart matrices with distributions W_m(n_1, Σ⁽¹⁾) and W_m(n_2, Σ⁽²⁾), respectively, where m > n_i (i = 1, 2). Then the density of q = ℓ⁽¹⁾_1/ℓ⁽²⁾_1 is
\[
\begin{aligned}
f(q)=C\sum_{k=0}^{\infty}\sum_{\kappa\in P^k_m}\sum_{t=0}^{\infty}\sum_{\tau\in P^t_m}
&\frac{\{(m+1)/2\}_\kappa\,C_\kappa(\Sigma^{(1)-1}/2)}{\{(n_1+m+1)/2\}_\kappa\,k!}\,
\frac{\{(m+1)/2\}_\tau\,C_\tau(\Sigma^{(2)-1}/2)}{\{(n_2+m+1)/2\}_\tau\,t!}\\
&\times\Bigl[(mn_1/2+k)(mn_2/2+t)\,q^{mn_2/2+t-1}\,\Gamma(u)/v^{u}\\
&\quad-(mn_1/2+k)(\operatorname{tr}\Sigma^{(2)-1}/2)\,q^{mn_2/2+t}\,\Gamma(u+1)/v^{u+1}\\
&\quad-(mn_2/2+t)(\operatorname{tr}\Sigma^{(1)-1}/2)\,q^{mn_2/2+t-1}\,\Gamma(u+1)/v^{u+1}\\
&\quad+(\operatorname{tr}\Sigma^{(1)-1}/2)(\operatorname{tr}\Sigma^{(2)-1}/2)\,q^{mn_2/2+t}\,\Gamma(u+2)/v^{u+2}\Bigr],
\end{aligned}
\tag{11}
\]

where u = m(n_1 + n_2)/2 + k + t, v = (tr Σ⁽¹⁾⁻¹ + q tr Σ⁽²⁾⁻¹)/2, and

\[
C=\frac{\Gamma_{n_1}\{(n_1+1)/2\}\,\Gamma_{n_2}\{(n_2+1)/2\}}{2^{m(n_1+n_2)/2}\,\Gamma_{n_1}\{(n_1+m+1)/2\}\,\Gamma_{n_2}\{(n_2+m+1)/2\}\,|\Sigma^{(1)}|^{n_1/2}\,|\Sigma^{(2)}|^{n_2/2}}.
\]
Proof. The exact expression for the distribution of ℓ⁽ⁱ⁾_1 was given by Shimizu and Hashiguchi (2021) as

\[
\Pr(\ell^{(i)}_1<x)=\frac{\Gamma_{n_i}\bigl(\tfrac{n_i+1}{2}\bigr)\bigl(\tfrac{x}{2}\bigr)^{mn_i/2}}{\Gamma_{n_i}\bigl(\tfrac{n_i+m+1}{2}\bigr)\,|\Sigma^{(i)}|^{n_i/2}}\exp\Bigl(-\frac{x}{2}\operatorname{tr}\Sigma^{(i)-1}\Bigr)\,{}_1F_1\Bigl(\frac{m+1}{2};\frac{n_i+m+1}{2};\frac{x}{2}\Sigma^{(i)-1}\Bigr).
\tag{12}
\]
The derivative of (12) is

\[
\begin{aligned}
f(x)=\frac{\Gamma_{n_i}\{(n_i+1)/2\}}{2^{mn_i/2}\,\Gamma_{n_i}\{(n_i+m+1)/2\}\,|\Sigma^{(i)}|^{n_i/2}}
&\sum_{k=0}^{\infty}\sum_{\kappa\in P^k_m}\frac{\{(m+1)/2\}_\kappa\,C_\kappa(\Sigma^{(i)-1}/2)}{\{(n_i+m+1)/2\}_\kappa\,k!}\\
&\times\exp\Bigl(-\frac{x}{2}\operatorname{tr}\Sigma^{(i)-1}\Bigr)\Bigl[(n_im/2+k)\,x^{mn_i/2+k-1}-(\operatorname{tr}\Sigma^{(i)-1}/2)\,x^{mn_i/2+k}\Bigr].
\end{aligned}
\tag{13}
\]
From (13), the joint density of ℓ⁽¹⁾_1 and ℓ⁽²⁾_1 is

\[
\begin{aligned}
f(x,y)=C\sum_{k=0}^{\infty}\sum_{\kappa\in P^k_m}\sum_{t=0}^{\infty}\sum_{\tau\in P^t_m}
&\frac{\{(m+1)/2\}_\kappa\,C_\kappa(\Sigma^{(1)-1}/2)}{\{(n_1+m+1)/2\}_\kappa\,k!}\,
\frac{\{(m+1)/2\}_\tau\,C_\tau(\Sigma^{(2)-1}/2)}{\{(n_2+m+1)/2\}_\tau\,t!}\\
&\times\Bigl[(mn_1/2+k)\,x^{mn_1/2+k-1}-(\operatorname{tr}\Sigma^{(1)-1}/2)\,x^{mn_1/2+k}\Bigr]\\
&\times\Bigl[(mn_2/2+t)\,y^{mn_2/2+t-1}-(\operatorname{tr}\Sigma^{(2)-1}/2)\,y^{mn_2/2+t}\Bigr]
\exp\Bigl(-\frac{x}{2}\operatorname{tr}\Sigma^{(1)-1}\Bigr)\exp\Bigl(-\frac{y}{2}\operatorname{tr}\Sigma^{(2)-1}\Bigr).
\end{aligned}
\]
Transforming x and y to q = y/x and r = x (with Jacobian r), we have

\[
\begin{aligned}
f(q,r)=C\sum_{k=0}^{\infty}\sum_{\kappa\in P^k_m}\sum_{t=0}^{\infty}\sum_{\tau\in P^t_m}
&\frac{\{(m+1)/2\}_\kappa\,C_\kappa(\Sigma^{(1)-1}/2)}{\{(n_1+m+1)/2\}_\kappa\,k!}\,
\frac{\{(m+1)/2\}_\tau\,C_\tau(\Sigma^{(2)-1}/2)}{\{(n_2+m+1)/2\}_\tau\,t!}\\
&\times\Bigl[(mn_1/2+k)(mn_2/2+t)\,q^{mn_2/2+t-1}\,r^{m(n_1+n_2)/2+k+t-1}\\
&\quad-(mn_1/2+k)(\operatorname{tr}\Sigma^{(2)-1}/2)\,q^{mn_2/2+t}\,r^{m(n_1+n_2)/2+k+t}\\
&\quad-(mn_2/2+t)(\operatorname{tr}\Sigma^{(1)-1}/2)\,q^{mn_2/2+t-1}\,r^{m(n_1+n_2)/2+k+t}\\
&\quad+(\operatorname{tr}\Sigma^{(1)-1}/2)(\operatorname{tr}\Sigma^{(2)-1}/2)\,q^{mn_2/2+t}\,r^{m(n_1+n_2)/2+k+t+1}\Bigr]\\
&\times\exp\Bigl\{-\bigl(\operatorname{tr}\Sigma^{(1)-1}+q\operatorname{tr}\Sigma^{(2)-1}\bigr)\frac{r}{2}\Bigr\}.
\end{aligned}
\]
Noting that ∫₀^∞ x^{α−1} e^{−βx} dx = Γ(α)/β^α for α, β > 0, and integrating f(q, r) with respect to r, we obtain the desired result.
As the dimension increases, the numerical computation of (11) becomes difficult due to its high computational complexity. From Theorem 1, we therefore give an approximation of the distribution underlying (11) by an F-distribution.

Corollary 1. Let W⁽¹⁾ and W⁽²⁾ be two independent Wishart matrices with distributions W_m(n_1, Σ⁽¹⁾) and W_m(n_2, Σ⁽²⁾), respectively, where m > n_i (i = 1, 2), and let ℓ⁽ⁱ⁾_1, ℓ⁽ⁱ⁾_2, ..., ℓ⁽ⁱ⁾_{n_i} be the eigenvalues of W⁽ⁱ⁾. If the first k eigenvalues of Σ⁽ⁱ⁾ are spiked, then

\[
\frac{\ell^{(1)}_k/\{(n_1-k+1)\,\lambda^{(1)}_k\}}{\ell^{(2)}_k/\{(n_2-k+1)\,\lambda^{(2)}_k\}}\xrightarrow{d}F_{(n_1-k+1,\;n_2-k+1)},
\]

where F_{(n_1−k+1, n_2−k+1)} is the F-distribution with n_1 − k + 1 and n_2 − k + 1 degrees of freedom.
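In practice, Corollary 1 turns the test (10) into an F-test in which the unknown common λ_k cancels under H_0. A minimal sketch (ours, not the authors' code), assuming zero-mean Gaussian data matrices:

```python
# F-test for H0: lambda_k^(1) = lambda_k^(2) based on Corollary 1 (a sketch).
import numpy as np
from scipy import stats

def eigen_ratio_test(X1, X2, k):
    """X1, X2: m x n_i zero-mean data matrices; returns (statistic, p-value)."""
    n1, n2 = X1.shape[1], X2.shape[1]
    # k-th largest eigenvalue of W^(i) = X_i X_i^T via the singular values
    l1 = (np.linalg.svd(X1, compute_uv=False) ** 2)[k - 1]
    l2 = (np.linalg.svd(X2, compute_uv=False) ** 2)[k - 1]
    d1, d2 = n1 - k + 1, n2 - k + 1
    stat = (l1 / d1) / (l2 / d2)   # under H0 the common lambda_k cancels
    p = 2 * min(stats.f.cdf(stat, d1, d2), stats.f.sf(stat, d1, d2))  # two-sided
    return stat, p
```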
Simulation study
We investigate the accuracy of the approximations for the derived distributions. In the simulation study, we consider the following population covariance matrix:
\[
\Sigma=\operatorname{diag}\bigl(a^{b},a^{b/2},\dots,a^{b/m}\bigr),
\tag{14}
\]
where a, b > 0. In the large-dimensional setting, mainly the accuracy of the approximate distributions of the largest and second-largest eigenvalues has been investigated; see Iimori et al. (2013). We set (a, b) = (200, 3) as Case 1 and (a, b) = (50, 3) as Case 2. These two cases imply that the population covariance matrix has two spiked eigenvalues, and the parameter ρ_k in (9) is smaller in Case 1 than in Case 2. We denote by F_1(x) and F_2(x) the chi-square distributions with n and n − 1 degrees of freedom, which are the approximate distributions of the largest and second-largest eigenvalues, respectively. The empirical distribution based on 10^6 Monte Carlo simulations is denoted by F_sim. Tables 1 and 2 show the α-percentile points of the distributions of ℓ_1 and ℓ_2 for m = 50 and n = 10. The simulation shows that sufficient accuracy of the approximation for the largest eigenvalue is already obtained in Case 2, while Case 1 is more accurate than Case 2 for the second eigenvalue. The desired accuracy is thus achieved when the parameter ρ_k is small. Tables 3 and 4 present the chi-square probabilities for Case 1 at the upper percentile points of the empirical distribution; in this simulation we set m = 20, 30, 40, 100 and n = 5, 15, and we observe that all probabilities are close to α. Finally, we compare the density of the F-distribution of Corollary 1 with the empirical distribution. In Fig. 1 we superimpose the F approximation on the histogram of ℓ⁽¹⁾_1/ℓ⁽²⁾_1 for n_i = 10 (i = 1, 2) and m = 30 in Case 2; the histogram shows the empirical distribution based on 10^6 iterations, the vertical line marks the 95% point of F_sim, and the solid line is the density function of the F-distribution. From the 95% point of F_sim, we confirm that the approximate probability is 0.950.
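For this diagonal model the ratios entering (9) have a closed form; the following short computation (ours, not the authors') makes explicit why both cases have two spiked eigenvalues:

\[
\frac{\lambda_{i+1}}{\lambda_i}=a^{\,b/(i+1)-b/i}=a^{-b/\{i(i+1)\}},\qquad
\rho_1=a^{-b/2},\qquad
\rho_2=\max\bigl(a^{-b/2},\,a^{-b/6}\bigr)=a^{-b/6}.
\]

For Case 1, ρ_1 = 200^{-3/2} ≈ 3.5 × 10^{-4} and ρ_2 = 200^{-1/2} ≈ 7.1 × 10^{-2}; for Case 2, ρ_1 = 50^{-3/2} ≈ 2.8 × 10^{-3} and ρ_2 = 50^{-1/2} ≈ 1.4 × 10^{-1}, so ρ_k is indeed smaller in Case 1.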
The orthonormality of H_1 is expressed by H_1ᵀH_1 = I_n. The set of all m × n matrices H_1 with orthonormal columns is called the Stiefel manifold, denoted by V_{n,m} = {H_1 | H_1ᵀH_1 = I_n}, where m ≥ n. The volume of V_{n,m} is given by Vol(V_{n,m}) = ∫_{H_1 ∈ V_{n,m}} (H_1ᵀ dH_1) = 2^n π^{mn/2}/Γ_n(m/2).
Fig. 1: n_i = 10 (i = 1, 2) and m = 30.
Table 1: Percentile points of the distributions of ℓ_1 and ℓ_2 of W_50(10, Σ) (Case 1).

Table 2: Percentile points of the distributions of ℓ_1 and ℓ_2 of W_50(10, Σ) (Case 2).

 α      F_sim^-1(α)   F_1^-1(α)   |   α      F_sim^-1(α)   F_2^-1(α)
 0.99   23.2359       23.2093     |   0.99   21.791        21.666
 0.95   18.3026       18.307      |   0.95   17.0601       16.919
 0.90   15.9825       15.9872     |   0.90   14.8377       14.6837
 0.50   9.34466       9.34182     |   0.50   8.48676       8.34283
 0.05   3.94389       3.9403      |   0.05   3.47796       3.32511
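The percentile comparison in the tables can be reproduced with a few lines; the sketch below (ours, with 10^5 instead of the paper's 10^6 replications) tabulates empirical quantiles of ℓ_1/λ_1 and ℓ_2/λ_2 against χ²_n and χ²_{n−1}.

```python
# Reproducing the layout of Tables 1-2 for W_50(10, Sigma), Sigma from (14).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, n, a, b = 50, 10, 50.0, 3.0                 # Case 2; set a = 200.0 for Case 1
lam = a ** (b / np.arange(1, m + 1))

ell = np.empty((100_000, 2))
for r in range(len(ell)):
    X = rng.standard_normal((m, n)) * np.sqrt(lam)[:, None]
    ell[r] = np.linalg.svd(X, compute_uv=False)[:2] ** 2   # l_1 and l_2

for alpha in (0.99, 0.95, 0.90, 0.50, 0.05):
    print(alpha,
          np.quantile(ell[:, 0] / lam[0], alpha), stats.chi2.ppf(alpha, n),
          np.quantile(ell[:, 1] / lam[1], alpha), stats.chi2.ppf(alpha, n - 1))
```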
Acknowledgments
The first author has received partial funding from a Grant-in-Aid for JSPS Fellows (No. 22KJ2804).
References

G. A. Anderson, An asymptotic expansion for the distribution of the latent roots of the estimated covariance matrix, Annals of Mathematical Statistics. 36 (1965) 1153-1173.
J. Ahn, J. S. Marron, K. M. Muller and Y. Chi, The high-dimension, low-sample-size geometric representation holds under mild conditions, Biometrika. 94 (2007) 760-766.
A. Bolivar-Cime and V. Perez-Abreu, PCA and eigen-inference for a spiked covariance model with largest eigenvalues of same asymptotic order, Brazilian Journal of Probability and Statistics. 28 (2014) 255-274.
R. W. Butler and T. A. Wood, Laplace approximations for hypergeometric functions with matrix argument, Annals of Statistics. 30 (2002) 1155-1177.
R. W. Butler and T. A. Wood, Laplace approximations to hypergeometric functions of two matrix arguments, Journal of Multivariate Analysis. 94 (2005) 1-18.
R. W. Butler and T. A. Wood, Laplace approximations for hypergeometric functions with Hermitian matrix argument, Journal of Multivariate Analysis. 192 (2022) 105087.
A. G. Constantine and R. T. Muirhead, Asymptotic expansions for distributions of latent roots in multivariate analysis, Journal of Multivariate Analysis. 3 (1976) 369-391.
T. Iimori, T. Ogura and T. Sugiyama, On the distribution of the second-largest latent root for certain high dimensional Wishart matrices, International Journal of Knowledge Engineering and Soft Data Paradigms. 4 (2013) 187-197.
A. T. James, Distribution of matrix variates and latent roots derived from normal samples, Annals of Mathematical Statistics. 35 (1964) 475-501.
I. M. Johnstone, On the distribution of the largest eigenvalue in principal components analysis, Annals of Statistics. 29 (2001) 295-327.
S. Jung and J. S. Marron, PCA consistency in high dimension, low sample size context, Annals of Statistics. 37 (2009) 4104-4130.
H. Kato and H. Hashiguchi, Chi-square approximations for eigenvalue distributions and confidential interval construction on population eigenvalues, Bulletin of the Computational Statistics of Japan. 27 (2014) 11-28.
S. Matsubara and H. Hashiguchi, Approximate eigenvalue distribution for the ratio of Wishart matrices, SUT Journal of Mathematics. 52 (2016) 141-158.
R. J. Muirhead, Aspects of Multivariate Statistical Theory. Wiley, New York, 1982.
R. Nasuda, K. Shimizu and H. Hashiguchi, Asymptotic behavior of the distributions of eigenvalues for beta-Wishart ensemble under the dispersed population eigenvalues, Communications in Statistics-Theory and Methods. (2022), in press.
K. Shimizu and H. Hashiguchi, Heterogeneous hypergeometric functions with two matrix arguments and the exact distribution of the largest eigenvalue of a singular beta-Wishart matrix, Journal of Multivariate Analysis. 183 (2021) 104714.
K. Shimizu and H. Hashiguchi, Algorithm for the product of Jack polynomials and its application to the sphericity test, Statistics & Probability Letters. 187 (2022) 109505.
A. Shinozaki, K. Shimizu and H. Hashiguchi, Generalized heterogeneous hypergeometric functions and the distribution of the largest eigenvalue of an elliptical Wishart matrix, Random Matrices: Theory and Applications. (2022) 2250034.
M. S. Srivastava, Singular Wishart and multivariate beta distributions, Annals of Statistics. 31 (2003) 1537-1560.
T. Sugiyama, On the distribution of the largest latent root of the covariance matrix, Annals of Mathematical Statistics. 38 (1967) 1148-1151.
T. Sugiyama, Approximation for the distribution of the largest latent root of a Wishart matrix, Australian Journal of Statistics. 14 (1972) 17-24.
T. Sugiyama, T. Ogura, Y. Takeda and H. Hashiguchi, Approximation of upper percentile points for the second largest latent root in principal component analysis, International Journal of Knowledge Engineering and Soft Data Paradigms. 4 (2013) 107-117.
T. Sugiyama and K. Ushizawa, A non-parametric method to test equality of intermediate latent roots of two populations in a principal component analysis, Journal of the Japan Statistical Society. 28 (1998) 227-235.
N. Sugiura, Derivatives of the characteristic root of a symmetric or a hermitian matrix with two applications in multivariate analysis, Communications in Statistics-Theory and Methods. 1 (1973) 393-417.
Y. Takeda, Permutation test for equality of each characteristic root in two populations, Journal of the Japanese Society of Computational Statistics. 14 (2001) 1-10.
A. Takemura and Y. Sheena, Distribution of eigenvalues and eigenvectors of Wishart matrix when the population eigenvalues are infinitely dispersed and its application to minimax estimation of covariance matrix, Journal of Multivariate Analysis. 94 (2005) 271-299.
S. Tsukada and T. Sugiyama, Distribution approximation of covariance matrix eigenvalues, Communications in Statistics-Simulation and Computation. (2021), in press.
H. Uhlig, On singular Wishart and singular multivariate beta distributions, Annals of Statistics. 22 (1994) 395-405.
| [] |
[
"Exploring the Impact of Galactic Interactions and Mergers on the Central Star Formation of APEX/EDGE-CALIFA Galaxies",
"Exploring the Impact of Galactic Interactions and Mergers on the Central Star Formation of APEX/EDGE-CALIFA Galaxies"
] | [
"Yeny Garay-Solis \nInstituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico\n",
"Jorge K Barrera-Ballesteros \nInstituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico\n",
"Dario Colombo \nArgelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany\n",
"Sebastián F Sánchez \nInstituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico\n",
"Alejandra Z Lugo-Aranda \nInstituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico\n",
"Vicente Villanueva \nDepartment of Astronomy\nUniversity of Maryland\n20742College ParkMDUSA\n",
"Tony Wong \nDepartment of Astronomy\nUniversity of Illinois\n61801UrbanaILUSA\n",
"Alberto D Bolatto \nDepartment of Astronomy\nUniversity of Maryland\n20742College ParkMDUSA\n"
] | [
"Instituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico",
"Instituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico",
"Argelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany",
"Instituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico",
"Instituto de Astronomía\nUniversidad Nacional Autónoma de México\nA.P. 70-26404510MéxicoD.FMéxico",
"Department of Astronomy\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Astronomy\nUniversity of Illinois\n61801UrbanaILUSA",
"Department of Astronomy\nUniversity of Maryland\n20742College ParkMDUSA"
] | [] | Galactic interactions and subsequent mergers are a paramount channel for galaxy evolution. In this work, we use the data from 236 star forming CALIFA galaxies with integrated molecular gas observations in their central region (approximately within an effective radius) -from the APEX millimeter telescope and the CARMA millimeter telescope array. This sample includes isolated (126 galaxies) and interacting galaxies in different merging stages (110 galaxies; from pairs, merging and post-merger galaxies). We show that the impact of interactions and mergers in the center of galaxies is revealed as an increase in the fraction of molecular gas (compared to isolated galaxies). Furthermore, our results suggest that the change in star formation efficiency is the main driver for both an enhancement and/or suppression of the central star formation -except in merging galaxies where the enhanced star formation appears to be driven by an increase of molecular gas. We suggest that gravitational torques due to the interaction and subsequent merger transport cold molecular gas inwards, increasing the gas fraction without necessarily increasing star formation. | null | [
"https://export.arxiv.org/pdf/2306.03385v1.pdf"
] | 259,088,598 | 2306.03385 | 97e35de4104fd8d5a0bacba79f4f179ac88a055b |
Exploring the Impact of Galactic Interactions and Mergers on the Central Star Formation of APEX/EDGE-CALIFA Galaxies
June 7, 2023
Yeny Garay-Solis
Instituto de Astronomía
Universidad Nacional Autónoma de México
A.P. 70-26404510MéxicoD.FMéxico
Jorge K Barrera-Ballesteros
Instituto de Astronomía
Universidad Nacional Autónoma de México
A.P. 70-26404510MéxicoD.FMéxico
Dario Colombo
Argelander-Institut für Astronomie
Auf dem Hügel 7153121BonnGermany
Sebastián F Sánchez
Instituto de Astronomía
Universidad Nacional Autónoma de México
A.P. 70-26404510MéxicoD.FMéxico
Alejandra Z Lugo-Aranda
Instituto de Astronomía
Universidad Nacional Autónoma de México
A.P. 70-26404510MéxicoD.FMéxico
Vicente Villanueva
Department of Astronomy
University of Maryland
20742College ParkMDUSA
Tony Wong
Department of Astronomy
University of Illinois
61801UrbanaILUSA
Alberto D Bolatto
Department of Astronomy
University of Maryland
20742College ParkMDUSA
Exploring the Impact of Galactic Interactions and Mergers on the Central Star Formation of APEX/EDGE-CALIFA Galaxies
June 7, 2023. Draft version typeset using LaTeX twocolumn style in AASTeX631. Keywords: galaxies: interactions/mergers - galaxies: star formation - galaxies: molecular gas - galaxies: evolution
Galactic interactions and subsequent mergers are a paramount channel for galaxy evolution. In this work, we use the data from 236 star-forming CALIFA galaxies with integrated molecular gas observations in their central region (approximately within an effective radius) - from the APEX millimeter telescope and the CARMA millimeter telescope array. This sample includes isolated (126 galaxies) and interacting galaxies in different merging stages (110 galaxies; from pairs, merging and post-merger galaxies). We show that the impact of interactions and mergers in the center of galaxies is revealed as an increase in the fraction of molecular gas (compared to isolated galaxies). Furthermore, our results suggest that the change in star formation efficiency is the main driver for both an enhancement and/or suppression of the central star formation - except in merging galaxies, where the enhanced star formation appears to be driven by an increase of molecular gas. We suggest that gravitational torques due to the interaction and subsequent merger transport cold molecular gas inwards, increasing the gas fraction without necessarily increasing star formation.
INTRODUCTION
The evolutionary processes that a galaxy goes through affect its internal properties. Among the most significant processes are star formation, gas recycling, metallicity enrichment, and the energy produced by supernovae or in active nuclei (Kormendy & Kennicutt 2004; Kormendy 2013). An external and fast process that significantly affects the evolution of galaxies on a large scale is galactic interactions/mergers (Toomre 1977; Conselice et al. 2003; Christensen et al. 2009; Kormendy 2013). A large number of observational and numerical studies have suggested an enhancement of star formation in galactic centers, related to the increase in molecular gas in these objects due to the tidal forces exerted by the galaxies that undergo the merger (e.g., Patton et al. 1997, 2005, 2011, 2013; Smith et al. 2007; Li et al. 2008; Ellison et al. 2008, 2010, 2011, 2013; Scudder et al. 2012; Pan et al. 2018; Moreno et al. 2021; Ueda et al. 2021). However, recent studies using sub-millimeter observations have suggested that, rather than enhancing the central star formation rate (SFR), interactions tend to increase the molecular gas fraction, i.e., the molecular-gas-mass-to-stellar-mass ratio (e.g., Kaneko et al. 2022). Given the lack of a statistically robust sample with homogeneous observations at both optical and sub-millimeter wavelengths, it has been difficult to reliably quantify the impact of the merging process in shaping the central star formation of galaxies.
In this study, we explore how the specific star formation rate (sSFR), molecular gas fraction (f_mol), and star formation efficiency (SFE) behave in the centers of 236 Calar Alto Legacy Integral Field Area (CALIFA) survey (Sánchez et al. 2012) star-forming galaxies at different merging stages, compared to isolated galaxies. Our sample includes 140 galaxies observed with the Atacama Pathfinder EXperiment (APEX) submillimeter telescope (Güsten et al. 2006), first presented by Colombo et al. (2020), and 96 galaxies observed with the Combined Array for Research in Millimeter-wave Astronomy (CARMA; Bock et al. 2006), encompassed by the Extragalactic Database for Galaxy Evolution (EDGE-CALIFA) survey (Bolatto et al. 2017). These observations provide integrated measurements of the central region, covering approximately one effective radius (∼1 R_eff). This is the largest CO database with optical Integral Field Spectroscopy (IFS) data so far. This article has the following structure: in Sec. 2 we describe the data and sample; in Sec. 3 we present the results of the analyses carried out for each merger-stage sample; in Sec. 4 we discuss our results; finally, in Sec. 5 we summarize the study and present our conclusions. For the derived quantities, we use a cosmology with the parameters H_0 = 71 km s^-1 Mpc^-1, Ω_M = 0.27, Ω_Λ = 0.73.

DATA AND SAMPLE

In this study, we use galaxies included in the Calar Alto Legacy Integral Field Area (CALIFA) sample (Sánchez et al. 2012). This survey observed more than 600 galaxies in the local Universe (0.005 < z < 0.03) using Integral-Field Spectroscopy (IFS). CALIFA used the Integral Field Unit (IFU) PPAK (Kelz et al. 2006) attached to the 3.5 m telescope of the Calar Alto Observatory in Spain, with a 74 × 64 arcsec field of view (FoV). For each observed galaxy, this FoV covers ∼2.5 R_eff with a typical spatial physical resolution of ∼1 kpc. The spectrograph used was PMAS (Potsdam Multi Aperture Spectrograph; Roth et al. 2005). This spectrograph has two superimposed grating configurations: V500 (3750-7000 Å) with a (low) spectral resolution of 6 Å, and V1200 (3700-4700 Å) with an (intermediate) spectral resolution of 2.7 Å. The raw data cubes obtained from PMAS/PPAK were reduced with a pipeline specialized for the CALIFA survey (Sánchez et al. 2016a). The reduced data cubes were analyzed with PIPE3D (Sánchez et al. 2016b,c) to derive the physical parameters of the stellar populations and the emission lines in each spaxel of a galaxy. The reduced data cubes of 667 galaxies are publicly available (CALIFA DR3). Below we present the integrated parameters derived from these data cubes.
Molecular gas mass
We used one of the largest databases of CO observations available for galaxies with IFS data in order to derive the central molecular gas mass (M_mol). Our sample includes measurements made by the Atacama Pathfinder EXperiment (APEX) submillimeter telescope (Güsten et al. 2006) and by the Combined Array for Research in Millimeter-wave Astronomy (CARMA; Bock et al. 2006). These data and their calibration are described in detail in Colombo et al. (2020) and Bolatto et al. (2017), respectively; here we summarize their main features. Colombo et al. (2020) presented APEX measurements of the 12CO(2-1) line (ν = 230.538 GHz) for a sample of 296 galaxies included in the CALIFA survey. These objects were selected to have an inclination angle below 65°. All galaxies have central measurements, and 39 of these objects also have off-center measurements. The APEX beam is 26.3 arcsec wide, corresponding to approximately 1.12 R_eff for this sample of galaxies.
On the other hand, the Extragalactic Database for Galaxy Evolution (EDGE) survey (Bolatto et al. 2017) observed a sample of galaxies drawn from the CALIFA survey using the CARMA interferometer. EDGE provides CO maps with a sensitivity of Σ_mol ∼ 11 M_⊙ pc^-2 (before tilt correction) and a physical resolution of ∼1.4 kpc. The EDGE-CALIFA survey was aimed at complementing the optical information derived from the CALIFA survey with CO maps observed by CARMA. The CO observations cover the 12CO(1-0) transition at ν ∼ 115.3 GHz and the 13CO(1-0) transition at ν ∼ 110.2 GHz, over the same coverage as the CALIFA FoV (74 × 64 arcsec) (Bolatto et al. 2017).
Derived parameters
For this study, we use the SFR, the stellar mass (M_*), and M_mol estimated within the APEX beam aperture, both for the APEX and the CARMA galaxies. The parameters are convolved with a weighting function W_T before being integrated (Colombo et al. 2020). The optical parameters are integrated from their angularly resolved properties (i.e., the SFR and stellar mass surface densities).
We derive the intensive properties by dividing each extensive property (SFR, M_*, M_mol) by the area of the beam, corrected for the inclination angle of each galaxy. This correction is based on the ellipticity and position angle derivations for the CALIFA galaxies made by López-Cobá et al. (2018); Lacerda et al. (2020); Sánchez et al. (2021). Using this correction, we obtain Σ_SFR, Σ_*, and Σ_mol for each galaxy in our sample. The intensive properties of both APEX-CALIFA and EDGE-CALIFA are presented in Sánchez et al. (2021).
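As an illustration of this step, the sketch below (ours; the function name, unit choices, and the cos i deprojection factor are our assumptions, not code from the survey pipelines) converts a beam-integrated quantity into an inclination-corrected surface density.

```python
# Extensive -> intensive: divide by the deprojected beam area (a sketch).
import numpy as np

def intensive(value, beam_fwhm_arcsec, dist_mpc, incl_deg):
    """value: SFR [Msun/yr], M* or Mmol [Msun] integrated within the beam.
    Returns the face-on surface density in (units of value) per kpc^2."""
    kpc_per_arcsec = dist_mpc * 1e3 * np.pi / (180.0 * 3600.0)
    r_kpc = 0.5 * beam_fwhm_arcsec * kpc_per_arcsec              # beam radius in kpc
    area_kpc2 = np.pi * r_kpc**2 / np.cos(np.radians(incl_deg))  # deprojected area
    return value / area_kpc2

# e.g., Sigma_mol for 10^9 Msun of molecular gas in the 26.3" APEX beam
# of a galaxy at 60 Mpc inclined by 40 degrees:
print(intensive(1e9, 26.3, 60.0, 40.0))
```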
Since we are interested only in star-forming galaxies, we select those galaxies with an Hα equivalent width, EW(Hα), larger than 6 Å measured within the beam (see Sánchez 2020, for more details). Furthermore, using the emission-line ratio diagnostics (also known as the BPT diagram; Baldwin et al. 1981), we select galaxies that fall below the Kewley demarcation line (Kewley et al. 2001). Finally, we also consider only galaxies with reliable central measurements of the CO flux, i.e., measurements with a signal-to-noise ratio greater than 3. These selection criteria yield a final sample of 236 galaxies. It consists mostly of Sa to Sd galaxies, with a few Irr, E, and S0 (230 and 6 galaxies, respectively). This sample covers a wide range of stellar masses (10^8.3-10^11.0 M_⊙), SFRs (10^-1.8-10^0.4 M_⊙ yr^-1), and molecular gas masses (10^5.8-10^10.0 M_⊙) within the APEX beam coverage.
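The three cuts can be expressed as a single boolean mask; a minimal sketch (ours, with hypothetical column names):

```python
# Star-forming sample selection (a sketch; column names are illustrative).
import numpy as np

def select_star_forming(ew_ha, below_kewley, co_flux, co_flux_err):
    """ew_ha: EW(Halpha) [Angstrom] within the beam; below_kewley: boolean flag
    from the BPT diagram; co_flux(_err): central CO flux and its uncertainty."""
    return (ew_ha > 6.0) & below_kewley & (co_flux / co_flux_err > 3.0)
```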
Classification of the merger stages
To gauge the interaction/merging stage, we use the classification scheme introduced by Veilleux et al. (2002). This scheme was based on numerical simulations of a spiral-spiral merger, tracing different snapshots of that merger (Surace 1998). This classification has been successfully used to characterize the merging stage of a sub-sample of CALIFA galaxies (Barrera-Ballesteros et al. 2015). Following this study, we classify galaxies according to the features observed in their r-band Sloan Digital Sky Survey images (SDSS; York et al. 2000) at each merging stage. Below we briefly describe the main morphological features used to classify a galaxy in a given interacting/merging stage (the pair criteria are also summarized in the sketch after the stage definitions). Pre-merger stage (pre): The galaxy has a companion, and each of the galaxies presents an unperturbed morphology (i.e., they can be associated with either a spiral or an elliptical). We further use the following criteria:
• The projected separation between the companions is r_p < 160 kpc (Barrera-Ballesteros et al. 2015). This is based on the results of Patton et al. (2013), who found (in observations and simulations) that galaxy pairs with a mass ratio of 2.5:1 show an increase in star formation up to projected separations of ∼150 kpc.
• A systemic velocity difference ∆v < 300 km s^-1 (Barrera-Ballesteros et al. 2015), based on the constraints defined for large studies of galaxy pairs in Ellison et al. (2008, 2013).
• A difference in r-band magnitudes of < 2 mag (Barrera-Ballesteros et al. 2015).
Merging stage (mer): merging galaxies with features such as tidal tails, bridges, and plumes, but with a well-defined nucleus in each companion.
Post-merger stage (post): galaxies whose nuclei have merged, with prominent tidal tails. The emission in the core can be spread out and crossed by dust lanes.
Merger remnant stage (rem): remnants with features such as weak tidal tails, shell shapes, and ripples, but with fused cores. In our interacting sample we classify 70, 17, 13, and 10 galaxies in the pre-merger, merging, post-merger, and merger remnant stages, respectively.
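The quantitative pair criteria above reduce to a single predicate; a minimal sketch (ours):

```python
# Pre-merger (pair) criteria of Barrera-Ballesteros et al. (2015) (a sketch).
def is_pre_merger(r_p_kpc, dv_kms, delta_r_mag):
    """Projected separation [kpc], systemic velocity difference [km/s], and
    r-band magnitude difference between the two companions."""
    return (r_p_kpc < 160.0) and (abs(dv_kms) < 300.0) and (abs(delta_r_mag) < 2.0)
```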
2.5. Characterizing the physical properties of Star-Forming Galaxies
One of the main goals of this study is to shed light on how the properties related to star formation in the centers of galaxies are affected by the merger sequence. To achieve this, we explore in this section three of the most important scaling relations related to the variation of star formation in galaxies: (1) the star formation main sequence (SFMS) (e.g., Cano-Díaz et al. 2016, 2019; Lin et al. 2019; Sánchez 2020; Ellison et al. 2021; Sánchez et al. 2021), (2) the Schmidt-Kennicutt law (SK) (e.g., Kennicutt & Evans 2012; Sánchez 2020; Lin et al. 2020; Morokuma-Matsui et al. 2020; Ellison et al. 2021; Sánchez et al. 2021; Kaneko et al. 2022), and (3) the molecular gas main sequence (MGMS). Following Ellison et al. (2021), we use intensive central properties, i.e., normalized to the physical area of the observed aperture (see Sect. 2.3 for more details), to derive these scaling relations for our sample. To characterize these scaling relations we follow a procedure similar to that described in Sánchez et al. (2021). In a nutshell, for each of the scaling relations we perform a linear fit (in logarithmic scales) to the data. These scaling relations, as well as their fits, are presented in Appendix A. The best-fit parameters of these three scaling relations are in excellent agreement with those derived by Sánchez et al. (2021).
Using those best fits, we define the residuals for each of the explored scaling relations:
∆SFMS = log(Σ SFR ) − (α log(Σ ) + β), ∆SK = log(Σ SFR ) − (α log(Σ mol ) + β),
and ∆MGMS = log(Σ mol ) − (α log(Σ ) + β) for the SFMS, SK, and MGMS relations, respectively 2 . To account for individual errors, we performed a Monte Carlo (MC) iteration by perturbing the original data within the error range and repeating the complete analysis 1000 times. The reported results in this study are the average of all MC iterations. For each of these scaling relations, the ratio of the two parameters corresponds to another physical property (i.e., the specific SFR, sSFR = Σ SFR /Σ ; the star-formation efficiency, SFE = Σ SFR /Σ mol ; and the gas fraction, f mol = Σ mol /Σ ). We find a significant good correlation between the ratio derived from the parameters and the residuals from the best fit (r > 0.9 for the three scaling relations). This suggests that intensive properties are less prone to be affected by distance effects as they are normalized to the physical area. Our results indicate that exploring the residuals of the scaling relations using intensive properties is equivalent to explore their corresponding ratios. In Sect. 3 we use these residuals to quantify the impact of the merger sequence in the central star-forming properties. In the next section we use the best fit parameters to replicate the observed scaling relations using a sample of randomly distributed data.
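A minimal sketch (ours) of this procedure, using an ordinary least-squares fit in log-log space as a stand-in for the exact fitting scheme of Sánchez et al. (2021), together with the Monte Carlo perturbation of the inputs:

```python
# Best-fit scaling relation, residuals, and MC-averaged residuals (a sketch).
import numpy as np

def fit_and_residuals(logx, logy):
    alpha, beta = np.polyfit(logx, logy, deg=1)       # slope and zero point
    return alpha, beta, logy - (alpha * logx + beta)  # e.g., Delta_SFMS

def mc_residuals(logx, logy, ex, ey, n_mc=1000, seed=0):
    """Perturb the data within their uncertainties and average the residuals."""
    rng = np.random.default_rng(seed)
    res = np.empty((n_mc, len(logx)))
    for i in range(n_mc):
        xi = logx + rng.normal(0.0, ex, size=logx.shape)
        yi = logy + rng.normal(0.0, ey, size=logy.shape)
        res[i] = fit_and_residuals(xi, yi)[2]
    return res.mean(axis=0)

# e.g., Delta_SFMS = mc_residuals(log10(Sigma_star), log10(Sigma_SFR), 0.15, 0.2)
```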
Scaling relations using random data
As mentioned in the previous section, we use the residuals of the aforementioned scaling relations to explore the impact of the merger sequence on the star formation stage in the central region of galaxies. Recently, the exploration of these residuals has been used to estimate the drivers of star formation in galaxies (e.g., Lin et al. 2020). On the other hand, using large samples of galaxies, different studies have also cautioned about the use of these relations between residuals or ratios, as they may have a statistical rather than physical origin (e.g., Sánchez et al. 2021; Barrera-Ballesteros et al. 2021). To account for the scenario in which the correlations between residuals have a statistical origin, we derive them using a set of mock data based on the best fit of each scaling relation provided in the previous section.
We follow a procedure similar to the one described in Sánchez et al. (2021). First, we generate a distribution of Σ_* for 5000 targets. This distribution is designed to be similar to the one presented by the observed galaxies and is randomly populated. Then, using the best-fit parameters of the SFMS presented in Sect. 2.5, we derive an estimate of Σ_SFR. In a similar way, we estimate Σ_mol using the best fit of the MGMS. Using these estimates of Σ_mol, we also derive Σ_SFR from the best fit of the SK law. In order to include the uncertainty given by the systematic error of the observations, we perturb these estimates of Σ_SFR by their typical uncertainty (i.e., ∼0.2 dex). A similar procedure applies to Σ_* and Σ_mol, with typical uncertainties of ∼0.15 dex and ∼0.28 dex, respectively (see Barrera-Ballesteros et al. 2021).
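A sketch (ours) of this mock-sample construction; the coefficients below are placeholders that would be replaced by the best-fit values of Sect. 2.5, and the shape of the Σ_* distribution is an assumption:

```python
# Mock data set from the best-fit SFMS, MGMS, and SK relations (a sketch).
import numpy as np

rng = np.random.default_rng(2)
N = 5000
log_sstar = rng.normal(7.5, 0.5, N)            # assumed Sigma_* distribution [dex]
log_sstar += rng.normal(0.0, 0.15, N)          # typical Sigma_* uncertainty

a_sfms, b_sfms = 1.0, -9.5                     # placeholder SFMS coefficients
a_mgms, b_mgms = 0.9, -0.5                     # placeholder MGMS coefficients
a_sk, b_sk = 1.0, -8.8                         # placeholder SK coefficients

log_ssfr_sfms = a_sfms * log_sstar + b_sfms + rng.normal(0.0, 0.2, N)
log_smol = a_mgms * log_sstar + b_mgms + rng.normal(0.0, 0.28, N)
log_ssfr_sk = a_sk * log_smol + b_sk + rng.normal(0.0, 0.2, N)
```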
As expected, the mock data are closely distributed around the best fits of the observed data set. However, they have a slightly smaller spread than the observed data, which may be due to the uncertainties assumed to create the synthetic data. Using these mock data, we perform the same procedure as in Sect. 2.5 to derive the best-fit parameters of each of the scaling relations. Finally, as for the observed data set, we derive the residuals of the mock data using their best-fit relations. In the next section we use the distributions of the residuals of these mock data and compare them with those derived from the observations. These comparisons provide a measurement of how similar (or different) the residual distributions of galaxies in the different interaction/merger stages are to those expected from a purely statistical origin.
RESULTS

3.1. The ∆SK − ∆MGMS diagram
The residuals of the explored scaling relations quantify the relative enhancement or deficit of a given property (e.g., the variation of Σ_SFR with respect to both Σ_* and Σ_mol, or the variation of Σ_mol with respect to Σ_*). Using these residuals, we want to explore the actual impact of the merging process on the star formation activity, as well as what could be the main drivers of this activity.
In Fig. 1 we plot the residuals of the SK relation (∆SK) against the residuals of the MGMS (∆MGMS), color-coded by the residuals of the SFMS (∆SFMS). In each panel, we plot one of the merger stages explored in this study. The top-left panel shows the mock-data sample, whereas the top-right panel shows the results for the isolated sample. From left to right and from top to bottom, the remaining panels show the galaxies in the different interaction/merging stages in the ∆SK − ∆MGMS plane. We divide each panel into four quadrants (indicated by I, II, III, IV). The goal of this division is to compare the fraction of each sample in each of those quadrants (denoted by the fractions in parentheses).
Figure 1. Yellow contours in all other panels correspond to the areas encircling ∼68% of the observed data. Each panel is divided into four quadrants, I, II, III, and IV; the fraction of galaxies located in each quadrant is indicated in parentheses. The light red area indicates that galaxies within it have a ∆SK greater than their ∆MGMS. The sky blue area indicates that galaxies within it have a ∆MGMS greater than their ∆SK.

The green (yellow) contours enclose ∼95% and ∼68% of the distribution of the mock (isolated) sample. When we compare the green and yellow contours encircling ∼68%, we find that both samples cover a similar area in the ∆SK − ∆MGMS diagram. We also find that both samples, mock and isolated, show a similar Spearman correlation coefficient for these two parameters (ρ ∼ −0.327 and −0.331, respectively). In addition, Pearson's chi-squared test (applied to the mock data) and Fisher's exact test (applied to the isolated sample) for contingency tables result in p-values of 0 and 0.01, respectively. In contrast to previous studies (Ellison et al. 2020), this suggests that the relations between ∆SK and ∆MGMS may not have a physical origin (e.g., Sánchez et al. 2021). In other words, these relations between residuals could be induced by the stochasticity of the data rather than by a physical driver. We further quantify the similarity between the mock and isolated distributions by deriving the two-dimensional Kolmogorov-Smirnov (KS) test p-value (p = 0). Although this value suggests that these two samples are not drawn from a common parent sample, we find that the isolated sample has similar fractions in each of the quadrants. This is further evidence that these two samples are similar to each other. We also note that there are significant outliers in the isolated sample in comparison to the mock data. For instance, NGC 3773 (located at −0.4, 1.1) and NGC 5630 (located at −0.9, 0.9) show an increase in their SFE and in their sSFR, but a decrease in their f_mol. Using the two-dimensional p-value, we compare each interacting/merging sample with the isolated one. We find the highest p-value for the merger remnant galaxies (p_rem = 0.12), suggesting that both samples may be extracted from a single two-dimensional distribution. The other samples exhibit lower p-values, close to zero (p_pre = 0.06, p_mer = 0.01, and p_post = 0.08 for the pre-merger, merging, and post-merger samples, respectively). This indicates that such samples are significantly different from the isolated sample. Regarding the location of the interacting/merging galaxies in the quadrants, we find that, unlike the control sample, a significant fraction of galaxies show a decrease in their SFE, traced by their ∆SK (i.e., most of the galaxies are located in quadrants III and IV). This is especially evident for the merging and post-merger samples (∼75 ± 6% and 68 ± 8%, respectively). Among the galaxies in these two quadrants, we also find that a significant fraction (in comparison to the isolated sample) is located in quadrant IV (i.e., galaxies with a decrease in SFE but an increase in f_mol). It is interesting to note that our sample contains a few outliers. For example, the interacting galaxy NGC 7253B is located in quadrant IV; although it has a negative value of ∆SK, it shows both the largest ∆MGMS and the largest ∆SFMS of the entire sample.
This interacting galaxy and its pair (NGC 7253A) show evident dust lanes in their centers, as well as blue tails and plumes. From this analysis, we find that interacting galaxies, in comparison to the isolated sample, tend to show a decrease of their central SFE, traced by the negative values of ∆SK. However, we find a significant enhancement of their gas fraction, traced by their ∆MGMS. This analysis suggests that, rather than increasing the central star formation, the interaction/merging mechanisms affect the amount of central molecular gas in galaxies.
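The quadrant comparison between two samples can be cast as a contingency-table test; a minimal sketch (ours; the quadrant numbering is our assumption):

```python
# Chi-squared contingency test on the quadrant occupations (a sketch).
import numpy as np
from scipy.stats import chi2_contingency

def quadrant(dsk, dmgms):
    # assumed numbering: I (+,+), II (+,-), III (-,-), IV (-,+) in (dSK, dMGMS)
    return np.select(
        [(dsk >= 0) & (dmgms >= 0), (dsk >= 0) & (dmgms < 0),
         (dsk < 0) & (dmgms < 0), (dsk < 0) & (dmgms >= 0)],
        [0, 1, 2, 3]).astype(int)

def compare_samples(dsk_a, dmgms_a, dsk_b, dmgms_b):
    counts = np.vstack([np.bincount(quadrant(dsk_a, dmgms_a), minlength=4),
                        np.bincount(quadrant(dsk_b, dmgms_b), minlength=4)])
    chi2, p, dof, expected = chi2_contingency(counts)   # 2 x 4 table
    return p
```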
3.2. Main drivers of the excess/deficit of SFR
In the previous section, we explored how the merging event affects the SFE and f_mol, traced by ∆SK and ∆MGMS, respectively. In this section we explore the dependence of the excess/deficit in SFR at a given stellar mass (gauged by ∆SFMS) on a possible excess/deficit of the other two explored parameters for the different merging stages. To achieve this, we compare the fraction of galaxies in different bins of ∆SFMS, segregated into two groups: (i) galaxies with an excess of f_mol with respect to SFE (i.e., ∆MGMS > ∆SK), and (ii) galaxies with an excess of SFE with respect to f_mol (i.e., ∆SK > ∆MGMS). Following Moreno et al. (2021), by comparing the fractions of galaxies in (i) and (ii) for a given bin of ∆SFMS, we can suggest a possible main driver for that variation of SFR. Thus, if the fraction of galaxies in (i) is larger than the fraction in (ii), we consider that an enhancement (deficit) of SFR is fuel (efficiency) driven. On the contrary, if fraction (i) is smaller than fraction (ii), we consider that an SFR enhancement (deficit) is efficiency (fuel) driven.
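The bookkeeping behind Fig. 2 can be written compactly; a minimal sketch (ours):

```python
# Per-bin fractions of fuel-side vs efficiency-side galaxies (a sketch).
import numpy as np

def fuel_vs_efficiency(dsfms, dsk, dmgms, edges):
    idx = np.digitize(dsfms, edges) - 1
    out = []
    for b in range(len(edges) - 1):
        sel = idx == b
        if not sel.any():
            out.append((0, np.nan, np.nan))
            continue
        f_fuel = np.mean(dmgms[sel] > dsk[sel])     # Delta_MGMS > Delta_SK
        out.append((sel.sum(), f_fuel, 1.0 - f_fuel))
    return out  # per bin: (N, fuel-driven fraction, efficiency-driven fraction)
```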
Figure 2. Fraction of galaxies in bins of ∆SFMS for each merger stage. Grey squares show the total number of galaxies per bin. Blue stars indicate galaxies in which ∆MGMS is greater than ∆SK (located in the light blue area of Fig. 1), i.e., objects with a larger excess in f_mol than in SFE with respect to the average population. Red circles indicate the opposite, i.e., galaxies in which ∆SK is greater than ∆MGMS (located in the light red area of Fig. 1), i.e., objects with a larger excess in SFE than in f_mol with respect to the average population.

In each panel of Fig. 2 we show the fraction of galaxies in different bins of ∆SFMS (gray squares) for each merging stage (including the mock data). In each of these bins, we also show the fraction of galaxies with ∆MGMS > ∆SK and with ∆SK > ∆MGMS (blue stars and red circles, respectively). For the mock data, we find that ∼50.0% of the sample shows an increase in ∆SFMS. Furthermore, for this fraction of the sample we find that half of it has ∆SK > ∆MGMS. Given that we built this mock sample by adding random values to the best-fit scaling relations, it is expected to find similar fractions of galaxies in groups (i) and (ii). Thus, in the central region of these simulated galaxies it is not possible to distinguish whether star formation enhancements are mainly driven by efficiency (a large fraction with ∆SK > ∆MGMS) or by the amount of molecular gas (a large fraction with ∆MGMS > ∆SK). In other words, both parameters are equally important in setting the star formation activity. In comparison to the mock sample, we observe a similar behavior in the isolated sample (top middle panel of Fig. 2): ∼52 ± 3% of the control sample shows an increase in ∆SFMS. Of these galaxies, more than half present ∆SK > ∆MGMS (∼62 ± 3%). Although this could indicate that the enhancement in SFR for the isolated galaxies is efficiency driven (see the scheme above), we acknowledge that the fraction of galaxies with ∆MGMS > ∆SK is comparable (∼38 ± 3%), preventing us from significantly constraining the main driver of the SFR enhancement in this sample. Similar conclusions can be drawn for isolated galaxies with negative values of ∆SFMS. These results suggest that for the sample of isolated galaxies it is not evident which parameter drives the decrease or enhancement of the star formation activity, and that these three parameters may not strongly correlate with each other for isolated star-forming galaxies (Barrera-Ballesteros et al. 2021).
In the pre-merger sample (top right panel of Fig. 2), we find a fraction of galaxies with an enhancement in sSFR (∼53 ± 4% with ∆SFMS > 0) similar to that of the control sample. Furthermore, among these galaxies with ∆SFMS > 0, the fraction with ∆SK > ∆MGMS (∼62 ± 4%) is similar to that of the isolated sample. For galaxies with ∆SFMS < 0 (∼47 ± 4%), we find that, unlike in the isolated sample, the fraction of galaxies with ∆MGMS > ∆SK is significantly larger (∼76 ± 4%). Thus, for galaxies in pairs, our results suggest that there is no significant increase in the fraction of galaxies with an enhancement in their SFR in comparison to the isolated sample. Furthermore, for companions with an enhancement in their SFR, we are not able to clearly disentangle the possible driver of these enhancements, as those galaxies present similar fractions with ∆SK > ∆MGMS and with ∆MGMS > ∆SK; this is similar to what we find in isolated galaxies. However, for companions with a deficit of SFR, we find a large fraction of galaxies with ∆MGMS > ∆SK, suggesting that the lack of star formation is efficiency driven. That is, even though a large fraction of these galaxies show an increment in their gas fraction, there is no enhancement in their star formation.
For the other merging stages (merging, post-merger, and merger remnant), we find that a large fraction of galaxies show a deficit of SFR (∼59 ± 7%, ∼62 ± 7%, and ∼70 ± 7%, respectively), in contrast to the isolated and pre-merger samples, which show similar fractions of galaxies with an enhancement and with a deficit of SFR. For these galaxies with ∆SFMS < 0, the fraction of galaxies with ∆MGMS > ∆SK is greater than for isolated galaxies, and this fraction increases along the sequence (∼80 ± 7%, ∼88 ± 7%, and ∼86 ± 7% for the merging, post-merger, and merger remnant stages, respectively). These results, derived from the samples at the different interaction/merging stages, suggest that the merger event induces a decrease (rather than an increase) in the central star formation activity in comparison to the isolated sample. Furthermore, for those galaxies with a decrease in star formation activity, we find that the majority have an excess in their gas fraction in comparison to their efficiency. According to the scheme presented above, this indicates an efficiency-driven halt of star formation due to the merging process.
In general, our results suggest that for interacting/merging galaxies, the decrease (increase) in star formation, traced by ∆SFMS, is driven by the inability (ability) of the molecular gas to be efficiently transformed into new stars, rather than by the amount of raw material available to form those stars. In other words, the efficiency seems to be a significant driver in shaping the star formation activity across the interaction/merger event.
DISCUSSION
In this study, we explore the impact of galactic interactions/mergers on the specific star formation rate (sSFR), the molecular gas fraction (f_mol), and the star formation efficiency (SFE), as well as the possible driving mechanism of star formation, in the central region of CALIFA galaxies. Similar studies with smaller samples and/or heterogeneous data sets have been carried out previously. Pan et al. (2018) studied a sample of 58 interacting galaxies in pairs (r_p < 70 kpc) with measurements of molecular gas from four different surveys (with different beam coverages): JCMT PI, JINGLE, NGLS, and xCOLD GASS (see Pan et al. 2018). They used global SFR and M_* estimates taken from the MPA-JHU DR7 catalog. Assuming a Schmidt-Kennicutt relation between the global SFR and the expected CO emission (Leroy et al. 2009; Saintonge et al. 2012), they recalibrated the observed CO measurements (for a given beam coverage) to a global CO estimate. They found that a significant fraction of galaxies in pairs exhibit an increase in their SFR and gas fraction in comparison to a control sample. Although we find that merging and post-merger galaxies indeed show an increase in their gas fractions in comparison to our sample of isolated galaxies, we do not find significant differences for our sample of pre-merger (pair) galaxies. We also note that, contrary to this study, we do not find a significant increase in SFE (traced by ∆SK) between our interacting/merger samples at the different stages and the control sample. These differences between our results and those of Pan et al. (2018) could be due to different factors. On the one hand, our CO measurements come from homogeneous observations (i.e., the same beam coverage), provided by both the resolved CARMA and the integrated APEX observations. Furthermore, thanks to the IFU CALIFA data set, we are able to provide direct measurements of the Hα flux in the same aperture as the CO observations. Thus, different assumptions in the estimation of the CO values and SFRs could lead to the increments in the SFE and gas fraction reported by Pan et al. (2018). On the other hand, combining our pre-merger and merging stages into interacting galaxies in pairs (following the classification by Pan et al. 2018), we have a larger sample of galaxies (∼61% more, 87 vs. 54 galaxies in pairs). This could also lead to differences in the main conclusions of our results in comparison to those reported by Pan et al. (2018).
Using a sample of 11 interacting galaxies drawn from the xCOLD GASS survey (Saintonge et al. 2016), Violino et al. (2018) also studied the variations between the SFR and the gas fraction. As a control sample, they used a set of non-interacting galaxies also drawn from this survey. Although they used the optical properties derived from the MPA-JHU catalog, unlike Pan et al. (2018) they corrected the SFR to match the aperture of the IRAM millimeter telescope data. Violino et al. (2018) found that for their interacting sample with r_p < 30 kpc (pre-merger and merging galaxies in our classification), the variations in SFR and gas fraction are similar to those derived from the control sample. They suggest that both interactions and internal processes can lead to variations in the molecular gas and SFR in the centers of galaxies. Although we find similar results for our sample of galaxy pairs, we find a significant increase in the gas fraction for our sample of merging galaxies. These differences may be due to the differences in samples and measurements described above: on the one hand we use a homogeneous data set, and on the other hand our sample is significantly larger than the one presented by Violino et al. (2018).
Using spatially resolved data, Kaneko et al. (2022) studied the behavior of these properties in four pairs of galaxies (ARP 84, VV 219, VV 254, and the Antennae Galaxies), which correspond to merging galaxies in our classification scheme, and compared these measurements to 11 non-interacting galaxies. The CO maps for these samples were obtained with the Nobeyama radio telescope, whereas the SFRs were derived using data from GALEX observations. They found that, for integrated properties, neither the sSFR nor the SFE exhibits significant increments in comparison to the global measurements of isolated galaxies. Although their sample is small compared to ours, the results from this study agree with those presented here.
Besides observational studies (e.g., Thorp et al. 2022), there have been simulations exploring star formation in interacting galaxies, aimed at investigating whether the enhancement of star formation is due to the amount of cold gas or to the efficiency. For example, Moreno et al. (2021) conducted high-resolution hydrodynamical simulations focused on the central kiloparsec of Milky Way-like galaxies in pairs (with mass ratios from 2.5:1 to 1:1) and projected separations of 20 < r_p < 120 kpc. We should remark that, strictly speaking, it is very difficult to compare our results quantitatively with these simulations. On the one hand, our sample has a wide range of morphologies (furthermore, we do not know the morphology of the progenitors of the post-merger and merger remnant galaxies). On the other hand, in contrast to simulations, we are not able to apply a rigorous mass-ratio selection to our sample of pairs and merging galaxies. Furthermore, the physical area used to measure the properties in those simulations is fixed (1 kpc), whereas the area used in this study varies depending on the APEX beam coverage (∼5.2 kpc). The possible discrepancies between our results could be due to these limitations; in particular, we consider that the different spatial coverages could induce a major discrepancy. Despite this, qualitative comparisons can provide guidelines on what to expect from the numerical simulations of merging galaxies. In these simulations, Moreno et al. (2021) found that the percentage of primary galaxies showing an increase in their amount of molecular gas is 85.4%. This is in agreement with our results: in our pre-merger and merging samples, ∼57 ± 4% and ∼81 ± 6% of the galaxies present an increase in f_mol, respectively. However, we should also note that in our samples the percentage of galaxies with increases in sSFR (∼53 ± 4% and ∼41 ± 7%) is lower than in these simulations (67.7%).
Furthermore, Moreno et al. (2021) found that, for those galaxies with an increment in their SFR (in comparison to the isolated galaxies), the fraction with an increment in the amount of gas is larger than the fraction exhibiting an increase in their star formation efficiency, suggesting that the enhancement in the SFR of interacting galaxies is driven by the amount of gas in the central kpc (so-called fuel-driven enhanced star formation). In comparison to the analysis of Moreno et al. (2021), we find mixed results. On the one hand, those simulations show that, for galaxies with enhanced SFR in the pre-merger stage, the fraction of galaxies with an enhancement in f_mol is larger than the fraction with an enhancement of SFE. Contrary to the results from the simulations, our analysis suggests that the enhancement in the SFR of galaxies in pairs is efficiency driven, rather than fuel driven (see the top-right panel of Fig. 2). We also note that a similar, but weaker, effect is observed in the isolated sample: for those galaxies with an enhancement in sSFR, the fraction of galaxies with enhanced SFE is slightly larger than the fraction with an f_mol enhancement. On the other hand, for the merging galaxies, we find that in some bins of positive ∆SFMS the enhancement in sSFR is fuel driven. Although not explored in those simulations, we should note that the observed enhancements in sSFR for the remaining interaction stages seem to be driven by efficiency instead of an excess of molecular gas. Regarding galaxies with a decrement in their SFR, Moreno et al. (2021) found that the fraction with an increment in the amount of gas is larger than the fraction exhibiting an increase in their SFE, suggesting that the suppression of the SFR in interacting galaxies is efficiency driven. In this study, we agree with these simulations regarding the suppressed star formation. These results suggest that, even though interactions and mergers increase the amount of molecular gas in the centers of galaxies, it is the efficiency, rather than the amount of gas, that drives either an enhancement or a suppression of star formation. More research is needed to understand in detail which processes govern the level of SFE that enhances or suppresses star formation. For example, there are studies concluding that the SFR is primarily controlled by interstellar turbulence and magnetic fields (Federrath & Klessen 2012; Federrath et al. 2016).
Including this study, most of the explored properties of interacting galaxies are derived from integrated measurements (either of the central region or of the entire target). However, since galaxies are spatially resolved entities, the impact of the interaction and merger can be different at different locations in the galaxy. Thus, spatially resolved observations are necessary to analyze the gas distribution and to compare interacting galaxies with isolated ones. Unfortunately, the studies available using IFS in this field are scarce. Using a sample of 31 galaxies (17 pairs and 14 post-merger galaxies) included in the ALMaQUEST sample (the ALMA MaNGA QUEnching and STar formation survey, which combines optical IFS observations from the MaNGA survey with sub-millimeter interferometric observations from ALMA), Thorp et al. (2022) explored how the interaction and merger affect the drivers of the enhancement of star formation in both integrated and spatially resolved measurements. Their results suggest that, for a given interacting galaxy, even though at global scales a given driver (either fuel or efficiency) dominates the star formation, it may not be the same dominant driver at kpc scales. Furthermore, they found that even within a given galaxy different regions can exhibit a different driver for the enhancement of star formation, and that this is also independent of the interaction stage. Although our results are not directly comparable with spatially resolved measurements, they suggest that in the central region of interacting/merging galaxies the efficiency appears to be the most significant driver of both the enhancement and the suppression of star formation. In a future study we will explore how our parameters (∆SFMS, ∆SK and ∆MGMS) vary at kpc scales in a sample of interacting/merging EDGE galaxies. Finally, we also note that our findings align well with the observations and simulations of the Central Molecular Zone (CMZ) of the Milky Way (Morris & Serabyn 1996). The CMZ presents a lower star formation rate than expected based on its molecular gas content (e.g., Immer et al. 2012; Longmore et al. 2013; Barnes et al. 2017). The reasons for this lower SFR are currently being intensely investigated. For example, Henshaw et al. (2022) propose that star formation in the CMZ may be linked to the overall macroscopic evolution of the system, which could be driven by discrete episodes of feedback (and/or accretion), and/or by local interstellar medium conditions that raise the critical density for star formation. Consequently, studying the CMZ will provide insight into the physical mechanisms that result in a reduced SFR, allowing us to extrapolate to the extragalactic context and better understand how interactions and mergers affect the evolution of galaxies in the central region.
SUMMARY AND CONCLUSIONS
In this study, we explore the impact of galactic interactions/mergers on star formation using a sample of 236 star-forming galaxies with homogeneous measurements of the optical and molecular gas properties across their central region (at ∼ 1 R_eff, i.e., the APEX beam coverage, ∼ 26.3 arcsec) from the APEX-CALIFA survey. We segregate our sample of interacting galaxies (110 targets) into different merger stages (from pre-mergers to merger remnants). For the entire sample, we derive the best fits of the scaling relations using their intensive properties (i.e., normalizing by the APEX beam area): the star formation main sequence (SFMS), the Schmidt-Kennicutt relation (SK), and the molecular gas main sequence (MGMS). These fits are shown in Appendix A. We also generate a mock data set using the best fits of these scaling relations. Using these fits, we obtain the residuals for each relation (∆SFMS, ∆SK and ∆MGMS). As expected, these residuals are similar to the usual ratios that characterize these three scaling relations: the specific star formation rate (sSFR), the star formation efficiency (SFE), and the gas fraction (f_mol). Based on these residuals, we explore how the variation (excess or deficit) for the different interacting stages compares with the control sample (see Fig. 1). Furthermore, to explore what drives the star formation enhancement or suppression at different interaction/merging stages, we plot in Fig. 2 the fraction of galaxies with ∆SK > ∆MGMS, and ∆MGMS > ∆SK, for different bins of ∆SFMS. Based on the analysis presented in the previous sections, and compared to the isolated sample, we conclude the following:
• In comparison with the control sample, we do not find a significant fraction of interacting/merging galaxies with an enhancement of their sSFR and SFE. However, we do find that interacting/merging galaxies tend to have a larger fraction of galaxies with a large central molecular gas fraction (see Fig. 1).
• Although we are not able to fully identify the main mechanism that modulates the star formation activity in isolated galaxies (i.e., efficiency- vs. fuel-driven star formation), we find that, in general, for interacting and merging galaxies both the suppression and the enhancement of star formation appear to be efficiency driven (see Fig. 2).
Our results indicate that the tidal forces due to the interaction/merger increase the amount of molecular gas in the centers of the galaxies: gravitational torques reduce the angular momentum of the molecular gas in the galaxies, moving it inwards. Despite this, and in contrast to previous results, we do not find a significant increase in the central star formation of interacting galaxies in comparison with the control sample. We suggest that a possible reason why the central molecular gas has not yet been transformed into new stars is that it is still in a turbulent stage, preventing it from cooling down. Further studies using homogeneous spatially resolved data (such as the EDGE data set) will allow us to better constrain the role of mergers in shaping the star formation across the optical extension of galaxies in the nearby universe.
(1) the star formation main sequence (SFMS), (2) the Schmidt-Kennicutt relation (SK), and (3) the molecular gas main sequence (MGMS) (e.g., Lin et al. 2019; Sánchez 2020; Barrera-Ballesteros et al. 2020; Ellison et al. 2021; Sánchez et al. 2021), following Lin et al. (2019); Ellison et al. (2020); Barrera-Ballesteros et al. (2020); Sánchez et al. (2021).
Figure 1. Distribution of the residuals of the explored relations: ∆SK − ∆MGMS color-coded by the residuals ∆SFMS. Each panel indicates the number of galaxies studied in the different merging stages, where a circle represents the residual derived in the central region of each galaxy, approximately within 1 R_eff (see Sect. 2). The p-value of the two-dimensional Kolmogorov-Smirnov test (KS test) is represented by p, and ρ represents the Spearman correlation coefficient between the datasets shown in the upper panels. Green contours in the top-left panel correspond to the areas encircling ∼ 95% and ∼ 68% of the mock data.
https://califa.caha.es/
The α and β best-fit parameters differ depending on the explored scaling relation; see Appendix A.
ACKNOWLEDGEMENTS

Y.G.S. thanks C. Espinosa-Ponce for his support and comments throughout the development of this work. Y.G.S. and J.B.B. acknowledge support from the grant IA-101522 (DGAPA-PAPIIT, UNAM) and funding from the CONACYT grant CF19-39578.

DATA AVAILABILITY

The optical parameters used in this study are taken from the CALIFA survey. These parameters are publicly available. Part of the molecular gas masses has been presented in Colombo et al. (2020). An updated version of that data set will be presented elsewhere and will be publicly available.

APPENDIX A. INTENSIVE SCALING RELATIONS

As mentioned in Sect. 2.5 of this study, we use the residuals of the scaling relations of the intensive properties (i.e., intensive scaling relations) to quantify the impact of interactions and mergers on central star formation. These intensive properties are obtained by dividing each extensive property (SFR, M*, M_mol) by the beam area, as described in Sect. 2.3. Furthermore, the scaling relations for these properties are quite similar regardless of the observed angular scale (i.e., for either integrated or spatially resolved properties; Sánchez et al. 2021). In each panel of Fig. A1 we show the scaling relations used in this study: the star formation main sequence (SFMS), the Schmidt-Kennicutt relation (SK), and the molecular gas main sequence (MGMS). Our best fits (red dashed lines) are in good agreement with those presented previously for a similar data set (solid black lines; Sánchez et al. 2021). In Table A1 we present the parameters of those best fits.

Figure A1. Distribution of explored galaxies in the Σ_SFR-Σ_* (left-hand panel), Σ_SFR-Σ_mol (middle panel), and Σ_mol-Σ_* (right-hand panel) diagrams. The red lines represent the best fits to the data in our sample (236 star-forming galaxies). The black lines represent the fits of these relations derived by Sánchez et al. (2021).

Table A1. Best-fit parameters for the intensive scaling relations (SFMS, SK, MGMS): β intercepts and α slopes. We also include the Spearman correlation coefficients ρ.
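The following is a minimal sketch, under stated assumptions, of the procedure described above: a log-linear relation of the form log Σ_y = α log Σ_x + β is fitted to intensive properties obtained by dividing each extensive property by the beam area, and the residuals used in the main text are the offsets from those fits. All numbers below (beam areas, slopes, scatters) are synthetic placeholders rather than the APEX-CALIFA measurements.

```python
# Minimal sketch of Appendix A: convert extensive to intensive properties,
# fit each scaling relation in log-log space, and take the residuals
# (Delta-SFMS, Delta-SK, Delta-MGMS). Synthetic data, not the survey values.
import numpy as np

def intensive(log_extensive, beam_area_kpc2):
    # Sigma = X / A_beam  ->  log Sigma = log X - log A_beam
    return log_extensive - np.log10(beam_area_kpc2)

def fit_and_residual(log_x, log_y):
    alpha, beta = np.polyfit(log_x, log_y, 1)   # least squares: y = alpha*x + beta
    return alpha, beta, log_y - (alpha * log_x + beta)

rng = np.random.default_rng(0)
n = 236                                          # sample size of this study
area = rng.uniform(15.0, 35.0, n)                # hypothetical beam areas [kpc^2]
log_mstar = rng.uniform(9.5, 11.5, n)            # log M* [Msun] (toy values)
log_mmol = 0.9 * log_mstar - 0.5 + rng.normal(0, 0.2, n)
log_sfr = log_mmol - 9.0 + rng.normal(0, 0.2, n)

ls, lm, lf = (intensive(x, area) for x in (log_mstar, log_mmol, log_sfr))
a_sfms, b_sfms, d_sfms = fit_and_residual(ls, lf)   # SFMS residual ~ sSFR offset
a_sk, b_sk, d_sk = fit_and_residual(lm, lf)         # SK residual   ~ SFE offset
a_mgms, b_mgms, d_mgms = fit_and_residual(ls, lm)   # MGMS residual ~ f_mol offset
print(f"SFMS: alpha = {a_sfms:.2f}, beta = {b_sfms:.2f}; "
      f"residual scatter = {d_sfms.std():.2f} dex")
```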
Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5, doi: 10.1086/130766
Barnes, A. T., Longmore, S. N., Battersby, C., et al. 2017, MNRAS, 469, 2263, doi: 10.1093/mnras/stx941
Barrera-Ballesteros, J. K., García-Lorenzo, B., Falcón-Barroso, J., et al. 2015, A&A, 582, A21, doi: 10.1051/0004-6361/201424935
Barrera-Ballesteros, J. K., Utomo, D., Bolatto, A. D., et al. 2020, MNRAS, 492, 2651, doi: 10.1093/mnras/stz3553
Barrera-Ballesteros, J. K., Sánchez, S. F., Heckman, T., et al. 2021, MNRAS, 503, 3643, doi: 10.1093/mnras/stab755
Bock, D. C.-J., Bolatto, A. D., Hawkins, D. W., et al. 2006, in Ground-based and Airborne Telescopes, ed. L. M. Stepp, Vol. 6267, International Society for Optics and Photonics (SPIE), 379-388, doi: 10.1117/12.674051
Bolatto, A. D., Wong, T., Utomo, D., et al. 2017, ApJ, 846, 159, doi: 10.3847/1538-4357/aa86aa
Cano-Díaz, M., Sánchez, S. F., Zibetti, S., et al. 2016, ApJL, 821, L26, doi: 10.3847/2041-8205/821/2/L26
Cano-Díaz, M., Ávila Reese, V., Sánchez, S. F., et al. 2019, MNRAS, 488, 3929, doi: 10.1093/mnras/stz1894
Christensen, L., Shida, R., & Martin, D. 2009, Cosmic Collisions: The Hubble Atlas of Merging Galaxies (Springer)
Colombo, D., Sanchez, S. F., Bolatto, A. D., et al. 2020, A&A, 644, A97, doi: 10.1051/0004-6361/202039005
Conselice, C. J., Bershady, M. A., Dickinson, M., & Papovich, C. 2003, AJ, 126, 1183, doi: 10.1086/377318
Ellison, S. L., Lin, L., Thorp, M. D., et al. 2021, MNRAS, 501, 4777, doi: 10.1093/mnras/staa3822
Ellison, S. L., Mendel, J. T., Patton, D. R., & Scudder, J. M. 2013, MNRAS, 435, 3627, doi: 10.1093/mnras/stt1562
Ellison, S. L., Patton, D. R., Mendel, J. T., & Scudder, J. M. 2011, MNRAS, 418, 2043, doi: 10.1111/j.1365-2966.2011.19624.x
Ellison, S. L., Patton, D. R., Simard, L., & McConnachie, A. W. 2008, AJ, 135, 1877, doi: 10.1088/0004-6256/135/5/1877
Ellison, S. L., Patton, D. R., Simard, L., et al. 2010, MNRAS, 407, 1514, doi: 10.1111/j.1365-2966.2010.17076.x
Ellison, S. L., Thorp, M. D., Pan, H.-A., et al. 2020, MNRAS, 492, 6027, doi: 10.1093/mnras/staa001
Federrath, C., & Klessen, R. S. 2012, ApJ, 761, 156, doi: 10.1088/0004-637X/761/2/156
Federrath, C., Rathborne, J. M., Longmore, S. N., et al. 2016, ApJ, 832, 143, doi: 10.3847/0004-637X/832/2/143
Güsten, R., Nyman, L. Å., Schilke, P., et al. 2006, A&A, 454, L13, doi: 10.1051/0004-6361:20065420
Henshaw, J. D., Barnes, A. T., Battersby, C., et al. 2022, arXiv e-prints, arXiv:2203.11223, doi: 10.48550/arXiv.2203.11223
Immer, K., Schuller, F., Omont, A., & Menten, K. M. 2012, A&A, 537, A121, doi: 10.1051/0004-6361/201117857
Kaneko, H., Kuno, N., Iono, D., et al. 2022, arXiv e-prints, arXiv:2201.02270. https://arxiv.org/abs/2201.02270
Kelz, A., Verheijen, M. A. W., Roth, M. M., et al. 2006, PASP, 118, 129, doi: 10.1086/497455
Kennicutt, R. C., & Evans, N. J. 2012, ARA&A, 50, 531, doi: 10.1146/annurev-astro-081811-125610
Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, ApJ, 556, 121, doi: 10.1086/321545
Kormendy, J. 2013, in Secular Evolution of Galaxies, ed. J. Falcón-Barroso & J. H. Knapen (Cambridge University Press), 1, doi: 10.48550/arXiv.1311.2609
Kormendy, J., & Kennicutt, R. C., Jr. 2004, ARA&A, 42, 603, doi: 10.1146/annurev.astro.42.053102.134024
Lacerda, E. A. D., Sánchez, S. F., Cid Fernandes, R., et al. 2020, MNRAS, 492, 3073, doi: 10.1093/mnras/staa008
Leroy, A. K., Walter, F., Bigiel, F., et al. 2009, AJ, 137, 4670, doi: 10.1088/0004-6256/137/6/4670
Li, C., Kauffmann, G., Heckman, T. M., Jing, Y. P., & White, S. D. M. 2008, MNRAS, 385, 1903, doi: 10.1111/j.1365-2966.2008.13000.x
Lin, L., Pan, H.-A., Ellison, S. L., et al. 2019, ApJL, 884, L33, doi: 10.3847/2041-8213/ab4815
Lin, L., Ellison, S. L., Pan, H.-A., et al. 2020, ApJ, 903, 145, doi: 10.3847/1538-4357/abba3a
Longmore, S. N., Bally, J., Testi, L., et al. 2013, MNRAS, 429, 987, doi: 10.1093/mnras/sts376
López-Cobá, C., Sánchez, S. F., Bland-Hawthorn, J., et al. 2018, MNRAS, 482, 4032, doi: 10.1093/mnras/sty2960
Moreno, J., Torrey, P., Ellison, S. L., et al. 2021, MNRAS, 503, 3113, doi: 10.1093/mnras/staa2952
Morokuma-Matsui, K., Sorai, K., Sato, Y., et al. 2020, PASJ, 72, 90, doi: 10.1093/pasj/psaa084
Morris, M., & Serabyn, E. 1996, ARA&A, 34, 645, doi: 10.1146/annurev.astro.34.1.645
Pan, H.-A., Lin, L., Hsieh, B.-C., et al. 2018, ApJ, 868, 132, doi: 10.3847/1538-4357/aaeb92
Patton, D. R., Ellison, S. L., Simard, L., McConnachie, A. W., & Mendel, J. T. 2011, MNRAS, 412, 591, doi: 10.1111/j.1365-2966.2010.17932.x
Patton, D. R., Grant, J. K., Simard, L., et al. 2005, AJ, 130, 2043, doi: 10.1086/491672
Patton, D. R., Pritchet, C. J., Yee, H. K. C., Ellingson, E., & Carlberg, R. G. 1997, ApJ, 475, 29, doi: 10.1086/303535
Patton, D. R., Torrey, P., Ellison, S. L., Mendel, J. T., & Scudder, J. M. 2013, MNRAS, 433, L59, doi: 10.1093/mnrasl/slt058
Roth, M. M., Kelz, A., Fechner, T., et al. 2005, PASP, 117, 620, doi: 10.1086/429877
Saintonge, A., Tacconi, L. J., Fabello, S., et al. 2012, ApJ, 758, 73, doi: 10.1088/0004-637X/758/2/73
Saintonge, A., Catinella, B., Cortese, L., et al. 2016, MNRAS, 462, 1749, doi: 10.1093/mnras/stw1715
Sánchez, S. F. 2020, ARA&A, 58, 99, doi: 10.1146/annurev-astro-012120-013326
Sánchez, S. F., García-Benito, R., Zibetti, S., et al. 2016a, A&A, 594, A36, doi: 10.1051/0004-6361/201628661
Sánchez, S. F., Pérez, E., Sánchez-Blázquez, P., et al. 2016b, RMxAA, 52, 21. https://arxiv.org/abs/1509.08552
Sánchez, S. F., Pérez, E., Sánchez-Blázquez, P., et al. 2016c, RMxAA, 52, 171. https://arxiv.org/abs/1602.01830
Sánchez, S. F., Barrera-Ballesteros, J. K., Colombo, D., et al. 2021, MNRAS, 503, 1615, doi: 10.1093/mnras/stab442
Sánchez, S. F., Kennicutt, R. C., Gil de Paz, A., et al. 2012, A&A, 538, A8, doi: 10.1051/0004-6361/201117353
Scudder, J. M., Ellison, S. L., Torrey, P., Patton, D. R., & Mendel, J. T. 2012, MNRAS, 426, 549, doi: 10.1111/j.1365-2966.2012.21749.x
Smith, B. J., Struck, C., Hancock, M., et al. 2007, AJ, 133, 791, doi: 10.1086/510350
Surace, J. A. 1998, PhD thesis, University of Hawaii, Manoa, Institute for Astronomy
Thorp, M. D., Ellison, S. L., Pan, H.-A., et al. 2022, MNRAS, doi: 10.1093/mnras/stac2288
Toomre, A. 1977, in Evolution of Galaxies and Stellar Populations, ed. B. M. Tinsley & R. B. Larson, 401
Ueda, J., Iono, D., Yun, M. S., et al. 2021, ApJS, 257, 57, doi: 10.3847/1538-4365/ac257a
Veilleux, S., Kim, D. C., & Sanders, D. B. 2002, ApJS, 143, 315, doi: 10.1086/343844
Violino, G., Ellison, S. L., Sargent, M., et al. 2018, MNRAS, 476, 2591, doi: 10.1093/mnras/sty345
York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513
| [] |
[
"Photonic Floquet skin-topological effect",
"Photonic Floquet skin-topological effect"
] | [
"Yeyang Sun 1# \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n",
"Hou Xiangrui ",
"Tuo 1# \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n",
"Wan \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n",
"Fangyu Wang \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n",
"Shiyao Zhu \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n\nHefei National Laboratory\n230088HefeiChina\n",
"Zhichao Ruan [email protected] \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n",
"Zhaoju Yang zhа[email protected]. \nSchool of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina\n"
] | [
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"Hefei National Laboratory\n230088HefeiChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina",
"School of Physics\nInterdisciplinary Center for Quantum Information\nZhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\nZhejiang Province310027HangzhouChina"
] | [] | Non-Hermitian skin effect and photonic topological edge states are of great interest in non-Hermitian physics and optics. However, the interplay between them is largly unexplored. Here, we propose and demonstrate experimentally the non-Hermitian skin effect that constructed from the nonreciprocal flow of Floquet topological edge states, which can be dubbed 'Floquet skin-topological effect'. We first show the non-Hermitian skin effect can be induced by pure loss when the onedimensional (1D) system is periodically driven. Next, based on a two-dimensional (2D) Floquet topological photonic lattice with structured loss, we investigate the interaction between the non-Hermiticity and the topological edge states. We observe that all the one-way edge states are imposed onto specific corners, featuring both the non-Hermitian skin effect and topological edge states. Furthermore, a topological switch for the skin-topological effect is presented by utilizing the gap-closing mechanism. Our experiment paves the way of realizing non-Hermitian topological effects in nonlinear and quantum regimes. | null | [
"https://export.arxiv.org/pdf/2306.03705v1.pdf"
] | 259,088,660 | 2306.03705 | 6d7331728831a420abf5dfb542459c4faab58f2e |
Photonic Floquet skin-topological effect
Yeyang Sun 1#
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Xiangrui Hou 1#
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Tuo Wan
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Fangyu Wang
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Shiyao Zhu
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Hefei National Laboratory
230088HefeiChina
Zhichao Ruan [email protected]
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Zhaoju Yang [email protected]
School of Physics
Interdisciplinary Center for Quantum Information
Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
Zhejiang Province310027HangzhouChina
Photonic Floquet skin-topological effect
#These authors contributed equally to this work *
Non-Hermitian skin effect and photonic topological edge states are of great interest in non-Hermitian physics and optics. However, the interplay between them is largely unexplored. Here, we propose and demonstrate experimentally the non-Hermitian skin effect that is constructed from the nonreciprocal flow of Floquet topological edge states, which can be dubbed the 'Floquet skin-topological effect'. We first show that the non-Hermitian skin effect can be induced by pure loss when the one-dimensional (1D) system is periodically driven. Next, based on a two-dimensional (2D) Floquet topological photonic lattice with structured loss, we investigate the interaction between the non-Hermiticity and the topological edge states. We observe that all the one-way edge states are imposed onto specific corners, featuring both the non-Hermitian skin effect and topological edge states. Furthermore, a topological switch for the skin-topological effect is presented by utilizing the gap-closing mechanism. Our experiment paves the way for realizing non-Hermitian topological effects in nonlinear and quantum regimes.
Topological insulators are a new phase of matter that is constituted by insulating bulk and conducting edges. They have been extensively explored in condensed matter physics [1,2], photonics [3][4][5][6][7][8][9][10][11], phononics [12][13][14], and so on. In photonics, shortly after the observation of topologically protected edge states in microwaves [4], the topological states in the optical frequency range relying on artificial gauge fields were experimentally realized [5,6]. One paradigmatic example is the photonic Floquet topological insulator [5] consisting of a honeycomb array of helical optical waveguides.
The periodic driving force results in the artificial gauge field and Floquet topological phases. The one-way topological edge states that are immune to backscattering were predicted and observed. The realizations of the topological states in classical-wave systems show potential in lasing [15][16][17][18] and quantum sources [19,20].
Characterized by complex eigenenergies and nonorthogonal eigenstates, non-Hermitian physics [21-23] governing systems interacting with the environment has led to many frontiers, such as PT-symmetric physics [24-34] and non-Hermitian topological phases [35-44]. Recently, the non-Hermitian skin effect (NHSE) [45-58] has drawn a lot of attention both in theory and experiment. The NHSE features the coalescence of the extended bulk states onto the edges of 1D systems, which can be well described by the non-Bloch band theory [48,51]. The interplay between the NHSE and the aforementioned topological states brings us the new concept of the hybrid skin-topological effect [59-61]. Different from the higher-order non-Hermitian skin effect [62-66], the coalescence of extended eigenstates in the skin-topological effect acts only on the topological edge modes [60,61]. Therefore, the number of skin-topological modes is proportional to the linear size of the system. So far, such an effect has only been realized by introducing asymmetric coupling into higher-order topological insulators [66], whereas the interaction between the one-way propagating topological edge states and the NHSE remains less explored.
In this work, we bridge this gap by adopting a 2D optical array of lossy helical waveguides and observe the photonic Floquet skin-topological effect. First, by introducing staggered loss into a 1D optical array of helical waveguides, we realize the Floquet non-Hermitian skin effect. Next, we pile up the 1D lattice and arrive at a 2D non-Hermitian Floquet topological insulator. The complex spectrum of the non-Hermitian system shows that the gapless unidirectional edge states spanning the topological band gap (corresponding to a nonzero Chern number of 1) can acquire a non-trivial point-gap winding topology [45,60,61,67], which indicates the existence of the NHSE induced by the nonreciprocal flow of these edge states. The sign of the winding number determines which corner of the sample is the topological funnel of light. Moreover, by introducing a large enough on-site energy difference and closing the Floquet topological band gap, a topological switch [61,68] for the Floquet skin-topological effect is demonstrated.
We start from a 1D optical array consisting of helical waveguides, as shown in Fig. 1a. This 1D optical array contains two sublattices A and B, and the sublattice B is endowed with considerable loss, as marked in blue. The paraxial propagation of light in this non-Hermitian system can be described by the tight-binding equation [5]:
$$ i\,\partial_z \psi_n = \sum_{\langle m,n\rangle} c\, e^{i\mathbf{A}(z)\cdot\mathbf{r}_{mn}}\,\psi_m - i\gamma_n\,\psi_n = \sum_m H_{nm}(z)\,\psi_m \tag{1} $$
where ψ_n is the electric field amplitude in the nth waveguide, c is the coupling strength between two nearest waveguides, γ_n is the loss rate of the nth waveguide, and r_mn is the displacement pointing from waveguide m to n. As shown below, the closed loop traced by the spectrum under periodic boundary conditions results in a non-trivial point-gap topology, which can be characterized by the winding number [45,69] for Floquet systems
$$ w = \int_0^{2\pi} \frac{dk}{2\pi}\, \partial_k \arg\left[\varepsilon(k) - \varepsilon_0\right] \tag{2} $$
where ε_0 is a reference quasienergy for the numerical calculations. The winding number for the 1D optical array is w = 1, indicating the existence of the NHSE.
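A numerical sketch of Eqs. (1)-(2) is given below. It assumes a two-sublattice zigzag chain whose two bonds acquire opposite Peierls phases under a sinusoidal stand-in for the helical drive (the amplitude F is an assumed, dimensionless parameter); it builds the one-period Bloch evolution operator, extracts the quasienergies, and evaluates the point-gap winding from the phase of det[U(k) − e^{−iε₀Z}], which winds together with ε(k) around ε₀. This is an illustrative toy model, not the authors' code, and the sign of w depends on orientation conventions.

```python
# Point-gap winding of a driven, lossy two-band chain (toy model).
import numpy as np
from scipy.linalg import expm

c, gamma, Z = 1.5, 0.2, 1.0          # coupling, loss [1/cm], period [cm] (Fig. 1)
F = 1.0                               # dimensionless drive amplitude (assumed)
Omega = 2 * np.pi / Z
nz, nk = 200, 201
zs = (np.arange(nz) + 0.5) * Z / nz

def U_period(k):
    """One-period evolution operator U(k) = T exp(-i int H(k,z) dz)."""
    U = np.eye(2, dtype=complex)
    for z in zs:
        theta = F * np.sin(Omega * z)             # Peierls phase A(z).r
        h = c * (np.exp(1j * theta) + np.exp(-1j * theta) * np.exp(-1j * k))
        H = np.array([[0, h], [np.conj(h), -1j * gamma]])  # loss on sublattice B
        U = expm(-1j * H * (Z / nz)) @ U
    return U

ks = np.linspace(0.0, 2 * np.pi, nk)
Us = [U_period(k) for k in ks]
# quasienergies: U phi = exp(-i*eps*Z) phi  ->  eps = (i/Z) log(eigvals)
eps = np.array([np.sort_complex(1j * np.log(np.linalg.eigvals(U)) / Z) for U in Us])

eps0 = eps[:, 0].mean()                # reference quasienergy: one band's centroid
lam0 = np.exp(-1j * eps0 * Z)
# winding of det[U(k) - lam0] tracks the winding of eps(k) around eps0
ang = np.unwrap([np.angle(np.linalg.det(U - lam0 * np.eye(2))) for U in Us])
w = (ang[-1] - ang[0]) / (2 * np.pi)
print(f"point-gap winding number w = {w:+.2f}")
```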
In experiments, we fabricate the 1D optical array of helical waveguides by utilizing the femtosecond laser writing method [5]. For comparison, we fabricate a 2D Hermitian photonic Floquet topological insulator with no structured loss. We reproduce similar experiments and observe that the initial tilted wave packet propagates counterclockwise along the outer perimeter and bypasses corner I (see methods for more details).
A topological switch [59,68] means that topology can switch the NHSE on and off through topological phase transitions. In our model, introducing a large enough on-site energy difference between the two sublattices (see methods for more details) drives a topological phase transition and closes the Floquet topological band gap. Without the topological edge states providing asymmetric coupling for the NHSE, the complex spectrum shows no point-gap winding, and therefore no Floquet skin-topological modes can be found. We adopt the same experimental philosophy as shown in Fig. 3. As we can see in Fig. 4a-c, the injected light penetrates into the bulk and does not accumulate at corner I. Therefore, the skin-topological effect is switched off.
This z-dependent equation describing paraxial light propagation can be mapped to a time-dependent Schrödinger equation, with the z axis playing the role of time. The periodic driving is equivalent to adding a time-dependent vector potential A(z) = A_0(cos(Ωz), −sin(Ωz), 0) to the optical array, where Ω = 2π/Z is the driving frequency. The distance between the two nearest waveguides is d = 15 μm and the lattice constant is a = 15√3 μm. The helix radius is R = 8 μm and the period is Z = 1 cm. For the above tight-binding model with a z-periodic Hamiltonian, the eigenstates can be calculated from the equation U(Z)φ = e^{−iεZ}φ, where U(Z) is the evolution operator for wavefunctions over one period and ε is the quasienergy of the Floquet system. The complex quasienergy spectrum of this 1D optical array under periodic boundary conditions (PBC) and open boundary conditions (OBC) is shown in Fig. 1b, as labeled by grey dotted curves and black dots, respectively. We can see that the complex spectrum under PBC forms a closed loop and drastically collapses into a line under OBC. The eigenfunctions displayed in Fig. 1b reveal that the eigenstates all localize at the left boundary, which is a direct result of the NHSE. The closed loop in the complex spectrum results in the non-trivial point-gap topology, characterized by the winding number defined in Eq. (2).
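For the open chain, the same toy model (same assumptions as in the previous sketch) can be used to construct U(Z) site by site and diagonalize it; for parameters in the regime quoted above, the Floquet eigenmodes should pile up toward one boundary, as in Fig. 1c.

```python
# OBC Floquet modes of the driven, lossy chain: solve U(Z) phi = exp(-i eps Z) phi
# and inspect where the modes live. Toy model with assumed drive amplitude F.
import numpy as np
from scipy.linalg import expm

N, c, gamma, Z, F = 40, 1.5, 0.2, 1.0, 1.0
Omega, nz = 2 * np.pi / Z, 200
dz = Z / nz

U = np.eye(N, dtype=complex)
for z in (np.arange(nz) + 0.5) * dz:
    theta = F * np.sin(Omega * z)
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        phase = theta if n % 2 == 0 else -theta   # zigzag bonds: opposite A(z).r
        H[n, n + 1] = c * np.exp(1j * phase)
        H[n + 1, n] = c * np.exp(-1j * phase)
    H[np.arange(1, N, 2), np.arange(1, N, 2)] = -1j * gamma  # lossy B sublattice
    U = expm(-1j * H * dz) @ U

vals, vecs = np.linalg.eig(U)                 # columns of vecs are Floquet modes
weight = np.abs(vecs) ** 2
com = (np.arange(N) @ weight) / weight.sum(axis=0)   # center of mass per mode
print(f"mean center of mass of all modes: {com.mean():.1f} (sites 0..{N - 1})")
```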
The optical loss in sublattice B is introduced by setting breaks periodically into the waveguides [70] (see methods for more details). To observe the 1D Floquet NHSE, a laser beam with a wavelength of 635 nm is initially launched into the center waveguide of the 1D array. We perform a series of measurements at different propagation lengths of z = 2, 4, 6, 8 and 10 cm. The results are shown in Fig. 1d. The white dashed circles mark the location of the input waveguide. As we can see, the light propagates continuously to the left, which indicates the collapse of the extended eigenstates onto the left boundary. The overall reduced light intensity with increasing propagation length is due to the passive setting of the optical array. This observation unravels the existence of the 1D NHSE induced by pure loss together with periodic driving, and provides a cornerstone for the exploration of the interplay between the NHSE and photonic topological edge states. Having observed the Floquet NHSE in a 1D array, we investigate the interaction between the non-Hermiticity and topological edge states. We pile up the 1D Floquet lattice composed of helical waveguides and arrive at a 2D photonic non-Hermitian Floquet topological insulator with a nonzero non-Hermitian Chern number [35,67] of C = 1. The structured loss is introduced into one sublattice of the honeycomb lattice. The non-trivial topology guarantees the existence of the topological boundary states when we consider a finite sample. The complex spectra for the two cases of x-PBC/y-OBC (periodic boundary along the x-axis and open boundary in the y-direction, as pointed out in panel d) and x-OBC/y-OBC are shown in Fig. 2a, c. Apart from the bulk states marked by grey dotted points, there exist two counter-propagating edge states localized at the opposite edges spanning the topological band gap in the complex spectrum, as displayed in Fig. 2a, b. The blue (red) dots correspond to the right (left)-propagating edge mode with relatively larger (negligible) loss. The complex spectral loop in the topological band gap reveals the non-trivial point-gap topology characterized by a winding number of w = 1, which results in the NHSE induced by the one-way edge flow of light. For the case of double OBC (x-OBC/y-OBC), the closed loop in the complex spectrum drastically collapses (Fig. 2c) and all non-Hermitian topological edge modes localize at the upper-left corner (labeled as corner I in Fig. 2d), whereas the bulk states stay extended. The solid (dashed) white arrows pointing along the propagating direction of the edge states correspond to the flow of light with negligible (large) loss. As a result, all the energy of the topological edge states accumulates at corner I, which indicates the existence of the so-called hybrid skin-topological modes. Note that if we add the loss configuration on the sublattice B instead, the point-gap winding will change to w = −1, which gives rise to skin-topological modes at corner III. To experimentally study the Floquet skin-topological effect, a 2D non-Hermitian honeycomb lattice of helical waveguides is fabricated. The breaks that generate relatively larger optical loss are introduced into the waveguides of sublattice A. To excite the topological edge modes, a broad tilted Gaussian beam with a momentum of k = π is initially launched into the outer perimeter of the sample. The white ellipse indicates the position and shape of the launched beam. The total propagation length in the sample is 10 cm (z-axis).
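The 1D measurement of Fig. 1d can likewise be mimicked within the same assumed toy model: launch all the light into the central waveguide and apply the one-period evolution operator once per centimeter of propagation; the intensity center of mass should then drift toward one boundary while the total power decays, mirroring the passive, lossy experiment.

```python
# Propagation sketch for the Fig. 1d experiment (toy model, assumed F).
import numpy as np
from scipy.linalg import expm

N, c, gamma, Z, F = 40, 1.5, 0.2, 1.0, 1.0
Omega, nz = 2 * np.pi / Z, 200
dz = Z / nz

# build the one-period evolution operator U(Z) of the open chain
U = np.eye(N, dtype=complex)
for z in (np.arange(nz) + 0.5) * dz:
    theta = F * np.sin(Omega * z)
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        phase = theta if n % 2 == 0 else -theta
        H[n, n + 1] = c * np.exp(1j * phase)
        H[n + 1, n] = c * np.exp(-1j * phase)
    H[np.arange(1, N, 2), np.arange(1, N, 2)] = -1j * gamma
    U = expm(-1j * H * dz) @ U

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                              # single-waveguide input
for z_cm in range(1, 11):                      # 10 cm total, as in the text
    psi = U @ psi                              # Z = 1 cm, so one U per cm
    inten = np.abs(psi) ** 2
    com = (np.arange(N) * inten).sum() / inten.sum()
    print(f"z = {z_cm:2d} cm: total power {inten.sum():.3f}, "
          f"center of mass at site {com:4.1f}")
```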
By moving the injected beam along the outer perimeter, we can see in Fig. 3a, b that the input light can propagate unidirectionally along the edge of IV-III, circumvent corner III (panel a), then propagate along the edge of III-II and circumvent the sharp corner II (panel b) without backscattering. However, the situation changes drastically when the input light encounters corner I. By moving the input Gaussian beam leftwards along the edge, as shown in Fig. 3c, we observe that the injected light propagates along the upper edge and stays trapped at corner I without penetrating into the bulk, which is direct evidence of the existence of the hybrid Floquet skin-topological modes. The observations in Fig. 3 together elucidate that the Floquet skin-topological effect features both the NHSE and topological protection of the propagating light.
In conclusion, we have experimentally demonstrated the Floquet NHSE in a 1D lossy optical array and the Floquet skin-topological effect in a 2D non-Hermitian photonic Floquet topological insulator. By introducing structured loss into the periodically driven optical waveguides, we have found the point-gap topology in the complex spectra of the 1D and 2D non-Hermitian photonic systems. In experiments, we have fabricated the optical lattices by the standard femtosecond laser-writing method and observed the topological funneling of light at the left boundary and at corner I of the 1D and 2D optical arrays, respectively. Moreover, a key to switching the skin-topological effect on/off has been realized by utilizing the topological phase transition. Our work investigates the interaction between the NHSE and photonic topological edge states and provides the first example of the NHSE in an optical Floquet topological insulator, which may pave the way for further exploration of non-Hermitian topological effects [71,72] in nonlinear [73-75] and quantum regimes [76,77].
Figure 1. Floquet NHSE in a 1D optical array. a, Schematic of a 1D non-Hermitian optical array consisting of helical waveguides. The color grey (blue) represents normal (lossy) waveguides. The optical loss is introduced by setting breaks, as can be seen in the blue waveguides. b, Complex quasienergy spectrum of the 1D optical array. The spectral loop under PBC indicates the existence of the point-gap topology. c, Floquet skin eigenmodes (grey curves) and the sum of the eigenmodes (red curve). We set 200 sites in the array for these numerical results. The parameters for the simulations are: coupling strength c = 1.5 cm⁻¹, optical loss γ = 0.2 cm⁻¹, lattice constant a = 15√3 μm, helix radius R = 8 μm and period Z = 1 cm. d, Experimental observation of the Floquet NHSE. The light shifts continuously to the left, indicating the collapse of the eigenstates onto the left boundary.
Figure 2. Hybrid skin-topological effect in a non-Hermitian 2D photonic Floquet topological insulator. a, b, Complex quasienergy spectrum under x-PBC/y-OBC. The blue (red) dots correspond to the right (left)-propagating edge mode with relatively larger (negligible) loss. The spectral loop in the topological band gap reveals the non-trivial point-gap topology, indicating the existence of the NHSE of light at the boundary. c, d, Complex quasienergy spectrum under x-OBC/y-OBC. The skin-topological modes (black dots) emerge and localize at the upper-left corner of the sample (corner I).
Figure 3. Experimental observation of the Floquet skin-topological effect. By moving the input tilted Gaussian beam along the outer perimeter, we observe the output light distribution at the end facet of the sample after 10 cm of propagation. a, The injected light propagates along the edge of IV-III and bypasses corner III without backscattering. b, The light propagates along the edge of III-II and bypasses the sharp corner II without backscattering. c, The light propagates along the edge of II-I and accumulates at corner I, which confirms the existence of the Floquet skin-topological modes.
Figure 4. Topological switch for the skin-topological effect. a-c, Introducing a large enough on-site energy difference of ∆ = 3 results in a topological phase transition and closes the Floquet topological band gap. In experiments, by moving the input tilted Gaussian beam leftwards along the upper edge, we can see that the injected light penetrates into the bulk. The skin-topological effect is switched off by the gap-closing mechanism.
M. Z. Hasan and C. L. Kane, Colloquium: Topological Insulators, Rev. Mod. Phys. 82, 3045 (2010).
X.-L. Qi and S.-C. Zhang, Topological Insulators and Superconductors, Rev. Mod. Phys. 83, 1057 (2011).
F. D. M. Haldane and S. Raghu, Possible Realization of Directional Optical Waveguides in Photonic Crystals with Broken Time-Reversal Symmetry, Phys. Rev. Lett. 100, 013904 (2008).
Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of Unidirectional Backscattering-Immune Topological Electromagnetic States, Nature 461, 772-775 (2009).
M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, Photonic Floquet Topological Insulators, Nature 496, 196-200 (2013).
M. Hafezi, S. Mittal, J. Fan, A. Migdall, and J. M. Taylor, Imaging Topological Edge States in Silicon Photonics, Nat. Photonics 7, 1001-1005 (2013).
A. B. Khanikaev, S. H. Mousavi, W.-K. Tse, M. Kargarian, A. H. MacDonald, and G. Shvets, Photonic Topological Insulators, Nat. Mater. 12, 233-239 (2013).
W.-J. Chen, S.-J. Jiang, X.-D. Chen, B. Zhu, L. Zhou, J.-W. Dong, and C. T. Chan, Experimental Realization of Photonic Topological Insulator in a Uniaxial Metacrystal Waveguide, Nat. Commun. 5, 5782 (2014).
L.-H. Wu and X. Hu, Scheme for Achieving a Topological Photonic Crystal by Using Dielectric Material, Phys. Rev. Lett. 114, 223901 (2015).
L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological Photonics, Nat. Photonics 8, 821-829 (2014).
T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, Topological Photonics, Rev. Mod. Phys. 91, 015006 (2019).
S. D. Huber, Topological Mechanics, Nat. Phys. 12, 621-623 (2016).
G. Ma, M. Xiao, and C. T. Chan, Topological Phases in Acoustic and Mechanical Systems, Nat. Rev. Phys. 1, 281-294 (2019).
H. Xue, Y. Yang, and B. Zhang, Topological Acoustics, Nat. Rev. Mater. 7, 974-990 (2022).
B. Bahari, A. Ndao, F. Vallini, A. El Amili, Y. Fainman, and B. Kanté, Nonreciprocal Lasing in Topological Cavities of Arbitrary Geometries, Science 358, 636-640 (2017).
M. A. Bandres, S. Wittek, G. Harari, M. Parto, J. Ren, M. Segev, D. N. Christodoulides, and M. Khajavikhan, Topological Insulator Laser: Experiments, Science 359, eaar4005 (2018).
Z.-K. Shao, H.-Z. Chen, S. Wang, X.-R. Mao, Z.-Q. Yang, S.-L. Wang, X.-X. Wang, X. Hu, and R.-M. Ma, A High-Performance Topological Bulk Laser Based on Band-Inversion-Induced Reflection, Nat. Nanotechnol. 15, 67-72 (2020).
Z. Yang, E. Lustig, G. Harari, Y. Plotnik, Y. Lumer, M. A. Bandres, and M. Segev, Mode-Locked Topological Insulator Laser Utilizing Synthetic Dimensions, Phys. Rev. X 10, 011059 (2020).
S. Mittal, E. A. Goldschmidt, and M. Hafezi, A Topological Source of Quantum Light, Nature 561, 502-506 (2018).
T. Dai et al., Topologically Protected Quantum Entanglement Emitters, Nat. Photonics 16, 248-257 (2022).
C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998).
Y. Ashida, Z. Gong, and M. Ueda, Non-Hermitian Physics, Adv. Phys. 69, 249-435 (2020).
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional Topology of Non-Hermitian Systems, Rev. Mod. Phys. 93, 015005 (2021).
A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D. N. Christodoulides, Observation of PT-Symmetry Breaking in Complex Optical Potentials, Phys. Rev. Lett. 103, 093902 (2009).
C. E. Rüter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, Observation of Parity-Time Symmetry in Optics, Nat. Phys. 6, 192-195 (2010).
L. Feng, Y. L. Xu, W. S. Fegadolli, M. H. Lu, J. E. B. Oliveira, V. R. Almeida, Y. F. Chen, and A. Scherer, Experimental Demonstration of a Unidirectional Reflectionless Parity-Time Metamaterial at Optical Frequencies, Nat. Mater. 12, 108-113 (2013).
H. Hodaei, M. A. Miri, M. Heinrich, D. N. Christodoulides, and M. Khajavikhan, Parity-Time-Symmetric Microring Lasers, Science 346, 975-978 (2014).
L. Feng, Z. J. Wong, R. M. Ma, Y. Wang, and X. Zhang, Single-Mode Laser by Parity-Time Symmetry Breaking, Science 346, 972-975 (2014).
B. Zhen, C. W. Hsu, Y. Igarashi, L. Lu, I. Kaminer, A. Pick, S.-L. Chua, J. D. Joannopoulos, and M. Soljačić, Spawning Rings of Exceptional Points out of Dirac Cones, Nature 525, 354-358 (2015).
S. Weimann, M. Kremer, Y. Plotnik, Y. Lumer, S. Nolte, K. G. Makris, M. Segev, M. C. Rechtsman, and A. Szameit, Topologically Protected Bound States in Photonic Parity-Time-Symmetric Crystals, Nat. Mater. 16, 433-438 (2017).
X. Ni, D. Smirnova, A. Poddubny, D. Leykam, Y. D. Chong, and A. B. Khanikaev, PT Phase Transitions of Edge States at PT Symmetric Interfaces in Non-Hermitian Topological Insulators, Phys. Rev. B 98, 165129 (2018).
R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, Non-Hermitian Physics and PT Symmetry, Nat. Phys. 14, 11-19 (2018).
Ş. K. Özdemir, S. Rotter, F. Nori, and L. Yang, Parity-Time Symmetry and Exceptional Points in Photonics, Nat. Mater. 18, 783-798 (2019).
M. A. Miri and A. Alù, Exceptional Points in Optics and Photonics, Science 363, eaar7709 (2019).
S. Yao, F. Song, and Z. Wang, Non-Hermitian Chern Bands, Phys. Rev. Lett. 121, 136802 (2018).
K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, Symmetry and Topology in Non-Hermitian Physics, Phys. Rev. X 9, 041015 (2019).
T. Liu, Y.-R. Zhang, Q. Ai, Z. Gong, K. Kawabata, M. Ueda, and F. Nori, Second-Order Topological Phases in Non-Hermitian Systems, Phys. Rev. Lett. 122, 076801 (2019).
X.-W. Luo and C. Zhang, Higher-Order Topological Corner States Induced by Gain and Loss, Phys. Rev. Lett. 123, 076801 (2019).
Z. Zhang, M. Rosendo López, Y. Cheng, X.-J. Liu, and J. Christensen, Non-Hermitian Sonic Second-Order Topological Insulator, Phys. Rev. Lett. 122, 195501 (2019).
H. Zhao, X. Qiao, T. Wu, B. Midya, S. Longhi, and L. Feng, Non-Hermitian Topological Light Steering, Science 365, 1163-1166 (2019).
F. E. Öztürk, T. Lappe, G. Hellmann, J. Schmitt, J. Klaers, F. Vewinger, J. Kroha, and M. Weitz, Observation of a Non-Hermitian Phase Transition in an Optical Quantum Gas, Science 372, 88-91 (2021).
B. Hu et al., Non-Hermitian Topological Whispering Gallery, Nature 597, 655-659 (2021).
K. Wang, A. Dutt, K. Y. Yang, C. C. Wojcik, J. Vučković, and S. Fan, Generating Arbitrary Topological Windings of a Non-Hermitian Band, Science 371, 1240-1245 (2021).
K. Wang, A. Dutt, C. C. Wojcik, and S. Fan, Topological Complex-Energy Braiding of Non-Hermitian Bands, Nature 598, 59-64 (2021).
S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018).
F. Song, S. Yao, and Z. Wang, Non-Hermitian Skin Effect and Chiral Damping in Open Quantum Systems, Phys. Rev. Lett. 123, 170401 (2019).
F. Song, S. Yao, and Z. Wang, Non-Hermitian Topological Invariants in Real Space, Phys. Rev. Lett. 123, 246801 (2019).
K. Yokomizo and S. Murakami, Non-Bloch Band Theory of Non-Hermitian Systems, Phys. Rev. Lett. 123, 066404 (2019).
Y. Yi and Z. Yang, Non-Hermitian Skin Modes Induced by On-Site Dissipations and Chiral Tunneling Effect, Phys. Rev. Lett. 125, 186802 (2020).
K. Zhang, Z. Yang, and C. Fang, Correspondence between Winding Numbers and Skin Modes in Non-Hermitian Systems, Phys. Rev. Lett. 125, 126402 (2020).
Z. Yang, K. Zhang, C. Fang, and J. Hu, Non-Hermitian Bulk-Boundary Correspondence and Auxiliary Generalized Brillouin Zone Theory, Phys. Rev. Lett. 125, 226402 (2020).
S. Weidemann, M. Kremer, T. Helbig, T. Hofmann, A. Stegmaier, M. Greiter, R. Thomale, and A. Szameit, Topological Funneling of Light, Science 368, 311-314 (2020).
L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian Bulk-Boundary Correspondence in Quantum Dynamics, Nat. Phys. 16, 761-766 (2020).
N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. Rev. Lett. 124, 086801 (2020).
T. Helbig, T. Hofmann, S. Imhof, M. Abdelghany, T. Kiessling, L. W. Molenkamp, C. H. Lee, A. Szameit, M. Greiter, and R. Thomale, Generalized Bulk-Boundary Correspondence in Non-Hermitian Topolectrical Circuits, Nat. Phys. 16, 747-750 (2020).
Q. Liang, D. Xie, Z. Dong, H. Li, H. Li, B. Gadway, W. Yi, and B. Yan, Dynamic Signatures of Non-Hermitian Skin Effect and Topology in Ultracold Atoms, Phys. Rev. Lett. 129, 070401 (2022).
H. Gao, H. Xue, Z. Gu, L. Li, W. Zhu, Z. Su, J. Zhu, B. Zhang, and Y. D. Chong, Anomalous Floquet Non-Hermitian Skin Effect in a Ring Resonator Lattice, Phys. Rev. B 106, 134112 (2022).
Z. Gu, H. Gao, H. Xue, J. Li, Z. Su, and J. Zhu, Transient Non-Hermitian Skin Effect, Nat. Commun. 13, 7668 (2022).
C. H. Lee, L. Li, and J. Gong, Hybrid Higher-Order Skin-Topological Modes in Nonreciprocal Systems, Phys. Rev. Lett. 123, 016805 (2019).
Y. Li, C. Liang, C. Wang, C. Lu, and Y.-C. Liu, Gain-Loss-Induced Hybrid Skin-Topological Effect, Phys. Rev. Lett. 128, 223903 (2022).
W. Zhu and J. Gong, Hybrid Skin-Topological Modes without Asymmetric Couplings, Phys. Rev. B 106, 035425 (2022).
K. Kawabata, M. Sato, and K. Shiozaki, Higher-Order Non-Hermitian Skin Effect, Phys. Rev. B 102, 205118 (2020).
R. Okugawa, R. Takahashi, and K. Yokomizo, Second-Order Topological Non-Hermitian Skin Effects, Phys. Rev. B 102, 241202(R) (2020).
Y. Fu, J. Hu, and S. Wan, Non-Hermitian Second-Order Skin and Topological Modes, Phys. Rev. B 103, 045420 (2021).
X. Zhang, Y. Tian, J.-H. Jiang, M.-H. Lu, and Y.-F. Chen, Observation of Higher-Order Non-Hermitian Skin Effect, Nat. Commun. 12, 5377 (2021).
D. Zou, T. Chen, W. He, J. Bao, C. H. Lee, H. Sun, and X. Zhang, Observation of Hybrid Higher-Order Skin-Topological Effect in Non-Hermitian Topolectrical Circuits, Nat. Commun. 12, 7201 (2021).
Topological Band Theory for Non-Hermitian Hamiltonians. H Shen, B Zhen, L Fu, Phys. Rev. Lett. 120146402H. Shen, B. Zhen, and L. Fu, Topological Band Theory for Non-Hermitian Hamiltonians, Phys. Rev. Lett. 120, 146402 (2018).
Topological Switch for Non-Hermitian Skin Effect in Cold-Atom Systems with Loss. L Li, C H Lee, J Gong, Phys. Rev. Lett. 124250402L. Li, C. H. Lee, and J. Gong, Topological Switch for Non-Hermitian Skin Effect in Cold- Atom Systems with Loss, Phys. Rev. Lett. 124, 250402 (2020).
Topological Phases of Non-Hermitian Systems. Z Gong, Y Ashida, K Kawabata, K Takasan, S Higashikawa, M Ueda, Phys. Rev. X. 831079Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Topological Phases of Non-Hermitian Systems, Phys. Rev. X 8, 031079 (2018).
Experimental Realization of a Weyl Exceptional Ring. A Cerjan, S Huang, M Wang, K P Chen, Y Chong, M C Rechtsman, Nat. Photonics. 13A. Cerjan, S. Huang, M. Wang, K. P. Chen, Y. Chong, and M. C. Rechtsman, Experimental Realization of a Weyl Exceptional Ring, Nat. Photonics 13, 623-628 (2019).
Universal Non-Hermitian Skin Effect in Two and Higher Dimensions. K Zhang, Z Yang, C Fang, Nat. Commun. 132496K. Zhang, Z. Yang, and C. Fang, Universal Non-Hermitian Skin Effect in Two and Higher Dimensions, Nat. Commun. 13, 2496 (2022).
T Wan, K Zhang, J Li, Z Yang, Z Yang, ArXiv 2303.11109Observation of Dynamical Degeneracy Splitting for the Non-Hermitian Skin Effect. T. Wan, K. Zhang, J. Li, Z. Yang, and Z. Yang, Observation of Dynamical Degeneracy Splitting for the Non-Hermitian Skin Effect, ArXiv 2303.11109 (2023).
. L J Maczewsky, Nonlinearity-Induced Photonic Topological Insulator. 370ScienceL. J. Maczewsky et al., Nonlinearity-Induced Photonic Topological Insulator, Science 370, 701-704 (2020).
Observation of Floquet Solitons in a Topological Bandgap. S Mukherjee, M C Rechtsman, Science. 368S. Mukherjee and M. C. Rechtsman, Observation of Floquet Solitons in a Topological Bandgap, Science 368, 856-859 (2020).
S Xia, D Kaltsas, D Song, I Komis, J Xu, A Szameit, H Buljan, K G Makris, Z Chen, Nonlinear Tuning of PT Symmetry and Non-Hermitian Topological States. 372S. Xia, D. Kaltsas, D. Song, I. Komis, J. Xu, A. Szameit, H. Buljan, K. G. Makris, and Z. Chen, Nonlinear Tuning of PT Symmetry and Non-Hermitian Topological States, Science 372, 72-76 (2021).
S Barik, A Karasahin, C Flower, T Cai, H Miyake, W Degottardi, M Hafezi, E Waks, A Topological Quantum Optics Interface. 359S. Barik, A. Karasahin, C. Flower, T. Cai, H. Miyake, W. DeGottardi, M. Hafezi, and E. Waks, A Topological Quantum Optics Interface, Science 359, 666-668 (2018).
Observing the Quantum Topology of Light. J Deng, Science. 378J. Deng et al., Observing the Quantum Topology of Light, Science 378, 966-971 (2022).
| [] |
[
"REFINED PROPERTIES OF THE HD 130322 PLANETARY SYSTEM",
"REFINED PROPERTIES OF THE HD 130322 PLANETARY SYSTEM"
] | [
"Natalie R Hinkel ",
"Stephen R Kane ",
"Gregory W Henry ",
"Y Katherina Feng ",
"Tabetha Boyajian ",
"Jason Wright ",
"Debra A Fischer ",
"Andrew W Howard "
] | [] | [] | Exoplanetary systems closest to the Sun, with the brightest host stars, provide the most favorable opportunities for characterization studies of the host star and their planet(s). The Transit Ephemeris Refinement and Monitoring Survey uses both new radial velocity measurements and photometry in order to greatly improve planetary orbit uncertainties and the fundamental properties of the star, in this case HD 130322. The only companion, HD 130322b, orbits in a relatively circular orbit, e = 0.029 every ∼10.7 days. Radial velocity measurements from multiple sources, including 12 unpublished from the Keck I telescope, over the course of ∼14 years have reduced the uncertainty in the transit midpoint to ∼2 hours. The transit probability for the b-companion is 4.7%, where M p sin i = 1.15 M J and a = 0.0925 AU. In this paper, we compile photometric data from the T11 0.8m Automated Photoelectric Telescope at Fairborn Observatory taken over ∼14 years, including the constrained transit window, which results in a dispositive null result for both full transit exclusion of HD 130322b to a depth of 0.017 mag and grazing transit exclusion to a depth of ∼0.001 mag. Our analysis of the starspot activity via the photometric data reveals a highly accurate stellar rotation period: 26.53±0.70 days. In addition, the brightness of the host with respect to the comparison stars is anti-correlated with the Ca II H and K indices, typical for a young solar-type star. | 10.1088/0004-637x/803/1/8 | [
"https://arxiv.org/pdf/1502.03441v1.pdf"
] | 12,657,851 | 1502.03441 | 5a8db798d0e45f10d7a5b4c11cc51331872e5bd7 |
REFINED PROPERTIES OF THE HD 130322 PLANETARY SYSTEM
11 Feb 2015. Draft version February 13, 2015.
Natalie R Hinkel
Stephen R Kane
Gregory W Henry
Y Katherina Feng
Tabetha Boyajian
Jason Wright
Debra A Fischer
Andrew W Howard
REFINED PROPERTIES OF THE HD 130322 PLANETARY SYSTEM
11 Feb 2015. Draft version February 13, 2015. Preprint typeset using LaTeX style emulateapj v. 5/2/11. Subject headings: planetary systems - techniques: photometric - techniques: radial velocities - stars: individual (HD 130322)
Exoplanetary systems closest to the Sun, with the brightest host stars, provide the most favorable opportunities for characterization studies of the host star and their planet(s). The Transit Ephemeris Refinement and Monitoring Survey uses both new radial velocity measurements and photometry in order to greatly improve planetary orbit uncertainties and the fundamental properties of the star, in this case HD 130322. The only companion, HD 130322b, orbits in a relatively circular orbit, e = 0.029 every ∼10.7 days. Radial velocity measurements from multiple sources, including 12 unpublished from the Keck I telescope, over the course of ∼14 years have reduced the uncertainty in the transit midpoint to ∼2 hours. The transit probability for the b-companion is 4.7%, where M p sin i = 1.15 M J and a = 0.0925 AU. In this paper, we compile photometric data from the T11 0.8m Automated Photoelectric Telescope at Fairborn Observatory taken over ∼14 years, including the constrained transit window, which results in a dispositive null result for both full transit exclusion of HD 130322b to a depth of 0.017 mag and grazing transit exclusion to a depth of ∼0.001 mag. Our analysis of the starspot activity via the photometric data reveals a highly accurate stellar rotation period: 26.53±0.70 days. In addition, the brightness of the host with respect to the comparison stars is anti-correlated with the Ca II H and K indices, typical for a young solar-type star.
INTRODUCTION
Studying the ephemerides, or orbital parameters, of nearby planets is one of the oldest sub-fields in astronomy. It took nearly 1500 years for a new celestial model to supplant Ptolemy's stationary, geocentric system specifically because, through the use of a number of clever maneuvers (equants, epicycles, and deferents), he was able to accurately predict the motion of the Solar System planets (Gingerich 1997). And it was precisely because Copernicus' heliocentric model did not predict accurate planetary phenomena that nearly 100 years passed before the theory was generally accepted by the scientific community. With the purpose of improving orbital uncertainties and fundamental properties of the host star, the Transit Ephemeris Refinement and Monitoring Survey (TERMS) team seeks to characterize individual nearby planetary systems (Kane et al. 2009).
The giant planet around HD 130322, a K0 star, was first detected by Udry et al. (2000) using the radial velocity (RV) technique via the CORALIE echelle spectrograph. They reported a period of P = 10.720 ± 0.007 days, m sin i = 1.02 M_J, and an eccentricity of e = 0.044 ± 0.018. The planet was later confirmed by Butler et al. (2006), who observed an additional 12 RV measurements using Keck and determined a period of P = 10.70875 ± 0.00094 days, m sin i = 1.089 M_J, and an eccentricity of e = 0.025 ± 0.032. The giant planet was further observed by Wittenmyer et al. (2009) using both the Hobby-Eberly Telescope (HET) and the Harlan J. Smith telescope. They found that the combination of all four datasets produced a large root-mean-square (rms) variability of 14.8 m s−1 and an anomalous periodicity at 35 days, due specifically to the CORALIE data. Their orbital parameters, without the RV measurements by Udry et al. (2000), were P = 10.7085 ± 0.0003 days, m sin i = 1.04 ± 0.03 M_J, and e = 0.011 ± 0.020. Because of the small eccentricity, Trilling (2000) was not able to put a lower bound on the mass of the planet as determined by tidal constraints, only an upper limit of 43.8 M_J. Observations using the Spitzer infrared spectrograph by Dodson-Robinson et al. (2011) resulted in the detection of a debris disk around the host star.
Here we present our complete RV data set from a number of sources (including those mentioned above in addition to previously unpublished measurements from the 10m Keck I telescope) that has a time baseline of ∼14 years, discussed in §2. The analysis of the Keplerian orbital solution of these data produce refined orbital ephemeris for the host star HD 130322, with a predicted transit depth of 1.57%, and 1σ transit window of 0.329 days ( §3). In §4, we determine the differential magnitude of the host star with respect to multiple comparison stars in order to better understand seasonal and nightly brightness fluctuations. The evaluation of starspot variability ( §4.1) allows us to calculate a stellar rotation period, while understanding the stellar magnetic activity ( §4.2) gives us insight into the age of HD 130322.
HOST STAR PROPERTIES
The HD 130322 system has been monitored via the RV technique several times in the past. We provide a complete RV data set that consists of 118 measurements acquired with CORALIE at the 1.2m Euler-Swiss telescope (Udry et al. 2000), 35 measurements acquired with the 2.7m Harlan J. Smith telescope and the High Resolution Spectrograph (HRS) on the HET (Wittenmyer et al. 2009), and 24 measurements acquired with the HIRES echelle spectrometer on the 10m Keck I telescope , the most recent 12 of which are previously unpublished. ‡ ‡ We use this combined data set (shown in Table 1) in order to calculate the fundamental properties of the star as well as the Keplerian orbital solution of the planet.
Fundamental Parameters
We derive the host star properties by fitting the high resolution HIRES, HRS, and CORALIE data with Spectroscopy Made Easy (Valenti & Piskunov 1996), or SME, via the wavelength intervals, line data, and technique of Valenti & Fischer (2005).
We applied the revised Hipparcos parallaxes (van Leeuwen 2007) to the Valenti et al. (2009) methodology as well as surface gravity from a Yonsei-Yale stellar structure model (Demarque et al. 2004). As a result, the fundamental stellar parameters are listed in Table 2, where V magnitude and distance were determined by Hipparcos, B − V from Tycho-2, while effective temperature, surface gravity, projected rotational velocity, stellar mass, and stellar radius were a result of SME. The high precision of the stellar radius, namely R* = 0.85 ± 0.04 R⊙, is important when determining the depth and duration of a possible planetary transit. As a comparison to the SME result, the stellar radius was determined using the Torres relation (Torres et al. 2010): R* = 0.88 ± 0.04 R⊙. We have also conducted an empirical surface brightness calculation per Boyajian et al. (2014), which averages the V − J, V − H, and V − K surface brightness relations, resulting in an angular diameter of 0.252 ± 0.006 mas. Folding in the parallax and error, the radius is 0.89 ± 0.04 R⊙. All three of these techniques show a very strong consensus for both the stellar radius and error of the host star. The iron abundance, [Fe/H], as determined by SME, as well as other element abundances, will be discussed in §2.2.
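To make the geometry behind the surface-brightness radius estimate explicit, the short sketch below converts the quoted angular diameter and Hipparcos distance into a linear radius. This is our illustration rather than code from the paper; the input values come from the text and the conversion constants are standard.

```python
import math

# Central values quoted in the text
theta_mas = 0.252      # angular diameter from the V-J, V-H, V-K relations (mas)
distance_pc = 31.54    # Hipparcos distance (pc)

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # milliarcseconds -> radians
KM_PER_PC = 3.0857e13
KM_PER_RSUN = 6.957e5

# Linear radius R = (theta / 2) * d, converted to solar radii
radius_rsun = 0.5 * theta_mas * MAS_TO_RAD * distance_pc * KM_PER_PC / KM_PER_RSUN
print(f"R* ~ {radius_rsun:.2f} R_sun")  # ~0.85 R_sun for these central values
```

Propagating the quoted parallax and angular-diameter uncertainties through the same expression reproduces the few-percent error bar on the radius.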
While our results are consistent with a typical K-type star (Boyajian et al. 2012), we note the differences between the stellar properties in Udry et al. (2000), namely their Table 1, and our Table 2. We have used the updated Hipparcos (van Leeuwen 2007) catalog, which may account for the varying B − V and distance determinations; they cited B − V = 0.781 and a distance of 29.76 pc. Our effective temperature, 5387 ± 44 K, is also +50 K above their referenced temperatures.
Stellar Abundances
Stellar abundances have been measured for HD 130322 by at least a dozen different groups, for example Valenti & Fischer (2005), Neves et al. (2009), and Delgado Mena et al. (2010). Due to the proximity of the host star to the Sun, 31.54 pc, a wide variety of elements have been measured within HD 130322, from α-type to neutron-capture. Using the same analysis as seen in the Hypatia Catalog (Hinkel et al. 2014), we renormalized the abundance measurements for each dataset to the same solar scale (Lodders et al. 2009) and then determined the maximum measurement variation between the groups, or the spread, to quantify the consistency of the abundances. When analyzing the abundances in the Hypatia Catalog, element measurements were only considered where the spread was less than the respective error bar associated with that element, or where group-to-group variations were small, and then the median value was used. For HD 130322, [Fe/H] = 0.12 dex; however, the spread between the groups was 0.23 dex. In other words, the iron ratio was not agreed upon by the various groups, where the renormalized Ecuvillon et al. (2004) measurement was [Fe/H] = 0.04 dex and the renormalized Bodaghee et al. (2003) measurement was [Fe/H] = 0.27 dex. For this reason, HD 130322 was not included in the analysis (or reduced version) of the Hypatia Catalog. Per the SME analysis, [Fe/H] = 0.07 ± 0.03 dex, using the discussion and solar abundance scale in Valenti & Fischer (2005). The renormalization gives [Fe/H] = 0.11 dex, which is very close to the median value found for the other data sets in Hypatia.
There were a number of other elements within the star that were measured by different groups. Per the Hypatia analysis, where the spread in the abundances was less than the respective error and the median value taken: [N/Fe] = −0.14 dex, [Al/Fe] = −0.12 dex, [S/Fe] = −0.14 dex, [Cu/Fe] = −0.15 dex, [Sr/Fe] = 0.07 dex, [YII/Fe] = −0.09 dex, [BaII/Fe] = −0.06 dex, [Ce/Fe] = 0.07 dex, [CeII/Fe] = −0.05 dex, and [EuII/Fe] = −0.18 dex. In general, we find that the majority of abundances well measured in HD 130322 are significantly sub-solar. Despite measuring [Fe/H] = 0.07 ± 0.03 via SME, not much can be said conclusively about the overall [Fe/H] content, given the large spread in the abundances determined by different methods.
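The Hypatia-style consistency cut described above is simple to state in code. The sketch below is a minimal rendering of the idea (the actual catalog pipeline is more involved); the three [Fe/H] values are the renormalized group measurements quoted in the text, used here as an illustrative subset.

```python
def renormalize(abund, solar_old, solar_new):
    """Shift [X/H] values between solar normalizations:
    [X/H]_new = [X/H]_old + log eps_sun(old) - log eps_sun(new)."""
    return {el: a + solar_old[el] - solar_new[el] for el, a in abund.items()}

def spread_and_median(values):
    """Group-to-group spread and median; an element is kept only if the
    spread is below the typical error bar for that element."""
    v = sorted(values)
    spread = v[-1] - v[0]
    mid = len(v) // 2
    median = v[mid] if len(v) % 2 else 0.5 * (v[mid - 1] + v[mid])
    return spread, median

# Renormalized [Fe/H] measurements (dex) quoted in the text:
spread, median = spread_and_median([0.04, 0.12, 0.27])
print(spread, median)  # 0.23 dex spread exceeds the error bar; median 0.12 dex
```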
KEPLERIAN ORBIT AND TRANSIT EPHEMERIS
We fit a Keplerian orbital solution to the RV data (shown in Table 1) using the partially linearized, least-squares fitting procedure described in Wright & Howard (2009), with parameter uncertainties estimated using the BOOTTRAN bootstrapping routines from Wang et al. (2012). The resulting Keplerian orbital solution is shown in Table 3, where the stellar parameters for the host star described in Section 2 and summarized in Table 2 were used to determine the minimum mass and semi-major axis of the planet. The phased data and residuals to the fit are shown in Fig. 1. We find the offsets to be 24.3, 24.7, −27.2, and −23.6 m s−1 for data from the 2.7m McDonald Observatory telescope (2.7m), CORALIE, HIRES (pre-2004, i.e., BJD prior to 13005.5), and HIRES (post-2004), respectively. Regarding the CORALIE data, the median velocity value was subtracted from the data and the velocities were converted from km/s to m/s, which were then used to calculate the offsets with respect to the HET's HRS data. The fit including the CORALIE data has χ²_red = 1.35 and RMS = 14.6 m s−1. Without the CORALIE data, these numbers change to 1.46 and 8.67, respectively. However, the time baseline of the CORALIE data significantly improves the determination of the orbital period, so our fit includes these data for the subsequent analysis. The lack of a linear trend over a long period of time in Fig. 1 opens up the possibility to constrain the presence of additional companions in the HD 130322 system. If m sin i is the "minimum mass" (accounting for inclination), then we can also consider the minimum value that m sin i could possibly have given a linear trend that persists over time (the "minimum minimum mass"), which is described in detail in Feng et al. (2015). Given that we only have an upper limit on a trend, we have measured the maximum value that the minimum m sin i could take, or the "maximum minimum minimum mass" (M_mmm). We use BOOTTRAN to find the 1σ maximal value of the linear velocity over ∼14 years, where |dv/dt| = 0.0047 m/s/day = 1.716 m/s/yr. In addition to the values from Table 2, we employ Equation 1 in Feng et al. (2015) to calculate M_mmm = 1.83 M_J as an upper bound for a possible additional companion in the HD 130322 system.
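For readers who want to reproduce the model being fit, a single-planet Keplerian RV curve can be written compactly. The sketch below is a generic implementation (not the authors' fitting code or BOOTTRAN), evaluated at the Table 3 parameters.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e sin E by Newton iteration."""
    M = np.mod(M, 2.0 * np.pi)
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, K, e, omega_deg, Tp, gamma=0.0):
    """Single-planet radial-velocity model; omega in degrees."""
    w = np.radians(omega_deg)
    M = 2.0 * np.pi * (t - Tp) / P          # mean anomaly
    E = kepler_E(M, e)                      # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return gamma + K * (np.cos(nu + w) + e * np.cos(w))

# One full orbit at the fitted parameters (Table 3, BJD - 2440000):
t = np.linspace(13996.4, 13996.4 + 10.70871, 5)
print(rv_model(t, P=10.70871, K=112.5, e=0.029, omega_deg=193.0, Tp=13996.4))
```

In the actual analysis the orbital parameters and per-telescope offsets are solved for simultaneously, and the uncertainties come from bootstrap resampling of the residuals.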
Finally, we use the revised orbital properties of the planet described above to derive the predicted transit properties. The predicted time of mid-transit produced by the Keplerian orbital solution is Tc = 16745.594 ± 0.085 (see Table 3). Since the orbit is close to circular, the eccentricity has a negligible effect on the transit properties (Kane & von Braun 2008). Using the mass-radius relationship of Kane & Gelino (2012), we adopt a radius of the planet of Rp = 1.0 RJ. These combined parameters for the planet result in a transit probability of 4.7%, a predicted transit duration of 0.16 days, and a predicted transit depth of 1.57%. The size of the 1σ transit window is 0.329 days, which can be adequately monitored in a single night of observations (Kane et al. 2009).
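As a sanity check on these numbers, the predicted transit quantities follow from simple circular-orbit geometry. The sketch below uses the stellar radius from Table 2, the period from Table 3, and the semi-major axis quoted in the abstract; small differences from the quoted 4.7% and 1.57% reflect rounding and the adopted value of the Jupiter radius.

```python
import numpy as np

R_SUN_AU = 0.00465047          # solar radius in AU
R_star = 0.85                  # R_sun (Table 2)
R_p = 0.1005                   # 1.0 R_Jup expressed in R_sun
a, P = 0.0925, 10.70871        # AU, days

x = (R_star + R_p) * R_SUN_AU / a
prob = x                                  # geometric transit probability, e ~ 0
depth = (R_p / R_star) ** 2               # fractional flux loss
duration = P / np.pi * np.arcsin(x)       # central transit, circular orbit

print(f"probability {prob:.1%}, depth {depth:.2%}, duration {duration:.2f} d")
```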
PHOTOMETRIC OBSERVATIONS
Between January 2, 2001 and June 26, 2014, 1569 observations were obtained for HD 130322 at Fairborn Observatory in Arizona using the T11 0.80m APT. The APT is able to determine the differential brightness of the primary star HD 130322 (P: V = 8.04, B−V = 0.781, K0V) with respect to three comparison stars: HD 130557 (C1: V = 6.15, B−V = −0.02, B9V), HD 129755 (C2: V = 7.58, B−V = 0.41, F2), and HD 132932 (C3: V = 7.74, B−V = 0.40, F2). In the initial 2001 observing season, we found that our original comparison star C3 was a low-amplitude variable, so we replaced it with HD 132932 in 2002. Therefore, we have only two comparison stars, C1 and C2, in common for all 14 observing seasons, whereas seasons 2-14 have in common the three comparison stars given above. Like the other telescopes operated on site by Tennessee State University, the Strömgren b and y bands are separated and concurrently measured by a photometer with two-channel precision, a dichroic filter, and two EMI 9124QB bi-alkali photomultiplier tubes (Henry 1999).
We compute the six permutations of the differential magnitudes of the four stars in a combinatorial fashion, namely P − C1, P − C2, P − C3, C3 − C1, C3 − C2, and C2 − C1. The magnitudes are then corrected for extinction due to the atmosphere and transformed to the Strömgren system, such that the differential b and y observations are combined into a single (b + y)/2 band, indicated with the subscript by. To achieve the maximum possible precision, we also combine the three comparison stars to determine the differential magnitudes of HD 130322 with respect to the mean brightness of the comparison stars. The precision of the individual differential magnitudes P − (C1 + C2 + C3)/3 by ranges between ∼0.0010 mag and ∼0.0015 mag on clearer nights, as determined from the nightly scatter of the comparison stars. Further details can be found in Henry (1999), Eaton et al. (2003), and references therein.
The 1470 individual P − (C1 + C2 + C3)/3 by differential magnitudes computed from the 13 observing seasons (2002-2014) are plotted in the top panel of Fig. 2. The observations are normalized so that all 13 seasons have the same mean as the first season, 2002, indicated by the horizontal line in the top panel, to remove season-to-season variability in HD 130322 caused by a possible starspot cycle (see below). The normalized nightly observations scatter about their grand mean of 1.02748 mag with a standard deviation of σ = 0.00331 mag, which is more than a factor of two larger than the ∼0.0010 to ∼0.0015 mag measurement precision and suggests that HD 130322 exhibits nightly low-amplitude variability.
The normalized differential magnitudes from the last 13 observing seasons are shown in Fig. 2 (middle panel), where they are phased with the planetary 10.7 day orbital period and the mid-transit time (Tc) given in Table 3. A fit using a least-squares sinusoid provides a photometric semi-amplitude of 0.00023 ± 0.00011 mag and places a one milli-magnitude (0.001 mag) upper bound on the brightness variability of the host star. In addition, per similar results found in Queloz et al. (2001), Paulson et al. (2004), and Boisse et al. (2012), we dismiss the possibility that jitter-induced stellar activity can account for the 10.7-day RV fluctuations. The constancy of the photometric measurements reveals that the planetary reflex motion seen in the RV variations of HD 130322 is a result of the orbiting planet.
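The phasing and sine-fit test above is straightforward to reproduce. Below is a minimal sketch (our own helper functions, not the authors' code) of how a photometric semi-amplitude at the orbital period would be measured from observation times t and magnitudes mag.

```python
import numpy as np

def phase_fold(t, P, t0):
    """Orbital phase with the mid-transit time t0 at phase 0."""
    return np.mod((t - t0) / P, 1.0)

def sine_semi_amplitude(phase, mag):
    """Least-squares fit of mag ~ a0 + a1 cos(2 pi phase) + b1 sin(2 pi phase);
    the semi-amplitude of the sinusoid is sqrt(a1^2 + b1^2)."""
    X = np.column_stack([np.ones_like(phase),
                         np.cos(2 * np.pi * phase),
                         np.sin(2 * np.pi * phase)])
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    return float(np.hypot(coef[1], coef[2]))
```

Applied to the 1470 normalized magnitudes phased on P = 10.70871 days, this is the quantity reported above as 0.00023 ± 0.00011 mag.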
A closer view of the predicted transit window is shown in the bottom panel of Fig. 2, plotted with an expanded abscissa. Similar to the middle panel, the solid curve shows the predicted central transit, phased at 0.0, for a duration of 0.16 days (∼0.015 units of phase) and a depth of 1.57% (∼0.017 mag). These values were determined using the stellar radius (Table 2) and orbital ephemeris of the planet (Table 3). The ±1σ uncertainty in the transit window timing, as determined by the error bars for both the stellar radius (Table 2) and the improved orbital ephemeris (Table 3), is indicated by the vertical dotted lines. There are 1405 observations that lie outside the predicted transit window, which have a mean of 1.027490 ± 0.000089 mag. The 65 observations that fell within the transit window have a mean of 1.027278 ± 0.000303 mag. The difference in these two light levels is our "observed transit depth," −0.00021 ± 0.00032 mag; the two means are consistent to four decimal places. Thus, we are able to rule out full transits, a dispositive null result, with a predicted depth near 0.017 mag, and also grazing transits near the predicted time to a depth of ∼0.001 mag.
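The in-transit versus out-of-transit comparison amounts to a difference of means with the errors added in quadrature; a sketch (our own helper, with in_window a boolean mask over the photometry):

```python
import numpy as np

def observed_depth(mag, in_window):
    """Mean magnitude inside minus outside the predicted transit window."""
    inside, outside = mag[in_window], mag[~in_window]
    depth = inside.mean() - outside.mean()
    err = np.hypot(inside.std(ddof=1) / np.sqrt(inside.size),
                   outside.std(ddof=1) / np.sqrt(outside.size))
    return depth, err
```

With the 65 in-window and 1405 out-of-window points this evaluates to −0.00021 ± 0.00032 mag, i.e., no detectable transit.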
Starspot Analysis
Given that the scatter of 0.00331 mag in the normalized data set of Fig. 2 is significantly larger than the observation precision, we suspect low-amplitude, night-to-night starspot variability in HD 130322. Inspection of the top panel of Fig. 2 reveals differences in the amount of scatter from year to year that could also be caused by starspot activity. A solar-type star's rotation period may be determined from the rotational modulation of starspots on the photosphere by measuring the variation in stellar brightness, per Simpson et al. (2010). In addition, starspots can mimic an orbiting planet by generating periodic RV fluctuations. Therefore, to determine the behavior of potential starspots, we analyzed all 13 seasons of normalized photometry using a periodogram analysis. While low-amplitude (0.002-0.017 mag) periodic brightness fluctuations were found during each season, there was no unusual periodicity in the yearly C2 − C1 comparison-star magnitudes. The frequency spectrum for the penultimate 2013 observing season is shown in the top panel of Figure 3, while the phase curve is given in the bottom panel.
Results of our complete seasonal period analyses are given in Table 4. Here we include the first observing season from 2001, in which comparison star 3 was later found to be variable and was replaced for the subsequent seasons, as previously mentioned. The period analysis of season 1 (2001) is therefore based on differential magnitudes computed as P − (C1 + C2)/2 by. Another minor caveat should be noted for the 2004 and 2011 observing seasons. In both cases, the first harmonic of the rotation period was found to have a slightly higher peak than the rotation period. For these two seasons, we estimated the rotation periods and their uncertainties by doubling both values. The mean of the 12 rotation periods, excluding 2004 and 2011, is 26.53 days. The individual rotation periods scatter about that mean with a standard deviation of 2.44 days, and the standard deviation of the mean is 0.70 days. Therefore, we determine that 26.53 ± 0.70 days is our most accurate calculation of the star's rotation period, which matches well with the rotation period derived by Simpson et al. (2010), 26.1 ± 3.5 days, from the first six seasons of our APT data set. We note the significantly smaller uncertainty in our determination due to the greatly extended 14 year baseline. We found typical starspot filling factors of one percent or less (Column 6 of Table 4), where the peak-to-peak amplitudes for each season range from 0.002-0.017 mag. Because the stellar rotation period of 26.53 days is more than a factor of two from the 10.7 day planetary RV period and its harmonics, and given the small impact from starspots, we find that the 10.7-day planet signal could not be the result of stellar activity.
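A standard way to extract each season's rotation period is a periodogram; the sketch below (our illustration, using astropy's Lomb-Scargle implementation) also reproduces the summary statistics computed from the Table 4 periods.

```python
import numpy as np
from astropy.timeseries import LombScargle

def season_rotation_period(t, mag, pmin=2.0, pmax=60.0):
    """Best photometric period (days) for one observing season."""
    freq, power = LombScargle(t, mag).autopower(
        minimum_frequency=1.0 / pmax, maximum_frequency=1.0 / pmin)
    return 1.0 / freq[np.argmax(power)]

# Seasonal rotation periods from Table 4, excluding 2004 and 2011:
p_rot = np.array([23.0, 24.0, 29.0, 29.0, 24.2, 24.3,
                  28.5, 30.7, 26.1, 25.4, 26.5, 27.7])
print(p_rot.mean())                      # 26.53 d
print(p_rot.std(ddof=1))                 # 2.44 d
print(p_rot.std(ddof=1) / np.sqrt(12))   # 0.70 d, the adopted uncertainty
```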
Examining the stellar rotation with respect to inclination, we first assume that the inclination of the rotation axis for HD 130322 is close to 90°, in which case the stellar radius (0.85 R⊙) and v sin i (0.5 km s−1) predict a stellar rotation period of ∼85.5 days. Because this value is over three times longer than our observed value of Prot = 26.53 days, the logical conclusion is that the stellar rotation axis must have a low inclination and, consequently, a low planetary transit probability. However, if we substitute the literature value of v sin i = 1.61 km s−1, the predicted rotation period is 26.55 days, identical to our observed rotation period within the uncertainties. This implies a very high inclination near 90° and, therefore, a high probability of transits. Unfortunately, our photometry unambiguously rules them out.
Magnetic Activity
To look for evidence of magnetic cycles in HD 130322, we analyze the variability in the Ca II H and K indices, both proxies for stellar magnetic activity (Baliunas et al. 1995; Lockwood et al. 2007), and the APT photometry over the entire observing period. These magnetic cycles could potentially resemble the period of a long-period planet within the RV data. At the top of Fig. 4, we show the seasonal means of the Mount Wilson S-index as determined from the Keck I RV spectra, described in Wright et al. (2004) and Isaacson & Fischer (2010). While we do not have the Keck H and K (or RV) measurements for all 13 of our photometric observing seasons, there is notable variability on the order of several years. The middle two panels show the seasonal mean of HD 130322 (P) varying with respect to the two comparison stars (C1 and C2) throughout the 14-year observations without normalization (Table 4). The horizontal dotted line indicates the standard deviation of each seasonal mean as compared to the grand mean, given numerically in the lower right corner. The range in magnitude of the seasonal means is printed in the lower left corner. The brightness curves in the middle of Fig. 4 show HD 130322 varying on the order of several mmag with respect to both of the comparison stars. Similarly, the yearly average of the comparison stars, C2 − C1, is given in the bottom panel, with a very small standard deviation of 0.0005 mag. Since the comparison stars demonstrate photometric stability over the 13 observing seasons, the fluctuations seen in the middle two panels must be intrinsic to the host star HD 130322.
The variations in the H and K indices and the APT observations plotted in Fig. 4 appear to be cyclic. Analyses of the yearly means for the Ca II indices, P − C1, and P − C2 using a least-squares, sine-fit periodogram result in the same periods to within uncertainty: 5.22 ± 0.16, 5.19 ± 0.20, and 5.12 ± 0.22 yr, respectively. These amplitudes and timescales for HD 130322 are similar to previously monitored long-term cycles of solar-type stars (see Henry 1999; Lockwood et al. 2007; Hall et al. 2009). Figure 4 reveals that HD 130322's brightness variability is anti-correlated with the strength of its H and K emission, as is common among young, lower-main-sequence dwarfs. For example, Lockwood et al. (2007) demonstrate the difference in behavior between young, solar-type stars with light curves dominated by dark spots and old solar-type stars with light curves dominated by bright faculae. In the young stars, photometric variability exhibits an inverse (or negative) correlation with chromospheric activity. In older stars, brightness variability and chromospheric activity exhibit a direct (or positive) correlation. Our Sun exhibits a clear direct correlation between total solar irradiance and Ca II H and K emission (see Fig. 2 in Lockwood et al. 2007). Lockwood et al. (2007) estimate the dividing line between spot-dominated and faculae-dominated brightness variations to be around log R′HK = −4.7. The original discovery paper of HD 130322b by Udry et al. (2000) quoted a log R′HK value of −4.39 from Santos et al. (2001), corresponding to an age of only 0.35 Gyr and a rotation period of approximately 9 days, according to the calibrations in Wright et al. (2004). This predicted rotation period is much shorter than our observed Prot = 26.53 days. By way of comparison, Wright et al. (2008) demonstrate that the correlation between chromospheric activity and stellar brightness in the G8 dwarf HD 154345 is positive, despite its similar properties to HD 130322. However, the log R′HK value of HD 154345 is −4.91 compared to −4.78 for HD 130322, which shows it to be much older than HD 130322, so that a positive correlation is expected.
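The spot- versus faculae-dominated distinction rests on the sign of the activity-brightness correlation over the seasonal means, which is just a Pearson coefficient. A sketch of the test (our own helper; the seasonal S-index values themselves are not tabulated in this paper, so only the call pattern is indicated):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two seasonal-mean series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Note the sign convention: a larger differential magnitude means a fainter
# star, so pass the negated seasonal means, e.g. pearson_r(s_index, -p_c1).
# A spot-dominated star like HD 130322 should give r < 0 (anti-correlation),
# while a faculae-dominated star like HD 154345 gives r > 0.
```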
CONCLUSIONS
Accurately determining the properties of planetary systems is extremely important as we move towards characterizing the thousands of exoplanets that are now confirmed. It is only through understanding the host star that we are able to precisely measure the properties of the orbiting planet(s), which fuels both dynamical formation and evolution models. Through the Transit Ephemeris Refinement and Monitoring Survey (TERMS), we have studied HD 130322 because of the extensive RV coverage offered by HIRES, HRS, and CORALIE over the last ∼14 years. The new and combined data have allowed us to determine a highly precise stellar radius of 0.85 ± 0.04 R⊙, resulting in an updated Keplerian orbital solution that significantly constrains the orbital dynamics of the b planet. Through an extensive ∼14 year photometric baseline at the APT, we carefully monitored the planetary phase during the predicted transit window, and the photometry did not reveal any long-term variability of the host star due to the presence of a companion. The HD 130322b planet had a transit probability of 4.7% at a depth of 1.57%. Significant observations during the predicted transit window yielded a dispositive null result, excluding a full transit to a depth of 0.017 mag and grazing transits to ∼0.001 mag. We were able to quantify the stellar rotation period with unprecedented accuracy (26.53 ± 0.70 days) by using the extensive photometric coverage. The variation in differential magnitudes between the target and reference stars, as compared to the Mt. Wilson S-indices, also allowed us to better understand the stellar magnetic activity. The characterization of the HD 130322 planetary system was only possible through the coming together of collaborators and techniques, such that we were able to greatly improve the ephemeris of this system. The TERMS project consistently and systematically provides accurate characterization of bright, nearby planetary systems, forwarding the understanding of exoplanets and their host stars in general.
Fig. 1. - The Keplerian orbital solution using all of the data shown in Table 1, resulting in the fit parameters shown in Table 3. The typical internal error bars for each data point are plotted. Left: RV data phased on the best-fit solution, where the origin of the data is indicated by the different symbols shown in the figure. Right: Residual velocities with respect to the fitted orbital solution, with the same symbols as the left panel.
Fig. 2. - Top: the 1470 differential magnitudes (P − (C1 + C2 + C3)/3 by) for HD 130322, taken with the 0.8m APT from 2002-2014, where all 13 observing seasons are normalized to standardize the yearly average. Middle: Observations phased to the planet's ephemeris. The orbital phase curve semi-amplitude is 0.00023 ± 0.00011 mag, fit with a least-squares sine, which confirms the b planet with the lack of periodic light variability in the host star. Bottom: A zoomed-in portion of the middle plot, centered on the transit midpoint. The solid curve shows the predicted central transit, with a depth of 1.57% (0.017 mag) and duration of 0.16 days (0.015 units of phase). The vertical dotted lines show the ±1σ transit window. Transits are excluded to a depth of 0.017 mag while grazing transits are excluded to a depth of ∼0.001 mag.
Fig. 3. - Top: the 91 differential magnitudes (P − (C1 + C2 + C3)/3 by) for HD 130322, taken during the 2013 season. Middle: A frequency spectrum of the 2013 observing season of HD 130322, with the best frequency at 0.03773 ± 0.00036 cycles per day (c/d). Bottom: the 2013 seasonal observations phased with the corresponding best rotational period of 26.53 ± 0.70 days. The peak-to-peak amplitude of 0.011 mag shows coherent variability, which may be due to the rotational modulation of starspots, and low noise around the trend. All 14 observing seasons exhibit similar modulation (see Table 4).
Fig. 4. - Top: The fluctuations in the Mt. Wilson S-index during the 13 observing seasons. Upper middle: Brightness of the primary target with respect to the C1 comparison star, measured with Keck I and the T11 APT. Lower middle: Brightness of the primary target with respect to the C2 comparison star. Bottom: Differential magnitudes of the comparison stars, which show stability to 0.0005 mag. The small variability in the two comparison stars, while the primary shows significant fluctuation, and the perfect anti-correlation with seasonal mean brightness indicate that the variability in the light curves is intrinsic to HD 130322. The inverse correlation of S-index and brightness is typical of young, solar-type stars.
‡‡ Based on observations obtained at the W. M. Keck Observatory, which is operated jointly by the University of California and the California Institute of Technology. Keck time has been granted by the University of Hawaii, the University of California, Caltech, and NASA.
TABLE 1
Radial Velocities Measured for HD 130322

BJD − 2440000   RV (m/s)   ±1σ (m/s)   Tel
11755.76855   −54.8    1.0   HIRES
11984.06010   −102.7   1.4   HIRES
12065.94426   −55.6    1.4   HIRES
12127.80910   77.8     1.3   HIRES
12128.76352   44.8     1.4   HIRES
12162.72653   −60.7    1.4   HIRES
12335.12452   −127.8   1.3   HIRES
12488.77933   −24.9    1.5   HIRES
12683.09362   54.9     1.4   HIRES
12805.91934   −94.5    1.4   HIRES
13153.85287   42.5     1.3   HIRES
-----------------------------------
13426.11560   −43.4    1.1   HIRES
13842.00573   48.6     1.1   HIRES
15351.81672   75.5     1.1   HIRES
15636.05652   −90.5    1.1   HIRES
15673.83685   25.2     1.2   HIRES
15700.79799   −59.4    1.5   HIRES
15700.80123   −62.7    1.3   HIRES
15734.89254   56.2     1.2   HIRES
15789.75142   79.1     1.3   HIRES
15961.16166   86.0     1.2   HIRES
16000.02899   −102.0   1.2   HIRES
16075.79698   −51.6    1.2   HIRES
16451.82279   23.7     1.2   HIRES
13585.64900   83.7     7.5   2.7m
13843.89253   −18.0    7.5   2.7m
13863.78301   75.5     8.6   2.7m
13910.78043   −68.5    8.1   2.7m
14251.84318   −72.8    9.4   2.7m
13471.80558   −99.8    7.2   HRS
13481.88526   −106.9   6.6   HRS
13486.85864   105.1    6.4   HRS
13488.75815   72.2     5.9   HRS
13509.79117   101.3    6.1   HRS
13512.78123   −65.6    5.1   HRS
13527.74971   27.0     6.2   HRS
13542.69985   55.4     5.6   HRS
13543.70614   −4.9     6.1   HRS
13550.70420   105.5    6.1   HRS
13837.89677   −12.6    5.9   HRS
13842.88880   27.6     6.3   HRS
13868.80896   −78.8    5.7   HRS
13882.78043   83.0     6.0   HRS
13897.72683   −44.2    6.1   HRS
13900.72079   −85.4    6.0   HRS
13936.63557   110.4    6.6   HRS
14122.01834   −12.7    6.8   HRS
14128.00335   47.0     6.7   HRS
14135.98084   −113.2   6.5   HRS
14139.97029   89.9     7.2   HRS
14140.96840   98.1     6.1   HRS
14144.96962   −99.4    6.6   HRS
14157.01611   −112.4   6.8   HRS
14158.92425   −40.9    6.8   HRS
14163.92465   26.3     6.7   HRS
14168.90656   −71.6    6.5   HRS
14173.98269   69.7     7.4   HRS
14176.87914   −90.6    5.7   HRS
14191.92631   20.5     6.2   HRS
11257.85195   99.0     9.0   CORALIE
11267.80486   39.0     9.0   CORALIE
11267.81699   45.0     9.0   CORALIE
11273.86307   2.0      9.0   CORALIE
11287.71123   −52.0    9.0   CORALIE
11287.72337   −68.0    9.0   CORALIE
11291.77019   137.0    9.0   CORALIE
11294.82378   44       15    CORALIE
11295.76291   −31.0    9.0   CORALIE
11296.60937   −72      10    CORALIE
11296.83574   −86      10    CORALIE
11297.61667   −86      10    CORALIE
11297.82760   −99      10    CORALIE
11298.60898   −71      10    CORALIE
11298.83006   −52      10    CORALIE
11299.60924   −18.0    9.0   CORALIE
11299.82827   2        10    CORALIE
11300.60428   53.0     9.0   CORALIE
11300.82617   69       10    CORALIE
11301.59933   103      10    CORALIE
11301.81846   119.0    9.0   CORALIE
11302.73856   105      12    CORALIE
11304.74462   71       11    CORALIE
11305.79957   −1       10    CORALIE
11306.70099   −58      10    CORALIE
11307.78007   −129     14    CORALIE
11307.79214   −122     14    CORALIE
11308.77008   −131     10    CORALIE
11308.78216   −134     10    CORALIE
11309.74470   −52      11    CORALIE
11309.75677   −47      10    CORALIE
11310.65574   13       10    CORALIE
11310.66781   5        10    CORALIE
11311.76937   51       34    CORALIE
11312.71231   130.0    9.0   CORALIE
11313.70809   123.0    9.0   CORALIE
11314.71380   100      12    CORALIE
11315.65132   46       24    CORALIE
11316.70334   −21      14    CORALIE
11317.73465   −81      11    CORALIE
11318.71336   −95      13    CORALIE
11319.67233   −73.0    9.0   CORALIE
11320.73422   −37      11    CORALIE
11320.74629   −40      11    CORALIE
11335.68044   141      14    CORALIE
11335.69251   122      14    CORALIE
11336.59905   73       14    CORALIE
11336.61110   84       13    CORALIE
11339.65310   −100     17    CORALIE
11339.66512   −122     17    CORALIE
11340.65091   −109     12    CORALIE
11340.66296   −109     12    CORALIE
11342.62586   −16      11    CORALIE
11342.63788   0        10    CORALIE
11355.64903   119      10    CORALIE
11364.57708   43       11    CORALIE
11366.58262   146      11    CORALIE
11367.59944   124      11    CORALIE
11368.57076   83       11    CORALIE
11369.60470   41       15    CORALIE
11369.61675   49       15    CORALIE
11370.51479   −7       15    CORALIE
11370.52684   −46      17    CORALIE
11373.52546   −66      27    CORALIE
11373.53762   −103     27    CORALIE
11374.54843   2        18    CORALIE
11374.56049   −14      17    CORALIE
11375.55068   47       11    CORALIE
11375.56276   57       11    CORALIE
11376.51940   58       21    CORALIE
11376.53149   71       19    CORALIE
11380.59301   19       15    CORALIE
11380.60510   −14      16    CORALIE
11381.51359   −46.0    9.0   CORALIE
11381.52568   −47.0    9.0   CORALIE
11382.48455   −85      10    CORALIE
11382.49669   −78      10    CORALIE
11382.56236   −83      11    CORALIE
11382.57447   −87      11    CORALIE
11383.53612   −71      10    CORALIE
11383.54681   −88      10    CORALIE
11384.54932   −71.0    9.0   CORALIE
11384.56142   −62      10    CORALIE
11385.54622   12       10    CORALIE
11385.55834   10       11    CORALIE
11386.54337   86       10    CORALIE
11386.55545   74       11    CORALIE
11388.53150   132      14    CORALIE
11388.54362   145      14    CORALIE
11389.49681   127      14    CORALIE
11389.50891   121      12    CORALIE
11390.46734   73       11    CORALIE
11390.47944   81       11    CORALIE
11391.46814   5.0      9.0   CORALIE
11391.48030   11.0     9.0   CORALIE
11392.52992   −48.0    9.0   CORALIE
11392.54196   −38      10    CORALIE
11393.51766   −80      10    CORALIE
11393.52971   −91      10    CORALIE
11394.47208   −68      11    CORALIE
11394.48410   −78      11    CORALIE
11395.47053   −25      17    CORALIE
11395.48250   −42      11    CORALIE
11397.47145   88       20    CORALIE
11397.48921   90       15    CORALIE
11398.47062   128.0    9.0   CORALIE
11398.48258   121.0    9.0   CORALIE
11399.47201   139      10    CORALIE
11400.48982   118      10    CORALIE
11401.46724   63       11    CORALIE
11402.48058   −1       17    CORALIE
11403.47387   −52      12    CORALIE
11404.47239   −109     12    CORALIE
11405.47319   −46      22    CORALIE
11406.48267   −6       21    CORALIE
11412.48141   28       11    CORALIE
11412.49345   35       11    CORALIE
11424.49242   −61      20    CORALIE

Note. - The horizontal line separates the pre-2004 and post-2004 HIRES data.
TABLE 2
Stellar Parameters

Parameter           Value           Source
V                   8.04            Hipparcos
B − V               −0.16           Tycho-2
Distance (pc)       31.54 ± 1.18    Hipparcos
T_eff (K)           5387 ± 44       SME
log g               4.52 ± 0.06     SME
v sin i (km s−1)    0.5 ± 0.5       SME
M* (M⊙)             0.92 ± 0.03     SME
R* (R⊙)             0.85 ± 0.04     SME
TABLE 3
Keplerian Fit Parameters

Parameter              Value
P (days)               10.70871 ± 0.00018
Tc^a (JD − 2440000)    16745.594 ± 0.085
Tp^b (JD − 2440000)    13996.4 ± 1.1
e                      0.029 ± 0.016
K (m s−1)              112.5 ± 2.4
ω (deg)                193 ± 36
χ²_red                 1.35
RMS (m s−1)            14.60

a Time of transit.
b Time of periastron passage.
TABLE 4
Summary of Photometric Observations for HD 130322

Observing  N_obs  Julian Date Range  Sigma   P_rot         Full Amplitude    <P − C1>          <P − C2>          <C2 − C1>
Season            (HJD − 2,400,000)  (mag)   (days)        (mag)             (mag)             (mag)             (mag)
(1)        (2)    (3)                (4)     (5)           (6)               (7)               (8)               (9)
2001        99    51912-52076        0.0034  23.0 ± 0.2    0.0046 ± 0.0009   2.1189 ± 0.0004   0.5608 ± 0.0004   1.5580 ± 0.0001
2002       230    52288-52462        0.0019  24.0 ± 0.2    0.0022 ± 0.0003   2.1193 ± 0.0002   0.5619 ± 0.0002   1.5574 ± 0.0001
2003        79    52645-52816        0.0039  29.0 ± 0.4    0.0067 ± 0.0010   2.1244 ± 0.0005   0.5664 ± 0.0005   1.5580 ± 0.0002
2004        84    53010-53189        0.0042  31.8 ± 0.2a   0.0049 ± 0.0012   2.1253 ± 0.0005   0.5675 ± 0.0005   1.5578 ± 0.0002
2005        69    53379-53551        0.0029  29.0 ± 0.3    0.0052 ± 0.0009   2.1243 ± 0.0004   0.5668 ± 0.0004   1.5575 ± 0.0002
2006        68    53742-53913        0.0020  24.2 ± 0.3    0.0028 ± 0.0006   2.1205 ± 0.0003   0.5625 ± 0.0003   1.5580 ± 0.0002
2007        90    54104-54282        0.0025  24.3 ± 0.2    0.0037 ± 0.0006   2.1195 ± 0.0003   0.5621 ± 0.0003   1.5574 ± 0.0002
2008        98    54475-54637        0.0034  28.5 ± 0.4    0.0081 ± 0.0007   2.1254 ± 0.0004   0.5681 ± 0.0004   1.5574 ± 0.0001
2009        81    54839-55003        0.0042  30.7 ± 0.4    0.0060 ± 0.0012   2.1261 ± 0.0005   0.5679 ± 0.0005   1.5582 ± 0.0002
2010        93    55201-55382        0.0069  26.1 ± 0.3    0.0169 ± 0.0014   2.1278 ± 0.0007   0.5692 ± 0.0007   1.5586 ± 0.0002
2011        89    55570-55738        0.0031  24.8 ± 0.1a   0.0040 ± 0.0008   2.1229 ± 0.0003   0.5650 ± 0.0003   1.5580 ± 0.0002
2012        86    55930-56098        0.0019  25.4 ± 0.2    0.0019 ± 0.0006   2.1200 ± 0.0002   0.5634 ± 0.0003   1.5566 ± 0.0002
2013        91    56302-56470        0.0046  26.5 ± 0.3    0.0110 ± 0.0008   2.1249 ± 0.0006   0.5677 ± 0.0005   1.5573 ± 0.0002
2014        99    56659-56834        0.0029  27.7 ± 0.2    0.0042 ± 0.0008   2.1256 ± 0.0003   0.5681 ± 0.0003   1.5576 ± 0.0002

a Periodogram analysis gave half of the quoted period, suggesting that the star had spots on both hemispheres at those epochs. We doubled the photometric periods and their errors in these cases to get P_rot.
ACKNOWLEDGEMENTS
The authors would like to thank Howard Isaacson and Geoff Marcy in recognition of their time spent observing the S-indices. N.R.H. would like to acknowledge financial support from the National Science Foundation through grant AST-1109662. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. G.W.H. acknowledges long-term support from NASA, NSF, Tennessee State University, and the State of Tennessee through its Centers of Excellence program. T.S.B. acknowledges support provided through NASA grant ADAP12-0172. J.T.W. and Y.K.F. acknowledge support from the National Science Foundation through grant AST-1211441. A.W.H. would like to thank the many observers who contributed to the measurements reported here and gratefully acknowledges the efforts and dedication of the Keck Observatory staff. Finally, we extend special thanks to those of Hawai'ian ancestry on whose sacred mountain of Maunakea we are privileged to be guests. Without their generous hospitality, the Keck observations presented herein would not have been possible.
REFERENCES
Baliunas, S. L., et al. 1995, ApJ, 438, 269
Bodaghee, A., Santos, N. C., Israelian, G., & Mayor, M. 2003, A&A, 404, 715
Boisse, I., Bonfils, X., & Santos, N. C. 2012, A&A, 545, A109
Boyajian, T. S., van Belle, G., & von Braun, K. 2014, AJ, 147, 47
Boyajian, T. S., et al. 2012, ApJ, 757, 112
Butler, R. P., et al. 2006, ApJ, 646, 505
Delgado Mena, E., Israelian, G., González Hernández, J. I., Bond, J. C., Santos, N. C., Udry, S., & Mayor, M. 2010, ApJ, 725, 2349
Demarque, P., Woo, J.-H., Kim, Y.-C., & Yi, S. K. 2004, ApJS, 155, 667
Dodson-Robinson, S. E., Beichman, C. A., Carpenter, J. M., & Bryden, G. 2011, AJ, 141, 11
Eaton, J. A., Henry, G. W., & Fekel, F. C. 2003, in Astrophysics and Space Science Library, Vol. 288, ed. T. D. Oswalt, 189
Ecuvillon, A., Israelian, G., Santos, N. C., Mayor, M., Villar, V., & Bihain, G. 2004, A&A, 426, 619
Feng, Y. K., Wright, J. T., Nelson, B., Wang, S., Ford, E., Marcy, G. W., Isaacson, H., & Howard, A. W. 2015, ApJ, accepted (arXiv:1501.00633)
Gingerich, O. 1997, The Eye of Heaven (Springer)
Hall, J. C., Henry, G. W., Lockwood, G. W., Skiff, B. A., & Saar, S. H. 2009, AJ, 138, 312
Henry, G. W. 1999, PASP, 111, 845
Hinkel, N. R., Timmes, F. X., Young, P. A., Pagano, M. D., & Turnbull, M. C. 2014, AJ, 148, 54
Isaacson, H., & Fischer, D. 2010, ApJ, 725, 875
Kane, S. R., & Gelino, D. M. 2012, PASP, 124, 323
Kane, S. R., Mahadevan, S., von Braun, K., Laughlin, G., & Ciardi, D. R. 2009, PASP, 121, 1386
Kane, S. R., & von Braun, K. 2008, ApJ, 689, 492
Lockwood, G. W., Skiff, B. A., Henry, G. W., Henry, S., Radick, R. R., Baliunas, S. L., Donahue, R. A., & Soon, W. 2007, ApJS, 171, 260
Lodders, K., Palme, H., & Gail, H.-P. 2009, Landolt-Börnstein, Group VI, Vol. 4B, 44
Neves, V., Santos, N. C., Sousa, S. G., Correia, A. C. M., & Israelian, G. 2009, A&A, 497, 563
Paulson, D. B., Saar, S. H., Cochran, W. D., & Henry, G. W. 2004, AJ, 127, 1644
Queloz, D., et al. 2001, A&A, 379, 279
Santos, N. C., Mayor, M., Naef, D., Pepe, F., Queloz, D., & Udry, S. 2001, in ASP Conf. Ser. 223, 11th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, ed. R. J. Garcia Lopez, R. Rebolo, & M. R. Zapatero Osorio, 1562
Simpson, E. K., Baliunas, S. L., Henry, G. W., & Watson, C. A. 2010, MNRAS, 408, 1666
Torres, G., Andersen, J., & Giménez, A. 2010, A&A Rev., 18, 67
Trilling, D. E. 2000, ApJL, 537, L61
Udry, S., et al. 2000, A&A, 356, 590
Valenti, J. A., & Fischer, D. A. 2005, ApJS, 159, 141
Valenti, J. A., & Piskunov, N. 1996, A&AS, 118, 595
Valenti, J. A., et al. 2009, ApJ, 702, 989
van Leeuwen, F. 2007, A&A, 474, 653
Wang, S. X., et al. 2012, ApJ, 761, 46
Wittenmyer, R. A., Endl, M., Cochran, W. D., Levison, H. F., & Henry, G. W. 2009, ApJS, 182, 97
Wright, J. T., & Howard, A. W. 2009, ApJS, 182, 205
Wright, J. T., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2004, ApJS, 152, 261
Wright, J. T., Marcy, G. W., Butler, R. P., Vogt, S. S., Henry, G. W., Isaacson, H., & Howard, A. W. 2008, ApJL, 683, L63
[
"Improving Vietnamese Legal Question-Answering System based on Automatic Data Enrichment",
"Improving Vietnamese Legal Question-Answering System based on Automatic Data Enrichment"
] | [
"Thi-Hai-Yen Vuong \nVNU University of Engineering and Technology\nHanoiVietnam\n",
"Ha-Thanh Nguyen [email protected] \nNational Institute of Informatics\nTokyoJapan\n",
"Quang-Huy Nguyen \nVNU University of Engineering and Technology\nHanoiVietnam\n",
"Le-Minh Nguyen [email protected] \nJapan Advanced Institute of Science and Technology\nIshikawaJapan\n",
"Xuan-Hieu Phan \nVNU University of Engineering and Technology\nHanoiVietnam\n"
] | [
"VNU University of Engineering and Technology\nHanoiVietnam",
"National Institute of Informatics\nTokyoJapan",
"VNU University of Engineering and Technology\nHanoiVietnam",
"Japan Advanced Institute of Science and Technology\nIshikawaJapan",
"VNU University of Engineering and Technology\nHanoiVietnam"
] | [] | Question answering (QA) in law is a challenging problem because legal documents are much more complicated than normal texts in terms of terminology, structure, and temporal and logical relationships. It is even more difficult to perform legal QA for low-resource languages like Vietnamese where labeled data are rare and pre-trained language models are still limited. In this paper, we try to overcome these limitations by implementing a Vietnamese article-level retrieval-based legal QA system and introduce a novel method to improve the performance of language models by improving data quality through weak labeling. Our hypothesis is that in contexts where labeled data are limited, efficient data enrichment can help increase overall performance. Our experiments are designed to test multiple aspects, which demonstrate the effectiveness of the proposed technique. | null | [
"https://export.arxiv.org/pdf/2306.04841v1.pdf"
] | 259,108,327 | 2306.04841 | 672158c849d61875b37d18406984dc63889b4d2b |
Improving Vietnamese Legal Question-Answering System based on Automatic Data Enrichment
Thi-Hai-Yen Vuong
VNU University of Engineering and Technology
HanoiVietnam
Ha-Thanh Nguyen [email protected]
National Institute of Informatics
TokyoJapan
Quang-Huy Nguyen
VNU University of Engineering and Technology
HanoiVietnam
Le-Minh Nguyen [email protected]
Japan Advanced Institute of Science and Technology
IshikawaJapan
Xuan-Hieu Phan
VNU University of Engineering and Technology
HanoiVietnam
Improving Vietnamese Legal Question-Answering System based on Automatic Data Enrichment
Vietnamese Legal QA · Data Enrichment · Legal Retrieval
Question answering (QA) in law is a challenging problem because legal documents are much more complicated than normal texts in terms of terminology, structure, and temporal and logical relationships. It is even more difficult to perform legal QA for low-resource languages like Vietnamese where labeled data are rare and pre-trained language models are still limited. In this paper, we try to overcome these limitations by implementing a Vietnamese article-level retrieval-based legal QA system and introduce a novel method to improve the performance of language models by improving data quality through weak labeling. Our hypothesis is that in contexts where labeled data are limited, efficient data enrichment can help increase overall performance. Our experiments are designed to test multiple aspects, which demonstrate the effectiveness of the proposed technique.
Introduction
The performance of question answering (QA) has increased significantly thanks to the rapid development and recent breakthroughs in natural language processing. With these advances, QA is being used actively in various business domains to save human labor, increase automation, and enhance user experience. Among application areas, QA in the legal domain has attracted a lot of interest from the research community, as well as awareness and support from legal practitioners, experts, law firms, and government agencies. Legal QA could help them find relevant legal information quickly, accurately, and reliably.
Technically, the legal retrieval-based QA problem is stated as follows: given a query $q$ and a text corpus $D = \{d_1, d_2, \ldots, d_n\}$, retrieval-based QA finds the most likely document $d^*$ that maximizes the relevance score $R$:

$$d^* = \arg\max_{d \in D} R(q, d) \tag{1}$$
where R(q, d) represents the relevance score of the query q and document d.
Traditionally, lexical weighting and ranking approaches like TF-IDF or BM25 are used to find relevant documents based on matching vocabulary terms. Despite their limited accuracy, these techniques are simple and cost-effective. Meanwhile, representation-based and deep learning models are likely to give better results, but they are much more expensive in terms of training data, computing power, storage, and deployment. Various deep learning models have been introduced to enhance the representation of queries and documents, such as CNNs [4], RNNs and LSTMs [11,17]. Pre-trained language models (BERT [2], GPTs [1]) also significantly improve text representation in retrieval tasks.
In the legal domain, there are several challenges to building a reliable QA system. First, legal documents are much more complex than normal texts. They contain legal terms and concepts that are not commonly observed in general texts. Legal texts are usually long and have complex structures. There are also temporal constraints, logical relations, cross-document references etc. that are even difficult for human readers to follow and understand. Second, data annotation for legal documents is a real challenge, making it hard to construct even a medium-sized high-quality labeled dataset for training QA models.
Today, one popular way to improve accuracy is to build large deep-learning models with a huge number of parameters. This is obviously an obstacle because building such models requires powerful computing resources and a huge source of data. In this work, we want to concentrate on enhancing data quality and quantity in the context where expanding labeled data is infeasible. A heuristic method for automatically creating weak label datasets and supporting relationship representation models in case law retrieval is presented by Vuong et al. [20]. Therefore, we apply this technique to create more training data to improve our models without the need of increasing number of model parameters.
Technically, we address the problem of article-level retrieval-based legal QA. We use the Vietnamese civil law QA dataset, which was introduced by Nguyen et al. [10], to conduct an empirical study on the proposed methods. Table 1 illustrates an example of a legal query and the anticipated response. It is difficult to represent, retrieve and determine the correct answer when the articles are often long and complex. In addition, a notable feature of this dataset is that each article usually has a title, which serves as a brief summary.
The main contributions of our work are twofold. First, we built an end-to-end article retrieval system to solve the legal QA task. Second, we show how efficient automated data enrichment is and we conducted a variety of experiments to contrast our model with the most cutting-edge approaches in this domain.
Related Work
In natural language processing, the term question answering (QA) is commonly used to describe systems and models that are capable of providing information based on a given question. Depending on the characteristics of the task, we can divide it into different categories. Factoid QA [6] is a class of problems for which the answer is usually simple and can be extracted directly from a given question or context; problems in this category can often be solved with generation models or sequence tagging approaches. Retrieval-based QA [3] is a class of problems where the answer should be retrieved from a large list of candidates based on relevance and the ability to answer the question; this class of problems can also be called List QA. Confirmation QA [15] is the class of problems where systems or models need to confirm whether a statement is true or false; systems for this type of problem can be end-to-end deep learning models, knowledge-based systems, or neuro-symbolic systems.
In the legal field, question answering has been studied by the research community for many years [12]. The main challenges of this problem for statutory language include fragmented training data, complex language, and long texts. With the emergence of transformer-based [19] language models as well as transfer learning and data representation techniques, the performance of systems on these tasks has improved significantly. In legal information retrieval, a number of neural approaches have also been introduced to address the problems of vocabulary mismatch and complex relationships [5,16,18,10].
Dataset
Original dataset: the corpus is collected from Vietnamese civil law. The labeled dataset was introduced by Nguyen et al. [10]. Tables 2 and 3 give a statistical summary of the corpus and dataset. There are 8,587 documents in the corpus. Vietnamese civil law documents have a long and intricate structure. The longest document contains up to 689 articles, and the average number of articles per document is also comparatively high at 13.69. The average title length in this dataset is 13.28 words, whereas the average content length is 281.83 words. This is worth noting because one of the challenges and restrictions is the representation of long texts. On average, the questions are less than 40 words long. Because of the similarity of their distributions, a model trained on the training set is expected to perform well on the test set.

Weak labeled dataset: Vuong et al. assume that the sentences in a legal article support a topic sentence [20]. On the basis of this assumption, a weak labeled dataset can be created. There is a similar relationship in this dataset: the title serves as a brief summary of the article, so the sentences in the article content support the title. We apply this assumption to our method. By treating the title as if it were a question, we produce a dataset with weak labels: a title-content pair is a positive example, equivalent to a question and related-article pair. We randomly generated negative examples at a ratio of four negatives per positive and obtained a weak label dataset consisting of 551,225 examples; a sketch of this procedure is given below.
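To make the enrichment procedure concrete, the following is a minimal sketch of how such a weak-label dataset could be generated. The `articles` structure and the 1:4 positive-to-negative sampling ratio follow the description above, while the function and field names are our own illustrative choices, not the authors' code.

```python
import random

def build_weak_label_dataset(articles, neg_ratio=4, seed=0):
    """Create weak labels: an article's title acts as a pseudo-question.

    articles: list of dicts with "title" and "content" fields.
    Returns (question, article_content, label) triples.
    """
    rng = random.Random(seed)
    examples = []
    for i, art in enumerate(articles):
        if not art["title"]:
            continue  # some articles have no title (see Table 2)
        # Positive pair: the title plays the question, its own content
        # plays the relevant article.
        examples.append((art["title"], art["content"], 1))
        # Negative pairs: the same title with randomly drawn other contents.
        for _ in range(neg_ratio):
            j = rng.randrange(len(articles))
            while j == i:
                j = rng.randrange(len(articles))
            examples.append((art["title"], articles[j]["content"], 0))
    return examples
```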
Methods
For an article-level legal question-answering system, given a question $q$ and a corpus of civil law $CL = \{D_1, D_2, \ldots, D_n\}$, the system should return a list of related articles $A = \{a_i \mid a_i \in D_j, D_j \in CL\}$. The following sections provide a detailed description of the phases involved in solving the problem. Preprocessing phase: the result of this phase is an article-level database, built by processing the raw Vietnamese civil law documents.
General Architecture
-Vietnamese Civil law is a corpus of Vietnamese legal documents.
- Parser segments legal documents into lists of articles.
- Cleaning filters out metadata from documents. Special symbol characters are also removed from each article; numbers and words are retained and converted to lowercase.
- Tokenizer is crucial to the processing of Vietnamese natural language: Vietnamese word structure is quite complicated, and a word might contain one or more tokens.
- Indexing represents and stores the articles in the database so that, given a query, the search engine can return a response quickly and accurately.
Training phase: we construct a supervised machine-learning model to rank the articles pertaining to the input question.
-Original dataset is a legal QA dataset provided by Nguyen et al. [10].
- Articles are the result of the preprocessing phase.
- Weak label dataset was created by our heuristic method.
- Preprocessing includes tasks similar to those of the preprocessing phase, applied to the questions.
- Training: we construct a deep learning model to rank the articles related to the question.
Inference phase: the process of generating the response to a new input question.

- Question is a query in natural language.
- Preprocessing is the same as in the previous phases, applied to the input question.
Indexing
There are numerous methods for indexing text into a database; in this work, we experiment with two: word indexing and dense indexing.

Word indexing: during the indexing process, the words in the text are analyzed, normalized, and assigned a corresponding index. Given a query, the system searches the index for the most related entries. Word indexing helps to find and look up information in the text faster and more accurately.
Dense vector indexing: besides word indexing, word-to-vector and sequence-to-vector encodings are common methods for representing text semantically. These dense vectors can be used to represent text and to index the database for search purposes. We apply two ways of representing text as dense vectors, a word-embedding average (FastText [7]) and contextual embedding (BERT [2]), to encode the given question and the legal articles. FastText converts each word into a dense vector of 300 dimensions; to construct a vector representation of a text, we average the word vectors to form a single representation vector. Sentence-BERT converts the text into a dense vector with 768 dimensions that can represent the contextual semantics of the document [13]. Table 2 shows that articles are often long, which is a limitation for text representation with FastText and BERT. Moreover, most questions only partially match an article, so we work around this long-representation weakness by splitting each legal article into a list of sentences and generating dense vectors per sentence before indexing them into the database.
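As an illustration of the sentence-level dense indexing described above, the sketch below embeds each article sentence and indexes one vector per sentence. It assumes the `sentence-transformers` package; the model name is only a placeholder for the multilingual BERT encoder used in the paper, and `store.add` stands for a generic vector-store interface rather than a specific library call.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

def average_word_vectors(tokens, word_vec, dim=300):
    """FastText-style text vector: the average of 300-d word vectors."""
    vecs = [word_vec[t] for t in tokens if t in word_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def index_article_sentences(article_id, sentences, encoder, store):
    """Split an article into sentences and index one 768-d vector each,
    working around the long-article limitation discussed above."""
    embeddings = encoder.encode(sentences)  # shape: (n_sentences, 768)
    for j, vec in enumerate(embeddings):
        store.add(doc_id=f"{article_id}#{j}", vector=vec)  # hypothetical API

# Placeholder model name; any multilingual BERT encoder would play this role.
encoder = SentenceTransformer("bert-base-multilingual-cased")
```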
Quickview Retrieval Model
There are 117,557 legal articles in this corpus. This is a huge number, so in order to ensure the effectiveness of the question-answering system, we build a so-called quickview retrieval model using unsupervised machine learning techniques to rapidly return a limited candidate set.
Word matching: to compare questions and articles in the word indexing database, we use the BM25 algorithm [14]. The bag-of-words retrieval function BM25 estimates the relevance of a document to a given search query by ranking documents according to the query terms that appear in each document.
Given a question $Q$ containing tokens $\{t_1, t_2, \ldots, t_n\}$, the BM25 score of an article $A$ is:

$$\mathrm{BM25S}(Q, A) = \sum_{i=1}^{n} \mathrm{IDF}(t_i) \cdot \frac{f(t_i, A) \cdot (k_1 + 1)}{f(t_i, A) + k_1 \cdot \left(1 - b + b \cdot \frac{|A|}{avgdl}\right)} \tag{2}$$
in which:
- $f(t_i, A)$: the term frequency of $t_i$ in the legal article $A$;
- $|A|$: the number of words in the legal article $A$;
- $avgdl$: the average article length in the legal corpus;
- $k_1$: a saturation-curve parameter for the term frequency;
- $b$: the importance of document length;
- $\mathrm{IDF}(t_i)$: the inverse document frequency weight of the question term $t_i$, given by $\mathrm{IDF}(t_i) = \ln\left(1 + \frac{N - n(t_i) + 0.5}{n(t_i) + 0.5}\right)$, where $N$ is the number of articles in the legal corpus and $n(t_i)$ is the number of articles containing $t_i$.
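For reference, a direct transcription of Eq. (2) into Python might look as follows; the tokenization and corpus statistics are assumed to be precomputed, and the parameter defaults are common BM25 choices rather than the paper's settings.

```python
import math

def bm25_score(query_tokens, article_tokens, n_articles, doc_freq,
               avgdl, k1=1.2, b=0.75):
    """BM25 score of one article for one query, following Eq. (2).

    doc_freq: dict mapping token -> number of articles containing it.
    """
    score = 0.0
    length = len(article_tokens)
    for t in query_tokens:
        tf = article_tokens.count(t)          # f(t_i, A)
        if tf == 0:
            continue
        n_t = doc_freq.get(t, 0)
        idf = math.log(1 + (n_articles - n_t + 0.5) / (n_t + 0.5))
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * length / avgdl))
    return score
```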
While the article content is dense with full meaning, the article title also carries significant information. The quickview retrieval score is therefore determined using the formula below:

$$QS(Q, A) = \alpha \cdot \mathrm{BM25S}(Q, TA) + \beta \cdot \mathrm{BM25S}(Q, CA) \tag{3}$$
in which $\alpha$ and $\beta$ are boosting weights, and $TA$ and $CA$ denote the title and the content of the article, respectively.
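In an Elasticsearch-backed implementation, the boosting of Eq. (3) can be expressed with a standard bool/should query. This is only a sketch under the assumption that articles are indexed with `title` and `content` fields; the index name, field names, and client usage are illustrative.

```python
from elasticsearch import Elasticsearch  # assumed client library

es = Elasticsearch("http://localhost:9200")

def quickview_search(question, k=200, alpha=1.5, beta=1.0):
    """Top-k candidates by the boosted BM25 score of Eq. (3)."""
    query = {
        "bool": {
            "should": [
                {"match": {"title": {"query": question, "boost": alpha}}},
                {"match": {"content": {"query": question, "boost": beta}}},
            ]
        }
    }
    resp = es.search(index="legal_articles", query=query, size=k)
    return [(hit["_id"], hit["_score"]) for hit in resp["hits"]["hits"]]
```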
Dense vector matching: to estimate the semantic similarity between questions and legal articles in the dense indexing database, we use cosine similarity to calculate the quickview retrieval score:

$$\mathrm{Cosine}(VQ, VSA) = \frac{VQ^{T} \cdot VSA}{\|VQ\| \cdot \|VSA\|} \tag{4}$$

$$QS(Q, A) = \max_{1 \le j \le n} \mathrm{Cosine}(VQ, VSA_j) \tag{5}$$

in which $VQ$ is the representation vector of the question, $VSA_j$ is the representation vector of the $j$-th sentence in the legal article, and $n$ is the number of sentences in the legal article. Finally, we apply min-max scaling to normalize the scores and generate a ranked list of candidates.
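A minimal NumPy rendering of Eqs. (4)-(5), assuming the question vector and the per-sentence article vectors have already been computed by the encoder:

```python
import numpy as np

def quickview_dense_score(vq, sentence_vecs):
    """Eq. (5): max cosine similarity between the question vector vq and
    the vectors of an article's sentences (rows of sentence_vecs)."""
    sims = sentence_vecs @ vq / (
        np.linalg.norm(sentence_vecs, axis=1) * np.linalg.norm(vq) + 1e-12
    )
    return float(sims.max())

def minmax_normalize(scores):
    """Min-max scaling of candidate scores before ranking."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo + 1e-12) for s in scores]
```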
Supervised Model
Pre-trained language models have proven useful for natural language processing tasks. In particular, BERT significantly enhanced common language representations [2]. We use a pre-trained BERT model and fine-tune all of its parameters to build the relevance classifier. We use the final hidden state $h$ of the first token as the representation of the question-article pair. The last layer is a single fully connected layer added on top of BERT, and the output of the model is a binary classification. Cross-entropy is used as the loss function. Adam [9] is used to optimize all model parameters during training with a learning rate of 1e-5. The supervised score between the question and the legal article is the classification probability of label 1:
$$SS(Q, A) = P_{\mathrm{label}=1}(Q, A) \tag{6}$$
Lastly, we also apply min-max scaling to normalize the scores and rerank the list of candidates. We build this relevance classification model on two training datasets: the original dataset and the full dataset (original plus weak label data). When training with the full dataset, we first fit the model on the weak label data and then fine-tune the best checkpoint on the original dataset.
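The following sketch illustrates the two-stage fine-tuning described above with the Hugging Face `transformers` API. The model and hyperparameter choices mirror the text (multilingual BERT, Adam, learning rate 1e-5), but the data-loading details are schematic assumptions, not the authors' implementation.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def train_epoch(pairs):
    """pairs: list of (question, article_text, label) triples."""
    model.train()
    for batch in DataLoader(pairs, batch_size=16, shuffle=True,
                            collate_fn=lambda b: b):
        qs, arts, labels = zip(*batch)
        inputs = tokenizer(list(qs), list(arts), truncation=True,
                           padding=True, return_tensors="pt")
        loss = model(**inputs, labels=torch.tensor(labels)).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

def supervised_score(question, article):
    """Eq. (6): probability of label 1 for a question-article pair."""
    inputs = tokenizer(question, article, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Stage 1: fit on the weak label data; stage 2: fine-tune on the original set.
# train_epoch(weak_label_pairs); train_epoch(original_pairs)
```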
Ensemble Model
We utilize the quickview retrieval model to generate a list of top-k candidates. These candidates are then refined using the supervised model, which provides higher precision but is slower; the quickview model serves as a preliminary selection step thanks to its fast computation despite its lower precision.

We use several measures of similarity: lexical similarity (the quickview retrieval model) and semantic similarity (the supervised model). Although lexical and semantic similarity are very different from one another, they can work in tandem and are complementary. The combined score of the question $Q$ and the candidate article $CA_i$ is calculated as follows:
$$\mathrm{CombineS}(Q, CA_i) = \gamma \cdot QS(Q, CA_i) + (1 - \gamma) \cdot SS(Q, CA_i) \tag{7}$$
where $\gamma \in [0, 1]$. Table 3 indicates that each question can have one or more related articles (the average is about 1.6). The most relevant article $MRCA$ is returned by default; to determine the full set of candidates to return, we normalize the combined score and use a threshold parameter, yielding the final returned article set

$$FRA = \{CA_i \mid \mathrm{CombineS}(Q, MRCA) - \mathrm{CombineS}(Q, CA_i) < threshold\}.$$
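Combining the two normalized scores and applying the threshold rule can be written compactly. This sketch assumes both score lists are already min-max normalized as described above; γ = 0.5 and threshold = 0.26 are the values reported later in the paper for the mid-range top-k settings.

```python
def select_articles(candidates, qs_scores, ss_scores,
                    gamma=0.5, threshold=0.26):
    """Eq. (7) plus the threshold rule: return every candidate whose
    combined score is within `threshold` of the best one."""
    combined = [gamma * q + (1 - gamma) * s
                for q, s in zip(qs_scores, ss_scores)]
    best = max(combined)
    return [ca for ca, c in zip(candidates, combined)
            if best - c < threshold]
```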
Experimental Results and Discussion
To ensure fairness in the training process and in the selection of hyperparameters, we split the training dataset into training and validation sets with a ratio of 9:1. In the quickview retrieval phase, we use the Recall@k measure to assess the list of returned candidates: Recall@k is (the number of correctly predicted articles in the top-k results) / (the total number of gold articles). Macro-F2 is the metric used to evaluate the end-to-end question-answering system. Precision, recall, and average response time per question are also used to evaluate the system's performance.
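For completeness, the evaluation metrics can be computed as follows; this assumes per-question sets of predicted and gold article identifiers and uses the standard F-beta form with beta = 2, which weighs recall above precision.

```python
def recall_at_k(ranked, gold, k):
    """Recall@k: correctly predicted articles in the top-k / gold articles."""
    hits = len(set(ranked[:k]) & set(gold))
    return hits / len(gold)

def macro_f2(predictions, golds):
    """Macro-averaged F2 over questions."""
    f2s = []
    for pred, gold in zip(predictions, golds):
        tp = len(set(pred) & set(gold))
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold)
        f2s.append(5 * p * r / (4 * p + r) if (p + r) else 0.0)
    return sum(f2s) / len(f2s)
```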
Quickview Retrieval Result

The preprocessing phase and the quickview retrieval model run on an Intel Core i5 10500 CPU with 32 GB RAM. The supervised model is trained and run for inference on an NVIDIA Tesla P100 GPU with 15 GB memory. In the indexing step and the quickview retrieval model, we use Elasticsearch with an 8 GB heap size. Among the pre-trained BERT models we experimented with, the multilingual BERT model produces the best results, so it is used to generate vector representations for the given question and the articles in the dense vector indexing, and it is also used in the supervised model.

Table 4 shows the results of the word matching method; its superiority in execution time is easy to see. It takes only 14.43 ms to return a set of 50 candidates and 115.63 ms for 1000 candidates. The results also demonstrate the impact of the article title and content on retrieval. Recall@1000 is only 0.75 to 0.87 when we solely use word matching on either the title or the content; when using both, Recall@1000 is nearly 0.9. As a kind of written summary, the title frequently contains important keywords. Consequently, we achieve the best result, 0.9128 in Recall@1000, when boosting the question-title matching score by 1.5 times relative to the question-content score.

The experimental results of the dense vector matching method are illustrated in Table 5. Dense vector matching with both BERT and FastText has lengthy execution times but only average Recall@k. Since the dense vector indexing method indexes articles at the sentence level, it must return many more records than the article-level word indexing method, and calculating similarity between high-dimensional vectors is also a challenge, so this method takes a long time to execute. Retrieving 10,000 sentences, which takes 1.7 and 5.2 seconds respectively, cannot be applied in a real-time question-answering system. R@10000 is 0.61 for FastText and 0.67 for BERT. These scores are easy to understand: the strength of FastText is semantic representation at the word level, whereas BERT is known for its powerful contextual representation of paragraphs, and splitting an article into sentences loses this contextual property. Based on the aforementioned experimental results, we build the quickview retrieval model using BM25 with α = 1.5 and β = 1. For real-time response, we obtain respectable Recall@k scores of 0.7214, 0.7973 and 0.8453 for k in (50, 100, 200), which determines the number of candidates returned after this phase.

End-to-end Question Answering System Result

Table 6 reports the results of the end-to-end question answering system with the top 200 candidates from the quickview retrieval model. The word matching model with BM25 and the supervised model built from the original data both give F2 scores of about 0.38. The ensemble model outperforms the other models with an F2 score of 0.6007, about 22% higher than the single models. As pointed out in the previous section, lexical and semantic similarity are highly dissimilar, but we believe they can cooperate and support one another, and the results clearly confirm that. Table 6 also clearly illustrates the contribution of the weak label dataset: it improves the supervised model's F2 score by 8%. The weak label data continue to have an impact on the F2 score when the lexical and semantic matching models are combined.
The ensemble model that used the weak label data gains a further 1% in F2 score. Additionally, there is a sizeable gap between precision and recall; recall is given more weight because of its greater impact on the F2 score. We found during the experimental and evaluation phases that lexical and semantic similarity have about the same effect, so γ is set to 0.5. Inference time is also a notable consideration when building a question-answering system, as it indicates the feasibility of applying the system in practice. Table 7 shows that, with the computational resources of our experimental environment, we can use the model with the top 50-100 candidates at an execution time of 1 second and 1.7 seconds per question; their F2 scores are only 2-5% lower than the best model. Table 8 shows that our recall and F2 scores are considerably higher than those of the Attentive CNN [8] and Paraformer [10] models (0.6651 and 0.6007). Those models return a small number of related articles, while our system is designed to return a flexible number of articles via a threshold. This explains why their precision is high, about 0.5987, whereas ours is only 0.4331. The set of thresholds for each top-k is listed in Table 9. Table 10 shows an example output of our legal question-answering system compared with Paraformer [10]. Paraformer frequently returns a small number of related articles, while our system is more flexible: here it returns 3 related articles although the gold label contains 2. As a result, a model like Paraformer attains high precision but low recall, whereas our method leans in the opposite direction. Since recall has a greater impact on the F2 score, our model's F2 score is higher by a significant margin of 11%.
Our model predicts that "Article 466 from Doc 91/2015/QH13" is relevant to the given query, but the gold label is 0. Considering this article, we believe it is pertinent to the given question, but the annotator's point of view appears to differ, and we discovered several similar cases in our error analysis. Defining and agreeing on a measure of relevance is an important research question that needs the participation of the AI and Law community; this would not only benefit the development of automated methods but also make legal judgments and decisions more reliable and accurate.
Conclusions
In this paper, we present a method to improve performance in the task of legal question answering for Vietnamese using language models through weak labeling. By demonstrating the effectiveness of this method through experiments, we verify the hypothesis that improving the quality and quantity of datasets is the right approach for this problem, especially in low-resource languages like Vietnamese. The results of our work can provide valuable insights and serve as a reference for future attempts to tackle similar challenges in low-resource legal question-answering.
Figure 1 demonstrates our proposed system. There are three main phases: preprocessing, training, and inference.

Fig. 1: Pipeline of the end-to-end article retrieval-based question answering system
- Quickview retrieval model matches questions and articles using unsupervised machine learning techniques; its processing speed is typically fast.
- Candidates are the limited list of candidates returned by the quickview retrieval model.
- Supervised model is the result of the training phase; its inputs are the question and the candidate articles.
- Candidate scores are the outputs of the supervised model.
- Ensemble model combines the scores of the quickview retrieval model and the supervised model to make the final decision.
Table 1: A sample in the dataset

Question: Hợp đồng ủy quyền có hiệu lực khi đáp ứng tiêu chí nào? (An authorization contract is effective when it meets what criteria?)
Answer: Article 117 from Document 91/2015/QH13
Article Title: Điều kiện có hiệu lực của giao dịch dân sự (Valid conditions of civil transactions)
Article Content: Giao dịch dân sự có hiệu lực khi có đủ các điều kiện sau đây: a) Chủ thể có năng lực pháp luật dân sự, năng lực hành vi dân sự phù hợp với giao dịch dân sự được xác lập; b) Chủ thể tham gia giao dịch dân sự hoàn toàn tự nguyện; c) Mục đích và nội dung của giao dịch dân sự không vi phạm điều cấm của luật, không trái đạo đức xã hội. Hình thức của giao dịch dân sự là điều kiện có hiệu lực của giao dịch dân sự trong trường hợp luật có quy định. (A civil transaction takes effect when the following conditions are satisfied: a) The subject has civil legal capacity and civil act capacity suitable to the established civil transactions; b) Entities participating in civil transactions completely voluntarily; c) The purpose and content of the civil transaction do not violate the prohibition of the law and do not violate social ethics. The form of a civil transaction is the effective condition of a civil transaction in case it is provided for by law.)
Table 2: Corpus of Vietnamese legal documents statistics

Attribute                                      | Value
Number of legal documents                      | 8,587
Number of legal articles                       | 117,557
Number of articles missing a title             | 1,895
Average number of articles per document        | 13.69
Maximum number of articles per document        | 689
Average length of article title (words)        | 13.28
Average length of article content (words)      | 281.83
Table 3: Original dataset statistics (columns: Train set, Test set)
Table 4: Recall@k of the word matching method in the quickview retrieval model
Table 5: Recall@k of the quickview retrieval model on the dense vector indexing

k     | Embedding method | R@k  | Time (ms)
1000  | FastText (D=300) | 0.40 | 203
1000  | BERT (D=768)     | 0.38 | 755
2000  | FastText (D=300) | 0.48 | 384
2000  | BERT (D=768)     | 0.45 | 1,059
5000  | FastText (D=300) | 0.56 | 896
5000  | BERT (D=768)     | 0.60 | 2,433
10000 | FastText (D=300) | 0.61 | 1,757
10000 | BERT (D=768)     | 0.67 | 5,204
Table 6: Results of the end-to-end QA system with top k = 200

Model                            | R      | P      | F2
Quickview Model (1.5, 1)         | 0.4454 | 0.2399 | 0.3803
Supervised Model (original data) | 0.6165 | 0.1461 | 0.3750
Supervised Model (full data)     | 0.6651 | 0.1998 | 0.4538
Ensemble Model (original data)   | 0.6681 | 0.4080 | 0.5925
Ensemble Model (full data)       | 0.6651 | 0.4331 | 0.6007
Table 7: Results of the end-to-end QA system with the ensemble model

Ensemble Model      | R      | P      | F2     | Time (s)
(full data, k=20)   | 0.5677 | 0.4034 | 0.5252 | 0.5
(full data, k=50)   | 0.5842 | 0.4428 | 0.5491 | 1
(full data, k=100)  | 0.6222 | 0.4475 | 0.5771 | 1.7
(full data, k=200)  | 0.6651 | 0.4331 | 0.6007 | 3.4
(full data, k=500)  | 0.6793 | 0.4015 | 0.5967 | 8.5
(full data, k=1000) | 0.6583 | 0.4261 | 0.5936 | 17
Table 8: Results compared with other research groups

Systems            | R      | P      | F2
Attentive CNN [8]  | 0.4660 | 0.5919 | 0.4774
Paraformer [10]    | 0.4769 | 0.5987 | 0.4882
Our model (k=50)   | 0.5842 | 0.4428 | 0.5491
Our model (k=100)  | 0.6222 | 0.4475 | 0.5771
Our model (k=200)  | 0.6651 | 0.4331 | 0.6007
Table 9: Threshold list of the ensemble model

top k     | 20   | 50   | 100  | 200  | 500  | 1000
threshold | 0.38 | 0.28 | 0.26 | 0.26 | 0.25 | 0.2
Table 10: An output example of our system, compared with Paraformer [10].

Question: Vay tiền để kinh doanh nhưng không còn khả năng chi trả phải trả lãi suất thì như thế nào? (In the case of insolvency, how does one address the issue of paying the interest on a business loan?)

Candidate 1 (Ours: 1, Paraformer: 1, Gold: 1)
Id: Article 357 from Doc 91/2015/QH13
Title: Trách nhiệm do chậm thực hiện nghĩa vụ trả tiền (Liability for late performance of the obligation to pay)
Content: 1. Trường hợp bên có nghĩa vụ chậm trả tiền thì bên đó phải trả lãi đối với số tiền chậm trả tương ứng với thời gian chậm trả. 2. Lãi suất phát sinh do chậm trả tiền được xác định theo thỏa thuận của các bên nhưng không được vượt quá mức lãi suất được quy định tại khoản 1 Điều 468; nếu không có thỏa thuận thì thực hiện theo quy định tại khoản 2 Điều 468. (1. Where the obligor makes late payment, then it must pay interest on the unpaid amount corresponding to the late period. 2. Interest arising from late payments shall be determined by agreement of the parties, but may not exceed the interest rate specified in paragraph 1 of Article 468 of this Code; if there is no agreement mentioned above, Clause 2 of Article 468 of this Code shall apply.)

Candidate 2 (Ours: 1, Paraformer: 0, Gold: 0)
Id: Article 466 from Doc 91/2015/QH13
Title: Nghĩa vụ trả nợ của bên vay (Obligations of borrowers to repay loans)
Content: [...] 5. Trường hợp vay có lãi mà khi đến hạn bên vay không trả hoặc trả không đầy đủ thì bên vay phải trả lãi như sau: a) Lãi trên nợ gốc theo lãi suất thỏa thuận trong hợp đồng tương ứng với thời hạn vay mà đến hạn chưa trả; trường hợp chậm trả thì còn phải trả lãi theo mức lãi suất quy định tại khoản 2 Điều 468 của Bộ luật này; b) Lãi trên nợ gốc quá hạn chưa trả bằng 150% lãi suất vay theo hợp đồng tương ứng với thời gian chậm trả, trừ trường hợp có thỏa thuận khác. ([...] 5. If a borrower fails to repay, in whole or in part, a loan with interest, the borrower must pay: a) Interest on the principal as agreed in proportion to the overdue loan term and interest at the rate prescribed in Clause 2 Article 468 in case of late payment; b) Overdue interest on the principal equals one hundred and fifty (150) per cent of the interest rate in proportion to the late payment period, unless otherwise agreed.)

Candidate 3
Id: Article 468 from Doc 91/2015/QH13
Title: Lãi suất (Interest rates)
Content: 1. Lãi suất vay do các bên thỏa thuận. [...] 2. Trường hợp các bên có thỏa thuận về việc trả lãi, nhưng không xác định rõ lãi suất và có tranh chấp về lãi suất thì lãi suất được xác định bằng 50% mức lãi suất giới hạn quy định tại khoản 1 Điều này tại thời điểm trả nợ. (1. The rate of interest for a loan shall be as agreed by the parties. [...] 2. Where parties agree that interest will be payable but fail to specify the interest rate, or where there is a dispute as to the interest rate, the interest rate for the duration of the loan shall equal 50% of the maximum interest prescribed in Clause 1 of this Article at the repayment time.)
https://www.elastic.co/
Acknowledgement

This work was supported by VNU University of Engineering and Technology under project number CN22.09.
[1] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877-1901 (2020)
[2] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL. pp. 4171-4186 (2019)
[3] Feldman, Y., El-Yaniv, R.: Multi-hop paragraph retrieval for open-domain question answering. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp. 2296-2309 (2019)
[4] Hu, B., Lu, Z., Li, H., Chen, Q.: Convolutional neural network architectures for matching natural language sentences. In: Advances in Neural Information Processing Systems. pp. 2042-2050 (2014)
[5] Huang, P.S., He, X., Gao, J., Deng, L., Acero, A., Heck, L.: Learning deep structured semantic models for web search using clickthrough data. In: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. pp. 2333-2338 (2013)
[6] Iyyer, M., Boyd-Graber, J., Claudino, L., Socher, R., Daumé III, H.: A neural network for factoid question answering over paragraphs. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pp. 633-644 (2014)
[7] Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient text classification. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. pp. 427-431 (2017)
[8] Kien, P.M., Nguyen, H.T., Bach, N.X., Tran, V., Le Nguyen, M., Phuong, T.M.: Answering legal questions by learning neural attentive text representation. In: Proceedings of the 28th International Conference on Computational Linguistics. pp. 988-998 (2020)
[9] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2014)
[10] Nguyen, H.T., Phi, M.K., Ngo, X.B., Tran, V., Nguyen, L.M., Tu, M.P.: Attentive deep neural networks for legal document retrieval. Artificial Intelligence and Law pp. 1-30 (2022)
[11] Palangi, H., Deng, L., Shen, Y., Gao, J., He, X., Chen, J., Song, X., Ward, R.: Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing 24(4), 694-707 (2016)
[12] Rabelo, J., Goebel, R., Kim, M.Y., Kano, Y., Yoshioka, M., Satoh, K.: Overview and discussion of the competition on legal information extraction/entailment (COLIEE) 2021. The Review of Socionetwork Strategies 16(1), 111-133 (2022)
[13] Reimers, N., Gurevych, I.: Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 3982-3992 (2019)
[14] Robertson, S.E., Walker, S.: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In: SIGIR '94. pp. 232-241. Springer (1994)
[15] Sanagavarapu, K., Singaraju, J., Kakileti, A., Kaza, A., Mathews, A., Li, H., Brito, N., Blanco, E.: Disentangling indirect answers to yes-no questions in real conversations. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 4677-4695 (2022)
[16] Sugathadasa, K., Ayesha, B., de Silva, N., Perera, A.S., Jayawardana, V., Lakmal, D., Perera, M.: Legal document retrieval using document vector embeddings and deep learning. In: Intelligent Computing: Proceedings of the 2018 Computing Conference, Volume 2. pp. 160-175. Springer (2019)
[17] Tai, K.S., Socher, R., Manning, C.D.: Improved semantic representations from tree-structured long short-term memory networks. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 1556-1566 (2015)
[18] Tran, V., Le Nguyen, M., Tojo, S., Satoh, K.: Encoded summarization: summarizing documents into continuous vector space for legal case retrieval. Artificial Intelligence and Law 28, 441-467 (2020)
[19] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
[20] Vuong, Y.T.H., Bui, Q.M., Nguyen, H.T., Nguyen, T.T.T., Tran, V., Phan, X.H., Satoh, K., Nguyen, L.M.: SM-BERT-CR: a deep learning approach for case law retrieval with supporting model. Artificial Intelligence and Law pp. 1-28 (2022)
| [] |
[
"A Graph-Based Approach to the Computation of Rate-Distortion and Capacity-Cost Functions with Side Information",
"A Graph-Based Approach to the Computation of Rate-Distortion and Capacity-Cost Functions with Side Information"
] | [
"Deheng Yuan ",
"Tao Guo ",
"Zhongyi Huang "
] | [] | [] | We consider the point-to-point lossy coding for computing and channel coding problems with two-sided information. We first unify these problems by considering a new generalized problem. Then we develop graph-based characterizations and derive interesting reductions through explicit graph operations, which reduce the number of decision variables. After that, we design alternating optimization algorithms for the unified problems, so that numerical computations for both the source and channel problems are covered. With the help of extra root-finding techniques, proper multiplier update strategies are developed. Thus our algorithms can compute the problems for a given distortion or cost constraint and the convergence can be proved. Also, extra heuristic deflation techniques are introduced which largely reduce the computational time. Numerical results show the accuracy and efficiency of our algorithms.Index TermsLossy coding for computing, rate-distortion with side information, capacity-cost, channel with state information, alternating optimization. | null | [
"https://export.arxiv.org/pdf/2306.04981v1.pdf"
] | 259,108,335 | 2306.04981 | 2eb3f569cbc7e4b5888a986a39a099cc30d8a0aa |
A Graph-Based Approach to the Computation of Rate-Distortion and Capacity-Cost Functions with Side Information
8 Jun 2023
Deheng Yuan
Tao Guo
Zhongyi Huang
A Graph-Based Approach to the Computation of Rate-Distortion and Capacity-Cost Functions with Side Information
8 Jun 2023
We consider the point-to-point lossy coding for computing and channel coding problems with two-sided information. We first unify these problems by considering a new generalized problem. Then we develop graph-based characterizations and derive interesting reductions through explicit graph operations, which reduce the number of decision variables. After that, we design alternating optimization algorithms for the unified problems, so that numerical computations for both the source and channel problems are covered. With the help of extra root-finding techniques, proper multiplier update strategies are developed. Thus our algorithms can compute the problems for a given distortion or cost constraint and the convergence can be proved. Also, extra heuristic deflation techniques are introduced which largely reduce the computational time. Numerical results show the accuracy and efficiency of our algorithms.

Index Terms: Lossy coding for computing, rate-distortion with side information, capacity-cost, channel with state information, alternating optimization.
I. INTRODUCTION
The point-to-point source and channel coding problems with side information and their duality were studied by many researchers [1]- [3]. The theoretical limits are described by the rate-distortion and the capacity-cost functions.
These functions reflect the fundamental trade-offs between communication resources and other considerations. Important special cases include the Gelfand-Pinsker channel problem [4] and the Wyner-Ziv lossy compression problem [5].
As an extension of [5], some work [6] considered a lossy computing problem with decoder side information and obtained the rate-distortion function. However, the expression is in terms of an auxiliary random variable whose intuitive meaning is not clear.
The notions of graph entropy and characteristic graph were introduced by Körner [7] and Witsenhausen [8] for zero-error coding problems. Orlitsky and Roche [9] extended the tools and obtained a graph-based characterization of the minimum rate for lossless computing with decoder side information. The auxiliary random variable involved therein is clearly represented by the independent set of a characteristic graph.
To better understand the lossy computing problem, a natural generalization of the graph entropy approach in [9] was given in [10] and [11] by defining the D-characteristic graph, from which an efficient but suboptimal coding scheme was obtained. In [12] and [13], Basu, Seo and Varshney generalized the independent sets to hyperedges and defined an $\epsilon$-characteristic hypergraph. The rate-distortion function was characterized for a limited class of distortion measures whose average represents the probability that the distance between $f$ and the reconstruction is larger than a given distortion level. Their generalization, however, cannot cope with general distortion measures.
The Blahut-Arimoto (BA) algorithm [14], [15] was the first attempt to numerically compute the classical channel capacity and rate-distortion function. It was also generalized to compute these limits in many other settings [16]- [18], including the source and channel coding problem with two-sided information in [19]. However, BA type algorithms cannot compute rate-distortion functions for any fixed distortion constraint, because it is the dual variables associated with the distortion constraints rather than the distortion constraints themselves that are fixed during iterations. Also, numerical computations for the lossy computing problem have not been considered.
Recently, some work [20], [21] introduced optimal transport (OT) models and algorithms for numerical computation in information theory, and other work [22] designed algorithms based on the Bregman divergence. These algorithms update the dual variables during iterations, which guarantees that the extra distortion constraints in the problems considered therein are satisfied.
In the current paper, we note the structural similarity of lossy computing and channel coding problems with two-sided information and introduce a new unified source-channel problem to analyze these problems together. By developing a bipartite-graph-based characterization for the new problem and performing feasible contractions on the graphs, we are able to largely reduce the number of edges and vertices and hence the number of decision variables for some important special cases. These efforts help with the computation of the trade-offs both analytically and numerically.
Applying the results to the lossy computing problem, we give the explicit meaning of the auxiliary random variable for general distortion measures. Also, when the two-sided information has a common part or the distortion is minimum, the characterization can be largely reduced. Our result naturally subsumes that in [12] as a special case by specifying a distortion measure. The capacity-cost function for channels with state information is analogously studied and reduced.
With the help of root-finding techniques, proper update strategies for the dual variables are added to a BA type alternating minimization process, and algorithms are developed for the unified problem. The convergence of the algorithms is proved. The specialized versions of our algorithms can compute the rate-distortion function for a given distortion constraint (or the capacity-cost function for a given cost constraint), overcoming the weaknesses of BA type algorithms. Also, an $O(1/n)$ convergence rate for the optimal value can be shown for the lossy computing problem. Extra heuristic deflation techniques exploit the sparsity of solutions and greatly reduce the computational cost in most cases. Numerical experiments show the accuracy and efficiency of our algorithms.
Compared with directly traversing expressions like those in [19] with BA type algorithms, our methods can greatly reduce the computational complexity and compute the rate-distortion (capacity-cost) function for a given distortion (cost) constraint. Moreover, the graph characterizations combined with the heuristic deflation techniques can also accelerate traditional BA type algorithms and hence the computation of the whole rate-distortion (capacity-cost) curves.
II. PROBLEM FORMULATION AND PRELIMINARIES
A. The Lossy Computing Problem
Denote a discrete random variable by a capital letter and its finite alphabet by the corresponding calligraphic letter, e.g., $S_1 \in \mathcal{S}_1$ and $\hat Z \in \hat{\mathcal{Z}}$. We use the superscript $n$ to denote an $n$-sequence, e.g., $S_1^n = (S_{1i})_{i=1}^{n}$. Let $(S_{1i}, S_{2i}) \sim p(s_1, s_2)$, $i \in \{1, 2, \ldots, n\}$, be i.i.d. random variables distributed over $\mathcal{S}_1 \times \mathcal{S}_2$. Without loss of generality, assume $p(s_1) > 0$ and $p(s_2) > 0$ for all $s_1 \in \mathcal{S}_1$, $s_2 \in \mathcal{S}_2$.
Consider the lossy computing problem with two-sided information depicted in Fig. 1. The source messages $S_1^n$ and $S_2^n$ are observed by the encoder and the decoder, respectively. Let $f: \mathcal{S}_1 \times \mathcal{S}_2 \to \mathcal{Z}$ be the function to be computed and $d: \mathcal{Z} \times \hat{\mathcal{Z}} \to [0, \infty)$ be a distortion measure. Denote $f(S_{1i}, S_{2i})$ by $Z_i$ for $1 \le i \le n$. Without ambiguity, we abuse the notation of $f$ and $d$ to denote their vector extensions, and define
$$f(s_1^n, s_2^n) = \left(f(s_{1i}, s_{2i})\right)_{i=1}^{n}, \quad d(z^n, \hat z^n) = \frac{1}{n}\sum_{i=1}^{n} d(z_i, \hat z_i).$$

An $(n, 2^{nR})$ code, the achievability of a rate-distortion pair $(R, D)$, and the rate-distortion function $R(D)$ are then defined in the standard manner, in parallel with the channel coding definitions below.

[Fig. 2: the encoder maps the message $M$ and the state $S_1^n$ to the channel input $X^n$; the channel $p(y|x, s_1, s_2)$ with state $(S_1^n, S_2^n)$ outputs $Y^n$; the decoder maps $Y^n$ and $S_2^n$ to $\hat M$.]

B. The Channel Coding Problem with State Information

Let $(\mathcal{X}, p(y|x, s_1, s_2), \mathcal{Y}, \mathcal{S}_1 \times \mathcal{S}_2)$ be a discrete memoryless channel with i.i.d. state information $(S_{1i}, S_{2i}) \sim p(s_1, s_2)$, $i \in \{1, 2, \ldots, n\}$, distributed over $\mathcal{S}_1 \times \mathcal{S}_2$. Also assume $p(s_1) > 0$ for all $s_1 \in \mathcal{S}_1$.
Consider the channel coding problem with two-sided state information depicted in Fig. 2. The encoder wishes to communicate a message $M$, uniformly distributed over $\{1, 2, \ldots, 2^{nR}\}$ and independent of $(S_1^n, S_2^n)$, through the channel $(\mathcal{X}, p(y|x, s_1, s_2), \mathcal{Y}, \mathcal{S}_1 \times \mathcal{S}_2)$. The states $S_1^n$ and $S_2^n$ are observed by the encoder and the decoder, respectively. Let $b: \mathcal{X} \times \mathcal{S}_1 \times \mathcal{S}_2 \to [0, \infty)$ be a cost measure depending on the input and the channel state. We abuse the notation of $b$ to denote its vector extension, and define
$$b(x^n, s_1^n, s_2^n) = \frac{1}{n}\sum_{i=1}^{n} b(x_i, s_{1i}, s_{2i}).$$
An $(n, 2^{nR})$ code is defined by an encoding function $g_e: \{1, 2, \ldots, 2^{nR}\} \times \mathcal{S}_1^n \to \mathcal{X}^n$ and a decoding function $g_d: \mathcal{Y}^n \times \mathcal{S}_2^n \to \{1, 2, \ldots, 2^{nR}\}$.
Then the channel inputs are $X^n = g_e(M, S_1^n)$ and the decoded messages are $\hat M = g_d(Y^n, S_2^n)$. A capacity-cost pair $(R, B)$ is called achievable if there exists an $(n, 2^{nR})$ code such that

$$P[\hat M \neq M] \to 0 \ \text{as}\ n \to \infty, \quad \text{and} \quad \lim_{n\to\infty} \mathbb{E}[b(X^n, S_1^n, S_2^n)] \le B.$$
We define the capacity-cost function $C(B)$ to be the supremum of all rates $R$ such that $(R, B)$ is achievable.
C. Preliminary Results
We can adapt the results of [1] and formulate the expressions for these two trade-offs.
Lemma 1. The rate-distortion function for lossy computing and the capacity-cost function with two-sided information are given by
$$R(D) = \min_{\substack{p(u|s_1):\ \exists g,\\ \mathbb{E}[d(f(S_1, S_2),\, g(U, S_2))] \le D}} I(U; S_1) - I(U; S_2), \tag{1}$$

$$C(B) = \max_{\substack{p(u|s_1):\ \exists g,\\ \mathbb{E}[b(g(U, S_1), S_1, S_2)] \le B}} I(U; Y, S_2) - I(U; S_1). \tag{2}$$
The proof proceeds in the same manner as in [1]. The coding scheme is similar to the lossy compression case $f(s_1, s_2) = s_1$ and to channels without input cost discussed in [1]; only the objectives for coding differ, and the additional constraints need to be modified accordingly.
D. Bipartite Graph
Let $G = (\mathcal{V}, \mathcal{E})$ be a simple graph with vertex set $\mathcal{V}$ and edge set $\mathcal{E}$. It is a bipartite graph [23] if $\mathcal{V}$ can be split into disjoint sets $\mathcal{V}_1$ and $\mathcal{V}_2$ so that each edge in $\mathcal{E}$ has one end in $\mathcal{V}_1$ and one end in $\mathcal{V}_2$. Denote by $G[\mathcal{V}_1, \mathcal{V}_2]$ such a bipartite graph. A bipartite graph $G[\mathcal{V}_1, \mathcal{V}_2]$ is called complete if every vertex in $\mathcal{V}_1$ is joined to every vertex in $\mathcal{V}_2$. Let $G[\mathcal{V}_1, \mathcal{V}_2]$ be a bipartite graph with edge set $\mathcal{E}$. If there is a weight function $\omega: \mathcal{E} \to \mathbb{R}$, then the graph $G[\mathcal{V}_1, \mathcal{V}_2]$ associated with the weight $\omega$ is called a weighted bipartite graph, denoted by $(G[\mathcal{V}_1, \mathcal{V}_2], \omega)$. In this work, we only consider weighted bipartite graphs with weight functions $\omega(e) \in [0, 1]$ for all $e \in \mathcal{E}$.
III. THE UNIFIED SOURCE-CHANNEL PROBLEM
The similarity between (1) and (2) is often called source-channel duality. Still, more can be revealed by breaking this border between source and channel problems and examining their common structure as optimization problems. Both problems are convex, and the convexity mainly arises from the mutual information term $I(U; S_1)$, which directly depends on the decision variable $p(u|s_1)$. Also, the distortion and cost constraints are both linear. These observations motivate us to generalize them to a new source-channel problem, so that both problems can be studied together.
We are going to show (1), (2) and their interesting reductions are all special cases of the new problem (3).
Therefore, designing algorithms for (3) is enough to solve both (1) and (2).
Let $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{U}$ and suppose that for each $v \in \mathcal{V}$ there exists some $u \in \mathcal{U}$ such that $(v, u) \in \mathcal{E}$. We consider the following optimization problem with a given loss function $l: \mathcal{E} \times \mathcal{W} \to [0, \infty)$,

$$\min_{p(u|v)} \quad I(U; V) - I(U; W), \tag{3a}$$
$$\text{s.t.} \quad P[(V, U) \in \mathcal{E}] = 1, \tag{3b}$$
$$\qquad\ \ \mathbb{E}[l(V, U, W)] \le L, \tag{3c}$$
where the distribution of $V$ and the conditional distribution $W|e$ for $e \in \mathcal{E}$ are also given. In other words, we always have

$$p(u, v, w) = p(v)\, p(u|v)\, p(w|u, v),$$

where $p(u|v)$ is the decision variable, while $p(v)$ and $p(w|u, v)$ are given (the latter is defined as 0 when $(v, u) \notin \mathcal{E}$).
Without loss of generality, we assume that for any $v$,

$$p(v) > 0. \tag{4}$$

Also, for any $w$,

$$\exists (v, u) \in \mathcal{E} \ \text{s.t.}\ p(w|u, v) > 0. \tag{5}$$

Otherwise, we can simply eliminate the $v$ and $w$ that do not satisfy these assumptions.
Define the target-loss function to be the optimal value of the problem (3), which is denoted by T (L).
In the remaining part of this section, the problem is transformed into an equivalent form in Section III-A, and its properties for different loss constraints $L$ are investigated in Section III-B, preparing for the design of the algorithms.
A. Properties and Equivalent Forms of the Problem
It is easy to see that the problem (3) depends on the loss function $l$ only through

$$\bar{l}(v, u) \triangleq \sum_{w'} p(w'|u, v)\, l(v, u, w'). \tag{6}$$
We denote by $\mathrm{supp}(p) \triangleq \{(v, u) \mid p(u|v) > 0\}$ the support of $p(u|v)$ as a function of $(v, u)$; then (3b) is equivalent to $\mathrm{supp}(p) \subseteq \mathcal{E}$.
The problem (3) is equivalent to the following:

$$\min_{p(u|v)} \quad \sum_{(v,u)\in\mathcal{E},\, w} p(v)\, p(u|v)\, p(w|u, v) \log \frac{p(u|v)}{p(u|w)}, \tag{7a}$$
$$\text{s.t.} \quad \mathrm{supp}(p) \subseteq \mathcal{E}, \tag{7b}$$
$$\qquad\ \ \sum_{(v,u)\in\mathcal{E}} p(v)\, p(u|v)\, \bar{l}(v, u) \le L, \tag{7c}$$

where

$$p(u|w) = \frac{p(u, w)}{p(w)} = \frac{\sum_{v:\,(v,u)\in\mathcal{E}} p(v)\, p(u|v)\, p(w|u, v)}{\sum_{(v,u)\in\mathcal{E}} p(v)\, p(u|v)\, p(w|u, v)} \tag{8}$$

is the conditional distribution of $U$ given $W$, which is defined when $p(w) = \sum_{(v,u)\in\mathcal{E}} p(v)\, p(u|v)\, p(w|u, v) > 0$.
To simplify the notation, we rewrite the conditional distributions of $U|V$ and $U|W$ as $q$ and $r$, respectively. Define

$$\mathrm{Loss}(q) = \sum_{(v,u)\in\mathcal{E}} p(v)\, q(u|v)\, \bar{l}(v, u), \tag{9}$$

and the generalized Kullback-Leibler (K-L) divergence

$$GD_{\mathcal{E}}(q\|r) = \sum_{(v,u)\in\mathcal{E},\, w} p(v)\, q(u|v)\, p(w|u, v) \log \frac{q(u|v)}{r(u|w)}. \tag{10}$$
The generalized K-L divergence is not necessarily nonnegative, unlike the classical one, which is defined between two distributions on the same set. However, it is nonnegative if $q$ and $r$ are two conditional distributions on the same set. With this definition, the objective function of (3) can be written as $O(p) = GD_{\mathcal{E}}(p\|p)$. The following lemma shows that the $p$ in the second position can be relaxed, which gives an equivalent form of the problem and motivates our alternating minimization algorithms.
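As a concrete illustration of the quantities in (8)-(10), the sketch below computes the induced conditional $r(u|w)$ and the generalized K-L divergence with NumPy arrays. The array shapes and the mask encoding of $\mathcal{E}$ are our own conventions, not the paper's implementation.

```python
import numpy as np

def induced_r(pv, q, pw_uv, mask):
    """Eq. (8): r(u|w) induced by q(u|v).

    pv: (V,) source distribution; q: (V, U) rows q(u|v);
    pw_uv: (V, U, W) channel p(w|u, v); mask: (V, U) indicator of E.
    """
    joint_uw = np.einsum("v,vu,vuw->uw", pv, q * mask, pw_uv)  # p(u, w)
    pw = joint_uw.sum(axis=0, keepdims=True)                   # p(w)
    return np.divide(joint_uw, pw, out=np.zeros_like(joint_uw),
                     where=pw > 0)

def gd_divergence(pv, q, pw_uv, r, mask):
    """Eq. (10): generalized K-L divergence GD_E(q || r)."""
    total = 0.0
    V, U, W = pw_uv.shape
    for v in range(V):
        for u in range(U):
            if mask[v, u] == 0 or q[v, u] == 0:
                continue
            for w in range(W):
                if pw_uv[v, u, w] > 0:
                    total += (pv[v] * q[v, u] * pw_uv[v, u, w]
                              * np.log(q[v, u] / r[u, w]))
    return total
```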
It can be seen from (8) that for any feasible solution $p$ of (7), $\mathrm{supp}(p(u|w)) \subseteq \mathcal{F}$ as a function of $(u, w)$, where

$$\mathcal{F} = \{(u, w) \mid \exists v,\ (v, u) \in \mathcal{E},\ p(w|u, v) > 0\}. \tag{11}$$

Then the following equivalent form of (3) is immediate, which is a variant of the BA type equivalent form.
Lemma 2. The problem (3) has the same optimal value $T(L)$ as the following problem:

$$\min_{q,\, r} \quad GD_{\mathcal{E}}(q\|r), \tag{12a}$$
$$\text{s.t.} \quad \mathrm{supp}(q) \subseteq \mathcal{E}, \tag{12b}$$
$$\qquad\ \ \mathrm{Loss}(q) \le L, \tag{12c}$$
$$\qquad\ \ \mathrm{supp}(r) \subseteq \mathcal{F}. \tag{12d}$$
The proof of the lemma can be found in Appendix D. The equivalent problem (12) and the original one (3) share good properties, which help with their solution.
Lemma 3. Both (12) and (3) are convex optimization problems.
The proof can also be found in Appendix A.
B. Properties of the Problem for Different Loss Constraints
The newly defined target-loss function preserves the key properties of the original rate-distortion and capacity-cost functions, as summarized below and proved in Appendix A. Further investigation into the problem enables us to identify trivial cases and focus on the difficulties we really need to tackle. We first define the boundaries for the different cases of problem (3) to be

$$L_{\min} \triangleq \min_{\mathrm{supp}(q) \subseteq \mathcal{E}} \{\mathrm{Loss}(q)\}, \tag{13}$$

$$L_{\max} \triangleq \min\left\{ L \ge 0 \,\middle|\, T(L) = \min_{L' \ge 0} T(L') \right\}, \tag{14}$$

$$L_{\mathrm{Max}} \triangleq \max_{\mathrm{supp}(q) \subseteq \mathcal{E}} \{\mathrm{Loss}(q)\}. \tag{15}$$
Then it is easy to see that $L_{\min} \le L_{\max} \le L_{\mathrm{Max}}$. Also, $L_{\min}$ and $L_{\mathrm{Max}}$ are easy to compute by the following formulas:

$$L_{\min} = \sum_{v} p(v) \min_{u:\,(v,u)\in\mathcal{E}} \bar{l}(v, u), \tag{16}$$

$$L_{\mathrm{Max}} = \sum_{v} p(v) \max_{u:\,(v,u)\in\mathcal{E}} \bar{l}(v, u). \tag{17}$$
In contrast, L max does not have an explicit formula in general, except for special cases such as in the lossy computing problem discussed in Section V.
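The two explicit boundaries are direct to evaluate. For instance, with the same array conventions as the earlier sketch (a minimal illustration under our own data layout, not the paper's code):

```python
import numpy as np

def loss_boundaries(pv, lbar, mask):
    """Eqs. (16)-(17): L_min and L_Max from the averaged loss lbar(v, u).

    pv: (V,); lbar: (V, U); mask: (V, U) indicator of the edge set E.
    Every v is assumed to have at least one edge, as in the problem setup.
    """
    masked_min = np.where(mask > 0, lbar, np.inf)
    masked_max = np.where(mask > 0, lbar, -np.inf)
    l_min = float(pv @ masked_min.min(axis=1))
    l_max_boundary = float(pv @ masked_max.max(axis=1))
    return l_min, l_max_boundary
```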
Theorem 1. The problem (3) can be classified into the following cases for different ranges of $L$, which directly help with the computation.

1) For $L < L_{\min}$, (3) is infeasible.

2) For $L \ge L_{\min}$, (3) is feasible and $T(L_{\min}) < \infty$.

3) For $L > L_{\min}$, the Slater constraint qualification (SLCQ) is satisfied, and the Karush-Kuhn-Tucker (KKT) conditions are both necessary and sufficient for optimality.

4) $T(L)$ is continuous in $L \in [L_{\min}, \infty)$.

5) $T(L)$ is strictly decreasing in $L \in [L_{\min}, L_{\max}]$. In this case, the optimal value of the problem (3) is achieved with equality in the loss constraint.

6) For $L \ge L_{\mathrm{Max}}$, the loss constraint (3c) is always satisfied and can be neglected.
Theorem 1 can be easily derived from Lemma 4 and the proof is omitted. Also note that the problem (12) shares the same properties described in Theorem 1.
For the problem (3), we can first compute $L_{\min}$ and $L_{\mathrm{Max}}$ by (16)-(17) and compare $L$ with them. By Theorem 1, we only need to compute the target-loss function for $L \ge L_{\min}$. The ideas are briefly summarized here; details are described in Section VII.

1) For $L = L_{\min}$, the problem can be solved by a variant of the BA type algorithm.

2) For $L \in (L_{\min}, L_{\mathrm{Max}})$, the problem is solved by our improved alternating minimization algorithm.

3) For $L \ge L_{\mathrm{Max}}$, after directly eliding the constraint (3c), the problem can be solved by a variant of the BA type algorithm.
IV. GRAPH-BASED CHARACTERIZATIONS
We introduce graph-based characterizations for the problem (3) and explore its reductions, which helps with its solution both analytically and numerically.
A. Construction and Operations on the Graph Characterization
In the problem (3), the decision variable $p(u|v)$ is a transition probability from $\mathcal{V}$ to $\mathcal{U}$. This observation motivates us to represent $v \in \mathcal{V}$ and $u \in \mathcal{U}$ by vertices, connect $v$ and $u$ when $(v, u) \in \mathcal{E}$, and view each $p(u|v)$ as the weight on the edge. This constructs a bipartite graph $G[\mathcal{V}, \mathcal{U}]$ with edge set $\mathcal{E}$. Denote by $\mathcal{E}_u \subseteq \mathcal{V}$ and $\mathcal{E}_v \subseteq \mathcal{U}$ the sets of vertices adjacent to $u$ and $v$, respectively.
We say ω is a feasible weight on a bipartite graph G[V, U] with edge set E if
1) for any v ∈ V, Σ_{u∈E_v} ω(v, u) = 1;
2) ω satisfies the loss constraint Σ_{(v,u)∈E} p(v) ω(v, u) l̄(v, u) ≤ L.
In this case, we denote the bipartite graph with a feasible weight by (G[V, U], ω). Each feasible solution p of (3) naturally corresponds to a feasible weight ω p on G χ [V, U] with ω p (v, u) = p(u|v).
The problem (3) is then an optimization problem over all ω p (v, u). Or equivalently, we have a canonical graph characterization immediately.
Lemma 5. Let T(L) be the target-loss function. Then
T(L) = min_{(G_χ[V,U], ω_p)} O(p),   (18)
where the minimum is taken over all feasible weights ω_p on G_χ[V, U].
We introduce a contraction operation for a bipartite graph (G[V, U], ω p ), which will be proved to decrease the objective function and keep the weight feasible.
Definition 2. Let G[V, U] be a bipartite graph with edge set E and let G′[V, U′] be its subgraph with edge set E′ (U′ ⊆ U and E′ ⊆ E). We say a feasible contraction transforms (G[V, U], ω_p) into (G′[V, U′], ω_p′) if
(i) there exists a function h : U → U′ such that for any (v, u) ∈ E, (v, h(u)) ∈ E′;
(ii) for any (v, u) ∈ E,
l̄(v, h(u)) ≤ l̄(v, u);   (19)
(iii) for any (v, u) ∈ E and w ∈ W,
p(w|h(u), v) = p(w|u, v);   (20)
(iv) p′ is naturally induced by h from p; to be precise, for any (v, u′) ∈ E′,
ω_p′(v, u′) ≜ p′(u′|v) = Σ_{u∈E_v: h(u)=u′} p(u|v).   (21)
Remark 1. To show the definition is well-defined, we need to show that p′ induced by h from p is actually a feasible weight on G′[V, U′].
The well-definedness of Definition 2 as well as Definition 3 can be found in the proof of Lemma 6.
Theorem 2. Suppose there exists a subgraph G′[V, U′] of G_χ[V, U] such that each feasible weight ω_p on G_χ[V, U] can be transformed into some feasible weight ω_p′ on G′[V, U′] by a feasible contraction. Then
T(L) = min_{(G′[V,U′], ω_p)} O(p).   (22)
Note that G′[V, U′] always has fewer vertices and edges than G_χ[V, U], so the number of decision variables is reduced.
Theorem 2 serves as a basic tool for the reduction of the problem in various special cases, and it will be proved along with its generalized form Theorem 3. In these cases, the feasible contraction is always chosen to be uniform in ω_p; that is, there exists one feasible contraction which transforms any ω_p into some ω_p′ and makes the latter satisfy the condition.

¹There is an easy transformation between a bipartite graph and a multi-hypergraph, and our discussion here can also be formulated by the equivalent notions of multi-hypergraph as in [24]. We adopt the bipartite graph approach to simplify the operations on graphs.
Two special cases are discussed in Section IV-B and IV-C. We will apply and extend these results to our main objects in Section V and VI.
B. When Two-Sided Information Has a Common Part
To further simplify the graph characterization in the case when two-sided information has a common part, we generalize the ideas of the Gács-Körner-Witsenhausen common information. We find that the contraction function h can be allowed to depend on the common part.
Definition 3. Let G[V, U] be a bipartite graph with edge set E and let G′[V, U′] be its subgraph with edge set E′ (U′ ⊆ U and E′ ⊆ E). We say a generalized feasible contraction transforms (G[V, U], ω_p) into (G′[V, U′], ω_p′) if
(i) V = ∪_{k=1}^{K} V_k and W = ∪_{k=1}^{K} W_k are partitions satisfying the strict separation condition: for any (v, w),
∃u, p(w|u, v) > 0 ⇒ ∃k, v ∈ V_k, w ∈ W_k;   (23)
(ii) there exist functions h_k : U → U′, k = 1, 2, ..., K, such that for any (v, u) ∈ E, (v, h_k(u)) ∈ E′;
(iii) for any v ∈ V_k and (v, u) ∈ E,
l̄(v, h_k(u)) ≤ l̄(v, u);   (24)
(iv) for any v ∈ V_k, (v, u) ∈ E and w ∈ W_k,
p(w|h_k(u), v) = p(w|u, v);   (25)
(v) for any v ∈ V_k and (v, u′) ∈ E′,
ω_p′(v, u′) ≜ p′(u′|v) = Σ_{u∈E_v: h_k(u)=u′} p(u|v).   (26)
The aim of introducing the generalized feasible contraction is to simplify the graph without increasing the value of the objective function, which is shown by the following result.
Lemma 6. Suppose a generalized feasible contraction transforms (G[V, U], ω_p) into (G′[V, U′], ω_p′). Then ω_p′ is actually a feasible weight and attains a value of the objective function no greater than that of ω_p.
The proof can be found in Appendix C.
Theorem 3. Suppose there exists a subgraph G′[V, U′] of G_χ[V, U] such that each feasible weight ω_p on G_χ[V, U] can be transformed into some feasible weight ω_p′ on G′[V, U′] by a generalized feasible contraction. Then
T(L) = min_{(G′[V,U′], ω_p)} O(p).   (27)

Proof: Note that the edge set of G′[V, U′] is contained in that of G_χ[V, U], and hence any feasible weight on G′[V, U′] must be a feasible weight on G_χ[V, U] by zero extension. This gives one direction. The other is immediate from Lemma 6.

Now we formally investigate an important case when two-sided information has a common part, which will be used in Sections V and VI. Suppose W = (W̄, V′) and p(w̄, v′|u, v) = p(w̄|u, v)p(v′|v). We will see that V′ plays the role of the decoder side information, whose distribution is given and may be correlated with the encoder side information V.
By relabeling the alphabets V and V′, we can arrange p(v, v′) in a block diagonal form with the maximum positive number K of nonzero blocks. The common part [25] of V and V′ is the random variable V_0 that takes value k if (V, V′) is in block k, k = 1, ..., K. In this case, there exist functions g_1 : V → {1, ..., K} and g_2 : V′ → {1, ..., K} such that V_0 = g_1(V) = g_2(V′). Let V_k = g_1^{-1}(k) and W_k = W̄ × g_2^{-1}(k), k = 1, ..., K. Then for any v ∈ V_k and w = (w̄, v′) ∈ W_{k′}, we always have p(v, v′) = 0 if k ≠ k′, and hence p(w|u, v) = 0 for any u. So the strict separation condition is satisfied in this case. Also, the family of functions h_k depends on the value of the common information.
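As an illustration, the common part V_0 can be computed as the connected components of the bipartite graph on V ∪ V′ whose edges are the pairs with p(v, v′) > 0. The sketch below is our own illustrative code (not from the paper) and uses a plain BFS:

```python
import numpy as np
from collections import deque

def common_part(p_joint):
    """Label each v (row) and v' (column) of p(v, v') with its block index k,
    i.e., compute g_1, g_2 and the number of blocks K of the common part."""
    nv, nv2 = p_joint.shape
    g1, g2 = [-1] * nv, [-1] * nv2
    k = 0
    for start in range(nv):
        if g1[start] != -1:
            continue
        g1[start] = k
        queue = deque([('v', start)])
        # BFS over the bipartite graph with edges {(v, v') : p(v, v') > 0}.
        while queue:
            side, i = queue.popleft()
            if side == 'v':
                for j in range(nv2):
                    if p_joint[i, j] > 0 and g2[j] == -1:
                        g2[j] = k
                        queue.append(('w', j))
            else:
                for i2 in range(nv):
                    if p_joint[i2, i] > 0 and g1[i2] == -1:
                        g1[i2] = k
                        queue.append(('v', i2))
        k += 1
    return g1, g2, k

# Block-diagonal joint distribution with two blocks, so K = 2.
p = np.array([[0.2, 0.2, 0.0],
              [0.1, 0.1, 0.0],
              [0.0, 0.0, 0.4]])
print(common_part(p))  # ([0, 0, 1], [0, 0, 1], 2)
```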
C. The Minimum Loss Case
Consider the problem (18) when L = L_min. In this case, ω_p(v, u) = p(u|v) > 0 only if the following condition is satisfied:
l̄(v, u) = min_{u′∈E_v} l̄(v, u′).   (28)
We can construct a subgraph G*[V, U] of G_χ[V, U] by defining its edge set E*. Let e = (v, u) be an edge if (28) is satisfied by v ∈ V and u ∈ E_v. It is immediate from (28) that for each v ∈ V, there is at least one u ∈ U such that (v, u) ∈ E*. Also, E*_v ⊆ E_v and E*_u ⊆ E_u.
By the construction, the information about the loss constraint has been absorbed into the structure of the graph G*[V, U]. Let h(u) = u in the feasible contraction; then we can apply Theorem 2 to reduce the graph. Also, for the resulting graph the second condition (the loss constraint) of feasible weights is redundant, and hence we have the following.

Theorem 4. For the minimum loss case,
T(L_min) = min_{(G*[V,U], ω_p)} O(p),   (29)
and, equivalently, the minimum can be taken over all weights ω_p on G*[V, U] induced by some p.
Remark 2. By simply deleting vertices in U not adjacent to any edges, we can construct the subgraph
G * [V, U ′ ], where U ′ ⊆ U. Then Theorem 4 is also true if G * [V, U] is replaced by G * [V, U ′ ].
V. LOSSY COMPUTING PROBLEMS WITH SIDE INFORMATION
A. Canonical Graph Characterization
We first show this problem can be reduced to special cases of (3) and the canonical graph characterization for this problem can be derived as well. Some interesting reductions will be discussed subsequently.
By Lemma 1,
R(D) = min_{p(u|s_1): ∃g, E[d(f(S_1,S_2), g(U,S_2))]≤D} I(U; S_1) − I(U; S_2).   (30)
We interpret the right hand side in a graph-based manner. Let V = S 1 and W = S 2 , where p(s 2 |u, s 1 ) = p(s 2 |s 1 )
is given. Also, let l(s_1, u, s_2) = d(f(s_1, s_2), g(u, s_2)). Then for each feasible solution (g, p(u|s_1)), we can construct a complete bipartite graph G_g[V, U], and ω_p(v, u) = p(u|v) is a feasible weight on the graph. So we have
R(D) = min_{g, (G_g[V,U], ω_p)} O(p).
We can always assume that for any distinct u_1, u_2 ∈ U, there exists some s_2 such that g(u_1, s_2) ≠ g(u_2, s_2). Otherwise, we apply a feasible contraction on (G_g[V, U], ω_p) as follows. First classify u by the value of g(u, s_2); that is, u_1 and u_2 are classified into one equivalence class if g(u_1, s_2) = g(u_2, s_2) for each s_2 ∈ S_2. This leads to a partition U = ∪_i U_i. Then choose one representative u_i from each class U_i. Finally, let h(u) = u_i if u ∈ U_i. Then (20) and (19) can be easily verified. Denote the graph after contraction by (G_1[V, ∪_i {u_i}], ω_p′). By Lemma 6, ω_p′ is a feasible weight on G_1[V, ∪_i {u_i}] and O(p′) ≤ O(p). But G_1[V, ∪_i {u_i}] satisfies the condition we assume.
Let U′ = Ẑ^{S_2} and construct a complete bipartite graph G[V, U′].
We can observe that any G g [V, U ] can be seen as a subgraph of G[V, U ′ ] by an injection taking u to (g(u, s 2 )) s2∈S2 , as well as edges adjacent to these vertices.
Hence each feasible weight ω_p on G_g[V, U] can be seen as a feasible weight on G[V, U′] after zero extension. So we have min_{(G[V,U′], ω_p)} O(p) ≤ min_{g, (G_g[V,U], ω_p)} O(p). Finally, we define g(u, s_2) = ẑ_{s_2} for each u′ = (ẑ_{s_2})_{s_2∈S_2} and s_2 ∈ S_2. Then (g, (G[V, U′], ω_p)) is a feasible solution of the left hand side. So we have R(D) = min_{g, (G_g[V,U], ω_p)} O(p) = min_{(G[V,U′], ω_p′)} O(p′).
We also call this result the canonical graph characterization for the problem (1). Or equivalently, the above graphbased derivative leads to the following result.
Lemma 7. We have the following characterization for the lossy computing problem,
R(D) = min_{p(u|s_1): U=(Ẑ_{s_2})_{s_2∈S_2}, E[d(f(S_1,S_2), Ẑ_{S_2})]≤D} I(U; S_1) − I(U; S_2).   (31)
The right hand side is a special case of (3).
Let U = (Ẑ_{s_2})_{s_2∈S_2}, V = S_1 and W = S_2, where p(s_2|u, s_1) = p(s_2|s_1) is given. Also, let l(s_1, u, s_2) = d(f(s_1, s_2), ẑ_{s_2}), where u = (ẑ_{s_2})_{s_2∈S_2}.
So the canonical graph characterizations for (3) and (1) actually coincide.
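To make the coincidence concrete, the alphabet U = Ẑ^{S_2} and the induced loss l̄(s_1, u) = Σ_{s_2} p(s_2|s_1) d(f(s_1, s_2), ẑ_{s_2}) can be enumerated directly. The following sketch is our own illustrative code with a hypothetical toy instance:

```python
import itertools
import numpy as np

# Hypothetical toy instance of the lossy computing problem.
S1, S2, Zhat = [0, 1, 2], [0, 1], [0, 1]
p_s2_given_s1 = np.array([[0.5, 0.5],
                          [0.8, 0.2],
                          [0.3, 0.7]])
f = lambda s1, s2: (s1 + s2) % 2     # the function to be computed
d = lambda z, zh: float(z != zh)     # Hamming distortion

# U = Zhat^{S2}: u assigns one reconstruction symbol to every possible s2.
U = list(itertools.product(Zhat, repeat=len(S2)))   # |U| = |Zhat|^{|S2|}

# Induced loss: l(s1, u) = sum_{s2} p(s2|s1) d(f(s1, s2), u[s2]).
loss = np.array([[sum(p_s2_given_s1[s1, s2] * d(f(s1, s2), u[s2])
                      for s2 in S2)
                  for u in U]
                 for s1 in S1])
print(loss.shape)  # (3, 4): |S1| rows and |Zhat|^{|S2|} columns
```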
In this specific case, since p(w|u, v) = p(s 2 |s 1 ) = p(w|v), U − V − W is a Markov chain. And hence we have
I(U; V) − I(U; W) = I(U; V|W) ≥ 0.   (32)
So the optimal value is always nonnegative.
Also, the explicit value of L max can be computed, which is useful for the computation. The proof is given in
Appendix B.
Proposition 1. For the rate-distortion problem,
L_max = min_{u∈U} Σ_v p(v) l̄(v, u).   (33)
For L ≥ L_max, the optimal value is 0 and can be obtained by some q(u|v) = 1(u = u_0).
B. When Two-Sided Information Has a Common Part
Let S 0 ∈ {1, ..., K} be the common part of S 1 and S 2 and S 0 = g 1 (S 1 ) = g 2 (S 2 ). We start from the canonical graph characterization of this problem and derive its reduction.
We apply a generalized feasible contraction on the characteristic bipartite graph G_χ[S_1, U] following the discussion in Section IV-B. Let S_{1k} = g_1^{-1}(k) and S_{2k} = g_2^{-1}(k), k = 1, ..., K; then the partition is strictly separate. Arbitrarily choose one element ẑ_0 from Ẑ. For any k = 1, ..., K, define H_k(s_2, ẑ) = ẑ if g_2(s_2) = k and H_k(s_2, ẑ) = ẑ_0 otherwise. Define h_k(u) = (H_k(s_2, ẑ_{s_2}))_{s_2∈S_2} for any u = (ẑ_{s_2})_{s_2∈S_2}. Then for any s_1 ∈ S_{1k} and s_2 ∈ S_{2k}, p(s_2|h_k(u), s_1) = p(s_2|s_1) = p(s_2|u, s_1), and for any s_1 ∈ S_{1k},
l̄(s_1, h_k(u)) = Σ_{s_2∈S_{2k}} d(f(s_1, s_2), H_k(s_2, ẑ_{s_2})) p(s_2|s_1) = Σ_{s_2∈S_{2k}} d(f(s_1, s_2), ẑ_{s_2}) p(s_2|s_1) = l̄(s_1, u),
satisfying the condition (25) and (24).
In the graph after the contraction, each s_1 ∈ S_{1k} is connected to vertices in U of the form (H_k(s_2, ẑ_{s_2}))_{s_2∈S_2}, which can be seen as of the form (ẑ_{s_2})_{s_2∈S_{2k}} by eliminating the redundant components with value ẑ_0. The resulting graph-based characterization by Theorem 3 can be reformulated in the following form.
Theorem 5. Let S_0 ∈ {1, ..., K} be the common part of S_1 and S_2 and S_0 = g_1(S_1) = g_2(S_2). Then we have
R(D) = min_{p(u|s_1): U|_{S_1} ∈ Ẑ^{g_2^{-1}(g_1(S_1))}, E[d(f(S_1,S_2), Ẑ_{S_2})]≤D} I(U; S_1) − I(U; S_2).   (34)
An important special case is when both S 1 and S 2 can be partitioned into two parts and the form of U is the same for each S 1 . In this case, Theorem 5 directly implies a suboptimal but simple formula.
Corollary 1. Let S_1 = (S_0, S′_1) and S_2 = (S_0, S′_2). Then we have
R(D) = min_{p(u|s_1): U=(Ẑ_{s′_2})_{s′_2∈S′_2}, E[d(f(S_1,S_2), Ẑ_{S_2})]≤D} I(U; S_1) − I(U; S_2).   (35)
The alphabet U is Ẑ^{S′_2}, which is strictly smaller than the Ẑ^{S_2} given by Lemma 7. Hence the number of decision variables is |S_1| · |Ẑ|^{|S′_2|}, a sharp decrease from |S_1| · |Ẑ|^{|S_2|}. The algorithm for (3) can be applied to (35) as well.
C. Minimum Distortion Case
The problem for D = L min is a generalization of the classical graph entropy problem discussed in [9].
The graph characterization is given by Lemma 7 and Theorem 4, where U = Ẑ^{S_2}. Also, by Remark 2, U can be replaced by the subset U′ ⊆ U where each vertex is adjacent to at least one edge. So we start from the result (29),
R(D) = min_{(G*[S_1,U′], ω_p)} O(p),
and discuss its reduction for lossy computing problems.
Definition 4. Let Γ_d(S_1) be the collection of all nonempty subsets C of S_1 satisfying: there exists some u ∈ U′ such that
l̄(s_1, u) = min_{u′} l̄(s_1, u′),   (36)
for each s_1 ∈ C.
It is easy to see that each s 1 ∈ S 1 must be contained in some
C ∈ Γ d (S 1 ) since s 1 ∈ {s 1 } ∈ Γ d (S 1 ).
The above definition helps us to analyze the structure of the graph. For each u ∈ U ′ , denote by C u = {s 1 ∈ S 1 : (s 1 , u) ∈ E * } the set of vertices connected to u. We can observe that C u ∈ Γ d (S 1 ). This defines a map
C : U ′ → Γ d (S 1 ) with C (u) = C u . So U ′ = ∪ C∈Γ d (S1) C −1 (C) is a partition. For each nonempty C −1 (C), we
can choose a representative u C .
We can perform a feasible contraction on the bipartite graph
G * [S 1 , U ′ ] by defining h(u) = u C if u ∈ C −1 (C).
Since p(s_2|s_1, u) = p(s_2|s_1), (25) is guaranteed. Also, l̄(s_1, h(u)) = min_{u′} l̄(s_1, u′) = l̄(s_1, u) for any (s_1, u).
So Lemma 6 can be applied to G * [S 1 , U ′ ]. Also, by noting that the reduced graph is a subgraph of the original one, the converse is also valid. The graph-based characterization is equivalent to the following.
Theorem 6. For the minimum distortion case,
R(L_min) = min_{p(u|s_1): S_1∈U∈Γ_d(S_1)} I(U; S_1) − I(U; S_2),   (37)
where the minimum is taken over all p(u|s_1) satisfying S_1 ∈ U ∈ Γ_d(S_1).

Remark 3. Let d_ε(z, ẑ) = 1{d(z, ẑ) > ε} for any ε ≥ 0, where 1 denotes the indicator function. It is easy to check that Theorem 6 is valid for both discrete and continuous alphabets Ẑ. Then the main result of [12, Theorem 3] can be obtained by applying Theorem 6 to the distortion measure d_ε.

Remark 4. Assume Z = Ẑ and d satisfies d(z, ẑ) = 0 ⟺ z = ẑ. Then the subsets in Γ_d(S_1) become independent sets of the characteristic graph in [9] and Theorem 6 reduces to the significant results of Theorem 2 therein.
The optimization problem (37) can be solved by algorithms for (29) as well.
VI. CHANNEL CODING PROBLEMS WITH STATE INFORMATION
Similar to the discussion in Section V, graph-based characterizations for the channel problem with state can be developed and reductions can be described. Details are mostly easy to verify and are only given where necessary.
Lemma 8. We have the following characterization for the channel problem with state information,
C(B) = max_{p(u|s_1): U=(X_{s_1})_{s_1∈S_1}, E[b(X_{S_1}, S_1, S_2)]≤B} I(U; Y, S_2) − I(U; S_1).   (39)

Let U = (X_{s′_1})_{s′_1∈S_1}, V = S_1 and W = (Y, S_2), where p(y, s_2|u, s_1) = p(s_2|s_1) p(y|x_{s_1}, s_1, s_2) is given for u = (x_{s′_1})_{s′_1∈S_1}. Also, let l(s_1, u, (y, s_2)) = b(x_{s_1}, s_1, s_2) for u = (x_{s′_1})_{s′_1∈S_1}
, then the problem (3) is specialized to the capacity-cost problem (2) with state information. Note that in this case, the maximization in the original problem is replaced by the minimization problem in (3).
For u = (x_{s_1})_{s_1∈S_1},
l̄(s_1, u) = Σ_{s_2} p(s_2|s_1) b(x_{s_1}, s_1, s_2),
and it only depends on the specific component x_{s_1}. So
min_u l̄(s_1, u) = min_{x∈X} Σ_{s_2} p(s_2|s_1) b(x, s_1, s_2).   (40)
Denote by x*_{s_1} one such x achieving the minimum on the right. Let u* = (x*_{s_1})_{s_1∈S_1}; then the conditional distribution p(u|s_1) = 1(u = u*) is a feasible solution to (3) for L ≥ L_min. Also, I(U; V) = I(U; W) = 0 for such p(u|s_1). So T(L) ≤ 0 in this case and C(B) = −T(L) ≥ 0 for L ≥ L_min.

Also, when two-sided state information has a common part, we have similar characterizations.
Theorem 7. Let S 0 ∈ {1, ..., K} be the common part of S 1 and S 2 and S 0 = g 1 (S 1 ) = g 2 (S 2 ). Then we have
C(B) = max_{p(u|s_1): U|_{S_1} ∈ X^{g_1^{-1}(g_1(S_1))}, E[b(S_1,S_2,X_{S_1})]≤B} I(U; Y, S_2) − I(U; S_1).   (41)
Corollary 2. Let S 1 = (S 0 , S ′ 1 ) and S 2 = (S 0 , S ′ 2 ). Then we have
C(B) = max_{p(u|s_1): U=(X_{s′_1})_{s′_1∈S′_1}, E[b(S_1,S_2,X_{S_1})]≤B} I(U; Y, S_2) − I(U; S_1).   (42)
The case for B = L min is slightly different. We adopt the graph-based characterizations by Remark 2
C(B) = − min_{(G*[S_1,U′], ω_p)} O(p),
where U = X^{S_1} and U′ ⊆ U is the set of vertices adjacent to edges. By (40), vertices in U connected to some s_1 have limited values for the component with index s_1. To be precise, let
X_{s_1} = arg min_{x∈X} Σ_{s_2} p(s_2|s_1) b(x, s_1, s_2).
Then for u = (x_{s_1})_{s_1∈S_1}, (s_1, u) ∈ E* only if x_{s_1} ∈ X_{s_1}.
Again we define a feasible contraction as follows. First choose x*_{s_1} ∈ X_{s_1}, and define H(s_1, x) = x if x ∈ X_{s_1} and H(s_1, x) = x*_{s_1} otherwise. For any u = (x_{s_1})_{s_1∈S_1}, define h(u) = (H(s_1, x_{s_1}))_{s_1∈S_1}. We need to verify (25) and (24) for (s_1, u) ∈ E*. With the above assumptions, it is easy to see that l̄(s_1, h(u)) = min_u l̄(s_1, u) = l̄(s_1, u). Also,
p(y, s_2|h(u), s_1) = p(s_2|s_1) p(y|H(s_1, x_{s_1}), s_1, s_2) = p(s_2|s_1) p(y|x_{s_1}, s_1, s_2) = p(y, s_2|u, s_1).
Again, Lemma 6 can be applied to G * [S 1 , U ′ ] and the converse is still valid. The graph-based characterization can be summarized as the following.
Theorem 8. For the minimum cost case,
C(L_min) = max_{p(u|s_1): U ∈ ∏_{s_1∈S_1} X_{s_1}} I(U; Y, S_2) − I(U; S_1),   (44)
where the maximum is taken over all p(u|s_1) satisfying U = (x_{s_1})_{s_1∈S_1} with x_{s_1} ∈ X_{s_1}.
The problem (44) is a special case of (29). And the number of decision variables is |S_1| · ∏_{s_1∈S_1} |X_{s_1}| ≪ |S_1| · |X|^{|S_1|} in general.
Remark 5. We can also study an interesting special case. Let b_ε(x, s_1, s_2) = 1{b(x, s_1, s_2) > ε} for any ε ≥ 0. The case for C = 0 can model the channel coding problem with limited power, an objective demanded by green communications.
VII. ALTERNATING MINIMIZATION ALGORITHMS
We are aimed at solving the problem (3) to obtain T (L). First we derive the algorithm from the pointview of alternating minimization and formulate a generalized condition for convergence in Section VII-A. The convergent update strategies for the dual variables are then developed in Section VII-B and VII-C. Finally, deflation techniques are introduced in Section VII-D and are used to reduce the computational costs.
A. The Prototypical Algorithms
We only need to solve the equivalent form (12) discussed in Section III. To derive the algorithm, the Lagrangian multiplier s is introduced for the linear loss constraints (12c). Throughout this subsection, suppose that some point (L, T (L)), L ∈ (L min , L max ) on the curve is corresponding to the Lagrange multiplier s * , where s * is clearly unknown.
The usual BA type approach fixes s * , but not the loss L in the constraints (12c). So the target-loss function for a given L can not be computed directly. We try to overcome the weaknesses by updating s properly, motivated by [20], [21].
We first construct the penalty function, which is useful for deriving algorithms.
Definition 5. For a fixed s > 0, the penalty function (Lagrangian function) is defined as
F_s(q, r) ≜ GD_E(q||r) + s · Loss(q).
Definition 6. Define the partial minimization process to be, for any (v, u) ∈ E and (u, w) ∈ F ,
q*_s(r)(u|v) ≜ [e^{−s l̄(v,u)} ∏_{w′} r(u|w′)^{p(w′|u,v)}] / [Σ_{u′∈E_v} e^{−s l̄(v,u′)} ∏_{w′} r(u′|w′)^{p(w′|u′,v)}],   (46)
r*(q)(u|w) ≜ [Σ_{v∈E_u} p(v) q(u|v) p(w|u, v)] / [Σ_{(v,u)∈E} p(v) q(u|v) p(w|u, v)].   (47)
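For reference, the two partial minimizations can be vectorized in a few lines. The following sketch is our own NumPy rendering of (46)-(47) (the array layouts and names are our conventions, not the paper's); the q-update works in the log domain for numerical stability:

```python
import numpy as np

def q_update(r, s, loss, p_w_uv, E_mask):
    """Partial minimization (46): q*_s(r)(u|v).
    loss: (V, U) matrix l(v, u); r: (U, W); p_w_uv: (W, U, V); E_mask: (V, U)."""
    # log prod_{w'} r(u|w')^{p(w'|u,v)} = sum_{w'} p(w'|u,v) log r(u|w')
    log_geo = np.einsum('wuv,uw->vu', p_w_uv, np.log(np.maximum(r, 1e-300)))
    log_q = np.where(E_mask > 0, -s * loss + log_geo, -np.inf)
    q = np.exp(log_q - log_q.max(axis=1, keepdims=True))
    return q / q.sum(axis=1, keepdims=True)      # normalize over u for each v

def r_update(q, p_v, p_w_uv):
    """Partial minimization (47): r*(q)(u|w)."""
    joint = np.einsum('v,vu,wuv->uw', p_v, q, p_w_uv)   # joint p(u, w)
    return joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-300)
```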
With each step choosing a suitable s^{(n)}, we want the algorithm to converge in a proper sense. To achieve this goal, we select a suitable s^{(n)} which both bounds the value of Loss(q^{(n)}) and makes the value of F_{s*}(q, r) descend.
To derive the strategy for convergence, first define G_r(s) ≜ Loss(q*_s(r)).
To be more explicit, we have
G_r(s) = Σ_v p(v) [Σ_{u∈E_v} r̃(u, v) e^{−s l̄(v,u)} l̄(v, u)] / [Σ_{u∈E_v} r̃(u, v) e^{−s l̄(v,u)}],
where r̃(u, v) ≜ ∏_{w′} r(u|w′)^{p(w′|u,v)}. Let Θ_i(v) = Σ_{u∈E_v} r̃(u, v) e^{−s l̄(v,u)} (l̄(v, u))^i, i = 0, 1, 2; then
G′_r(s) = Σ_v p(v) [(Θ_1(v))² − Θ_0(v) Θ_2(v)] / (Θ_0(v))² ≤ 0
by the Cauchy–Schwarz inequality, so G_r(s) is decreasing. Suppose that r̃(u, v) > 0 for all v, u; then
lim_{s→∞} G_r(s) = Σ_v p(v) min_{u∈E_v} l̄(v, u) = L_min,
lim_{s→−∞} G_r(s) = Σ_v p(v) max_{u∈E_v} l̄(v, u) = L_Max.
Therefore, the equation
G r (s) = L,(49)
has a root s for L ∈ (L min , L Max ).
Let (q 0 , r 0 ) ∈ arg min q,r F s * (q, r), then define the discriminant for convergence to be
Δ_n(q_0) ≜ (s^{(n)} − s*)(Loss(q^{(n)}) − Loss(q_0)).   (50)
The algorithm is summarized as Algorithm 1.
Theorem 9. 1) If there exists some (q_0, r_0) ∈ arg min_{q,r} F_{s*}(q, r) such that Δ_n(q_0) ≥ 0 for all n, then
lim_{n→∞} F_{s*}(q^{(n+1)}, r^{(n)}) = min_{q,r} F_{s*}(q, r).
2) Consider the following problems:
min_{q,r} F_{s*}(q, r),   min_{q,r: Loss(q)=L} GD_E(q, r),   min_{q,r: Loss(q)≤L} GD_E(q, r).
If for any optimal solution (q_0, r_0), Δ_n(q_0) ≥ 0 for all n, and q^{(n)} satisfies the corresponding constraint for any n, then the solutions (q^{(n+1)}, r^{(n)}) generated by Algorithm 1 converge to an optimal solution (q_0, r_0) of the corresponding problem. If for any optimal solution (q_0, r_0), Δ_n(q_0) = 0, then the convergence rate for the objective function is O(1/n).
Algorithm 1 Prototypical Alternating Minimization Algorithm
Input: Loss matrix l̄(v, u), distributions p(v), p(w|u, v), maximum iteration number max_iter.
Output: A point on the target-loss curve and a corresponding optimal solution.
1: Initialize q^{(1)}(u|v) = 1(u ∈ E_v)/|E_v|.
2: for n = 1 : max_iter do
3: r^{(n)} = r*(q^{(n)}).
4: Solve s^{(n+1)} such that Δ_{n+1}(q_0) ≥ 0.
5: q^{(n+1)} = q*_{s^{(n+1)}}(r^{(n)}).
6: end for
7: return (Loss(q^{(n+1)}), GD_E(q^{(n+1)}||r^{(n)})) and (q^{(n+1)}, r^{(n)}).
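Under the same conventions, a driver for Algorithm 1 could look as follows; the dual update solve_s is a placeholder to be instantiated by one of the strategies below (this is our own sketch, not the authors' implementation):

```python
import numpy as np

def algorithm_1(p_v, p_w_uv, loss, E_mask, L, solve_s, max_iter=500):
    """Prototypical alternating minimization: iterate (47), a dual update, (46)."""
    q = E_mask / E_mask.sum(axis=1, keepdims=True)   # q^(1)(u|v) uniform on E_v
    s = 1.0
    for _ in range(max_iter):
        r = r_update(q, p_v, p_w_uv)                 # r^(n) = r*(q^(n))
        s = solve_s(r, s, L)                         # choose s^(n+1)
        q = q_update(r, s, loss, p_w_uv, E_mask)     # q^(n+1) = q*_{s^(n+1)}(r^(n))
    achieved_loss = float(np.einsum('v,vu,vu->', p_v, q, loss))
    return q, r, achieved_loss
```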
The proof of Theorem 9 as well as the convergence theorems for the specific algorithms can be found in Appendix D.
Algorithm 1 naturally leads to two algorithms whose convergence is implied by Theorem 9 by letting ∆ n (q 0 ) = 0.
First, a generalized BA algorithm is derived as a special case by letting s^{(n)} = s* in Algorithm 1, and the iteration step is directly chosen as
r^{(n)} = r*(q^{(n)}),   q^{(n+1)} = q*_{s*}(r^{(n)}).
It does not solve the problem (3) directly. However, it is still useful for computing the whole target-loss curve because of its easy implementation and lower computational complexity.
Corollary 3. The solutions (q (n+1) , r (n) ) generated by the generalized BA algorithm converge to an optimal solution (q 0 , r 0 ) and
F_{s*}(q^{(n+1)}, r^{(n)}) − F_{s*}(q_0, r_0) = O(1/n).   (51)
By (63), the optimal value returned by the algorithm can be computed through the following formula with better numerical stability:
−Σ_v p(v) log Σ_{u′∈E_v} e^{−s l̄(v,u′)} ∏_{w′} r^{(n)}(u′|w′)^{p(w′|u′,v)} − s^{(n)} L.   (52)
Second, the problem min_{q,r: Loss(q)=L} GD_E(q, r) with equality constraint is easily solved by finding the root of G_{r^{(n−1)}}(s) = L by Newton's method, and we have the following.

Corollary 4. The solutions (q^{(n+1)}, r^{(n)}) generated by the algorithm for the equality-constrained problem converge to an optimal solution (q_0, r_0) and
GD_E(q^{(n+1)}, r^{(n)}) − GD_E(q_0, r_0) = O(1/n).   (53)
Such algorithm is also useful for the rate-distortion problem discussed in Section VII-C.
B. The Improved Alternating Minimization Algorithm
We want to specialize the convergence condition ∆ n (q 0 ) ≥ 0 and develop a practical update strategy for s (n) . It should be implemented without the knowledge of both s * and L * to be practical. If the equality holds in the loss constraint (3b) as for the case L min < L < L max , then it would be enough to solve (49) by Newton's method, as discussed in the end of Section VII-A. However, for the general case, since L max can not be computed easily, special care needs to be taken to handle the inequality constraints.
The following strategy achieves our goals. Also, s^{(n)} is kept nonnegative, so this strategy is structure-preserving.
Definition 7 (Strategy 1). Evaluate G r (n−1) (0) and solve s (n) as follows.
i) If G r (n−1) (0) ≤ L, then let s (n) = 0.
ii) If G r (n−1) (0) > L, then solve G r (n−1) (s) = L for the solution s (n) ≥ 0 by Newton's method.
The Newton iteration reads
s_{k+1} = s_k − [G_{r^{(n−1)}}(s_k) − L] / G′_{r^{(n−1)}}(s_k).
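A sketch of this root-finding step, in our own code and using the Θ_i quantities from Section VII-A (the (V, U) matrix r̃(u, v) = ∏_{w′} r(u|w′)^{p(w′|u,v)} is assumed precomputed):

```python
import numpy as np

def solve_G_equals_L(r_tilde, loss, p_v, L, s0=1.0, iters=20, tol=1e-12):
    """Solve G_r(s) = L by Newton's method; r_tilde is the (V, U) matrix r~(u, v)."""
    s = s0
    for _ in range(iters):
        w = r_tilde * np.exp(-s * loss)                        # weights per (v, u)
        th0 = w.sum(axis=1)                                    # Theta_0(v)
        th1 = (w * loss).sum(axis=1)                           # Theta_1(v)
        th2 = (w * loss ** 2).sum(axis=1)                      # Theta_2(v)
        G = float(p_v @ (th1 / th0))                           # G_r(s)
        dG = float(p_v @ ((th1 ** 2 - th0 * th2) / th0 ** 2))  # G'_r(s) <= 0
        if abs(G - L) < tol or dG == 0.0:
            break
        s -= (G - L) / dG                                      # Newton step
    return s
```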
Lemma 9. The choice of s^{(n)} by Definition 7 satisfies the convergence condition Δ_n(q_0) ≥ 0 for any optimal solution (q_0, r_0) of the problem (12).
Proof: By analyzing KKT conditions for the problem (12) when L ∈ (L min , L Max ), there are two main cases.
1) For L ≤ L_max, s* ≥ 0 and L* = L. We have Δ_n(q_0) = (s^{(n)} − s*)(G_{r^{(n−1)}}(s^{(n)}) − L). For case i), s^{(n)} = 0 ≤ s* and G_{r^{(n−1)}}(s^{(n)}) = G_{r^{(n−1)}}(0) ≤ L, so Δ_n(q_0) ≥ 0. For case ii), G_{r^{(n−1)}}(s^{(n)}) = L and hence Δ_n(q_0) = 0.
2) For L > L_max, s* = 0 and L* ≤ L. For case i), s^{(n)} = 0 = s* and hence Δ_n(q_0) = 0. For case ii), s^{(n)} ≥ 0 = s* and G_{r^{(n−1)}}(s^{(n)}) = L ≥ L*, so Δ_n(q_0) ≥ 0.
Also, it is easy to verify that (52) is still applicable, since s^{(n)} is a good approximation of s* whether or not the latter is 0.
For L ≥ L Max , the loss constraint (3c) can be directly neglected, while for L = L min it is absorbed if E is replaced by a subset E * ⊆ E (we assume E has been reduced and E * = E in this case). For both cases, there is no need to introduce the dual variable s. The partial minimization process for q is replaced by the following and that for r remains,
q*_0(r)(u|v) ≜ [∏_{w′} r(u|w′)^{p(w′|u,v)}] / [Σ_{u′∈E_v} ∏_{w′} r(u′|w′)^{p(w′|u′,v)}].   (54)
The overall algorithm is summarized as Algorithm 2.
Theorem 10. The solutions (q^{(n)}, r^{(n)}) generated by Algorithm 2 converge to an optimal solution (q^{(0)}, r^{(0)}). Furthermore, for L = L_min and L ≥ L_Max,
GD_E(q^{(n+1)}||r^{(n)}) − GD_E(q_0||r_0) = O(1/n).   (55)

Algorithm 2 Improved Alternating Minimization Algorithm
Input: Loss matrix l̄(v, u), distributions p(v), p(w|u, v), maximum iteration number max_iter, loss constraint L.
Output: Optimal value and an optimal solution for (3).
1: if L = L_min or L ≥ L_Max then
2: Initialize q^{(1)}(u|v) = 1(u ∈ E_v)/|E_v|.
3: for n = 1 : max_iter do
4: r^{(n)} = r*(q^{(n)}).
5: q^{(n+1)} = q*_0(r^{(n)}).
6: end for
7: else
8: Initialize q^{(1)}(u|v) = 1(u ∈ E_v)/|E_v|.
9: for n = 1 : max_iter do
10: r^{(n)} = r*(q^{(n)}).
11: Solve s^{(n+1)} by Definition 7.
12: q^{(n+1)} = q*_{s^{(n+1)}}(r^{(n)}).
13: end for
14: end if
15: return GD_E(q^{(n+1)}||r^{(n)}) and (q^{(n+1)}, r^{(n)}).

For the lossy computing problem with decoder side information and zero distortion, Algorithm 2 for L = L_min can be specialized to the algorithm for the conditional graph entropy discussed in [26]. However, for the general problem (3) we propose, or even for the channel capacity problem with minimum input cost, we have not seen existing algorithms.
C. An Alternative Update Strategy of the Dual Variable for the Rate-Distortion Function
Since p(w|u, v) = p(w|v) in the rate-distortion problem, the partial minimization process for r can be simplified to
r*(q)(u|w) = Σ_{v∈E_u} q(u|v) p(v|w).   (56)
Furthermore, by Proposition 1, we only need to compute the target-loss (rate-distortion) function for L_min < L < L_max, where L_max is easy to compute from (33). The loss constraint in the problem becomes an equality, and the strategy for the choice of s^{(n)} gets easier and has better theoretical properties, as discussed at the end of Section VII-A and summarized as the following Strategy 2.
Definition 8 (Strategy 2). Solve G r (n−1) (s) = L for the solution s (n) .
The specialized version of Algorithm 1 with Strategy 2 is Algorithm 3. The convergence result Theorem 11 is a direct corollary of Corollary 4.
Algorithm 3 Improved Alternating Minimization Algorithm for Rate-Distortion
Input: Loss matrix l̄(v, u), distributions p(v), p(w|u, v), maximum iteration number max_iter, loss constraint L.
Output: Optimal value and an optimal solution for (3).
1: Initialize q^{(1)}(u|v) = 1(u ∈ E_v)/|E_v|.
2: for n = 1 : max_iter do
3: r^{(n)} = r*(q^{(n)}).
4: Solve s^{(n+1)} by Definition 8.
5: q^{(n+1)} = q*_{s^{(n+1)}}(r^{(n)}).
6: end for
7: return GD_E(q^{(n+1)}||r^{(n)}) and (q^{(n+1)}, r^{(n)}).
Theorem 11. The solutions (q^{(n+1)}, r^{(n)}) generated by Algorithm 3 converge to an optimal solution (q_0, r_0) and
GD_E(q^{(n+1)}, r^{(n)}) − GD_E(q_0, r_0) = O(1/n).   (57)
For the computation of classical rate-distortion functions, Algorithm 3 subsumes those in [22] as special cases. Algorithm 3 is simpler and has better theoretical properties; however, Algorithm 2 performs almost as well as Algorithm 3 and has the extra benefit that it preserves the nonnegativity of s.
D. Deflation Techniques
The case when |U| is far larger than |V| and |W| is common, since the problems induced by (1) and (2) have |U| = |Ẑ|^{|S_2|} and |X|^{|S_1|} in general. The difficulty is that a large |U| increases the computational cost of each iteration.
However, the existence of sparse solutions in the following sense helps to reduce the cost.

Lemma 10. For the problem (3) with E = V × U, there exists some optimal solution q(u|v) such that there are at most |V| + |W| values of u with p(u) > 0.

Remark 6. For the problem (3) induced by (1) and (2), the bounds can be further tightened to |S_1| + 1 and min{|X| · |S_1| + 1, |Y| + |S_1|} by similar arguments, respectively [25].
The proof of Lemma 10 can be found in Appendix F.
We want to solve such a sparse solution of (3). The intuition is as follows. The process of BA type and our algorithms can be regarded as the feature enhancement process with each u being a feature. We start with equal opportunities for each u. The probabilities of features u with smaller distortions are enhanced during the iterations.
The probabilities of features with large distortions will converge to 0. But if we find them decreasing very quickly after several iterations (for instance, p (n) (u) < 10 −2 /|U|), we can delete them from the alphabet U in advance.
Algorithm 4 Alternating Minimization Algorithm with Deflation
Input: Loss matrix l̄(v, u), distributions p(v), p(w|u, v), maximum iteration number max_iter, deflation period k.
Output: Optimal value and an optimal solution.
1: Initialize q^{(1)}(u|v) = 1(u ∈ E_v)/|E_v|.
2: for n = 1 : max_iter do
3: r^{(n)} = r*(q^{(n)}).
4: Update s^{(n+1)} with one of the strategies discussed.
5: q^{(n+1)} = q*_{s^{(n+1)}}(r^{(n)}).
6: if n mod k == k − 1 then
7: Do the deflation for q.
8: end if
9: end for
10: return GD_E(q^{(n+1)}||r^{(n)}) and (q^{(n+1)}, r^{(n)}).
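The deflation step itself amounts to dropping the columns u whose marginal p^{(n)}(u) has fallen below the threshold and renormalizing; a minimal sketch (our own code, with the heuristic threshold mentioned above) is:

```python
import numpy as np

def deflate(q, loss, E_mask, p_v, thresh_factor=1e-2):
    """Remove u with p(u) < thresh_factor / |U|; other u-indexed arrays
    (e.g., p(w|u, v)) must be sliced with the same mask."""
    p_u = p_v @ q                                   # p(u) = sum_v p(v) q(u|v)
    keep = p_u >= thresh_factor / q.shape[1]
    q = q[:, keep]
    q = q / q.sum(axis=1, keepdims=True)            # renormalize over the kept u
    return q, loss[:, keep], E_mask[:, keep], keep
```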
As will be shown in experiments, the algorithm with heuristic deflation techniques greatly saves computational time and can handle problems with larger size.
E. Computational Complexity Analysis and Comparisons
The computational complexity of each outer iteration in these algorithms is proportional to |U||V||W|, since Newton's method can find the root in a few inner iterations. In practice, the convergence is linear most of the time, and choosing the maximum iteration number as a constant of moderate size is sufficient to guarantee the accuracy.
Considering V and W are always fixed by the problem, the main objective is to reduce |U|. When the two-sided information has a common part, for instance S_1 = (S_0, S′_1) and S_2 = (S_0, S′_2), we have |U| = |Ẑ|^{|S′_2|} ≪ |Ẑ|^{|S_2|} or |U| = |X|^{|S′_1|} ≪ |X|^{|S_1|}. For the minimum loss case, the effects are similar.
It is also noteworthy that the blocked dense structure of p(w|u, v) can be exploited to further reduce the cost of each iteration. For example consider S 1 = (S 0 , S ′ 1 ) and S 2 = (S 0 , S ′ 2 ). By Definition 6, the cost of each iteration can be reduced by a factor |S 0 |.
For the general cases, the exponential size of |U| is annoying but unavoidable. Take the computation of rate-distortion functions as an example. For our method, |U| = |Ẑ|^{|S_2|}. For the method [19] starting from (1), |U| = |S_1| + 1. However, an additional factor of at most |U|^{|S_1||S_2|} is introduced to traverse g therein. In this sense, our approach greatly saves computational time at the expense of a relatively admissible memory cost, since |Ẑ|^{|S_2|} ≤ (|S_1||S_2|)^{|S_2|} ≪ (|S_1| + 1)^{|S_1||S_2|} in general.
Furthermore, with the aid of deflation techniques, |U| decreases rather quickly, as is verified by the numerical experiments in Section VIII. At first |U| decreases exponentially and then the decrease gradually slows down. Finally |U| is comparable to |V| and |W|, so that hundreds of additional iterations can be applied to achieve sufficiently high accuracy. Even so, most of the computational cost is spent on the first few iterations. In a word, our graph-based characterizations make deflation techniques rather efficient.
VIII. NUMERICAL RESULTS AND DISCUSSIONS
A. Numerical Results for Classical Problems
This section is devoted to verifying the performance of the algorithm by numerically computing several examples.
All the experiments are conducted on a PC with 16G RAM, and one Intel(R) Core(TM) i7-7500U CPU @2.70GHz.
First consider two set of examples. One is the online card game discussed in Section III-D of [24], and the other is Example 7.3 about memory with stuck-at faults in [25]. The analytical results (except trivial cases) of the rate-distortion function and the capacity can be found in [24] and [25], respectively.
For the first one, let X = Y = {1, 2, 3}, p(i, j) = (1/6) · 1(i ≠ j), i, j = 1, 2, 3 and f(x, y) = 1{x > y}. Also, Z = Ẑ = {0, 1} and the distortion measure d is set to be the Hamming distortion. The cases considered are summarized in Table I. For the second one, X = Y = {0, 1}. When the channel state S is 1 and 2, the output Y is always 0 and 1 respectively, independent of the input X. When S = 3, no fault is caused and Y = X. The probabilities of these states are p/2, p/2 and 1 − p, respectively. We compute the capacity parameterized by the error probability p for the different cases in Table II. In Fig. 3, we plot the curves of the rate-distortion function and the capacity with respect to the parameter p given by the analytical results (superscript A) as well as the points computed by our algorithms (superscript C). We observe that the points computed by the algorithm almost exactly lie on the analytical curves, which shows the accuracy of our algorithm. Considering that the number of iterations for each point is relatively limited (150 steps), the efficiency of our algorithm is also illustrated.
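For instance, the stuck-at channel of the second example can be specified directly; the encoding of the three states below is our own illustrative code:

```python
import numpy as np

def stuck_at_channel(p_err):
    """Memory with stuck-at faults: state 1 sticks the output at 0, state 2
    sticks it at 1, and state 3 is noiseless; the state probabilities are
    p/2, p/2 and 1 - p."""
    p_s = np.array([p_err / 2, p_err / 2, 1 - p_err])
    p_y_xs = np.zeros((2, 2, 3))                 # indexed (y, x, s)
    p_y_xs[0, :, 0] = 1.0                        # stuck at y = 0
    p_y_xs[1, :, 1] = 1.0                        # stuck at y = 1
    p_y_xs[0, 0, 2] = p_y_xs[1, 1, 2] = 1.0      # y = x when no fault occurs
    return p_s, p_y_xs

p_s, p_y_xs = stuck_at_channel(0.2)
# With S known at the encoder (case C_2), the analytical capacity is 1 - p = 0.8.
```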
B. Applications to Problems without Exact Solutions
Consider the rate-distortion and capacity-cost functions for some complex scenarios. In these cases, analytical solutions have not been found and our algorithm plays an important role. Compared with some previous work [19], our algorithms can be applied to practical problems with larger scale.

1) Rate-distortion functions for another two lossy computing problems: Let S_1 = {1, 2, ..., 6}, S_2 = {1, 2, 3, 4}, p(i, j) = 1/24 for all i, j. We take the most common sum function f(s_1, s_2) = s_1 + s_2 for the first example and a general nonlinear function f(s_1, s_2) = s_1 s_2 − s_2 + 5 for the second example. We set Z = Ẑ = {2, 3, ..., 10} for the first one and Z = Ẑ = {5, ..., 25} for the second one. In both examples, the distortion measure d is set to be the quadratic distortion. Our algorithm computes the rate-distortion functions for 50 different values of D and reconstructs the curves. The rate-distortion curves for both examples are depicted in Fig. 4. It is also rather convenient for our algorithm to adjust the computational granularity of any part of the curves.

2) Capacity-cost function for the Gaussian additive channel with quantized state information: We consider the channel
Y = X + S + Z,(58)
where the channel state S ∼ N(0, 1/2) and the noise Z ∼ N(0, 1) are independent. S can also be used to model the channel input of the other terminal in a Gaussian MAC. In this case, one of the encoders optimizes its input distribution in order to maximize its capacity, on the condition that the input distribution of the other is fixed.

A more practical situation is that S is measured with a given degree of accuracy, so that a quantized version of S instead of S itself is known by the encoder. More formally, let
Q_4(s) = sgn(s)(1.5 · 1{|s| > 1} + 0.5 · 1{|s| ≤ 1})   (59)
be the quantization function and S_1 = Q_4(S/√(1/2)) the two-bit quantized state information. We perform uniform quantization of X and Y over the intervals [−4, 4] and [−8, 8], respectively. Also, |X| = 2^b and |Y| = 2^{b+1}, which means there are b bits to represent the input X and b + 1 bits to express the output Y. The transition probability Y | (X_{s_1})_{s_1∈S_1}, S_1 is computed by the 5-point closed Newton–Cotes quadrature rule applied to the probability density function, since the density of S + Z can be expressed through the Q-function.
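A sketch of the quantizer (59) and the uniform input/output grids (our own code; the normalization of S by its standard deviation √(1/2) is our reading of the setup):

```python
import numpy as np

def Q4(s):
    """Two-bit quantizer (59): values in {-1.5, -0.5, 0.5, 1.5}."""
    return np.sign(s) * np.where(np.abs(s) > 1, 1.5, 0.5)

def uniform_grid(lo, hi, bits):
    """Cell centers of a 2^bits-level uniform quantizer on [lo, hi]."""
    edges = np.linspace(lo, hi, 2 ** bits + 1)
    return 0.5 * (edges[:-1] + edges[1:])

b = 3
X = uniform_grid(-4.0, 4.0, b)        # |X| = 2^b input levels
Y = uniform_grid(-8.0, 8.0, b + 1)    # |Y| = 2^(b+1) output levels
S = np.random.normal(0.0, np.sqrt(0.5), size=5)
S1 = Q4(S / np.sqrt(0.5))             # two-bit quantized state information
```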
The capacity-cost function for b = 3, 4 is plotted in Fig. 5. Note that the writing on dirty paper scheme gives a capacity (1/2) log(1 + B) when S is fully known to the encoder. Thus it gives an upper bound for the capacity with the quantized side information computed here.
C. The Speed-up Effects of Deflation Techniques
The computational time and the differences of the computed optimal rates before and after applying deflation techniques are summarized in Table III. The deflation period k is set to be 5 and the criterion is set to be p^{(n)}(u) < 10^{−2}/|U|. We find that the additional deflation techniques only incur a small difference in the computed results. Since the original algorithm is bound to converge, so is the accelerated one. The speed-up ratio becomes larger as the size of the problems increases. This makes the computational time admissible in some cases, as shown by the last rows of Table III. It is not surprising that the speed-up effect is more significant for lossy computing problems than for channel coding problems with state, since the cardinality bounds for the former are tighter.
The typical trend of time and error during iterations is described by two lines of Table IV. The first case is to compute R(5.0) for the nonlinear function computation problem in Section VIII-B1, and the second case is to compute C(1.5) for the channel with quantized state information in Section VIII-B2, where we set b = 4. The true value is estimated through 1600000 iterations, until the computed result converges to a sufficiently high accuracy. Again, the time is averaged over 50 experiments. Note that the time for 0 iterations is purely used for initialization. The numerical experiments verify our discussions in Section VII-E, since each iteration becomes rather cheap after the first few dozen. If we want to compute the rate to a high accuracy, thousands of iterations can be applied and the computational cost is very low. Such a strategy preserves the accuracy because the computed results after deflation closely follow those before. The cost would be almost unaffordable if not aided by deflation techniques. These results demonstrate that the deflation techniques largely reduce computational time without loss of accuracy.
IX. CONCLUSION
In this paper, we systematically studied the lossy computing and channel coding problems through a unified graphbased approach. Interesting reductions for some special cases were derived by our graph-based characterizations.
Also, we developed efficient algorithms for the computation of rate-distortion and capacity-cost functions with a given distortion or cost. The algorithms were proved to be convergent and were accelerated by heuristic deflation techniques. Applying graph methods and algorithms to more practical problems would be one direction for future work.
APPENDIX A PROOF OF LEMMA 3 AND 4
Proof of Lemma 3: First we show (12) is a convex optimization problem. We only need to verify the convexity of the objective function GD E (q||r).
Let (q 1 , r 1 ) and (q 2 , r 2 ) be two feasible solutions and 0 < α < 1. By the log-sum inequality, we have
[(1 − α)q_1(u|v) + αq_2(u|v)] log {[(1 − α)q_1(u|v) + αq_2(u|v)] / [(1 − α)r_1(u|w) + αr_2(u|w)]}
≤ (1 − α)q_1(u|v) log [q_1(u|v)/r_1(u|w)] + αq_2(u|v) log [q_2(u|v)/r_2(u|w)].
Multiplying both sides by p(v)p(w|u, v) and taking the sum over all (u, v) ∈ E and w ∈ W, we have
GD_E((1 − α)q_1 + αq_2 || (1 − α)r_1 + αr_2) ≤ (1 − α)GD_E(q_1||r_1) + αGD_E(q_2||r_2).
Next we show (3) is convex and we only need to demonstrate its objective function O(p) is convex. From Lemma 2,
O(q) = GD_E(q||q) = min_r GD_E(q||r).   (60)
We have shown GD E (q||r) is convex in (q, r). O(q) is the minimum of GD E (q||r) with respect to r, so it is also convex.
Proof of Lemma 4: Consider the problem (3). Since the feasible region expands as L increases, the optimal value T(L) is decreasing in L.
Let q 1 and q 2 be two feasible solutions of (3) for L 1 and L 2 , respectively. Let α ∈ (0, 1). Since the constraints are only linear, (1 − α)q 1 + αq 2 is a feasible solution for (1 − α)L 1 + αL 2 . Also, by Lemma 3 and the definition of T (L),
T((1 − α)L_1 + αL_2) ≤ O((1 − α)q_1 + αq_2) ≤ (1 − α)O(q_1) + αO(q_2).
Since the choice of q_1 and q_2 is arbitrary, we then have
T((1 − α)L_1 + αL_2) ≤ (1 − α)T(L_1) + αT(L_2),
which gives the convexity of T (L).
APPENDIX B
PROOF OF PROPOSITION 1
Let
L′_max = min_{u∈U} Σ_v p(v) l̄(v, u).
For D ≥ L′_max, let u_0 be one of the elements in U such that
Σ_v p(v) l̄(v, u_0) = min_{u′∈U} Σ_v p(v) l̄(v, u′).
Then it is easy to see that p(u|s_1) = 1(u = u_0) is a feasible solution such that the value of the objective function is 0. Also, by monotonicity R(D) = 0 for D ≥ L′_max. Now assume that R(D) = 0. Since the feasible region is always compact, there exists some p(u|v) such that I(U; V|W) = I(U; V) − I(U; W) = 0. Then U and V are conditionally independent given W. Since U − V − W is a Markov chain, p(u|v) = p(u) for all v. So we have
E[l(V, U, W)] = Σ_u p(u) Σ_{v,w} p(v, w) l(v, u, w) ≥ Σ_u p(u) min_{u′∈U} Σ_v p(v) l̄(v, u′) = min_{u′∈U} Σ_v p(v) l̄(v, u′) = L′_max.
Thus we have R(D) = 0 if and only if D ≥ L′_max. By the definition of L_max, L_max = L′_max.

APPENDIX C
PROOF OF LEMMA 6

We first show ω_p′ is a feasible weight. For any v ∈ V_k,
Σ_{u′∈E′_v} p′(u′|v) = Σ_{u′∈E′_v} Σ_{u∈E_v: h_k(u)=u′} p(u|v) = Σ_{u∈E_v} p(u|v) = 1.
By (24), we always have l̄(v, u′) ≤ l̄(v, u) if v ∈ V_k, (v, u′) ∈ E′ and h_k(u) = u′, so
Loss(p′) = Σ_{k, v∈V_k, (v,u′)∈E′} p(v) p′(u′|v) l̄(v, u′) ≤ Σ_{k, v∈V_k, (v,u′)∈E′} Σ_{u∈E_v: h_k(u)=u′} p(v) p(u|v) l̄(v, u) = Loss(p),
where the last equality is by (23) and (26).

It remains to show such an operation does not increase O(p). For any w ∈ W_k, by (23), (25) and (26) we have
p′(u′, w) = Σ_{v∈V_k, (v,u′)∈E′} p(v) p′(u′|v) p(w|u′, v) = Σ_{v∈V_k, (v,u′)∈E′} Σ_{u∈E_v: h_k(u)=u′} p(v) p(u|v) p(w|u, v) = Σ_{u: h_k(u)=u′} Σ_{v: (v,u)∈E} p(v) p(u|v) p(w|u, v) = Σ_{u: h_k(u)=u′} p(u, w),
which implies p′(u′|w) = Σ_{u: h_k(u)=u′} p(u|w). Then by (23), (25) and (26) we have
O(p′) − O(p) = Σ_{k, v∈V_k, w∈W_k, (v,u′)∈E′} Σ_{u: h_k(u)=u′} p(v) p(u|v) p(w|u′, v) log {[p(u|w) p′(u′|v)] / [p(u|v) p′(u′|w)]}
≤ Σ_{k, v∈V_k, w∈W_k, (v,u′)∈E′} Σ_{u: h_k(u)=u′} p(v) p(u|v) p(w|u′, v) {[p(u|w) p′(u′|v)] / [p(u|v) p′(u′|w)] − 1}
= Σ_{k, v∈V_k, w∈W_k, (v,u′)∈E′} p(v) p′(u′|w) p(w|u′, v) − 1 = 0.

APPENDIX D
PROOF OF THEOREM 9

The following lemma characterizes the decreasing value of the partial minimization processes. The proof is by direct calculation.
Lemma 11. We have the following identities for q with supp(q) ⊆ E, where the generalized K-L divergences for q(u|v) and r(u|w) are naturally defined as
GD_E(q_1||q_2) = Σ_{(v,u)∈E} p(v) q_1(u|v) log [q_1(u|v)/q_2(u|v)]
and
GD_q(r_1||r_2) = Σ_{u,w} q(w) r_1(u|w) log [r_1(u|w)/r_2(u|w)],
where q(w) ≜ Σ_{v,u} p(v) q(u|v) p(w|u, v).
F_s(q, r) = F_s(q*_s(r), r) + GD_E(q||q*_s(r)),   (61)
F_s(q, r) = F_s(q, r*(q)) + GD_q(r*(q)||r).   (62)
Also we have
GD_E(q*_s(r)||r) = −s · Loss(q*_s(r)) − Σ_v p(v) log Σ_{u′∈E_v} e^{−s l̄(v,u′)} ∏_{w′} r(u′|w′)^{p(w′|u′,v)}.   (63)
Proof of Lemma 2: Immediately from (62).
The partial minimization for q depends on s; the next corollary describes the behavior of the penalty function with parameter s while doing the minimization for a different parameter s′.
Corollary 5.
F_s(q, r) = F_s(q*_{s′}(r), r) + GD_E(q||q*_{s′}(r)) + (s − s′)[Loss(q) − Loss(q*_{s′}(r))].   (64)
Proof: By the identity F s (q, r) = F s ′ (q, r) + (s − s ′ )Loss(q) and Lemma 11.
Corollary 6. The conditional distribution pair (q 0 , r 0 ) minimizing the penalty function F s * (q, r) exists and satisfies q * s * (r 0 ) = q 0 . Also, r * (q 0 )(·|w) = r 0 (·|w) for any w such that q 0 (w) > 0.
Proof: The domain of (q, r) is compact and F s * (q, r) is continuous, so there exists conditional distribution pair (q 0 , r 0 ) minimizing the penalty function F s * (q, r).
Then by the definition of (q 0 , r 0 ),
F s * (q 0 , r 0 ) ≤ F s * (q * s * (r 0 ), r 0 ), F s * (q 0 , r 0 ) ≤ F s * (q 0 , r * (q 0 )).
So by Lemma 11 we have GD E (q 0 ||q * s * (r 0 )) = 0 and GD(r * (q 0 )||r 0 ) = 0. By (4) we always have q * s * (r 0 ) = q 0 . Similarly, r * (q 0 )(·|w) = r 0 (·|w) for any w such that q 0 (w) > 0.
Proposition 2. During the iterations of Algorithm 1,
F_{s*}(q^{(n)}, r^{(n)}) ≤ F_{s*}(q^{(n)}, r^{(n−1)}).   (65)
If either s^{(n)} = s* for all n, or Loss(q^{(n)}) = Loss(q_0) for all n for some (q_0, r_0) ∈ arg min_{q,r} F_{s*}(q, r) (as for Algorithm 1 with Δ_n(q_0) = 0), then we have
F_{s*}(q^{(n+1)}, r^{(n)}) ≤ F_{s*}(q^{(n)}, r^{(n)}) ≤ F_{s*}(q^{(n)}, r^{(n−1)}).   (66)

Proof: By Lemma 11, we have
F_{s*}(q^{(n)}, r^{(n−1)}) = F_{s*}(q^{(n)}, r*(q^{(n)})) + GD_{q^{(n)}}(r*(q^{(n)})||r^{(n−1)}) = F_{s*}(q^{(n)}, r^{(n)}) + GD_{q^{(n)}}(r^{(n)}||r^{(n−1)}) ≥ F_{s*}(q^{(n)}, r^{(n)}).
Furthermore, if either s^{(n)} = s* for all n or Loss(q^{(n)}) = L* for all n holds, then since q*_{s^{(n)}}(r^{(n)}) = q^{(n+1)},
F_{s*}(q^{(n)}, r^{(n)}) = F_{s*}(q^{(n+1)}, r^{(n)}) + GD_E(q^{(n)}||q^{(n+1)}) + (s* − s^{(n)})(Loss(q^{(n)}) − Loss(q^{(n+1)})) = F_{s*}(q^{(n+1)}, r^{(n)}) + GD_E(q^{(n)}||q^{(n+1)}) ≥ F_{s*}(q^{(n+1)}, r^{(n)}).
Lemma 12. For some (q_0, r_0) ∈ arg min_{q,r} F_{s*}(q, r), define
Γ_n(q_0) ≜ GD_E(q_0||q^{(n)}) − GD_{q_0}(r_0||r^{(n)});
then Γ_n(q_0) ≥ 0. Also, we have
F_{s*}(q^{(n)}, r^{(n−1)}) − F_{s*}(q_0, r_0) + Γ_{n−1}(q_0) + Δ_n(q_0) = GD_E(q_0||q^{(n−1)}) − GD_E(q_0||q^{(n)}).   (67)

Proof: By the definition of Γ_n(q_0), r_0 and r^{(n)},
Γ_n(q_0) = −Σ_{v,u,w} p(v) q_0(u|v) p(w|u, v) log {[q^{(n)}(u|v) r_0(u|w)] / [q_0(u|v) r^{(n)}(u|w)]}
≥ −Σ_{v,u,w} p(v) q_0(u|v) p(w|u, v) {[q^{(n)}(u|v) r_0(u|w)] / [q_0(u|v) r^{(n)}(u|w)] − 1}
= −Σ_{u,w} [Σ_v p(v) p(w|u, v) q^{(n)}(u|v)] r_0(u|w) / r^{(n)}(u|w) + 1
= −Σ_{u,w} r_0(u|w) Σ_{u′,v′} p(v′) p(w|u′, v′) q^{(n)}(u′|v′) + 1 = 0.
Since Loss(q_0) = L* and Loss(q^{(n)}) = G_{r^{(n−1)}}(s^{(n)}), by Corollary 5 we have
F_{s*}(q_0, r^{(n−1)}) = F_{s*}(q^{(n)}, r^{(n−1)}) + GD_E(q_0||q^{(n)}) + (s^{(n)} − s*)(G_{r^{(n−1)}}(s^{(n)}) − Loss(q_0)).   (68)
Note that by Corollary 6, r_0 = r*(q_0). Then by Lemma 11 we have
F_{s*}(q_0, r^{(n−1)}) = F_{s*}(q_0, r_0) + GD_{q_0}(r_0||r^{(n−1)}).   (69)
By (68) and (69) we have finished the proof.
Proof of Theorem 9: Consider the first part and suppose (q_0, r_0) ∈ arg min_{q,r} F_{s*}(q, r). By Lemma 12, we take the sum and get
Σ_{k=n+1}^{m} [F_{s*}(q^{(k)}, r^{(k−1)}) − F_{s*}(q_0, r_0) + Γ_{k−1}(q_0) + Δ_k(q_0)] = GD_E(q_0||q^{(n)}) − GD_E(q_0||q^{(m)}),   (70)
for m > n ≥ 1. Take n = 1 and we can obtain
Σ_{k=2}^{m} [F_{s*}(q^{(k)}, r^{(k−1)}) − F_{s*}(q_0, r_0) + Γ_{k−1}(q_0) + Δ_k(q_0)] ≤ GD_E(q_0||q^{(1)}) ≤ log |U|,
which is because q^{(1)}(u|v) = 1/|U|. Let m → ∞; then
Σ_{k=2}^{∞} [F_{s*}(q^{(k)}, r^{(k−1)}) − F_{s*}(q_0, r_0)] ≤ log |U|.   (71)
Each term in the sum is nonnegative, so lim k→∞ F s * (q (k) , r (k−1) ) = F s * (q 0 , r 0 ).
Also, lim k→∞ F s * (q (k) , r (k) ) = F s * (q 0 , r 0 ) = min q,r F s * (q, r) by (65). It proves the first part.
Then we consider the second part. {q^{(n)}}_{n≥1} is a sequence in a compact set, and hence has a convergent subsequence, denoted by {q^{(n_k)}}_{k=1}^{∞}. Let q^{(0)} be its limit, and let r^{(0)} = r*(q^{(0)}). Then r^{(n_k)} = r*(q^{(n_k)}) and
lim_{k→∞} F_{s*}(q^{(n_k)}, r^{(n_k)}) = lim_{k→∞} F_{s*}(q^{(n_k)}, r*(q^{(n_k)})) = F_{s*}(q^{(0)}, r*(q^{(0)})) = F_{s*}(q^{(0)}, r^{(0)}),
which implies F_{s*}(q^{(0)}, r^{(0)}) = min_{q,r} F_{s*}(q, r). Also, we have Loss(q^{(0)}) = lim_{k→∞} Loss(q^{(n_k)}), and hence q^{(0)} satisfies the corresponding constraint in one of the three problems. So (q^{(0)}, r^{(0)}) is an optimal solution for the corresponding problem, which implies that (70) is still satisfied when (q_0, r_0) is replaced by (q^{(0)}, r^{(0)}).
By the version of (70) where (q 0 , r 0 ) is replaced by (q (0) , r (0) ), we have GD E (q (0) ||q (n) ) is decreasing. Since we have lim k→∞ q (n k ) = q (0) , then we have lim k→∞ GD E (q (0) ||q (n k ) ) = 0 and hence lim n→∞ GD E (q (0) ||q (n) ) = 0.
So q (n) → q (0) , which implies r (n) → r (0) as n → ∞. In other words, the solutions (q (n+1) , r (n) ) generated by Algorithm 1 converge to an optimal solution (q (0) , r (0) ) for the corresponding problem.
For the case when Δ_n(q_0) = 0, by (66) F_{s*}(q^{(n+1)}, r^{(n)}) − F_{s*}(q_0, r_0) is decreasing, and hence it is no greater than (log |U|)/n by (71). By (65) we finish the proof.
APPENDIX E PROOF OF THEOREM 10
The case for L ∈ (L min , L Max ) is contained in Theorem 9. It remains to consider the case for L = L min or L ≥ L Max and we only list the skeleton results which are slightly different from which in Section D.
Lemma 13. The following holds for q with supp(q) ⊆ E:
GD_E(q||r) = GD_E(q*_0(r)||r) + GD_E(q||q*_0(r)),   (72)
GD_E(q||r) = GD_E(q||r*(q)) + GD_q(r*(q)||r).   (73)
Proposition 3. During the iterations of Algorithm 2 for L = L_min or L ≥ L_Max,
GD_E(q^{(n+1)}, r^{(n)}) ≤ GD_E(q^{(n)}, r^{(n)}) ≤ GD_E(q^{(n)}, r^{(n−1)}).   (74)
Lemma 14. Let (q_0, r_0) ∈ arg min_{q,r} GD_E(q||r) and define
Γ̃_n(q_0) ≜ GD_E(q_0||q^{(n)}) − GD_{q_0}(r_0||r^{(n)});
then Γ̃_n(q_0) ≥ 0. Also, we have
GD_E(q^{(n)}||r^{(n−1)}) − GD_E(q_0||r_0) + Γ̃_{n−1}(q_0) = GD_E(q_0||q^{(n−1)}) − GD_E(q_0||q^{(n)}).   (75)
Proof of Theorem 10: Note that the choice of initial value guarantees that GD E (q 0 ||q (1) ) ≤ ∞, so the proof is by similar arguments as for Theorem 9.
APPENDIX F PROOF OF LEMMA 10
Let U be a feasible random variable with distribution p(u) for (3). With fixed p(v, w|u), we construct another feasible random variable U′ with distribution p̃(u) satisfying the property above. Note that
H(W|U) − H(V|U) = Σ_u p(u)(H(W|U = u) − H(V|U = u)).
If we want to fix H(W|U) − H(V|U), E[l(V, U, W)], p(v) for all v and p(w) for all w, we only need to impose 1 + 1 + (|V| − 1) + (|W| − 1) = |V| + |W| equations. By the Fenchel–Eggleston–Carathéodory theorem, there exists some U′ such that there are at most |V| + |W| values of u which satisfy p̃(u) > 0. Also, while p(u) is replaced by p̃(u), p(w), p(w|u, v) and p(v) are fixed. Hence
I(U; V) − I(U; W) = H(V) − H(W) − H(V|U) + H(W|U)
is also fixed.
Fig. 1. Lossy computing with side information.
Fig. 3. Analytical and numerical results for the first (left) and second (right) examples in Section VIII-A. In each case, we compute the optimal rate (capacity) with 150 iterations for each point.
Fig. 4. Numerical results for the first (left) and second (right) examples in Section VIII-B1. In each case, we choose 50 points from the intervals uniformly and compute the corresponding optimal rate with 1000 iterations.
Fig. 5. Numerical results for the channel problem with quantized side information in Section VIII-B2. Capacity-cost curves for two quantization schemes with different granularity are plotted. In each case, 50 points are chosen from the intervals uniformly and the corresponding capacity is computed with 1000 iterations. The capacity-cost curve of the writing on dirty paper scheme is also plotted as an upper bound.
TABLE I
THE RATE-DISTORTION FUNCTION FOR THE ONLINE CARD GAME

R-D function | S_1 | S_2 | Analytical result
R_1(D) | X | Y | (2/3) 1{D ≤ 1/6} (H((1+6D)/4) − H(3D))
R_2(D) | (X, Y) | ∅ | 1{D ≤ 1/2} (1 − H(D))
R_3(D) | (X, Y) | Y | (1/3) 1{D ≤ 1/6} (1 − H(3D))

TABLE II
THE CAPACITY FOR THE MEMORY WITH STUCK-AT FAULTS

Capacity | S_1 | S_2 | Analytical result
C_1(p) | ∅ | ∅ | 1 − H(p/2)
C_2(p) | S | ∅ | 1 − p
C_3(p) | ∅ | S | 1 − p
C_4(p) | S | S | 1 − p
TABLE III
THE COMPUTATIONAL TIME AND ERROR BEFORE AND AFTER ADDING DEFLATION TECHNIQUES FOR SOME EXAMPLES

Example | (L, T(L)) | Time (s) before | Time (s) after | Speed-up ratio | Speed-up penalty
Sum computation | (0.5, 1.1503) | 8.074 | 0.092 | 87.77 | 1.60e−16
Sum computation | (2.5, 0.1107) | 4.912 | 0.125 | 39.39 | 1.44e−15
Nonlinear function computation | (0.5, 2.1827) | 481.1 | 1.754 | 274.3 | 8.88e−16
Nonlinear function computation | (5.0, 0.9295) | 175.2 | 1.820 | 96.26 | 9.97e−9
Nonlinear function computation | (20.0, 0.0652) | 172.5 | 3.948 | 43.69 | 1.11e−8
Gaussian channel with quantized state information, |X| = 8, |Y| = 16 | (1.5, 0.5615) | 8.472 | 0.389 | 21.78 | 3.32e−10
Gaussian channel with quantized state information, |X| = 8, |Y| = 16 | (5.0, 1.1414) | 8.397 | 0.333 | 25.22 | 2.96e−13
Gaussian channel with quantized state information, |X| = 16, |Y| = 32 | (1.5, 0.6237) | 285.0 | 6.889 | 41.37 | 3.16e−9
Gaussian channel with quantized state information, |X| = 16, |Y| = 32 | (5.0, 1.2066) | 279.9 | 6.509 | 43.00 | 2.43e−9
Gaussian channel with quantized state information, |X| = 32, |Y| = 64 | (1.5, 0.6321) | − | 174.6 | − | −
Gaussian channel with quantized state information, |X| = 32, |Y| = 64 | (5.0, 1.2238) | − | 162.2 | − | −

The number of iterations is 1000 for the first two examples in Section VIII-B1 and 2000 for the last example for b = 3, 4, 5 in Section VIII-B2 to ensure enough accuracy. To eliminate the effect of noise, the listed time is averaged over 50 experiments.
TABLE IV
THE TIME AND ERROR OF DIFFERENT NUMBERS OF ITERATIONS WITH DEFLATION

Example | | 0 | 30 | 60 | 125 | 250 | 500 | 1000 | 2000
Nonlinear function | Time (s) | 0.184 | 1.437 | 1.537 | 1.654 | 1.728 | 1.761 | 1.820 | 1.907
Nonlinear function | Error | − | 6.42e−2 | 3.29e−2 | 1.35e−2 | 3.90e−3 | 9.22e−4 | 2.75e−4 | 6.61e−5
Gaussian channel | Time (s) | 1.865 | 4.979 | 5.679 | 6.184 | 6.543 | 6.612 | 6.719 | 6.889
Gaussian channel | Error | − | 8.28e−2 | 3.53e−2 | 1.28e−2 | 3.89e−3 | 9.49e−4 | 1.84e−4 | 3.12e−5
Picard and Brauer groups of K(n)-local spectra via profinite Galois descent

Itamar Mor

June 9, 2023

https://export.arxiv.org/pdf/2306.05393v1.pdf
Using the proétale site, we construct models for the continuous actions of the Morava stabiliser group on Morava E-theory, its ∞-category of K(n)-local modules, and its Picard spectrum. For the two sheaves of spectra, we evaluate the resulting descent spectral sequences: these can be thought of as homotopy fixed point spectral sequences for the profinite Galois extension L K(n) S → En. We show that the descent spectral sequence for the Morava E-theory sheaf is the K(n)-local En-Adams spectral sequence. The spectral sequence for the sheaf of Picard spectra is closely related to one recently defined by Heard; our formalism allows us to compare many differentials with those in the K(n)-local En-Adams spectral sequence, and isolate the exotic Picard elements in the 0-stem. In particular, we show how this recovers the computation due to Hopkins, Mahowald and Sadofsky of the group Pic1 at all primes. We also use these methods to bound the Brauer group of K(n)-local spectra, and compute this bound at height one.
Introduction
In [HMS94], Hopkins, Mahowald and Sadofsky study the Picard group of a symmetric monoidal category: by definition, this is the group of isomorphism classes of invertible objects with respect to the monoidal product. This is a notion that goes back much further, and gives a useful invariant of a ring or scheme. Its particular relevance to homotopy theory comes from the observation that if the category C is a Brown category (for example, C might be the homotopy category of a compactly-generated stable ∞-category), then the representability theorem applies and shows that the Picard group Pic(C) classifies homological automorphisms of C, each of these being of the form T ⊗ (−) for some invertible object. The objective of op. cit. is to develop techniques for studying Picard groups in some examples coming from chromatic homotopy: the main theorem is the computation of the Picard group of K(1)-local spectra at all primes, where K(1) is Morava K-theory at height one.
The aim of this project is to give a new proof of these computations using Galois descent, inspired by the formalism developed in [MS16]. We will write Pic n for the Picard group of K(n)-local spectra. There are still many open questions regarding these groups: for example, it is even unknown if they are finitely generated as modules over Z p . The question of computing Pic 2 has been studied by many authors (for example [KS04;Kar10;Goe+15]); using recent work at the prime 2 [Bea+22a], our results give a new potential approach to the computation of Pic 2 in [Bea+22b]. Following [GL21], we also extend these techniques to the Brauer group of Sp K(n) , giving a cohomological method to approach these. This allows us to bound the size of the group of K(1)-local Azumaya algebras trivialised over E 1 , at all primes.
The notion of Galois descent in algebra is very classical, and says that if A → B is a Galois extension of rings, then Mod A can be recovered as the category of descent data in Mod B : in particular, invertible A-modules can be recovered as invertible B-modules M equipped with isomorphisms ψ g : M ∼ = g * M for each g ∈ Gal(B/A), subject to a cocycle condition. This gives a useful way to compute Picard groups and other invariants. One can try to play the same game in higher algebra, and use descent techniques to get a handle on the groups Pic n . Fundamental to this approach is the notion of a Galois extension of commutative ring spectra, as set down in [Rog08]: this is a direct generalisation of the classical axioms in [AG60]. Given a finite G-Galois extension A → B which is faithful, the analogous descent statement (due to [Mei12;GL21]) is that the canonical functor
Mod A → (Mod B ) hG := lim ( Mod B ⇉ ∏ G Mod B · · · )    (1)
is an equivalence of symmetric monoidal ∞-categories. Taking the Picard spectrum of a symmetric monoidal ∞-category preserves (homotopy) limits, and therefore any such Galois extension gives rise to an equivalence pic(Mod A ) ≃ τ ≥0 (pic(Mod B ) hG ). In particular one gets a homotopy fixed points spectral sequence (hereafter HFPSS), whose 0-stem converges to Pic(Mod A ). This technique has proved very fruitful in Picard group computations: for example, the Picard groups of KO and T mf are computed in [MS16], and the Picard group of the higher real K-theories EO(n) in [HMS17]. In each case the starting point is a theorem of Baker and Richter [BR05], which says that the Picard group of an even-periodic ring spectrum E with π 0 E regular Noetherian is a Z/2-extension of the Picard group of the ring π 0 E. One can for example study the action of the Morava stabiliser group G n on Morava E-theory; since π 0 E n is regular Noetherian local, the theorem gives Pic(Mod En (Sp K(n) )) ≅ Z/2, generated by ΣE n .

In order to leverage this to compute the groups Pic n it is necessary to understand not just the finite Galois descent technology mentioned above, but how this assembles over the entire system of extensions. Our main theorem is the following descent result for Picard groups:
Theorem A. The unit L K(n) S → E n induces an equivalence of spectra
pic(Sp K(n) ) ≃ τ ≥0 pic(E n ) hGn ,    (2)
where the right-hand side denotes continuous homotopy fixed points. The resulting spectral sequence takes the form

E s,t 2 = H s cont (G n , π t pic(E n )) =⇒ π t−s pic(Sp K(n) ).    (3)
In an explicit range, it agrees with the K(n)-local E n -Adams spectral sequence, including differentials.
Here pic(E n ) denotes the Picard spectrum of K(n)-local E n -modules. One of the major tasks is to properly interpret the right-hand side of (2), in order to take into account the profinite topology on the Morava stabiliser group; to do so, we make use of the proétale (or condensed/pyknotic) formalism of [BS14; Sch; BH]. We elaborate on our approach later in the introduction, but will first mention some consequences.
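To fix ideas, here is the shape of such a spectral sequence in the simplest finite case, the C 2 -Galois extension KO → KU treated in [MS16]; the numerical inputs are recalled from op. cit. purely as an illustration of the formalism:

\[ E_2^{s,t} = H^s\big(C_2, \pi_t \,\mathrm{pic}(KU)\big) \Longrightarrow \pi_{t-s}\,\mathrm{pic}(KO), \]

with $\pi_0 \mathrm{pic}(KU) = \mathbf{Z}/2$, $\pi_1 \mathrm{pic}(KU) = (\pi_0 KU)^\times = \{\pm 1\}$, and $\pi_t \mathrm{pic}(KU) \cong \pi_{t-1} KU$ for $t \geq 2$; the 0-stem converges to $\mathrm{Pic}(KO) \cong \mathbf{Z}/8$.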
Picard group computations

As a first corollary, we show how to recover the computation of Pic 1 :
(i) at odd primes, Pic 1 = Z p × Z/2(p − 1);
(ii) when p = 2, Pic 1 = Z 2 × Z/2 × Z/4.

To interpret the continuous homotopy fixed points in (2), we use the ∞-category of sheaves on the proétale site BG proét , whose objects are the profinite G-sets. This was studied in [BS14]; as shown there, in many cases it gives a site-theoretic interpretation of continuous group cohomology. Even when the comparison fails, proétale sheaf cohomology exhibits many desirable functorial properties absent in other definitions.
The equivalence in Theorem A is therefore interpreted as the existence of a sheaf of connective spectra pic(E) on the proétale site, having Γ(G n / * , pic(E)) ≃ pic(Mod En (Sp K(n) )) and Γ(G n /G n , pic(E)) ≃ pic(Sp K(n) ).
To this end, we begin by proving a descent result for Morava E-theory itself.
Theorem A.1 (Proposition 2.8 and Proposition 2.29). There is a hypercomplete sheaf of spectra E on B(G n ) proét with

Γ(G n / * , E) ≃ E n and Γ(G n /G n , E) ≃ L K(n) S.    (4)
Its homotopy sheaves are given by π t E ≃ Cont G (−, π t E n ), and its descent spectral sequence agrees with the K(n)-local E n -Adams spectral sequence (including differentials).
This may be of independent interest, as it gives a novel construction of the K(n)-local E n -Adams spectral sequence, which may be extended to an arbitrary spectrum X; its E 2 -page (given a priori in terms of sheaf cohomology on the proétale site) is continuous group cohomology for suitable X (Remark 2.28). Note moreover that proétale cohomology enjoys excellent functoriality properties, and that the category of L-complete abelian sheaves on BG proét is abelian, as opposed to the category of L-complete (E n ) ∨ * E n -comodules. Next, we deduce a descent result for ∞-categories of K(n)-local modules, which is really an extension to the condensed world of the following significant theorem:
Theorem 1.2 ([Mat16]). The diagram of symmetric monoidal ∞-categories
Sp K(n) → Mod En (Sp K(n) ) ⇉ Mod L K(n) (En∧En) (Sp K(n) ) · · ·

is a limit cone.
Namely, in Section 3 we prove the following profinite Galois descent result, which can be seen as the identification Sp K(n) ≃ Mod En (Sp K(n) ) hGn analogous to (1).
Theorem A.2 (Theorem 3.1). There is a hypercomplete sheaf Mod E,K of symmetric monoidal ∞-categories on B(G n ) proét with Γ(G n / * , Mod E,K ) ≃ Mod En (Sp K(n) ) and Γ(G n /G n , Mod E,K ) ≃ Sp K(n) .
Morally, one recovers the first part of Theorem A by taking Picard spectra pointwise (there is a small further subtlety, as discussed in Section 3.2). For the second part of that theorem, we must identify the E 2 -page of the descent spectral sequence, which a priori begins with sheaf cohomology on the proétale site. The results of [BS14] allow us to deduce this, as a consequence of the following identification:

Theorem A.3 (Proposition 3.15). There is a hypercomplete sheaf of connective spectra pic(E) on B(G n ) proét with Γ(G n / * , pic(E)) ≃ pic(E n ) and Γ(G n /G n , pic(E)) ≃ pic(Sp K(n) ). The homotopy sheaves of pic(E) are

π t pic(E) = Cont G (−, π t pic(E n )),

i.e. represented by the homotopy groups of pic(E n ) (with their natural profinite topology).
In fact we go a bit further, relating the spectral sequence of Theorem A to the K(n)-local E n -Adams spectral sequence. In degrees t ≥ 2, the homotopy groups of the Picard spectrum of an E ∞ -ring are related by a shift to those of the ring itself. It is a result of [MS16] that this identification lifts to one between truncations of the two spectra, in a range that grows with t: that is, for every t ≥ 2 there is an equivalence
τ [t,2t−2] pic(A) ≃ τ [t,2t−2] ΣA,
functorial in the ring spectrum A. Using the proétale model, it is quite straightforward to deduce the following comparison result, as proven in op. cit. for finite Galois extensions.
Theorem A.4 (Corollary 3.22 and Corollary 3.25; c.f. [MS16]).
(i) Suppose 2 ≤ r ≤ t − 1. Under the identification E s,t 2 = H s (G n , π t pic(E n )) ∼ = H s (G n , π t−1 E n ) = E s,t−1 2 (ASS),
the d r -differential on the group E s,t r in (3) agrees with the differential on E s,t−1 r (ASS) on classes that survive to E r in both.
(ii) If x ∈ H t (G n , π t pic(E n )) ∼ = H t (G n , π t−1 E n ) (and again x survives to the r-th page in both spectral sequences), then the differential d t (x) is given by the following formula in the K(n)-local E n -Adams spectral sequence:
d t (x) = d ASS t (x) + x 2 .
Comparison with Morava modules

Finally, we will say some words on how to derive Theorems C and D from the main result. Recall that a useful technique for computing Picard groups, originating already in [HMS94], is to use completed E-theory to compare the category of K(n)-local spectra to the category of Morava modules, i.e. I n -adically complete π * E n -modules equipped with a continuous action of the Morava stabiliser group G n :
(E n ) ∨ * (−) := π * L K(n) (E n ∧ (−)) : Sp K(n) → Mod Gn π * En .    (6)
This carries invertible K(n)-local spectra to invertible Morava modules, and hence induces a homomorphism on Picard groups. The category on the right-hand side is completely algebraic in nature, and its Picard group Pic alg n can (at least in theory) be computed as a Z/2-extension of Pic alg,0 n = H 1 (G n , (π 0 E n ) × ); the strategy is therefore to understand the comparison map and how much of Pic n it can see. A remarkable theorem of Pstrągowski says that the map (E n ) ∨ * : Pic n → Pic alg n is an isomorphism if p ≫ n (more precisely, if 2p − 2 > n 2 + n). This reflects the more general phenomenon that chromatic homotopy theory at large primes is well-approximated by algebra, as is made precise in [BSS20a;BSS20b].
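As an illustration of this strategy with the standard height-one inputs (a consistency check rather than a computation made in this paper): for n = 1 and p odd, G 1 ≅ Z p × acts trivially on π 0 E 1 = Z p , so

\[ \mathrm{Pic}^{\mathrm{alg},0}_1 = H^1(\mathbf{Z}_p^\times, \mathbf{Z}_p^\times) = \mathrm{Hom}_{\mathrm{cont}}(\mathbf{Z}_p^\times, \mathbf{Z}_p^\times) \cong \mathbf{Z}_p \times \mathbf{Z}/(p-1), \]

using $\mathbf{Z}_p^\times \cong \mathbf{Z}/(p-1) \times \mathbf{Z}_p$. The $\mathbf{Z}/2$-extension coming from the grading then yields $\mathrm{Pic}^{\mathrm{alg}}_1 \cong \mathbf{Z}_p \times \mathbf{Z}/2(p-1)$, matching the value of Pic 1 recorded above, as it must since the exotic group κ 1 vanishes at odd primes.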
As noted in [Pst22], the existence of a spectral sequence of the form (3) immediately yields an alternative proof, by sparseness of the K(n)-local E n -Adams spectral sequence at large primes (in fact, this improves slightly the bound on p). Heard gave such a spectral sequence in [Hea22], and our results can be seen as a conceptual interpretation of that spectral sequence, analogous to the relation of [Dav06;Qui10] to [DH04]. Beyond the conceptual attractiveness, our derivation of the spectral sequence also clarifies certain phenomena: for example, we justify the claim made in [Hea22] that the exotic part of the Picard group is given precisely by those elements in filtration at least 2:
Theorem E (Theorem 4.4). For any pair (n, p), the algebraic Picard group is computed by the truncation to filtration ≤ 1 of (3), and the exotic Picard group κ n agrees with the subgroup of Pic n in filtration at least 2 for (3).
For example, when n 2 = 2p − 1 this leads to the description of the exotic Picard group given in Theorem C.
Note also that the group in bidegree (s, t) = (1, 0) of Heard's Picard spectral sequence is somewhat mysterious, which makes it less helpful in computing Brauer groups; as discussed in Section 3.2, the relevant group in (3) really is H 1 (G n , Pic(E n )). As expected, the computation simplifies at sufficiently large primes, and this should give rise to an algebraicity statement for the group Br 0 n . We intend to explore the algebraic analogue Br 0,alg n in future work.
Outline
In Section 2, we collect the results we need on the Devinatz-Hopkins action and the proétale site, showing how to define the sheaf of spectra E. One can in fact deduce it is a sheaf from Theorem 3.1. Nevertheless, we wanted to give a self-contained proof of the spectrum-level hyperdescent; we also explain how this compares with Davis' construction of the continuous action on E n . In the second half of Section 2 we compute the homotopy sheaves of E, and explain how this leads to the identification with the K(n)-local E n -Adams spectral sequence; the requisite décalage results are collected in Appendix A.
In Section 3 we categorify, obtaining descent results for the presheaf of K(n)-local module ∞-categories over E. We discuss how the Picard spectrum functor yields a sheaf of connective spectra exhibiting the identification of fixed points pic(Sp K(n) ) ≃ τ ≥0 pic(E n ) hGn , and investigate the resulting descent spectral sequence.
In Section 4 we use the previous results to compute Picard groups. We first identify the algebraic and exotic Picard groups in the descent spectral sequence. Combining this with the well-known form of the K(1)-local E 1 -Adams spectral sequence allows us to reprove the results of [HMS94]: we are particularly interested in computing the exotic Picard group at the prime 2. We also consider Picard groups in the boundary case n 2 = 2p − 1. In Appendix B we give a method to compute the height one Adams spectral sequence at p = 2 using the Postnikov tower for the sheaf E.
Finally, in Section 5 we show how to use our results in Brauer group computations. We show that the (−1)stem in the Picard spectral sequence gives an upper bound for the relative Brauer group, and compute this bound at height one.
Relation to other work
As already mentioned, Heard has obtained a spectral sequence similar to that of Theorem A, and one of our objectives in this work was to understand how to view that spectral sequence as a HFPSS for the Goerss-Hopkins-Miller action. Recently, we found out that a result close to Theorem A is proven independently in work in progress of Guchuan Li and Ningchuan Zhang. Their approach differs somewhat from ours, using Burklund's result on multiplicative towers of generalised Moore spectra to produce pro-object presentations of Mod En (Sp K(n) ) and pic n ; a detailed comparison between the two would certainly be of interest.
Acknowledgements
I am grateful to Dustin Clausen for an outline of the argument of Theorem 3.1; in fact, part of my motivation for this project was his talk on Morava E-theory at the University of Copenhagen masterclass Condensed Mathematics. I'd also like to thank Guchuan Li and Ningchuan Zhang for sharing with me their upcoming work. In addition, I have benefited greatly from conversations with Jack Davies, Ivo Dell'Ambrogio, David Gepner
Notation and conventions
• Throughout, we will work at a fixed prime p and height n, mostly kept implicit. Also implicit is the choice of a height n formal group law Γ n defined over F p n ; for concreteness we fix the Honda formal group, but this choice will not be important. For brevity, we will therefore write E, K and G for Morava E-theory, Morava K-theory and the extended Morava stabiliser group, respectively. These will be our principal objects of study.
• We will freely use the language of ∞-categories (modeled as quasi-categories) as pioneered by Joyal and Lurie [HTT; HA; SAG]. In particular, all (co)limits are ∞-categorical. We will mostly be working internally to the K(n)-local category, and as such we stress that the symbol ⊗ will denote the K(n)-local smash product throughout; where we have a need for it, we will use the notation ∧ for the smash product of spectra (despite this having become archaic in some circles). On the other hand, we will distinguish K(n)-local colimits by writing for example L K colim X or L K lim − → X i , as we feel that not to do so would be unnecessarily confusing. We use the notation lim − → to denote a filtered colimit, and similarly for cofiltered limits. In particular, if T is a profinite set, we will use the expression 'T = lim ← − T i ' to refer to a presentation of T as a pro-object, leaving implicit that each T i is finite. We will also assume throughout we have made a fixed choice of decreasing open subgroups I ⊂ G with zero intersection; the symbols lim ← −I and lim − →I will always refer to the (co)limit over such a family.
• We only consider spectra with group actions, and not any more sophisticated equivariant notion. When G is a profinite group, we will write H * (G, M ) for continuous group cohomology with pro-p (or more generally profinite) coefficients, as defined for example in [SW00] (resp. [Jan88]).
• We remark that the sheaf of spectra pic(E) defined in Section 3.2 is not just pic • Mod E,K , as this may have the 'wrong' profinite topology; see the discussion at the beginning of Section 3.2.
• A few words about spectral sequences. When talking about the 'Adams spectral sequence', we always have in mind the K(n)-local E n -based Adams spectral sequence; the classical Adams spectral sequence (based on HF p ) makes no appearance in this document. We will freely use abbreviations such as 'ASS', 'HFPSS', 'BKSS'. We will also use the name 'descent spectral sequence' for either the t-structure or Čech complex definition, since in the cases of interest we show they agree up to reindexing; when we need to be more explicit, we refer to the latter as the 'Bousfield-Kan' or 'Čech' spectral sequence. The name 'Picard spectral sequence' will refer to the descent spectral sequence for the sheaf pic(E) (Definition 3.14).
• In Sections 2 and 3 we form spectral sequences using the usual t-structure on spectral sheaves; this is useful for interpreting differentials and filtrations, for example in Theorem 4.4. To obtain familiar charts, we will declare that the spectral sequence associated to a filtered object starts at the E 2 -page; in other words, this is the page given by homotopy groups of the associated graded object. Thus our spectral sequences run E s,t 2 = H s (G, π t E) =⇒ π t−s E hG , with differentials d r of (s, t − s)-bidegree (r, −1), and this is what we display in all figures. However, we also make use of the Bousfield-Kan definition of the descent spectral sequence using theČech complex of a covering, and we relate the two formulations by décalage (see Appendix A): there is an isomorphism between the two spectral sequences that reads
E s,t 2 ≅ Ě 2s−t,s 3

if we use the same grading conventions for each of the underlying towers of spectra. We will stick to the indexing of the left-hand side throughout, and hope that this will not cause confusion; to make things easier to follow, we will sometimes talk about the 'starting page' of a spectral sequence, leaving to the reader the job of interpreting this according to context and their favourite convention. We will always use s for filtration, t for internal degree, and t − s for stem (a worked check of this reindexing is spelled out after this list).
Our Lyndon-Hochschild-Serre spectral sequences are in cohomological Serre grading.
• Finally, we largely ignore issues of set-theory, since these are discussed at length in [Sch] and [BH]. For concreteness, one could fix a hierarchy of strongly inaccessible cardinals κ < δ 0 < δ 1 such that |G n |< κ and the unit in Sp K(n) is κ-compact, and work throughout over the 'δ 1 -topos' of sheaves of δ 1 -small spaces on profinite G n -sets of cardinality less than δ 0 .
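As a worked check of the reindexing bookkeeping above (pure arithmetic under the stated conventions): a differential

\[ d_r \colon E_r^{s,t} \longrightarrow E_r^{s+r,\,t+r-1} \]

corresponds under $E_2^{s,t} \cong \check{E}_3^{2s-t,\,s}$ to a map

\[ \check{E}^{2s-t,\,s} \longrightarrow \check{E}^{2(s+r)-(t+r-1),\,s+r} = \check{E}^{(2s-t)+(r+1),\,s+r}, \]

i.e. to a $\check{d}_{r+1}$-differential; thus page $r$ on the t-structure side matches page $r+1$ on the Bousfield-Kan side, consistent with $E_2 \cong \check{E}_3$.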
List of symbols
1 A A-local sphere spectrum.
E Morava E-theory, E n = E(F p n , Γ n ).
K Morava K-theory, K(n) = E n /(p, v 1 , . . . , v n−1 ).
G Extended Morava stabiliser group G n = S n ⋊ Gal(F p n /F p ) = Aut(Γ n , F p n ).
I n Maximal ideal (p, v 1 , . . . , v n−1 ) ⊂ π * E n .
BGét Étale classifying site of a profinite group G.
BG proét Proétale classifying site of a profinite group G.
Free G Subsite of free G-sets in BG proét .
Cont G (T, T ′ ) Set of continuous G-equivariant maps between profinite G-sets.
E δ Étale Morava E-theory sheaf.
E Proétale Morava E-theory sheaf.
Mod A,K K(n)-local module ∞-category, Mod A (Sp K(n) ) = L K(n) Mod A .
Pic(−) (Pic(−), pic(−)) Picard group (resp. space, spectrum).
pic(E) Proétale Picard sheaf of Morava E-theory, pic(Mod perf E,K ).
Pic n (Pic n , pic n ) Picard group (resp. space, spectrum) of K(n)-local category, Pic(Sp K(n) ).
Pic alg n (Pic alg n ) Algebraic Picard group (resp. groupoid) of K(n)-local category, Pic(Mod Gn π * En ).
Mod ♥cpl R Category of L-complete discrete modules over a (classical) local ring R.
Mod cpl R ∞-category of complete modules (in spectra) over a local ring R.
⊗ K(n)-localised smash product of spectra, L K(n) (− ∧ −).
⊗ Tensor product on Mod ♥cpl R , L 0 (− ⊗ R −).
⊗ L Tensor product on Mod cpl R , L(− ⊗ L R −).
E ∨ * (−) Completed E-homology functor π * L K(n) (E n ∧ −).
2 The continuous action on Morava E-theory

2.1 Recollections on Morava E-theory
Let E := E(F p n , Γ n ) be Morava E-theory based on the Honda formal group at height n and prime p; n and p will henceforth be fixed, and kept implicit to ease notation. Let K := K(F p n , Γ n ) be Morava K-theory, its residue field. Recall that E is the K-local Landweber exact spectrum whose formal group is the universal deformation of Γ n to the Lubin-Tate ring π * E = W(F p n )[[u 1 , . . . , u n−1 ]][u ±1 n ]. Functoriality yields an action of the extended Morava stabiliser group G = G n := Aut(F p n , Γ n ) on the homotopy ring spectrum E, and celebrated work of Goerss, Hopkins and Miller [GH04] promotes E to an E ∞ -ring and the action to one by E ∞ maps. This action controls much of the structure of the K-local category, and is the central object of study in this document. In this section, we formulate the action of G on E in a sufficiently robust way for our applications; this builds on the work of Davis, Quick, and collaborators [Dav03; Dav06; Qui10; BD10; DQ16], who gave a number of formulations of this action as the continuous action of the profinite group G. One can view this first section as a reformulation of their work in ∞-categorical language, which will allow us to pass to the level of sheaves of module categories.
Recall that continuous actions and continuous cohomology of a topological group G are generally much more straightforward when we assume our modules to have discrete topology. There are notable categorical benefits in this case: for example, it is classical that the category of discrete G-modules is abelian with enough injectives, which is not true of the full category of topological modules. Further, in the discrete context we can understand actions completely by looking at the induced actions of all finite quotients of G. This was pioneered by Thomason in his study of K(1)-local descent for algebraic K-theory, and formalised in a model-theoretic sense by Jardine.
Any profinite group G has an étale classifying site, denoted BGét, whose objects are the (discrete) finite G-sets and whose coverings are surjections; as shown in [Jar97, §6], the category of sheaves of abelian groups on BGét gives a category equivalent to the category Ab δ G of discrete G-modules in the sense of [Ser97]. As noted below, sheaf cohomology on BGét corresponds to continuous group cohomology with discrete coefficients, again in the sense of Serre. Motivated by this, Jardine defines a model structure on presheaves of spectra on BGét, which models the category of 'discrete' continuous G-spectra, i.e. those that can be obtained as the filtered colimit of their fixed points at open subgroups. We will more generally refer to objects of Sh(BGét, C) as discrete G-objects of an arbitrary (cocomplete) ∞-category C. Davis uses this as his starting point, and we observe below that in this formalism it is easy to pass to the ∞-categorical setting. Namely, we begin by collecting the following facts:
Theorem 2.1 (Devinatz-Hopkins, Davis, Rognes, Dugger-Hollander-Isaksen). There is a hypercomplete sheaf of spectra E δ on BGét, such that

(i) any ordered, cofinal sequence (U i ) of open subgroups of G induces L K lim −→i E δ (G/U i ) ≃ E,
(ii) on global sections, ΓE δ := Γ(G/G, E δ ) ≃ 1 K is the K-local sphere spectrum,
(iii) E δ takes values in CAlg(Sp K ) ⊂ Sp,
(iv) for any normal inclusion of open subgroups V ⊂ U ⊂ G, the map E δ (G/U ) → E δ (G/V ) is a faithful U/V -Galois extension.
Proof. The presheaf of spectra E δ is constructed in [DH04, §4], with (ii) being part of Theorem 1 therein and the identification (i) being the trivial case of Theorem 3; see also [Bar+21, §2] for a nice summary. Devinatz and Hopkins construct E δ by hand (by taking the limit of the a priori form of its Amitsur resolution in K(n)-local E n -modules); they denote Γ(G/U, E δ ) by E hU n , but we copy [BD10] and write E dhU n . By [DH04, Theorem 4], E δ is a sheaf on BGét; in fact, Devinatz and Hopkins already define E δ to land in K-local E ∞ -rings, and so we may consider it as an object of Sh(BGét, CAlg(Sp K )) since limits in CAlg(Sp K ) are computed at the level of underlying spectra [HA, Corollary 3.2.2.5]. Item (iv) is [Rog08, Theorem 5.4.4], so it remains to show that E δ is hypercomplete; this we will deduce from Davis' work.
More specifically, Davis utilises the Jardine model structure on the category of presheaves of spectra on BGét, denoted Spt G ; this is defined in such a way that there is a Quillen adjunction

const : Spt ⇄ Spt G : (−) hG .
Recall that the main result of [DHI04] says that the fibrant objects of Spt G are precisely those projectively fibrant presheaves that satisfy (i) the (1-categorical) sheaf condition for coverings in BGét, and (ii) descent for all hypercovers, and so the ∞-category associated to Spt G is a full subcategory of Sh(BGét, Sp).
In this setting, Davis attention to open subgroups cuts out some of the complexity and makes clear the equivalences happen naturally in U (we remark that they also appeared in Chapter 7 of Davis' thesis [Dav03]). This provides a projective equivalence between E δ and L K F, and hence an equivalence of presheaves between E δ and a hypercomplete sheaf of spectra.
If we pick a cofinal sequence of open normal subgroups (U i ) we can identify the starting page of the descent spectral sequence for E δ :
Lemma 2.2. Let G be a profinite group and F ∈ Sh(BGét, C), where C = Sp or Sp ≥0 . There is a spectral sequence with starting page
E s,t 2 = H s (G, π t lim − → F(G/U i )),
and converging conditionally to π t−s Γ lim t τ ≤t F.
Proof. This is the spectral sequence for the Postnikov tower of F, formed as in [HA, §1.2.2]. Its starting page is given by sheaf cohomology of the graded abelian sheaf π * F on BGét. Explicitly, form the Postnikov tower in sheaves of spectra

F → · · · → τ ≤1 F → τ ≤0 F, with fib(τ ≤t F → τ ≤t−1 F) ≃ Σ t π t F (the fibres Σπ 1 F and π 0 F in the displayed range).

Applying global sections and taking homotopy groups gives an exact couple, and we obtain a spectral sequence with

E s,t 2 = π t−s ΓΣ t π t F = R s Γπ t F = H s (BGét, π t F),

with abutment π t−s Γ lim τ ≤t F. To identify this with continuous group cohomology, we make use of the equivalence

Ab δ G → Sh(BGét, Ab), M → (G/U → M U ),

of [Jar97], whose inverse sends F → lim −→i F(G/U i ). Under this equivalence the fixed points functor on Ab δ G corresponds to global sections, and so taking derived functors identifies sheaf cohomology on the right-hand side with derived fixed points on the left; as in [Ser97, §2.2], when we take discrete coefficients this agrees with the definition in terms of continuous cochains.
Writing Mod (−),K := Mod L K (−) (Sp K ) = L K Mod (−) , our strategy is to apply the functor pic • Mod (−),K : CAlg → Sp ≥0 pointwise to the sheaf E δ , in order to try to obtain a sheaf pic K (E δ ) ∈ Sh(BGét, Sp ≥0 ); we'd then like to apply the above lemma to deduce the existence of the descent spectral sequence. This does not quite work, for the same reason that the lemma applied to E δ does not recover the K-local E-Adams spectral sequence: while E is K-locally discrete, it is certainly not discrete as a G-spectrum (for example, even the action on π 2 E is not discrete). Nevertheless, it is worth remarking that the first step of this approach does work: since E is a discrete G-object of K-local spectra, Mod E,K is discrete as a presentable ∞-category with G-action.
Lemma 2.3. The composition Mod E δ ,K : BGét op → CAlg(Sp K ) → Pr L,smon is a sheaf.

Proof. To check the sheaf condition for F : BGét op → C we need to show that finite coproducts are sent to products, and that for any inclusion U ⊂ U ′ of open subgroups the canonical map F(G/U ′ ) → lim ∆ F((G/U ) × G/U ′ •+1 ) is an equivalence. For the presheaf Mod E δ ,K , the first is obvious (using the usual idempotent splitting), while the second is finite Galois descent [Mei12, Proposition 6.2.6] or [GL21, Theorem 6.10], at least after refining U to a normal open subgroup of U ′ .
Fixing again a cofinal sequence (U i ), we write F ij ⊣ R ij : Mod E δ (G/Ui),K ⇄ Mod E δ (G/Uj ),K and F j ⊣ R j for the composite adjunction Mod E δ (G/Uj ),K ⇄ Mod E,K , and
lim − → Mod E δ (G/Ui),K F ⇄ R Mod E,K
for the colimit (along the maps F ij ) in Pr L,smon .
Proposition 2.4. The functors F and R define an adjoint equivalence lim − →
Mod E δ (G/Ui),K ≃ Mod E,K .
Proposition 2.4 follows from a more general result, a case of Grothendieck's Noetherian descent. We remark that [MS16, Proposition 2.4.1] is a similar statement, but that restricting to compact objects is problematic in the case of Sp K , whose unit is not compact.
Lemma 2.5. Let C be a presentably symmetric monoidal stable ∞-category. Suppose A (−) : I → CAlg(C) is a filtered diagram. Write A for a colimit (formed equivalently in C or CAlg(C)). Then the induced adjunction
lim − → Mod Ai (C) F ⇄ R Mod A (C)
is an equivalence of presentable symmetric-monoidal ∞-categories.
Remark 2.6. In the above, the colimit is again formed in Pr L,smon , and in particular the left adjoint is symmetric monoidal. This implies in particular that the right adjoint is lax monoidal, but it is not in general true that R is strong monoidal.
Proof. We assume without loss of generality that I is a filtered partially ordered set. For brevity, write
A i := Mod Ai (C) and A := lim − → A i , so that we have a diagram A i Fij ⇄ Rij A j ⇄ · · · ⇄ A ⇄ Mod A (C).
The result relies on an identification of the ∞-category A, and to this end we recall that filtered colimits of presentably symmetric monoidal ∞-categories are computed in Pr L , and that Pr L has colimits, computed as limits in Pr R or equivalently in Cat ∞ . So informally, objects of A consist of the following data:
• a tuple (X i ∈ A i ) i∈I , • equivalences ϕ ij : X i ≃ R ij X j for each i → j,
• higher coherences between these.
The right adjoint Mod A (C) → A is also given to us by this identification: it is the evident functor
R : X → (R i X) i∈I .
We will show that R is fully faithful and essentially surjective. Our first claim is that the left adjoint F : A → Mod A (C) is the functor described informally by
(X i ) → lim − → F i X i .
More precisely, the definition of A together with universal properties gives us a string of equivalences on mapping spaces
Map Mod A (C) (F (X i ) i∈I , Y ) ≃ Map A ((X i ), (R i Y )) ≃ lim ← − Map Ai (X i , R i Y ) ≃ lim ← − Map Mod A (C) (F i X i , Y ) ≃ Map Mod A (C) (lim − → F i X i , Y ), natural in Y . Under coYoneda this corresponds to an equivalence F (X i ) ≃ lim − → F i X i in Mod A (C)
, and this determines the action of F on objects. Taking (X i ) = RY , one can moreover read off the counit F R → 1 from the above: this corresponds to the identity in the right-hand side of the first row, which becomes the canonical map lim − → F i R i X → X induced by the respective counits F i R i X → X of the induction-restriction adjunctions Mod Ai (C) ⇄ Mod A (C). On the other hand, the unit 1 → RF is given by taking Y = lim − → F i X i and the identity in the bottom row, and in particular its component on X i is
X i → R i F i X i → R i lim −→ j F j X j , where the first map is η i and the second is R i ι i .
Given this description for the left adjoint, it is straightforward to show that R is an equivalence. To show it is fully faithful we must show that the counit is an equivalence, and this is now clear:
F RY ≃ lim − → F i R i Y = lim − → (A ⊗ Ai Y ) ≃ A ⊗ lim − → Ai Y ≃ A ⊗ A Y ≃ Y
since each relative tensor product is computed as the realisation of the simplicial object Y ⊗ A ⊗• i ⊗ A in C, and realisation commutes with colimits.
We can show similarly that the unit is an equivalence for each (X i ) ∈ A; equivalently, R is essentially surjective. For this it suffices to show that each of the maps
η i : X i → R i lim − → j F j X j is an equivalence. But the equivalences ϕ ij : X i → R ij X j , which exist for j ≥ i, imply that the underlying A i -module of F j X j is given by A ⊗ Aj X j = |A ⊗ Ai A • j ⊗ Ai R ij X j | ≃ |A ⊗ Ai A • j ⊗ Ai X i | ,
in which the simplicial structure is given by the A j -module structures on A and X j (and we are thinking of this as a diagram in Mod Ai (C)). Under this identification, the component η i of the unit is the map
X i → A ⊗ Ai X i → colim I i/ ×∆ op A ⊗ Ai A • j ⊗ Ai X i , the second map being the inclusion of the (i, 0)-term,
induced by the unit of the algebra A. Passing to the colimit in j, this factors (in Mod Ai (C)) as
X i → A ⊗ Ai X i → A ⊗ A X i
which is an equivalence.
Warning 2.7. The result above fails if we consider the same diagram in Cat smon ∞ (after restricting to κ-compact objects for κ a regular cardinal chosen such that 1 K ∈ Sp K is κ-compact, say). Indeed, one can see that the homotopy type of the mapping spectra map(1, X) out of the unit in this colimit would be different from that in Mod E,K , an artefact of the failure of K-localisation to be smashing.
Applying pic : Pr L,smon → Sp ≥0 to the coefficients, one obtains a sheaf of connective spectra on BGét given by
G/U i → pic(Mod E δ (G/Ui),K ).
In particular, Γpic(Mod E δ ,K ) = pic n := pic(Sp K ). Unfortunately, this sheaf is unsuitable for the spectral sequence we would like to construct: its starting page will be group cohomology with coefficients in the homotopy of lim −→ pic(Mod E δ (G/Ui),K ), which by [MS16, Proposition 2.2.3] is the Picard spectrum of the colimit of module categories computed in Cat smon ∞ ; as noted above, this need not agree with pic(Mod E,K ).
2.2 Morava E-theory as a proétale spectrum
In Section 2.1, we showed how the work of Devinatz-Hopkins and Davis defines Morava E-theory as a discrete G-object in the category of K-local E ∞ -rings, and that using this it is straightforward to construct a discrete G-action on the presentable ∞-category Mod E,K , having Sp K as fixed points. As noted in Warning 2.7, this does not suffice to prove our desired descent result for Picard spectra. Likewise, the descent spectral sequence for the sheaf E δ on BGét is not the K-local E-Adams spectral sequence: the issue is that the action on the (unlocalised) spectrum E is not discrete. In order to utilise the setup of the previous section, it is necessary at this point to pass to a bigger site, so that the stalk operation above turns into an evaluation (and in particular, its value is preserved after applying any limit-preserving functor to the coefficients). To this end, we denote by BG proét the category of all continuous profinite G-sets (for G a profinite group), equipped with the topology generated by finite jointly surjective families. The aim of this section is to show that the proétale site allows us to capture the continuous action on E as a sheaf of (unlocalised) spectra on BG proét .
The site BG proét and resulting (1-)topos were extensively studied by Bhatt and Scholze in [BS14]; when G is trivial, one recovers the condensed/pyknotic formalism of [Sch] and [BH] respectively. It is related to the site of finite G-sets by a map of sites ν : BGét → BG proét , which induces a geometric morphism at the level of topoi. More generally, if C is any complete and cocomplete ∞-category we write ν * : Sh(BGét, C) ⇄ Sh(BG proét , C) : ν * for the resulting adjunction; the left adjoint is the sheafification of the presheaf extension, given in turn by
ν p F : S = lim ←− i S i → lim −→ i F(S i )    (7)
when each S i is a finite G-set.
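For instance, unwinding (7) against Theorem 2.1(i): for the free orbit $S = G = \lim_i G/U_i$, and taking the colimit K-locally as we do throughout for Morava E-theory (see Proposition 2.8 below), the presheaf extension gives

\[ \nu^p E^\delta(G) = L_K \varinjlim_i E^\delta(G/U_i) \simeq E, \]

so the extension to the proétale site evaluates on the free orbit to Morava E-theory itself, as one expects of the underlying object of a continuous G-action.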
When C = Set, it is a basic result of [BS14] that ν * = ν p : essentially, this is because (i) the sheaf condition is a finite limit, (ii) it therefore commutes with the filtered colimit in (7).
This fails for sheaves valued in an arbitrary ∞-category C with limits and filtered colimits, where both properties might fail; nevertheless, one can make certain statements about sheaves of spaces or spectra, for example under cohomological dimension assumptions on G.
Our first task is to exhibit Morava E-theory itself as a proétale sheaf of spectra, so as to recover the K-local E-Adams spectral sequence as a descent spectral sequence. This will allow us to compare it to the descent spectral sequence for the Picard spectrum.
Proposition 2.8. The presheaf of K-local spectra
E := ν p E δ : S = lim ← − i S i → L K lim − → i E δ (S i )
is a hypercomplete sheaf on BG proét .
Remark 2.9. Since Sp K ⊂ Sp is a right adjoint, the same formula defines a sheaf of spectra (or even of E ∞ -rings). We stress however that ν p will hereafter always refer to the Kan extension internal to Sp K , or equivalently the K-localisation of the Kan extension in spectra.
When p is sufficiently large (or more generally, when p−1 does not divide n), Proposition 2.8 is a consequence of the descent results of [CM21]. Recall the following definitions: (i) Let Y : Z op ≥0 → C be a pretower in a stable ∞-category C. If C has directed limits, we can form the map
f : {lim k Y k } → {Y n } in the ∞-category Fun(Z op ≥0 , C) (in which the source is a constant tower). Recall that Y is d-rapidly convergent to its limit if each d-fold composite in fib(f ) ∈ Fun(Z op ≥0 , C) is null (c.f. [CM21, Definition 4.8]; see also [HPS99]). More generally, if Y : (Z ≥0 ∪ {∞}) op → C is a tower, we say Y is d-rapidly convergent if the same condition holds for the map f : {Y ∞ } → {Y n }.
In particular, this implies that
Y ∞ ≃ lim n Y n ; in fact, F (Y ∞ ) ≃ lim n F (Y n ) for any functor F : C → D.
(ii) Now suppose C also has finite limits, and X • :
∆ + → C is an augmented cosimplicial object. Recall that X is d-rapidly convergent if the Tot-tower {Tot n X • } is d-rapidly convergent.
In particular X ∞ ≃ Tot X • , and the same is true after applying any exact functor F .

Example 2.12. Given an E 1 -algebra A in any presentably symmetric monoidal stable ∞-category C, the Adams tower T (A, M ) for a module M over A is defined by the property that A ⊗ T (A, M ) is 1-rapidly convergent. It is well-known to agree with the Tot-tower for the Amitsur complex for M over A (for example, this is worked out in detail in [MNN17, §2.1]), and in particular the cosimplicial object A ⊗•+2 ⊗ M , given by smashing the Amitsur complex with a further copy of A, is always 1-rapidly convergent.
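For concreteness, the standard display behind Example 2.12 (recalled, not new): the Amitsur complex of $M$ over $A$ is

\[ M \to \big( A \otimes M \rightrightarrows A \otimes A \otimes M \cdots \big), \]

and after applying $A \otimes (-)$ the multiplication $A \otimes A \to A$ furnishes an extra degeneracy splitting the cosimplicial object $A^{\otimes \bullet + 2} \otimes M$; a split (augmented) cosimplicial object is 1-rapidly convergent.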
Definition 2.13. Let G be a profinite group, and F a presheaf on BG proét . We will make use of the following assumption:
(⋆) There exists d ≥ 0 such that for any inclusion V ⊂ U of normal subgroups, the Čech complex

F(G/U ) → F(G/V ) ⇉ F(G/V × G/U G/V ) · · ·

is d-rapidly convergent.
In this case, Proposition 2.8 follows immediately from the following corollary of [CM21, Theorem 4.16] (which itself goes back to Thomason's work on L K(1) K(−)).
Lemma 2.14. Suppose G is a profinite group of finite cohomological dimension mod p, and k any p-local spectrum. Suppose F ∈ Sh(BGét, Sp k ) is a hypercomplete sheaf on BGét. Then ν p F is a sheaf of k-local spectra on BG proét .
Proof. As above, ν p F is given by the formula
S = lim S i → L k lim − → F(S i ).
Suppose we are given a surjection
S ′ = lim j ′ ∈J ′ S ′ j ′ α − → S = lim j∈J S j of profinite G-sets.
By Lemma 2.15, one can present this as the limit of finite covers. Implicitly replacing J and J ′ by I, we therefore want to show that the following is a limit diagram:
L k lim −→ F(S i ) → L k lim −→ F(S ′ i ) ⇉ L k lim −→ F(S ′ i × Si S ′ i ) · · · .    (8)
Localisation at k commutes with totalisations, since these preserve the category of k-acyclic spectra: indeed, finite limits of acyclics are clearly acyclic, while Z-shaped limits of acyclics are acyclic by virtue of the Milnor sequence. It therefore suffices to check that (8) is a limit diagram before localising at k, instead working in Sp (p) .
We claim that each of the cosimplicial objects
F(S i ) → F(S ′ i ) ⇉ F(S ′ i × Si S ′ i ) · · ·
is d-rapidly convergent. This is a property preserved by colimits, and will therefore yield the equivalence in the second line below:
lim −→ i F(S i ) ≃ lim −→ i lim n Tot n F(S ′ i × S i •+1 ) ≃ lim n lim −→ i Tot n F(S ′ i × S i •+1 ) ≃ lim n Tot n lim −→ i F(S ′ i × S i •+1 ) ≃ lim ∆ ν p F(S ′ × S •+1 ).    (9)
Suppose therefore that S ′ → S is a finite covering. Decomposing S into transitive G-sets splits (8) into a product of the Čech complexes for S ′ H := S ′ × S G/H → G/H, and further writing S ′ H = ⊔ i G/K i we will reduce to the case G/K → G/H, where K ⊂ H are open subgroups. For this last point, note that if S ′ 1 , S ′ 2 → S are coverings then S ′ 1 ⊔ S ′ 2 → S factors as S ′ 1 ⊔ S ′ 2 → S ⊔ S ′ 2 → S. For the first map we have (S ′ 1 ⊔ S ′ 2 ) × S⊔S ′ 2 • ≃ (S ′ 1 × S • ) ⊔ S ′ 2 , so d-rapid convergence for S ′ 1 → S implies the same for S ′ 1 ⊔ S ′ 2 → S ⊔ S ′ 2 . On the other hand, the second map is split, and so the Čech complex for S ′ 1 ⊔ S ′ 2 → S is a retract of that for S ′ 1 ⊔ S ′ 2 → S ⊔ S ′ 2 . In particular, d-rapid convergence for the covering S ′ 1 → S implies the same for S ′ 1 ⊔ S ′ 2 → S.
This allows us to reduce to covers by a single finite G-orbit, as desired. Writing cd p (G) = d < ∞, we are in the context of [CM21, Lemma 4.16], which says that hypercompleteness is equivalent to condition (⋆) when the mod p cohomological dimension is finite. Thus each of the diagrams
F(G/H) → F(G/K) ⇉ F(G/K × G/H G/K) · · ·
is d-rapidly convergent.
The following was used above:
Lemma 2.15. Let f : S ′ → S be a map of profinite sets. We can find a cofiltered poset I and presentations
S = lim i S i , S ′ = lim S ′ i , such that f is induced by a natural transformation S ′ i → S i .
Proof. Without loss of generality we assume that J and J ′ are cofiltered posets. For each j ∈ J, the map S ′ → S → S j factors through some S ′ j ′ ; we write J ′ j for the subcategory of j ′ ∈ J ′ such that such a factorisation exists (necessarily uniquely since S ′ → S ′ j ′ is epic). The functor j → J ′ j defines a coCartesian fibration I → J, and there is a functor I → J ′ sending
(j, j ′ ) → j ′ ∈ J ′ j ⊂ J ′ .
The category I is still cofiltered, and we can form the diagrams I → J → BGét (via S (−) ) and I → J ′ → BGét (via S ′ (−) ),
having limits S and S ′ respectively; these are related by a natural transformation α, by construction of I.
When p − 1 | n, the Morava stabiliser group has p-torsion elements, and therefore infinite cohomological dimension mod p. Its virtual cohomological dimension is nevertheless finite, since this is true for any compact p-adic analytic group. Lemma 2.14 therefore gives hyperdescent for the Kan extension of E δ to BU proét for some open subgroup U ⊂ G, but not to BG proét itself. We will nevertheless prove Proposition 2.8 in this generality, following essentially the same method.
Definition 2.16.
(i) Suppose (C, ⊗, 1) is symmetric monoidal, and Z ∈ C. Then Thick ⊗ (Z) is the smallest full subcategory of C containing Z and closed under extensions, retracts, and (−) ⊗ X for every X ∈ C. Note that Thick ⊗ (Z) is the union of full subcategories Thick ⊗ r (Z), for r ≥ 1, spanned by retracts of those objects that can be obtained by at most r-many extensions of objects Z ⊗ X; each of these is a ⊗-ideal closed under retracts, but not thick.
(ii) Suppose now that A ∈ Alg(C). Recall that M ∈ C is A-nilpotent ([Rav92, Definition 7.1.6]) if M ∈ Thick ⊗ (A), and that A is descendable ([Mat16, Definition 3.18]) if Thick ⊗ (A) = C. Thus A is descendable if and only if 1 is A-nilpotent.
In [Mat16, Proposition 3.20] it is shown that A ∈ Alg(C) is descendable if and only if the Tot-tower of the Amitsur complex
1 → A ⇉ A ⊗ A · · ·
defines a constant pro-object converging to 1. It will be useful to have the following quantitative refinement of this result, which explicates some of the relations between various results in op. cit. with [CM21; MNN17; Mat15]:
Lemma 2.17. Let C be stable and symmetric monoidal. Consider the following conditions:
(1) d The Amitsur complex for A is d-rapidly convergent to 1.
(2) d The canonical map 1 → Tot d A ⊗•+1 admits a retraction.
(3) d The canonical map Tot‾ d A ⊗•+1 → 1 is null, where Tot‾ d A ⊗•+1 := fib(1 → Tot d A ⊗•+1 ).
In the notation of [MNN17, §4],
this says that exp A (1) = d.
(4) d For any X ∈ C, the spectral sequence for the tower of mapping spectra
· · · → map C (X, Tot n A ⊗•+1 ) → · · · → map C (X, Tot 0 A ⊗•+1 )    (10)
collapses at a finite page, with a horizontal vanishing line at height d.
Then we have implications (1) d ⇔ (4) d , (2) d ⇔ (3) d , and (1) d ⇒ (2) d ⇒ (1) d+1 .
Proof. We begin with the first three conditions. The implication
(1) d ⇒ (2) d is immediate from the map of cofibre sequences

Tot‾ d A ⊗•+1 → 1 → Tot d A ⊗•+1
↓ ↓ ↓
Tot‾ 0 A ⊗•+1 → 1 → Tot 0 A ⊗•+1 :

by (1) d the left-hand vertical map is null; since the top-left horizontal map factors through it, Tot‾ d A ⊗•+1 → 1 is null, so the top cofibre sequence splits and 1 → Tot d A ⊗•+1 admits a retraction (the dashed arrow ∃ in the original diagram). The equivalence (2) d ⇔ (3) d is clear, so we now prove (2) d ⇒ (1) d+1 .
Recall from the proof of [Mat16, Proposition 3.20] that the full subcategory
{X : X ⊗ A ⊗•+1 is a constant pro-object} ⊂ C is a thick ⊗-ideal containing A (since A ⊗ A ⊗•+1
is split), and therefore contains 1. We can in fact define full subcategories
C r := {X : X ⊗ A ⊗•+1 is r-rapidly convergent} ⊂ C,
which are closed under retracts and (−) ⊗ X for any X ∈ C. Each of these is not thick, but if X → Y → Z is a cofibre sequence with X ∈ C r and Z ∈ C r ′ , then Y ∈ C r+r ′ ; this follows by contemplating the diagram

X ⊗ Tot‾ n+r+r ′ A ⊗•+1 → Y ⊗ Tot‾ n+r+r ′ A ⊗•+1 → Z ⊗ Tot‾ n+r+r ′ A ⊗•+1
↓ ↓ ↓
X ⊗ Tot‾ n+r ′ A ⊗•+1 → Y ⊗ Tot‾ n+r ′ A ⊗•+1 → Z ⊗ Tot‾ n+r ′ A ⊗•+1
↓ ↓ ↓
X ⊗ Tot‾ n A ⊗•+1 → Y ⊗ Tot‾ n A ⊗•+1 → Z ⊗ Tot‾ n A ⊗•+1

In particular, Tot r A ⊗•+1 ∈ C r+1 since this can be constructed iteratively by taking r + 1-many extensions by free A-modules (while A ∈ C 1 by Example 2.12).
Now assumption (2) d implies that 1 ∈ C d+1 , since 1 is a retract of Tot d A ⊗•+1 ; that is, (1) d+1 holds.
The implication (1) d ⇔ (4) d is proven in [Mat15, Proposition 3.12] (in the case U = C), by keeping track of the index d (called N therein).
Remark 2.18. The implications in this lemma are special to the Amitsur/cobar complex. For an arbitrary cosimplicial object (even in spectra), condition (2) d does not in general imply (1) d ′ for any d ′ (although (1) d ⇒ (2) d still holds).
Proof. (Proposition 2.8).
Following the method of Lemma 2.14, to show that E = ν p E δ is a sheaf it will suffice to show that there is some d ≥ 1 such that
E(G/U ) → E(G/V ) ⇉ E(G/V × G/U G/V ) · · ·
is d-rapidly convergent for all finite coverings G/V → G/U , even when cd p (U ) = ∞. This claim implies the equivalence in the third line of (9), and the other identifications are formal. In fact, this will give us hypercompleteness too: this follows from [CM21, Proposition 2.25], noting that the implication (2) ⇒ (1) therein uses only that the site is finitary and no assumption of finite cohomological dimension.
We therefore recall [Mat16, Proposition 10.10] that E ∈ CAlg(Sp K ) is descendable, and so 1 K is a retract of Tot d E ⊗•+1 for some d ≫ 0, which we shall now fix. In particular, d is chosen only in terms of the extension 1 K → E, and independently of the subgroups U ⊂ V .
By Lemma 2.20 immediately below, the Čech complex in question is the Amitsur complex

E U → E V ⇉ E V ⊗ E U E V · · · .    (11)
For every r ≥ 1, we shall consider the following full subcategory of Sp K :
C r = C r (U, V ) := {X : X ⊗ (E V ) ⊗ E U •+1 is r-rapidly convergent}.
Our aim is to show that 1 K ∈ C d . To this end, we observe the following properties of C r :
(i) C r is closed under retracts.
(ii) C r is a ⊗-ideal: if X ∈ C r then X ⊗ Y ∈ C r for any Y . (iii) C r is not thick. However, if X → Y → Z is a cofibre sequence such that X ∈ C r and Z ∈ C r ′ , then Y ∈ C r+r ′ .
Next, note that
E ⊗ (E V ) ⊗ E U •+1 ≃ E ⊗ E V E V ⊗ (E V ) ⊗ E U •+1 ≃ ∏ G/U E ⊗ E V E V ⊗ E U (E V ) ⊗ E U •+1 .    (12)
The last equivalence is given by extending the equivalence of spectra
G/U E V ⊗ E U E V ≃ G/U U/V E V ≃ G/V E V ≃ E V ⊗ E V to a map of cosimplicial E V -algebras G/U E V ⊗ E U (E V ) ⊗ E U •+1 = G/U Ran [0] →∆ E V ⊗ E U E V → E V ⊗ (E V ) ⊗ E U •+1 . The equivalence (12) presents E ⊗ (E V ) ⊗ E U •+1
as a direct sum of split cosimplicial objects, and so as split itself: that is,
E ∈ C 1 . Since 1 K ∈ Thick ⊗ d (E), property (iii) above implies that 1 K ∈ C d .
Remark 2.19. This shows that nilpotence for the entire extension 1 K → E is the critical property for extending to a sheaf on the proétale site; it is therefore a reasonable axiom to require for arbitrary profinite Galois extensions. In fact, using nilpotence meant we did not have to appeal to any cohomological dimension assumptions (although of course these go into the proof of nilpotence for the particular extension
1 E → E in [Rav92, §8.2-8.5]).
The following lemma was used above, and will be useful for evaluating the sheaf E.
Lemma 2.20. Given any cospan of G-sets
S 1 → S 0 ← S 2 , we have ν p E δ (S 1 × S0 S 2 ) = ν p E δ (S 1 ) ⊗ ν p E δ (S0) ν p E δ (S 2 ). (13) Proof. Assume first that each S i is of the form G/U i where U i is an open subgroup. We can write S 1 × S0 S 2 = g∈U1\U0/U2 G/U 1 ∩ g −1 U 2 g, so that ν p E δ (S 1 × S0 S 2 ) = U1\U0/U2 E U1∩g −1 U2g .
This admits a map from E U1 ⊗ E U 0 E U2 , and we can check this map is an equivalence after base-change to E, a faithful algebra over E U0 . But
E ⊗ E U 0 E U1 ⊗ E U 0 E U2 ≃ U0/U1 E ⊗ E U 0 E U2 ≃ U0/U1 U0/U2 E ≃ U1\U0/U2 U0/U1∩g −1 U2g E ≃ E ⊗ E U 0 U1\U0/U2 E U0/U1∩g −1 U2g , using the isomorphism of U 0 -sets U 0 /U 1 × U 0 /U 2 ≃ g∈U1\U0/U2 U 1 ∩ g −1 U 0 g.
The result for finite G-sets follows from the above by taking coproducts, and for arbitrary G-sets by taking cofiltered limits of finite G-sets.
In fact, the proof of Proposition 2.8 allows us likewise to consider E-homology of any spectrum X:
Corollary 2.21. If X is any K-local spectrum, then the presheaf E ⊗ X is a hypercomplete sheaf.
Proof. Given a covering S ′ → S in BG proét , the proof of Proposition 2.8 showed that the Amitsur complex for S ′ → S is d-rapidly convergent. Since the functor (−) ⊗ X : Sp K → Sp K preserves finite limits, the same is true for the augmented cosimplicial object given by tensoring everywhere by X.
We can now define a descent spectral sequence for the sheaf E. Again, we do this by invoking the results of Sp). The upshot of Proposition 2.8 is:
[HA, §1.2.2] for the sheaf E ∈ Sh(BG proét , Sp K ) ⊂ Sh(BG proét ,Corollary 2.22.
There is a conditionally convergent spectral sequence of the form
E s,t 2,+ = H s (BG proét , π t E) =⇒ π s Γ lim t τ ≤t E.(14)
To identify the E 1 -page and the abutment, we need to identify the homotopy sheaves of E.
Lemma 2.23. The homotopy sheaves of E are given by
π t E : S → Cont G (S, π t E),(15)
continuous equivariant maps from S to the homotopy groups of Morava E-theory (equipped with their profinite topology).
Proof. To prove the lemma, it is enough to prove that the homotopy presheaves of E take the form Cont G (−, π t E), since the topology is subcanonical. In fact, the sub-site Free G of free G-sets generates the proétale topos, and so it is enough to show that (π p t E)| Free G : S → Cont G (S, π t E) = Cont(S/G, π t E). Using Lemma 2.20, we have for any free G-set of the form S = T × G (with trivial action on T ),
E(S) ≃ E(T ) ⊗ E(G) ≃ L K lim − → i Ti E.(16)
Since E-localisation is smashing, the spectrum lim − → Ti E is E-local, and so its K-localisation can be computed by smashing with a tower of generalised Moore spectra M I ; see for example [HS99]. Thus
π t (E(S)) = π t lim I (lim − → i Ti E ⊗ M I ),
and we obtain a Milnor sequence
0 → lim I π t lim − → i Ti E ⊗ M I → π t (E(S)) → lim I 1 π t lim − → i Ti E ⊗ M I → 0. Now observe that π t lim − → i Ti E ⊗ M I = lim − → i Ti π t E ⊗ M I = lim − → i Ti (π t E)/I = lim − → i Cont(T i , (π t E)/I) = Cont(T, (π t E)/I),
using for the last equality that the target is finite. In particular, each Cont(T, (π t E)/J) → Cont(T, (π t E)/I) is surjective, since the inclusion J ⊂ I induces a surjection of finite sets (π t E)/J ↠ (π t E)/I, and so admits a (set-theoretic) splitting. Thus lim 1 vanishes and
π t (E(S)) = lim I Cont(T, (π t E)/I) = Cont(T, π t E).
Corollary 2.24. The Postnikov tower of the sheaf E converges. Thus (14) converges conditionally to π * ΓE = π * 1 K .
Proof. Both properties are local, so we can restrict to the subsite Free G . There, the proof of Lemma 2.23 showed that the homotopy presheaves of E are its homotopy sheaves. Taking cofibres and limits of presheaves (both of which preserve sheaves), we see that the truncation τ ≤t E is the presheaf U → τ ≤t E(U ) (that is, no sheafification is necessary). But Postnikov towers in presheaves of spectra converge since the same is true in spectra; thus
E ≃ lim t τ ≤t E.
Remark 2.25. In fact, we will compare this spectral sequence to the K-local E-Adams spectral sequence (Proposition 2.30) to show existence of a horizontal vanishing line (at least for t ≥ 2); thus the spectral sequence converges completely in this region.
E s,t 2,+ = H s (G, π t E).(17)
Proof. Using the identification in Lemma 2.23 of the homotopy sheaves, this follows from [BS14, Lemma 4.3.9(4)], which implies that the canonical map
Φ M : H * (G, M ) → H * (BG proét , Cont G (−, M ))
for a topological G-module M is an isomorphism whenever M can be presented as the limit of a countable tower of finite G-modules.
Remark 2.28 (c.f. [BH16]). The proof of Lemma 2.23 also goes through for the hypercomplete sheaf E ⊗ X, as long as π * E ∧ X ∧ M I ≃ (E * X)/I for each of the ideals I. Thus one obtains a conditionally convergent descent spectral sequence
E s,t 2 (X) = H s (BG proét , Cont G (−, E t X)) =⇒ π t−s X.
for any such K-local spectrum X, and if each of the G-modules E t X satisfies one of the assumptions of [BS14, Lemma 4.3.9], then the E 1 -page is given by the continuous group cohomology H s (G, E t X). For example, this happens when X is finite.
We have thus identified the E 1 -page of the descent spectral sequence for E with the E 2 -page of the Klocal E-Adams spectral sequence. The next step is to show this extends to an identification of the spectral sequences. We do this by using the décalage technique originally due to Deligne [Del71]; the following theorem is standard, but for the sake of completeness (and to fix indexing conventions, one of the great difficulties in the subject) we include the argument in Appendix A.
Proposition 2.29 (Lemma A.3). Let F be a sheaf of spectra on a site C, and let X ↠ * be a covering of the terminal object. Suppose that for every t and q > 0 we have
Γ(X q , τ t F) = τ t Γ(X q , F).E s,t 2,+ = π s Γτ t E = H s (G, π t E) =⇒ π t−s 1 K , E 2s−t,s 3,+ = π s (π t E ⊗•+1 ) = H s (G, π t E) =⇒ π t−s 1 K .
The first is the descent spectral sequence for the sheaf E, and the second is the K-local E-Adams spectral sequence.
Proof. By Lemma 2.20, the Tot-filtration associated to the cosimplicial spectrum Γ(G • , ν * E) is precisely the Adams tower for the Amitsur complex of 1 K → E. The resulting spectral sequence is the K-local E-Adams spectral sequence by definition.
According to Lemma 2.29, all that remains to check is that each spectrum
Γ(G q , τ t ν * E), q > 0
is Eilenberg-Mac Lane. Note that when q = 1 this is immediate since any cover in BG proét is split; when q > 1 the profinite set G q−1 is not extremally disconnected, and we will deduce this from Lemma 2.23.
Indeed, we know that Γ(G q , τ t ν * E) is t-truncated, while for s ≥ t we have π s Γ(G q , τ t ν * E) = H t−s (BG proét /G q , π t ν * E) = H t−s (Profin /G q−1 , Cont(−, π t E)) = H t−s cond (G q−1 , Cont(−, π t E)).
We now argue that the higher cohomology groups vanish, essentially as in the first part of [Sch, Theorem 3.2]. Namely, the sheaves Cont(−, π t E) on BG proét /G q satisfy the conditions of [BS14, Lemma 4.3.9(4)], and so the cohomology groups in question can be computed byČech cohomology: theČech-to-derived spectral sequence collapses, since the higher direct images of Cont(−, π t E) vanish. As a result, to check they vanish it will be enough to check that theČech complex
Cont G (G q , π t E) → Cont G (S, π t E) → Cont G (S × G q S, π t E) → · · ·(18)
is exact for any surjection S ↠ G q . Writing this as a limit of surjections of finite G-sets
S i ↠ S ′ i (with lim S ′ i = G q and lim S i = S), and writing A j i,I := Cont G (S i × S ′ i j , π t E/I) for brevity, (18) is the complex lim I colim i A 0 i,I → lim I colim i A 1 i,I → lim I colim i A 2 i,I → · · · .
Its cohomology therefore fits in a Milnor sequence 3
0 → lim I 1 H j (colim i A * i,I ) → H j (lim I colim i A * i,I ) → lim I H j (colim i A * i,I ) → 0.(19)
But for each fixed pair (i, I), the complex A * i,I is split; the same is therefore also true of the colimit, which thus has zero cohomology. Now (19) shows that (18) is exact, and so
π s Γ(G q , τ t ν * E) = π t Γ(G q , ν * E) s = t 0 s < t That is, Γ(G q , τ t ν * E) = Σ t π t Γ(G q , ν * E) as required.
Remark 2.31. The same proof shows that H * cond (T, Cont(−, M )) = 0 in positive degrees, for any profinite set S and profinite abelian group M . One can also deduce this from the following facts: (i) condensed cohomology of any profinite set (or indeed, any compact Hausdorff space) agrees with sheaf cohomology on the same topological space; (ii) profinite sets have homotopy dimension zero, and therefore cohomological dimension zero.
Remark 2.32. There are other ways to obtain a proétale or condensed object from E. We have chosen the approach above as we feel it gives the most self-contained proof of the result, and explicates the relation to the work of Davis; it also makes clear that E is Kan extended from E δ (in the K-local sense).
Descent for modules and the Picard spectrum
In the previous section, we showed that Morava E-theory defines a hypercomplete sheaf of spectra E on the proétale classifying site of G. Our next aim is to improve this to a statement about its module ∞-category and therefore its Picard spectrum. The main result of this section is the construction of a hypercomplete sheaf of connective spectra pic(E), with global sections Γpic(E) = pic n := pic(Sp K ). The functoriality of the construction via the proétale site will allow us to compare the resulting descent spectral sequence to to the K-local E-Adams spectral sequence, including differentials.
Descent for K(n)-local module categories
A similar strategy to that of Proposition 2.8 fails for the sheaf of module categories, since it is not clear that filtered colimits in Pr L,smon are exact; moreover, we would need an analogue of [CM21,Theorem 4.16]. Nevertheless, the rest of the section is devoted to the following result:
Corollary 3.3. There is a hypercomplete sheaf pic • Mod E,K ∈ Sh(BG proét , Sp ≥0 ) having Γ(G, pic • Mod E,K ) ≃ pic(Mod E,K ) and Γ( * , pic • Mod E,K ) ≃ pic(Sp K ) = pic n .
In particular, we get a conditionally convergent spectral sequence
E s,t 2,♠ = H s (BG proét , π t pic • Mod E,K ) =⇒ π t−s pic n .(20)
As explained in Section 3.2 this will not quite be the Picard sheaf we use, but a closely related sheaf will give us the result we are after.
We begin with the following observation, stated for later use in slightly greater generality than needed for this section.
Lemma 3.4. Let C ⊗ be a symmetric monoidal ∞-category with geometric realisations, and 1 ≤ k ≤ ∞.
Suppose that A ∈ C is an E k -algebra, and that B ∈ CAlg(C) is a descent algebra 4 in the sense that C ≃ lim Mod B (C) Mod B⊗B (C) · · · .(21)
Then
RMod A (C) ≃ lim RMod A⊗B (C) RMod A⊗B⊗B (C) · · · .
Proof. Write B ′ := A ⊗ B; this is itself an E k -algebra. We will verify the hypotheses of the Barr-Beck-Lurie theorem. Namely, (ii) RMod A (C) has limits of B ′ -split cosimplicial objects: given M • : ∆ → RMod A (C) with M • ⊗ A B ′ split, we can form the limit M in C, by the descendability assumption. Since RMod A (C) ⊂ C is closed under cosifted (in fact, all) limits, M is also a limit in RMod A (C). This limit is clearly preserved by
(i) (−) ⊗ A B ′ ≃ (−) ⊗ B is(−) ⊗ A B ′ .
Equipped with this and Lemma 2.20, we can now prove the key descent result of this section. It is a pleasure to thank Dustin Clausen for the outline of this argument, part of which has since appeared in [Hai22].
Proposition 3.5. The restriction of ν p Mod E δ ,K to the site Free G ⊂ BG proét is a sheaf: equivalently, the value of ν * Mod E δ ,K on the free G-set S with S = lim S i is ν p Mod E δ ,K (S) = lim − →i Mod E δ (Si),K .
Proof. We do this in a number of steps.
(1) The equivalence of the two statements follows from the observation that Free G also generates the proétale topos, since any G-set is covered by a free one. Note also that every free G-set is split (i.e. of the form S ≃ S/G × G), since sections always exist over its finite quotients and these can be chosen compatibly; compare [Sch12, Prop. 3.7]. A consequence is that for any free G-set S, the functor S ′ → S ′ /G is an equivalence (BG proét ) /S ≃ {G} × Profin /(S/G) , since any G-set over S is itself free. Thus suppose T = lim i T i is a profinite set over S/G; applying Lemmas 2.5 and 2.20 we deduce the equivalences
ν p Mod E δ ,K (T × G) = lim − → i,j Mod E δ (Ti×G/Uj ),K ≃ lim − → i,j Mod E δ (Ti)⊗E δ (G/Uj ),K ≃ lim − → i Mod L K lim − →j E δ (Ti)⊗E δ (G/Uj ),K ≃ lim − → i Mod E δ (Ti)⊗E,K ≃ lim − → i Mod T i E,K ≃ lim − → i Ti Mod E,K .
Under the aforementioned equivalence we think of this as a presheaf on Profin /(S/G) : that is, if T is a profinite set over S/G, then
T = lim i T i → lim i Ti Mod E,K . (2) Since T i is a finite set, Mod E,K (T i × G) ≃ Ti Mod E,K = Sh(T i , Mod E,K )
; we claim that the same formula holds for arbitrary T . Writing T = lim i T i , it will suffice to prove that the adjunction
q * : lim − → Sh(T i , Mod E,K ) ⇆ Sh(T, Mod E,K ) : q *
induced by the adjunctions (q i ) * ⊣ (q i ) * for each of the projections q i : T → T i is an equivalence. In fact, since the adjunction is obtained by tensoring the adjunction
q * : lim − → Sh(T i ) ⇆ Sh(T ) : q * ,
with Mod E,K (combine [SAG, Remark 1.3.1.6 and Prop. 1.3.1.7]), it will suffice to prove that this is an equivalence.
Let us assume for notational convenience that the diagram T i is indexed over a filtered poset J, which we can do without loss of generality. Then the claim is a consequence of the fact that the topology on T is generated by subsets q −1 i (x i ). Indeed, let C := Open(T ); subsets of the form q −1 i (x i ) form a clopen basis, and we will write B ⊂ C for the full subcategory spanned by such. Note that
Sh(T ) ≃ P Σ (B),
where the right-hand side denotes the full subcategory of presheaves that send binary coproducts to products. On the other hand, an object of lim − → Sh(T i ) is a Cartesian section of the fibration determined by i → Sh(T i ); abusively, we will denote such an object by (F i ), where F i ∈ Sh(T i ), leaving the coherence data implicit. Write also (q j,∞ ) * :
Sh(T j ) ⇄ lim − → Sh(T i ) : (q j,∞ ) *
for the colimit adjunction. Similarly to Lemma 2.5, one verifies that the map
lim − → (q i ) * F i → q * ((F i )),
obtained by adjunction from the maps (q j,∞ ) * (η):
F j = (q j,∞ ) * ((F i )) → (q j,∞ ) * q * q * ((F j )) = (q j ) * q * ((F i ))
as j varies, is an equivalence. Likewise, adjunct to q * ((
q i ) * F) ≃ lim − → (q i ) * (q i ) * F → F is an equivalence ((q i ) * F) ∼ − → q * F.
By restricting to B (where no sheafification is required for forming the left adjoint), we will show that the unit and counit of the adjunction are equivalences. For the counit q * q * F → F, this is clear:
[q * q * F] (q −1 i (x i )) ≃ lim − → j≥i (q j ) * (q j ) * F (q −1 i (x i )) ≃ lim − → j≥i (q j ) * (q j ) * F(q −1 i (x i )) ≃ lim − → j≥i (q j ) * F(q −1 ij (x i )) ≃ lim − → j≥i F(q −1 i (x i ) ≃ F(q −1 i (x i )).
For the unit (F j ) → q * q * (F j ), it will suffice to prove for each i that the canonical map
F i → (q i ) * lim − → j≥i (q j ) * F j(22)
is an equivalence. But
(q i ) * lim − → j≥i (q j ) * F j (x i ) ≃ lim − → j≥i (q j ) * F j (q −1 i (x i )) ≃ lim − → j≥i (q j ) * F j (q −1 i x i ) ≃ lim − → j≥i F j (q −1 ij x i ) ≃ lim − → j≥i [(q ij ) * F j (x i )] ,
with respect to which (22) is the structure map for j = i. This is an equivalence: since each of the coherence maps (ii ) Sp K , and so Mod E,K , is compactly generated 5 (so in particular compactly assembled);
F i → (q ij ) * F j defining the colimit is an equivalence by definition of lim − → Sh(T i ), the diagram j → (q ij ) * F j (x i )
(iii ) any profinite set T has homotopy dimension zero by [
S × G × G T 0 × G × G T 1 × G × G · · · S × G T 0 × G T 1 × G · · · S T 0 T 1 · · ·
To prove descent for the hypercover, it is enough to show descent for each column and for each non-negative row. For each such row we are in the context of Lemma (3.5) and the following remark, and so obtain a limit diagram after applying ν p Mod E δ ,K . On the other hand, applying ν p Mod E δ ,K to a column we obtain the complex
lim − →i Mod E(T i j ),K lim − →i Mod E(T i j ×G/U k ),K lim − →i Mod E(T i j ×G/U k ×G/U k ),K · · ·
Using Lemma 2.5 and the equivalences of K-local E ∞ -rings
lim − → i,k E(T i j × (G/U k ) n ) ≃ ν p E(T j ) ⊗ E ⊗n one identifies this with the complex Mod ν p E(Tj ),K Mod ν p E(Tj )⊗E,K Mod ν p E(Tj )⊗E⊗E,K · · · .(23)
When T j = * , this is the complex
Sp K Mod E,K Mod E⊗E,K · · ·
which is a limit diagram according to [Mat16]. For general T , (23) is a limit diagram by combining the T = * case with Lemma 3.4.
The Picard spectrum as a proétale spectrum
In
π t (pic • Mod E,K ) = Cont G (−, π t pic(E)) .(24)
for t = 0, 1; for t ≥ 2 the result follows from Lemma 2.23 and the isomorphism π t pic(A) ≃ π t−1 A, natural in the ring spectrum A. In fact, it will be more straightforward instead to modify the sheaf pic • Mod E,K slightly so that (24) holds.
Definition 3.6. Given a K-local E ∞ -ring A, write Pic ′ (A) (resp. Pic ′ (A), pic ′ (A)) for the Picard group (space, spectrum) of the symmetric monoidal subcategory Mod perf A,K := Thick(A) ⊂ Mod A,K .
We will prove that the presheaf pic ′ • E is also a hypercomplete sheaf, and that its homotopy sheaves are given by
π t (pic ′ • E) = Cont G (−, π t pic(E))
as desired.
Remark 3.7. If T is a finite set, then E(T × G) ≃ T E, and so
Cont (T, Pic(E)) ≃ T Pic(E) ≃ Pic(Mod E(T ×G),K ).
If T is an arbitrary profinite set, the isomorphisms above induce a canonical map
χ : Cont (T, Pic(E)) → Pic(Mod E(T ×G),K ),
defined explicitly as follows: any continuous map f : T → Pic(E) defines a clopen decomposition T = T 0 ⊔T 1 , and projecting to finite quotients gives
T i = T 0 i ⊔ T 1 i for i sufficiently large. Using E(T × G) = L K lim − →i Ti E we set χ(f ) = L K lim − → i T 0 i E ⊕ T 1 i ΣE ∈ Pic(Mod E(T ×G),K ).
Consequently, to understand the E 2 -page of the descent spectral sequence for the unmodified sheaf pic • Mod E,K , we would like to know:
Question 3.8. Is χ an isomorphism?
Under the equivalence Mod E(T ×G),K ≃ Sh(T, Mod E,K ) of Proposition 3.5, the E(T × G)-module χ(f ) corresponds to a locally free sheaf; in fact, it is the retract of the free sheaf E T ⊕ ΣE T along the idempotent
1 T 0 0 0 1 T 1 : E T ⊕ ΣE T → E T ⊕ ΣE T ,
where 1 X denotes an indicator function. In particular, χ(f ) is a perfect object of Mod E(T ×G),K . Rephrasing, Question 3.9. Similarly, is any K-locally invertible object of Sh(T, Mod E,K ) perfect?
Remark 3.10. In certain cases we do know that pic ′ (E(S)) ≃ pic(E(S)): for example, this happens for E, since Pic(E) = {E, ΣE}. In fact, this implies the same for any G-set with finitely many orbits, by the following descent argument.
Suppose S is a G-set, and choose a covering S ′ → S. Suppose moreover that all K-locally invertible E(S ′ )modules are perfect, so Pic ′ (E(S ′ )) = Pic(E(S)); we claim that the same is true for E(S). Indeed, if M ∈ Pic(E(S)), then M ⊗ E(S) E(S ′ ) ∈ Pic(E(S ′ )) = Pic ′ (E(S ′ )), and in particular M is perfect as an E(S ′ )-module. But in the proof of Proposition 2.8 we observed that the Amitsur complex for E(S) → E(S ′ ) is d-rapidly convergent, and this extension is therefore descendable in the sense of [Mat16]. Perfect objects satisfy descent with respect to descendable extensions [Mat16, Theorem 3.28], so that M is perfect in Mod E(S) . For computations of Pic n there is therefore no harm in restricting to perfect objects. Proof. Suppose S • → S −1 = S is a hypercovering; by Proposition 2.8 we know that the diagram
Mod E(S),K Mod E(S0),K Mod E(S1),K · · ·(25)
is a limit. If M ∈ Mod E(S−1,K is perfect then so is each of its base-changes to E(S n ), since each of these functors preserves the unit. Thus the (fully faithful) inclusion Mod perf
E(S),K → Mod E(S),K ≃ Tot Mod E(S•),K factors through Mod perf E(S),K θ − → lim Mod perf E(S0),K Mod perf E(S1),K · · · ,
and θ is necessarily fully faithful too (since mapping spaces in a limit are the limit of mapping spaces, and each inclusion of perfect objects is fully faithful). To conclude, we must show that θ is essentially surjective. That is, suppose N ∈ Tot Mod E(S•),K , the image under (25) of an E(S)-module M ; we must show that in fact M is perfect. But by assumption, N has image in Mod perf E(S0),K ⊂ Mod perf E(S0),K . As above, the extension E(S) → E(S 0 ) is descendable, which implies that M itself is perfect.
Remark 3.13. Since Mod perf E ≃ Mod perf E,K , we deduce that Mod perf E
is a hypercomplete sheaf of ∞-categories on BG proét . We do not know if the same holds for Mod E , since one no longer has a description of this as sheaves on a profinite space.
Definition 3.14. The Picard sheaf of E is the hypercomplete sheaf of connective spectra
pic(E) : S → pic Mod perf E(S),K = Γ(S, pic ′ • E).
The Picard spectral sequence is its descent spectral sequence, formed using a Postnikov tower in proétale sheaves:
E s,t 2 = H s (BG proét , π t pic(E)) =⇒ π t−s pic n .(26)
Our next aim is to compute the homotopy groups of pic(E); we will show that restricting to perfect objects implies that π 0 pic(E) has the expected condensed topology. That is, we will prove the following result:
Proposition 3.15. The homotopy sheaves of the Picard sheaf are given by
π t pic(E) = Cont G (−, π t pic(E)) = Cont G (−, Pic(E)) t = 0 Cont G (−, (π 0 E) × ) t = 1 Cont G (−, π t−1 E) t ≥ 2(27)
Corollary 3.16. The starting page of the descent spectral sequence for the Picard spectrum pic(E) is given by continuous group cohomology: E s,t 2 = H s (G, π t pic(E)).
Proof. We have already noted that the requisite conditions hold for the sheaves π t pic(E) ≃ π t−1 E when t ≥ 2; all that remains to justify is what happens at t = 0 and 1. But by Proposition 3.15, π 0 pic(E) ≃ Cont G (−, Z/2) certainly satisfies the hypotheses of [BS14, Lemma 4.3.9]), while π 1 pic(E) = Cont G (−, (π 0 E) × ), and (π 0 E) × is the limit of finite G-modules (π 0 E/I) × .
To prove Proposition 3.15 it will be convenient to work with the sheaf model of E-modules. We begin by recording two basic lemmas: the following is standard, e.g. [Sta, Tag 0081].
Lemma 3.17. Let T be a topological space, and A a set. The constant sheaf A T on the T takes the form
U → LC (U, A) ,(29)
that is, locally constant functions U → A.
Lemma 3.18. For any F ∈ Sh(T, Mod E,K ), Proof. Since F is perfect, its homotopy sheaves are finitely generated over π * E T (since this is a thick property), and we can choose a surjection
[E T , F] ≃ Hom (π * E T , π * F) Proof. We have isomorphisms [E T , F] ≃ π 0 Map (E T , F) ≃ π 0 ΓF ≃ Γπ 0 F ≃ Hom (π * E T , π * F) ,β * : n 1 π * Σ εi E T → π * F.
This lifts to a map β:
n 1 Σ εi E T → F inducing a surjection on π 0 , whose fibre must also be perfect since Sh(T, Mod E,K ) perf is thick. Choosing another surjection, we form a finite presentation
n ′ 1 π * Σ εi E T α * −→ n 1 π * Σ εi E T β * −→ π * F → 0,(30)
where ε i ∈ {0, 1}. We first claim that π * F is locally free. But α * is a global section of the sheaf
H * := H om n ′ 1 π * Σ εi E T , n 1 π * Σ εi E T , obtained by sheafifying U → Hom n ′ 1 π * Σ εi E U , n 1 π * Σ εi E U = n ′ ×n Hom (π * Σ εi E U , π * Σ εi E U ) = M n ′ 0 ,n0 (Hom (π * E U , π * E U )) × M n ′ 1 ,n1 (Hom (π * ΣE U , π * ΣE U )) ≃ M n ′ 0 ,n0 (Γπ * E U ) × M n ′ 1 ,n1 (Γπ * E U ) = M n ′ 0 ,n0 (LC (U, π * E)) × M n ′ 1 ,n1 (LC (U, π * E)) = LC U, M n ′ 0 ,n0 (π * E) × M n ′ 1 ,n1 (π * E) .
In particular this is already a sheaf, so H * = LC −, M n ′ 0 ,n0 (π * E) × M n ′ 1 ,n1 (π * E) and α * corresponds to some locally constant matrix T → M n ′ ,n (π * E). Now choose x ∈ T ; since F is invertible, F x is an invertible E-module, so F x = Σ ε E for some ε ∈ {0, 1}. Thus
coker(α * ,x ) = (π * F) x = π * (F x ) ≃ π * Σ ε E
Since α is locally constant as a function T → M n (π * E), this implies that it is given on some neighbourhood U ∋ x by the constant function α * ,x , and so π * F| U is free of rank one.
This gives an isomorphism β ′ * : π * Σ ε E U ≃ π * F| U ; but this lifts to a map β ′ : E U → F| U , which is an equivalence by checking on homotopy sheaves.
Proof. (Proposition 3.15). For t ≥ 1, this follows from the equivalences of sheaves of infinite loop-spaces
ΩPic (Mod E,K ) ≃ ΩPic Mod perf E,K ≃ GL 1 (E),
which hold because each of the terms depends only on the component of the unit in the Picard space.
Restricting to perfect objects is only necessary for evaluating π 0 , and Proposition 3.19 implies that any perfect invertible module is locally free, so that
χ : Cont (T, Pic(E)) → π 0 pic ′ (E(T × G))
is an isomorphism. This induces an isomorphism of sheaves on Free G
Cont G (−, Pic(E)) ≃ Cont ((−)/G, Pic(E)) ≃ π 0 pic(E).
This completes the construction of the Picard sheaf pic(E), and the proof that its homotopy sheaves take the desired form; Corollary 3.16 follows. We will now study the resulting descent spectral sequence, as we did for the descent spectral sequence of E in Section 2.
The construction of the descent spectral sequence using a Postnikov-style filtration is useful in making the comparison with the Picard spectral sequence (26). Namely, we make the following observation: Proof. This can be seen directly by comparing the Postnikov towers. The first part is clear, since the t-th Postnikov section of a spectrum (for t ∈ [a, b]) depends only on its [a, b]-truncation. We therefore have an equivalence τ t p 1 F ≃ τ t p 2 F at the level of presheaves, and so also after sheafifying.
For the second part, suppose x ∈ E s,t r,p1F is represented by x ∈ π s−t Γτ t p 1 F with liftx ∈ π s τ [t,t+r−1] p 1 F. Then d r,p1F (x) is the image ofx in π s−1 τ t+r p 1 F. But the natural isomorphism τ [a,b] p 1 ≃ τ [a,b] p 2 induces an equivalence between the diagrams
τ [a,b] p i F τ [a,b−1] p i F · · · τ [a,a+1] p i F τ [a,a] p i F τ b p i F τ b−1 p i F · · · τ a+1 p i F τ a p i F
for i = 1, 2, which ensures that d r,p1F (x) = d r,p2F (x) as long as t + r ≤ b.
Remark 3.21. We are not asserting that x ∈ E 2,p1F survives to E r if and only if the corresponding class in E 2,p2F does; this should really be taken as another assumption on the class x.
In our setting, the equivalence
τ [t,2t−2] pic ′ (A) ≃ τ [t,2t−2] ΣA(31)
of [HMS17], valid for t ≥ 2 and functorial in the ring spectrum A, implies immediately the following corollary:
Corollary 3.22. Let A ∈ Sh(BG proét , CAlg (p) ), and consider the two spectral sequences (14) and (26). Then
(i) E s,t 2 ≃ E s,t−1 2,+ if t ≥ 2.
(ii) The differentials d r and d r,+ on E s,t r ≃ E s,t−1 r,+ agree as long as r ≤ t − 1 (whenever both are defined).
Finally, we want to prove décalage for the sheaf pic(E), as we did for the sheaf E itself. This will allow us to determine the differential d t on classes in E s,t 2 too. Proposition 3.23. Décalage of the Postnikov filtration induces an isomorphism between the following spectral sequences:
E s,t 2 = π t−s Γτ t E = H s (G, π t pic(E)) =⇒ π t−s pic n , E 2s−t,s 3 = π s π t pic ′ (E ⊗•+1 ) =⇒ π s pic n .
The first is the Picard spectral sequence (26), and the second is the Bousfield-Kan spectral sequence for the cosimplicial spectrum Γ(G • , pic(E)) = pic(Mod perf E ⊗• ,K ). Remark 3.24. The Bousfield-Kan spectral sequence of Proposition 3.23 agrees in the region t > 0 with Heard's spectral sequence [Hea22, Theorem 6.13]. The starting pages may differ in internal degree t = 0, at least when the filtration is positive.
Proof. As in Proposition 2.30, this follows from Proposition 2.29 once we prove the following equivalences:
Γ(G q , τ 0 pic(E)) ≃ π 0 Γ(G q , pic(E)), Γ(G q , Σπ 1 pic(E)) ≃ τ 1 Γ(G q , pic(E)).
Using Proposition 3.15, we compute that
H t−s (G q , Pic(E)) ≃ H t−s (G q , (π 0 E) × ) = 0
for t − s > 0, since condensed cohomology with profinite coefficients vanishes (Remark 2.31).
Following the method of [MS16], we can now identify the first new differential in the Picard spectral sequence: Corollary 3.25. Suppose that t ≥ 2 and x ∈ E t,t 2 . We abuse notation to identify x with its image in the starting page of the descent spectral sequence for E, and assume that both classes survive to the respective E t -pages. The first nonadditive differential on x in the Picard spectral sequence is
d t x = d ASS t x + x 2(32)
The ring structure in the right hand side is that of the K-local E-Adams spectral sequence (14).
Proof. The same proof that appears in [MS16] goes through: namely, this formula holds for the universal cosimplicial E ∞ -ring having a class in E t,t 2 of its Bousfield-Kan spectral sequence, and so for the cosimplicial spectrum E(G •+1 ) = E ⊗•+1 too; note that since x has internal degree at least two, it does not matter if we use the sheaf pic(E) or pic • Mod E,K .
Picard group computations
In the previous parts we constructed proétale models for the continuous action of G on Morava E-theory, its K-local module ∞-category and its Picard spectrum. Respectively, these are E (constructed in Section 2), Mod E,K (in Section 3.1), and pic(E) (in Section 3.2). We also described the resulting spectral sequences. In this section, we compute the Picard spectral sequence in the height one case and use this to give a new proof of the results of [HMS94] at all primes. As is common at height one, this splits into two cases: the case of odd primes, and the case p = 2. In both cases, the strategy is first to compute the descent spectral sequence for E (which by Proposition 2.30 is the K-local E-Adams spectral sequence) and then to use this to compute the Picard spectral sequence. However, the spectral sequences look somewhat different in the two cases. We start with some generalities, which are true uniformly in n and p.
Morava modules and the algebraic Picard group
A productive strategy for studying Pic n is to compare it to a certain algebraic variant Pic alg n first defined in [HMS94, §7]; we recall its definition below. In [Pst22], Pstrągowski shows that the algebraic approximation is precise if p ≫ n, a consequence of the fact that in this case the vanishing line in the Adams spectral sequence occurs at the starting page. While there is always a vanishing line, when p − 1 | n it appears only at a later page, since G no longer has finite cohomological dimension mod p. In this section we will show that this leaves a possibility for exotic Picard elements, and more importantly explain how to identify these in the Picard spectral sequence (Theorem 4.4).
To do so, we recall that the completed E-homology of a K-local spectrum X is
E ∨ * X := π * (E ⊗ X) = π * L K (E ∧ X).
This is naturally a π * E-module: indeed K-localisation is symmetric monoidal, and therefore sends the Emodule E ∧ X to a module over L K E = E. As we discuss below, the abelian group E ∨ * X has significantly more structure, and is a crucial tool in understanding the K-local category. It was first studied by Hopkins, Mahowald and Sadofsky in [HMS94], where it is denoted K n, * (−); it is almost a homology theory, but fails to preserve infinite coproducts as a result of the failure of K-localisation to be smashing. It is nevertheless an extremely effective invariant for Picard group computations, by virtue of the following theorem: In particular, Theorem 4.1 implies that completed E-homology is not a useful invariant of invertible K-local spectra when we only remember its structure as a π * E-module: since π 0 E is a Noetherian local ring, its Picard group is trivial. To get a more interesting invariant, we should remember the equivariant structure coming from the Morava action on E. That is, if X is a K-local spectrum then G acts on E ⊗ X by acting on the first factor, and therefore acts on E ∨ * X. This action makes E ∨ * X into a twisted G-π * E-module, which means by definition that g(a · x) = ga · gx for x ∈ E ∨ * X, a ∈ π * E and g ∈ G. We will write Mod G π * E for the category of twisted G-π * E-modules.
Remark 4.2.
For any twisted G-π * E-module M , the G-action is continuous for the I n -adic topology: if gx = y ∈ M , then Section 4.1 implies that
g(x + ax ′ ) = y + ga · gx ′ ∈ y + I k n M
for a ∈ I k n and x ′ ∈ M , since the action of G on π * E fixes the I n -adic filtration; that is, g −1 (y + One of the main theorems of [HMS94] is the computation κ 1 ≃ Z/2 at the prime 2 (this is Theorem 3.3 therein). We want to show how the descent spectral sequence for pic(E) recovers this computation, and we begin by identifying the algebraic Picard group in the spectral sequence. The aim of this subsection is therefore to prove the following result: (1) For any R, the full subcategory spanned by the L-complete modules is a thick abelian subcategory of Mod ♥ R , with enough projective generators [HS99, Theorem A.6 and Corollary A.12]. This is denoted M in op. cit., but to avoid a clash of notation later on we will write Mod ♥cpl R . The projective objects are precisely those R-modules which are pro-free, that is M = L 0 S R for some (possibly infinite) set S.
(2) The functor L 0 from modules to L-complete modules is a localisation. In particular, colimits in Lcomplete are computed as
L 0 colim M,
where colim M denotes the colimit at the level of modules. Thus Mod ♥cpl R is still generated under colimits by the L-completion of the unit, and in particular κ-presentable where κ is chosen so that L 0 R is κ-compact (note that we may have to take κ > ω).
(3) Any m-adically complete module is L-complete [HS99,Theorem A.6]. In particular, π * E is an Lcomplete module over itself.
(4) The category Mod ♥cpl R admits a unique symmetric monoidal product making L 0 a monoidal functor. This is given by the formula
M ⊗ L0R N = L 0 (M ⊗ R N )
for M and N L-complete. For m-adically complete modules, one can also define the m-complete module
(M ⊗ R N ) ∧ m ≃ (M ⊗ L0R N ) ∧ m ,
but we will have no use for this.
(5) Derived completion agrees with ordinary completion on finitely generated modules, and on projective modules: in other words, ϵ M is an isomorphism in either of these cases. Moreover, for any N the composition If A is an L-complete R-algebra (not necessarily Noetherian), we can define a category of A-modules which are L-complete with respect to m ⊂ R, Mod ♥cpl
M ⊗ R L 0 N → L 0 M ⊗ R L 0 N → L 0 (M ⊗ R N ) = L 0 M ⊗ L0R L 0 N isA := Mod A (Mod ♥cpl R
). In this case it is not clear that every invertible module is finitely presented (compare the discussion in Section 3.2); any invertible module which is finitely presented will also be invertible with respect to the uncompleted tensor product.
We now specialise to the case (R, m) = (π 0 E, I n ), and work towards the proof of Theorem 4.4. It is shown in [HS99,Proposition 8.4] that for any K-local spectrum X, the π 0 E-module E ∨ 0 X is L-complete, and that E ∨ * X is finitely generated over π * E if and only if X is K-locally dualisable [HS99, Theorem 8.6]. Our first task is to prove that the presheaf of 1-categories
S → Mod ♥cpl π * E(S)
is a stack on Free G .
Warning 4.5. We would like to proceed as in Section 3.1, but there is a small subtlety: namely, to deduce descent from the results of [Hai22] (as in part (3) of the proof of Proposition 3.5), we would need to show that (the nerve of) Mod ♥cpl π * E is compactly generated; in fact, it would suffice to show it is compactly assembled. This is not clear: for example, Barthel and Frankland observe [BF15, Appendix A] that the unit in Mod ♥cpl R (which is a generator) cannot be compact. For our purposes it is enough to find any (small) set of compact generators: by comparison, the K-local category is compactly generated by the K-localisation of any finite type-n spectrum, even though its unit is not compact.
At this point it will be useful to pass to the ∞-category of complete modules over the discrete rings π 0 E(S), as defined in [SAG,§7.3] or [BS14,§3.4]. This can be seen to be compactly generated by virtue of local (=Greenlees-May) duality; we will make use of this observation in the proof of Proposition 4.8. Given a discrete commutative ring R complete with respect to a finitely generated maximal ideal m, we will view R as a (connective) E ∞ -ring and write Mod cpl R ⊂ Mod R for the sub-∞-category of complete objects [SAG, Definition 7.3.1.1], which is a localisation of Mod R . There is a unique symmetric monoidal product on Mod cpl R for which the localisation is a monoidal functor, and to avoid confusion we denote this by ⊗ Example 4.6. To give an example which makes the difference between 1-categories and ∞-categories apparent, one can consider the colimit along the multiplication-by-p maps Z/p → Z/p 2 → · · · over Z p . The colimit in Mod ♥cpl Zp is L 0 (Z/p ∞ ) = 0; on the other hand, in the ∞-categorical setting one has L lim − → Z/p = ΣL 1 (Z/p ∞ ) = ΣZ p . We do not know if Z p can be written as the filtered colimit of finite p-groups in Mod ♥cpl Zp in some other way.
In particular, this example shows that the t-structure on Mod cpl R is generally not compatible with colimits in the sense of [HA, Definition 1.2.2.12].
Lemma 4.7. For any profinite set T , we have an equivalence
Mod cpl π0E(G×T ) ≃ Sh(T, Mod cpl π0E ),(33)
Proof. According to [Hov08, Corollary 2.5], if i → X i ∈ Sp K is a filtered diagram such that E ∨ * X i is pro-free for each i ∈ I, then the natural map
L 0 lim − → i E ∨ 0 X i → E ∨ 0 lim − → i X i
is an isomorphism. In particular this applies to give the middle equivalence in
π 0 E(G × T ) ≃ E ∨ 0 lim − → i Ti 1 K ≃ L 0 lim − → i Ti π 0 E = L 0 lim − → i π 0 E(G × T i ),(34)
since E ∨ 0 Ti 1 K = Ti E ∨ 0 1 K = Ti π 0 E is certainly pro-free, each T i being finite. As a result, item (2) implies that π 0 E(G × T ) is the colimit in Mod ♥cpl π0E of the algebras π 0 E(G × T i ) = Ti π 0 E. In fact this particular example is also the limit in Mod cpl π0E : indeed, for s > 0 we see that
L s lim − → π 0 E(G × T i ) = 0 by [HS99, Theorem A.2(b)], since lim − → Ti π 0 E is projective in Mod ♥ π0E
. This yields the first of the following equivalences of (presentably symmetric monoidal) ∞-categories:
Mod cpl π0E(G×T ) ≃ Mod cpl L0 lim − → π0E(G×Ti) ≃ lim − → Mod cpl π0E(G×Ti) ≃ Sh(T, Mod cpl π0E ).
The second equivalence follows from Lemma 2.5, and the second can be proved identically to part (2) of the proof of Proposition 3.5 (replacing E there by π 0 E).
Proposition 4.8. The presheaf S → Mod ♥cpl,perf π0E(S)
it is a stack of 1-categories; in other words, it is a sheaf of 1-truncated symmetric monoidal ∞-categories on Free G .
Proof. We will proceed in a few steps: we will begin by showing that S → Mod cpl π0E(S) is a sheaf of ∞categories, and then deduce the desired result at the level of 1-categories.
(1) Since any covering in Free G is of the form p × G:
T ′ × G → T × G for p : T ′ → T aT → Sh(T, Mod cpl π0E ) ≃ Sh(T ) ⊗ Mod cpl π0E
is a sheaf on Profin by the proof of Theorem 3.1, and so Lemma 4.7 implies that (35) is a sheaf too.
(2) Next, we restrict item (1) to perfect objects. As in the proof of Proposition 3.12, it will suffice to show that the following holds for any covering T ′ → T of profinite sets:
a π 0 E(T × G)-module M is finitely presented if M ⊗ π0E(T ×G) π 0 E(T ′ × G)
is. As in loc. cit. this follows from the map π 0 E(S) → π 0 E(S ′ ) being descendable: in fact, this is the colimit of maps Ti π 0 E → T ′ i π 0 E, each of which is the equaliser of
T ′ i π 0 E ⇒ T ′ i π 0 E ⊗ T i π0E T ′ i π 0 E = T ′ i × T i T ′ i π 0 E.
(3) Finally, we will deduce descent for the presheaf of 1-categories S → Mod ♥cpl π0E(S) . Restricting to perfect (=finitely presentable) objects using item (2) will give the proposition. Given a covering p : T ′ → T of profinite sets, we form the diagram
Mod ♥cpl π0E(S) lim Mod ♥cpl π0E(S ′ ) Mod ♥cpl π0E(S ′ × S 2 ) Mod ♥cpl π0E(S ′ × S 3 ) · · · Mod cpl π0E(S) lim Mod cpl π0E(S ′ ) Mod cpl π0E(S ′ × S 2 ) Mod cpl π0E(S ′ × S 3 ) · · · ∼ θ
where S (i) = T (i) × G as usual. Note that the limit of the top row can be computed as the limit of the truncated diagram ∆ ≤2 → ∆ → Cat ∞ , since each term is a 1-category. Moreover, the diagram shows that θ is fully faithful, and so to prove descent it remains to show that θ is essentially surjective. That is, given
M ∈ Mod cpl π0E(S) with M ⊗ L π0E(S) π 0 E(S ′ ) discrete,
we must show that M was discrete to begin with. To this end, we claim first that π 0 E(S ′ ) is projective over π 0 E(S); since the graph of p exhibits π 0 E(S ′ ) = Cont (T ′ , π 0 E) as a retract of Cont (T ′ × T, π 0 E) = Cont (T ′ , Cont (T, π 0 E)), it will suffice for this part to show that the latter is projective. But Cont (T, π 0 E) is pro-discrete, which implies that
Cont (T ′ , Cont (T, π 0 E)) ∼ = lim I lim − → i T ′ i Cont (T, π 0 E/I) ∼ = lim I lim − → i T ′ i Cont (T, π 0 E) ⊗ π0E π 0 E/I ∼ = lim I lim − → i T ′ i Cont (T, π 0 E) ⊗ π0E π 0 E/I ∼ = L 0 lim − → i T ′ i Cont (T, π 0 E)
is pro-free. To obtain the final isomorphism, we've used the fact that each term in the colimit is free over Cont (T, π 0 E), so that the (uncompleted) colimit is projective. As a result, for any complete π 0 E(S)-module spectrum M we have
π * M ⊗ L π0E(S) π 0 E(S ′ ) = (π * M ) ⊗ π0E(S) π 0 E(S ′ ).
Since π 0 E(S ′ ) is faithful over π 0 E(S), we deduce that M is discrete whenever its basechange is.
It is convenient at this stage to work with Picard spaces, which as usual we denote Pic.
Corollary 4.9. The presheaf S → Pic Mod ♥cpl,perf
π * E(S)
is a sheaf of groupoids on Free G .
Proof. By Proposition 4.8, the assignment
S → Pic Mod ♥cpl,perf π0E(S)(36)
is a sheaf. But any finitely presented invertible module is locally concentrated either in even or in odd degrees, and so (36) extends to the graded case.
We are now equipped to prove the promised result, identifying the algebraic elements in the Picard spectral sequence.
Proof. (Theorem 4.4). In the 0-stem of the descent spectral sequence, the bottom two lines compute the image of the map π 0 Γpic(E) → π 0 Γτ ≤1 pic(E). We will argue by computing the target, identifying it with the algebraic Picard group.
First recall that by definition, pic(E) = pic(Mod perf E,K ). Observe that τ ≤1 Pic(Mod perf E(S),K ) = Pic(hMod perf E(S),K ), and so τ ≤1 Pic(E) is the sheafification of
S → Pic hMod perf E(S),K .
Given E(S)-modules M and M ′ with M ∈ Pic(Mod perf E(S),K ), we saw in Proposition 3.19 that M and so π * M is locally free, and therefore the latter is projective over π * E(S). The universal coefficient spectral sequence [EKMM,Theorem 4.1] over E(S) therefore collapses. Thus
[M, M ′ ] E(S) ≃ Hom π * E(S) (π * M, π * M ′ ) ,
and we see that the functor
π * : Pic hMod perf E(S),K → Pic Mod ♥cpl,perf π * E(S)(37)
is fully faithful. It is in fact an equivalence: any finitely presented invertible module over π * E(S) is also invertible with respect to the unlocalised tensor product, and so projective, and in particular lifts to Mod E(S),K [Wol98, Theorem 3].
By Corollary 4.9, no sheafification is therefore required when we restrict τ ≤1 Pic Mod perf E,K to Free G and we obtain
Γτ ≤1 Pic(E) ≃ lim Pic hMod perf E(G),K Pic hMod perf E(G 2 ),K Pic hMod perf E(G 3 ),K ≃ lim Pic Mod ♥cpl,perf π * E Pic Mod ♥cpl,perf Cont(G,π * E) Pic Mod ♥cpl,perf Cont(G×G,π * E) ≃ lim Pic (Mod π * E ) Pic Mod Cont(G,π * E) Pic Mod Cont(G×G,π * E)
This is the groupoid classifying twisted G-π * E-modules with invertible underlying module; in particular, π 0 Γτ ≤1 Pic(E) ≃ Pic alg n . On free G-sets, the truncation map
Pic Mod perf E(S),K → τ ≤1 Pic Mod ♥cpl,perf E(S),K ≃ Pic Mod ♥cpl,perf π * E(S)
is just M → π * M . On global sections, it therefore sends M to the homotopy groups of the associated descent datum for the covering G → * ; this is its Morava module E ∨ * M = π * E ⊗ M , by definition. (ii) surjective if and only if there are no differentials in the Picard spectral sequence having source (0, 1) (the generator of the Z/2 in bidegree (0, 0) certainly survives, and represents Σ1 K ).
This refines the algebraicity results in [Pst22], although in practise it is hard to verify either assertion without assuming a horizontal vanishing line at E 1 in the ASS (which is what makes the results of op. cit. go through).
κ n ∼ = H 2p−1 (G, π 2p−2 E).
Proof. This follows from sparsity in the Adams spectral sequence. By [Hea14, Prop. 4.2.1], the lowestfiltration contribution to the exotic Picard group comes from E 2p−1,2p−1 ∞ . If p − 1 ∤ n the vanishing line in the Adams spectral sequence occurs at the starting page, and if moreover n 2 ≤ 4p − 4 then the group E 2p−1,2p−1 ∞ is the only possibly nonzero entry in this region, and hence fits in the exact sequence
1 → (π 0 E) × d2p−1 −−−→ H 2p−1 (G, π 2p−2 E) → E 2p−1,2p−1 ∞ → 0.
In fact, the differential must vanish. Indeed, a weak form of chromatic vanishing proven in [BG18, Lemma 1.33] shows that
H 0 (G, π 0 E) ≃ H 0 (G, W(F p n )) ≃ H 0 (Gal(F p n /F p ), W(F p n )) ≃ Z p .
This isomorphism is a component of the map on the E 2 -pages of Adams spectral sequences associated to the diagram 1 p 1 K
SW E
Here 1 p denotes the p-complete sphere, and SW the spherical Witt vectors of F p n ; the bottom map is defined under the universal property of SW [Lur18, Definition 5.2.1] by the inclusion F p n → (π 0 E)/p. Since (π 0 SW hGal ) × = (π 0 1 p ) × = Z × p , the group E 0,1 2 ≃ Z × p in the Picard spectral sequence consists of permanent cycles.
Example 4.12. At the prime three this gives κ 3 ∼ = H 5 (G, π 4 E). In this case, the Morava stabiliser group has cohomological dimension nine.
Example 4.13. In the boundary case 2p − 1 = cd p (G) = n 2 , we can use Poincaré duality to simplify the relevant cohomology group: this gives To our knowledge, it is not known if there are infinitely primes p for which 2p − 1 is a perfect square; this is closely tied to Landau's (unsolved) fourth problem, which asks if there are infinitely many primes of the form n 2 + 1.
κ n ∼ = H 2p−1 (G, π 2p−2 E) ∼ = H 0 (S, π 2p−2 E) Gal .(38)
Picard groups at height one
It is well-known that Morava E-theory at height one (and a fixed prime p) is p-completed complex K-theory KU p , and as such its homotopy is given by
π * E = Z p [u ±1 ],(39)
with u ∈ π 2 E the Bott element. In this case, the Morava stabiliser group is isomorphic to the p-adic units Z × p , acting on KU p by Adams operations ψ a : u → au.
The K-local E-Adams spectral sequence therefore reads
E s,t 2,+ = H s (Z × p , Z p (t/2)) =⇒ π t−s L K S,(40)
−5 −4 −3 −2 −1 0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 1 1 1 Figure 1: The E1-page of the HFPSS for E at odd primes (implicitly at p = 3). Squares denote Zp-summands, and circles are p-torsion summands (labelled by the degree of the torsion). This recovers the well-known computation of π * 1 K at height 1 and odd primes.
where Z p (t/2) denotes the representation
Z × p t/2
− − → Z × p → Z p when t is even, and zero when t is odd. Note that these are never discrete Z × p -modules, except at t = 0. Nevertheless, cohomology of continuous pro-p modules is sensible for profinite groups G of type p-FP ∞ [SW00, §4.2]. The small Morava stabiliser groups S are in general p-adic Lie groups, and so satisfy this assumption; this implies that G is type p-FP ∞ , since S is a finite index subgroup. In this case cohomology is continuous, in the sense that its value on a pro-p module is determined by its value on finite quotients:
H * (G, lim ← − M i ) = lim ← − H * (G, M i ).
Under the same assumption on G, there is also a Lyndon-Hochschild-Serre spectral sequence for any closed normal subgroup N < o G [SW00, Theorem 4.2.6]: that is,
E i,j 2 = H i (G/N, H j (N, M )) =⇒ H i+j (G, M ).(41)
See also [Jan88, Theorem 3.3] for a similar result for profinite coefficients that are not necessarily pro-p: one obtains a Lyndon-Hochschild-Serre spectral sequence by replacing the Galois covering X ′ → X therein by a map of sites BGé t → BNé t .
The descent spectral sequence for pic(E) at height one (and all primes) therefore has starting page
E s,t 2 = H s (Z × p , Z/2) t = 0 H s (Z × p , Z p (0) × ) t = 1 H s (Z × p , Z p ( t−1 2 )) t ≥ 2(42)
The results of Section 3.2 also tell us how to discern many differentials in the Picard spectral sequence from those in the Adams spectral sequence: in particular, we will make use of Corollary 3.22 and Corollary 3.25. Our input is the well-known computation of the K-local E-Adams spectral sequence at height one. A convenient reference is [BGH22, §4], but for completeness a different argument is presented in Appendix B.
Odd primes
When p > 2, the Adams spectral sequence collapses immediately:
Lemma 4.14 (Lemma B.1). The starting page of the descent spectral sequence for E is given by
E s,t 2,+ = H s (Z × p , π t E) = Z p t = 0 and s = 0, 1 Z/p νp(t)+1 t = 2(p − 1)t ′ ̸ = 0 and s = 1(43)
and zero otherwise. The result is displayed in Figure 1.
As a result of the vanishing line, the computation of Pic(Sp K ) in this case depends only on H * (Z × p , Pic(E)) and H * (Z × p , (π 0 E) × ). This recovers the computation in [HMS94, Proposition 2.7]. Corollary 4.15. The height one Picard group is algebraic at odd primes:
Pic 1 ∼ = Pic alg 1 ∼ = Z p × Z/2(p − 1)(44)
4.2.2
The case p = 2 At the even prime, the Morava stabiliser group contains 2-torsion, and therefore its cohomology with 2-complete coefficients is periodic.
Lemma 4.16. The starting page of the descent spectral sequence for E is given by
E s,t 2,+ = H s (Z × 2 , π t E) =
Z 2 t = 0 and s = 0, 1 Z/2 t ≡ 4 2 and s ≥ 1 Z/2 ν2(t)+2 0 ̸ = t ≡ 4 0 and s = 1 Z/2 t ≡ 4 0 and s > 1
(45)
and zero otherwise.
This time we see that the spectral sequence can support many differentials. These can be computed by various methods, as for example in [BGH22]. We give another proof, more closely related to our methods, in Appendix B.
Proposition 4.17. The descent spectral sequence collapses at E 4 with a horizontal vanishing line. The differentials on the third page are displayed in Fig. 2.
As a result of the previous subsection, we can compute the groups of exotic and algebraic Picard elements at the prime 2. We will need one piece of the multiplicative structure: write η for the generator in bidegree (s, t) = (1, 2), and u −2 η 2 for the generator in bidegree (s, t) = (2, 0).
Lemma 4.18.
In the descent spectral sequence for E, the class
x := u −2 η 2 · η ∈ E 3,2 2
is non-nilpotent. In particular, x j generates the group in bidegree (s, t) = (3j, 2j) of the descent spectral sequence for E.
Proof. The classes u −2 η 2 and η are detected by elements of the same name in the HFPSS for the conjugation action on KU 2 , under the map of spectral sequences induced by the square of Galois extensions
1 K KO 2 KU 2 KU 2
To see this, one can trace through the computations of Appendix B: indeed, the proof of Proposition B.3 identifies the map of spectral sequences induced by the map of µ 2 -Galois extensions
1 K KO 2 KU 1+4Z2 2 KU 2
and Lemma B.5 identifies the descent spectral sequence for E with the HFPSS for KU 1+4Z2 2 , up to a filtration shift. But the starting page of the HFPSS for KU 2 is
H * (µ 2 , π * KU 2 ) = Z 2 [η, u ±2 ]/2η, and here in particular (u −2 η 2 · η) j = u −2j η 3j ̸ = 0.
Proposition 4.19. At the prime 2, the exotic Picard group κ 1 is Z/2.
Proof. We will deduce this from Proposition 4.17, which implies that the descent spectral sequence for pic(E) takes the form displayed in Figure 4. According to Theorem 4.4, the only differential remaining for the computation of κ 1 is that on the class in bidegree (s, t) = (3, 0), which corresponds to the class x ∈ E 3,2 2,+ of the Adams spectral sequence. Applying the formula from Lemma 3.25,
Brauer groups
In the previous sections, we considered Galois descent for the Picard spectrum of the K-local category. Recall that the Picard spectrum deloops to the Brauer spectrum, which classifies derived Azumaya algebras. In this section we consider Galois descent for the K-local Brauer spectrum; see [BRS12; AG14; GL21; AMS22] for related work. Unlike most of these sources, the unit in our context is not compact, and this makes the corresponding descent statements slightly more delicate. We begin with the basic definitions:
Definition 5.1. Suppose R ∈ CAlg(Sp K ). Two R-algebras A and B are Morita equivalent if there is an R-linear equivalence LMod A (Mod R,K ) ≃ LMod B (Mod R,K ). We will write A ∼ B.
Definition 5.2 ([HL], Definition 2.2.1). Suppose R ∈ CAlg(Sp K ). An associative algebra A ∈ Alg R,K is a K-local Azumaya R-algebra if there exists B ∈ Alg R,K such that A ⊗ R B is Morita equivalent to R.
(i) A is nonzero; (ii) A is dualisable in Alg R,K ; (iii) The canonical map A ⊗ R A op → End R (A) is an equivalence.
We will define the space Az(R) ⊂ ιAlg R,K to be the subgroupoid spanned by K-local Azumaya algebras over R, and Az(R) := π 0 Az(R). In analogy with Picard groups we will write Br n := Br(1 K ). Our objective is to show how the Picard spectral sequence (20) can be used to compute this. First recall the main theorem of op. cit.:
Theorem 5.5 ( [HL], Theorem 1.0.11). There is an isomorphism
Br(E) ≃ BW(κ) × Br ′ (E),
where BW(κ) is the Brauer-Wall group classifying Z/2-graded Azumaya algebras over κ := π 0 E/I n , and Br ′ (E) admits a filtration with associated graded k≥2 I k n /I k+1 n .
We will not discuss this result, and instead focus on the orthogonal problem of computing the group of Klocal Brauer algebras over the sphere which become Morita trivial over Morava E-theory; in the terminology of [GL21] this is the relative Brauer group, and will be denoted by Br 0 n . The full Brauer group can then (at least in theory) be obtained from the exact sequence
1 → Br 0 n → Br n → Br(E) G .
While interesting, the problem of understanding the action of G on Br(E) is somewhat separate, and again we do not attempt to tackle this.
Brauer groups and descent
In this subsection we show that the Picard spectral sequence gives an upper bound on the size of the relative Brauer group, as proven in [GL21, Theorem 6.32] for finite Galois extensions of unlocalised ring spectra. To this end, we define a 'cohomological' Brauer group; this might also be called a 'Brauer-Grothendieck group' of the K-local category, as opposed to the 'Brauer-Azumaya' group discussed above. The ideas described in this section goes back to work of Toën on Brauer groups in derived algebraic geometry [Toë12].
Given R ∈ CAlg(Sp K ), the ∞-category Mod R,K is symmetric monoidal, and therefore defines an object of CAlg Sp K (Pr L ). We will consider the symmetric monoidal ∞-category
Cat R,K := Mod Mod R,K (Pr L κ ),
where κ is chosen to be large enough that 1 K is κ-compact.
Definition 5.6. The cohomological K-local Brauer space of R is the Picard space
Br coh (R) := Pic(Cat R,K ).
Since Pr L κ is presentable [HA, Lemma 5.3.2.9(2)], this is once again a small space. In analogy with the Picard case, we also write Br 0,coh n for the full subspace of Br coh n := Br coh (1 K ) spanned by invertible Sp K -modules C for which there is an R-linear equivalence C ⊗ Sp K Mod E,K ≃ Mod E,K .
Definition 5.7. Passing to ∞-categories of left modules defines a functor Alg(Mod R,K ) → Cat R,K , with algebra maps acting by extension of scalars. This restricts by the Azumaya condition to
Az(R) → Br coh (R)(46)
We define Br(R) ⊂ Br coh (R) to be the full subgroupoid spanned by the essential image of Az(R). We moreover define Br n := Br(1 K ) and Br 0 n := Br n ∩ Br 0,coh n .
Warning 5.8. When working with plain E ∞ -rings, one can take κ = ω in the definition of the cohomological Brauer group, and so the two groups agree by Schwede-Shipley theory. Indeed, a cohomological Brauer class is then an invertible compactly generated R-linear ∞-category C, and in particular admits a finite set {C 1 , . . . , C n } of compact generators [AG14, Lemma 3.9]. Thus C ≃ Mod A for A := End C ( i C i ) an Azumaya algebra. This argument fails in Pr L κ for κ > ω; on the other hand, Sp K ̸ ∈ CAlg(Pr L ω ) since the unit is not compact.
We now provide a descent formalism suitable for our context, based on the approach of [GL21]. If C is a presentably symmetric-monoidal ∞-category with κ-compact unit and R ∈ CAlg(C), we will write Cat R := Mod Mod R (C) Pr L κ . We will be interested in descent properties of the functor Cat (−) : CAlg(C) → Pr L,smon : that is, if R → R ′ is a map of commutative algebras, we would like to know how close the functor θ below is to an equivalence:
Cat R θ − → lim Cat R ′ Cat R ′ ⊗ R R ′ · · · .(47)
Lemma 5.9. If R ′ is a descent R-algebra, then θ is fully faithful when restricted to the full subcategory spanned by ∞-categories of left modules.
Proof. Let A, A ′ ∈ Alg R (C). Writing LMod A = LMod A (C), we have equivalences of ∞-categories Fun R (LMod A , LMod A ′ ) ≃ RMod A⊗ R A ′ op ≃ lim RMod A⊗ R A ′ op ⊗ R R ′ • ≃ lim RMod (A⊗ R R ′• )⊗ R ′ • (A ′ ⊗ R R ′• ) op ≃ lim Fun R ′ • (LMod A⊗ R R ′ • , LMod A ′ ⊗ R R ′ • ) .(48)
Here we have twice appealed to [HA,Theorem 4.8.4.1] (applied to A op and A ′ op ), and to Lemma 3.4 for the second equivalence. Taking maximal subgroupoids, we obtain
Map Cat R (LMod A , LMod A ′ ) ≃ lim Map Cat R ′ • (LMod A⊗ R R ′• , LMod A ′ ⊗ R R ′ • ) ≃ Map lim Cat R ′ • (LMod A⊗ R R ′• , LMod A ′ ⊗ R R ′ • ) .
Corollary 5.10. For any covering S ′ → S in BG proét , the functor
θ : Cat E(S),K → lim Cat E(S ′ × S • ),K
is fully faithful when restricted to left module ∞-categories.
Using this, we can give a bound on the size of Br 0 n . Proposition 5.11. The group Br 0 n is a subgroup of π 0 Tot BPic(E •+1 ).
Proof. If S ′ → S is a covering in BG proét , we will write R → R ′ for the extension E(S) → E(S ′ ). Writing Br(R | R ′ ) for the full subgroupoid of Br(R) spanned by objects LMod A,K such that
LMod A,K ⊗ Mod R,K Mod R ′ ,K ≃ Mod R ′ ,K ,Br(R | R ′ ) lim BPic(R ′ • ) BPic(R ′ ) BPic(R ′ ⊗ R R ′ ) · · · Br(R) lim Br(R ′ • ) Br(R ′ ) Br(R ′ ⊗ R R ′ ) · · · ι Cat R,K lim ι Cat R ′ • ,K ι Cat R ′ ,K ι Cat R ′ ⊗ R R ′ ,K · · · θ (⋆)(49)
Here hooked arrows denote fully faithful functors: this is essentially by definition in most cases, with the starred functor being fully faithful by virtue of Corollary 5.10 (and the fact that passing to the maximal subgroupoid preserves limits). The dashed arrow, which clearly exists, is necessarily fully faithful by 2-outof-3.
Remark 5.12. The group π 0 lim BPic(E • ) is computed by the (−1)-stem in the spectral sequence for Pic(E • ), and as noted in Remark 3.7 we do not know the group in position (s, t) = (1, 0) in the E 2 -page. Thus while Proposition 5.11 gives us a spectral sequence bounding the relative Brauer group, its starting page is mysterious. In order to relate the relative Brauer group to the Picard spectral sequence, we will repeat the argument of Proposition 5.11 using the cosimplicial space BPic(Mod perf R ′ • ,K ) in place of BPic(R ′ • ). For this, the following definition will be useful.
Definition 5.13. For R ∈ CAlg(Sp K ), define hCat ′ R,K to be the following subcategory of h Cat R,K :
(i) An object C ∈ h Cat R,K lies in hCat ′ R,K if C ≃ LMod A,K for some A ∈ Alg R,K . (ii) Given A, A ′ ∈ Alg R,K , set Hom hCat ′ R,K (LMod A,K , LMod A ′ ,K ) := π 0 ιRMod perf A⊗ R A ′ ,K ⊂ π 0 ιRMod A⊗ R A ′ ,K = Hom h Cat R,K (LMod A,K , LMod A ′ ,K ) .
Define Cat ′ R,K by the following pullback diagram:
Cat ′ R,K Cat R,K hCat ′ R,K h Cat R,K
Proposition 5.14. The relative Brauer group Br 0 n is a subgroup of π 0 Tot BPic(Mod perf E • ,K ).
Proof. First note that the equivalences in (48) restrict to perfect objects in each side: that is, using Proposition 3.12 we have
LMod perf A⊗ R A ′ op ≃ lim LMod perf A⊗ R A ′ op ⊗ R R ′ • ≃ lim LMod perf (A⊗ R R ′ • )⊗ R ′ • (A ′ ⊗ R R ′ • ) op .
In the notation of Proposition 5.11, if R ′ (and therefore R, by Remark 3.10) is such that Pic(R ′ ) consists of perfect R ′ -modules, we may replace (49) by the diagram
Br(R | R ′ ) lim BPic(Mod perf R ′ • ,K ) BPic(Mod perf R ′ ,K ) BPic(Mod perf R ′ ⊗ R R ′ ,K ) · · · Pic(Cat ′ R,K ) lim Pic(Cat ′ R ′ • ,K ) Pic(Cat ′ R ′ ,K ) Pic(Cat ′ R ′ ⊗ R R ′ ,K ) · · · ιCat ′ R,K lim ιCat ′ R ′ • ,K ιCat ′ R ′ ,K ιCat ′ R ′ ⊗ R R ′ ,K · · · θ ′
For example, this is the case for R ′ = E. The same proof as in Corollary 5.10 now exhibits Br(R | R ′ ) as a full subspace of Tot BPic(Mod perf R ′ • ,K ).
In particular, the (−1)-stem in the descent spectral sequence for the Picard sheaf pic(E) gives an upper bound on the size of the relative Brauer group. We will draw consequences from this in the next subsection. Fig. 7), and the span of Galois extensions
1 K KO p KO KU p KU p KU
allows us to transport this differential. Note that the induced span on E 2 -pages is
µ p−1 µ p−1 Z/2 Z/2 Z/2 Z/2 d2 d2 d2 =
in bidegrees (s, t) = (1, 0) and (3, 1). Thus
π 0 Tot BPic(Mod perf E •+1 ,K ) ∼ = µ p−1 .
Remark 5.17. We now give a conjectural description of the possible nonzero elements of Br 0 1 . Recall the cyclic algebra construction of [BRS12]: its input is (i) a finite G-Galois extension R → S, (ii) an isomorphism χ : G ∼ = Z/k, and (iii) a strict unit u ∈ π 0 G m (R) := [Z, gl 1 (R)]. From this data, Baker, Richter and Szymik construct an Azumaya R-algebra A(S, χ, u) such that A(S, χ, u) ⊗ R S ≃ M k (S). This construction works equally well for a K-local cyclic Galois extension.
In [Car22], Carmeli shows that the subgroup µ p−1 ⊂ Z × p = (π 0 1 p ) × lifts to strict units, and hence we have µ p−1 ⊂ G m (1 K ) too. Fixing χ : µ p−1 ∼ = Z/p − 1, any root of unity ω gives an element
A(E h(1+pZp) , χ, ω) ∈ Br(1 K | E h(1+pZp) ) ⊂ Br(1 K | E),
and thus natural candidates for realisations of the classes in E 2,1 ∞ . In upcoming work we will show that these Brauer elements are indeed nontrivial, so that the cyclic algebra construction yields
A(E h(1+pZp) , χ, −) : µ p−1 ∼ = Br(1 K | E).
Remark 5.18. In fact, the machinery developed in Sections 2 and 3 should be useful in computing the groups π 0 G m (1 K ) in general. Indeed, choose an algebraic closure F p and recall that the spectrum of strict [BSY22,Theorem 8.17]. In the form stated, the result in this form is due to Hopkins and Lurie-Burklund, Schlank and Yuan compute the strict Picard spectrum of E. Since Gal(F p /F p n ) = n Z ≃ Z has finite cohomological dimension, it follows from Lemma 2.14 that the extension E → E is n Z-Galois, and so the extension 1 K → E is G := Aut(Γ n , F p ) = G ⋊ n Z-Galois. Taking strict units preserves limits, and so one obtains a descent spectral sequence
units of Morava E-theory E = E(F p , Γ n ) based on F p is F × p ⊕ Σ n+1 Z p byH s (BG proét , π t G m (E)) =⇒ π t−s G m (1 K )
for the resulting proétale sheaf E. To evaluate this spectral sequence, we must identify the sheaf cohomology on the starting page; this seems to be a surprisingly delicate problem. Assuming however that it is given by the expected group cohomology H * (G, π * G m (E)), the spectral sequence collapses to yield π 0 G m (1 K ) = µ p−1 .
5.2.2
The case p = 2 We now proceed with the computation of the (−1)-stem for the even prime.
Lemma 5.19. We have
H s (Z × 2 , Pic(E)) = Z/2 s = 0 (Z/2) 2 s ≥ 1 H s (Z × 2 , (π 0 E) × ) = Z 2 ⊕ Z/2 s = 0 Z 2 ⊕ (Z/2) 2 s = 1 (Z/2) 3 s ≥ 2
The resulting spectral sequence is displayed in Figure 6.
Proof. We need to compute H * (Z × 2 , Z/2) and H * (Z × 2 , Z × 2 ). Again we will use the LHSSS; for the first this reads
E i,j 2 = H i (µ 2 , H j (Z 2 , Z/2)) ∼ = Z/2[x, y]/x 2 i,j =⇒ H i+j (Z × 2 , Z/2),
The generators have (i, j)-bidegrees |x| = (0, 1) and |y| = (1, 0) respectively, and the spectral sequence is therefore determined by the differential d 2 (x) = λy 2 , where λ ∈ Z/2; since H 1 (Z × 2 , Z/2) = Hom Z × 2 , Z/2 = Z/2 ⊕ Z/2, we deduce that λ = 0. Finally, the potential extension 2x = y is ruled out by the fact that Z/2[y] = H * (µ 2 , Z/2) is a split summand. For the second group, the splitting Z × 2 ∼ = Z 2 × Z/2 gives a summand isomorphic to H * (Z × 2 , Z/2). The complement is computed by the LHSSS
E i,j 2 = H i (µ 2 , H j (Z 2 , Z 2 )) = Z 2 [w, z]/(w 2 , 2z) i,j =⇒ H i+j (Z × 2 , Z 2 ),
where now |w| = (0, 1) and |z| = (2, 0). In this case, the spectral sequence collapses immediately (again by computing H 1 ), with no space for extensions.
Corollary 5.20. At the prime two, |Br 0 1 | = 2 j for j ≤ 5.
Proof. Once again, what remains is to determine the possible differentials on classes in the (−1)-stem; these are displayed in Fig. 6. Of these, two can be obtained by comparison to the Picard spectral sequence for the Galois extension KO 2 → KU 2 . There are three remaining undetermined differentials, which appear as the dashed arrows in Fig. 6. Invoking Proposition 5.14 now gives the claimed upper bound. Remark 5.21. We comment on the realisation of classes in Br 0 1 at the prime two. Firstly, observe that the surviving class in E 6,5 ∞ corresponds to the nontrivial Brauer class in Br(KO | KU ), and in particular one might hope to show it is descended from a nonzero element of Br(KO 2 ). On the other hand, the conjectural computation of π 0 G m (1 K ) (Remark 5.18) would imply that π 0 G m (1 K ) = Z/2⟨1 + ε⟩, where π 0 1 K = Z 2 [ε]/(2ε, ε 2 ) (see [CY22,Corollary 5.5.5]); in particular, this unit becomes trivial in any Galois extension, and is detected in positive filtration of the descent spectral sequence. Moreover, there is at least one further E ∞ -class in the Brauer spectral sequence, and thus the two candidates above are not enough to determine everything. Realisation of Brauer classes is therefore significantly more subtle, and we will discuss this in future work.
A Appendix: Décalage results
We make the derivation of the descent spectral sequence a little more explicit, and relate it to the spectral sequence obtained from the covering G ↠ * . For clarity most of this section is written in a general context, but the main result is Lemma A.3, which will be used to relate the descent spectral sequence to the K-local E-Adams spectral sequence through décalage.
Let C be a site, and write A := Sh(C, Sp). Given any object F ∈ A, there are two natural filtrations one can consider:
(i) The usual t-structure on Sp induces one on A, defined by the property that F ′ ∈ A is t-truncated if and only if F ′ (X) ∈ Sp ≤t for every X ∈ C. One can therefore form the Postnikov tower in A, and obtain a tower of spectra on taking global sections.
(ii) Suppose U ↠ * is a covering of the terminal object. Since A is a sheaf, we can recover ΓF as the limit of itsČech complex for the covering, and this has an associated tower. Explicitly, write Tot q = lim ∆ ≤q so that ΓF ≃ Tot F(U • ) ≃ lim(· · · → Tot 0 F(U • )).
Any tower of spectra X = lim(· · · → X t → · · ·) gives rise to a (conditionally convergent) spectral sequence, as in Lemma 2.2. Respectively, in the cases above these read
E s,t 2 = π t−s Γτ t F =⇒ π t−s ΓF,(51)
andĚ p,q
2 = π q−p f q F(U • ) =⇒ π q−p ΓF,(52)
where f q denotes the fibre of the natural transformation Tot q → Tot q−1 . In each case, the d r differential has bidegree (r + 1, r) in the displayed grading.
For our purposes, the first spectral sequence, whose E 2 -page and differentials are both defined at the level of truncations, is useful for interpreting the descent spectral sequence: for example, we use this in Theorem 4.4. On the other hand, the second spectral sequence is more easily evaluated once we know the value of a proétale sheaf on the free G-sets. It will therefore be important to be able to compare the two, and this comparison is made using the décalage technique originally due to Deligne in [Del71]. The following material is well known (see for example [Lev15]), but we include an exposition for convenience and to fix indexing conventions. The décalage construction of [Hed21], which 'turns the page' of a spectral sequence by functorially associating to a filtered spectrum its decalée filtration, is closely related but not immediately equivalent.
Recall that any tower dualises to a filtration (this will be recounted below); the proof is cleanest in the slightly more general context of bifiltered spectra, and so we will make the connection between (51) and (52) explicit after proving a slightly more general result.
We suppose therefore that X is a spectrum equipped with a (complete and decreasing) bifiltration. That is, we have a diagram of spectra X t,q : Z op × Z op → Sp such that X = colim p,q X t,q . We will write X −∞,q := colim t X t,q for any fixed q, and likewise X t,−∞ := colim q X t,q for fixed t. Finally, write X t/t ′ ,q/q ′ := cofib(X t ′ ,q ′ → X t,q ).
Proposition A.1. Let X be a bifiltered spectrum, and suppose that for all t and q we have π s X t/t+1,q/q+1 = 0 unless s = t − q. Then there is an isomorphism
1 E s,t 2 ≃ 2 E 2s−t,s 3 ,(53)
where the left-hand side is the spectral sequence for the filtration X = colim t X t,−∞ and the right-hand for
X = colim q X −∞,q .
This isomorphism is compatible with differentials, and so extends to isomorphisms
1 E s,t r ≃ 2 E 2s−t,s r+1 for 1 ≤ r ≤ ∞.
Proof. We begin with the isomorphism (53): it is obtained by considering the following trigraded spectral sequences, which converge to the respective E 2 -pages.
E s,t,q 2 = π q−s X t/t+1,q+1/q =⇒ π q−s X t/t+1,−∞ = 1 E s+t−q,t 2 ,(54)E s,t,q 2 = π t−s X t+1/t,q+1/q =⇒ π t−s X −∞,q+1/q = 2 E s+q−t,q 2 .(55)
The d r differentials have (s, t, q)-tridegrees (r + 1, 0, r) and (r + 1, r, 0) respectively. Both spectral sequences therefore take a very simple form, because we have assumed each object X t+1/t,q+1/q is Eilenberg-Mac Lane. They are displayed in Fig. 9. In particular, the first collapses after the E 2 -page, so that 1 E * ,t 2 is the cohomology of the complex · · · → π t−q X t/t+1,q/q+1 → π t−q−1 X t/t+1,q+1/q+2 → · · · (56) whose differentials are induced by the composites
X t+1/t,q/q+1 → ΣX t+1/t,q+1 → ΣX t+1/t,q+1/q+2 .
More precisely, 1 E s,t 2 = π t−s X t/t+1,−∞ ∼ = H s π t− * X t/t+1, * / * +1 . The second spectral sequence is even simpler, collapsing immediately to give
2 E 2s−t,s 2 = π t−s X −∞,s/s+1 ∼ = π t−s X t/t+1,s/s+1 .
We can further identify the first differential on 2 E 2s−t,s 2 : it is induced by the map X −∞,2s−t/2s−t+1 → ΣX −∞,2s−t+1/2s−t+2 , and so the identification 1 E s,t 2 ∼ = 2 E 2s−t,s 3 follows from the commutative diagram below, in which the top row is part of the complex (56).
π t−s X t/t+1,s/s+1 π t−s ΣX t/t+1,s+1 π t−s ΣX t/t+1,s+1/s+2 π t−s X −∞,s/s+1 π t−s ΣX −∞,s+1 π t−s ΣX −∞,s+1/s+2 ∼ ∼
We next argue that this extends to an isomorphism of spectral sequences 1 E s,t r ∼ = 2 E 2s−t,s r+1 . To do so we will give a map of exact couples as below:
1 D s,t 2 1 D s,t 2 1 E s,t 2 → 2 D 2s−t,s 3 2 D 2s−t,s 3 2 E 2s−t,s 3
By definition of the derived couple on the right-hand side, this amounts to giving maps
π t−s X t,−∞ → im(π t−s X −∞,s → π t−s X −∞,s−1 )
for all s and t, subject to appropriate naturality conditions. To this end we claim that the first map in each of the spans below induces an isomorphism on π t−s : Indeed, writing C := cofib(X t,s−1 → X t,−∞ ), we have a filtration C = colim C t ′ , where t ′ ≥ t and C t ′ = cofib(X t ′ ,q−1 → X t ′ ,−∞ ). The resulting spectral sequence reads
X t,−∞ ← X t,s−1 → X −∞,s−1 . q ′ − s ′ s ′ * * (a) Spectral sequence (58). The leftmost nonzero entry is (q ′ − s ′ , s ′ ) = (t ′ − s + 2, 2s − t ′ − 4).π t ′ −s ′ C t ′ /t ′ +1 =⇒ π t ′ −s ′ C (t ′ ≥ t),(57)
and its E 2 -page is in turn computed by a trigraded spectral sequence
π q ′ −s ′ X t ′ /t ′ +1,q ′ /q ′ +1 X =⇒ π q ′ −s ′ C t ′ /t ′ +1 (q ′ ≤ s − 2).(58)
Spectral sequence (58) can be thought of as a truncation of Fig. 9a; it is in turn displayed in Fig. 10a. the form of its E 2 -page implies that C t ′ /t ′ +1 is (t ′ − s + 2)-connected, so that (57) takes the form displayed in Fig. 10b.
In particular, C is t − s + 1-connected and so the map X t,s−1 → X t,−∞ is (t − s + 1)-connected. Applying an identical analysis to X t,s → X t,−∞ shows that this map has (t − s)-connected cofibre, and so induces a surjection on π t−s ; the diagram below therefore produces the requisite map 1 D s,t 2 → 2 D 2s−t,s
3 . π t−s X t,s π t−s X t,s−1 π t−s X t,−∞ π t−s X −∞,s π t−s X −∞,s−1 ∼ ∃
We will not show that this indeed defines a map of exact couples, since this result is well-known: for example, see [Lev15,§6].
We now relate this to the context of Postnikov towers and cobar complexes. If X ≃ lim t (· · · → X t → · · ·) is a convergent tower of spectra with colim X t = 0, we can form a dual filtered spectrum
X t := fib(X → X t−1 ).
Then colim X t ∼ − → X, and we get another spectral sequence
π t−s f t X =⇒ π t−s X,
where once again X t/t+1 := cofib(X t+1 → X t ). Observe that X t/t+1 ≃ X t/t−1 := fib(X t → X t−1 ), by the octahedral axiom:
X t+1 X t X X t/t−1 X t X t−1
Lemma A.2. The spectral sequences for (X t ) and (X t ) agree.
Proof. The observation above is that the E 2 -pages agree. To show that the entire spectral sequences match it is enough to show that the differentials d 2 do, in other words that the outer diagram below commutes.
X t/t−1 = X t/t+1 ΣX t+1 X t ΣX t+1/t = ΣX t+1/t+2
The dashed arrow is given by applying the 3 × 3-lemma [Nee01] to obtain the diagrams below; note that each triangle above only anticommutes, and so the outer square is commutative.
Σ −1 X t+1 Σ −1 X t Σ −1 X t/t+1 X t+1 Σ −1 X Σ −1 X 0 X Σ −1 X t Σ −1 X t−1 X t/t−1 X t X t+1 X t X t/t+1 ΣX t+1 (−1) X t−1 ΣX t/t−1 ΣX t ΣX t−1 ΣX t ΣX t/t+1 Σ 2 X t+1 Σ 2 X t ΣX 0 Σ 2 X Σ 2 X ΣX t−1 Σ 2 X t/t−1 Σ 2 X t Σ 2 X t−1 (−1)
By dualising the Postnikov and Tot-towers, it will therefore suffice to verify that the induced bifiltration satisfies the assumptions of Proposition A.1.
Lemma A.3. Let F be a sheaf of spectra on a site C, and let X ↠ * be a covering of the terminal object. Suppose that for every t and every q > 0 we have Γ(X q , τ t F) = τ t Γ(X q , F). Then there is an isomorphism between the descent and Bousfield-Kan spectral sequences, up to reindexing: for all r,
E s,t r ∼ =Ě 2s−t,s r+1 .
Proof. We form the bifiltrations (ΓF) t,q = Tot q Γ(U • , τ ≤t F). Then
(ΓF) t,−∞ = lim q Tot q Γ(U • , τ ≤t F) = Tot Γ(U • , τ ≤t F) = Γτ ≤t F, while (ΓF) −∞,q = lim t Tot q Γ(U • , τ ≤t F) = Tot q Γ(U • , lim t τ ≤t F) = Tot q Γ(U • , F).
Applying Proposition A.1 to the dual filtration, we need to verify that
Tot q/q+1 Γ(X • , τ t F)
is Eilenberg-Mac Lane. But recall that for any cosimplicial spectrum B • we have
fib(Tot q B • → Tot q−1 B • ) ≃ Ω q N q B • ,
where N q denotes the fibre of the map from X q to the q-th matching spectrum; N q B • is a pointed space with π j N q B • = π j B q ∩ ker s 0 ∩ ker s q−1 .
In the case of pointed spaces, this fact is [BK87, Prop. X.6.3]; the proof, which appears also as [GJ09, Lemma VIII.1.8], works equally well for a cosimplicial spectrum 6 . By abuse of notation, we also denote this group by N q π j B • . Thus
π j Tot q/q+1 Γ(X • , τ t F) ≃ π j Ω q N q Γ(X • , τ t F) ≃ N q π j+q Γ(X • , τ t F) ≃ N q π j+q τ t Γ(X • , F) ⊂ π j+q τ t Γ(X • , F) = π t Γ(X • , F) j = t − q 0 otherwise.
Remark A.4. On the starting pages, one has
E s,t 2 = π t−s Γτ t F ≃ H s (C, π t F) andĚ 2s−t,s 3 = H s (π t− * Ω * N * Γ(X • , F)) = H s (N * π t Γ(X • , F)) ≃Ȟ s (X ↠ * , π t F).
In particular, note that our assumption implies that theČech-to-derived functor spectral sequence collapses.
B Appendix: The Adams spectral sequence at height one
In this appendix, we compute the K(1)-local E 1 -Adams spectral sequences at all primes. The material is well-known, and is implicit in the computations of [Rav84;MRW77]. At the prime two we give a different perspective to [BGH22] on the computation, which illustrates how one can make use of the Postnikov tower of a sheaf of spectra on BG proét : this approach may be useful in higher height examples that are more computationally challenging. In particular, we use the finite resolution of the K(1)-local sphere, and as such our only input is knowledge of the HFPSS for the conjugation action on KU , displayed in Fig. 12.
B.1 Odd primes
At odd primes, the multiplicative lift gives a splitting
Z × p ≃ (1 + pZ p ) × µ p−1 ≃ Z p × µ p−1 ,
with the second isomorphism given by the p-adic logarithm. A pair (a, b) ∈ Z p × µ p−1 therefore acts on π t E as (b exp p (a)) t . In particular, η 4 cannot survive to E∞ since η 4 = 0 ∈ π * S. The only option is d3(u 2 η) = η 4 , which implies the rest by multiplicativity.
In particular, the entire spectral sequence is determined by the differential d 2 (x) = 0, which implies that all other differentials vanish by multiplicativity. To deduce this differential, note that the edge map H 1 (Z × 2 , π 0 KU 2 ) → H 0 (µ 2 , H 1 (Z 2 , π 0 KU 2 )) can be interpreted as the restriction map Hom Z × 2 , Z 2 → Hom (Z 2 , Z 2 ) , along exp 2 : Z 2 → Z × 2 . This is an isomorphism since Hom (Z/2, Z 2 ) = 0, so d 2 must act trivially on bidegree (0, 1). Our next task is to compute the differentials. In the rest of the appendix we will prove the following: Proposition B.3. The differentials on the third page are as displayed in Fig. 11. The spectral sequence collapses at E 4 with a horizontal vanishing line.
We will compute these differentials by comparing to the HFPSS for the conjugation action on KU 2 (Fig. 12), which reads E s,t 2 = H s (µ 2 , π * KU 2 ) = Z 2 [η, u ±2 ]/2η =⇒ π t−s KO 2 . The following result folklore:
Lemma B.4. The K-local sphere fits in a fibre sequence
1 K → KO 2 ψ 5 −1 −−−→ KO 2 .
Proof. We first consider the map ψ 5 − 1 : KU 2 → KU 2 . Certainly ψ 5 acts trivially on KU 1+4Z2 2 := Γ(Z × 2 /1 + 4Z 2 , E), so there is a map KU 1+4Z2 2 → fib(KU 2
ψ 5 −1 −−−→ KU 2 )(61)
induced by the inclusion of fixed points KU 1+4Z2 2 → KU 2 . As observed in Lemma B.2, the HFPSS computing π * KU 1+4Z2 2 collapses at E 2 with horizontal vanishing above filtration one, and one therefore sees that (61) must be an equivalence by computing homotopy groups of fib(ψ 5 − 1) using the exact sequence.
Taking fixed points for the µ 2 action now yields the result:
1 K ≃ (KU 1+4Z2 2 ) µ2 ≃ fib(KU 2 ψ 5 −1 −−−→ KU 2 ) µ2 ≃ fib(KU µ2 2 ψ 5 −1 −−−→ KU µ2 2 ) ≃ fib(KO 2 ψ 5 −1 −−−→ KO 2 ).
Proof (Proposition B.3). The previous lemma gives the diagram
Σ −1 KO 2 1 K KO 2 Σ −1 KU 2 KU 1+4Z2 2 KU 2(62)
in which the top row is obtained as µ 2 -fixed points of the bottom. The HFPSS for the middle map,
E s,t 2 = H s (µ 2 , π t KU 1+4Z2 2 ) =⇒ π t−s 1 K ,(63)
is very closely to the descent spectral sequence; it is displayed in Fig. 13. In fact, in Lemma B.5 we will show that the two spectral sequences are isomorphic (including differentials), up to a certain filtration shift. To infer the differentials in Fig. 11, it is therefore enough to compute the differentials in Fig. 13.
and an exact sequence 0 → H s−1 (µ 2 , π 2t−1 KU 1+4Z2 ) → H s (µ 2 , π 2t KU 2 ) → H s (µ 2 , π 2t KU 2 ) → H s (µ 2 , π 2t−1 KU 1+4Z2 ) → 0 for s ≥ 1 (and t ̸ = 0). The middle terms are either both zero or both Z/2, and in the latter case we obtain the following further isomorphisms:
H s−1 (µ 2 , π 2t−1 KU 1+4Z2 2 ) ∼ − → H s (µ 2 , π 2t KU 2 ) ≃ Z/2,(66)Z/2 ≃ H s (µ 2 , π 2t KU 2 ) ∼ − → H s (µ 2 , π 2t−1 KU 1+4Z2 2 ).(67)
For s = 0 and t ̸ = 0 even, we instead have an exact sequence 0 → Z 2
5 t −1 − −− → Z 2 → H 0 (µ 2 , π 2t−1 KU 1+4Z2 2 ) → 0.(68)
Equations (64) to (68) compute the effect of the maps in (62) on spectral sequences. Namely:
(i) The map Σ −1 KU 2 → KU 1+4Z2 2 induces a (filtration preserving) surjection from E 2 (KU ) onto the unfilled classes in Fig. 13. (ii) The map KU 1+4Z2 2 → KU 2 is injective on the solid classes in Fig. 13. It induces a filtration-preserving isomorphism on the subalgebras in internal degree t = 0, and away from this increases filtration by one.
The differentials in Fig. 13 follow almost immediately: on unfilled classes, they are images of differentials in the HFPSS for KU 2 , and on most solid classes they are detected by differentials in the HFPSS for KU 2 . We are left to determine a small number of differentials on classes with internal degree t close to zero.
(i) The exact sequence induced by Lemma B.4 shows that the map 1 K → KO 2 induces an isomorphism on π 0 . As a result, the unit in the HFPSS for KU 1+4Z2 is a permanent cycle.
(ii) Write u −2 η 2 for the generator in bidegree (s, t) = (2, 0), which maps to a class of the same name in the HFPSS for KU 2 . This cannot survive in the HFPSS for KU 1+4Z2 2 : one sees that π −2 1 K = 0 using the fibre sequence of Lemma B.4 and the fact that π −2 KO 2 = π −1 KO 2 = 0. The only possibility is a nonzero d 2 since all other possible differentials occur at E 4 or later, when there are no possible targets left (by virtue of the known d 3 -differentials). Likewise, this implies d 2 ((u −2 η 2 ) 2j+1 ) ≡ 2 (u −2 η 2 ) 2j d 2 (u −2 η 2 ) ̸ = 0 by the Leibniz rule.
(iii) Write z for the generator in bidegree (s, t − s) = (0, −3), which is detected in filtration one by u −2 η.
As above one computes that π −3 1 K = 0, and so this class must die; the only option is a nonzero d 4 on z. In fact, when j is even the exact sequence π 4j−2 KO 2 → π 4j−3 1 K → π 4j−3 KO 2 implies that π 4j−3 1 K = 0, so that (u −2 η 2 ) 2j z supports a d 4 by the same argument. When j is odd the sequence only gives a bound |π 4j−3 1 K |≤ 4, but this is already populated by elements in filtration s ≤ 2 (which survive by comparison to the HFPSS for KU 2 ). Thus (u −2 η 2 ) 2j z supports a d 4 in these cases too.
After this, the spectral sequence collapses by sparsity.
To conclude, we must show that the two spectral sequences H * (G, π * KU 2 ) =⇒ π * 1 K (69)
H * (µ 2 , π * KU 1+4Z2 2 ) =⇒ π * 1 K(70)
are isomorphic, up to a shift in filtration.
Lemma B.5. For each s ≥ 0 and t there are isomorphisms H s+1 (G, π t KU 2 ) ≃ H s (µ 2 , π t−1 KU 1+4Z2 2 ) (t ̸ = 1), H s (G, π 0 KU 2 ) ≃ H s (µ 2 , π 0 KU 2 ).
These are compatible with differentials, and together yield a (filtration-shifting) isomorphism of spectral sequences between (69) and (70).
Remark B.6. In other words, when passing from the HFPSS for KU 1+4Z2 2 (70) to the descent spectral sequence (69), the filtration shift is precisely by one away from internal degree t = 0, and zero otherwise.
Proof. Note first that the E 1 -pages are abstractly isomorphic:
H i+j (G, π 2t KU 2 ) ≃ H i (µ 2 , H j (H, π 2t KU 2 )) ≃ H i (µ 2 , π 2t−j KU 1+4Z2 2 ),(71)
where j = 1 unless t = 0, in which case we also have j = 0. The first isomorphism is given by the Lyndon-Hochschild-Serre spectral sequence, which collapses with each degree of the abutment concentrated in a single filtration; the second isomorphism comes from the same fact about the homotopy fixed point spectral sequence for the action of 1 + 4Z 2 on KU 2 . Note that in both cases the abutment is sometimes in positive filtration, and this will account for the shift.
Next recall that the descent spectral sequence comes from global sections of the Postnikov tower for E ∈ Sh(BG proét , Sp); on the other hand, the µ 2 -fixed points spectral sequence is induced by global sections of the sheaf j * E ∈ Sh(B(µ 2 ) proét , Sp), where j : G → µ 2 is the quotient by the open subgroup 1 + 4Z 2 . Since each j * τ ≤t E(T ) = τ ≤t E(res µ2 G T ) is t-truncated, we obtain a map of towers in Sh(B(µ 2 ) proét , Sp),
j * E · · · τ ≤t j * E τ ≤t−1 j * E · · · j * E · · · j * τ ≤t E j * τ ≤t−1 E · · ·
Taking global sections of the top row yields the HFPSS (70), while the bottom yields the descent spectral sequence (69) (bearing in mind that Γ(µ 2 /µ 2 , j * (−)) ≃ Γ(G/G, −)).
To proceed, we compute the sections of these towers. Note that any cover of µ 2 ∈ B(µ 2 ) proét must split, and so sheafification on B(µ 2 ) proét preserves any multiplicative presheaf when restricted to the generating sub-site Free µ2 , i.e. any F satisfying F(S ⊔ S ′ ) ≃ F(S) × F(S ′ ). Thus
τ t j * E : µ 2 → τ t E(Z × 2 /1 + 4Z 2 ) = τ t KU 1+4Z2 2
This sheaf is therefore zero unless t is odd or zero. On the other hand, π s Γj * τ t E : µ 2 → π s Γ(µ 2 , j * τ t E) = π s Γ(Z × 2 /1 + 4Z 2 , τ t E) = H t−s (1 + 4Z 2 , π t KU 2 ). This is zero unless t is even and s = 1, or t = s = 0. In particular, for t ̸ = 0 we have
τ 2t−1 j * E ≃ τ 2t−1 j * τ 2t E ≃ j * τ 2t E,
while j * τ 0 E has homotopy concentrated in degrees {−1, 0}. Note that this also implies that Γj * τ ≤2t E is (2t − 1)-truncated for t ̸ = 0, since π 2t Γj * τ ≤2t E = π 2t Γj * τ 2t E = 0.
The isomorphisms (71) can be interpreted as arising from the two trigraded spectral sequences H i (µ 2 , H j (1 + 4Z 2 , π 2t KU 2 )) =⇒ H i+j (G, π 2t KU 2 ), H i (µ 2 , H j (1 + 4Z 2 , π 2t KU 2 )) =⇒ H i (µ 2 , π 2t−j KU 1+4Z2 2 )
coming from the bifiltration ΓE = Γτ ≤j j * τ ≤2t E (c.f. Proposition A.1). At a fixed t, the first is associated to the filtration Γτ 2t E = Γj * τ 2t E = lim j Γτ ≤j j * τ 2t E, or equivalently to the resolution of H * (1 + 4Z 2 , π 2t KU 2 ) by acyclic µ 2 -modules, and is therefore the LHSSS. On the other hand, the second is µ 2 -cohomology applied pointwise to the HFPSS for the 1 + 4Z 2 -action (at a fixed j); in particular, both collapse at E 1 .
For t ̸ = 0, the towers therefore look as in the following diagram, in which we have identified in both rows those consecutive layers with zero fibre, i.e. we run the tower 'at double speed'.
· · · τ ≤2t−1 j * E τ ≤2t−3 j * E · · · τ 2t−1 j * E · · · j * τ ≤2t E j * τ ≤2t−2 E · · · j * τ 2t E
A large but routine diagram verifies that the map τ 2t−1 j * E → j * τ 2t E agrees on homotopy with (71).
Near zero, we have instead the diagram
τ ≤1 j * E τ ≤0 j * E τ ≤−1 j * E τ ≤−2 j * E τ 1 j * E τ 0 j * E τ −1 j * E j * τ ≤1 E j * τ ≤0 E j * τ ≤−1 E j * τ ≤−2 E 0 j * τ 0 E 0 β
To see that the dashed arrow exists, note that d 1 : τ −1 j * E → Στ 0 j * E is zero on homotopy: in the proof Proposition B.3 we computed the only nontrivial d 2 differentials, which have source in internal degree t = 0. As a map between Eilenberg-Mac Lane objects, it is in fact null, so we can lift as below:
τ −1 j * E Στ 0 j * E τ ≤−1 j * E τ ≤0 j * E ∃α d1 0 τ −1 j * E 0 τ ≤0 j * E j * τ 0 E j * τ ≤0 E j * τ ≤−1 E α ∃β
We must show that evident map τ 0 j * E → j * τ 0 E and the map β agree with the respective compositions of edge maps. To deduce this, it is enough to contemplate the diagrams below, in which the dashed arrow are equivalences and the dotted arrows admit right inverses.
τ 0 j * E τ ≤0 j * E τ 0 j * τ ≥0 E τ 0 j * τ ≤0 E τ 0 j * τ 0 E τ ≤0 j * τ ≤0 E τ ≤0 j * τ p E j * τ 0 E j * τ ≤0 E τ −1 j * E τ ≤0 j * E τ −1 j * τ ≥0 E τ −1 j * τ ≤0 E τ −1 j * τ 0 E τ ≤−1 j * τ ≤0 E τ ≤−1 j * τ 0 E j * τ 0 E j * τ ≤0 E
The squiggly arrows are the edges maps from the trigraded spectral sequences, which are isomorphisms thanks to the collapse of the two trigraded spectral sequences.
Theorem 1. 1
1([DH04;Rog08]). Write E n for height n Morava E-theory at the (implicit) prime p. The unitL K(n) S → E n is a K(n)-local pro-Galois extension for the Devinatz-Hopkins action of G. That is, there are K(n)-local spectra E hU nfor every open subgroup U of G n such that the following hold:(i) each E hU nis an E ∞ -ring spectrum over which E n is a commutative algebra,(ii) Choosing a cofinal sequence of open subgroups U yields E n ≃ L K(n) lim − →U E hU n ,(iii) for any normal inclusion V ◁ U of open subgroups, the map E hU n → E hV n is a faithful U/V -Galois extension of K(n)-local spectra.
Definition 2. 10 .
10Let G be a profinite group. The mod p cohomological dimension of G is cd p (G) := sup d : there exists a p-power torsion G-module M with H d (G, M ) ̸ = 0 ∈ [0, ∞]. The mod p virtual cohomological dimension vcd p (G) is the smallest mod p cohomological dimension of an open subgroup U ⊂ G. Definition 2.11.
[HPS99, Lemma 2.1] or [Mat15, Proposition 3.5])
conservative, by virtue of the equivalence (21).
factors through the groupoid completion J gpd i/ . But since J i/ is filtered, both inclusions J i/ → J gpd i/ ← {i} are cofinal by [HTT, Corollary 4.1.2.6].(3) We are left to prove that T → Sh(T, Mod E,K ) ∈ Pr L,smon is a hypercomplete sheaf on Profin /(S/G) . This is precisely the content of [Hai22, Theorem 0.5], noting that (i ) limits in Pr L,smon are computed in Cat ∞ ;
Remark 3 .
311 (c.f.[HMS17], Remark 3.7). One can also interpret the group Pic ′ (A) as that of K-localA-modules invertible with respect to the unlocalised smash product. In one direction this is clear: if M ∈ Mod E(S),K is invertible before localisation, then it is perfect by virtue of being a dualisable object of Mod E(S) . On the other hand, if M ∈ Pic(A) is perfect then M −1 and hence M ∧ M −1 are already K-local, so M ∧ M −1 ≃ M ⊗ M −1 ≃ E(S).
Proposition 3. 12 .
12The assignment S → Mod perf E(S),K is a hypercomplete sheaf on BG proét , valued in Cat smon ∞ .
Theorem 4.1 ([HMS94], Theorem 1.3). M ∈ Sp K is invertible if and only if E ∨ * M is a free π * E-module of rank one.
I k n ) contains the open neighbourhood x + I k n M (compare [BH16, Lemma 5.2]).
Definition 4. 3 .≃
3The algebraic Picard group of Sp K is Pic alg n := Pic Mod G π * E .The exotic Picard group of Sp K is defined by the exact sequence of abelian groups1 → κ n → Pic n E ∨ * − − → Pic alg n ,whose existence follows from Theorem 4.1. Restricting both Picard groups to their subgroups of elements concentrated in even degrees, one can equally obtain κ n as the kernel of Pic(Mod G π0E ).
Theorem 4. 4 .
4At arbitrary height n, the 1-line of the descent spectral sequence for pic(E) computes the image of Pic 0 n in Pic alg,0 n . The exotic Picard group κ n is computed by the subgroup in filtration ≥ 2. Proving Theorem 4.4 will require a short discussion of derived complete modules. Firstly, recall that π 0 E is a regular Noetherian local ring, with maximal ideal I n = (p, v 1 , . . . , v n ). If R is any such (classical) ring and m its maximal ideal, the m-adic completion functor (−) ∧ m has left-derived functors L i , defined for example in [GM92]; this is in spite of (−) ∧ m not being right-exact. For any R-module M the completion map M → M ∧ m factors through the zero-th derived functor M η M − − → L 0 M ϵ M − − → M ∧ m , and one says that M is L-complete or derived m-complete if η M is an isomorphism. Hovey and Strickland prove the following facts about L-completion:
an isomorphism when M is finitely generated [HS99, Proposition A.4]. In particular, if R is itself L-complete then finitely generated modules are complete, i.e. M = L 0 M = M ∧ m and L i M = 0 for i > 0.
L
R . Moreover, the abelian category Mod ♥cpl R of L-complete discrete R-modules includes as the heart of Mod cpl R for a t-structure constructed in [SAG, Proposition 7.3.4.4], and the ∞-categorical localisation functor L : Mod R → Mod cpl R agrees upon restriction with the (total) left derived functor of L-completion [SAG, Corollary 7.3.7.5]; in particular, L 0 ≃ π 0 L.
Remark 4. 10 .
10As a consequence, the map E ∨ * : Pic n → Pic alg n is:(i) injective if and only if the zero stem in the E ∞ term of the Picard spectral sequence is concentrated in filtration ≤ 1;
Corollary 4 .
411 ([CZ22], Proposition 1.25). If p > 2, p − 1 ∤ n and n 2 ≤ 4p − 4, there is an isomorphism
Examples of such pairs (n, p) are (3, 5), (5, 13), (9, 41) and (11, 61); in each case, this is the first prime for which [Pst22, Remark 2.6] leaves open the possibility of exotic Picard elements. The case (n, p) = (3, 5) case was considered by Culver and Zhang, using different methods; however, they show as above that Heard's spectral sequence combined with the conjectural vanishing H 0 (S, π 2p−2 E) = 0, would imply that κ n = 0 [CZ22, Corollary 1.27].
u −2 η 2 Figure 2 :Figure 3 :Figure 4 : 1 .
22341The E3-page of the HFPSS for E at p = 2. The E4 = E∞-page of the HFPSS for E at p = 2. The E3-page of the Picard spectral sequence at p = 2. H 0 denotes H 0 (Z × 2 , Pic(E)) = Z/2, and H 1 = H 1 (Z × 2 , (π0E) × ) = Pic alg,0 The possible dashed differentials affects the Picard group; we have omitted some possible differentials with source t − s ≤ −1.
Remark 5. 3 .
3Hopkins and Lurie show [HL, Corollary 2.2.3] that this is equivalent to a more familiar definition in terms of intrinsic properties of A: using [HL, Prop. 2.9.6] together with faithfulness of the map 1 K → E, one sees that A is K-locally Azumaya if and only if
Definition 5. 4 .
4The K-local Brauer group of R is the set Br(R) := Az(R)/∼, equipped with the abelian group structure [A] + [B] = [A ⊗ R B].
we will exhibit Br(R | R ′ ) as a full subspace of lim BPic(R ′ • ). Taking the covering G → * gives the desired result on π 0 . By definition, Br(R) is the full subcategory of Pic(Cat R,K ) spanned by module categories. Using the inclusion of BPic(R ′ ) as the component of the unit in Br(R ′ ) we can form the diagram
Figure 5 :
5The height one Picard spectral sequence for odd primes (implicitly at p = 3). Classes are labelled as follows:• = Z/2, □ × = Z × p , × = µp−1,and circles denote p-power torsion (labelled by the torsion degree). Since Pic1 ∼ = Pic alg 1 ∼ = Z × p , no differentials can hit the (−1)-stem. Differentials with source in stem t − s ≤ −2 have been omitted. Using the results of Section 5.1, we obtain an upper bound on the relative Brauer group: Corollary 5.16. At odd primes, Br 0 1 is isomorphic to a subgroup of µ p−1 . Proof. The only possible differentials are d 2 -differentials on classes in the (−1)-stem; note that there are no differentials into the (−1)-stem, since every E 2 -class in the 0-stem is a permanent cycle. The generator in E 1,0 2 supports a d 2 , since this is the case for the class in E 1,0 2 of the descent spectral sequence for the C 2 -action on KU [GL21, Prop. 7.15] (this is displayed in
Figure 6 :
6The E3-page of the Brauer spectral sequence at p = 2. We know that all remaining classes in the 0-stem survive, by comparing to the algebraic Picard group. Thus the only differentials that remain to compute are those out of the (−1)-stem; some of these can be transported from the descent spectral sequence for Pic(KO) hC 2 -see Figs. 7 and 8.
FFigure 7 :Figure 8 :
78≃ lim(· · · → τ ≤t F → · · ·) The E3-page of the Brauer spectral sequence for KO, as in [GL21,Figure 7.2]. The Picard spectral sequence for KO2.
Spectral sequence (54) at fixed t. The y-intercept is s = t. Spectral sequence (55) at fixed q.The y-intercept is s = q.
Figure 9 :
9The trigraded spectral sequences computing 1 E
)
The E 2 -page of (57). The leftmost nonzero entry is (t ′ − s ′ , s ′ ) = (t − s + 2, s − 2).
Figure 10 :
10Truncated spectral sequences computing π * C.
Lemma B. 1 .Figure 11 :Figure 12 :
11112The starting page of the K-local E-Adams spectral sequence isE s,t 2 = H s (Z × p , π t E) = Z p t = 0 and s = 0, 1 Z/p νp(t ′ )+1 t = 2(p − 1)t ′ ̸ = 0 and s = 1(59)and zero otherwise. In particular, it collapses immediately to E ∞ . The E3-page of the HFPSS for E at p = 2. The HFPSS for the C2-Galois extension KO → KU . The class η represents η in the Hurewicz image in π * KO, and towers of slope one are related by η-multiplications.
the leftmost term vanishes for t ̸ = 0 and the middle map is null for t = 0. Taking µ 2 -cohomology yields isomorphismsH * (µ 2 , π 0 KU 1+4Z2 2 ) ∼ − → H * (µ 2 , π 0 KU 2 )(64)H * (µ 2 , π 0 KU 2 ) ∼ − → H * (µ 2 , π −1 KU 1+4Z2 2 ),
u −2 η 2 Figure 13 : 2 → 2 .
21322The HFPSS for µ2 acting on KU 1+4Z 2 . Solid classes are detected in the HFPSS for KU2 by the map KU 1+4Z 2 KU2, with a filtration shift of one in internal degrees t ̸ = 0. Unfilled classes are in the image of the HFPSS for KU2 under Σ −1 KU2 → KU 1+4Z 2 The only differentials not immediately determined by this are the d2 and d4 differentials on classes in internal degree t = 0 and −3 respectively, which are treated at the end of Proposition B.3.
, Zachary Halladay, Luka Ilic, Thomas Nikolaus, Arthur Pander-Maat, Lucas Piessevaux, Maxime Ramzi, Ivan Tomašić and Paul vanKoughnett, and would like to extend my thanks to them all. Above all, I am indebted to my supervisors Lennart Meier and Behrang Noohi, for their guidance, support and careful reading of previous drafts. Finally, I'd like to thank Utrecht University for their hospitality. This work forms a part of my thesis, supported by EPSRC under grant EP/R513106/1.
for any open subgroup U ⊂ G [BD10, Discussion above Prop. 8.1.2 and Lemma 6.3.6 respectively]. In fact, [BD10, Theorem 8.2.1] proves the same equivalence for any closed subgroup H, but restricting ourshows that the spectrum F n := lim
− →
E dhU
n
(the colimit taken in plain spectra) defines
a fibrant object of Spt G , F: G/U → F hU
n
[Dav06, Corollary 3.14]; Behrens and Davis show that E hU
n
≃
L K F hU
n
≃ E dhU
n
Remark 2.26. Similar arguments are used in [BS14, Theorem 3.2.3] and [Mat21, Proposition A.10] to prove Postnikov completeness results. Specifically, Bhatt and Scholze prove that Postnikov towers of hypercomplete objects in the ∞-topos of BG proét converge. Using this fact, one can also deduce Corollary 2.24 directly from Proposition 2.8; in fact, this proves Postnikov completeness of E ⊗ X for any spectrum X. We remark that it was proven in [MR22] that Postnikov towers converge in the hypercomplete topos of an arbitrary replete site, answering [BS14, Question 3.1.12]. Corollary 2.27. The starting page of the spectral sequence (14) is given by continuous group cohomology:
Then there is an isomorphism between the descent and Bousfield-Kan spectral sequences, up to reindexing: for all r, Proposition 2.30. Décalage of the Postnikov filtration induces an isomorphism between the following spectral sequences:E s,t
r
∼ =Ě
2s−t,s
r+1 .
Theorem 3.1. The presheaf ν p Mod E,K : BG op proét → Pr L,smon satisfies hyperdescent. Remark 3.2. By Lemma 2.5, ν p Mod E δ ,K = Mod ν p E δ ,K = Mod E,K . Thus taking endomorphisms of the unit gives an alternative proof of Proposition 2.8 as an immediate corollary. The functor pic preserves limits of symmetric monoidal ∞-categories by [MS16, Proposition 2.2.3], and so we obtain the first avatar of our main result:
Let T • ↠ T −1 = S be a hypercovering in BG proét , and form the diagramHTT, Theorem 7.2.3.6 and Remark 7.2.3.3], so
that Postnikov towers in Sh(T ) converge: Sh(T ) ≃ lim n Sh(T, S ≤n ) by [HTT, Theorem 7.2.1.10].
The proof of Theorem 3.1 follows by combining the previous proposition with the results of [Mat16]:
Proof (Prop. 3.1). 5 though not by the unit!
. . .
. . .
. . .
Section 3.1 we showed that K-local modules determine a sheaf of symmetric-monoidal ∞-categories on the site BG proét . Since limits in Pr L,smon are computed in Cat smon∞
, the functor
pic : Pr L,smon → Sp ≥0 ,
preserves them [MS16, Prop. 2.2.3], and so the composition pic • Mod E,K is immediately seen to be a sheaf
of connective spectra. As a result, the 0-stem in its descent spectral sequence (20) converges conditionally to
Pic n . Its E 1 -page consists of cohomology of the homotopy sheaves π * (pic • Mod E,K ), and as in Corollary 2.27
we'd like to identify this with group cohomology with coefficients in the continuous G-module π * pic(E). In
order to deduce this once again from [BS14, Lemma 4.3.9], we would need to show that
Proposition 3.19. Any F ∈ Pic Sh(T, Mod E,K ) perf is locally free.because the descent spectral sequence for ΓF collapses immediately to the 0-line. Indeed, profinite sets have
homotopy dimension zero, and therefore cohomological dimension zero [HTT, Corollary 7.2.2.30].
Lemma 3.20. Suppose that C is a site, and D, D ′ are presentable prestable ∞-categories. Let F ∈ Sh(C, D), and p 1 , p 2 : D → D ′ two functors related by a natural equivalence τ[a,b] p 1 ≃ τ [a,b] p 2 . Then the descent spectral sequences for p 1 F and p 2 F ′ satisfy the following:(i) The E 2 -pages agree in a range: E s,t
2,p1F ≃ E s,t
2,p2F if t ∈ [a, b].
(ii) Under the isomorphism induced by (i), we have d s,t
r,p1F = d s,t
r,p2F if 2 ≤ r ≤ b − t + 1.
covering of profinite sets ([Sch12, Prop. 3.7]), we can restrict attention to the presheaf on Profin ≃ BG proét /G . The local duality equivalenceT → Mod cpl
π0E(T ×G)
(35)
Mod tors
π0E ≃ Mod cpl
π0E
of [BHV18, Theorem 3.7] or [SAG, Proposition 7.3.1.3] implies that Mod cpl
π0E is compactly generated, and
hence dualisable in Pr L . Thus the presheaf
We have taken slight notational liberties: in[Dav03], Spt G denotes the category of spectra based on discrete G-sets, which is equipped with a model structure lifted from the Jardine model structure on the (equivalent) category ShvSpt of sheaves of spectra.
According to[Wei94, Prop. 3.5.8], this is true whenever the system A j i, * → A j i, * is Mittag-Leffler for each fixed j. But we have already noted that any inclusion J ⊂ I induces a surjection A j i,J ↠ A j i,I .
This is almost what it means for A → B to be an effective descent morphism in the sense of[Mat16], except that we require descent for modules and not algebras.
(x) + x 2 = 2x 2 = 0.After this there is no space for further differentials on x, so κ 1 = Z/2.
Note that the inductive argument there applies in the 'cosimplicial' direction, i.e., in the notation of loc. cit. one shows for any fixed t that N n,k πtX = ker(πtX → M n,k πtX) for k, n ≤ s.
Brauer groups at height oneUsing the descent results of the Section 5.1, we now give bounds on the size of the relative Brauer groups Br 0 1 . As usual, the story differs depending on the parity of the prime. In fact, we give conjectural descriptions for some nontrivial Brauer elements; in upcoming work we will elaborate on these computations.Odd primesWe first consider the case p > 2. The starting page of the Picard spectral sequence is recorded in Lemma 4.16 (and computed in Lemma B.1). Using this, we obtain the following:Lemma 5.15. At odd primes, the starting page of the Picard spectral sequence is given byThis is displayed inFig. 5. In particular, the spectral sequence collapses for degree reasons at the E 3 -page.Proof. All that remains to compute is internal degrees t = 0, 1. We invoke the Lyndon-Hochschild-Serre spectral sequence, which collapses since the extension is split. In particular,Proof. We will use the Lyndon-Hochschild-Serre spectral sequence [SW00, Theorem 4.2.6] for the inclusionSince everything is (p)-local, taking µ p−1 fixed-points is exact. The spectral sequence therefore collapses, and what remains is to compute Z p -cohomology.By [SW00, §3.2], the trivial pro-p module Z p admits a projective resolutionin the (abelian) category C p (Z p ) of pro-p continuous Z × p -modules; here we write ζ for a topological generator, and Z p [[G]] := lim U <oG Z p [G/U ] for the completed group algebra of a profinite group G. In particular,where for the final isomorphism we have used the isomorphism exp p :to obtain ν p (exp p (ζ) t − 1) = ν p (tζ) + 1 = ν p (t) + 1. The µ p−1 -action has fixed points precisely when it is trivial, i.e. when p − 1 | t, which gives the stated form.B.2 p = 2What changes at even primes? In this case the multiplicative lift is instead defined on (Z/4) × , and provides a splittingThis implies the following more complicated form for the starting page, since cd 2 (µ 2 ) = ∞.Lemma B.2. The starting page of the descent spectral sequence for the action of G on E is given byZ 2 t = 0 and s = 0, 1 Z/2 t ≡ 4 2 and s odd Z/2 t ≡ 4 0 and s > 1 odd Z/2 ν2(t)+2 0 ̸ = t ≡ 4 0 and s = 1 (60) and zero otherwise. The result is displayed inFig. 11, which is reproduced for convenience of the reader.Proof. We will again use the Lyndon-Hochschild-Serre spectral sequence for the inclusion. Since µ 2 is 2-torsion, we will have higher µ 2 -cohomology contributions. Nevertheless, the computation of Z 2 -cohomology is identical to the odd-prime case, except that now one has ν 2 (exp 2 (ζ) t − 1) = ν 2 (t) + 2. ThusZ 2 t = 0 and j = 0, 1 Z 2 /2 ν2(t)+2 t ̸ = 0 and j = 1For t ̸ = 0, the E 2 -page of the Lyndon-Hochschild-Serre is therefore concentrated in degrees j = 1, and so collapses. This yields H s (Z × 2 , π t KU 2 ) ≃ H s−1 (µ 2 , Z/2 ν2(t)+2 ), which accounts for most of the groups in (60). For t = 0, it instead takes the form E * , * 2 = Z 2 [x (0,1) , y (2,0) ]/(2x, 2y, x 2 ) =⇒ H i+j (Z × 2 , π 0 KU 2 ).
An ∞-categorical approach to R-line bundles, R-module Thom spectra, and twisted R-homology. Matthew Ando, Andrew J Blumberg, David Gepner, Michael J Hopkins, Charles Rezk, http:/doi.wiley.com/10.1112/jtopol/jtt035Journal of Topology. 7317538416visited on 11/10/2022Matthew Ando, Andrew J. Blumberg, David Gepner, Michael J. Hopkins, and Charles Rezk. "An ∞-categorical approach to R-line bundles, R-module Thom spectra, and twisted R-homology". In: Journal of Topology 7.3 (Sept. 2014), pp. 869-893. issn: 17538416. doi: 10.1112/jtopol/ jtt035. url: http://doi.wiley.com/10.1112/jtopol/jtt035 (visited on 11/10/2022).
Topological Hochschild homology and cohomology of A ∞ ring spectra. Vigleik Angeltveit, Geometry & Topology. 12Vigleik Angeltveit. "Topological Hochschild homology and cohomology of A ∞ ring spectra". In: Geometry & Topology 12.2 (2008), pp. 987-1032.
Brauer groups andétale cohomology in derived algebraic geometry. Benjamin Antieau, David Gepner, 10.2140/gt.2014.18.1149arXiv:1210.0290Geom. Topol18visited on 11/24/2022) (cit. on pp. 4, 42, 43Benjamin Antieau and David Gepner. "Brauer groups andétale cohomology in derived algebraic geometry". In: Geom. Topol. 18.2 (Apr. 7, 2014), pp. 1149-1244. issn: 1364-0380, 1465-3060. doi: 10.2140/gt.2014.18.1149. arXiv: 1210.0290. url: http://arxiv.org/abs/1210.0290 (visited on 11/24/2022) (cit. on pp. 4, 42, 43).
Picard sheaves, local Brauer groups, and topological modular forms. Benjamin Antieau, Lennart Meier, Vesna Stojanoska, arXiv:2210.15743visited on 11/24/2022) (cit. on pp. 4, 42Benjamin Antieau, Lennart Meier, and Vesna Stojanoska. Picard sheaves, local Brauer groups, and topological modular forms. Oct. 27, 2022. arXiv: 2210.15743. url: http://arxiv.org/ abs/2210.15743 (visited on 11/24/2022) (cit. on pp. 4, 42).
Michael Artin, Pierre Deligne, Alexander Grothendieck, Jean-Louis Verdier, Bernard Saint-Donat, Théorie des topos et cohomologieétale des schémas. Séminaire de Géométrie Algébrique du Bois Marie-CohomologieExposée VbisMichael Artin, Pierre Deligne, Alexander Grothendieck, Jean-Louis Verdier, and Bernard Saint- Donat. "Théorie des topos et cohomologieétale des schémas". In: Séminaire de Géométrie Algébrique du Bois Marie-Cohomologie (SGA4). Chap. Exposée Vbis.
. Michael Artin, Barry Mazur, Lecture Notes in Mathematics. 100Etale Homotopy.Michael Artin and Barry Mazur. Etale Homotopy. Vol. 100. Lecture Notes in Mathematics.
. Heidelberg Berlin, http:/link.springer.com/10.1007/BFb0080957isbn: 978-3-540-04619-6 978-3-540-36142-8. doi: 10.1007/ BFb0080957Springervisited on 11/24/2022Berlin, Heidelberg: Springer, 1969. isbn: 978-3-540-04619-6 978-3-540-36142-8. doi: 10.1007/ BFb0080957. url: http://link.springer.com/10.1007/BFb0080957 (visited on 11/24/2022).
The Brauer group of a commutative ring. Maurice Auslander, Oscar Goldman, Transactions of the American Mathematical Society. 973citMaurice Auslander and Oscar Goldman. "The Brauer group of a commutative ring". In: Trans- actions of the American Mathematical Society 97.3 (1960), pp. 367-409 (cit. on p. 3).
L-complete Hopf algebroids and their comodules. Andrew Baker, arXiv:0901.1471visited on 12/06/2022Andrew Baker. L-complete Hopf algebroids and their comodules. June 10, 2009. arXiv: 0901. 1471. url: http://arxiv.org/abs/0901.1471 (visited on 12/06/2022).
Invertible modules for commutative S-algebras with residue fields". In: manuscripta math. Andrew Baker, Birgit Richter, http:/link.springer.com/10.1007/s00229-005-0582-1118visited on 11/24/2022) (cit. on p. 3Andrew Baker and Birgit Richter. "Invertible modules for commutative S-algebras with residue fields". In: manuscripta math. 118.1 (Sept. 2005), pp. 99-119. issn: 0025-2611, 1432-1785. doi: 10.1007/s00229-005-0582-1. url: http://link.springer.com/10.1007/s00229-005- 0582-1 (visited on 11/24/2022) (cit. on p. 3).
Brauer groups for commutative S-algebras. Andrew Baker, Birgit Richter, Markus Szymik, 10.1016/j.jpaa.2012.03.001arXiv:1005.5370Journal of Pure and Applied Algebra. 21642visited on 11/24/2022) (cit. onAndrew Baker, Birgit Richter, and Markus Szymik. "Brauer groups for commutative S-algebras". In: Journal of Pure and Applied Algebra 216.11 (Nov. 2012), pp. 2361-2376. issn: 00224049. doi: 10.1016/j.jpaa.2012.03.001. arXiv: 1005.5370. url: http://arxiv.org/abs/1005.5370 (visited on 11/24/2022) (cit. on pp. 42, 46).
Pro-categories in homotopy theory. Ilan Barnea, Yonatan Harpaz, Geoffroy Horel, 10.2140/agt.2017.17.567arXiv:1507.01564Algebr. Geom. Topol. 17visited on 11/05/2022Ilan Barnea, Yonatan Harpaz, and Geoffroy Horel. "Pro-categories in homotopy theory". In: Algebr. Geom. Topol. 17.1 (Jan. 26, 2017), pp. 567-643. issn: 1472-2739, 1472-2747. doi: 10. 2140 / agt . 2017 . 17 . 567. arXiv: 1507 . 01564. url: http : / / arxiv . org / abs / 1507 . 01564 (visited on 11/05/2022).
Constructing the determinant sphere using a Tate twist. Tobias Barthel, Agnès Beaudry, Paul G Goerss, Vesna Stojanoska, arXiv:1810.0665111/05/202210citTobias Barthel, Agnès Beaudry, Paul G. Goerss, and Vesna Stojanoska. Constructing the de- terminant sphere using a Tate twist. Sept. 13, 2021. arXiv: 1810.06651. url: http://arxiv. org/abs/1810.06651 (visited on 11/05/2022) (cit. on p. 10).
Completed power operations for Morava E-theory. Tobias Barthel, Martin Frankland, Algebraic & Geometric Topology. 1534citTobias Barthel and Martin Frankland. "Completed power operations for Morava E-theory". In: Algebraic & Geometric Topology 15.4 (2015), pp. 2065-2131 (cit. on p. 34).
The E 2 -term of the K(n)-local E n -Adams spectral sequence. Tobias Barthel, Drew Heard, 10.1016/j.topol.2016.03.024arXiv:1410.5269Topology and its Applications. 2061668641visited on 12/06/2022) (cit. on pp. 21, 32Tobias Barthel and Drew Heard. "The E 2 -term of the K(n)-local E n -Adams spectral sequence". In: Topology and its Applications 206 (June 2016), pp. 190-214. issn: 01668641. doi: 10.1016/ j.topol.2016.03.024. arXiv: 1410.5269. url: http://arxiv.org/abs/1410.5269 (visited on 12/06/2022) (cit. on pp. 21, 32).
| [] |
[
"Efficiency of MHD Wave Generation in Weakly Ionized Atmospheres",
"Efficiency of MHD Wave Generation in Weakly Ionized Atmospheres"
] | [
"Paul S Cally \nSchool of Mathematics\nMonash University\n3800VictoriaAustralia\n"
] | [
"School of Mathematics\nMonash University\n3800VictoriaAustralia"
] | [] | Generation of Alfvén and slow magneto-acoustic waves in weakly ionized atmospheres by excitation of the charges-only component of the two fluid (charges and neutrals) plasma is shown to be more or less efficient depending on the energy fraction initially allocated to the three stationary "flow differential" modes which characterize the inter-species drift. This is explained via detailed analysis of the full tendimensional spectral description of two-fluid linear magnetohydrodynamics. Excitation via the velocity of the charges only is found to be very inefficient, in accord with previous results, whilst excitation via the magnetic field perturbation alone is highly efficient. All ten eigenvalues and eigenvectors are presented analytically in the high collision frequency regime. | null | [
"https://export.arxiv.org/pdf/2306.04801v1.pdf"
] | 259,108,716 | 2306.04801 | 33485a05dce6ee12ed4311bfa5d39cf29d47a4a7 |
Efficiency of MHD Wave Generation in Weakly Ionized Atmospheres
June 9, 2023
Paul S Cally
School of Mathematics
Monash University
3800, Victoria, Australia
Draft version June 9, 2023. Typeset using LaTeX default style in AASTeX631.
Keywords: Solar atmosphere (1477); Plasma astrophysics (1261); Magnetohydrodynamics (1964)
Generation of Alfvén and slow magneto-acoustic waves in weakly ionized atmospheres by excitation of the charges-only component of the two-fluid (charges and neutrals) plasma is shown to be more or less efficient depending on the energy fraction initially allocated to the three stationary "flow differential" modes which characterize the inter-species drift. This is explained via detailed analysis of the full ten-dimensional spectral description of two-fluid linear magnetohydrodynamics. Excitation via the velocity of the charges only is found to be very inefficient, in accord with previous results, whilst excitation via the magnetic field perturbation alone is highly efficient. All ten eigenvalues and eigenvectors are presented analytically in the high collision frequency regime.
INTRODUCTION
The low atmospheres of cool stars are generally very weakly ionized. For example, the quiet solar photosphere has an ionization fraction as low as $10^{-4}$ (Khomenko et al. 2014). This has led to some discussion about whether Alfvén waves can be excited there. Alfvén waves are often invoked as important drivers of the solar wind and coronal heating (Cranmer & van Ballegooijen 2005; McIntosh et al. 2011), so this is an important issue.
To date, the argument has revolved around timescales, and less obviously the wave initialization mechanism. The electron-ion elastic scattering collision frequency at the quiet Sun photospheric base is of order $10^{10}\rm\ s^{-1}$, which allows the charges to be modelled as a single fluid at frequencies much lower than this. The coupling between the neutrals fluid and the charges fluid is then described by the charges-neutrals collision frequency $\nu_{cn}$ of around $10^{9}\rm\ s^{-1}$ and the neutral-charges collision frequency $\nu_{nc}$ of about $10^{6}\rm\ s^{-1}$, dropping to a few tens of thousands per second at the top of the photosphere. The two fluids (neutrals and charges) are coupled only via these collisions.
The standard one-fluid (1F) ideal magnetohydrodynamic (MHD) formula for Alfvén wave energy flux density (energy per unit area per unit time) is
$$F = E\,a \tag{1}$$
directed along the field lines, where $E = E_{\rm kin} + E_{\rm mag}$ is the wave energy density made up equally of kinetic and magnetic contributions, and $a$ is the Alfvén speed. In terms of the plasma velocity amplitude $v$ and equilibrium density $\rho$, $E_{\rm mag} = E_{\rm kin} = \frac{1}{4}\rho v^2$. The extra factor of $\frac{1}{2}$ comes from RMS averaging. Thus $F = \frac{1}{2}\rho v^2 a$ overall. Vranjes et al. (2008) argued that this should be reduced by the factor $\chi^{-2}$ in a two-fluid (2F, charges and neutrals) plasma, where the ratio of neutrals to charges density $\chi = \rho_n/\rho_c$ is typically of order $10^3$--$10^4$. This is because any velocity originally on the charges alone must quickly be shared with the neutrals, thereby greatly reducing $v$. Based on this insight, they concluded that a significant flux of Alfvén waves could not be generated in the solar photosphere.
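To make the scale of this reduction concrete, the following minimal Python sketch evaluates the 1F flux $F = \frac{1}{2}\rho v^2 a$ and the Vranjes et al. (2008) charges-only estimate $F/\chi^2$. The specific numbers (density, field strength, velocity amplitude, $\chi$) are assumptions chosen only to be representative of the quiet photosphere, not values taken from this paper.

```python
import numpy as np

# Illustrative quiet-Sun photospheric values (assumed, order-of-magnitude only)
rho = 2.0e-4        # total mass density [kg m^-3]
B = 0.01            # magnetic field strength [T] (100 G)
v = 1.0e3           # driver velocity amplitude [m s^-1]
chi = 1.0e3         # neutrals-to-charges density ratio rho_n / rho_c
mu0 = 4e-7 * np.pi  # vacuum permeability [H m^-1]

a = B / np.sqrt(mu0 * rho)      # total Alfven speed
F_1f = 0.5 * rho * v**2 * a     # standard one-fluid flux density [W m^-2]
F_reduced = F_1f / chi**2       # charges-only excitation estimate

print(f"Alfven speed a = {a:.3g} m/s")
print(f"1F flux        = {F_1f:.3g} W/m^2")
print(f"chi^-2 flux    = {F_reduced:.3g} W/m^2")
```

With these assumed values the $\chi^{-2}$ factor suppresses the flux by six orders of magnitude, which is the core of the Vranjes et al. argument.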
On the other hand, Tsap et al. (2011) found no such diminution of flux in their 2F model. The cause of the discrepancy was discussed by Soler et al. (2013b), who concluded that the issue hinges on whether or not the neutrals fluid is given initial velocity $\mathbf{v}_n$ matching that of the charges fluid $\mathbf{v}_c$. If it is, then there is no collisional quenching of the velocity and the full 1F energy flux formula applies. If on the other hand the neutrals are left unperturbed ($\mathbf{v}_n = 0$) by the driver, then the $\chi^{-2}$ reduction must be applied. Soler et al. judged that both viewpoints are valid; it just depends upon the nature of the perturbation that initiates the wave.
The nature of the dependence on initial conditions is explored further here using a spectral description.
Spectral Overview
For specified wavevector $\mathbf{k}$, it is shown that the linearized system is fully described by a tenth-order eigenvalue system, corresponding to six slowly decaying MHD wave modes (slow, Alfvén and fast, each propagating in the $\pm\mathbf{k}$ directions); one stationary, very slowly decaying isobaric mode with zero total pressure; and three very rapidly decaying (nanoseconds) stationary flow differential modes whose main characteristic is that they exhibit very different velocities of the neutrals and charges fluids. Only the MHD modes transport energy, and only the flow differential modes have significantly discrepant velocities between the two species. The eigenmodes are independent of each other, so their energies are simply whatever they were initially given by the excitation process, subject to their respective fast or slow collisional decays.
Any energy given to the flow differential modes is immediately lost to collisions, and represents an inefficiency of the excitation. For many simple excitations, this is around 50%. Energy deposited in the isobaric mode also does not contribute to energy flux, but typically very little of this mode is generated. Energy density $E_i$ given to each MHD mode (where $i$ represents one of the six modes) plays a full role in energy transport, carrying flux $F_i = E_i V_i$, where $V_i$ is the corresponding group velocity for that mode. Typically, realistic excitation results in nearly equal energies in the positively and negatively directed versions of each mode type, and so net flux is small or zero, but flux in each direction can independently be large.
However, pure excitation via just a magnetic field perturbation does not place significant energy in the flow differential modes, and so essentially all energy is efficiently allocated to travelling MHD modes, 50% in each direction. Therefore, a purely magnetic excitation can in fact be a most efficient generator of MHD waves. The same holds true for full-plasma excitations where charges and neutrals fluids are given the same initial velocities, as in Tsap et al. (2011), because again the flow differential modes are not significantly excited.
The eigensystem decomposition is described in detail in Section 2. Section 3 discusses the quadratic expressions for wave energy and flux. An analytic analysis of three different charges-only Alfvén wave excitation mechanisms is presented in Section 4. Numerical examples of both Alfvén and magneto-acoustic excitation corresponding to different levels in the low solar atmosphere are set out in Section 5. The results are discussed and summarized in Section 6.
MATHEMATICAL FORMULATION
Governing Equations
Consider a partially ionized hydrogen plasma. Extension to a more realistic chemical mixture does not change the arguments to be presented. The linearized coupled two-fluid (2F) wave equations take the form (Soler et al. 2013a)
$$\rho_n \frac{\partial \mathbf{v}_n}{\partial t} = -\nabla p_n + \alpha_{cn}(\mathbf{v}_c - \mathbf{v}_n), \tag{2a}$$
$$\frac{\partial p_n}{\partial t} = -\gamma P_n \nabla\cdot\mathbf{v}_n, \tag{2b}$$
for the neutrals, and
$$\rho_c \frac{\partial \mathbf{v}_c}{\partial t} = -\nabla p_c + \frac{1}{\mu}(\nabla\times\mathbf{b})\times\mathbf{B} - \alpha_{cn}(\mathbf{v}_c - \mathbf{v}_n), \tag{2c}$$
$$\frac{\partial p_c}{\partial t} = -\gamma P_c \nabla\cdot\mathbf{v}_c, \tag{2d}$$
$$\frac{\partial \mathbf{b}}{\partial t} = \nabla\times(\mathbf{v}_c\times\mathbf{B}), \tag{2e}$$
for the charges, where the equilibrium magnetic field $\mathbf{B}$ and the charges and neutrals gas pressures $P_c$ and $P_n$ are perturbed respectively by $\mathbf{b}$, $p_c$ and $p_n$. These equations follow from linearizing Equations (1) of Popescu Braileanu et al. (2019) under the assumption of temperature equality between charges and neutrals. The equilibrium charges and neutrals mass densities are $\rho_c$ and $\rho_n$, $\alpha_{cn}$ is the friction coefficient, and $\mu$ is the vacuum permeability. The fluid velocities of the charges and neutrals are respectively $\mathbf{v}_c = (u_c, v_c, w_c)$ and $\mathbf{v}_n = (u_n, v_n, w_n)$. Without loss of generality, $\mathbf{B}$ is oriented in the $z$-direction and the wavevector $\mathbf{k} \equiv -i\,\nabla = (\sin\theta, 0, \cos\theta)k$ lies in the $x$-$z$ plane. Since $\mathbf{b}$ is perpendicular to $\mathbf{k}$ (which also follows from $\nabla\cdot\mathbf{b} = 0$), we need only retain two components, $b_y$ and $b_\perp = b_x\cos\theta - b_z\sin\theta$.
The 2F energy equations above are adiabatic, and do not explicitly feed energy lost via collisions back into the thermal state of the plasma. However, it is to be understood that this is where it goes (Cally & Gómez-Míguez 2023).
Although Equations (2) are commonly used in two-fluid studies, Vranjes et al. (2008) (and more recently Alharbi et al. 2022) questioned the significance of the Lorentz force term on the right hand side of Equation (2c) on the basis that the collisional frequency greatly exceeds the ion gyrofrequency $\Omega_i$ and hence that ions traverse only a tiny portion of their gyration path before scattering. However, Tsap et al. (2011) demonstrate that the ratio of the Lorentz force to the net ion-neutral collision force is of order
$$\frac{\Omega_i}{\nu_{in}} : \frac{|\mathbf{v}_c - \mathbf{v}_n|}{|\mathbf{v}_c|}.$$
The left hand side is typically $10^{-3}$--$10^{-2}$ in the low solar atmosphere (see Khomenko et al. 2014, Fig. 1). However, rapid collisions easily ensure that the right hand side is smaller than this, so the Lorentz force dominates or is at least comparable to the collisional force. Indeed, as will be shown, the collisions impose two timescales on the system: (i) a very rapid scale of order nanoseconds over which $|\mathbf{v}_c - \mathbf{v}_n|$ is reduced almost to zero, and (ii) a much longer ambipolar diffusion timescale which typically exceeds Alfvén wave periods of interest. The Lorentz force is unimportant over the former scale, but clearly significant over the latter, so it is retained here.
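As a quick sanity check on the faster of these two timescales, the short sketch below evaluates the flow-differential decay time $1/[(1+\chi)\nu_{nc}]$ (anticipating Equation (3h) below) for representative low-photosphere parameters; the numerical values of $\chi$ and $\nu_{nc}$ are assumptions consistent with the orders of magnitude quoted above.

```python
# Decay time of the charges-neutrals velocity differential, 1 / ((1 + chi) * nu_nc).
# Parameter values are assumed, order-of-magnitude estimates only.
chi = 1.0e3      # neutrals-to-charges density ratio
nu_nc = 1.0e6    # neutral-charges collision frequency [s^-1]

nu_cn = chi * nu_nc                      # charges-neutrals collision frequency [s^-1]
tau_diff = 1.0 / ((1.0 + chi) * nu_nc)   # flow differential decay time [s]

print(f"nu_cn    = {nu_cn:.3g} s^-1")
print(f"tau_diff = {tau_diff:.3g} s")    # ~1e-9 s: the 'nanoseconds' scale
```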
Modal Eigenfrequencies and Eigenvectors
Let $\mathbf{X} = (u_n, v_n, w_n,\ \psi_n = p_n/\rho_n,\ u_c, v_c, w_c,\ \psi_c = p_c/\rho_c,\ \alpha_\perp = b_\perp/\sqrt{\mu\rho},\ \alpha_y = b_y/\sqrt{\mu\rho})^T$, where $\rho = \rho_c + \rho_n$ is the total equilibrium density. Then Equations (2) can be written in matrix form $d\mathbf{X}/dt = \mathsf{M}\mathbf{X}$ with exact solution $\mathbf{X}(t) = e^{\mathsf{M}t}\mathbf{X}(0)$. The matrix $\mathsf{M}$ is not defective, so the matrix exponential may be constructed by direct exponentiation of the eigenvalues, $e^{\mathsf{M}t} = \mathsf{P}\,\mathrm{diag}[e^{\lambda_1 t}, \ldots, e^{\lambda_{10} t}]\,\mathsf{P}^{-1}$, where the $\lambda_m$ are the eigenvalues of $\mathsf{M}$ and the columns of the $10\times 10$ matrix $\mathsf{P}$ are the corresponding eigenvectors.
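The following self-contained Python sketch illustrates this eigen-decomposition propagator on a generic (randomly generated, hence almost surely non-defective) matrix; a random stand-in is used for $\mathsf{M}$ purely for illustration, since the explicit matrix is deferred to Appendix A.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 10
M = rng.standard_normal((n, n))   # stand-in for the 10x10 system matrix
X0 = rng.standard_normal(n)       # stand-in for an initial perturbation X(0)

# Eigen-decomposition propagator: X(t) = P diag(exp(lambda_m t)) P^{-1} X(0)
lam, P = np.linalg.eig(M)         # columns of P are the eigenvectors
coeffs = np.linalg.solve(P, X0)   # modal amplitudes excited by X(0)

def X(t):
    """Exact solution of dX/dt = M X via the eigenmode expansion."""
    return P @ (np.exp(lam * t) * coeffs)

t = 0.3
assert np.allclose(X(t), expm(M * t) @ X0)   # imaginary parts are at rounding level
print("eigenmode propagator agrees with expm(M t) @ X(0)")
```

The vector `coeffs` is the decomposition of the initial state into eigenmodes, which is exactly the quantity that determines how much of an excitation is wasted on the rapidly decaying modes discussed above.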
The following quantities are introduced to define $\mathsf{M}$: the neutrals to charges mass ratio $\chi = \rho_n/\rho_c$, the (total) Alfvén velocity $a = B/\sqrt{\mu\rho}$, and the (total) sound speed $c$ defined in terms of the charges and neutrals squared sound speeds $c_c^2 = \gamma P_c/\rho_c$ and $c_n^2 = \gamma P_n/\rho_n$ by $c^2 = (c_c^2 + \chi c_n^2)/(1+\chi)$. Thermal equilibrium is assumed between the species, $c_c^2 = 2c_n^2$, so $c^2 = c_n^2(2+\chi)/(1+\chi)$. The total Alfvén speed $a = B/\sqrt{\mu\rho}$ is related to the charges-only Alfvén speed $a_c = B/\sqrt{\mu\rho_c}$ by $a_c = a\sqrt{1+\chi}$. It is natural to use $a$ rather than $a_c$ as the measure of magnetic influence in the strongly coupled regime. We also introduce the charges-neutrals collision frequency $\nu_{cn} = \alpha_{cn}/\rho_c$ and the neutral-charges collision frequency $\nu_{nc} = \alpha_{cn}/\rho_n$, related by $\nu_{cn} = \chi\nu_{nc}$.
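For numerical work it is convenient to wrap these definitions up; the minimal helper below (a sketch, with SI inputs and illustrative, assumed parameter values) computes the derived quantities just introduced from the primitive equilibrium variables.

```python
import numpy as np

def derived(rho_c, rho_n, B, P_c, P_n, alpha_cn, gamma=5/3, mu=4e-7*np.pi):
    """Derived 2F quantities defined above (SI units assumed)."""
    rho = rho_c + rho_n
    chi = rho_n / rho_c                  # neutrals-to-charges density ratio
    a = B / np.sqrt(mu * rho)            # total Alfven speed
    c2 = (gamma * P_c / rho_c + chi * gamma * P_n / rho_n) / (1 + chi)
    nu_cn = alpha_cn / rho_c             # charges-neutrals collision frequency
    nu_nc = alpha_cn / rho_n             # neutral-charges collision frequency
    return dict(chi=chi, a=a, c=np.sqrt(c2), nu_cn=nu_cn, nu_nc=nu_nc)

# Illustrative (assumed) photospheric values, chosen so that P_c/rho_c = 2 P_n/rho_n
# and so that nu_cn ~ 1e9 s^-1 and nu_nc ~ 1e6 s^-1, as quoted in the Introduction.
print(derived(rho_c=2.0e-7, rho_n=2.0e-4, B=0.01, P_c=12.0, P_n=6.0e3, alpha_cn=200.0))
```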
The matrix $\mathsf{M}$ is written down explicitly in Appendix A. In the absence of collisions, the top-left $4\times 4$ and bottom-right $6\times 6$ submatrices of $\mathsf{M}$ decouple, yielding two oppositely directed acoustic waves and two steady (zero-frequency) incompressive velocity shears on the neutrals, and the six oppositely directed slow, Alfvén and fast waves on the charges.
On the other hand, the ten eigenfrequencies $\omega = i\lambda$ and the accompanying eigenvectors of the full collisionally coupled system are quite different in nature. All eigenfrequencies are complex for finite non-zero $\nu_{nc}$ and may be calculated numerically for any combination of parameters. It is instructive to calculate their asymptotic values in the large collision rate regime $\nu_{nc} \gg \omega$ that applies to waves of interest in the low solar atmosphere. A perturbation method for deriving these formulae is sketched in Appendix B.
Alfvén waves -
$$\omega_A = ak\cos\theta\left(\pm 1 - \frac{i\,ak\chi\cos\theta}{2(1+\chi)\nu_{nc}}\right) + O\!\left[(\omega/\nu_{nc})^2\right], \tag{3a}$$
which accords with Equation (21) of de Pontieu & Haerendel (1998) for the two-dimensional case $\theta = 0$. The corresponding eigenvectors are
$$\mathbf{A}_\pm = \left(0,\ \mp 1 - \frac{i\,ak_z(\chi+2)}{2(\chi+1)\nu_{nc}} + O(\nu_{nc}^{-2}),\ 0,\ 0,\ 0,\ \mp 1 + \frac{i\,ak_z\chi}{2(\chi+1)\nu_{nc}} + O(\nu_{nc}^{-2}),\ 0,\ 0,\ 0,\ 1\right), \tag{3b}$$
which verifies that the velocities on the neutrals and charges (second and sixth components) differ only at $O(\nu_{nc}^{-1})$.
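As an illustration of how weak this damping is, the sketch below evaluates the decay rate implied by Equation (3a), $\mathrm{Im}\,\omega_A = -(ak\cos\theta)^2\chi/[2(1+\chi)\nu_{nc}]$, relative to the oscillation frequency $ak\cos\theta$; the parameter values are assumed, representative numbers only, not values taken from this paper.

```python
import numpy as np

# Illustrative parameters (assumed): wave and plasma properties near the photosphere
a = 1.0e4               # total Alfven speed [m s^-1]
k = 2 * np.pi / 1.0e6   # wavenumber for a 1000 km wavelength [m^-1]
theta = 0.0             # propagation along the field
chi = 1.0e3             # rho_n / rho_c
nu_nc = 1.0e6           # neutral-charges collision frequency [s^-1]

omega_re = a * k * np.cos(theta)                                        # oscillation frequency
omega_im = -(a * k * np.cos(theta))**2 * chi / (2 * (1 + chi) * nu_nc)  # decay rate, Eq. (3a)

print(f"Re(omega) = {omega_re:.3g} s^-1")
print(f"Im(omega) = {omega_im:.3g} s^-1")
print(f"|Re/Im|   = {abs(omega_re / omega_im):.3g}")   # a very high quality factor
```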
Slow waves -
$$\omega_s = \pm\frac{k\sqrt{a^2+c^2-\Delta}}{\sqrt{2}} - \frac{i\,k^2\chi H_s}{2\nu_{nc}(\chi+1)(\chi+2)^2\,(a^2+c^2-\Delta)\,\Delta} + O\!\left[(\omega/\nu_{nc})^2\right], \tag{3c}$$
where $\Delta = \sqrt{a^4 + c^4 - 2a^2c^2\cos 2\theta}$ satisfies $|c^2 - a^2| \leqslant \Delta \leqslant a^2 + c^2$, and
$$H_s = a^2c^2\cos 2\theta\left[a^2(\chi+2)(\chi+3) + c^2(\chi+3) - \Delta(\chi+2)\right] + \Delta\left[a^4(\chi+2)^2 + a^2c^2(\chi+2) + c^4\right] - \left[a^2(\chi+2) + c^2\right]\left[a^4(\chi+2) + c^4\right]. \tag{3d}$$
The corresponding eigenvector up to and including terms of $O(\nu_{nc}^{-1})$ is far too long to present here, so we give it to leading order only. For the positively (upper sign) and negatively (lower sign) directed modes, the eigenvector is
$$\mathbf{S}_\pm = \left( \mp\frac{\sqrt{a^2+c^2-\Delta}}{\sqrt{2}\,a},\ 0,\ \pm\frac{\cot\theta\,(a^2-c^2+\Delta)}{\sqrt{2}\,a\,\sqrt{a^2+c^2-\Delta}},\ \frac{c^2(\chi+1)\csc\theta\,(a^2\cos 2\theta - c^2 + \Delta)}{a(\chi+2)(a^2+c^2-\Delta)},\ \mp\frac{\sqrt{a^2+c^2-\Delta}}{\sqrt{2}\,a},\ 0,\ \pm\frac{\cot\theta\,(a^2-c^2+\Delta)}{\sqrt{2}\,a\,\sqrt{a^2+c^2-\Delta}},\ \frac{2c^2(\chi+1)\csc\theta\,(a^2\cos 2\theta - c^2 + \Delta)}{a(\chi+2)(a^2+c^2-\Delta)},\ 1,\ 0\right) + O(\nu_{nc}^{-1}). \tag{3e}$$
Again the velocities on the neutrals and charges differ only at $O(\nu_{nc}^{-1})$.
Fast waves -
ω f = ± k √ a 2 + c 2 + ∆ √ 2 − ik 2 χH f 2ν nc (χ + 1)(χ + 2) 2 (a 2 + c 2 + ∆) ∆ + O (ω/ν nc ) 2 ,(3f)
where H_f = −a²c² cos 2θ [a²(χ + 2)(χ + 3) + c²(χ + 3) + ∆(χ + 2)] + ∆[a⁴(χ + 2)² + a²c²(χ + 2) + c⁴] + [a²(χ + 2) + c²][a⁴(χ + 2) + c⁴].    (3g)
The eigenvector F ± for the positively directed fast mode is exactly as for S ± given in Equation (3e) but with ∆ replaced by −∆.
Flow differential mode rapid decay (triple) -
ω_diff = −i(1 + χ)ν_nc + O(a²k²/ν_nc).    (3h)
The eigenvectors are
f_x = (−1, 0, 0, −i c² k sinθ/[ν_nc(χ + 2)], χ, 0, 0, 2i c² k χ sinθ/[ν_nc(χ + 2)], −i a k χ/[ν_nc(χ + 1)], 0) + O(ν_nc⁻²),
f_y = (0, −1, 0, 0, 0, χ, 0, 0, 0, −i a k χ cosθ/[ν_nc(χ + 1)]) + O(ν_nc⁻²),
f_z = (0, 0, −1, −i c² k cosθ/[ν_nc(χ + 2)], 0, 0, χ, 2i c² k χ cosθ/[ν_nc(χ + 2)], 0, 0) + O(ν_nc⁻²),    (3i)
corresponding to flows in the x, y and z directions respectively. To leading order, all three of these modes have oppositely directed velocities with ratio |v_c|/|v_n| = χ and no thermal or magnetic components, so their energy is purely kinetic to this order. All other eigenmodes have near-equal velocities in the highly collisional regime. The rapid decay of these flow eigenmodes represents the decay of the flow differentials.
Isobaric mode slow decay -
ω_pr = −2i c² k² (1 + χ)/[ν_nc(2 + χ)²] + O[(c k/ν_nc)³] c k.    (3j)
The eigenvector is
(i k sinθ/ν_nc, 0, −i k cosθ (χ tan²θ − 2)/[ν_nc(χ + 2)], −1, 0, 0, −i k χ secθ/[ν_nc(χ + 2)], χ, 0, 0) + O(ν_nc⁻²).    (3k)
This mode balances the pressures on the neutrals and charges, p_c + p_n = 0, with small O(ν_nc⁻¹) velocity (u and w) and magnetic (b_⊥) perturbations. Hence it is stationary apart from the slow diffusive decay.
To the order shown, the three classic MHD waves inherit a slow temporal decay rate of order 1/ν_nc, even for χ ≫ 1. The decay rate of the erstwhile stationary isobaric mode is even slower, of order 1/(χν_nc) in the weakly ionized regime. In stark contrast, the three modes representing an imbalance between v_c and v_n decay extremely rapidly, with rate exactly ν_nc + ν_cn, independent of ionization fraction. For Alfvén waves, this is in accord with the analysis of Section 4.1.3 of Soler et al. (2013b).
The four modes with zero real component of eigenfrequency are called "entropy modes" by Soler et al. (2013a), and "vortex modes" by Zaqarashvili et al. (2011) based on the natures of zero-frequency modes in 1F plasmas. However, in the 2F context, we prefer the usage introduced above. Only the isobaric mode has any thermal signature, and so could reasonably be called an "entropy" mode, but in view of its pressure balance character, we adopt "isobaric" as a more precise descriptor. The flow differential modes are most notable for their discrepant velocities, and so are best named accordingly, rather than with reference to the unrelated vorticity i k×v.
Cutoffs
From the exact expression for the Alfvén eigenfrequency, it can be shown that there are Alfvén cutoff wavenumbers at
k^±_{A,c} = [ν_nc |secθ| / (√(8(χ + 1)) a)] √(χ² + 20χ − 8 ∓ (χ − 8)^{3/2} χ^{1/2})
          = 2√2 ν_nc |secθ| (1 + χ) / {a √[χ² + 20χ − 8 ± (χ − 8)^{3/2} χ^{1/2}]}
          = (2ν_nc |secθ|/a)[1 − 1/χ + O(χ⁻³)]  for the "+" sign,
          = [ν_nc |secθ| √χ/(2a)][1 + 3/(2χ) + O(χ⁻²)]  for the "−" sign,   for χ ≫ 1.    (4)
The second line here accords with Equation (20) of Soler et al. (2013b) (for the case θ = 0), noting that their c_A is our a_c = B/√(µρ_c) = a√(1 + χ), the charges-only Alfvén speed. As they note, there is a real cutoff wavenumber interval only if χ > 8. For reference, although χ ≫ 1 in the low solar atmosphere, it is only about 1 at the top of the chromosphere. The Alfvén eigenfrequency is pure imaginary (evanescent) for k^+_{A,c} ⩽ k ⩽ k^−_{A,c}.
The slow wave is subject to similar cutoffs (Soler et al. 2013a). However, these Alfvén and slow cutoffs are at very high wavenumbers and associated frequencies in the solar context, so are of limited practical importance. They are illustrated in Figure 1.
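The cutoff interval can be confirmed numerically from the Alfvén dispersion cubic of Appendix A; the sketch below (illustrative parameters) evaluates the two cutoffs of Equation (4) and probes the roots on either side:

```python
import numpy as np

def alfven_omegas(k, theta, a, chi, nu):
    """Roots of the Alfven dispersion cubic of Appendix A:
    omega^2 (omega + i(chi+1)nu) - a^2 k^2 (chi+1) cos^2(theta) (omega + i nu) = 0."""
    A2 = a**2 * k**2 * (chi + 1) * np.cos(theta)**2
    return np.roots([1.0, 1j * (chi + 1) * nu, -A2, -1j * A2 * nu])

a, theta, chi, nu = 5.5, np.pi / 6, 10500.0, 1000.0     # km/s, rad, -, 1/s
disc = (chi - 8) ** 1.5 * chi**0.5                      # (chi-8)^{3/2} chi^{1/2}, real for chi > 8
pref = nu / np.cos(theta) / (np.sqrt(8 * (chi + 1)) * a)
k_plus = pref * np.sqrt(chi**2 + 20 * chi - 8 - disc)   # lower cutoff, ~ 2 nu |sec| / a
k_minus = pref * np.sqrt(chi**2 + 20 * chi - 8 + disc)  # upper cutoff, ~ nu |sec| sqrt(chi) / (2a)
for k in (0.9 * k_plus, 1.1 * k_plus, 0.9 * k_minus, 1.1 * k_minus):
    print(f"k = {k:10.1f} rad/km, max |Re omega| = "
          f"{np.max(np.abs(alfven_omegas(k, theta, a, chi, nu).real)):.3e}")
# inside [k_plus, k_minus] all three roots should be purely imaginary (evanescent)
```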
Section Summary
In summary, the full solution of the wave equations takes the form
X(t) = Σ_{m=1}^{10} C_m X_m e^{−i ω_m t},    (5)
where the ω j are the complex eigenfrequencies and the X m are the corresponding eigenvectors. In matrix form, this is
X(t) = P e^{−i Ω t} C,    (6)
where Ω = diag[ω₁, …, ω₁₀], C = (C₁, …, C₁₀)^T, and P is the matrix of eigenvectors of M arranged as columns consistently with the eigenvalues. In all cases of interest, the three flow differential modes (arbitrarily m = 5, 6 and 7) have large negative imaginary eigenfrequencies, and hence disappear almost instantaneously. For all intents and purposes, therefore, only the remaining seven eigenvalues and their eigenvectors are significant. We saw analytically in Equations (3), and shall see graphically in Figure 2, that these exhibit only minor drift between neutrals and charges.
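Numerically, this observation suggests simply projecting the flow differential modes out of any initial state; a minimal sketch (assuming a coefficient matrix M assembled as in Appendix A):

```python
import numpy as np

def drop_flow_modes(M, X0, chi, nu_nc):
    """Spectral decomposition X0 = sum_m C_m X_m, discarding the three flow
    differential modes (Re lambda ~ -(chi+1) nu_nc); the result approximates
    the state an instant after the nanosecond collisional transient."""
    lam, P = np.linalg.eig(M)
    C = np.linalg.solve(P, X0)                   # C = P^{-1} X(0), cf. Equation (6)
    keep = lam.real > -0.5 * (chi + 1) * nu_nc   # retain the seven slowly decaying modes
    return P[:, keep] @ C[keep]
```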
ENERGY AND FLUX
It is conventional to introduce quadratic expressions for wave-energy density and wave-energy-density flux in linear wave studies. Linear energy contributions such as B·b/µ for the magnetic energy average to zero over a period and are ignored. Extending the standard Eckart (1960) process to the 2F equations (2), it is easily shown (Cally & Gómez-Míguez 2023) that

∂E/∂t + ∇·F = −α_cn |v_n − v_c|²,    (7)
where E = ρ_n|v_n|²/2 + p_n²/(2ρ_n c_n²) + ρ_c|v_c|²/2 + p_c²/(2ρ_c c_c²) + |b|²/(2µ) may be interpreted as a quadratic energy density, and F = p_n v_n + p_c v_c − µ⁻¹(v_c × B) × b is the corresponding energy-density flux. The right-hand side represents collisional loss. The energy density consists respectively of the kinetic energy of the neutrals, the thermal energy of the neutrals, the kinetic energy of the charges, the thermal energy of the charges and the magnetic energy. The flux consists of the rate of working by the neutrals and charges pressure perturbations and the Poynting flux.
When a complex representation of the variables is in use, the quadratic terms in E are replaced by their absolute values, so
E = ½ρ_n|v_n|² + |p_n|²/(2ρ_n c_n²) + ½ρ_c|v_c|² + |p_c|²/(2ρ_c c_c²) + |b|²/(2µ),    (8)
and the flux becomes
F = Re[p_n* v_n + p_c* v_c − (1/µ)(v_c × B) × b*].    (9)
Any particular initialization of oscillations corresponds to a particular partitioning of energy into the ten eigenmodes.
The quadratic wave energy density may be expressed as an Hermitian form in terms of the spectral coefficients C:
E = ρ_n C† Ψ(t) C.    (10)
Similarly, the vertical wave energy flux is
F_z = ½ ρ_n C† Ξ(t) C.    (11)
The Hermitian matrices Ψ and Ξ are described in Appendix C. Of course, the flux in any other direction can be calculated similarly, but F z will serve to illustrate the point.
INITIALIZING ALFVÉN WAVES ON THE CHARGES ALONE
Because of its intrinsic interest as well as its relative simplicity, we now look at the Alfvén case, polarized in the y-direction, but initiate it in three contrasting ways: (i) with an initial velocity v c only; (ii) with an initial magnetic perturbation b y only; and (iii) with a fully formed Alfvén wave on the charges alone. By way of exposition, a further whole-plasma kinetic excitation is also briefly discussed: (iv) initiation via v n = v c only.
(i) Ignoring terms of order ν_nc⁻¹ in the eigenvectors set out above, the velocity-only initiation proposed by Vranjes et al. (2008) is constructed uniquely from our eigenvectors as
X(0) = (0, 0, 0, 0, 0, 1, 0, 0, 0, 0) = (A_− − A_+ + 2f_y)/(χ + 2).    (12a)
This consists initially of two oppositely directed full-plasma Alfvén waves and the y-polarized flow differential mode that decays almost instantly. Based on Equations (C11) and (C12), the total energy density at time t = 0 associated with unit velocity is E(0) = ρ_n/(2χ) = ρ_c/2, as expected. Almost instantly, the f_y component decays due to collisions, leaving only the two oppositely directed Alfvén waves, each with energy density E_± = ρ_c(χ + 1)/(χ + 2)², which is negligible in comparison for χ ≫ 1. The corresponding wave-energy fluxes a E_+ and −a E_−, directed along the equilibrium magnetic field, are similarly small. This is very inefficient.
(ii) Alternatively, initializing the wave with a magnetic perturbation only,
X(0) = (0, 0, 0, 0, 0, 0, 0, 0, 0, 1/√(χ + 1)) = (A_+ + A_−)/(2√(χ + 1)),    (12b)
similarly injects energy E(0) = ρ_c/2. However, now the modal decomposition contains no decaying flow differential component to leading order. The two energy densities are E_± = ρ_c/4, equally splitting the energy, with fluxes ±ρ_c a/4 in the two directions. This is very efficient.
(iii) Next, we suppose that a fully developed Alfvén wave is placed on the charges only at t = 0, as if the charges and neutrals were collisionally disconnected on t < 0 with collisions turning on discontinuously at t = 0. Then
X(0) = (1/√2)(0, 0, 0, 0, 0, −1, 0, 0, 0, (χ + 1)^{−1/2}) = [½(1 + (χ + 1)^{−1/2}) A_+ + ½(1 − (χ + 1)^{−1/2}) A_− − (χ + 1)^{−1/2} f_y] / (2(χ + 1))^{1/2},    (12c)
with total energy E(0) = ρ_c/2. Once again, the flow differential mode f_y vanishes rapidly, this time leaving slightly unbalanced Alfvén waves propagating in the two field-aligned directions. The respective energy densities are E_± = ρ_c(√(χ + 1) ± 1)²/[8(χ + 1)] for the Alfvén waves and E_y = ρ_c χ/[4(χ + 1)] for the flow differential mode. In the χ ≫ 1 regime, these are asymptotically E_± ∼ ρ_c/8, with a small excess at higher order in the prograde Alfvén wave compared to the retrograde one, and E_y ∼ ρ_c/4. So, in effect, this charges-only Alfvén excitation rapidly loses half its energy to collisions and initiates oppositely directed Alfvén waves, each with a quarter of the total energy and carrying fluxes ±ρ_c a/8 in the two directions. Despite the imposed directionality of the original Alfvén wave on the charges, the resulting Alfvén waves on the full plasma are nearly balanced in the two directions. This case is 50% efficient in generating Alfvén waves.
(iv) Finally, we mention the whole-plasma excitation case where the species velocities v n = v c are initiated together, as envisaged explicitly in Section 6 of Soler et al. (2013b) and implicitly by Tsap et al. (2011). Then
X(0) = (0, −1/√(χ + 1), 0, 0, 0, −1/√(χ + 1), 0, 0, 0, 0) = (A_+ − A_−)/(2√(χ + 1)).    (12d)
This again places no energy on the flow differential modes, with almost identical consequences to case (ii): the oppositely directed Alfvén waves are generated equally, with no collisional energy loss.
Overall, generation of Alfvén waves via the charges alone can range from highly inefficient (case i) to highly efficient (case ii) and in between (case iii). The distinguishing feature is the energy initially placed in the flow differential mode, which is rapidly damped and its energy lost to thermalization. Cases (ii) and (iv) suffer no such losses to leading order in the inverse collision frequency, and so are highly efficient.
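The energy bookkeeping of these cases can be reproduced on the y-polarized (v_n, v_c, α_y) block alone. The sketch below (illustrative, not the paper's code) projects the case (i) charges-only velocity impulse onto the three eigenmodes and weights each with the y-entries of Φ from Equation (C12); the flow differential mode should absorb essentially all of E(0), with only O(1/χ) fractions left in the two Alfvén modes:

```python
import numpy as np

a, kz, chi, nu = 5.5, 5.45, 10500.0, 1000.0    # km/s, rad/km (k cos 30deg), -, 1/s
rho_n = 1.0                                     # energies per unit neutral density

# y-polarized block of Equation (A1), state (v_n, v_c, alpha_y)
My = np.array([[-nu,       nu,          0.0                    ],
               [chi * nu, -chi * nu,    1j * a * (chi + 1) * kz],
               [0.0,       1j * a * kz, 0.0                    ]])
lam, P = np.linalg.eig(My)
w_phi = 0.5 * rho_n * np.array([1.0, 1.0 / chi, (chi + 1) / chi])  # y-entries of Eq. (C12)

X0 = np.array([0.0, 1.0, 0.0], dtype=complex)   # case (i): unit velocity on the charges only
C = np.linalg.solve(P, X0)
E0 = np.sum(w_phi * np.abs(X0) ** 2)            # = rho_c/2 = 1/(2 chi) here
for m in np.argsort(lam.real):
    Em = np.sum(w_phi * np.abs(C[m] * P[:, m]) ** 2)
    print(f"omega = {1j * lam[m]:.4e},  E_m/E(0) = {Em / E0:.3e}")
# the flow differential mode (Im omega ~ -(chi+1) nu) should carry ~100% of E(0);
# each Alfven mode retains only ~O(1/chi) of it, i.e. case (i) is very inefficient
```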
Of the three scenarios discussed, case (i), excitation via v c only, is the least plausible as well as the least efficient. It is difficult to envisage a mechanism that would inject energy into the charges plasma velocity only and not the magnetic field b y . Specifically, a purely mechanical mechanism could not distinguish between the charges and the neutrals, and an electrical impulse would generate a magnetic perturbation via Faraday induction.
A similar analytical analysis could be performed for the magneto-acoustic waves, though with greater algebraic complexity. However, we choose to proceed numerically. In the following section Alfvén as well as the analogous magneto-acoustic wave excitations are explored via computationally-derived eigenvalues and eigenvectors at three representative levels in the low solar atmosphere for charges-only initiations akin to case (iii) above.
NUMERICAL RESULTS: THREE HEIGHTS
Consider three heights in the solar atmosphere: the lower photosphere (h = 0 km), the upper photosphere (h = 250 km) and the temperature minimum (h = 560 km), with representative sound speed c, Alfvén speed a, ionization ratio χ and collision frequency ν_nc set out in Table 1. The complex eigenfrequencies ω_r + i ω_i of the slow, Alfvén and fast waves are plotted against k in Figure 1 for the temperature-minimum model, h = 560 km. The first Alfvén cutoff, k^+_{A,c}, and its similar slow counterpart, are apparent. They accurately match the asymptotic formulae given in Equations (3) until the cutoffs start to intrude.
For concreteness, we primarily examine these models with the fiducial wavenumber k = k₁ = 2π/10³ = 0.0063 rad m⁻¹, corresponding to a wavelength of 1 km, and θ = 30°. A non-zero propagation angle θ removes any ambiguity
between Alfvén and slow waves. With these values, the corresponding numerically calculated frequencies ω_r/2π (Hz) and decay times τ = 1/ω_i are also set out in Table 1. They scale with k as set out in Equations (3). The slow and Alfvén modes are not followed beyond their cutoffs. The decay rates of the acoustically dominated fast wave are well below those of the magnetically dominated slow and Alfvén waves.

Figure 1. Phase speed ω_r/k (km s⁻¹, log-linear, left) and decay rate |ω_i| (s⁻¹, log-log, right) plotted against wavenumber k (rad m⁻¹) for the temperature-minimum model h = 560 km with θ = 30°. For comparison, the specific wavenumber k₁ = 0.0063 rad m⁻¹ referenced in Table 1 and various examples below is comfortably short of the cutoffs and therefore in the asymptotic regime of Equations (3). Blue: slow wave; red long-dashed: Alfvén wave; green short-dashed: fast wave. The vertical line indicates the Alfvén cutoff k^+_{A,c}.
In the spirit of Vranjes et al. (2008), the cases where either an Alfvén, slow or fast wave is initiated on the decoupled charges alone are now explored. This is an extension of Scenario (iii) of Section 4, and represents a sort of thought experiment where we suppose the neutrals and charges are collisionally decoupled on t < 0, and one of these MHD modes is placed on the charges alone. It is of interest to determine how these charges-only eigenmodes project onto the 10D eigenspace of the coupled system. Collisions are then turned on at t = 0.

Figure 2. Energies in the ten components (u_n, v_n, w_n, ψ_n, u_c, v_c, w_c, ψ_c, α_⊥, α_y) of the eigenmodes (log scale) for the three models of Table 1, h = 0, 250 and 560 km respectively for the first, second and third columns. In each case, k = k₁ and θ = 30°. Total energy is normalized to 1, and each mode is labelled with its eigenfrequency. The rows correspond to the different mode types. From top to bottom: isobaric mode; the three flow differential modes; slow; Alfvén; fast. The colours correspond to neutrals-kinetic (light fawn); neutrals-thermal (dark fawn); charges-kinetic (red); charges-thermal (red-brown); and magnetic (blue).

Figure 2 physically characterizes the eigenmodes by presenting the energy distributions associated with each of the ten components of X for each of seven eigenmodes of the coupled system (rows) for each h (columns), with k = k₁ and θ = 30° (the oppositely directed slow, Alfvén and fast modes are omitted as they are the same as for the forward-directed cases). Noting that the ordinate is presented logarithmically, it is seen that:
• The isobaric mode resides predominantly on the charges. The pressure perturbations in the neutral and charged fluids cancel out, p_n + p_c = 0, allowing a static atmosphere in the strong-coupling regime. The respective thermal energy densities are therefore in the ratio 2 : χ, explaining the predominance of the latter at large χ. Kinetic and magnetic energies are negligible in comparison to the thermal energy in this mode.
• In the three flow differential modes, there is a clear difference between v n and v c . If they were identical, their respective kinetic energies would be in the ratio χ : 1, with more energy on the neutrals, which is certainly not the case. This situation is unsustainable in the presence of inter-species collisions, and hence produces the extremely rapid decay indicated by the eigenfrequencies.
• The slow modes are dominated by magnetic and neutrals-kinetic energies in the x-z plane, with the neutral and charges velocities almost matching.
• The Alfvén modes are polarized in the y direction only, with equipartition between kinetic and magnetic energies. The neutrals and the charges velocities match closely, and hence the neutrals kinetic energy dominates the charges.
• The fast modes are similar to the slow modes, except that thermal energy plays a much larger role, as is to be expected in predominantly acoustic waves.

Figure 3. Dimensionless velocity differential (logarithmic scale) |v_c − v_n| √(ρ_c/(2E)) associated with each eigenmode (as labelled) for the models at h = 0 km (blue); h = 250 km (yellow); and h = 560 km (black), all with k = k₁ and θ = 30°. The flow differential v_c − v_n is dominated by the three extremely rapidly decaying flow differential modes, with the MHD and isobaric modes retaining only small remnant drifts that vanish as ν_nc → ∞.

Figure 3 shows the distribution of the non-dimensionalized |v_c − v_n| over the ten eigenmodes for each h. It is seen that this is dominated by the flow differential modes, with orders-of-magnitude smaller drift between neutrals and charges on all the other modes. This shows that when these modes are essentially quenched over a few nanoseconds, very little species drift remains, and the wave behaves largely as 1F MHD waves, though with diffusion operating over times of order ν_nc/ω_r². From Figure 2, nearly all energy in the flow differential modes resides in the kinetic energy of the charges flow, explaining why ½ρ_c|v_c − v_n|²/E ≈ 1 for those modes.

Figure 4 surveys the modal energy distributions of all three initialization cases at t = 0 and t = δt in the three low-atmosphere models (δt = 2 ns, 10 ns and 150 ns respectively for h = 0 km, 250 km and 560 km). We see that one flow differential mode accounts for half the energy at t = 0, but this has essentially disappeared by t = δt. In essence, the oscillations can be thought to start from this few-nanoseconds state in which the flow differential eigenmodes have been suppressed.
Projection onto Eigenmodes
The modal fluxes in the MHD waves calculated using Equation (9) accord perfectly with the energies multiplied by their respective ideal MHD group velocities,
∂ω/∂k_z = ±cosθ [∆(a² + c²) + 2a²c² cos²θ − a⁴ − c⁴] / [√2 ∆ √(a² + c² − ∆)]   (slow),
∂ω/∂k_z = ±a   (Alfvén),
∂ω/∂k_z = ±cosθ [∆(a² + c²) − 2a²c² cos²θ + a⁴ + c⁴] / [√2 ∆ √(a² + c² + ∆)]   (fast),    (13)
i.e., F = E ∂ω/∂k, and so need not be presented explicitly. However, this serves as a check on the numerics (a sketch of the group-speed evaluation is given at the end of this section).

An alternative and more realistic scenario is that some driver operates solely on the charges over seconds or minutes. During this period, the near-instantaneous decay of the flow differential modes keeps the neutrals and charges fluid velocities strongly coupled, thereby resulting in energy distributions across the components of X as set out in the first and the last three rows of Figure 2. Across the height range, it is seen that:
• Initiating with the charges-restricted Alfvén wave (yellow) loses half its energy to the flow differential mode, which quickly vanishes, and about a quarter each to the upgoing and downgoing Alfvén modes.
• Slow wave (acoustic on the charges) initiation (brown) also loses half its energy to the flow differential, with most of the remainder apportioned to the isobaric mode. Very little ends up in the travelling waves.
• Fast wave initiation (blue) loses half its energy to the flow differential with around 20 − 25% going to each of the upgoing and downgoing slow waves. A small amount also goes to the two fast waves at h = 560 km. Recalling that the fast wave on the charges is primarily magnetic, due to the large a_c, it was to be expected that it would primarily drive the slow waves (also magnetic) of the coupled plasma.
Overall, Alfvén and slow waves (the two magnetic waves) are quite efficiently excited and are near-symmetric in direction, whilst generation of the acoustically dominated fast wave is very inefficient for all three drivers. Initiating with b_⊥ or b_y alone (not shown), and no velocity, just splits the energy into two equal but oppositely directed slow or Alfvén modes respectively, à la d'Alembert. This was described analytically for Alfvén waves in Section 4. There is no flow differential at any stage, and hence no energy loss, so this initiation is particularly efficient. Half of the initialization energy propagates in each direction.
Although there is quite efficient full-plasma wave generation in this charges-only wave initiation scenario, Figure 4 explains why the net flux is still very small. It is not because negligible Alfvén (or slow or fast) wave is excited. Instead, despite the initial state being an upward (positive z-direction) MHD eigenmode on the charges-only plasma, it projects onto almost equal amplitudes of positively and negatively directed MHD waves in each case. Therefore, when calculating fluxes, these nearly cancel, though each carries up to a quarter of the initial perturbation energy. The initial but fast-disappearing energy in the flow differential modes was never going anywhere anyway, since it had zero ω_r, and in any case took only half the energy with it.
A real wave excitation region will of course be of finite extent. In most circumstances, it will generate near-equal fluxes of upward and downward propagating MHD waves. In the case of initiation on the charges only, resulting waves on the full plasma carry up to 25% of the perturbation energy in each direction for the Alfvén wave, a little less for slow waves initiated by fast waves, and very little for fast waves. Net flux is therefore very small. However, once they emerge above or below the excitation region, their uni-directional Alfvén and slow wave fluxes will be manifest and substantial. It is not correct to say that no significant flux is produced in the Vranjes et al. scenario.
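The ideal-MHD vertical group speeds of Equation (13) are straightforward to tabulate; a sketch of the kind of helper that can serve as the numerics check mentioned above (ours, not the paper's code):

```python
import numpy as np

def group_velocity_z(a, c, theta, branch):
    """Vertical group speed d(omega)/d(k_z) of Equation (13), ideal-MHD limit.
    branch in {"slow", "alfven", "fast"}; returns the upgoing (+) value."""
    if branch == "alfven":
        return a
    D = np.sqrt(a**4 + c**4 - 2 * a**2 * c**2 * np.cos(2 * theta))
    s = -1.0 if branch == "slow" else 1.0      # sign of Delta in the phase speed
    num = D * (a**2 + c**2) - s * (2 * a**2 * c**2 * np.cos(theta)**2 - a**4 - c**4)
    return np.cos(theta) * num / (np.sqrt(2) * D * np.sqrt(a**2 + c**2 + s * D))

# e.g. temperature-minimum values; flux check F_z = E * group_velocity_z(...)
print(group_velocity_z(5.5, 8.2, np.pi / 6, "slow"),
      group_velocity_z(5.5, 8.2, np.pi / 6, "fast"))   # km/s
```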
DISCUSSION
For a specified wavevector k, the linearized two-fluid collisionally coupled MHD equations for a weakly ionized plasma imply a ten-dimensional space of solutions, consisting of the familiar six MHD wave modes (slow, Alfvén and fast, propagating either forward or backward) as well as a very slowly decaying stationary isobaric mode and three extremely rapidly decaying (nanoseconds) flow differential modes. The role of these flow modes is to support large velocity differentials between neutrals and charges. They alone carry this responsibility, as the remaining seven eigenmodes intrinsically support only very small differentials.
With hindsight, it could be no other way. It is well known that collisions discharge any significant difference between v_c and v_n much more rapidly than any other timescale associated with 2F MHD waves, so they must have their own dedicated eigenmodes with eigenfrequencies that have large negative imaginary parts. When ω ≪ ν_nc, the remaining 2F MHD waves differ only slightly from the standard 1F description (ν_nc → ∞, in which v_c = v_n exactly). The asymptotic eigenfrequency formulae presented in Equations (3) describe these processes in detail. For the Alfvén mode specifically, addressed in detail in Section 4, the asymptotic eigenvector corresponding to eigenvalue ω_A is given by Equation (3b), so (v_c − v_n)/|v_c| ∼ i a k_z/ν_nc as ν_nc → ∞, showing indeed that the interspecies drift scales as ν_nc⁻¹ and vanishes in the high-collision limit.
In the cases examined numerically in Section 5, wave initiation in the form of charges-specific MHD waves results in almost exactly one half of the initial energy residing on the flow differential modes (see Figure 4), which disappears immediately. The remaining MHD eigenmodes are all roughly symmetrically present in energy, i.e., the forward and backward directed modes of each species have nearly equal energies. This is not an accident. If it were not so, the remaining net flux would not be small, contrary to the very small velocities implied by the vanishing of the flow differential modes. The result is also consistent with our analytical analysis of Section 4.
As in case (ii) of Section 4, an oscillation excited purely via a magnetic perturbation produces no significant flow differential modes and hence essentially no energy loss. The flux still splits into two oppositely directed parts though, now 50:50. This is a very efficient excitation mechanism.
On the other hand, it is also possible in principle to excite the flow differential modes solely, in which case all energy is immediately lost. However, it is difficult to imagine a process that would do this in practice, as it would involve an initial state consisting of one or more of the flow differential eigenmodes given asymptotically in Equation (3i).
In summary, the ten-dimensional spectral decomposition of the 2F MHD Equations (2) gives considerable insight into how excitation of the charged fluid alone rapidly suppresses interspecies drift via the rapid decay of the flow differential modes. The remaining energy, dependent on the specific initial state, typically finds itself in long-lived MHD waves. This indicates that it is indeed plausible to launch substantial wave flux, especially Alfvén wave flux, upward from a weakly ionized photosphere even if the excitation mechanism only directly accesses the charges.
Of course, if excitation is mechanical and equally drives both species in concert, there was never an issue in the first place.
APPENDIX
A. MATRIX M AND ITS CHARACTERISTIC POLYNOMIAL
The fundamental coefficient matrix
M, written row by row for the state vector X = (u_n, v_n, w_n, ψ_n, u_c, v_c, w_c, ψ_c, α_⊥, α_y), is

  row 1:  (−ν_nc,  0,  0,  −i k_x,  ν_nc,  0,  0,  0,  0,  0)
  row 2:  (0,  −ν_nc,  0,  0,  0,  ν_nc,  0,  0,  0,  0)
  row 3:  (0,  0,  −ν_nc,  −i k_z,  0,  0,  ν_nc,  0,  0,  0)
  row 4:  (−i c²(χ+1)k_x/(χ+2),  0,  −i c²(χ+1)k_z/(χ+2),  0,  0,  0,  0,  0,  0,  0)
  row 5:  (χν_nc,  0,  0,  0,  −χν_nc,  0,  0,  −i k_x,  i a(χ+1)k,  0)
  row 6:  (0,  χν_nc,  0,  0,  0,  −χν_nc,  0,  0,  0,  i a(χ+1)k_z)
  row 7:  (0,  0,  χν_nc,  0,  0,  0,  −χν_nc,  −i k_z,  0,  0)
  row 8:  (0,  0,  0,  0,  −2i c²(χ+1)k_x/(χ+2),  0,  −2i c²(χ+1)k_z/(χ+2),  0,  0,  0)
  row 9:  (0,  0,  0,  0,  i a k,  0,  0,  0,  0,  0)
  row 10: (0,  0,  0,  0,  0,  i a k_z,  0,  0,  0,  0)    (A1)

where k_x = k sinθ and k_z = k cosθ. The top-left 4 × 4 and bottom-right 6 × 6 blocks are the neutrals and charges submatrices respectively, while the off-diagonal blocks contain their collisional couplings.
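A direct numerical construction of M is a useful companion to Equation (A1). The sketch below (illustrative; indices 0-9 correspond to (u_n, v_n, w_n, ψ_n, u_c, v_c, w_c, ψ_c, α_⊥, α_y)) assembles the matrix, and as a consistency check its eigenfrequencies should reproduce the Table 1 wave entries:

```python
import numpy as np

def build_M(k, theta, a, c, chi, nu_nc):
    """Assemble the 10x10 coefficient matrix of Equation (A1)."""
    kx, kz = k * np.sin(theta), k * np.cos(theta)
    s = 1j * c**2 * (chi + 1) / (chi + 2)       # pressure coupling, c_n^2 = c^2(chi+1)/(chi+2)
    M = np.zeros((10, 10), dtype=complex)
    # neutrals: momentum (rows 0-2) and pressure (row 3)
    M[0, [0, 3, 4]] = -nu_nc, -1j * kx, nu_nc
    M[1, [1, 5]] = -nu_nc, nu_nc
    M[2, [2, 3, 6]] = -nu_nc, -1j * kz, nu_nc
    M[3, [0, 2]] = -s * kx, -s * kz
    # charges: momentum (rows 4-6), pressure (row 7) and induction (rows 8-9)
    M[4, [0, 4, 7, 8]] = chi * nu_nc, -chi * nu_nc, -1j * kx, 1j * a * (chi + 1) * k
    M[5, [1, 5, 9]] = chi * nu_nc, -chi * nu_nc, 1j * a * (chi + 1) * kz
    M[6, [2, 6, 7]] = chi * nu_nc, -chi * nu_nc, -1j * kz
    M[7, [4, 6]] = -2 * s * kx, -2 * s * kz
    M[8, 4] = 1j * a * k
    M[9, 5] = 1j * a * kz
    return M

# temperature-minimum model (Table 1), k = k1 = 6.3 rad/km, theta = 30 deg:
# the wave eigenfrequencies should come out near the ~4.4, 4.8 and 8.8 Hz entries
M = build_M(6.3, np.pi / 6, 5.5, 8.2, 10500.0, 1000.0)
print(np.sort(np.abs((1j * np.linalg.eigvals(M)).real)) / (2 * np.pi))
```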
Since the y polarization is decoupled from x-z, M can be broken down into separate third- and seventh-order matrices (it can be expressed in block-diagonal form under an appropriate reordering of rows and columns). Hence, the tenth-order characteristic polynomial (dispersion function) of M may be factored into a cubic

ω²(ω + i(χ + 1)ν_nc) − a²k²(χ + 1) cos²θ (ω + i ν_nc) = 0,
which captures the two Alfvén modes and one flow differential mode (those oscillations polarized in the y-direction), and an equation of the seventh order (see also Eq. (57) of Zaqarashvili et al. 2011) that contains the four magneto-acoustic modes, two flow differential modes and the isobaric mode:
ω[ω⁴ − (a² + c²)k²ω² + a²c²k⁴ cos²θ]
  = ωτ² [ ω⁶(χ + 1)/2 − k²ω⁴(a²(χ + 2) + 3c²)/((χ + 1)(χ + 2))
      + c²k⁴ω² (2(a²(χ + 2) + c²) + a²(χ + 2) cos 2θ)/(χ + 2)²
      − 2a²c⁴k⁶(χ + 1) cos²θ/(χ + 2)² ]
  + iτ [ 2ω⁶/(χ + 1) − k²ω⁴(a²(χ + 2)² + c²(4χ + 5))/((χ + 1)(χ + 2))
      + c²k⁴ω² (2a²(χ + 2) cos 2θ + a²(χ + 2)(χ + 3) + 2c²(χ + 1))/(χ + 2)²
      − 2a²c⁴k⁶(χ + 1) cos²θ/(χ + 2)² ],
where τ = 1/ν_nc is the neutrals-charges collision timescale. In the fully coupled limit τ → 0, this dispersion relation reduces to that of the classic 1F magneto-acoustic modes and the stationary isobaric mode ω = 0. The full tenth-order formulation is retained here for unity of exposition.
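This factorization can be verified numerically by comparing the spectrum of the full M with the union of the spectra of its y-polarized and x-z blocks (a sketch; it assumes the build_M helper from the Appendix A sketch above):

```python
import numpy as np

M = build_M(6.3, np.pi / 6, 5.5, 8.2, 10500.0, 1000.0)   # build_M from the sketch above
idx_y, idx_xz = [1, 5, 9], [0, 2, 3, 4, 6, 7, 8]         # (v_n, v_c, alpha_y) vs the rest
lam_full = np.sort_complex(np.linalg.eigvals(M))
lam_blocks = np.sort_complex(np.concatenate([
    np.linalg.eigvals(M[np.ix_(idx_y, idx_y)]),
    np.linalg.eigvals(M[np.ix_(idx_xz, idx_xz)])]))
print(np.allclose(lam_full, lam_blocks))                 # True: the y block decouples
```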
B. EIGENSYSTEM PERTURBATION ANALYSIS
Although the eigenvalues and eigenvectors of M may be found numerically, the analytic large-ν_nc asymptotic formulae set out in Equations (3) are valuable aids to understanding. The process of developing these formulae is not straightforward though. A matrix method is sketched here, beginning with the splitting M = M₀ + ν_nc M₁. Thus

(M₀ + ν_nc M₁) X = λX.    (B4)
Recall that the eigenfrequencies are ω = i λ.
At large ν_nc, M₁ plays the leading role. It is only rank 3, but is diagonalizable: M₁ = SΛS⁻¹, where the columns of S are the eigenvectors and the diagonal matrix Λ = diag(−(χ + 1), −(χ + 1), −(χ + 1), 0, 0, 0, 0, 0, 0, 0) is made up of the eigenvalues. The non-zero entries correspond to the three flow differential modes. (Alternatively, LU decomposition works just as well, in that it also collects these modes in only the first three rows of the upper triangular U matrix.) Then Equation (B4) may be recast as
(Q + ν_nc Λ) Y = λY,    (B5)
where Q = S⁻¹M₀S and Y = S⁻¹X. The required procedure then differs between the flow differential and the other seven modes.
B.1. Flow Differential Modes
Substituting λ = ν_nc(λ₀ + ν_nc⁻¹λ₁ + ν_nc⁻²λ₂ + ···) and Y = Y₀ + ν_nc⁻¹Y₁ + ν_nc⁻²Y₂ + ··· into Equation (B5) and equating powers of ν_nc, it is found that
ΛY₀ = λ₀Y₀,    (B6)
an eigenvalue equation that determines λ₀ = −(χ + 1) for the three flow differential modes and the corresponding leading-order eigenvectors Y₀. At the next order,
(Λ − λ₀I) Y₁ − λ₁Y₀ = −QY₀,    (B7)
etc. This may be solved for λ₁ and Y₁ by writing it as an 11 × 11 matrix equation for (Y₁, λ₁) when supplemented with a normalization condition. In practice, one component of Y₁ corresponding to a nonzero component of Y₀ is set to zero, keeping that component of Y unchanged under the perturbation. Equations (3i) result on recovering X = SY. The procedure may be repeated to higher order if required.
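In practice, the singular system (B7), supplemented by the normalization, is conveniently solved as one bordered linear system for (Y₁, λ₁); a minimal sketch (the `pin` argument is our illustrative name for the component held fixed):

```python
import numpy as np

def first_order(Q, Lam, lam0, Y0, pin):
    """Solve (Lam - lam0 I) Y1 - lam1 Y0 = -Q Y0 for (Y1, lam1), Equation (B7),
    closing the singular 10x10 system with the normalization Y1[pin] = 0
    for a component `pin` at which Y0 is non-zero."""
    n = Y0.size
    A = np.zeros((n + 1, n + 1), dtype=complex)
    A[:n, :n] = Lam - lam0 * np.eye(n)
    A[:n, n] = -Y0
    A[n, pin] = 1.0                       # normalization row
    b = np.concatenate([-Q @ Y0, [0.0]])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]                # Y1, lam1
```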
No use is made of the remaining seven eigen-solutions, which are better attacked as follows.
B.2. MHD and Isobaric Modes
Defining J = diag(0, 0, 0, 1, 1, 1, 1, 1, 1, 1), it is clear that JΛ = 0, and Equation (B5) may be used to derive

(R + ν_nc⁻¹Q) Y = λ(J + ν_nc⁻¹I) Y,    (B8)
where R = Λ + JQ. With λ = λ₀ + ν_nc⁻¹λ₁ + ν_nc⁻²λ₂ + ···, and Y expanded as before, it follows that
RY₀ = λ₀JY₀.    (B9)
This is a generalized eigenvalue equation from which the leading behaviours of the remaining seven λ and Y may be determined. At the next order,
(R − λ₀J) Y₁ − λ₁JY₀ = (λ₀I − Q) Y₀,    (B10)
which may again be used to solve for λ 1 and Y 1 subject to a normalization. The remainder of Equations (3) result.
C. ENERGY AND FLUX IN MATRIX FORM
In complex matrix form, we may write

E = ρ_n X† ΦX,    (C11)

where the dagger indicates the conjugate transpose and

Φ = ½ diag(1, 1, 1, (χ + 2)/[c²(χ + 1)], 1/χ, 1/χ, 1/χ, (χ + 2)/[2c²χ(χ + 1)], (χ + 1)/χ, (χ + 1)/χ).    (C12)
In terms of the eigenvector decomposition and coefficient vector C the energy density is
E = ρ_n C† Ψ(t) C,    (C13)
where Ψ(t) = e^{iΩ*t} P† Φ P e^{−iΩt}, and Ω = diag[ω₁, …, ω₁₀]. By construction, Ψ is Hermitian. In principle, since Ψ is not diagonal, there is cross-talk between the modes. However, in practice, the off-diagonal contributions to E appear in purely imaginary complex-conjugate pairs which do not contribute at all to the overall energy as they cancel, and in any case they are entirely negligible in magnitude. For that reason, we may attribute all energy to the ten individual eigenmodes and not to interactions between them.
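Numerically, Φ and Ψ(t) follow directly from Equations (C12) and (C13); a sketch (using ω_m = iλ_m, so that e^{−iΩt} = diag(e^{λ_m t})):

```python
import numpy as np

def make_Phi(c, chi):
    """Diagonal energy weights of Equation (C12)."""
    return 0.5 * np.diag([1, 1, 1, (chi + 2) / (c**2 * (chi + 1)),
                          1 / chi, 1 / chi, 1 / chi,
                          (chi + 2) / (2 * c**2 * chi * (chi + 1)),
                          (chi + 1) / chi, (chi + 1) / chi])

def make_Psi(t, lam, P, Phi):
    """Hermitian form Psi(t) = e^{i Omega* t} P^dag Phi P e^{-i Omega t}, Eq. (C13)."""
    Et = np.diag(np.exp(lam * t))        # e^{-i Omega t}, since omega_m = i lam_m
    return Et.conj() @ P.conj().T @ Phi @ P @ Et

# then E(t) = rho_n * C.conj() @ make_Psi(t, lam, P, Phi) @ C, cf. Equation (C13)
```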
The z-component of flux F z for example may also be written as a quadratic form,
F_z = ½ ρ_n X† ΥX,    (C14)
where the real symmetric (and therefore Hermitian) matrix Υ is obtained by writing the z-component of the flux (9) as a quadratic form in X; its only non-zero entries couple w_n with ψ_n, w_c with ψ_c, u_c with α_⊥, and v_c with α_y (C15). In terms of C,

F_z = ½ ρ_n C† Ξ C,    (C16)
where Ξ = e^{iΩ*t} P† Υ P e^{−iΩt}. Unlike the energy density, the cross-talk in flux between eigenmodes can be substantial in magnitude. However, cross-talk between MHD modes averages to zero over time, as does the interaction between a wave mode and the isobaric mode. The interaction with a flow differential mode of course quickly vanishes. So only the diagonal entries, attributed to the slow, Alfvén and fast waves, contribute meaningfully.
The author thanks Elena Khomenko and Martin Gómez Míguez for their very useful comments and suggestions on an initial draft of this paper.
Figure 4. Energy in each of the eigenmodes (negatively directed fast, negatively directed Alfvén, …, positively directed Alfvén, positively directed fast, as labelled) at times t = 0 (left) and t = δt > 0 (right) for the models h = 0 (top row, δt = 2 ns), h = 250 km (middle row, δt = 10 ns) and h = 560 km (bottom, δt = 150 ns), with initiation by Alfvén (yellow), slow (brown) and fast (blue) positively directed eigenmodes of the charged fluid only. Total initial energies are normalized to 1. The three flow differential modes have almost disappeared in just nanoseconds, whilst the others have not changed.
Table 1. Atmospheric Models and Wave Eigenfrequencies

                        h (km)   c (km s⁻¹)   a (km s⁻¹)      χ      ν_nc (s⁻¹)
photospheric base          0        10           0.63       1400     7 × 10⁵
upper photosphere        250         8.6         1.4       11000     1600
temperature minimum      560         8.2         5.5       10500     1000

                      flow τ (s)    isobaric τ (s)   slow: Freq (Hz), τ (s)   Alfvén: Freq (Hz), τ (s)   fast: Freq (Hz), τ (s)
photospheric base     1.0 × 10⁻⁹    1.2 × 10⁵        0.5453, 9 × 10⁴          0.5456, 1.2 × 10⁵          10, 6 × 10⁷
upper photosphere     5.7 × 10⁻⁹    3.0 × 10⁴        1.208, 416               1.212, 551                 8.6, 6 × 10⁴
temperature minimum   9.5 × 10⁻⁸    2.0 × 10³        4.42, 2.0                4.76, 2.2                  8.8, 9.2

Note — Representative atmospheric parameters at the photospheric base h = 0, upper photosphere h = 250 km and temperature minimum h = 560 km, drawn roughly from Figures 1 and 2 of Cally & Gómez-Míguez (2023), adapted for a hydrogen atmosphere from Model C7 of Avrett & Loeser (2008) with 100 G base magnetic field and a 600 km magnetic scale-height fall-off. In the second block, frequencies Re ω/2π (Hz) and decay times τ = 1/|Im ω| of the slow, Alfvén and fast waves in these atmospheres with k = k₁ (1 km wavelength) and θ = 30° are listed, as well as the decay times of the |v_n − v_c| neutrals-charges flow differential and the isobaric mode.
REFERENCES

Alharbi, A., Ballai, I., Fedun, V., & Verth, G. 2022, MNRAS, 511, 5274, doi: 10.1093/mnras/stac444
Avrett, E. H., & Loeser, R. 2008, ApJS, 175, 229, doi: 10.1086/523671
Cally, P. S., & Gómez-Míguez, M. M. 2023, ApJ, 946, 108, doi: 10.3847/1538-4357/acbb63
Cranmer, S. R., & van Ballegooijen, A. A. 2005, ApJS, 156, 265, doi: 10.1086/426507
de Pontieu, B., & Haerendel, G. 1998, A&A, 338, 729
Eckart, C. 1960, Hydrodynamics of Oceans and Atmospheres (Oxford: Pergamon)
Khomenko, E., Collados, M., Díaz, A., & Vitas, N. 2014, Physics of Plasmas, 21, 092901, doi: 10.1063/1.4894106
McIntosh, S. W., de Pontieu, B., Carlsson, M., et al. 2011, Nature, 475, 477, doi: 10.1038/nature10235
Popescu Braileanu, B., Lukin, V. S., Khomenko, E., & de Vicente, Á. 2019, A&A, 630, A79, doi: 10.1051/0004-6361/201935844
Soler, R., Carbonell, M., & Ballester, J. L. 2013a, ApJS, 209, 16, doi: 10.1088/0067-0049/209/1/16
Soler, R., Carbonell, M., Ballester, J. L., & Terradas, J. 2013b, ApJ, 767, 171, doi: 10.1088/0004-637X/767/2/171
Tsap, Y. T., Stepanov, A. V., & Kopylova, Y. G. 2011, SoPh, 270, 205, doi: 10.1007/s11207-011-9727-4
Vranjes, J., Poedts, S., Pandey, B. P., & de Pontieu, B. 2008, A&A, 478, 553, doi: 10.1051/0004-6361:20078274
Zaqarashvili, T. V., Khodachenko, M. L., & Rucker, H. O. 2011, A&A, 529, A82, doi: 10.1051/0004-6361/201016326
| [] |
[
"Electronic Supporting Information for Electric-field frictional effects in confined zwitterionic molecules",
"Electric-field frictional effects in confined zwitterionic molecules"
] | [
"Melisa M Gianetti ",
"Roberto Guerra ",
"Andrea Vanossi ",
"Michael Urbakh ",
"Nicola Manini "
] | [
"†Dipartimento di Fisica\n‡Center for Complexity and Biosystems\nDepartment of Physics\nUniversità degli Studi di Milano\nVia Celoria 1620133MilanoItaly",
"University of Milan\nvia Celoria 1620133MilanoItaly",
"§International School for Advanced Studies (SISSA)\n∥Department of Physical Chemistry\nSchool of Chemistry\nThe Raymond and Beverly Sackler Faculty of Exact Sciences and The Sackler Center for Computational Molecular and Materials Science\n¶CNR-IOM\nConsiglio Nazionale delle Ricerche -Istituto Officina dei Materiali, c/o SISSA\nVia Bonomea 265, Via Bonomea 26534136, 34136Trieste, TriesteItaly, Italy",
"Tel Aviv University\n6997801Tel AvivIsrael",
"§Department of Physical Chemistry\nSchool of Chemistry\n¶CNR-IOM, Consiglio Nazionale delle Ricerche -Istituto Officina dei Materiali and International School for Advanced Studies (SISSA)\nUniversity of Milan\nvia Celoria 16, Via Bonomea 26520133, 34136Milano, TriesteItaly, Italy",
"The Raymond and Beverly Sackler Faculty of Exact Sciences and The Sackler Center for Computational Molecular and Materials Science\nTel Aviv University\n6997801Tel AvivIsrael"
] | [
"J. Phys. Chem. C"
] | We theoretically explore the effect of a transverse electric field on the frictional response of a bi-layer of packed zwitterionic molecules. The dipole-moment reorientation promoted by the electric field can lead to either stick-slip or smooth sliding dynamics, with average shear stress values varying over a wide range. A structure-property relation is revealed by investigating the array of molecules and their mutual orientation and interlocking. | null | [
"https://export.arxiv.org/pdf/2301.11861v2.pdf"
] | 259,108,841 | 2301.11861 | d5892abb66b480b9bbb73c35499b0f7176940154 |
Electronic Supporting Information for Electric-field frictional effects in confined zwitterionic molecules
8 Jun 2023
Melisa M Gianetti
†Dipartimento di Fisica
‡Center for Complexity and Biosystems
Department of Physics
Università degli Studi di Milano
Via Celoria 1620133MilanoItaly
University of Milan
via Celoria 1620133MilanoItaly
§International School for Advanced Studies (SISSA)
∥Department of Physical Chemistry
School of Chemistry
The Raymond and Beverly Sackler Faculty of Exact Sciences and The Sackler Center for Computational Molecular and Materials Science
¶CNR-IOM
Consiglio Nazionale delle Ricerche -Istituto Officina dei Materiali, c/o SISSA
Via Bonomea 265, Via Bonomea 26534136, 34136Trieste, TriesteItaly, Italy
Tel Aviv University
6997801Tel AvivIsrael
†Dipartimento di Fisica
‡Center for Complexity and Biosystems
Department of Physics
Università degli Studi di Milano
Via Celoria 1620133MilanoItaly
§Department of Physical Chemistry
School of Chemistry
¶CNR-IOM, Consiglio Nazionale delle Ricerche -Istituto Officina dei Materiali and International School for Advanced Studies (SISSA)
University of Milan
via Celoria 16, Via Bonomea 26520133, 34136Milano, TriesteItaly, Italy
The Raymond and Beverly Sackler Faculty of Exact Sciences and The Sackler Center for Computational Molecular and Materials Science
Tel Aviv University
6997801Tel AvivIsrael
Roberto Guerra
Andrea Vanossi
Michael Urbakh
Nicola Manini
Electric-field frictional effects in confined zwitterionic molecules
J. Phys. Chem. C
2022, S8. 8 Jun 2023. ⊥,∥Current affiliation: Institutt for maskinteknikk og produksjon, NTNU, Richard Birkelands vei 2B, 7034 Trondheim, Norway.
We theoretically explore the effect of a transverse electric field on the frictional response of a bi-layer of packed zwitterionic molecules. The dipole-moment reorientation promoted by the electric field can lead to either stick-slip or smooth sliding dynamics, with average shear stress values varying over a wide range. A structure-property relation is revealed by investigating the array of molecules and their mutual orientation and interlocking. Moreover, the thermal friction enhancement previously observed in these molecules is shown to be suppressed by the electric field, recovering the expected thermolubricity at large-enough fields. The same holds for other basic tribological quantities, such as the external load, which can influence friction in opposite ways depending on the strength of the applied electric field. Our findings open a route for the reversible control of friction forces via electric polarization of the sliding surface.
Figure S3 caption fragment: u_I = U_I/A is the internal potential energy; u_EF = U_EF/A is the potential energy for the interaction with the applied electric field; u_S = U_S/A is the potential energy of the pulling spring. In the analysis of the potential-energy contributions (panels h, k, n), thermal noise affects the data heavily: to discern readable energy signals we apply a Gaussian smoothing with width of 0.05 nm. To mitigate the thermal noise in the SUP velocity, we evaluate its average value by means of finite differences of the SUP position over the time corresponding to the driving stage advancing by 0.05 nm.

Figure S4 caption fragment: Thermal noise has been mitigated as in Figure S3. The intense electric field keeps the SUP layer of chains flat to the point of remaining nearly "frozen", thus preventing the entanglement of SUB chains. Friction traces at the lowest velocities do show some hints of stick points (Figure S4f, i), and eventually at low speed friction seems to depend only weakly on the sliding velocity (Figure S4a). Stick-slip features are visible in the SUP velocity and total potential energy, while they are harder to detect in the internal contribution U_I. Even at T = 300 K, signs of the unexpected slight decrease of U_I during stick can be detected near x_stage ≃ 82 nm and x_stage ≃ 90 nm, similar to that discussed for T = 0 in the main text.
µ_d = −0.08 for E = 1 V · nm⁻¹; µ_d = 0.09 for E = 5 V · nm⁻¹; µ_d = 0.12 for E = 10 V · nm⁻¹.
SI Movies
Each of the SI movies reports 3 ns (corresponding to a 15 nm advancement of the stage) of an MD simulation. In simulation time, the frame rate is 1 frame every 20 ps. In running time, the frame rate is 10 frames per second. For clarity, as in Figure S1, the movies only include a 5 nm y-thick slice of the simulation cell (whose entire y-side is 14.41 nm). One of the particles of the SUP layer is drawn with a bigger size and lighter color to improve the visibility of the advancement of this layer.
• Movie1.mp4: 3 ns of a E = 2 V · nm −1 , v stage = 5 m · s −1 , T = 300 K simulation, also reported in the snapshots of Figure 3 of the main text;
• Movie2.mp4: 3 ns of a E = 5 V · nm −1 , v stage = 5 m · s −1 , T = 300 K simulation, also reported in the snapshots of Figure 4b,c,e,f of the main text;
• Movie3.mp4: 3 ns of a E = 10 V · nm −1 , v stage = 5 m · s −1 , T = 300 K simulation, also reported in the snapshots shown in Figure S1 and in the friction trace shown in Figure S4d,l of the present SI.
S7
Introduction
Given the substantial interest for applications, the possibility of controlling friction and mechanical response without direct intervention on the often inaccessible contacting surfaces by means of external fields has been investigated extensively in recent years. [1][2][3][4] In particular, the tuning role of applied electric fields in confined geometries has been widely investigated in triboelectrochemistry experiments. [5][6][7][8][9][10][11][12][13] In most of these experiments ions move in a liquid solution driven by the applied field, covering and effectively modifying the sliding surfaces, thus inducing changes in friction. [14][15][16][17][18][19][20][21][22] An alternative approach is based on the ability of the electric field to reorient macromolecules, thus changing their conformation in aqueous solution [23][24][25] and dry environments, [26][27][28] with potentially dramatic effects on friction. Within this alternative scheme, here we consider neutral chain molecules tethered to flat substrate surfaces. These chains represent zwitterionic molecules, which have found applications for colloid stabilization, regulation in wetting and adhesion, the creation of protective coatings and many others. 23,[29][30][31] Zwitterionic molecules can also be reoriented by using light 32 and in some particular cases the orientation of these large molecules, or some parts of them, can be determined using X-ray spectroscopy. 33 The characterization of the structure of polymer layers deposited on surfaces is always crucial for tribology. 34 The surface force apparatus (SFA) provides a highly sensitive way to measure the frictional shear stress between atomically flat surfaces with molecules deposited on them, while applying a controlled normal load. [35][36][37][38][39] While such an experimental setup does measure the system rheological and dissipative response in terms of crucial, yet averaged, physical quantities, 40 a viable exploitation of the electrotunable approach requires casting light on the elemental mechanisms and molecular rearrangements occurring at the sheared interface. Molecular dynamics (MD) simulations, as a sort of controlled computational "experiment", have proved to be extremely useful in investigating the atomistic details of frictional processes at sliding interfaces. [41][42][43][44][45][46] These strategies make it possible to avoid interpretative pitfalls arising from indirect or ex situ characterization of contact surfaces.
This work explores this idea, considering a model where the electric-field-controlled geometric rearrangement of zwitterionic molecular segments leads to modifications of the sliding interface properties, associated with distinct frictional regimes.
Model and Simulations
Figure 1. Red, gray, blue, green, cyan spheres represent cations, neutral residues, anions and the two uncharged particles standing for a glycerol group and the hydrophobic chain forming the inner part of the vesicle, respectively. For clarity, SUP chains are colored lighter than SUB chains. Magenta and purple spheres represent the static SUB and the sliding SUP rigid layers, respectively. The yellow sphere represents the pulling stage advancing at constant speed and driving the SUP layer through a spring.
We name the sliding layer SUP and the static one SUB, see Figure 1. The two end uncharged residues of each molecule are 'planted' in these rigid layers, which keep them in place, while allowing them some elastic freedom. To account for screening by the surrounding water molecules, the charges on the ionic residues are reduced to ±25% of an elementary charge. We adopt LAMMPS 53,54 as the simulation platform and we control the simulation temperature by means of a Langevin thermostat with a damping rate γ = 1 ps⁻¹ applied to all particles forming the molecules. 55 This thermostat is set to act only along the transverse coordinates y and z, in order to prevent any spurious thermostat-originated frictional damping along the most relevant sliding direction x. 56,57 We start off with a fresh initial configuration consisting of a periodic, but arbitrary, chain array, as described in Ref. 50. To prepare a thermodynamically meaningful initial point, we set E = 5 V · nm⁻¹ and a constant sliding velocity of 5 m · s⁻¹. Under these conditions, we anneal the system by ramping T up from 0 to 500 K in 1 ns, we keep it at 500 K for 1 ns, we then ramp the temperature down from 500 to 0 K in 1 ns, and finally continue sliding at 0 K for 1 ns. The final configuration of this simulation is the starting point for multiple simulations with different values of E. Before each run, we ramp the electric field steadily up or down to the target field value E in 100 ps.
Starting from the end of the 5 V · nm⁻¹ simulation at T = 0, we ramp up the temperature to 300 K in 2 ns and then maintain it constant for another 0.2 ns to produce the initial configuration for all the T = 300 K simulations. Different E values are obtained with the same ramps as for the T = 0 case. In a typical production run, we pull the SUP layer through a spring with k = 1 eV · nm⁻² ≃ 0.16 N · m⁻¹, and let the stage advance by 100 nm, usually at v_stage = 5 m · s⁻¹.

At increasing field, the cations interact more effectively with the SUP chains: more and more of their head cations reach energetically convenient "hollow sites" formed by adjacent anions in the SUP layer (see Figure 3c). In the slip events this interlocking is lost (see Figure 3b,d and ESI Movie1 †). This increased interlocking tends to favor stick-slip dynamics, as evident from the data of Figure 2.
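In post-processing terms, the friction trace follows from the spring elongation; a minimal sketch (ours, not the authors' analysis scripts; x_stage and x_sup are hypothetical trajectory arrays):

```python
import numpy as np

EV_NM3_TO_MPA = 160.2   # 1 eV/nm^3 = 160.2 MPa

def shear_stress_trace(x_stage, x_sup, k_spring=1.0, area=1.0):
    """Instantaneous shear stress S(t) = k (x_stage - x_sup) / A.
    Positions in nm, k in eV/nm^2, area in nm^2 -> S in MPa."""
    elong = np.asarray(x_stage) - np.asarray(x_sup)
    return EV_NM3_TO_MPA * k_spring * elong / area
```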
For even larger fields E ≥ 5 V · nm −1 the SUB layer rids itself of its steric hindrance and, as a result, the chains in this layer acquire a substantially disordered and fluid-like arrangement (see Electronic Supplementary Information (ESI) Figure S1b and Movie3 †).
The SUB chains' lift-up is also illustrated in the snapshots of Figures 1 and 4, in ESI Movie2 †, and more quantitatively by the thickening of the molecular layer reported in Figure 2b. Given the nontrivial temperature effects that were observed in this model in the absence of the transverse electric field, 50 it is useful to compare the presented results with those of a similar investigation carried out at T = 0 K (green data in Figure 2). Up to E ≃ 4 V · nm⁻¹, the system exhibits essentially smooth-sliding dynamics with weak stick points. For E ≥ 5 V · nm⁻¹, friction increases rapidly because the SUB chains further stand up (see Figure 5), forming a looser and thicker layer, and acquiring such a substantial lateral freedom that a significant fraction of SUB cations succeeds in reaching the energetically favorable interaction sites surrounded by the anions of the flat SUP layer (as in Figure 3a).
The resulting extra SUB-SUP interaction enhances the effective corrugation, and therefore also friction at high field, see Figure 2a.
Effects of sliding velocity
As the sliding speed is reduced, tribological systems often tend to transition from smooth sliding to stick-slip dynamics. The current model is expected to follow this tendency. To To better characterize the origin of these distinct frictional regimes, we analyze the energetics of the system, considering the three contributions to the total potential energy: the internal potential energy (U I ) including the bonding, nonbonding, and Coulomb interactions within and among the molecules, the energy due to the interaction of the molecular charges with the applied electric field (U EF ) and that associated to the elongation of the pulling spring (U S ), see Figure 6h,k,n, where combinations of these energies are reported per unit area, and shifted by irrelevant arbitrary constants.
Remarkably, the internal energy U I tends to decrease during the stick phase, showing that the molecular layers actually relax while the spring exerts a stronger and stronger pulling force. Including the electrical energy, the potential energy slowly increases during stick, reflecting the increasing tilt of the SUB chains. However, this progressive increase and the corresponding drop at slip are quite small, less than 0.5 mJ · m −2 , compared to the changes in the total potential energy, which includes the spring contribution too. The smallness of these energy steps indicates an extremely small deformation and forward displacement of the SUP layer during the stick phase, with the slip event occurring as a sudden collective collapse of the provisionally-formed interlayer bonds. The total potential energy released in the downward jumps, of the order of 1 mJ · m −2 , divided by the typical slip jump of ≃ 1 nm, yields a typical stress in the 1 MPa region, quite consistent with the observed shear-stress peaks at the end of the stick intervals.
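The order-of-magnitude estimate in the previous paragraph can be verified directly; this is a back-of-the-envelope check, not part of the simulation protocol.

delta_u = 1e-3    # potential energy released per slip and unit area, J/m^2 (~1 mJ/m^2)
slip = 1e-9       # typical slip jump, m (~1 nm)
stress = delta_u / slip          # J/m^3 = Pa
print(stress / 1e6, "MPa")       # -> 1.0 MPa, matching the observed shear-stress peaks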
We verified (see ESI Figure S3 †) that essentially the same observations regarding the effect of varying the driving velocity apply at room temperature, for a field in the same range, E = 5 V · nm −1 . As expected, the dynamics does not change at slower sliding velocities, and the highest simulated velocity is not large enough to switch the dynamics to smooth sliding.
Finally, for a very large field E = 10 V · nm −1 , the dynamics is neither a clear stick-slip nor a smooth sliding. A speed reduction (as far as accessible in simulations) does not seem to lead to a clearer picture; see ESI Figure S4 † for details.
3.3 Effects of the applied load
The applied load L was kept at 10 MPa in all previous simulations. To explore the effect of varying load, we raise and then decrease it in small steps in the simulations. As the friction traces show no significant memory effect, we report the resulting data as an average over both simulations executed under the same load. We evaluate friction for a low (1 V · nm −1 ), intermediate (5 V · nm −1 ), and high (10 V · nm −1 ) electric field.
As shown in Figure 7, the outcome of these simulations indicates that the load dependence of friction changes qualitatively, depending on the electric field. In the absence of electric field, Ref. 50 reported an essentially load-independent frictional shear stress. Here, for relatively weak field, we observe that friction decreases with load (differential friction coefficient µ ≡ dS/dL ≃ −0.08). For intermediate field, at room temperature the model exhibits a nonmonotonic dependence of friction upon loading, with friction initially increasing (µ ≃ 0.09), and then rapidly decreasing at higher load. At the highest tested field, friction increases faster (µ ≃ 0.12) and the friction-growing range extends to even larger loads, although eventually friction reaches a maximum around L ≃ 80 MPa, and then it starts to decrease. This nontrivial behavior emerges despite the expected monotonic compression of the sheared layer, shown in Figure 7d,e,f.
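The differential friction coefficients quoted above are slopes of linear fits to the small-load data. A minimal sketch of such a fit follows; the load/stress arrays passed to the function are hypothetical, and numpy.polyfit is used as a generic least-squares fit.

import numpy as np

def differential_friction(load_MPa, stress_MPa, load_max=20.0):
    # mu = dS/dL from a linear fit restricted to small loads (here up to 20 MPa)
    load = np.asarray(load_MPa)
    stress = np.asarray(stress_MPa)
    mask = load <= load_max
    mu, _intercept = np.polyfit(load[mask], stress[mask], 1)
    return mu

# e.g. differential_friction([1, 5, 10, 20, 40], [10.2, 9.9, 9.5, 8.7, 8.1])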
The mechanism for the friction change with load is related to the changes in the number of interpenetrating chains of opposite layers. We quantify the degree of interlocking by evaluating the "hooking fraction" h, defined as the fractional number of chains whose cation crosses the average level of the cations of the opposite layer. 50 For low field, the increasing load promotes steric interactions among chains in the same layer, resulting in a flatter configuration with suppressed hooking (see Figure S5a †) and lower friction. An intermediate field pushes the SUB chains to a more vertical position; increasing load then promotes interlocking, and thus increased friction, up to 20 MPa. When the SUB chains bend substantially under the effect of higher loads, they acquire flatter layer configurations, and further raising L suppresses interlocking (see Figure S5 †), and therefore friction, as observed in Figure 7b. For large field, the interlocking is negligible (less than 1%) for all loads, since the SUB chains adopt a substantially disordered configuration, as previously noted. Increasing load brings opposite chains closer, rapidly promoting stronger Coulombic and steric interactions, so that friction increases. A sketch of the hooking-fraction computation is given below.
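Following the definition above, the hooking fraction can be computed from the cation heights alone. This is a sketch under the stated definition; the coordinate arrays and the exact counting convention are assumptions.

import numpy as np

def hooking_fraction(z_sub, z_sup):
    # z_sub, z_sup: z-coordinates of the SUB and SUP head cations
    # a chain "hooks" when its cation crosses the mean cation level of the
    # opposite layer; h is the fraction of hooked chains over all chains
    hooked = np.sum(z_sub > np.mean(z_sup)) + np.sum(z_sup < np.mean(z_sub))
    return hooked / (z_sub.size + z_sup.size)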
At T = 0 K, load has a generally moderate effect on friction, with an overall tendency to suppress it. The reason is that the SUB layer is pressed down, with decreased angular fluctuations of the individual chains.
4 Conclusions
In mainstream triboelectrochemical approaches, the electric field introduces a bias to the diffusive ionic motion, and even moderate electric fields can generate a significant electric-energy difference for the ions at different locations, easily exceeding the thermal energy k B T, thus resulting in significant ionic displacements, and potentially important surface alterations with effects on friction which can range from modest to dramatic. The model investigated here involves no free ions: all frictional changes are associated with the reorientation of the flexible dipolar section of zwitterionic molecules pinned to a surface. As a result, the displacement of charged residues is limited by the molecular size and, accordingly, for a given field strength, the reorientation energetics is comparably more modest than can be achieved by mobile ions. Moreover, this electric-coupling energy has to compete not only with thermal disordering effects of the order of k B T, but also with the elastic molecular deformation energy. As a consequence, quite strong fields are generally required for sizable effects. In the present model we consider fields up to 10 V · nm −1 , which determine substantial structural alterations and remarkable effects on friction. Such field values can however be quite challenging to achieve in the lab. With easily accessible fields much smaller than 1 V · nm −1 , we observe negligible structural and tribologic effects. The dramatic tribologic effects observed very recently in phospholipid assemblies 58 are likely associated with dramatic field-induced layer reconfigurations, such as the electroporation mechanism identified in that work and in Ref. 59, which are clearly outside the globally rigid-layer model studied here.
The main outcome of this investigation is the non-monotonic variation of friction as a function of the electric field at room temperature. The electric field promotes an asymmetric deformation of the SUP and SUB zwitterionic layers. With increasing field the upper-layer chains tend to arrange in a compact ordered and flat configuration while the lower-layer chains tend to stand up and approach the vertical direction of the electric field. Thermal fluctuations promote interlayer chain interlocking and the increased freedom to move of the standing-up SUB chains allows a substantial fraction of them to reach binding sites in the SUP layer. This is reflected by an increase in friction for E ≤ 5.5 V · nm −1 . Higher fields lead to an extra-flat, compressed, and rigid SUP monolayer that suppresses interlocking and thus friction.
By testing the dependence on the sliding velocity for specific values of T and E, we find that, for moderate fields, stick-slip dynamics is observed at most velocities and temperatures.
At low electric field, friction decreases with load, owing to the suppression of interlocking and the increasing interaction between chains of the same layer. For intermediate electric fields the load dependence is not monotonic, and for large fields friction increases with load, owing to the promotion of interactions between chains of opposite layers.
While the specific quantitative detail of these results is to be traced to the peculiar electro-mechanical and geometric properties of the investigated model, and to the limitations intrinsic to modeling, we can generally conclude that the friction of dipolar compounds can be altered, even substantially, by sufficiently large external DC electric fields. We argue that confined bi-layers can switch from smooth-sliding to stick-slip regimes in a complex and nontrivial, thus fascinating, way. Compared to the slow thermal diffusion of free ions, dipole reorientation has the advantage of a faster response, mostly limited by molecular inertia: this observation suggests that the zwitterionic bi-layer approach may lead to a sizable response in the AC regime, especially at frequencies exceeding ∼ 10 kHz, where ion diffusion usually becomes negligible.
Author Contributions
MMG, RG and NM performed the numerical investigation and the data analysis. All authors discussed and collaborated to the formal analysis. All authors participated in the writing and revisions of this paper.
Conflicts of interest
There are no conflicts to declare.
Acknowledgement
The authors acknowledge support from the grant PRIN2017 UTFROM of the Italian Ministry of University and Research and from the University of Milan through the APC initiative. A.V. also acknowledges support by ERC Advanced Grant ULTRADISS, contract No. 8344023. M.U. acknowledges the financial support of the Israel Science Foundation, Grant 1141/18. The authors acknowledge useful discussions with Di Jin, Jacob Klein, Erio Tosatti and Yu Zhang.
Supporting Information Available
Snapshot of a simulation illustrating the fluid-like arrangement at high fields at room temperature (Figure S1); average shear stress as a function of v stage and the energetics of the system at T = 0 K and E = 4 V · nm −1 (Figure S2); average shear stress as a function of v stage and the energetics of the system at T = 300 K and E = 5 V · nm −1 (Figure S3); average shear stress as a function of v stage and the energetics of the system at T = 300 K and E = 10 V · nm −1 (Figure S4); average shear stress and hooking fraction as a function of load at room temperature for E = 1, 5 and 10 V · nm −1 (Figure S5).
Figure S1: (a) Side view and (b) top view of a 4 nm y-thick slice of a snapshot of a simulation carried out with E = 10 V · nm −1 , at T = 300 K. For better visibility, the top view includes the SUB chains only, i.e. the region inside the orange rectangle in panel (a).

Figure S2: (a) Average shear stress as a function of v stage for E = 4 V · nm −1 , T = 0 K, and L = 10 MPa. (b-e) Shear-stress traces. (f,i,l) Instantaneous shear stress, (g,j,m) velocity of the SUP layer, and (h,k,n) per-unit-area potential-energy contributions as a function of x stage for the corresponding velocities in panel (a).

Figure S3: (a) Average shear stress as a function of v stage for E = 5 V · nm −1 , T = 300 K, and L = 10 MPa. (b-e) Shear-stress traces for the velocities reported in panel (a). (f,i,l) Shear stress, (g,j,m) instantaneous velocity of the SUP layer, and (h,k,n) per-unit-area potential-energy contributions as a function of x stage for the corresponding velocities in panel (a).

Figure S4: (a) Average shear stress as a function of v stage for E = 10 V · nm −1 , T = 300 K, and L = 10 MPa. (b-e) Shear traces for a few of the velocities reported in panel (a). (f,i,l) Shear stress, (g,j,m) instantaneous velocity of the SUP layer, and (h,k,n) per-unit-area potential-energy contributions as a function of x stage for the corresponding velocities in panel (a).

Figure S5: (a-c) Average shear stress (same as Fig. 7 of the article) and (d-f) average hooking fraction h (our strategy for quantifying the degree of interlocking, defined in the Supporting Information of Ref. 50) as a function of the applied load L at T = 300 K. Blue lines are linear fits of the small-load (up to 20 MPa) data. The resulting differential friction coefficients are: µ = −0.08 for E = 1 V · nm −1 ; µ = 0.09 for E = 5 V · nm −1 ; µ = 0.12 for E = 10 V · nm −1 .
Figure 1: Side view of a snapshot of the contacting zwitterionic molecules at T = 0 K, with an applied electric field E = 5 V · nm −1 and load L = 10 MPa.

We extend a recently-developed model inspired by SFA experiments with confined self-assembled vesicles of zwitterionic molecules. 47-49 The model was introduced and described in detail in Ref. 50. Briefly, the organic polymers composing the vesicle walls are represented as chains of point-like particles. Each of these particles is a coarse-grained representation of the dipalmitoylphosphatidylcholine molecule, as detailed in Figure 1 of Ref. 50, and as investigated in SFA experiments very recently with similar molecules. 51 As the protagonists of interfacial friction are the zwitterionic head groups, the molecular model describes them in greater detail. Each chain of 7 point particles consists of: one cation, followed by three uncharged residues, by an anion, and finally by two uncharged particles representing a glycerol group and a long alkyl tail (see Figure 1). In real-life self-assembled vesicles, these hydrophobic tails segregate away from the aqueous solution, provide a directed support to the zwitterionic segments, and transmit the external load and shear forces produced by the SFA probe. In the model, the structural hydrophobic parts of the vesicles are represented by parallel rigid layers, which can slide relative to each other, in a supercell of area A ≃ 120 nm 2 .

Compared to Ref. 50, the novelty of the present work is the introduction of an electric field directed along the ẑ axis, i.e. perpendicular to the sliding, which affects the orientation of the zwitterionic molecules and their response to shear. In SFA experiments, such an electric field arises when the top surface is positively charged and the bottom one is negatively charged. Because the total molecular charge vanishes, so does the net electric force acting on each molecule. On the other hand, the zwitterionic section of the molecule carries an electric dipole given by the product of the residue charge, 0.25 elementary charges, times their separation, which equals 0.64 nm at equilibrium. Thus this dipole moment is approximately d = 2.6 × 10 −29 C · m = 7.7 Debye. The electric field generates a torque acting on each molecular dipole. This torque tends to deflect the zwitterionic chain orientation away from the equilibrium angle θ = 111 • 50,52 implemented in the adopted molecular model, toward the vertical direction ẑ. The torque is maximum when the zwitterionic chain lies flat in the xy plane (θ = 90 • ), and vanishes when the chain stands upright in the ẑ direction (θ = 180 • ), which represents the stable equilibrium point for the electric torque produced by the upward electric field. For the direction of the electric field considered here, only the zwitterionic chains planted in the SUB layer can reach this "standup" orientation in the field, because for the molecules planted in the SUP layer any upward bending is hindered by the rigid SUP layer itself (see Figure 1). As a result, an increasing strength of the electric field favors the tendency of the SUB-related zwitterionic chains to stand up, while it tends to push the upper layer of zwitterionic chains flatter and flatter in the xy plane. The electric-energy gain of a dipole rotating from the horizontal to the standup configuration in a E = 1 V · nm −1 field amounts to E · d = 0.16 eV. For comparison, the angular-spring energy cost to rotate from the rest position at 111 • to vertical (180 • ) amounts to 1.45 eV.
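The quoted dipole moment and field-coupling energy follow directly from the stated charges and geometry; this is a quick numerical check, not part of the model itself.

e = 1.602e-19        # elementary charge, C
q = 0.25 * e         # screened residue charge
sep = 0.64e-9        # cation-anion separation at equilibrium, m
d = q * sep          # dipole moment of the zwitterionic segment
print(d, d / 3.336e-30)     # ~2.6e-29 C*m, i.e. ~7.7 Debye (1 D = 3.336e-30 C*m)
E_field = 1e9        # 1 V/nm expressed in V/m
print(d * E_field / e)      # ~0.16 eV: horizontal -> standup energy gain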
3 Results and discussion

3.1 Effects of the transverse electric field
Figure 2a reports the average frictional shear stress as a function of the electric field. The vertical bars reflect the root-mean-square fluctuations: wide bars are indicative of stick-slip dynamics, as, e.g., in the friction trace of Figure 2c, while smoother sliding generates narrower bars, as in the friction trace of Figure 2f. The T = 300 K simulation data are consistent with a rather large ratio S/L of shear stress to load, in the 0.5-1.8 range, depending on the electric field. The average friction is enhanced by the electric field up to E ≃ 5 V · nm −1 , showing stick-slip dynamics for all electric-field values, as in the example of Figure 2c. The upward-pointing electric field flattens the SUP chains against the supporting layer: intra-layer steric hindrance and the Coulombic interactions within the same layer generate a substantially flat, well-ordered configuration. In contrast, the zwitterionic heads of the SUB chains tend to align in the electric field's direction, pointing in a more and more vertical configuration as the electric field increases. The lifted heads acquire an increasing configurational freedom, which allows the cations to interact more effectively with the SUP chains: more and more of their head cations reach energetically convenient "hollow sites" formed by adjacent anions in the SUP layer (see Figure 3c). In the slip events this interlocking is lost (see Figure 3b,d and ESI Movie1 †). This increased interlocking tends to favor stick-slip dynamics, as evident from the data of Figure 2.

Figure 2: Variation of (a) the average frictional shear stress and (b) the average distance between the rigid layers as a function of the electric field E, for v stage = 5 m · s −1 , applied load L = 10 MPa, and at temperatures T = 0 K (green circles) and T = 300 K (red triangles). Averaging omits the initial transient consisting of the first 20 nm of sliding in each simulation. Vertical bars measure the root-mean-square fluctuations: wide bars are typically indicative of stick-slip dynamics. (c-f) Examples of friction traces (instantaneous shear stress as a function of x stage = v stage t), corresponding to the E values pointed at in panel (a).
Figure 3: 5 nm y-thick slices of snapshots of a simulation carried out at T = 300 K, with E = 2 V · nm −1 and v stage = 5 m · s −1 . Side (a,b) and zoomed (c,d) views for (a,c) stick and (b,d) slip configurations. For clarity, SUP chains are colored lighter than SUB chains.
Figure 4: Side (a-c) and top (d-f) views of 4 nm y-thick slices of E = 5 V · nm −1 simulation snapshots at (a,d) T = 0 and (b,c,e,f) T = 300 K. Top views show only SUB chains inside the orange rectangles in (a-c). For clarity, SUP chains are colored lighter than SUB chains.
Figure 5: Side (a-c) and top (d-f) views of 4 nm y-thick slices of T = 0 K simulation snapshots for different values of electric field: (a,d), (b,e), (c,f) correspond to E = 4, 6.5, and 10 V · nm −1 , respectively. Top views show only SUB chains inside the orange rectangles in (a-c). For clarity, SUP chains are colored lighter than SUB chains.
Figure 6: (a) Average shear stress as a function of v stage for E = 6.5 V · nm −1 , T = 0 K, and L = 10 MPa. (b-e) Shear traces for a few of the velocities reported in panel (a). (f,i,l) Shear stress, (g,j,m) instantaneous velocity of the SUP layer, and (h,k,n) per-unit-area potential-energy contributions as a function of the stage displacement x stage , for three of the explored velocities. u I = U I /A: internal potential energy; u EF = U EF /A: potential energy of the charges in the applied electric field; u S = U S /A: potential energy of the pulling spring.
Figure 7: (a-c) Average shear stress and (d-f) SUP-SUB distance as a function of the applied load L. Blue lines are linear fits of the small-load (up to 20 MPa) T = 300 K data. The resulting differential friction coefficients are: µ = −0.08 for E = 1 V · nm −1 ; µ = 0.09 for E = 5 V · nm −1 ; µ = 0.12 for E = 10 V · nm −1 .
Movie1: 3 ns of the MD simulation corresponding to the snapshots shown in Figure 3, E = 2 V · nm −1 at T = 300 K (MP4).
Movie2: 3 ns of the MD simulation corresponding to the snapshots shown in Figure 4, E = 5 V · nm −1 at T = 300 K (MP4).
Movie3: 3 ns of the MD simulation corresponding to the snapshots shown in Figure S1, E = 10 V · nm −1 at T = 300 K (MP4).
References

(1) Krim, J. Controlling friction with external electric or magnetic fields: 25 examples. Front. Mech. Eng. 2019, 5, 22.
(2) Spikes, H. A. Triboelectrochemistry: influence of applied electrical potentials on friction and wear of lubricated contacts. Tribol. Lett. 2020, 68, 1-27.
(3) Bresme, F.; Kornyshev, A. A.; Perkin, S.; Urbakh, M. Electrotunable friction with ionic liquid lubricants. Nat. Mater. 2022, 21, 848-858.
(4) Song, A.; Shi, R.; Lu, H.; Wang, X.; Hu, Y.; Gao, H.-J.; Luo, J.; Ma, T. Fluctuation of interfacial electronic properties induces friction tuning under an electric field. Nano Lett. 2022, 22, 1889-1896.
(5) Labuda, A.; Hausen, F.; Gosvami, N. N.; Grütter, P. H.; Lennox, R. B.; Bennewitz, R. Switching Atomic Friction by Electrochemical Oxidation. Langmuir 2011, 27, 2561-2566.
(6) Kailer, A.; Amann, T.; Krummhauer, O.; Herrmann, M.; Sydow, U.; Schneider, M. Influence of electric potentials on the tribological behaviour of silicon carbide. Wear 2011, 271, 1922-1927 (18th International Conference on Wear of Materials).
(7) Sheng, P.; Wen, W. Electrorheological fluids: mechanisms, dynamics, and microfluidics applications. Annu. Rev. Fluid Mech. 2012, 44, 143-174.
(8) Hausen, F.; Zimmet, J.; Bennewitz, R. Surface structures and frictional properties of Au(100) in an electrochemical environment. Surf. Sci. 2013, 607, 20-24.
(9) Iqbal, S.; Wezisla, S.; Podgaynyy, N.; Baltruschat, H. Pyridine on Au(111): A frictional transition controlled by electrochemical potential. Electrochim. Acta 2015, 186, 427-435.
(10) Vanossi, A.; Dietzel, D.; Schirmeisen, A.; Meyer, E.; Pawlak, R.; Glatzel, T.; Kisiel, M.; Kawai, S.; Manini, N. Recent highlights in nanoscale and mesoscale friction. Beilstein J. Nanotech. 2018, 9, 1995-2014.
(11) Pashazanusi, L.; Oguntoye, M.; Oak, S.; Albert, J. N.; Pratt, L. R.; Pesika, N. S. Anomalous potential-dependent friction on Au(111) measured by AFM. Langmuir 2018, 34, 801-806.
(12) Ma, H.; Bennewitz, R. Nanoscale friction and growth of surface oxides on a metallic glass under electrochemical polarization. Tribol. Int. 2021, 158, 106925.
(13) Li, S.; Li, Y.; Bai, P.; Meng, Y.; Tian, Y. Potential-Dependent Interfacial Frictional Behavior between Charged Microspheres and Gold in Aqueous Solutions. J. Phys. Chem. C 2022, 126, 4555-4562.
(14) Fedorov, M. V.; Kornyshev, A. A. Ionic liquid near a charged wall: Structure and capacitance of electrical double layer. J. Phys. Chem. B 2008, 112, 11868-11872.
(15) Li, H.; Wood, R. J.; Rutland, M. W.; Atkin, R. An ionic liquid lubricant enables superlubricity to be "switched on" in situ using an electrical potential. Chem. Commun. 2014, 50, 4368-4370.
(16) Yang, X.; Meng, Y.; Tian, Y. Effect of imidazolium ionic liquid additives on lubrication performance of propylene carbonate under different electrical potentials. Tribol. Lett. 2014, 56, 161-169.
(17) Capozza, R.; Vanossi, A.; Benassi, A.; Tosatti, E. Squeezout phenomena and boundary layer formation of a model ionic liquid under confinement and charging. J. Chem. Phys. 2015, 142, 064707.
(18) Fajardo, O. Y.; Bresme, F.; Kornyshev, A. A.; Urbakh, M. Electrotunable Friction with Ionic Liquid Lubricants: How Important Is the Molecular Structure of the Ions? J. Phys. Chem. Lett. 2015, 6, 3998.
(19) Capozza, R.; Benassi, A.; Vanossi, A.; Tosatti, E. Electrical charging effects on the sliding friction of a model nano-confined ionic liquid. J. Chem. Phys. 2015, 143, 144703.
(20) Dold, C.; Amann, T.; Kailer, A. Influence of electric potentials on friction of sliding contacts lubricated by an ionic liquid. Phys. Chem. Chem. Phys. 2015, 17, 10339-10342.
(21) Strelcov, E.; Kumar, R.; Bocharova, V.; Sumpter, B. G.; Tselev, A.; Kalinin, S. V. Nanoscale lubrication of ionic surfaces controlled via a strong electric field. Sci. Rep. 2015, 5, 1-5.
(22) Pivnic, K.; Fajardo, O. Y.; Bresme, F.; Kornyshev, A. A.; Urbakh, M. Mechanisms of electrotunable friction in friction force microscopy experiments with ionic liquids. J. Phys. Chem. C 2018, 122, 5004-5012.
(23) Lahann, J.; Mitragotri, S.; Tran, T.-N.; Kaido, H.; Sundaram, J.; Choi, I. S.; Hoffer, S.; Somorjai, G. A.; Langer, R. A reversibly switching surface. Science 2003, 299, 371-374.
(24) Drummond, C. Electric-field-induced friction reduction and control. Phys. Rev. Lett. 2012, 109, 154302.
(25) Zeng, H.; Zhang, Y.; Mao, S.; Nakajima, H.; Uchiyama, K. A reversibly electro-controllable polymer brush for electro-switchable friction. J. Mater. Chem. C 2017, 5, 5877-5881.
(26) Karuppiah, K. K.; Zhou, Y.; Woo, L. K.; Sundararajan, S. Nanoscale friction switches: friction modulation of monomolecular assemblies using external electric fields. Langmuir 2009, 25, 12114-12119.
(27) de Wijn, A. S.; Fasolino, A.; Filippov, A.; Urbakh, M. Nanoscopic friction under electrochemical control. Phys. Rev. Lett. 2014, 112, 055502.
(28) de Wijn, A. S.; Fasolino, A.; Filippov, A. E.; Urbakh, M. Effects of molecule geometry and dispersion on nanoscopic friction under electrochemical control. J. Phys.: Condens. Matter 2016, 28, 105001.
(29) Myshkin, N. K.; Kovalev, A. V. Polymer Tribology; World Scientific, 2009; pp 3-37.
(30) Ma, S.; Zhang, X.; Yu, B.; Zhou, F. Brushing up functional materials. NPG Asia Mater. 2019, 11, 1-39.
(31) Qu, K.; Yuan, Z.; Wang, Y.; Song, Z.; Gong, X.; Zhao, Y.; Mu, Q.; Zhan, Q.; Xu, W.; Wang, L. Structures, properties, and applications of zwitterionic polymers. ChemPhysMater 2022, 1, 294-309.
(32) Dübner, M.; Spencer, N. D.; Padeste, C. Light-responsive polymer surfaces via post-polymerization modification of grafted polymer-brush structures. Langmuir 2014, 30, 14971-14981.
(33) Hähner, G.; Marti, A.; Spencer, N. D.; Brunner, S.; Caseri, W. R.; Suter, U. W.; Rehahn, M. Self-assembled layers of substituted poly(p-phenylene)s on gold and copper investigated by soft X-ray spectroscopy. Langmuir 1996, 12, 719-725.
(34) Gellman, A. J.; Spencer, N. D. Surface chemistry in tribology. Proc. Inst. Mech. Eng., Part J: J. Eng. Tribol. 2002, 216, 443-461.
(35) Drummond, C.; Israelachvili, J. Dynamic behavior of confined branched hydrocarbon lubricant fluids under shear. Macromolecules 2000, 33, 4910-4920.
(36) Drummond, C.; Israelachvili, J. Dynamic phase transitions in confined lubricant fluids under shear. Phys. Rev. E 2001, 63, 041506.
(37) Drummond, C.; Israelachvili, J.; Richetti, P. Friction between two weakly adhering boundary lubricated surfaces in water. Phys. Rev. E 2003, 67, 066110.
(38) Chen, M.; Briscoe, W. H.; Armes, S. P.; Klein, J. Lubrication at physiological pressures by polyzwitterionic brushes. Science 2009, 323, 1698-1701.
(39) Raviv, U.; Giasson, S.; Kampf, N.; Gohy, J.-F.; Jérôme, R.; Klein, J. Lubrication by charged polymers. Nature 2003, 425, 163-165.
(40) Carpick, R.; Salmeron, M. Scratching the Surface: Fundamental Investigations of Tribology with Atomic Force Microscopy. Chem. Rev. 1997, 97, 1163.
(41) Szlufarska, I.; Chandross, M.; Carpick, R. Recent advances in single-asperity nanotribology. J. Phys. D 2008, 41, 123001.
(42) Vanossi, A.; Manini, N.; Urbakh, M.; Zapperi, S.; Tosatti, E. Colloquium: Modeling friction: From nanoscale to mesoscale. Rev. Mod. Phys. 2013, 85, 529.
(43) Manini, N.; Braun, O. M.; Vanossi, A. In Fundamentals of Friction and Wear on the Nanoscale, 2nd ed.; Gnecco, E., Meyer, E., Eds.; Springer: Berlin, 2015; p 175.
(44) Manini, N.; Braun, O. M.; Tosatti, E.; Guerra, R.; Vanossi, A. Friction and nonlinear dynamics. J. Phys.: Condens. Matter 2016, 28, 293001.
(45) Monti, J. M.; Robbins, M. O. Sliding friction of amorphous asperities on crystalline substrates: scaling with contact radius and substrate thickness. ACS Nano 2020, 14, 16997-17003.
(46) Frérot, L.; Crespo, A.; El-Awady, J. A.; Robbins, M. O.; Cayer-Barrioz, J.; Mazuyer, D. From Molecular to Multiasperity Contacts: How Roughness Bridges the Friction Scale Gap. ACS Nano 2023, 17, 2205-2211.
(47) Gaisinskaya-Kipnis, A.; Klein, J. Normal and frictional interactions between liposome-bearing biomacromolecular bilayers. Biomacromolecules 2016, 17, 2591-2602.
(48) Angayarkanni, S. A.; Kampf, N.; Klein, J. Surface interactions between boundary layers of poly(ethylene oxide)-liposome complexes: Lubrication, bridging, and selective ligation. Langmuir 2019, 35, 15469-15480.
(49) Lin, W.; Liu, Z.; Kampf, N.; Klein, J. The role of hyaluronic acid in cartilage boundary lubrication. Cells 2020, 9, 1606.
(50) Gianetti, M. M.; Guerra, R.; Vanossi, A.; Urbakh, M.; Manini, N. Thermal Friction Enhancement in Zwitterionic Monolayers. J. Phys. Chem. C 2022, 126, 2797-2805.
(51) Zhang, Y.; Jin, D.; Tivony, R.; Kampf, N.; Klein, J. Preprint at arXiv:2305.19178 (2023).
(52) Böckmann, R. A.; De Groot, B. L.; Kakorin, S.; Neumann, E.; Grubmüller, H. Kinetics, statistics, and energetics of lipid membrane electroporation studied by molecular dynamics simulations. Biophys. J. 2008, 95, 1837-1850.
(53) Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 1995, 117, 1-19.
(54) Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in't Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; et al. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comput. Phys. Commun. 2022, 271, 108171.
(55) Allen, M. P.; Tildesley, A. J. Computer Simulations of Liquids; Oxford University Press: Oxford, 1991.
(56) Robbins, M. O.; Müser, M. In Modern Tribology Handbook; Bhushan, B., Ed.; CRC Press: Boca Raton, FL, 2001; pp 717-825.
(57) Rottler, J.; Robbins, M. O. Growth, microstructure, and failure of crazes in glassy polymers. Phys. Rev. E 2003, 68, 011801.
(58) Jin, D.; Klein, J. Preprint at arXiv:2303.08555 (2023).
(59) Jin, D.; Zhang, Y.; Klein, J. Preprint at arXiv:2303.08551 (2023).
Stochastic Geometry Analysis of a Two-Tier Vehicular Network with Roadside Units and Vehicular Relays

Chang-Sik Choi and François Baccelli

arXiv:2204.12243, 8 Jun 2023

Index Terms—Vehicular networks, Vehicle relays, Coverage probability, Throughput, Stochastic geometry
Abstract—This research explores the utilization of relays in vehicle-to-all (V2X) communications, where roadside units (RSUs) alone may not be sufficient to ensure network connectivity due to network congestion, signal attenuation, or interference. By employing stochastic geometry, we analyze a spatially-correlated vehicular network that incorporates both RSUs and relays to serve network users on roads. Our model considers the geometric characteristics of roads, RSUs, relays, and users using random points based on the road structure. Assuming separate frequency resources for RSUs and relays, users can associate with either RSUs or relays. We derive the association probability and coverage probability for the typical user, enabling us to assess network performance. Additionally, we investigate user throughput by considering interactions among different links within the network. This paper offers practical insights for the design of two-tier vehicular networks. Specifically, we express user association, user signal-to-interference ratio (SIR), and user throughput as functions of network variables. This information aids in determining optimal relay density and operating bandwidth to enhance network reliability and maximize user throughput in vehicular networks.
I. INTRODUCTION
A. Motivation and Prior Work
Recent advancements have opened up new possibilities for vehicles to take on additional roles in urban environments, expanding beyond their traditional transportation function [2]- [4]. In these emerging scenarios, vehicles will actively participate in a range of applications focused on road safety and efficiency. They will achieve this by establishing communication links with neighboring vehicles, pedestrians, traffic lights, and Internet-of-Things (IoT) devices [2], [4]. The presence of advanced vehicles equipped with sensors offers opportunities to enhance not only their own safety but also that of others [5], [6]. However, to realize the full potential of this innovative vehicle use, reliable communication is essential among various network entities, such as vehicles, base stations, smart sensors, and pedestrians [6]- [8].
By strategically placing base stations in proximity to roads or deploying roadside units (RSUs), vehicular networks can establish reliable and high-capacity communication links. RSUs, connected to the core network via backhaul connections, serve as hosts for advanced Vehicle-to-Everything (V2X) applications [5], [9]. However, as the number of network users grows and vehicular networks accommodate an expanding range of services, certain network users who solely rely on RSUs may encounter limitations in coverage. This can be attributed to factors such as data congestion, imbalanced load distribution, signal attenuation, and significant interference. In order to address these limitations, various technologies have been developed, and this paper specifically focuses on the utilization of vehicular relays [10], [11]. More precisely, the deployment of RSU-operated vehicular relays will bring about a transformation in the topology of vehicular networks, resulting in enhanced network reliability and throughput [10], [11]. For example, relays can efficiently forward critical messages to users situated at greater distances from RSUs. An illustration depicting such a scenario is provided in Fig. 1.

(Chang-Sik Choi is with Hongik University, South Korea. François Baccelli is with Inria Paris and with Telecom Paris, France; email: [email protected], [email protected]. This paper is an extension of our early work [1].)
To reveal the full potential of such a two-tier vehicular network, this paper studies the fundamental performance of a two-tier heterogeneous vehicular network with RSUs and RSU-operated relays, emphasizing the topology of such a network and the interaction between network elements. To represent the spatial interaction between RSUs, relays, and network users, we use stochastic geometry [12], [13]. In particular, some studies on vehicular networks used analytic models based on Poisson point processes [14], random points on a Manhattan road layout [15], and Cox point processes [13], [16]. For instance, Cox models were used in [17]-[24] to describe the locations of vehicles on roads. These papers analyzed the basic performance of the typical user in vehicular networks. Similarly, [17], [18], [20], [23] studied cellular networks with vehicles.
To define the contribution of the present paper with sufficient precision, the main point to stress is that the studied model is linked to the class of multi-tier models introduced in [25]-[27], with several types (or tiers) of base stations having different densities and different power characteristics.

The models considered in [25]-[27] featured base stations of all types spatially distributed as homogeneous planar Poisson point processes. The main novelty of the present multi-tier setting is that base stations of different types are here Cox processes w.r.t. the very same line process, and are hence only conditionally independent, and in no case independent, since they are constrained by a common road system. Since cellular networks based on such Cox processes w.r.t. lines were previously studied only in the single-tier case, the analytical solution of this new class of Cox-distributed and dependent multi-tier cellular architectures is the main new theoretical achievement of the present paper. Note that a two-tier network model with spatially-correlated RSUs and relays was introduced in [28]. In that study, the LOS coverage areas created by the transmitters (RSUs and relays) are modeled as Boolean models and their interaction is analyzed by stochastic geometry. Nevertheless, overlooking the role of users in such a network, it did not study the network performance seen by users, such as interference or the SINR coverage probability. Building on the results of [28], this paper reveals the full potential and feasibility of a two-tier network model with RSUs and relays.
B. Theoretical Contributions
Modeling of a spatially correlated two-tier heterogeneous vehicular network: This paper focuses on the unique geometric characteristic of vehicular networks, where network elements such as RSUs, vehicle transceivers, and pedestrians are closely associated with roads. To capture this characteristic, we employ a modeling approach that starts by representing the road layout as a Poisson line process. Subsequently, we distribute RSUs, vehicular relays, and users as Poisson point processes conditioned on these lines. This conditional construction ensures that all elements are located exclusively on roads. By incorporating this approach, we can analyze the geometric interactions within a two-tier heterogeneous vehicular network, specifically examining communication links such as RSU-to-relay, relay-to-user, and RSU-to-user connections. Unlike our previous work [16], [18] with a single Cox point process, or a more recent one [28] with only two sets of Cox point processes, this study characterizes two sets of transmitters and a single set of users as Cox point processes conditioned on the same line process. This pioneering attempt lets us investigate the network performance seen by the typical user in a two-tier heterogeneous vehicular network, while focusing on their geometric correlation.
Association behavior of users and coverage probability: Motivated by basic safety messages transmitted from network elements and received by nearby users [15], [29], we assume that users are associated with the RSU or relay closest to them. We quantify the association behavior as a function of the RSU and relay densities. The obtained probability describes the fraction of users associated with RSUs or with relays at any given time. We show that the association probability is not a linear function of their densities, because of the geometric correlation between RSUs and relays. Assuming that the frequency resources for operating relays and for serving network users are separate, we evaluate the SIR coverage probability of the typical user.

Comprehensive analysis and design insights: Taking into account the fact that relays are operated by RSUs and users are served by both relays and RSUs, we derive the effective throughput of the typical user in the proposed network. In particular, we obtain the user throughput formula by leveraging (i) the throughputs of RSU-to-user links and relay-to-user links, respectively, (ii) the SIR distribution and throughput of RSU-to-relay links, and (iii) the average number of network elements involved in the above links. By not ignoring the bottleneck resulting from the RSU-to-relay links, the throughput formula accurately describes the redistribution of the network payload achieved by spatially correlated relays in heterogeneous vehicular network architectures. In particular, we express the user throughput as a function of the frequency resources and the densities of RSUs, relays, and users. As a result, it can be effectively used to design and build heterogeneous vehicular networks where spatially correlated network elements exist. For instance, leveraging the throughput expression, network operators can allocate frequency resources to various links to optimize the network performance for given densities of RSUs, relays, and network users.
II. SYSTEM MODEL
This section explains the spatial model for network elements. Then it discusses the propagation model, the user association principle, and performance metrics.
A. Spatial Model for RSUs and Users
To model road layouts, we assume that the roads are given by an isotropic Poisson line process Φ [30]. In the context of stochastic geometry, such a model has been widely accepted for its analytical tractability [17]-[24]. The proposed Poisson line process is created from a homogeneous Poisson point process on a cylinder C = R × [0, π]. Consider a Poisson point process Ξ of intensity λ_l/π on C. Here, λ_l (per km) is the mean number of line segments in a disk of diameter 1 km. Then, each point δ_(r,θ) ∈ Ξ is mapped into a line, where r is the distance from the origin to the line and θ is the angle between the line and the x-axis, measured counterclockwise [30]. Collectively, all points of Ξ give rise to a Poisson line process Φ.
Conditionally on each line l(r, θ) ∈ Φ, the locations of RSUs and network users are modeled as independent one-dimensional Poisson point processes S_{r,θ} and U_{r,θ} of intensities µ_s and µ_u, respectively, where µ_u ≫ µ_s. Here, µ_s is the mean number of RSUs on a road segment of 1 km and µ_u is the mean number of users on a road segment of 1 km.
Collectively, the RSU point process S and the user point process U form Cox point processes constructed under an identical Poisson line process [16]. We have
S = \sum_{(r_i, \theta_i) \in \Phi} S_{r_i, \theta_i},        (1)

U = \sum_{(r_i, \theta_i) \in \Phi} U_{r_i, \theta_i}.        (2)
Figs. 2-4 show the proposed model with RSUs, relays, and network users; they all lie on the very same road structure. Table I summarizes the proposed network elements and notation. A more realistic road model, obtained by changing the intensity measure of Ξ, is left for future work (cf. [18]). A sketch of how this construction can be sampled is given below.
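As a minimal illustration, one realization of the Poisson line process and of the Cox-distributed RSUs can be sampled inside a disk of radius R_sim. The parameter values and variable names are placeholders; a line l(r, θ) hits the disk B(0, R_sim) iff |r| ≤ R_sim, so the mean number of hitting lines is 2 R_sim λ_l.

import numpy as np

rng = np.random.default_rng(1)
lam_l, mu_s, R_sim = 5.0, 2.0, 10.0   # lines per km, RSUs per km, window radius (km)

n_lines = rng.poisson(2 * R_sim * lam_l)            # number of lines hitting the disk
r = rng.uniform(-R_sim, R_sim, n_lines)
theta = rng.uniform(0.0, np.pi, n_lines)

rsus = []
for ri, thi in zip(r, theta):
    half = np.sqrt(R_sim**2 - ri**2)                # half-length of the chord in the disk
    t = rng.uniform(-half, half, rng.poisson(mu_s * 2 * half))  # 1-D Poisson on the chord
    foot = ri * np.array([np.cos(thi), np.sin(thi)])            # foot of the perpendicular
    direc = np.array([-np.sin(thi), np.cos(thi)])               # unit vector along the line
    rsus.extend(foot + ti * direc for ti in t)
rsus = np.asarray(rsus)    # one realization of the Cox point process S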
B. Spatial Model for Relay and Reserved Spectrum
We assume that vehicular relays are wirelessly connected to RSUs and that they serve network users [10], [29], as in Fig. 1. In the sequel, RSU-operated vehicular relays will be referred to as relays.
Since relays are on roads too, we model the locations of relays as a Cox point process, denoted by R. Specifically, conditional on each road l(r, θ) created by the above Poisson line process Φ, the locations of relays on each road follow a Poisson point process R r,θ of intensity µ r . Following the notation of Eqs. (1) and (2), we let
R = \sum_{(r_i, \theta_i) \in \Phi} R_{r_i, \theta_i}.        (3)
It is important to note that the RSU point process S, the relay point process R, and the user point process U are all on the same line process Φ. As a result, our approach captures the fact that RSUs, relays, and users are all on the very same road structure. Figs. 2 -4 show the spatial distributions of the RSUs, relays, and network users in the proposed network.
To operate relays, network operators can employ various approaches. To maintain the tractability of our work, we consider the simplest assumption, namely that the frequency resources for operating relays and the frequency resources for serving network users are separate (see Fig. 1, where the links are shown). This is partly motivated by the radio resource management techniques used in practice [10], [29], where frequency resources can be autonomously taken by vehicles or scheduled by RSUs. Specifically, to communicate with relays, RSUs use the spectrum f 2 of bandwidth W 2 . On the other hand, to serve network users on roads, RSUs and relays use the spectrum f 1 of bandwidth W 1 . In other words, we have three different types of communication links: RSU-to-relay links on f 2 , and RSU-to-user and relay-to-user links on f 1 .
Remark 2.
In practical cases, users may experience limited coverage because of interference or attenuation. In the proposed architecture, RSUs configure relays to forward their messages to network users. To ensure reliable reception of messages at their final destinations, we assume that the initial links of such relaying, namely RSU-to-relay communications occupy a reserved spectrum f 2 of bandwidth W 2 . For the rest of the communication links in the proposed two-tier heterogeneous vehicular networks, e.g., relay-to-user and RSUto-user links, we consider those links use a spectrum f 1 of bandwidth W 1 . Therefore, there is no co-channel interference between RSU-to-relay communications and the rest of the communications in the proposed architecture. Motivated by current standard implementation [10], [29], [31], we assume that RSU-to-user and relay-to-user links exist on the same spectrum and thus there is co-channel interference between them.
C. Relay and User Mobility
In vehicular networks, RSUs do not move, while relays and network users move along the roads. We assume that relays and users move along the lines they are located on and that they choose their speeds at random from given distributions. Specifically, each relay independently selects its own speed uniformly at random on the interval [v_{r;min}, v_{r;max}]. Each network user selects its own speed uniformly at random on the interval [v_{u;min}, v_{u;max}]. Example 1. One can relax the above mobility assumption. An example is one where relays and users on each road choose their own speeds out of standard normal distributions. Based on the displacement theorem [32], the relay and user point processes remain Poisson point processes on their lines. Thus, the relay and user point processes are time-invariant Cox point processes. This shows that the proposed mobility model and the corresponding analysis in this paper generalize to various mobility cases. A numerical illustration of this invariance is sketched below.
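The time invariance in Example 1 can be illustrated numerically: displacing each point of a 1-D Poisson process by an independent random shift leaves the count statistics Poisson. This is a sketch under the stated mobility assumptions; the speed range, road length, and wrap-around boundary are illustrative choices.

import numpy as np
rng = np.random.default_rng(0)

mu_u, road_len, t = 10.0, 1000.0, 5.0            # users/km, road length (km), elapsed time
pts = rng.uniform(0, road_len, rng.poisson(mu_u * road_len))
speeds = rng.uniform(0.01, 0.03, pts.size)       # per-user speeds, km per unit time
moved = (pts + speeds * t) % road_len            # displaced users (wrapped road)
for x in (pts, moved):                           # Poisson counts: mean ~ variance ~ mu_u
    c = np.histogram(x, bins=int(road_len), range=(0, road_len))[0]
    print(c.mean(), c.var())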
D. Relay Association and User Association
With regards to relays, this paper assumes that relays are associated with their closest RSUs. The RSU-to-relay communication links are established between RSUs and relays; these relays then forward messages from RSUs to nearby users. See Fig. 1. Combined with the separate spectrum usage given in Section II-B, the nearest association is a basis for the reliable reception of forwarded messages at the final destinations.
With regards to users, this paper assumes that each user is associated with its closest transmitter, namely either an RSU or a relay. This is based on practical use cases [5], [7], [8], [15] where network users are configured to connect with their nearest transmitters. The bottom figures of Figs. 2-4 show the user association map as a Voronoi tessellation, illustrated by solid blue lines. The centers of the Voronoi cells form the transmitter point process, i.e., S + R, and the cells constitute the user association map; users are connected to the transmitters at their cell centers. As the number of transmitters increases, the average size of the cells decreases, and so does the average number of users associated with each transmitter.
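Nearest-transmitter association can be computed directly with a k-d tree; the sketch below estimates the association events A_s empirically. The uniformly scattered positions are placeholders; in the model they would be the Cox points sampled above.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Placeholder planar positions; in the model these are the Cox points of
# S, R, and U sampled on the common line process.
rsus = rng.uniform(-5.0, 5.0, (40, 2))
relays = rng.uniform(-5.0, 5.0, (80, 2))
users = rng.uniform(-5.0, 5.0, (400, 2))

tx = np.vstack([rsus, relays])          # transmitter set S + R
dist, idx = cKDTree(tx).query(users)    # nearest-transmitter association
assoc_is_rsu = idx < len(rsus)          # event A_s, per user

print("P(A_s) estimate:", assoc_is_rsu.mean())
print("mean association distance (km):", dist.mean())
```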
Remark 3. [1] studied various user association techniques, including maximum average received signal power association and nearest association. In this paper, motivated by the various distance-critical safety V2X applications, we focus on the nearest user association principle. Nevertheless, the formulas and analysis given in this paper can readily be used to analyze maximum average received signal power association simply by changing the coefficients of the transmit powers, exploiting techniques in [25], [26], [33].
E. Propagation Model
Consider a receiver at distance d from its transmitter, which is either an RSU or a relay. We assume rich scattering around the network users [34] and a power-law path loss function [15]. Hence, the received signal power at the receiver is given by
p H L(d),   (4)
where p ∈ {p_s, p_r}, with p_s the transmit power of the RSUs and p_r the transmit power of the relays. H represents Rayleigh fading, modeled by an independent exponential random variable with mean one, and L(d) is the path loss over the distance d.
For the path loss, we note that it exhibits different characteristics depending on the relative locations of the transmitter and receiver, more precisely, on whether they are on the same road or not [35]. For tractability, we use a simple model in which the path loss L(d) over a distance d is
L(d) = d^{−α} if the transmitter and receiver are on the same road, and L(d) = d^{−β} otherwise,   (5)
where 2 < α ≤ β.
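Eqs. (4)-(5) translate into a few lines of code. The sketch below draws one received-power sample under Rayleigh fading and the dual path loss exponents; the exponent values are the ones used later in Fig. 5 and are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

ALPHA, BETA = 2.5, 3.5   # same-road and cross-road exponents, 2 < alpha <= beta

def received_power(p_tx, d, same_road):
    """One sample of p*H*L(d), Eq. (4): H ~ Exp(1) models Rayleigh fading and
    L(d) is the dual-slope path loss of Eq. (5)."""
    h = rng.exponential(1.0)
    return p_tx * h * d ** (-(ALPHA if same_road else BETA))

print(received_power(1.0, 0.2, same_road=True))
print(received_power(1.0, 0.2, same_road=False))
```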
F. Performance Metrics
This paper analyzes the performance seen by the network users. We first derive the coverage probability of the typical user and then evaluate the coverage probability of the typical RSU-to-relay link. Using both, we compute the effective user throughput.
1) User Coverage Probability: To analyze the network performance seen by the typical user, we use the Palm distribution of the user point process to feature a typical user at the origin. A line l(0, θ_0) almost surely exists because of the user at the origin [16]. The coverage probability of the typical user, P^0_U(SIR > τ), is

P^0_U( p H L(‖X^⋆‖) / Σ_{X_j ∈ (S+R)∖B(‖X^⋆‖)} p_{X_j} H_j L(‖X_j‖) > τ ),   (6)

where p is the transmit power of the association transmitter, which is either p_s or p_r depending on the user association, and τ is the SIR threshold. Here the ball of radius ‖X^⋆‖ centered at the origin is denoted by B_0(‖X^⋆‖); for a disk centered at the origin, we simply write B(r) ≡ B_0(r). Based on the user association principle in Section II-D, the association transmitter X^⋆ is

X^⋆ = arg min_{X_k ∈ S_{0,θ_0} + R_{0,θ_0} + S + R} ‖X_k‖.   (7)
Here, the association transmitter is an element of S_{0,θ_0} + R_{0,θ_0} + S + R. When the association transmitter is an RSU, X^⋆ = X^⋆_S; when it is a relay, X^⋆ = X^⋆_R. Based on their association transmitters, we divide users into two types: those associated with RSUs and those associated with relays. Then, we evaluate the coverage probability of each association type as follows:
P^0_U(SIR_{S→U} > τ) := P^0_U(SIR > τ | X^⋆ = X^⋆_S),   (8)
P^0_U(SIR_{R→U} > τ) := P^0_U(SIR > τ | X^⋆ = X^⋆_R),   (9)

the coverage probability of the typical RSU-associated user and of the typical relay-associated user, respectively.
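The coverage probability of Eqs. (6)-(9) can be estimated by Monte Carlo under the Palm setting: a typical user at the origin on a line through the origin, nearest-transmitter association, and all remaining RSUs and relays acting as interferers. The sketch below is a simplified estimator (finite-disk truncation, equal transmit powers), not the exact model; note that the distance from the origin to a point at abscissa t on a line with offset r_i is √(r_i² + t²) regardless of the line angle, which the code exploits.

```python
import numpy as np

rng = np.random.default_rng(5)

LAM_L, MU_S, MU_R = 3.0, 1.0, 2.0        # densities per km
ALPHA, BETA = 2.5, 3.5                    # path loss exponents
P_S = P_R = 1.0                           # equal transmit powers (gamma = 1)
RADIUS = 15.0                             # truncation disk, km

def one_sir_sample():
    # Palm setting: the typical user sits at the origin on a line through it
    # (offset 0); all other line offsets are uniform on (-RADIUS, RADIUS).
    n = rng.poisson(2 * LAM_L * RADIUS)
    offsets = np.concatenate([[0.0], rng.uniform(-RADIUS, RADIUS, n)])
    dists, rx = [], []
    for i, ri in enumerate(offsets):
        half = np.sqrt(RADIUS**2 - ri**2)
        for mu, p in ((MU_S, P_S), (MU_R, P_R)):
            t = rng.uniform(-half, half, rng.poisson(2 * mu * half))
            d = np.hypot(ri, t)                    # distance to the origin
            expo = ALPHA if i == 0 else BETA       # same road iff i == 0
            h = rng.exponential(1.0, t.size)       # Rayleigh fading
            dists.append(d)
            rx.append(p * h * d ** (-expo))
    dists, rx = np.concatenate(dists), np.concatenate(rx)
    k = np.argmin(dists)                           # nearest-transmitter association
    return rx[k] / (rx.sum() - rx[k])

sirs = np.array([one_sir_sample() for _ in range(1000)])
for tau_db in (-10, -5, 0, 5, 10):
    print(tau_db, "dB:", np.mean(sirs > 10 ** (tau_db / 10)))
```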
2) Relay Coverage Probability: To analyze the SIR of the typical relay, we consider the Palm distribution of the relay point process. The coverage probability of the typical relay, P^0_R(SIR_{S→R} > τ), is given by

P^0_R( p_s H L(‖X^⋆_S‖) / Σ_{X_j ∈ S∖B(‖X^⋆_S‖)} p_s H_j L(‖X_j‖) > τ ),   (10)
where X^⋆_S is the RSU closest to the typical relay located at the origin under the Palm distribution of the relay point process R. Since the RSU-to-relay communications occur over the bandwidth W_2, the RSU-to-relay links do not interfere with the RSU-to-user and relay-to-user links.
3) Throughput: Using the coverage probabilities above, we derive the throughput of the typical user. In the proposed vehicular network, where relays are operated by RSUs over a separate wireless resource W_2, the throughput is not just a simple function of the SIR distribution of the typical user. The precise definition of the user throughput will be given in Section V.
III. PRELIMINARY: ASSOCIATION PROBABILITY
Lemma 1. The probability that the typical user is associated with an RSU is given by Eq. (11); the probability that the typical user is associated with a relay is P(A_r) = 1 − P(A_s):

P(A_s) = ∫_0^∞ 2μ_s exp(−2(μ_s+μ_r) r − 2λ_l ∫_0^r [1 − e^{−2(μ_s+μ_r)√(r²−u²)}] du) dr
  + 4μ_s λ_l ∫_0^∞ ∫_0^{π/2} r e^{−2r(μ_s+μ_r) − 2r(μ_s+μ_r) sin(θ) − 2λ_l ∫_0^r [1 − e^{−2(μ_s+μ_r)√(r²−u²)}] du} dθ dr.   (11)

Proof: See [1, Theorem 1].
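Eq. (11) involves only one- and two-dimensional integrals and can be evaluated numerically. A sketch using scipy, with illustrative densities:

```python
import numpy as np
from scipy import integrate

lam_l, mu_s, mu_r = 3.0, 1.0, 2.0    # illustrative densities, per km
mu = mu_s + mu_r

def inner(r):
    # 2*lam_l * int_0^r (1 - exp(-2*mu*sqrt(r^2 - u^2))) du
    val, _ = integrate.quad(
        lambda u: 1.0 - np.exp(-2.0 * mu * np.sqrt(r * r - u * u)), 0.0, r)
    return 2.0 * lam_l * val

# First term of Eq. (11): association with a same-line RSU.
term1, _ = integrate.quad(
    lambda r: 2.0 * mu_s * np.exp(-2.0 * mu * r - inner(r)), 0.0, np.inf)

# Second term of Eq. (11): association with an RSU on a different line.
term2, _ = integrate.dblquad(
    lambda th, r: 4.0 * mu_s * lam_l * r
    * np.exp(-2.0 * r * mu - 2.0 * r * mu * np.sin(th) - inner(r)),
    0.0, np.inf, lambda r: 0.0, lambda r: np.pi / 2.0)

p_as = term1 + term2
print("P(A_s) =", p_as, "  P(A_r) =", 1.0 - p_as)
```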
The derived association probability will be used to derive the coverage probability of the typical user. It is important to note that the association probability is not a linear function of the RSU or relay densities because of the spatial correlation between RSUs and relays.

Proposition 1. The average number of users associated with the typical RSU is (μ_u/μ_s) P^0_U(A_s) and the average number of users associated with the typical relay is (μ_u/μ_r) P^0_U(A_r). On the other hand, the average number of relays associated with the typical RSU is μ_r/μ_s.

Proof: Consider a factor graph with an edge from each user to its association RSU or relay. From the mass transport principle [32],
λ_l μ_u P^0_U(A_s, E) = λ_l μ_s d_in,   (12)
where the left-hand side is the mean mass sent by the users to their association RSUs on the same lines, and the right-hand side is the mean mass received by the RSUs from their associated users on the same lines. Here λ_l μ_s is the spatial density of RSUs and d_in is the mean number of same-line users associated with the typical RSU under the Palm distribution of S. Similarly, considering users and their associated RSUs on different lines, we have
λ_l μ_u P^0_U(A_s, E^c) = λ_l μ_s d′_in,   (13)
where the left-hand side is the mean mass sent out by the users and the right-hand side is the mean mass received by the RSUs: d′_in is the mean number of different-line users associated with the typical RSU. As a result, the mean number of users per RSU is d_in + d′_in = μ_u P^0_U(A_s)/μ_s. Similarly, the mean number of users per relay is μ_u P^0_U(A_r)/μ_r. Finally, the mean number of relays per RSU is μ_r/μ_s.
The above proposition is essential to capture the impact of the RSU-to-relay links on the system performance. We use the above expression in the derivation of the user throughput in Section V.
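A quick numerical reading of Proposition 1, with a placeholder association probability standing in for the output of the Lemma 1 computation:

```python
# Quick check of Proposition 1; the association probability is a placeholder
# standing in for the value computed from Eq. (11).
mu_s, mu_r, mu_u = 1.0, 2.0, 10.0     # /km
p_as = 0.42                            # placeholder for P(A_s)
print("users per RSU:  ", mu_u * p_as / mu_s)
print("users per relay:", mu_u * (1.0 - p_as) / mu_r)
print("relays per RSU: ", mu_r / mu_s)
```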
IV. COVERAGE PROBABILITY OF USER AND RELAY
In Section IV-A, we first evaluate the coverage probability of the typical user under the Palm distribution of the user point process, leveraging the facts that network users are connected to their closest RSUs or relays and that RSU-to-user links interfere with relay-to-user links and vice versa. Then, in Section IV-B, we independently derive the coverage probability of the typical relay under the Palm distribution of the relay point process. The coverage probabilities of Sections IV-A and IV-B determine the network throughput that we will study in Section V.
A. Coverage Probability of the Typical User
This section gives the coverage probability of the typical user. Note that all RSUs and relays are assumed to have users to serve with high probability. We denote by γ the ratio of the relay transmit power to the RSU transmit power, γ = p_r/p_s. As in Section III, let E be the event that the association transmitter and the typical user are on the same line. We denote by A_s the event that the association transmitter is an RSU and by A_r the event that it is a relay.
Theorem 1. The coverage probability of the typical user is

P^0_U(SIR > τ, E, A_s) + P^0_U(SIR > τ, E, A_r) + P^0_U(SIR > τ, E^c, A_s) + P^0_U(SIR > τ, E^c, A_r),

whose four terms are given by Eqs. (14)-(17), respectively:

P^0_U(SIR > τ, E, A_s) = ∫_0^∞ G_1(r,a,b) e^{−2λ_l ∫_0^r [1−G_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−G_3(r,v,a,b)] dv} dr |_{a=1, b=1/γ},   (14)

P^0_U(SIR > τ, E, A_r) = ∫_0^∞ G_1(r,a,b) e^{−2λ_l ∫_0^r [1−G_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−G_3(r,v,a,b)] dv} dr |_{a=γ, b=1},   (15)

P^0_U(SIR > τ, E^c, A_s) = ∫_0^∞ H_1(r,a,b) e^{−2λ_l ∫_0^r [1−H_2(r,v,a,b)] dv} e^{−2λ_l ∫_r^∞ [1−H_3(r,v,a,b)] dv} H_4(r,c) dr |_{a=1, b=1/γ, c=μ_s},   (16)

P^0_U(SIR > τ, E^c, A_r) = ∫_0^∞ H_1(r,a,b) e^{−2λ_l ∫_0^r [1−H_2(r,v,a,b)] dv} e^{−2λ_l ∫_r^∞ [1−H_3(r,v,a,b)] dv} H_4(r,c) dr |_{a=γ, b=1, c=μ_r},   (17)

where

G_1(r,a,b) = 2μ_s e^{−2rμ_s} e^{−2μ_s ∫_r^∞ (τ r^α u^{−α})/(a + τ r^α u^{−α}) du} e^{−2μ_r ∫_r^∞ (τ r^α u^{−α})/(b + τ r^α u^{−α}) du},

G_2(r,v,a,b) = e^{−2(μ_s+μ_r)√(r²−v²)} e^{−2μ_s ∫_{√(r²−v²)}^∞ (τ r^α (v²+u²)^{−β/2})/(a + τ r^α (v²+u²)^{−β/2}) du} e^{−2μ_r ∫_{√(r²−v²)}^∞ (τ r^α (v²+u²)^{−β/2})/(b + τ r^α (v²+u²)^{−β/2}) du},

G_3(r,v,a,b) = e^{−2μ_s ∫_0^∞ (τ r^α (v²+u²)^{−β/2})/(a + τ r^α (v²+u²)^{−β/2}) du} e^{−2μ_r ∫_0^∞ (τ r^α (v²+u²)^{−β/2})/(b + τ r^α (v²+u²)^{−β/2}) du},

H_1(r,a,b) = e^{−2μ_s r − 2μ_s ∫_r^∞ (τ r^β u^{−α})/(a + τ r^β u^{−α}) du − 2μ_r ∫_r^∞ (τ r^β u^{−α})/(b + τ r^β u^{−α}) du},

H_2(r,v,a,b) = e^{−2(μ_s+μ_r)√(r²−v²)} e^{−2μ_s ∫_{√(r²−v²)}^∞ (τ r^β (v²+u²)^{−β/2})/(a + τ r^β (v²+u²)^{−β/2}) du} e^{−2μ_r ∫_{√(r²−v²)}^∞ (τ r^β (v²+u²)^{−β/2})/(b + τ r^β (v²+u²)^{−β/2}) du},

H_3(r,v,a,b) = e^{−2μ_s ∫_0^∞ (τ r^β (v²+u²)^{−β/2})/(a + τ r^β (v²+u²)^{−β/2}) du} e^{−2μ_r ∫_0^∞ (τ r^β (v²+u²)^{−β/2})/(b + τ r^β (v²+u²)^{−β/2}) du},

H_4(r,c) = ∫_0^{π/2} 4λ_l c r e^{−2r(μ_s+μ_r) sin(θ)} e^{−2c ∫_{r sin(θ)}^∞ (τ r^β (r² cos²(θ)+v²)^{−β/2})/(1 + τ r^β (r² cos²(θ)+v²)^{−β/2}) dv} dθ.
For the proof of Theorem 1 in the Appendix, the conditional Laplace transform of the interference in Eq. (39) is given by

E^0_U[ e^{−sI} | E^c, A_s, r, l_⋆, Φ ] = ∏_{(r_i,θ_i) ∈ Φ + δ_{(r_⋆,θ_⋆)} + δ_{(0,θ_0)}} E[ ∏_{T_j ∈ S_{0,0}: |T_j| > √(r²−r_i²)} 1/(1 + p_s s L(‖r_i e_{i,1} + T_j e_{i,2}‖)) ] E[ ∏_{T_j ∈ R_{0,0}: |T_j| > √(r²−r_i²)} 1/(1 + p_r s L(‖r_i e_{i,1} + T_j e_{i,2}‖)) ].   (18)

Corollary 1. When p_s = p_r, i.e., γ = 1, the coverage probability of the typical user is given by

∫_0^∞ J_1(r) e^{−2λ_l ∫_0^r [1−J_2(r,v)] dv} e^{−2λ_l ∫_r^∞ [1−J_3(r,v)] dv} dr + ∫_0^∞ K_1(r) e^{−2λ_l ∫_0^r [1−K_2(r,v)] dv − 2λ_l ∫_r^∞ [1−K_3(r,v)] dv} K_4(r) dr,   (21)

where the J and K functions are obtained from the G and H functions of Theorem 1 by setting a = b = 1 and combining the RSU and relay terms; in particular,
K_2(r,v) = e^{−2(μ_s+μ_r)√(r²−v²)} e^{−2(μ_s+μ_r) ∫_{√(r²−v²)}^∞ (τ r^β (v²+u²)^{−β/2})/(1 + τ r^β (v²+u²)^{−β/2}) du},

K_3(r,v) = e^{−2(μ_s+μ_r) ∫_0^∞ (τ r^β (v²+u²)^{−β/2})/(1 + τ r^β (v²+u²)^{−β/2}) du},

K_4(r) = ∫_0^{π/2} 4λ_l (μ_s+μ_r) r e^{−2(μ_s+μ_r) r sin(θ)} e^{−2(μ_s+μ_r) ∫_{r sin(θ)}^∞ (τ r^β (r² cos²(θ)+v²)^{−β/2})/(1 + τ r^β (r² cos²(θ)+v²)^{−β/2}) dv} dθ.
Proof: Setting γ = 1 in Theorem 1 completes the proof.

Above, we derived the coverage probability of the typical user at the origin; the result nevertheless applies to all users in the network.
Remark 4.
In Theorem 1, we analyze the SIR of the typical user located at the origin by using the Palm distribution of the user point process. Since the user point process is a time-invariant ergodic Poisson point process, the obtained formula corresponds to the spatial average of the SIRs of all users in the network [32], [36]. In other words, it corresponds to the statistics of the SIRs of all users in a large ball, at any given time. In addition, since the user point process is a time-invariant and ergodic Poisson point process, the coverage probability of the typical user coincides with the time average of the coverage probability of a specific user, obtained over a very long time [37].

Fig. 5 shows that the derived coverage probability of the typical user matches the numerical results obtained by Monte Carlo simulations, performed under various network parameters. In Figs. 6-7, we show only analytical results. Note that in the top figure of Fig. 6, the SIR curve changes only slightly as the density of the relays varies. In the low SIR regime, the SIR curve slightly decreases as we increase the number of relays. This is because, in the low SIR regime, users are more likely to be associated with transmitters on lines that are different from their own, and the received signal powers from the association transmitters are moderately dominated by the interference from the other transmitters. On the other hand, in the high SIR regime, the SIR curve increases as the number of relays increases. This is because, in the high SIR regime, users are more likely to be associated with transmitters on their own lines, and therefore the received signal powers dominate the interference. Nevertheless, it is worthwhile to mention that increasing the relay density does not always increase the SIR curve in some ranges of parameters. Especially when the relay or RSU density is very high, transmitters and receivers may be very close to each other, and the power-law path loss function of this paper should then be replaced with a truncated version, e.g., L(d) = min{1, d^{−α}} or min{1, d^{−β}}, to account for near-field effects. The analysis with a truncated power-law path loss model is left for future work. In the bottom figure of Fig. 6, we increase the line density to show the change of the SIR curve. Although μ_s and μ_r are equal to one, the area densities of RSUs and relays, λ_l μ_s/π and λ_l μ_r/π respectively, increase as λ_l increases. Therefore, the increment of the interference dominates the increment of the received signal power, and this explains the decrease of the SIR curve as λ_l increases. In Fig. 7, we increase the road density λ_l and the linear density of relays μ_r at the same rate. In both pictures, increasing the road density decreases the SIR curve. It is important to mention that the decrease of the SIR curve when α ≠ β is less significant than when α = β. By comparing the top figures of Figs. 6 and 7, we see that the decrease of the SIR curve is much clearer in Fig. 7 because, in general, the average number of transmitters per unit area is λ_l(μ_s+μ_r)/π, and thus the top figure of Fig. 7 has many more transmitters than the top figure of Fig. 6 on average. We can conclude that the interference caused by relays is significant for dense urban areas where roads are densely distributed. Nevertheless, by comparing λ_l = 1, μ_s = 1, μ_r = 5 and λ_l = 5, μ_s = 1, μ_r = 5, we see that the decrease of the SIR curve from μ_r = 1 to μ_r = 5 is about 15-20% when the SIR threshold is between −10 dB and 0 dB; outside this range, the decrease is between 5-10%. From these observations, we conclude that, despite some SIR decrease, relays are able to redistribute the users that were previously associated with RSUs.
In the bottom figure of Fig. 7, users are more likely to be associated with relays as the relay density increases. In particular, in the low SIR regime, users are associated with relays on different roads. Consequently, if the cross-road attenuation is not very significant, the received signal power from the association relays increases as we increase the number of relays, and it compensates for the interference from the added relays to some extent. It is worthwhile to stress that such a behavior of the SIR curves exists as long as the density of RSUs or relays is not too high; for instance, a truncated path loss model should be used if transmitters and receivers can be very close to each other.
B. Coverage Probability of the Typical Relay
In practice, relays can serve users only when the relevant data are channeled through the RSU-to-relay links. To evaluate the network user performance as restricted by the RSU-to-relay links, this section evaluates the coverage probability of the typical relay.
Theorem 2. The coverage probability of the typical relay is given by

P^0_R(SIR_{S→R} > τ) = ∫_0^∞ J̄_1(r) e^{−2λ_l ∫_0^r [1−J̄_2(r,v)] dv} e^{−2λ_l ∫_r^∞ [1−J̄_3(r,v)] dv} dr + ∫_0^∞ K̄_1(r) e^{−2λ_l ∫_0^r [1−K̄_2(r,v)] dv − 2λ_l ∫_r^∞ [1−K̄_3(r,v)] dv} K̄_4(r) dr,   (25)

where

J̄_1(r) = 2μ_s e^{−2rμ_s − 2μ_s ∫_r^∞ (τ r^α u^{−α})/(1 + τ r^α u^{−α}) du},

J̄_2(r,v) = e^{−2μ_s √(r²−v²) − 2μ_s ∫_{√(r²−v²)}^∞ (τ r^α (v²+u²)^{−β/2})/(1 + τ r^α (v²+u²)^{−β/2}) du},

J̄_3(r,v) = e^{−2μ_s ∫_0^∞ (τ r^α (v²+u²)^{−β/2})/(1 + τ r^α (v²+u²)^{−β/2}) du},

K̄_1(r) = e^{−2rμ_s − 2μ_s ∫_r^∞ (τ r^β u^{−α})/(1 + τ r^β u^{−α}) du},

K̄_2(r,v) = e^{−2μ_s √(r²−v²) − 2μ_s ∫_{√(r²−v²)}^∞ (τ r^β (v²+u²)^{−β/2})/(1 + τ r^β (v²+u²)^{−β/2}) du},

K̄_3(r,v) = e^{−2μ_s ∫_0^∞ (τ r^β (v²+u²)^{−β/2})/(1 + τ r^β (v²+u²)^{−β/2}) du},

K̄_4(r) = ∫_0^{π/2} 4λ_l μ_s r e^{−2μ_s r sin(θ)} e^{−2μ_s ∫_{r sin(θ)}^∞ (τ r^β (r² cos²(θ)+v²)^{−β/2})/(1 + τ r^β (r² cos²(θ)+v²)^{−β/2}) dv} dθ.

Proof: The result is obtained by following techniques similar to those in the proof of Theorem 1.
We combine Proposition 1, Lemma 1, and Theorems 1 and 2 to derive the user throughput.
V. USER THROUGHPUT
In the proposed network, users are associated with either RSUs or relays.
Firstly, the normalized achievable rate of RSU-associated users is defined as the mean achievable rate of the typical RSU-associated user divided by the mean number of users per RSU:

T_s = W_1 E[log_2(1 + SIR_{s→u})] / E[# users per RSU].   (22)
The normalized achievable rate is a heuristic metric because it is given by the ratio of the achievable rate to the average number of users, not the exact number. However, the exact distribution of the Cox-Voronoi cell is unknown, and thus using the exact number of users per RSU is infeasible. Here, we leverage the mass transport principle to obtain the mean number of users in the typical RSU cell (Proposition 1) and use it to compute the normalized achievable rate of the RSU-associated user. Secondly, the normalized achievable rate of relay-associated users is dictated by the coverage probabilities of both the RSU-to-relay and relay-to-user links. Using the coverage probabilities of both links, the normalized achievable rate of the relay-associated user is defined by
T_r = min( W_2 E[log_2(1 + SIR_{s→r})] / (E[# relays per RSU] E[# users per relay]), W_1 E[log_2(1 + SIR_{r→u})] / E[# users per relay] ),   (23)
where W 2 is the bandwidth for the RSU-to-relay communications and W 1 is the bandwidth for RSU-to-user and relay-touser communications. We combine Eqs. (22) and (23) to have the user throughput as follows:
T = P^0_U(A_s) T_s + P^0_U(A_r) T_r,   (24)
where P^0_U(A_s) and P^0_U(A_r) are given by Theorem 1.

Remark 5. The instantaneous SIRs of the RSU-to-relay links do not directly dictate the user throughput. However, these links indirectly affect the user performance by restricting the amount of data available at the relays. Consequently, the throughput of relay-associated users is determined by (i) the throughput of the RSU-to-relay links, (ii) the throughput of the relay-to-user links, (iii) the bandwidths W_1 and W_2, and (iv) the number of relays per RSU and the number of users per relay.
Theorem 3. The user throughput is given by

T = P^0_U(A_s) W_1 ∫_0^∞ P^0_U(SIR_{s→u} > 2^ξ − 1)/ū_s dξ + P^0_U(A_r) min( ∫_0^∞ W_2 P^0_R(SIR_{s→r} > 2^ξ − 1)/(r̄_s ū_r) dξ, ∫_0^∞ W_1 P^0_U(SIR_{r→u} > 2^ξ − 1)/ū_r dξ ),

where P^0_U(A_s) and P^0_R(SIR_{s→r} > 2^ξ − 1) are given by Theorems 1 and 2, respectively. Using the functions in Theorem 1, the coverage probability of the RSU-associated typical user is given by Eq. (26); similarly, the coverage probability of the relay-associated typical user is given by Eq. (27). Using Proposition 1, we have ū_s = μ_u P^0_U(A_s)/μ_s, ū_r = μ_u P^0_U(A_r)/μ_r, and r̄_s = μ_r/μ_s.

Proof: The coverage probabilities of the typical RSU-to-user link and of the typical relay-to-user link are obtained by leveraging Theorem 1. To obtain ū_s, ū_r, and r̄_s, we use Proposition 1. This completes the proof.
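Theorem 3 rests on the identity E[log_2(1+SIR)] = ∫_0^∞ P(SIR > 2^ξ − 1) dξ. The sketch below checks this identity on placeholder SIR samples; in practice the samples would come from the Monte Carlo estimator of Section IV-A.

```python
import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(9)
sir = rng.exponential(2.0, 50_000)          # placeholder SIR samples

direct = np.mean(np.log2(1 + sir))           # spectral efficiency, computed directly
xi = np.linspace(0.0, 25.0, 500)
coverage = np.array([np.mean(sir > 2.0**x - 1.0) for x in xi])
via_integral = trapezoid(coverage, xi)       # integral form used in Theorem 3
print(direct, via_integral)                  # the two values agree closely
```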
Example 2. Suppose γ = 1 and W_2 is sufficiently large. Then, the user throughput is

T = W_1 ∫_0^∞ (μ_s/μ_u) P^0_U(SIR_{S→U} > 2^ξ − 1) dξ + W_1 ∫_0^∞ (μ_r/μ_u) P^0_U(SIR_{R→U} > 2^ξ − 1) dξ.   (28)
On the other hand, the user throughput without any relays is

(μ_s/μ_u) W_1 ∫_0^∞ P^0_U(SIR > 2^ξ − 1) dξ.   (29)
As a result, based on Eqs. (28) and (29), the proposed network has a multiplicative gain Γ in the user throughput given by

Γ = ∫_0^∞ P^0_U(SIR_{S→U} > 2^ξ − 1) dξ / ∫_0^∞ P^0_U(SIR > 2^ξ − 1) dξ + (μ_r/μ_s) ∫_0^∞ P^0_U(SIR_{R→U} > 2^ξ − 1) dξ / ∫_0^∞ P^0_U(SIR > 2^ξ − 1) dξ.

Fig. 8 shows the user throughput of Theorem 3, where we use W = 20 MHz, λ_l = 3/km, μ_s = 1/km, μ_r = 3/km, μ_u = 15/km, α = 2.5, and β = 3.5. It shows that, for the given network parameters, W_2 = 14 MHz maximizes the user throughput of the proposed two-tier heterogeneous vehicular network. Note that the maximizing value of W_2 varies with parameters including λ_l, μ_s, μ_r, α, and β. In practice, network operators can easily find the optimal W_2 at little computational cost by exploiting Theorem 3.
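The optimization of Fig. 8 reduces to a one-dimensional sweep over W_2 once the spectral efficiencies and mean loads are known. In the sketch below, these quantities are placeholders standing in for the outputs of Theorems 1-2 and Proposition 1:

```python
import numpy as np

W = 20e6                                   # total bandwidth, Hz
p_as, p_ar = 0.45, 0.55                    # association probabilities (placeholders)
se_su, se_sr, se_ru = 2.1, 3.0, 1.8        # E[log2(1+SIR)] per link type (placeholders)
u_s, u_r, r_s = 6.8, 5.5, 3.0              # mean loads from Proposition 1 (placeholders)

w2 = np.linspace(0.0, W, 401)
w1 = W - w2
T_s = w1 * se_su / u_s                                          # Eq. (22)
T_r = np.minimum(w2 * se_sr / (r_s * u_r), w1 * se_ru / u_r)    # Eq. (23)
T = p_as * T_s + p_ar * T_r                                     # Eq. (24)
print("throughput-maximizing W2 ~ %.1f MHz" % (w2[np.argmax(T)] / 1e6))
```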
VI. CONCLUSION AND FUTURE WORK
Using stochastic geometry, this paper proposes and analyzes a novel two-tier heterogeneous vehicular network architecture in which RSUs and vehicular relays serve network users. Assuming that the vehicular relays are operated by RSUs and that users are associated with either RSUs or relays, we derive the association probability of the network users. We find that the association probability is a nonlinear function of the RSU and relay densities because RSUs, relays, and users are all on roads. We then derive the coverage probability of the typical user and obtain the user throughput. In particular, the user throughput incorporates the facts that RSUs operate the relays and that the throughput of relay-associated users is dictated by the SIRs of the RSU-to-relay and relay-to-user links and the corresponding bandwidths for those links. The paper gives practical insights into designing heterogeneous vehicular networks with RSUs and vehicular relays. By presenting formulas for the SIR and user throughput as functions of the network parameters, one can easily identify the complex interactions that occur in the network and use these formulas to enhance reliability or to increase throughput.
The present paper starts a new line of studies on heterogeneous vehicular networks. It provides a tractable model and a tool for analyzing the network performance. The analysis can be developed further by considering new and more practical components; for instance, the clustering of vehicles on roads can be represented by an independent cluster point process on roads. The analysis of the proposed two-tier heterogeneous vehicular network is also applicable to multi-tier vehicular networks in which various types of network elements exist, such as RSUs, relays, and IoT devices.
APPENDIX PROOF OF THEOREM 1
Proof: Under the Palm distribution of the user point process, P^0_U(·), there exists a typical user at the origin and a line l(0, θ_0) containing the typical user. Here, θ_0 is a uniform random variable between 0 and π. By the law of total probability, the coverage probability is given by

P^0_U(SIR > τ) = P^0_U(SIR > τ, E) + P^0_U(SIR > τ, E^c),   (30)
where we can write E : {l_⋆ = l(0, θ_0)} and E^c : {l_⋆ ≠ l(0, θ_0)}. The former is the event that the line l_⋆ containing the association transmitter is l(0, θ_0), the line that contains the typical user.
Using the functions of Theorem 1, the coverage probabilities referenced in Theorem 3 are

P^0_U(SIR_{s→u} > τ) = (1/P^0_U(A_s)) ∫_0^∞ G_1(r,a,b) e^{−2λ_l ∫_0^r [1−G_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−G_3(r,v,a,b)] dv} dr |_{a=1, b=1/γ} + (1/P^0_U(A_s)) ∫_0^∞ H_1(r,a,b) e^{−2λ_l ∫_0^r [1−H_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−H_3(r,v,a,b)] dv} H_4(r,c) dr |_{a=1, b=1/γ, c=μ_s},   (26)

P^0_U(SIR_{r→u} > τ) = (1/P^0_U(A_r)) ∫_0^∞ G_1(r,a,b) e^{−2λ_l ∫_0^r [1−G_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−G_3(r,v,a,b)] dv} dr |_{a=γ, b=1} + (1/P^0_U(A_r)) ∫_0^∞ H_1(r,a,b) e^{−2λ_l ∫_0^r [1−H_2(r,v,a,b)] dv − 2λ_l ∫_r^∞ [1−H_3(r,v,a,b)] dv} H_4(r,c) dr |_{a=γ, b=1, c=μ_r}.   (27)

For the first part of Eq. (30), the law of total probability gives

P^0_U(SIR > τ, E) = P^0_U(SIR > τ, E, A_s) + P^0_U(SIR > τ, E, A_r),   (31)
where A s and A r are the events that the typical user is associated with its closest RSU and with its closest relay, respectively. Then, with I the interference seen by the typical user, we have
P(SIR > τ, E, A_s) = P^0_U(p_s H L(‖X^⋆‖) > τ I, E, A_s)
  = P^0_U(p_s H > τ I ‖X^⋆‖^α, E, A_s)
  = E_Φ[ P^0_U( H > τ I ‖X^⋆‖^α / p_s, E, A_s | Φ ) ]
  = E_Φ[ ∫_{r=0}^{r=∞} P^0_U( H > τ r^α I / p_s | E, A_s, ‖X^⋆_S‖ = r, Φ ) P( ‖X^⋆_S‖ ∈ [r, r+dr), E, A_s | Φ ) ],   (32)
where we first express the probability as a conditional expectation w.r.t. Φ and then as a conditional expectation w.r.t. the distance to the nearest RSU; P(‖X^⋆_S‖ ∈ [r, r+dr), E, A_s | Φ) is the conditional probability density function of the distance from the origin to the closest RSU.
In a similar way, we have
P(SIR > τ, E, A_r) = E_Φ[ ∫_{r=0}^{r=∞} P^0_U( H > τ r^α I / p_r | E, A_r, ‖X^⋆_R‖ = r, Φ ) P( ‖X^⋆_R‖ ∈ [r, r+dr), E, A_r | Φ ) ].   (33)
In Eq. (33), P(‖X^⋆_R‖ ∈ [r, r+dr), E, A_r | Φ) is the conditional probability density function of the distance from the origin to the nearest relay. Furthermore, the integrands of Eqs. (32) and (33) are
P^0_U( H > τ r^α I / p_s | E, A_s, r, Φ ) = E^0_U[ e^{−sI} | E, A_s, r, Φ ] |_{s = τ r^α p_s^{−1}},   (34)
P^0_U( H > τ r^α I / p_r | E, A_r, r, Φ ) = E^0_U[ e^{−sI} | E, A_r, r, Φ ] |_{s = τ r^α p_r^{−1}},   (35)
respectively. From the independence of the Poisson point processes, we obtain E^0_U[e^{−sI} | E, A_s, r, Φ] = E^0_U[e^{−sI} | E, A_r, r, Φ]. The conditional Laplace transform of the interference is given by
E^0_U[ e^{−sI} | E, A_s, r, Φ ] = ∏_{T_k ∈ S_{0,θ_0}+R_{0,θ_0}: |T_k| > r} 1/(1 + s p_{T_k} |T_k|^{−α}) ∏_{(r_i,θ_i) ∈ Φ∖(0,θ_0)} ∏_{T_k ∈ S_{r_i,θ_i}+R_{r_i,θ_i}: ‖T_k‖ > r} 1/(1 + s p_{T_k} ‖T_k‖^{−β}),
where we use the Laplace transform of the exponential random variable and the fact that conditionally on the line process and conditionally on the association distance r, all RSUs and relays are at distances greater than r. Then, we have
E^0_U[ e^{−sI} | E, A_s, r, Φ ] = e^{−2μ_s ∫_r^∞ (s p_s u^{−α})/(1 + s p_s u^{−α}) du − 2μ_r ∫_r^∞ (s p_r u^{−α})/(1 + s p_r u^{−α}) du}
  × ∏_{r_i ∈ Φ: |r_i| < r} e^{−2μ_s ∫_{√(r²−r_i²)}^∞ (s p_s (r_i²+u²)^{−β/2})/(1 + s p_s (r_i²+u²)^{−β/2}) du} e^{−2μ_r ∫_{√(r²−r_i²)}^∞ (s p_r (r_i²+u²)^{−β/2})/(1 + s p_r (r_i²+u²)^{−β/2}) du}
  × ∏_{r_i ∈ Φ: |r_i| > r} e^{−2μ_s ∫_0^∞ (s p_s (r_i²+u²)^{−β/2})/(1 + s p_s (r_i²+u²)^{−β/2}) du} e^{−2μ_r ∫_0^∞ (s p_r (r_i²+u²)^{−β/2})/(1 + s p_r (r_i²+u²)^{−β/2}) du},   (36)
where we use the facts that the RSU and relay point processes on different lines are conditionally independent and that the distances from the origin to the RSU points {T_k}_{k∈Z} ∈ S_{r_i,θ_i} are given by {√(r_i² + G_k²)}_{k∈Z}, where {G_k}_{k∈Z} is the RSU Poisson point process on the real axis S_{0,0}.
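The first factor of Eq. (36), the Laplace transform of the interference created by same-line RSUs beyond distance r, is easy to verify by simulation. The sketch below compares the empirical mean of e^{−sI} against the PGFL expression, with illustrative parameters and a truncated line.

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(11)

mu_s, p_s, alpha = 1.0, 1.0, 2.5
r, s = 0.3, 2.0          # association distance and Laplace argument
L = 200.0                # line truncation, km (tail contribution is negligible)

def sim_laplace(n=20000):
    vals = np.empty(n)
    for i in range(n):
        u = rng.uniform(-L, L, rng.poisson(2 * mu_s * L))
        u = u[np.abs(u) > r]                 # interferers lie beyond distance r
        h = rng.exponential(1.0, u.size)     # Rayleigh fading
        vals[i] = np.exp(-s * np.sum(p_s * h * np.abs(u) ** -alpha))
    return vals.mean()

pgfl, _ = integrate.quad(
    lambda u: s * p_s * u**-alpha / (1 + s * p_s * u**-alpha), r, np.inf)
print(sim_laplace(), np.exp(-2 * mu_s * pgfl))   # the two values should match
```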
On the other hand, the probability density function in Eq. (32) is given by

P( ‖X^⋆_S‖ ∈ [r, r+dr), E, A_s | Φ ) = (∂(1 − P(S_{0,θ_0}(B_0(r)) = 0))/∂r) dr × P(R_{0,θ_0}(B(r)) = ∅) ∏_{(r_i,θ_i)∈Φ} P((S_{r_i,θ_i}+R_{r_i,θ_i})(B(r)) = ∅) = 2μ_s e^{−2rμ_s} dr e^{−2rμ_r} ∏_{|r_i|<r} e^{−2(μ_s+μ_r)√(r²−r_i²)},   (37)

where we use the facts that (i) X^⋆ ∈ S_{0,θ_0} and (ii) there is no point of R_{0,θ_0} + S_{r_i,θ_i} + R_{r_i,θ_i} within the disk of radius r centered at the origin. The probability density function in Eq. (33) is, similarly,

P( ‖X^⋆_R‖ ∈ [r, r+dr), E, A_r | Φ ) = 2μ_r e^{−2rμ_r} dr e^{−2rμ_s} ∏_{|r_i|<r} e^{−2(μ_s+μ_r)√(r²−r_i²)}.

To obtain the first part of Eq. (30), we combine Eqs. (33)-(37).
Let us now evaluate the second part of Eq. (30). By the law of total probability, P^0_U(SIR > τ, E^c) is given by

P^0_U(SIR > τ, E^c, A_s) + P^0_U(SIR > τ, E^c, A_r),   (38)
where A s and A r denote the events that the typical user is associated with the RSU or with the relay, respectively. Let l ⋆ denote the line of the association RSU transmitter. Then, by conditioning on Φ, on l ⋆ , and then X ⋆ S , we can write the first part of Eq. (38) as follows:
P^0_U(SIR > τ, E^c, A_s) = P^0_U( p_s H > τ I ‖X^⋆‖^β, E^c, A_s )
  = E_Φ[ P^0_U( H > τ I ‖X^⋆‖^β / p_s, E^c, A_s | Φ ) ]
  = E_{Φ,l_⋆}[ P^0_U( H > τ I ‖X^⋆_S‖^β / p_s, E^c, A_s | l_⋆, Φ ) ]
  = E_{Φ,l_⋆,r}[ P^0_U( H > τ r^β I / p_s | E^c, A_s, r, l_⋆, Φ ) ],   (39)
where we write ‖X^⋆_S‖ = r. In a similar way, the second part of Eq. (38) is given by

P^0_U(SIR > τ, E^c, A_r) = E_{Φ,l_⋆,r}[ P^0_U( H > τ r^β I / p_r | E^c, A_r, r, l_⋆, Φ ) ],   (40)
where we write ‖X^⋆_R‖ = r. By using the fact that H is an exponential random variable, the conditional probability of Eq. (39) is given by expression (18), where the distances from the origin to the points of the Poisson point process on the line l(r_i, θ_i) are represented by ‖r_i e_{i,1} + T_j e_{i,2}‖, with e_{i,1} the unit vector from the origin orthogonal to the line l(r_i, θ_i) and e_{i,2} the unit vector orthogonal to e_{i,1}. Here, S_{0,0} is the RSU point process on the x-axis and R_{0,0} is the relay point process on the x-axis. By using the probability generating functional of the Poisson point process, we obtain Eq. (41).
The five terms of Eq. (41) correspond to the Laplace transforms of the interference from (i) RSUs plus relays on the line l(0, θ_0), (ii) RSUs on the lines closer than r, (iii) relays on the lines closer than r, (iv) RSUs on the lines farther than r, and (v) relays on the lines farther than r, respectively. To obtain the conditional probability density function of the distance from the origin to its closest RSU in Eq. (39), we use the facts that (i) X^⋆_S is the closest RSU to the origin, (ii) X^⋆_S ∈ S_{r_⋆,θ_⋆}, and (iii) all the other RSU and relay point processes have no point in the disk of radius r. Therefore, using the void probability of the Poisson point process, the conditional probability density function of the distance from the origin to its closest RSU in Eq. (39) is given by Eq. (42).
In a similar way, the conditional probability density function of the distance from the origin to its closest relay in Eq. (40) is given by Eq. (43).
Finally, we combine Eqs. (41) and (42) and then integrate the result w.r.t. l_⋆ and then w.r.t. Φ. First, to integrate w.r.t. l_⋆, we combine all the functions involving l_⋆ to obtain expression (19). Then, we combine Eqs. (41), (42), and (19). Similarly, to obtain the second part of Eq. (38), we combine Eqs. (41) and (43) and evaluate the functions w.r.t. l_⋆ to obtain Eq. (20). Then, we combine the rest of Eqs. (41), (43), and (20) to complete the proof.
Fig. 1. Illustration of the proposed vehicular network with RSUs, relays, and users. Network users may receive signals from RSUs (right) or relays (left).
Fig. 2. The proposed network model with λ_l = 4/km, μ_s = 1/km, μ_r = 2/km, and μ_u = 10/km. The top picture shows the network elements and the bottom picture shows the user association map created by all RSUs and relays.
Fig. 3. The proposed network model with λ_l = 6/km, μ_s = 1/km, μ_r = 2/km, and μ_u = 10/km. The top picture shows the network elements and the bottom picture shows the user association map created by all RSUs and relays.
Fig. 4. The proposed network model with λ_l = 8/km, μ_s = 1/km, μ_r = 2/km, and μ_u = 10/km. The top picture shows the network elements and the bottom picture shows the user association map created by all RSUs and relays.
Fig. 5. The derived formula matches the simulation results. For the top figure, we use γ = 1, α = 2.5, and β = 3.5. For the bottom figure, we use γ = 1, α = 2.5, and β = 4. The units of λ_l, μ_s, and μ_r are per kilometer.
Fig. 6. In the top figure, λ_l and μ_s are fixed as μ_r varies. In the bottom figure, μ_s and μ_r are fixed as λ_l varies.
Fig. 7. For the top figure, α = β. For the bottom figure, α ≠ β.
Fig. 8. User throughput in Theorem 3.
TABLE I
NETWORK ELEMENTS

Var  | Description
Ξ    | Poisson point process on C
Φ    | 2-D Poisson distributed roads
λ_l  | mean number of roads in a disk of diameter 1 km
S    | RSUs on all roads
R    | Relays on all roads
U    | Users on all roads
μ_s  | mean number of RSUs on a 1 km road
μ_r  | mean number of relays on a 1 km road
μ_u  | mean number of users on a 1 km road
TABLE II
SPECTRUM USAGE

Communication link types | Bandwidth
RSU-to-user links        | W_1
relay-to-user links      | W_1
RSU-to-relay links       | W_2
ACKNOWLEDGMENT
The work of Chang-Sik Choi was supported in part by the NRF-2021R1F1A1059666 and by IITP Grant 2018-0-00792. The work of Francois Baccelli was supported by the ERC NEMO grant 788851 to INRIA.
REFERENCES

[1] C.-S. Choi, "User association in a heterogeneous vehicular network with roadside units and vehicle relays," IEEE Wireless Commun. Lett., vol. 11, no. 11, pp. 2345-2349, 2022.
[2] J. B. Kenney, "Dedicated short-range communications (DSRC) standards in the United States," Proceedings of the IEEE, vol. 99, no. 7, pp. 1162-1182, July 2011.
[3] J. Jin, J. Gubbi, S. Marusic, and M. Palaniswami, "An information framework for creating a smart city through Internet of Things," IEEE Internet Things J., vol. 1, no. 2, pp. 112-121, 2014.
[4] N. Lu, N. Cheng, N. Zhang, X. Shen, and J. W. Mark, "Connected vehicles: Solutions and challenges," IEEE Internet Things J., vol. 1, no. 4, pp. 289-299, 2014.
[5] 3GPP TR 36.885, "Study on LTE-based V2X services."
[6] S. Chen, J. Hu, Y. Shi, Y. Peng, J. Fang, R. Zhao, and L. Zhao, "Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G," IEEE Commun. Standards Mag., vol. 1, no. 2, pp. 70-76, 2017.
[7] 3GPP TS 22.816, "Service requirements for enhanced V2X scenarios."
[8] 3GPP TR 22.836, "Study on enhancement of 3GPP support for 5G V2X services."
[9] M. H. C. Garcia, A. Molina-Galan, M. Boban, J. Gozalvez, B. Coll-Perales, T. Şahin, and A. Kousaridas, "A tutorial on 5G NR V2X communications," IEEE Commun. Surv. & Tuts., vol. 23, no. 3, pp. 1972-2026, 2021.
[10] 3GPP TR 38.836, "Study on NR sidelink relay."
[11] 3GPP TR 38.874, "NR; study on integrated access and backhaul."
[12] F. Baccelli and S. Zuyev, "Stochastic geometry models of mobile communication networks," Frontiers in Queueing, pp. 227-243, 1997.
[13] F. Morlot, "A population model based on a Poisson line tessellation," in Proc. IEEE WiOpt, 2012, pp. 337-342.
[14] F. Baccelli, B. Blaszczyszyn, and P. Muhlethaler, "An Aloha protocol for multihop mobile wireless networks," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 421-436, Feb. 2006.
[15] 3GPP TR 37.885, "Study on evaluation methodology of new vehicle-to-everything (V2X) use cases for LTE and NR."
[16] C.-S. Choi and F. Baccelli, "Poisson Cox point processes for vehicular networks," IEEE Trans. Veh. Technol., vol. 67, no. 10, pp. 10160-10165, Oct. 2018.
[17] V. V. Chetlur and H. S. Dhillon, "Coverage analysis of a vehicular network modeled as Cox process driven by Poisson line process," IEEE Trans. Wireless Commun., vol. 17, no. 7, pp. 4401-4416, July 2018.
[18] C.-S. Choi and F. Baccelli, "An analytical framework for coverage in cellular networks leveraging vehicles," IEEE Trans. Commun., vol. 66, no. 10, pp. 4950-4964, Oct. 2018.
[19] V. V. Chetlur and H. S. Dhillon, "Success probability and area spectral efficiency of a VANET modeled as a Cox process," IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 856-859, 2018.
[20] C.-S. Choi and F. Baccelli, "Spatial and temporal analysis of direct communications from static devices to mobile vehicles," IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5128-5140, 2019.
[21] C.-S. Choi, F. Baccelli, and G. de Veciana, "Densification leveraging mobility: An IoT architecture based on mesh networking and vehicles," in Proc. IEEE/ACM MobiHoc, 2018, pp. 71-80.
[22] Y. Sun, Z. Ding, X. Dai, K. Navaie, and D. K. C. So, "Performance of downlink NOMA in vehicular communication networks: An analysis based on Poisson line Cox point process," IEEE Trans. Veh. Technol., vol. 69, no. 11, pp. 14001-14006, 2020.
[23] C.-S. Choi and F. Baccelli, "Modeling and analysis of vehicle safety message broadcast in cellular networks," IEEE Trans. Wireless Commun., vol. 20, no. 7, pp. 4087-4099, 2021.
[24] Y. Sun, Z. Ding, and X. Dai, "On the outage performance of network NOMA (N-NOMA) modeled by Poisson line Cox point process," IEEE Trans. Veh. Technol., vol. 70, no. 8, pp. 7936-7950, 2021.
[25] H. S. Dhillon, R. K. Ganti, F. Baccelli, and J. G. Andrews, "Modeling and analysis of K-tier downlink heterogeneous cellular networks," IEEE J. Sel. Areas Commun., vol. 30, no. 3, pp. 550-560, 2012.
[26] H.-S. Jo, Y. J. Sang, P. Xia, and J. G. Andrews, "Heterogeneous cellular networks with flexible cell association: A comprehensive downlink SINR analysis," IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3484-3495, 2012.
[27] Q. Ye, B. Rong, Y. Chen, M. Al-Shalash, C. Caramanis, and J. G. Andrews, "User association for load balancing in heterogeneous cellular networks," IEEE Trans. Wireless Commun., vol. 12, no. 6, pp. 2706-2716, 2013.
[28] C.-S. Choi and F. Baccelli, "LOS coverage area in vehicular networks with Cox-distributed roadside units and relays," IEEE Trans. Veh. Technol., pp. 1-11, 2023.
[29] 3GPP TR 38.211, "NR; physical channels and modulation."
[30] S. N. Chiu, D. Stoyan, W. S. Kendall, and J. Mecke, Stochastic Geometry and Its Applications. John Wiley & Sons, 2013.
[31] 3GPP TR 38.213, "NR; physical layer procedures for control."
[32] F. Baccelli and B. Błaszczyszyn, "Stochastic geometry and wireless networks: Volume I theory," Foundations and Trends in Networking, vol. 3, no. 3-4, pp. 249-449, 2010.
[33] S. Singh, F. Baccelli, and J. G. Andrews, "On association cells in random heterogeneous networks," IEEE Wireless Commun. Lett., vol. 3, no. 1, pp. 70-73, 2014.
[34] A. Goldsmith, Wireless Communications. Cambridge University Press, 2005.
[35] 3GPP TR 38.901, "NR; study on channel model for frequencies from 0.5 to 100 GHz."
[36] F. Baccelli and B. Błaszczyszyn, "Stochastic geometry and wireless networks: Volume II applications," Foundations and Trends in Networking, vol. 4, no. 1-2, pp. 1-312, 2010.
[37] F. Baccelli and P. Brémaud, Elements of Queueing Theory: Palm Martingale Calculus and Stochastic Recurrences, vol. 26. Springer-Verlag, 2013.