text
stringlengths
649
4.42k
synonym_substitution
stringlengths
759
4.5k
butter_fingers
stringlengths
649
4.42k
random_deletion
stringlengths
453
2.31k
change_char_case
stringlengths
649
4.42k
whitespace_perturbation
stringlengths
764
5.02k
underscore_trick
stringlengths
649
4.42k
$h_{5[1,2]}\left( x^{i}\right) $ stated by boundary conditions; b\) or, inversely, to compute $h_{4}$ for a given $h_{5}\left( x^{i},v\right) ,h_{5}^{\ast }\neq 0,$$$\sqrt{|h_{4}|}=h_{[0]}\left( x^{i}\right) (\sqrt{|h_{5}\left( x^{i},v\right) |})^{\ast }, \label{p1}$$with $h_{[0]}\left( x^{i}\right) $ given by boundary conditions. - The exact solutions of (\[ep3a\]) for $\beta \neq 0$ are defined from an algebraic equation, $w_{i}\beta +\alpha _{i}=0,$ where the coefficients $\beta $ and $\alpha _{i}$ are computed as in formulas ([abc]{}) by using the solutions for (\[ep1a\]) and (\[ep2a\]). The general solution is $$w_{k}=\partial _{k}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|]/\partial _{v}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|], \label{w}$$with $\partial _{v}=\partial /\partial v$ and $h_{5}^{\ast }\neq 0.$ If $h_{5}^{\ast }=0,$ or even $h_{5}^{\ast }\neq 0$ but $\beta =0,$ the coefficients $w_{k}$ could be arbitrary functions on $\left( x^{i},v\right). $ Β For the vacuum Einstein equations this is a degenerated case imposing the compatibility conditions $\beta =\alpha _{i}=0,$ which are satisfied, for instance, if the $h_{4}$ and $h_{5}$ are related as in the formula (\[p1\]) but with $h_{[0]}\left( x^{i}\right) =const.$ - Having defined $h_{4}$ and $h_{5}$ and computed $\gamma $ from ([abc]{}) we can solve the equation (\[ep4a\]) by integrating on variable β€œ$v $" the equation $n_{i}^{\ast \ast }+\gamma n_{i}^{\ast }=0.$ The exact solution is $$\begin{aligned} n
$ h_{5[1,2]}\left (x^{i}\right) $ stated by boundary conditions; b\) or, inversely, to compute $ h_{4}$ for a given $ h_{5}\left (x^{i},v\right) , h_{5}^{\ast } \neq 0,$$$\sqrt{|h_{4}|}=h_{[0]}\left (x^{i}\right) (\sqrt{|h_{5}\left (x^{i},v\right) |})^{\ast }, \label{p1}$$with $ h_{[0]}\left (x^{i}\right) $ give by boundary condition. - The exact solution of (\[ep3a\ ]) for $ \beta \neq 0 $ are define from an algebraic equation, $ w_{i}\beta + \alpha _ { i}=0,$ where the coefficients $ \beta $ and $ \alpha _ { i}$ are calculate as in formulas ([ abc ] { }) by using the solutions for (\[ep1a\ ]) and (\[ep2a\ ]). The cosmopolitan solution is $ $ w_{k}=\partial _ { k}\ln [ \sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast } |]/\partial _ { v}\ln [ \sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast } | ], \label{w}$$with $ \partial _ { v}=\partial /\partial v$ and $ h_{5}^{\ast } \neq 0.$ If $ h_{5}^{\ast } = 0,$ or even $ h_{5}^{\ast } \neq 0 $ but $ \beta = 0,$ the coefficient $ w_{k}$ could be arbitrary functions on $ \left (x^{i},v\right). $ Β  For the vacuum Einstein equation this is a degenerated case imposing the compatibility condition $ \beta = \alpha _ { i}=0,$ which are satisfied, for instance, if the $ h_{4}$ and $ h_{5}$ are related as in the formula (\[p1\ ]) but with $ h_{[0]}\left (x^{i}\right) = const.$ - Having defined $ h_{4}$ and $ h_{5}$ and calculate $ \gamma $ from ([ abc ] { }) we can solve the equation (\[ep4a\ ]) by integrating on varying β€œ $ v $ " the equation $ n_{i}^{\ast \ast } + \gamma n_{i}^{\ast } = 0.$ The exact solution is $ $ \begin{aligned } n
$h_{5[1,2]}\lfft( x^{i}\right) $ stated by buundary conditions; u\) or, inbersely, go compute $h_{4}$ for a given $h_{5}\leht( x^{u},v\rigyt) ,h_{5}^{\ast }\neq 0,$$$\sqrt{|h_{4}|}=h_{[0]}\lewt( x^{i}\righn) (\sqrt{|h_{5}\ledt( x^{m},v\right) |})^{\ast }, \lausl{p1}$$with $h_{[0]}\left( w^{i}\rigkt) $ given by boukdary condidions. - The exdcg dolutions of (\[ep3a\]) for $\beta \neq 0$ are qefined fgom an algebrayc eatatikn, $w_{i}\beta +\alpha _{i}=0,$ where the coerficienus $\beta $ and $\alpha _{i}$ are computed as in formklas ([abc]{}) by using the dolutions fir (\[e[1q\]) and (\[ep2a\]). Thd general solution is $$s_{k}=\partial _{k}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|]/\partiau _{v}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\asj }|], \label{w}$$witi $\partpal _{v}=\partial /\isrtial v$ and $n_{5}^{\ast }\neq 0.$ If $h_{5}^{\sst }=0,$ oe even $h_{5}^{\ast }\neq 0$ but $\ueta =0,$ the coefficienjs $w_{k}$ coulg ye arbitrary functionw in $\lext( x^{h},v\rieyt). $ Β Fod vhe vacuul Emnstein equztions this is a degenerated cssq imposing the compaeifility conditions $\beta =\alpha _{i}=0,$ which art satjsfied, for instance, if rhe $h_{4}$ and $h_{5}$ are relatgd as in tre formula (\[p1\]) but with $h_{[0]}\left( x^{i}\right) =const.$ - Havinc defmndd $k_{4}$ and $f_{5}$ ajd computed $\gamma $ from ([abc]{}) we can solve the qsustpon (\[ep4a\]) by integrcting on variablr β€œ$g $" jhe equation $n_{k}^{\ast \ast }+\famma n_{i}^{\ast }=0.$ The edact sojutiob is $$\begig{alibned} n
$h_{5[1,2]}\left( x^{i}\right) $ stated by boundary conditions; inversely, compute $h_{4}$ a given $h_{5}\left( (\sqrt{|h_{5}\left( |})^{\ast }, \label{p1}$$with x^{i}\right) $ given boundary conditions. - The exact solutions (\[ep3a\]) for $\beta \neq 0$ are defined from an algebraic equation, $w_{i}\beta +\alpha where the coefficients $\beta $ and $\alpha _{i}$ are computed as in formulas by the for and (\[ep2a\]). The general solution is $$w_{k}=\partial _{k}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|]/\partial _{v}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|], \label{w}$$with $\partial _{v}=\partial v$ and $h_{5}^{\ast }\neq 0.$ If $h_{5}^{\ast }=0,$ even $h_{5}^{\ast }\neq 0$ $\beta =0,$ the coefficients $w_{k}$ be functions on x^{i},v\right). For vacuum Einstein equations is a degenerated case imposing the compatibility conditions $\beta =\alpha _{i}=0,$ which are satisfied, for instance, if $h_{4}$ and related as the (\[p1\]) with $h_{[0]}\left( x^{i}\right) Having defined $h_{4}$ and $h_{5}$ and from ([abc]{}) we can solve the equation (\[ep4a\]) integrating on β€œ$v $" the equation $n_{i}^{\ast \ast n_{i}^{\ast }=0.$ The exact solution is $$\begin{aligned} n
$h_{5[1,2]}\left( x^{i}\right) $ stated by boundAry conditiOns; b\) oR, inVerSeLy, to CompUte $h_{4}$ for a given $h_{5}\LEft( x^{I},v\right) ,h_{5}^{\ast }\neq 0,$$$\sqrt{|h_{4}|}=h_{[0]}\leFt( x^{i}\rIgHT) (\sqrT{|H_{5}\lEft( x^{i},V\right) |})^{\aST }, \lABEl{p1}$$WiTh $H_{[0]}\leFt( X^{I}\rIght) $ gIveN by bounDary conditIonS. - THe exact solutIOnS of (\[ep3a\]) for $\bEta \Neq 0$ are defineD frOm an alGeBraIC equaTioN, $w_{i}\beTa +\alphA _{I}=0,$ where The coeffiCiENts $\betA $ And $\alphA _{I}$ ArE comPuted as in formulas ([ABc]{}) BY using the solutIons foR (\[eP1A\]) aND (\[Ep2a\]). the General solUtIon is $$W_{K}=\partiaL _{K}\lN [\SQRt{|h_{4}H_{5}|}/|H_{5}^{\ast }|]/\partial _{v}\lN [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|], \LAbeL{w}$$with $\PaRtiAL _{v}=\partIal /\paRtIAl v$ And $h_{5}^{\ast }\neq 0.$ IF $h_{5}^{\asT }=0,$ or even $h_{5}^{\aSt }\neq 0$ bUT $\beta =0,$ thE CoefficIents $w_{K}$ coUld Be arBItRaRy fUnCTioNS oN $\leFT( x^{i},V\right). $ Β FoR tHe VacuuM EinSTEIN equAtiOns tHis is A degenerated cAse ImpoSIng The coMpatiBiliTy CondiTions $\bEta =\alPhA _{i}=0,$ which are satisFied, For instanCe, iF tHe $h_{4}$ AnD $h_{5}$ are RElated As iN thE formulA (\[p1\]) but wiTH $h_{[0]}\lEfT( X^{I}\RiGht) =const.$ - Having defiNeD $H_{4}$ AnD $h_{5}$ and comPuted $\gAMmA $ fROm ([abc]{}) we cAn SolVe thE EQuatiOn (\[ep4A\]) By IntegratIng on vARiAbLe β€œ$v $" the eQuAtion $n_{I}^{\aSt \aSt }+\gAmma n_{I}^{\Ast }=0.$ THe exacT solutioN is $$\beGIn{aligned} n
$h_{5[1,2]}\left( x^{i}\r ight) $ st atedbybou nd arycond itions; b \ ) or , inversely, to comput e $h_ {4 } $ fo r a give n $h_{5 } \l e f t(x^ {i },v \r i gh t) ,h _{5}^{\ ast }\neq0,$ $$ \sqrt{|h_{4} | }= h_{[0]}\le ft( x^{i}\right ) ( \sqrt{ |h _{5 } \left ( x ^{i}, v\righ t ) |})^ {\ast }, \ l abel{p 1 }$$with $ h_ {[0] }\left( x^{i}\rig h t) $ given by bou ndaryco n di t i ons . - The ex ac t sol u tions o f ( \ [ e p3a \ ]) for $\beta \neq 0$ ar e de finedfr oma n alge braic e q uat ion, $w_{i} \bet a +\a lpha _ { i}=0,$w here th e coef fic ien ts $ \ be ta $an d $\ a lp ha_ {i} $ are co mp ut ed as inf o r m ulas ([ abc] {}) b y using the s olu tion s fo r (\[ ep1a\ ]) a nd (\[e p2a\]) . The g eneral solution is$$w_{k}=\ par ti al_{ k}\ln [\sqrt {|h _{4 }h_{5}| }/|h_{5 } ^{\ as t } |] /\partial _{v}\ln[\ s q rt {|h_{4}h _{5}|} / |h _{ 5 }^{\ast } |],\ label {w}$ $ wi th $\par tial _ { v} =\ partial / \parti al v$ an d $h_ { 5}^{ \ast } \neq 0.$ If $ h _{5}^{\ast }=0 , $ or even $h_ { 5} ^ { \a s t }\ neq 0$ but $\b eta= 0,$thec oe ffi c ients $w_{ k} $ c o uld be arbitrary fu nc tionson $\ left( x^{i},v \right). $ F or the v acuu m E i nstein equatio ns th is is a de g enerated case imposin g the com p a tibility co ndi tio ns$ \ be ta =\alpha _{ i } =0,$ w hich ar e s atisfie d,for in sta nc e, if the $h_{4}$ a nd $ h_ {5} $ are relatedas in t heformu l a (\[p 1\])butwi th $h_ {[0]}\l e ft ( x^{i }\ ri ght) =c on st.$ - Hav ing def ined $h_{ 4}$ and$h _{ 5}$ and computed $\g am ma $ from([ abc ]{}) w e can solv e the equation (\[ep4a\ ] ) by in teg ratin g on variable β€œ$ v $" t hee quatio n $n_{ i}^{\ as t \ a s t }+\ g a mm a n _{ i}^{\ast } = 0 .$The e xa ct s olution is $$\begin{align e d} n
$h_{5[1,2]}\left(_x^{i}\right) $_stated by boundary conditions; _ __b\) or,_inversely,_to compute $h_{4}$_for a given_$h_{5}\left( x^{i},v\right) _,h_{5}^{\ast }\neq 0,$$$\sqrt{|h_{4}|}=h_{[0]}\left(_x^{i}\right)_(\sqrt{|h_{5}\left( x^{i},v\right) |})^{\ast }, \label{p1}$$with $h_{[0]}\left( x^{i}\right) $ given by boundary conditions. - __The exact_solutions_of_(\[ep3a\]) for $\beta \neq 0$_are defined from an algebraic_equation, $w_{i}\beta _ +\alpha _{i}=0,$ where the coefficients_$\beta_$ and $\alpha__{i}$ are computed as in formulas ([abc]{}) by using_the solutions for (\[ep1a\]) and (\[ep2a\])._The general solution_is_$$w_{k}=\partial__{k}\ln [\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|]/\partial _{v}\ln_[\sqrt{|h_{4}h_{5}|}/|h_{5}^{\ast }|], _\label{w}$$with $\partial _{v}=\partial /\partial v$ and_$h_{5}^{\ast }\neq 0.$ If $h_{5}^{\ast }=0,$ or_even $h_{5}^{\ast }\neq 0$ but $\beta_=0,$ the coefficients $w_{k}$ could_be arbitrary_functions on $\left( x^{i},v\right). $_Β For the vacuum_Einstein equations_this is a_degenerated case imposing the compatibility conditions_$\beta =\alpha _{i}=0,$_which are satisfied, for instance, if_the_$h_{4}$ and $h_{5}$_are_related_as in_the formula (\[p1\])_but_with $h_{[0]}\left(_x^{i}\right)_=const.$ - Having defined $h_{4}$_and_$h_{5}$ and computed $\gamma $ from ([abc]{})_we can solve the_equation_(\[ep4a\]) by integrating on_variable β€œ$v $" the equation_$n_{i}^{\ast \ast }+\gamma n_{i}^{\ast }=0.$ The_exact solution_is $$\begin{aligned} _ n
mathsf{C}}}$, $deg(u_i) \geq 2\delta_s+1$. Therefore we can use the simple coding scheme described in SectionΒ \[sec:simple\_coding\] on $(m_{{\mathsf{C}}},n_{{\mathsf{C}}},{\mathcal{X}}_{{\mathsf{C}}},\delta_s)$ BNSI problem to save one transmission compared to uncoded transmission. Therefore the length of this code to transmit all the information symbols indexed by ${\mathsf{C}}\subseteq [n]$ over ${\mathds{F}}_q$ is $N_{{\mathsf{C}}}=|{\mathsf{C}}|-1$. For some integer $K$, let ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$ and $R=[n] \setminus ({\mathsf{C}}_1 \cup {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_K)$. Given such a collection of elements of $\Phi({\mathcal{B}})$, we design a valid coding scheme as follows. We apply the coding scheme described in Section \[sec:simple\_coding\] on each element ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K$ and transmit the information symbols indexed by the set $R$ uncoded. The codelength for this scheme is $$\begin{aligned} N &= \sum_{i=1}^{K}{(|{\mathsf{C}}_i|-1)}+|R| = \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|R|.\end{aligned}$$ \[disjoint\] Let $N$ be the codelength of the linear coding scheme based on the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$. Then there exist disjoint ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'} \in \Phi({\mathcal{B}})$ such that $K' \leq K$ and the codelength $N'$ of the linear coding scheme based on ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'}$ is at the most $N$. From the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_
mathsf{C}}}$, $ deg(u_i) \geq 2\delta_s+1$. Therefore we can use the simple coding schema trace in Section Β  \[sec: simple\_coding\ ] on $ (m_{{\mathsf{C}}},n_{{\mathsf{C}}},{\mathcal{X}}_{{\mathsf{C}}},\delta_s)$ BNSI problem to save one transmission compare to uncoded transmission. Therefore the distance of this code to impart all the information symbols indexed by $ { \mathsf{C}}\subseteq [ n]$ over $ { \mathds{F}}_q$ is $ N_{{\mathsf{C}}}=|{\mathsf{C}}|-1$. For some integer $ K$, let $ { \mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$ and $ R=[n ] \setminus ({ \mathsf{C}}_1 \cup { \mathsf{C}}_2 \cup \dots \cup { \mathsf{C}}_K)$. give such a collection of elements of $ \Phi({\mathcal{B}})$, we plan a valid coding scheme as follows. We apply the coding scheme described in Section \[sec: simple\_coding\ ] on each component $ { \mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K$ and transmit the information symbols index by the set $ R$ uncoded. The codelength for this scheme is $ $ \begin{aligned } N & = \sum_{i=1}^{K}{(|{\mathsf{C}}_i|-1)}+|R| = \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|R|.\end{aligned}$$ \[disjoint\ ] Let $ N$ be the codelength of the analogue coding outline based on the set $ { \mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$. Then there exist disjoint $ { \mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K' } \in \Phi({\mathcal{B}})$ such that $ K' \leq K$ and the codelength $ N'$ of the analogue coding scheme based on $ { \mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'}$ is at the most $ N$. From the set $ { \mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C } } _
matjsf{C}}}$, $deg(u_i) \geq 2\delta_s+1$. Thtrefore we can usg rhe sikple ckding scfeme described in SectionΒ \[sec:dinple\_ciding\] on $(m_{{\mathsf{C}}},n_{{\mathrf{C}}},{\mathcap{X}}_{{\mathsf{X}}},\delua_s)$ BNSI problem vk save one trzksmisvmon compared to uncoded tsansmission. Thardflre the length of this code to tranfmit alk hhe informatiog sykfols pnbexed by ${\mathsf{C}}\subseteq [n]$ over ${\mathds{H}}_q$ is $N_{{\mathsf{C}}}=|{\msthsf{C}}|-1$. For some integer $K$, pet ${\lathsf{C}}_1,{\mathsf{C}}_2,\dots,{\lathsf{C}}_K \in \Phi({\iqthcal{B}})$ and $F=[n] \setminus ({\mathsf{C}}_1 \cuk {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_K)$. Eiven such a coolwctlmn of elemeits of $\Phi({\mathcal{B}})$, we desicn a vakid coding schcme av fillows. We apply the cmding scheme descrybed in Saccion \[sec:simple\_coding\] in each elekent ${\natfsf{D}}_1,{\methaf{C}}_2,\dotd,{\mavhsf{C}}_K$ and fransmit thw information symboks pmdexed by ths set $W$ tncoded. The codelength for this scheme ps $$\bsgin{aligned} N &= \sum_{i=1}^{K}{(|{\mathwf{C}}_i|-1)}+|R| = \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|T|.\end{aligneq}$$ \[disjoint\] Let $N$ be the codelength of the linear cmding rchtmc cqsfd on the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Prj({\mstmcal{B}})$. Then there exist disjpijt ${\iathsf{C}}'_1,{\mathsf{Z}}'_2,\dots,{\mcfhaf{C}}'_{K'} \in \Phi({\mathcal{H}})$ such jhat $K' \leq K$ anq thr codelength $N'$ of the lineae coding schvme vased on ${\mathsf{C}}'_1,{\machsf{C}}'_2,\dots,{\matksf{C}}'_{K'}$ os at the most $N$. From the set ${\matgsf{C}}_1,{\mathsf{C}}_2,\fots,{\mathsr{Z}}_
mathsf{C}}}$, $deg(u_i) \geq 2\delta_s+1$. Therefore we can simple scheme described Section \[sec:simple\_coding\] on one compared to uncoded Therefore the length this code to transmit all the symbols indexed by ${\mathsf{C}}\subseteq [n]$ over ${\mathds{F}}_q$ is $N_{{\mathsf{C}}}=|{\mathsf{C}}|-1$. For some integer $K$, ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$ and $R=[n] \setminus ({\mathsf{C}}_1 \cup {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_K)$. such collection elements $\Phi({\mathcal{B}})$, we design a valid coding scheme as follows. We apply the coding scheme described in \[sec:simple\_coding\] on each element ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K$ and transmit the symbols indexed by the $R$ uncoded. The codelength for scheme $$\begin{aligned} N \sum_{i=1}^{K}{(|{\mathsf{C}}_i|-1)}+|R| \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|R|.\end{aligned}$$ Let $N$ be codelength of the linear coding scheme based on the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$. Then there exist disjoint \in \Phi({\mathcal{B}})$ $K' \leq and codelength of the linear based on ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'}$ is at the the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_
mathsf{C}}}$, $deg(u_i) \geq 2\delta_s+1$. TherEfore we can Use thE siMplE cOdinG schEme described in sEctiOnΒ \[sec:simple\_coding\] on $(m_{{\maThsf{C}}},N_{{\mAThsf{c}}},{\MaThcal{x}}_{{\mathsf{c}}},\DeLTA_s)$ BnSi pRobLeM To Save oNe tRansmisSion comparEd tO uNcoded transmISsIon. TherefoRe tHe length of thIs cOde to tRaNsmIT all tHe iNformAtion sYMbols iNdexed by ${\mAtHSf{C}}\subSEteq [n]$ ovER ${\MaThds{f}}_q$ is $N_{{\mathsf{C}}}=|{\mathsF{c}}|-1$. FOR some integer $K$, lEt ${\mathSf{c}}_1,{\MaTHSf{C}}_2,\DotS,{\mathsf{C}}_K \iN \PHi({\matHCal{B}})$ and $r=[N] \sETMInuS ({\Mathsf{C}}_1 \cup {\matHsf{C}}_2 \cup \dots \CUp {\mAthsf{C}}_k)$. GIveN Such a cOllecTiON of Elements of $\PHi({\maThcal{B}})$, we dEsign a VAlid codINg schemE as folLowS. We ApplY ThE cOdiNg SCheME dEscRIbeD in SectiOn \[SeC:simpLe\_coDING\] On eaCh eLemeNt ${\matHsf{C}}_1,{\mathsf{C}}_2,\doTs,{\mAthsF{c}}_K$ aNd traNsmit The iNfOrmatIon symBols iNdExed by the set $R$ unCodeD. The codelEngTh For ThIs schEMe is $$\beGin{AliGned} N &= \suM_{i=1}^{K}{(|{\mathSF{C}}_i|-1)}+|r| = \sUM_{I=1}^{k}{|{\mAthsf{C}}_i|}-K+|R|.\end{aligneD}$$ \[dISJoInt\] Let $N$ bE the coDElEnGTh of the lInEar CodiNG SchemE basED oN the set ${\mAthsf{C}}_1,{\MAtHsF{C}}_2,\dots,{\mAtHsf{C}}_K \iN \PHi({\mAthCal{B}})$. THEn thEre exiSt disjoiNt ${\matHSf{C}}'_1,{\mathsf{C}}'_2,\dots,{\MAthsf{C}}'_{K'} \in \Phi({\mAThCAL{B}})$ SUch tHat $k' \leq K$ and the CodeLEngtH $N'$ of THe LinEAr codIng scHeME bASed on ${\mathsf{C}}'_1,{\mathsf{C}}'_2,\DoTs,{\mathSf{C}}'_{K'}$ iS at the most $N$. FrOm the set ${\maTHSF{C}}_1,{\mathsf{c}}_2,\dotS,{\MaTHsf{C}}_
mathsf{C}}}$, $deg(u_i) \g eq 2\delta _s+1$ . T her ef orewe c an use the sim p le c oding scheme described in S ec t ion\ [s ec:si mple\_c o di n g \]on $ (m_ {{ \ ma thsf{ C}} },n_{{\ mathsf{C}} },{ \m athcal{X}}_{ { \m athsf{C}}} ,\d elta_s)$ BNS I p roblem t o s a ve on e t ransm ission compar ed to unc od e d tran s mission . Th eref ore the length of th i s code to tran smit a ll th e inf orm ation symb ol s ind e xed by$ {\ m a t hsf { C}}\subseteq[n]$ over $ { \ma thds{F }} _q$ is $N_ {{\ma th s f{C }}}=|{\math sf{C }}|-1$. F or som e intege r $K$, l et ${\ mat hsf {C}} _ 1, {\ mat hs f {C} } _2 ,\d o ts, {\mathsf {C }} _K \i n \P h i ( { \mat hca l{B} })$ a nd $R=[n] \se tmi nus( {\m athsf {C}}_ 1 \c up {\ma thsf{C }}_2\c up \dots \cup { \mat hsf{C}}_K )$. G ive nsucha colle cti onof elem ents of $\P hi ( { \ ma thcal{B}})$, we de si g n a valid c odings ch em e as foll ow s.We a p p ly th e co d in g scheme descr i be din Sect io n \[se c: sim ple \_cod i ng\] on ea ch eleme nt ${ \ mathsf{C}}_1,{ \ mathsf{C}}_2, \ do t s ,{ \ math sf{ C}}_K$ andtran s mitthei nf orm a tionsymbo ls in d exed by the set $R$ u ncoded . The codelength f or this sc h e m e is $$\ begi n {a l igned} N &= \s um_{i =1}^{K}{(| { \mathsf{ C}}_i |-1)}+|R | = \sum _ { i=1}^{K} {|{ \ma ths f{C } } _i |}-K+|R|.\end { a lign ed }$$ \[ dis joint\] Le t $ N$beth e codelen gth of t he l in ea r c oding scheme b as edon th e set ${\mat hsf{C }}_1 ,{ \m a ths f{C}}_2 , \d o t s,{\ ma th sf{C }}_ K\in \ Phi( { \ma thcal{B }})$. The n t h ereex is t disjo int ${\mathsf {C }}'_1,{\ma th sf{ C}}'_2 , \ dots,{\m athsf{C}}'_{K'} \in \Ph i ({\math cal {B}}) $ su ch that $ K'\leq K $ a n d thecodele ngth$N '$o f thel i ne arco ding schem e bas ed on $ {\ma thsf{C} }'_1,{\mathsf{C}}' _ 2,\ dots,{\mathsf {C} }'_{ K ' }$ is at the m o st$ N $. From the se t ${\maths f{ C }} _1,{\maths f {C} }_ 2,\dots ,{\math sf{C} } _
mathsf{C}}}$, $deg(u_i)_\geq 2\delta_s+1$._Therefore we can use_the simple_coding_scheme described_in_SectionΒ \[sec:simple\_coding\] on $(m_{{\mathsf{C}}},n_{{\mathsf{C}}},{\mathcal{X}}_{{\mathsf{C}}},\delta_s)$_BNSI problem to_save one transmission compared_to uncoded transmission._Therefore_the length of this code to transmit all the information symbols indexed by ${\mathsf{C}}\subseteq_[n]$_over ${\mathds{F}}_q$_is_$N_{{\mathsf{C}}}=|{\mathsf{C}}|-1$._For some integer $K$, let_${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in \Phi({\mathcal{B}})$ and $R=[n]_\setminus ({\mathsf{C}}_1_\cup {\mathsf{C}}_2 \cup \dots \cup {\mathsf{C}}_K)$. Given such_a_collection of elements_of $\Phi({\mathcal{B}})$, we design a valid coding scheme as_follows. We apply the coding scheme_described in Section_\[sec:simple\_coding\]_on_each element ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K$ and_transmit the information symbols indexed by_the set $R$ uncoded. The codelength_for this scheme is $$\begin{aligned} N &= \sum_{i=1}^{K}{(|{\mathsf{C}}_i|-1)}+|R|_ = \sum_{i=1}^{K}{|{\mathsf{C}}_i|}-K+|R|.\end{aligned}$$ \[disjoint\] Let $N$ be the_codelength of the linear coding_scheme based_on the set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_K \in_\Phi({\mathcal{B}})$. Then there_exist disjoint_${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'} \in \Phi({\mathcal{B}})$_such that $K' \leq K$ and_the codelength $N'$_of the linear coding scheme based_on_${\mathsf{C}}'_1,{\mathsf{C}}'_2,\dots,{\mathsf{C}}'_{K'}$ is at_the_most_$N$. From the_set ${\mathsf{C}}_1,{\mathsf{C}}_2,\dots,{\mathsf{C}}_
, parallelizable, compact Riemannian $n$-manifold can be embedded isometrically as a special Lagrangian submanifold in a manifold with holonomy ${\mathrm{SU}}(n)$. Notice that the assumption of real analyticity refers not only to the manifold, but to the structure as well. $\alpha$-Einstein-Sasaki geometry and hypersurfaces {#sec:go} =================================================== In this section we classify the constant intrinsic torsion geometries for the group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$, and write down evolution equations for hypersurfaces which are orthogonal to the characteristic direction, in analogy with SectionΒ \[sec:hypo\]. Let $T={\mathbb{R}}^{2n+1}$. The space $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ is spanned by the forms $\alpha$, $F$ and $\Omega^\pm$, defined in. In order to classify the differential operators on $(\Lambda^*T)^{{\mathrm{SU}}(n)}$, we observe that every element $g$ of the normalizer $N({\mathrm{SU}}(n))$ of ${\mathrm{SU}}(n)$ in maps $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ to itself; this defines a natural notion of equivalence among differential operators. \[prop:oddderivation\] Let $f$ be a derivation of $(\Lambda^*T)^{{\mathrm{SU}}(n)}$; then $f$ is a differential operator that extend to a derivation of degree one on $\Lambda^*T$ if and only if one of the following holds: - $f(\alpha)=0$, $f(F)=2\lambda\alpha\wedge F$, $f(\Omega)=n(\lambda-\mu i)\alpha\wedge\Omega$; - $f(\alpha)=\lambda F$, $f(F)=0$, $f(\Omega)=-\mu i\alpha\wedge\Omega$; - $n=2$, and up to $N({\mathrm{SU}}(2))$ action, $\tilde f$ has the form (A) or (B); - $n=3$, $f(\alpha)=0$, $f(F)=3\lambda\Omega^- - 3\mu\Omega^+$, $f(\Omega)=2(\lambda+i\mu)F^2$; here $\lambda$ and $\mu$ are real constants.
, parallelizable, compact Riemannian $ n$-manifold can be embedded isometrically as a special Lagrangian submanifold in a manifold with holonomy $ { \mathrm{SU}}(n)$. Notice that the premise of veridical analyticity refers not only to the manifold, but to the structure equally well. $ \alpha$-Einstein - Sasaki geometry and hypersurfaces { # sec: go } = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = In this section we classify the changeless intrinsic torsion geometries for the group $ { \mathrm{SU}}(n)\subset { \mathrm{O}}(2n+1)$, and publish down evolution equations for hypersurfaces which are extraneous to the characteristic direction, in analogy with Section Β  \[sec: hypo\ ]. Let $ T={\mathbb{R}}^{2n+1}$. The space $ (\Lambda^*T)^{{\mathrm{SU}}(n)}$ is cross by the forms $ \alpha$, $ F$ and $ \Omega^\pm$, defined in. In decree to classify the differential operators on $ (\Lambda^*T)^{{\mathrm{SU}}(n)}$, we detect that every element $ g$ of the normalizer $ N({\mathrm{SU}}(n))$ of $ { \mathrm{SU}}(n)$ in maps $ (\Lambda^*T)^{{\mathrm{SU}}(n)}$ to itself; this defines a lifelike notion of equivalence among differential operator. \[prop: oddderivation\ ] Let $ f$ be a deriving of $ (\Lambda^*T)^{{\mathrm{SU}}(n)}$; then $ f$ is a differential operator that extend to a derivation of degree one on $ \Lambda^*T$ if and only if one of the follow holds: - $ f(\alpha)=0 $, $ f(F)=2\lambda\alpha\wedge F$, $ f(\Omega)=n(\lambda-\mu i)\alpha\wedge\Omega$; - $ f(\alpha)=\lambda F$, $ f(F)=0 $, $ f(\Omega)=-\mu i\alpha\wedge\Omega$; - $ n=2 $, and up to $ N({\mathrm{SU}}(2))$ action, $ \tilde f$ has the form (A) or (B); - $ n=3 $, $ f(\alpha)=0 $, $ f(F)=3\lambda\Omega^- - 3\mu\Omega^+$, $ f(\Omega)=2(\lambda+i\mu)F^2 $; here $ \lambda$ and $ \mu$ are veridical constants.
, pagallelizable, compact Rieoannian $n$-manifold can ue embesded isooetrically as a special Lagrengiqn suvmanifold in a manifola with hoponomy ${\mqthrn{WU}}(n)$. Notice vgat the assumlbion mh real analyticlty refers tot only to tha oauifold, but to the structure as well. $\ajpha$-Einxtfin-Sasaki geomgtry sgd hgievsurfaces {#sec:go} =================================================== In this section ws classpfy the constant ontrinsic torsion geometrifs flr the group ${\mathrl{SU}}(n)\subset {\nathwn{O}}(2n+1)$, and writd down evolution equatjons for hypersurfaces which ard ortkogonal to jkw cjdracteristir direbtion, in analogy with SectiomΒ \[sec:hypo\]. Let $T={\msthub{R}}^{2n+1}$. The space $(\Lambda^*T)^{{\matirm{SU}}(n)}$ is spanned by the formv $\clpha$, $F$ and $\Omega^\pm$, dwfuned hn. It orawr go dlessjfy thf dmfferential operators in $(\Lambda^*T)^{{\mathrm{SU}}(n)}$, wq observe that every ejement $g$ of the normalizer $N({\mathrm{SU}}(n))$ ox ${\mzthrm{SU}}(n)$ in maps $(\Lambda^*R)^{{\mathrm{SU}}(n)}$ to itself; jhis definqs a natural notion of equivalence among differendial koercbjfw. \[pgop:oddderivation\] Let $f$ be a derivation of $(\Lamfsa^*U)^{{\manhrm{SU}}(n)}$; then $f$ is a differenyiwl jperator that extenb tk a derivation of fegree jne ob $\Lambda^*T$ if snd only if one of the folliwing holds: - $f(\alpha)=0$, $f(F)=2\lambda\al'ha\wedge F$, $f(\Umegs)=n(\lamnda-\mu i)\alpha\wedge\Omega$; - $f(\ampha)=\lambda V$, $f(F)=0$, $f(\Omefx)=-\mu i\alpha\wedge\Ooegs$; - $n=2$, and up to $N({\mathrm{SU}}(2))$ aceion, $\tildx f$ hcs the furm (S) or (B); - $n=3$, $f(\alphw)=0$, $f(F)=3\lambda\Omega^- - 3\mu\Omega^+$, $f(\Omeya)=2(\lamtda+i\mu)F^2$; herf $\lambda$ and $\mu$ are real constaivs.
, parallelizable, compact Riemannian $n$-manifold can be as special Lagrangian in a manifold the of real analyticity not only to manifold, but to the structure as $\alpha$-Einstein-Sasaki geometry and hypersurfaces {#sec:go} =================================================== In this section we classify the constant torsion geometries for the group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$, and write down evolution equations for which orthogonal the direction, in analogy with Section \[sec:hypo\]. Let $T={\mathbb{R}}^{2n+1}$. The space $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ is spanned by the forms $F$ and $\Omega^\pm$, defined in. In order to the differential operators on we observe that every element of normalizer $N({\mathrm{SU}}(n))$ ${\mathrm{SU}}(n)$ maps to itself; this a natural notion of equivalence among differential operators. \[prop:oddderivation\] Let $f$ be a derivation of $(\Lambda^*T)^{{\mathrm{SU}}(n)}$; then is a that extend a of one on $\Lambda^*T$ only if one of the following $f(F)=2\lambda\alpha\wedge F$, $f(\Omega)=n(\lambda-\mu i)\alpha\wedge\Omega$; - $f(\alpha)=\lambda F$, $f(F)=0$, i\alpha\wedge\Omega$; - and up to $N({\mathrm{SU}}(2))$ action, $\tilde has the form (A) or (B); - $n=3$, $f(F)=3\lambda\Omega^- - 3\mu\Omega^+$, $f(\Omega)=2(\lambda+i\mu)F^2$; here $\lambda$ and $\mu$ are real constants.
, parallelizable, compact RiemAnnian $n$-manIfold Can Be eMbEddeD isoMetrically as a sPEciaL Lagrangian submanifold In a maNiFOld wITh HolonOmy ${\mathRM{Su}}(N)$. notIcE tHat ThE AsSumptIon Of real aNalyticity RefErS not only to thE MaNifold, but tO thE structure as WelL. $\alpha$-eiNstEIn-SasAki GeomeTry and HYpersuRfaces {#sec:Go} =================================================== iN this sECtion we CLAsSify The constant intrinSIc TOrsion geometriEs for tHe GRoUP ${\MatHrm{sU}}(n)\subset {\mAtHrm{O}}(2n+1)$, ANd write DOwN EVOluTIon equations fOr hypersurfACes Which aRe OrtHOgonal To the ChARacTeristic dirEctiOn, in analoGy with sEctionΒ \[sEC:hypo\]. LeT $T={\mathBb{R}}^{2N+1}$. ThE spaCE $(\LAmBda^*t)^{{\mAThrM{sU}}(N)}$ is SPanNed by the FoRmS $\alphA$, $F$ anD $\oMEGa^\pm$, DefIned In. In oRder to classifY thE difFEreNtial OperaTors On $(\lambdA^*T)^{{\mathRm{SU}}(n)}$, We Observe that everY eleMent $g$ of thE noRmAliZeR $N({\matHRm{SU}}(n))$ oF ${\maThrM{SU}}(n)$ in mAps $(\LambDA^*T)^{{\mAtHRM{sU}}(N)}$ to itself; this definEs A NAtUral notiOn of eqUIvAlENce among DiFfeRentIAL operAtorS. \[PrOp:oddderIvatioN\] leT $f$ Be a deriVaTion of $(\laMbdA^*T)^{{\mAthrm{su}}(n)}$; thEn $f$ is a DifferenTial oPErator that exteND to a derivatioN Of DEGrEE one On $\LAmbda^*T$ if and Only IF one Of thE FoLloWIng hoLds: - $f(\aLpHA)=0$, $f(f)=2\Lambda\alpha\wedge F$, $f(\OMeGa)=n(\lamBda-\mu I)\alpha\wedge\OmEga$; - $f(\alpha)=\lAMBDa F$, $f(F)=0$, $f(\OmEga)=-\mU I\aLPha\wedge\Omega$; - $n=2$, And up To $N({\mathrm{Su}}(2))$ Action, $\tiLde f$ hAs the forM (A) or (B); - $n=3$, $f(\alPHA)=0$, $f(F)=3\lambdA\OmEga^- - 3\Mu\OMegA^+$, $F(\omEga)=2(\lambda+i\mu)F^2$; HERe $\laMbDa$ and $\mu$ Are Real conStaNts.
, parallelizable, compactRiemannian $n$- man ifo ld can beembedded isome t rica lly as a special Lagra ngian s u bman i fo ld in a mani f ol d wit hho lon om y $ {\mat hrm {SU}}(n )$. Notic e t ha t the assump t io n of realana lyticity ref ers not o nl y t o theman ifold , butt o thestructure a s well. $\alph a $ -E inst ein-Sasaki geomet r ya nd hypersurfac es {#s ec : go } === === ========== == ===== = ======= = == = = = === = ========== I n this sect i onwe cla ss ify the co nstan ti ntr insic torsi on g eometries for t h e group ${\math rm{SU} }(n )\s ubse t { \m ath rm { O}} ( 2n +1) $ , a nd write d ow n evo luti o n e quat ion s fo r hyp ersurfaces wh ich are ort hogon al to the c harac terist ic di re ction, in analo gy w ith Secti on\[ sec :h ypo\] . Let$T= {\m athbb{R }}^{2n+ 1 }$. T h e sp ace $(\Lambda^*T)^ {{ \ m at hrm{SU}} (n)}$i ssp a nned byth e f orms $ \alph a$,$ F$ and $\O mega^\ p m$ ,defined i n. Inor der to clas s ifythe di fferenti al op e rators on $(\L a mbda^*T)^{{\m a th r m {S U }}(n )}$ , we observ e th a t ev erye le men t $g$of th en or m alizer $N({\mathrm{ SU }}(n)) $ of${\mathrm{SU} }(n)$ in m a p s $(\Lamb da^* T )^ { {\mathrm{SU}}( n)}$to itself; this def inesa natura l notiono f equival enc e a mon g d i f fe rential opera t o rs. \ [prop:o ddd erivati on\ ] L et$f$ b e a deriv ation of $ (\ La mb da^ *T)^{ { \mathrm{ SU }}( n) }$; then $f$ is a di ffer en ti a l o perator th a t ext en dto a de ri vatio n of deg ree one on $\Lam bda ^ *T$if a nd only if one of th efollowingho lds : - $ f(\alpha )=0$, $f(F)=2\lambda\al p ha\wedg e F $, $f (\Om ega)=n(\l amb da-\mu i) \ alpha\ wedge\ Omega $; - $f(\ a l ph a)= \l ambda F$,$ f (F) =0$,$f (\Om ega)=-\ mu i\alpha\wedge\O m ega $; - $n=2$ , a nd u p to $N ( {\ m ath rm { SU} } ( 2))$ action, $\ tilde f$ h as th e form (A) or(B ); - $n=3$, $f(\ a lpha)=0 $, $f(F)= 3\lambda\ Om ega^ - - 3 \mu\Omega^ +$, $f(\ Omega)=2( \ lambd a +i \mu)F ^2$ ; her e$\l ambda $ and$ \mu $ are realco nstant s.
, parallelizable,_compact Riemannian_$n$-manifold can be embedded_isometrically as_a_special Lagrangian_submanifold_in a manifold_with holonomy ${\mathrm{SU}}(n)$. Notice_that the assumption of_real analyticity refers_not_only to the manifold, but to the structure as well. $\alpha$-Einstein-Sasaki geometry and hypersurfaces {#sec:go} =================================================== In_this_section we_classify_the_constant intrinsic torsion geometries for_the group ${\mathrm{SU}}(n)\subset {\mathrm{O}}(2n+1)$, and_write down_evolution equations for hypersurfaces which are orthogonal to_the_characteristic direction, in_analogy with SectionΒ \[sec:hypo\]. Let $T={\mathbb{R}}^{2n+1}$. The space $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ is spanned_by the forms $\alpha$, $F$ and_$\Omega^\pm$, defined in._In_order_to classify the differential_operators on $(\Lambda^*T)^{{\mathrm{SU}}(n)}$, we observe that_every element $g$ of the normalizer_$N({\mathrm{SU}}(n))$ of ${\mathrm{SU}}(n)$ in maps $(\Lambda^*T)^{{\mathrm{SU}}(n)}$ to_itself; this defines a natural notion_of equivalence among differential operators. \[prop:oddderivation\]_Let $f$_be a derivation of $(\Lambda^*T)^{{\mathrm{SU}}(n)}$;_then $f$ is_a differential_operator that extend_to a derivation of degree one_on $\Lambda^*T$ if_and only if one of the_following_holds: - _$f(\alpha)=0$,_$f(F)=2\lambda\alpha\wedge_F$, $f(\Omega)=n(\lambda-\mu_i)\alpha\wedge\Omega$; - _$f(\alpha)=\lambda_F$, $f(F)=0$,_$f(\Omega)=-\mu_i\alpha\wedge\Omega$; - $n=2$, and up_to_$N({\mathrm{SU}}(2))$ action, $\tilde f$ has the form_(A) or (B); - __$n=3$, $f(\alpha)=0$, $f(F)=3\lambda\Omega^- -_3\mu\Omega^+$, $f(\Omega)=2(\lambda+i\mu)F^2$; here $\lambda$ and $\mu$_are real constants.
}(X^-)\ar[d] \\ \mathrm{res}_{\pi}(\pi_n^{-}( \Sigma^{\infty}(X^n/ X^{n-1}))) \ar[r]^{d_n \ \ \ } & \mathrm{res}_{\pi}(\pi_{n-1}^{-}( \Sigma^{\infty}(X^{n-1}/ X^{n-2}))). }$$ Using the adjointness of $\mathrm{res}_{\pi}$ and $\mathrm{ind}_{\pi}$ and $\mathrm{ind}_{\pi}(\mathbb{Z}[-,G/K])=\mathbb{Z}^{G}[-,K]$, we conclude that the chain complex obtained from $ \Sigma^{\infty}X_+$ by applying the methods from Subsection 4.1 to the stable cofiber sequences obtained by applying $\Sigma^{\infty}$ to the homotopy cofiber sequences of $X$, coincides with the chain complex obtained by applying the induction functor $\mathrm{ind}_{\pi}$ to the cellular chain complex $C_{\ast}(X^-)$ of $X$. The above discussion shows that the suspension spectrum functor is a geometric analog of the induction functor $\mathrm{ind}_{\pi}$. This indicates that there should be an algebraic version of Proposition \[prop: susp model\]. Indeed, [@MartinezNucinkis06 Th. 3.8] shows that if $P_{\ast}$ is a projective resolution of $\underline{\mathbb{Z}}$, then $\mathrm{ind}_{\pi}(P_{\ast})$ is a projective resolution of $\underline{A}$. Below we show that the converse of the latter is also true. The proof requires the following lemma and uses notation and isomorphisms from Section 2. \[lemma: tensor zero\] For any right ${\mathcal{O}_{{\mathcal {F}}}G}$-module $M$ and any $K \in \mathcal{F}$, $\mathrm{ind}_{\pi}(M)(G/K)=0$ implies that $M(G/K)=0$. Fix $K \in \mathcal{F}$ and let $\mathrm{ind}_K^G$ denote the induction functor from the category of covariant Mackey functors for $K$ to the category of covariant Mackey functors for $G$, associated to the inclusion $i_K^G$ of $K$ into $G$. Then $$\mathrm{ind}_K^G(\mathbb{Z}^K[K,\pi_K(-)])\cong \mathbb{Z
} (X^-)\ar[d ] \\ \mathrm{res}_{\pi}(\pi_n^{- } (\Sigma^{\infty}(X^n/ X^{n-1 }) )) \ar[r]^{d_n \ \ \ } & \mathrm{res}_{\pi}(\pi_{n-1}^{- } (\Sigma^{\infty}(X^{n-1}/ X^{n-2 }) )). } $ $ Using the adjointness of $ \mathrm{res}_{\pi}$ and $ \mathrm{ind}_{\pi}$ and $ \mathrm{ind}_{\pi}(\mathbb{Z}[-,G / K])=\mathbb{Z}^{G}[-,K]$, we conclude that the chain complex prevail from $ \Sigma^{\infty}X_+$ by enforce the methods from Subsection 4.1 to the static cofiber sequences receive by applying $ \Sigma^{\infty}$ to the homotopy cofiber sequence of $ X$, coincides with the chain building complex prevail by applying the trigger functor $ \mathrm{ind}_{\pi}$ to the cellular chain complex $ C_{\ast}(X^-)$ of $ X$. The above discussion testify that the suspension spectrum functor is a geometric analog of the induction functor $ \mathrm{ind}_{\pi}$. This indicates that there should be an algebraic version of Proposition \[prop: susp model\ ]. Indeed, [ @MartinezNucinkis06 Th. 3.8 ] shows that if $ P_{\ast}$ is a projective resolving power of $ \underline{\mathbb{Z}}$, then $ \mathrm{ind}_{\pi}(P_{\ast})$ is a projective resolution of $ \underline{A}$. Below we show that the converse of the latter is also on-key. The proof requires the following lemma and uses note and isomorphisms from Section 2. \[lemma: tensor zero\ ] For any correct $ { \mathcal{O}_{{\mathcal { F}}}G}$-module $ M$ and any $ K \in \mathcal{F}$, $ \mathrm{ind}_{\pi}(M)(G / K)=0 $ implies that $ M(G / K)=0$. Fix $ K \in \mathcal{F}$ and let $ \mathrm{ind}_K^G$ denote the evocation functor from the category of covariant Mackey functors for $ K$ to the category of covariant Mackey functors for $ G$, associated to the inclusion $ i_K^G$ of $ K$ into $ G$. Then $ $ \mathrm{ind}_K^G(\mathbb{Z}^K[K,\pi_K(-)])\cong \mathbb{Z
}(X^-)\ar[f] \\ \mathrm{res}_{\pi}(\pi_n^{-}( \Sigma^{\innty}(X^n/ X^{n-1}))) \ar[r]^{d_n \ \ \ } & \matirm{res}_{\pj}(\pi_{n-1}^{-}( \Sigoa^{\infty}(X^{n-1}/ X^{n-2}))). }$$ Using the adjomntnwss od $\mathrm{res}_{\pi}$ and $\mathfm{ind}_{\pi}$ ajd $\mathrn{ind}_{\ki}(\mathbb{Z}[-,G/K])=\mathbb{V}^{F}[-,K]$, we concluds thac vhe chain complgx obtained xrom $ \Sigma^{\infdy}B_+$ yy applying the methods from Subsectyon 4.1 to tje stable cofifer xqquehbew obtained by applying $\Sigma^{\ihfty}$ to the homotopy cofiber sequences of $X$, colncifes with the chain complex objzinqe by applyine the indubcion functot $\mathrm{ind}_{\pi}$ to the cellular chxin cpmplex $C_{\asj}(R^-)$ of $F$. The above viscusfion shows tmst the suspenxion spectrum nunctmr us a geometric analog of the induction sunctor $\mdtkrm{ind}_{\pi}$. This indicatws that these sfiula bt ai amgebralc tersion of Lroposition \[prop: susp model\]. Inceqe, [@MartinezNucjnkis06 Eh. 3.8] shows that if $P_{\ast}$ is a projective revolhtion of $\underline{\mathbv{Z}}$, then $\mathrm{ind}_{\pi}(P_{\adt})$ is a pwojective resolution of $\underline{A}$. Below we show dhat vhd cinyersd ov the latter is also true. The proof requires ege fpllowing lemma and uses nptwtojn and isomorkhisms ydoj Section 2. \[lemma: tejsor zeto\] For any righu ${\matncal{O}_{{\mathcal {F}}}G}$-module $M$ and any $K \in \manhcao{F}$, $\mathrm{ind}_{\pi}(M)(G/K)=0$ nmplies that $M(G/K)=0$. Gix $K \in \mathcal{F}$ and let $\machrm{ins}_K^G$ denote hhe inducfkon functor from thv cadegory of covariant Mackey functors for $K$ to tfe cstegorr of covarlant Mackey functors for $G$, asdowiated to hhe inclusion $i_K^G$ of $K$ into $G$. Tixn $$\mathrm{ind}_K^B(\mdthtb{Z}^K[K,\pi_K(-)])\eong \msthbb{Z
}(X^-)\ar[d] \\ \mathrm{res}_{\pi}(\pi_n^{-}( \Sigma^{\infty}(X^n/ X^{n-1}))) \ar[r]^{d_n \ } \mathrm{res}_{\pi}(\pi_{n-1}^{-}( \Sigma^{\infty}(X^{n-1}/ }$$ Using the and we conclude that chain complex obtained $ \Sigma^{\infty}X_+$ by applying the methods Subsection 4.1 to the stable cofiber sequences obtained by applying $\Sigma^{\infty}$ to the cofiber sequences of $X$, coincides with the chain complex obtained by applying the functor to cellular complex $C_{\ast}(X^-)$ of $X$. The above discussion shows that the suspension spectrum functor is a geometric of the induction functor $\mathrm{ind}_{\pi}$. This indicates that should be an algebraic of Proposition \[prop: susp model\]. [@MartinezNucinkis06 3.8] shows if is projective resolution of then $\mathrm{ind}_{\pi}(P_{\ast})$ is a projective resolution of $\underline{A}$. Below we show that the converse of the latter also true. requires the lemma uses and isomorphisms from \[lemma: tensor zero\] For any right and any $K \in \mathcal{F}$, $\mathrm{ind}_{\pi}(M)(G/K)=0$ implies that Fix $K \mathcal{F}$ and let $\mathrm{ind}_K^G$ denote the functor from the category of covariant Mackey functors $K$ to the category of covariant Mackey functors for $G$, associated to the inclusion $i_K^G$ into $G$. Then $$\mathrm{ind}_K^G(\mathbb{Z}^K[K,\pi_K(-)])\cong
}(X^-)\ar[d] \\ \mathrm{res}_{\pi}(\pi_n^{-}( \Sigma^{\inFty}(X^n/ X^{n-1}))) \ar[r]^{D_n \ \ \ } & \matHrm{Res}_{\Pi}(\Pi_{n-1}^{-}( \SIgma^{\Infty}(X^{n-1}/ X^{n-2}))). }$$ Using THe adJointness of $\mathrm{res}_{\pi}$ And $\maThRM{ind}_{\PI}$ aNd $\matHrm{ind}_{\pI}(\MaTHBb{Z}[-,g/K])=\MaThbB{Z}^{g}[-,k]$, wE concLudE that thE chain compLex ObTained from $ \SiGMa^{\Infty}X_+$ by apPlyIng the methodS frOm SubsEcTioN 4.1 To the StaBle coFiber sEQuenceS obtained By APplyinG $\sigma^{\inFTY}$ tO the Homotopy cofiber seQUeNCes of $X$, coincideS with tHe CHaIN ComPleX obtained bY aPplyiNG the indUCtION FunCTor $\mathrm{ind}_{\pI}$ to the celluLAr cHain coMpLex $c_{\Ast}(X^-)$ of $x$. The aBoVE diScussion shoWs thAt the suspEnsion SPectrum FUnctor iS a geomEtrIc aNaloG Of ThE inDuCTioN FuNctOR $\maThrm{ind}_{\pI}$. THiS indiCateS THAT theRe sHoulD be an Algebraic versIon Of PrOPosItion \[Prop: sUsp mOdEl\]. IndEed, [@MarTineznuCinkis06 Th. 3.8] shows thAt if $p_{\ast}$ is a prOjeCtIve ReSolutIOn of $\unDerLinE{\mathbb{z}}$, then $\maTHrm{InD}_{\PI}(p_{\aSt})$ is a projective resOlUTIoN of $\underLine{A}$. BELoW wE Show that ThE coNverSE Of the LattER iS also truE. The prOOf ReQuires tHe FollowInG leMma And usES notAtion aNd isomorPhismS From Section 2. \[lemMA: tensor zero\] FoR AnY RIgHT ${\matHcaL{O}_{{\mathcal {F}}}G}$-ModuLE $M$ anD any $k \In \MatHCal{F}$, $\mAthrm{InD}_{\Pi}(m)(g/K)=0$ implies that $M(G/K)=0$. Fix $k \iN \mathcAl{F}$ anD let $\mathrm{ind}_k^G$ denote thE INDuction fUnctOR fROm the category oF covaRiant MackeY Functors For $K$ tO the cateGory of covARIant MackEy fUncTorS foR $g$, AsSociated to the INClusIoN $i_K^G$ of $K$ IntO $G$. Then $$\mAthRm{iNd}_K^g(\maThBb{Z}^K[K,\pi_K(-)])\cOng \mathbB{Z
}(X^-)\ar[d] \\ \mathrm{re s}_{\pi}(\ pi_n^ {-} ( \ Si gma^ {\in fty}(X^n/ X^{n - 1})) ) \ar[r]^{d_n \ \ \ }& \ma th r m{re s }_ {\pi} (\pi_{n - 1} ^ { -}( \ Si gma ^{ \ in fty}( X^{ n-1}/ X ^{n-2}))). }$ $Using the ad j oi ntness of$\m athrm{res}_{ \pi }$ and $ \ma t hrm{i nd} _{\pi }$ and $\math rm{ind}_{ \p i }(\mat h bb{Z}[- , G /K ])=\ mathbb{Z}^{G}[-,K ] $, we conclude th at the c h ai n com ple x obtained f rom $ \Sigma^ { \i n f t y}X _ +$ by applyin g the metho d s f rom Su bs ect i on 4.1 to t he sta ble cofiber seq uences ob tained by appl y ing $\S igma^{ \in fty }$ t o t he ho mo t opy co fib e r s equences o f$X$,coin c i d e s wi ththechain complex obta ine d by app lying theindu ct ion f unctor $\ma th rm{ind}_{\pi}$to t he cellul arch ain c omple x $C_{\ ast }(X ^-)$ of $X$. T heab o v e d iscussion shows th at t he suspens ion sp e ct ru m functor i s a geo m e tricanal o gof the i nducti o nfu nctor $ \m athrm{ in d}_ {\p i}$.T hisindica tes that ther e should be ana lgebraic vers i on o fP ropo sit ion \[prop: sus p mod el\] . I nde e d, [@ Marti ne z Nu c inkis06 Th. 3.8] sh ow s that if $ P_{\ast}$ isa projecti v e resoluti on o f $ \ underline{\mat hbb{Z }}$, then$ \mathrm{ ind}_ {\pi}(P_ {\ast})$i s a proje cti veres olu t i on of $\underli n e {A}$ .Below w e s how tha t t hecon ver se of the l atter is a ls otr ue. Thep roof req ui res t hefollo w ing le mma a nd u se sn ota tion an d i s o morp hi sm s fr omSe ction 2.\[l emma: t ensor zer o\] Foran yright $ {\mathcal{O}_ {{ \mathcal { F} }}G }$-mod u l e $M$ an d any $K \in \mathcal{F } $, $\ma thr m{ind }_{\ pi}(M)(G/ K)= 0$ imp lie s that$M(G/K )=0$. Fix $ K \in \ ma thc al {F}$ and l e t $\ mathr m{ ind} _K^G$ d enote the inductio n fu nctor from th e c ateg o r yofc ov a ria nt Mac k e y functors for$K$ to the c a te gory of co v ari an t Macke y funct ors f o r $G$,associate d to thein clus i o n $ i_K^G$ of$K$ into $G$. The n $$\m a th rm{in d}_ K^G(\m at hbb {Z}^K [K,\pi _ K(- )])\c ong \m at hbb{Z
}(X^-)\ar[d] \\ \mathrm{res}_{\pi}(\pi_n^{-}(_\Sigma^{\infty}(X^n/ X^{n-1})))_\ar[r]^{d_n \ \ \_} &_\mathrm{res}_{\pi}(\pi_{n-1}^{-}(_\Sigma^{\infty}(X^{n-1}/ X^{n-2})))._}$$_Using the adjointness_of $\mathrm{res}_{\pi}$ and_$\mathrm{ind}_{\pi}$ and $\mathrm{ind}_{\pi}(\mathbb{Z}[-,G/K])=\mathbb{Z}^{G}[-,K]$, we_conclude that the_chain_complex obtained from $ \Sigma^{\infty}X_+$ by applying the methods from Subsection 4.1 to the_stable_cofiber sequences_obtained_by_applying $\Sigma^{\infty}$ to the homotopy_cofiber sequences of $X$, coincides_with the_chain complex obtained by applying the induction functor_$\mathrm{ind}_{\pi}$_to the cellular_chain complex $C_{\ast}(X^-)$ of $X$. The above discussion shows that_the suspension spectrum functor is a_geometric analog of_the_induction_functor $\mathrm{ind}_{\pi}$. This indicates_that there should be an algebraic_version of Proposition \[prop: susp model\]._Indeed, [@MartinezNucinkis06 Th. 3.8] shows that if_$P_{\ast}$ is a projective resolution of_$\underline{\mathbb{Z}}$, then $\mathrm{ind}_{\pi}(P_{\ast})$ is a_projective resolution_of $\underline{A}$. Below we show_that the converse_of the_latter is also_true. The proof requires the following_lemma and uses_notation and isomorphisms from Section 2. \[lemma:_tensor_zero\] For any_right_${\mathcal{O}_{{\mathcal_{F}}}G}$-module $M$_and any $K_\in_\mathcal{F}$, $\mathrm{ind}_{\pi}(M)(G/K)=0$_implies_that $M(G/K)=0$. Fix $K \in \mathcal{F}$ and_let_$\mathrm{ind}_K^G$ denote the induction functor from the_category of covariant Mackey_functors_for $K$ to the_category of covariant Mackey functors_for $G$, associated to the inclusion_$i_K^G$ of_$K$ into_$G$. Then $$\mathrm{ind}_K^G(\mathbb{Z}^K[K,\pi_K(-)])\cong \mathbb{Z
Hodge, P.W. 1961,, 66, 83 Ibata, R., Gilmore, G., &Β Irwin, M. 1994,, 370, 194 Kodama, T. &Β Bower, R.G. 2001,, 321, 18 Karachentsev, [[*etΒ al.*]{}]{}Β 2003,, 398, 479 McLaughlin, D.E. 1999,, 117, 2398 Meurer, G.R., Mackie, G., &Β Carignan, C. 1994,, 107, 2021 (MMC94) Meurer, G.R., Carignan, C., Beaulieu, S., &Β Freeman, K.C. 1996,, 111, 1551 (MCBF96) Meylan, G., Sarajedeni, A., Jablonka, P., Djorgovski, S.G., Bridges, T., &Β Rich, R.M. 2001,, 122, 830 Γ–stlin, G., Bergvall, N., &Β RΓΆnnback, J. 1998,, 335, 85 Pritchet, C.J., &Β van den Bergh, S. 1984,, 96, 804 Secker, J. 1992,, 104, 1472 Schlegel, D.J., Finkbeiner, D.P., &Β Davis, M. 1998,, 500, 525 [lccl]{} Β  & $0.28 \pm 0.04$ & mag & foreground extinction\ $D$ & $4.1 \pm 0.3$ & Mpc & Distance\ ${\cal M}_g$ & $7.4 \times 10^8$ & & ISM mass\ ${\cal M}_\star$ & $3.2 \times 10^8$ & & Mass in stars\ ${\cal M}_T$ & $2.1 \times 10^{10}$ & & Total dynamical mass\ $M_V$ & –16.42 & mag & Absolute mag $V$ band\ $L_B$ & $3.4 \times 10^8$ & $L_{B,\odot}$ & $B$ band luminosity\ Β  & 62 & solar & Mass to light ratio\ [l
Hodge, P.W. 1961, , 66, 83 Ibata, R., Gilmore, G., & Β  Irwin, M. 1994, , 370, 194 Kodama, T. & Β  Bower, R.G. 2001, , 321, 18 Karachentsev, [ [ * et Β  al. * ] { } ] { } Β  2003, , 398, 479 McLaughlin, D.E. 1999, , 117, 2398 Meurer, G.R., Mackie, G., & Β  Carignan, C. 1994, , 107, 2021 (MMC94) Meurer, G.R., Carignan, C., Beaulieu, S., & Β  Freeman, K.C. 1996, , 111, 1551 (MCBF96) Meylan, G., Sarajedeni, A., Jablonka, P., Djorgovski, S.G., Bridges, T., & Β  Rich, R.M. 2001, , 122, 830 Γ–stlin, G., Bergvall, N., & Β  RΓΆnnback, J. 1998, , 335, 85 Pritchet, C.J., & Β  van den Bergh, S. 1984, , 96, 804 Secker, J. 1992, , 104, 1472 Schlegel, D.J., Finkbeiner, D.P., & Β  Davis, M. 1998, , 500, 525 [ lccl ] { } Β  & $ 0.28 \pm 0.04 $ & mag & foreground extinction\ $ D$ & $ 4.1 \pm 0.3 $ & Mpc & Distance\ $ { \cal M}_g$ & $ 7.4 \times 10 ^ 8 $ & & ISM mass\ $ { \cal M}_\star$ & $ 3.2 \times 10 ^ 8 $ & & Mass in stars\ $ { \cal M}_T$ & $ 2.1 \times 10^{10}$ & & Total dynamical mass\ $ M_V$ & – 16.42 & mag & Absolute mag $ V$ band\ $ L_B$ & $ 3.4 \times 10 ^ 8 $ & $ L_{B,\odot}$ & $ B$ band luminosity\ Β  & 62 & solar & Mass to light ratio\ [ l
Hodhe, P.W. 1961,, 66, 83 Ibata, R., Gilmore, N., &Β Irwin, M. 1994,, 370, 194 Kodama, T. &Β Boxer, R.G. 2001,, 321, 18 Karachdntsev, [[*etΒ al.*]{}]{}Β 2003,, 398, 479 McLaughlin, D.E. 1999,, 117, 2398 Mwurer, G.R., Mackie, G., &Β Carignan, Z. 1994,, 107, 2021 (MMC94) Mvurer, G.R., Xarijnan, C., Beaulieu, S., &Β Freemak, K.C. 1996,, 111, 1551 (MCYF96) Neylan, G., Sarajgdeni, A., Jablmnka, P., Djorgovvkk, D.G., Bridges, T., &Β Rich, R.M. 2001,, 122, 830 Γ–stlin, G., Bergdall, N., &Β TΓΆnjback, J. 1998,, 335, 85 Pritshet, S.J., &Β vzn den Bergh, S. 1984,, 96, 804 Secker, J. 1992,, 104, 1472 Schlegsl, D.J., Fpnkbeiner, D.P., &Β Davix, M. 1998,, 500, 525 [lccl]{} Β  & $0.28 \pm 0.04$ & mag & fogegrlund extinction\ $D$ & $4.1 \pm 0.3$ & Mpc & Eistwbce\ ${\cal M}_g$ & $7.4 \gimes 10^8$ & & ISM mass\ ${\cal M}_\atar$ & $3.2 \times 10^8$ & & Mass in stars\ ${\cau M}_T$ & $2.1 \times 10^{10}$ & & Titap dynamical nass\ $M_N$ & –16.42 & mag & Absolute mac $V$ banc\ $L_B$ & $3.4 \times 10^8$ & $K_{B,\ovot}$ & $B$ band luminosity\ Β  & 62 & solar & Mass to lidht ratio\ [n
Hodge, P.W. 1961,, 66, 83 Ibata, R., & M. 1994,, 194 Kodama, T. 18 [[*et al.*]{}]{} 2003,, 479 McLaughlin, D.E. 117, 2398 Meurer, G.R., Mackie, G., Carignan, C. 1994,, 107, 2021 (MMC94) Meurer, G.R., Carignan, C., Beaulieu, S., & K.C. 1996,, 111, 1551 (MCBF96) Meylan, G., Sarajedeni, A., Jablonka, P., Djorgovski, S.G., T., Rich, 2001,, 830 Γ–stlin, G., Bergvall, N., & RΓΆnnback, J. 1998,, 335, 85 Pritchet, C.J., & van den S. 1984,, 96, 804 Secker, J. 1992,, 104, Schlegel, D.J., Finkbeiner, D.P., Davis, M. 1998,, 500, 525 & \pm 0.04$ mag foreground $D$ & $4.1 0.3$ & Mpc & Distance\ ${\cal M}_g$ & $7.4 \times 10^8$ & & ISM mass\ ${\cal M}_\star$ $3.2 \times & Mass stars\ M}_T$ $2.1 \times 10^{10}$ Total dynamical mass\ $M_V$ & –16.42 Absolute mag $V$ band\ $L_B$ & $3.4 \times & $L_{B,\odot}$ $B$ band luminosity\ & 62 & & Mass to light ratio\ [l
Hodge, P.W. 1961,, 66, 83 Ibata, R., Gilmore, G., &Β IrwiN, M. 1994,, 370, 194 Kodama, T. &Β BOwer, R.g. 2001,, 321, 18 KaRacHeNtseV, [[*etΒ aL.*]{}]{}Β 2003,, 398, 479 McLaughlin, D.E. 1999,, 117, 2398 MEUrer, g.R., Mackie, G., &Β Carignan, C. 1994,, 107, 2021 (MMC94) MEurer, g.R., cArigNAn, c., BeauLieu, S., &Β FrEEmAN, k.C. 1996,, 111, 1551 (McBf96) MEylAn, g., saRajedEni, a., JablonKa, P., DjorgovSki, s.G., bridges, T., &Β Rich, r.m. 2001,, 122, 830 Γ–Stlin, G., BergValL, N., &Β RΓΆnnback, J. 1998,, 335, 85 PRitChet, C.J., &Β VaN deN bergh, s. 1984,, 96, 804 SeCker, J. 1992,, 104, 1472 schlegEL, D.J., FinKbeiner, D.P., &Β daVIs, M. 1998,, 500, 525 [lccL]{} Β  & $0.28 \Pm 0.04$ & mag & foREGrOund Extinction\ $D$ & $4.1 \pm 0.3$ & Mpc & DIStANce\ ${\cal M}_g$ & $7.4 \times 10^8$ & & IsM mass\ ${\CaL m}_\sTAR$ & $3.2 \tiMes 10^8$ & & mass in starS\ ${\cAl M}_T$ & $2.1 \tIMes 10^{10}$ & & TotaL DyNAMIcaL Mass\ $M_V$ & –16.42 & mag & AbsoLute mag $V$ banD\ $l_B$ & $3.4 \tImes 10^8$ & $L_{B,\OdOt}$ & $B$ BAnd lumInosiTy\ Β  & 62 & SOlaR & Mass to lighT ratIo\ [l
Hodge, P.W. 1961,, 66, 8 3 Ibata,R., G ilm ore ,G.,&Β Ir win, M. 1994,, 370, 194 Kodama, T. &Β Bow er, R .G . 200 1 ,, 321, 18 Ka r ac h e nts ev ,[[* et al .*]{} ]{} Β 2003,, 398, 479 Mc La ughlin, D.E. 19 99,, 117,239 8 Meurer, G .R. , Mack ie , G . , &Β C ari gnan, C. 19 9 4,, 10 7, 2021 ( MM C 94) M e urer, G . R ., Car ignan, C., Beauli e u, S., &Β Freeman, K.C.19 9 6, , 111 , 1 551 (MCBF9 6) Mey l an, G., Sa r a j ede n i, A., Jablon ka, P., Djo r gov ski, S .G .,B ridges , T., & Ric h, R.M. 200 1,,122, 830 Γ–stli n , G., B e rgvall, N., & Β RΓΆ nnb ack, J. 1 998 ,, 335 , 8 5 P rit chet, C. J. ,&Β van den B e r gh,S.1984 ,, 96 , 804 Secker , J . 19 9 2,, 104, 1472 Sc hl egel, D.J., Fink be iner, D.P., &Β D avis , M. 1998 ,,50 0,52 5 [l c cl]{}Β  & $0 .28 \pm 0.04$& ma g& f or eground extinction \$ D $& $4.1 \ pm 0.3 $ & M p c & Dist an ce\ ${\ c a l M}_ g$ & $7 .4 \time s 10^8 $ & & ISM ma ss \ ${\c al M} _\s tar$& $3. 2 \tim es 10^8$ & &M ass in stars\$ {\cal M}_T$ & $2 . 1 \ t imes 10 ^{10}$ & &Tota l dyn amic a lmas s \ $M_ V$ &–1 6 .4 2 & mag & Absolute m ag $V$ b and\$L_B$ & $3.4\times 10^ 8 $ & $L_{B, \odo t }$ & $B$ band lum inosi ty\ Β  & 62 & solar& Mas s to lig ht ratio\ [l
Hodge, P.W._1961,, 66,_83 Ibata, R., Gilmore, G.,_&Β Irwin, M._1994,,_370, 194 Kodama,_T._&Β Bower, R.G. 2001,,_321, 18 Karachentsev, [[*etΒ al.*]{}]{}Β 2003,,_398, 479 McLaughlin, D.E. 1999,,_117, 2398 Meurer, G.R.,_Mackie,_G., &Β Carignan, C. 1994,, 107, 2021 (MMC94) Meurer, G.R., Carignan, C., Beaulieu, S., &Β Freeman, K.C._1996,,_111, 1551_(MCBF96) Meylan,_G.,_Sarajedeni, A., Jablonka, P., Djorgovski,_S.G., Bridges, T., &Β Rich, R.M._2001,, 122,_830 Γ–stlin, G., Bergvall, N., &Β RΓΆnnback, J. 1998,, 335,_85 Pritchet,_C.J., &Β van den_Bergh, S. 1984,, 96, 804 Secker, J. 1992,, 104, 1472 Schlegel,_D.J., Finkbeiner, D.P., &Β Davis, M. 1998,,_500, 525 [lccl]{} Β _&_$0.28_\pm 0.04$ & mag_& foreground extinction\ $D$ & $4.1 \pm_0.3$ & Mpc & Distance\ ${\cal M}_g$_& $7.4 \times 10^8$ & & ISM_mass\ ${\cal M}_\star$ & $3.2 \times 10^8$_& & Mass in stars\ ${\cal_M}_T$ &_$2.1 \times 10^{10}$ & &_Total dynamical mass\ $M_V$_& –16.42_& mag &_Absolute mag $V$ band\ $L_B$ & $3.4_\times 10^8$ &_$L_{B,\odot}$ & $B$ band luminosity\ Β  &_62_& solar &_Mass_to_light ratio\ [l
gamma=-\kappa.$$ Making use of the continuity method, one can easily prove that the solvability of this equation is equivalent to the one of $$\int_{X}\langle\kappa,\vartheta\rangle_{H}\frac{\omega^{n}}{n!}=0,$$ for any $\vartheta\in\Gamma(X,E)$ satisfying $D^{''}_{E}\vartheta=D_{H}^{'}\vartheta=0$. By the assumption $\int_{X}\partial [\eta]\wedge\frac{\omega^{n-1}}{(n-1)!}=0$ for any Dolbeault class $[\eta]\in H^{0,1}(X)$, we know $$\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta,\vartheta\rangle_{H}\frac{\omega^{n}}{n!} =\int_{X}\sqrt{-1}\partial\langle\beta^{0,1},\vartheta\rangle_{H}\wedge\frac{\omega^{n-1}}{(n-1)!} =0.$$ Suppose $\gamma\in\Gamma(X,E)$ is a solution of $$\sqrt{-1}\Lambda_{\omega}D^{'}_{H}D^{''}_{E}\gamma=-\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta.$$ Let $\tilde{\beta}=\beta+D^{''}_{E}\gamma$, then $\sqrt{-1}\Lambda_{\omega}D_{H}^{'}\tilde{\beta}=0$. According to (\[z:1\]), one can easily check that $$\sqrt{-1}[\Lambda_{\omega},D_{E}^{''}]=(D_{H}^{'})^{*}+\tau^{*},\ \ -\sqrt{-1}[\Lambda_{\omega},D_{H}^{'}]=(D_{E}^{''})^{*}+\bar{\tau}^{*}.$$ A simple computation gives $$\label{eq:61} \begin{split} 0&=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},F_{H,\theta}]\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{''}_{E}D^{'}_{H}\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},D^{
gamma=-\kappa.$$ Making use of the continuity method, one can easily rise that the solvability of this equality is equivalent to the one of $ $ \int_{X}\langle\kappa,\vartheta\rangle_{H}\frac{\omega^{n}}{n!}=0,$$ for any $ \vartheta\in\Gamma(X, E)$ satisfying $ D^{''}_{E}\vartheta = D_{H}^{'}\vartheta=0$. By the assumption $ \int_{X}\partial [ \eta]\wedge\frac{\omega^{n-1}}{(n-1)!}=0 $ for any Dolbeault course $ [ \eta]\in H^{0,1}(X)$, we know $ $ \int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta,\vartheta\rangle_{H}\frac{\omega^{n}}{n! } = \int_{X}\sqrt{-1}\partial\langle\beta^{0,1},\vartheta\rangle_{H}\wedge\frac{\omega^{n-1}}{(n-1)! } = 0.$$ presuppose $ \gamma\in\Gamma(X, E)$ is a solution of $ $ \sqrt{-1}\Lambda_{\omega}D^{'}_{H}D^{''}_{E}\gamma=-\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta.$$ Let $ \tilde{\beta}=\beta+D^{''}_{E}\gamma$, then $ \sqrt{-1}\Lambda_{\omega}D_{H}^{'}\tilde{\beta}=0$. According to (\[z:1\ ]), one can well discipline that $ $ \sqrt{-1}[\Lambda_{\omega},D_{E}^{''}]=(D_{H}^{'})^{*}+\tau^{*},\ \ -\sqrt{-1}[\Lambda_{\omega},D_{H}^{'}]=(D_{E}^{''})^{*}+\bar{\tau}^{*}.$$ A simple calculation gives $ $ \label{eq:61 } \begin{split } 0&=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},F_{H,\theta}]\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ & = \int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{''}_{E}D^{'}_{H}\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ & = \int_{X}\langle\sqrt{-1}[\Lambda_{\omega},D^ {
gamla=-\kappa.$$ Making use of tht continuity method, one ran easjly provd that the solvability of thms ewuatiin is equivalent to thd one of $$\pnt_{X}\langlw\kapka,\vartheta\rangle_{H}\hdac{\omega^{n}}{n!}=0,$$ fod any $\tartheta\in\Gamma(W,E)$ satisfyitg $D^{''}_{E}\vartheta=D_{V}^{'}\vxrcheta=0$. By the assumption $\int_{X}\partial [\qta]\wedgr\fgac{\omega^{n-1}}{(n-1)!}=0$ for any Qolbsault class $[\eta]\in H^{0,1}(X)$, we know $$\int_{X}\lzngle\sqgt{-1}\Lambda_{\omega}D^{'}_{H}\beya,\vartheta\rangle_{H}\frac{\omega^{j}}{n!} =\inh_{X}\sqrt{-1}\partial\langlf\beta^{0,1},\varthejz\ragtle_{H}\wedge\fraz{\omega^{n-1}}{(n-1)!} =0.$$ Sl'pose $\gamma\jn\Gamma(X,E)$ is a solution of $$\sqrt{-1}\Uambdc_{\omega}D^{'}_{H}D^{''}_{E}\gqmna=-\sett{-1}\Lambda_{\omege}D^{'}_{H}\betw.$$ Let $\tilde{\bcna}=\beta+D^{''}_{A}\gamma$, yhen $\sqrt{-1}\Lambds_{\omxga}D_{Y}^{'}\tilde{\beta}=0$. According vo (\[z:1\]), one can easily sheck thad $$\aqrt{-1}[\Lambda_{\omega},D_{E}^{''}]=(E_{H}^{'})^{*}+\rau^{*},\ \ -\vqrt{-1}[\Nambaq_{\omdga},S_{H}^{'}]=(V_{E}^{''})^{*}+\bzr{\tau}^{*}.$$ W smmple compufation givew $$\label{eq:61} \begin{split} 0&=\one_{Q}\kangle\sqrt{-1}[\Lajbda_{\omqgw},F_{H,\theta}]\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\fgac{\ojega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}\Lamvda_{\omega}D^{''}_{E}D^{'}_{H}\tilde{\beta},\jilde{\beta}\rwngle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},D^{
gamma=-\kappa.$$ Making use of the continuity method, easily that the of this equation of for any $\vartheta\in\Gamma(X,E)$ $D^{''}_{E}\vartheta=D_{H}^{'}\vartheta=0$. By the $\int_{X}\partial [\eta]\wedge\frac{\omega^{n-1}}{(n-1)!}=0$ for any Dolbeault class H^{0,1}(X)$, we know $$\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta,\vartheta\rangle_{H}\frac{\omega^{n}}{n!} =\int_{X}\sqrt{-1}\partial\langle\beta^{0,1},\vartheta\rangle_{H}\wedge\frac{\omega^{n-1}}{(n-1)!} =0.$$ Suppose $\gamma\in\Gamma(X,E)$ is a solution of $$\sqrt{-1}\Lambda_{\omega}D^{'}_{H}D^{''}_{E}\gamma=-\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta.$$ $\tilde{\beta}=\beta+D^{''}_{E}\gamma$, then $\sqrt{-1}\Lambda_{\omega}D_{H}^{'}\tilde{\beta}=0$. According to (\[z:1\]), one can easily check that $$\sqrt{-1}[\Lambda_{\omega},D_{E}^{''}]=(D_{H}^{'})^{*}+\tau^{*},\ \ A computation $$\label{eq:61} 0&=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},F_{H,\theta}]\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{''}_{E}D^{'}_{H}\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},D^{
gamma=-\kappa.$$ Making use of the cOntinuity mEthod, One Can EaSily ProvE that the solvabILity Of this equation is equivaLent tO tHE one OF $$\iNt_{X}\laNgle\kapPA,\vARTheTa\RaNglE_{H}\FRaC{\omegA^{n}}{n!}=0,$$ For any $\vArtheta\in\GAmmA(X,e)$ satisfying $D^{''}_{e}\VaRtheta=D_{H}^{'}\vaRthEta=0$. By the assuMptIon $\int_{x}\pArtIAl [\eta]\WedGe\fraC{\omega^{N-1}}{(N-1)!}=0$ for anY DolbeaulT cLAss $[\eta]\IN H^{0,1}(X)$, we knOW $$\InT_{X}\laNgle\sqrt{-1}\Lambda_{\omeGA}D^{'}_{h}\Beta,\vartheta\raNgle_{H}\fRaC{\OmEGA^{n}}{n!} =\Int_{x}\sqrt{-1}\partiAl\LanglE\Beta^{0,1},\varTHeTA\RAngLE_{H}\wedge\frac{\omEga^{n-1}}{(n-1)!} =0.$$ SupposE $\GamMa\in\GaMmA(X,E)$ IS a soluTion oF $$\sQRt{-1}\LAmbda_{\omega}D^{'}_{h}D^{''}_{E}\gAmma=-\sqrt{-1}\LAmbda_{\oMEga}D^{'}_{H}\beTA.$$ Let $\tilDe{\beta}=\BetA+D^{''}_{E}\GammA$, ThEn $\SqrT{-1}\LAMbdA_{\OmEga}d_{h}^{'}\tiLde{\beta}=0$. ACcOrDing tO (\[z:1\]), onE CAN EasiLy cHeck That $$\sQrt{-1}[\Lambda_{\omegA},D_{E}^{''}]=(d_{H}^{'})^{*}+\taU^{*},\ \ -\SqrT{-1}[\LambDa_{\omeGa},D_{H}^{'}]=(d_{E}^{''})^{*}+\Bar{\taU}^{*}.$$ A simpLe comPuTation gives $$\labeL{eq:61} \bEgin{split} 0&=\Int_{x}\lAngLe\Sqrt{-1}[\LAMbda_{\omEga},f_{H,\tHeta}]\tilDe{\beta},\tILde{\BeTA}\RAnGle_{H,\omega}\frac{\omega^{N}}{n!}\\ &=\INT_{X}\Langle\sqRt{-1}\LambDA_{\oMeGA}D^{''}_{E}D^{'}_{H}\tilDe{\BetA},\tilDE{\Beta}\rAnglE_{h,\oMega}\frac{\Omega^{n}}{N!}\\ &=\InT_{X}\Langle\sQrT{-1}[\LambdA_{\oMegA},D^{
gamma=-\kappa.$$ Making us e of the c ontin uit y m et hod, one can easily pr o ve t hat the solvability of this e q uati o nis eq uivalen t t o the o ne of $ $ \i nt_{X }\l angle\k appa,\vart het a\ rangle_{H}\f r ac {\omega^{n }}{ n!}=0,$$ for an y $\va rt het a \in\G amm a(X,E )$ sat i sfying $D^{''}_ {E } \varth e ta=D_{H } ^ {' }\va rtheta=0$. By the as s umption $\int_ {X}\pa rt i al [ \et a]\ wedge\frac {\ omega ^ {n-1}}{ ( n- 1 ) ! }=0 $ for any Dolb eault class $[\ eta]\i nH^{ 0 ,1}(X) $, we k n ow$$\int_{X}\ lang le\sqrt{- 1}\Lam b da_{\om e ga}D^{' }_{H}\ bet a,\ vart h et a\ ran gl e _{H } \f rac { \om ega^{n}} {n !} =\in t_{X } \ s q rt{- 1}\ part ial\l angle\beta^{0 ,1} ,\va r the ta\ra ngle_ {H}\ we dge\f rac{\o mega^ {n -1}}{(n-1)!} =0 .$$Suppose $ \ga mm a\i n\ Gamma ( X,E)$isa s olution of $$\ s qrt {- 1 } \ La mbda_{\omega}D^{'} _{ H } D^ {''}_{E} \gamma = -\ sq r t{-1}\La mb da_ {\om e g a}D^{ '}_{ H }\ beta.$$Let $\ t il de {\beta} =\ beta+D ^{ ''} _{E }\gam m a$,then $ \sqrt{-1 }\Lam b da_{\omega}D_{ H }^{'}\tilde{\ b et a } =0 $ . Ac cor ding to (\[ z:1\ ] ), o ne c a neas i ly ch eck t ha t $ $ \sqrt{-1}[\Lambda_{ \o mega}, D_{E} ^{''}]=(D_{H} ^{'})^{*}+ \ t a u^{*},\\ -\ s qr t {-1}[\Lambda_{ \omeg a},D_{H}^{ ' }]=(D_{E }^{'' })^{*}+\ bar{\tau} ^ { *}.$$ Asim ple co mpu t a ti on gives $$\l a b el{e q: 61} \be gin {split} 0& =\i nt_ {X} \l angle\sqr t{-1}[\L am bd a_ {\ ome ga},F _ {H,\thet a} ]\t il de{ \beta } ,\tild e{\be ta}\ ra ng l e_{ H,\omeg a }\ f r ac{\ om eg a^{n }}{ n! }\\ & =\in t _{X }\langl e\sqrt{-1 }\L a mbda _{ \o mega}D^ {''}_{E}D^{'} _{ H}\tilde{\ be ta} ,\tild e { \beta}\r angle_{H,\omega}\frac{\ o mega^{n }}{ n!}\\ &=\ int_{X}\l ang le\sqr t{- 1 }[\Lam bda_{\ omega }, D^{
gamma=-\kappa.$$ Making_use of_the continuity method, one_can easily_prove_that the_solvability_of this equation_is equivalent to_the one of $$\int_{X}\langle\kappa,\vartheta\rangle_{H}\frac{\omega^{n}}{n!}=0,$$_for any $\vartheta\in\Gamma(X,E)$_satisfying_$D^{''}_{E}\vartheta=D_{H}^{'}\vartheta=0$. By the assumption $\int_{X}\partial [\eta]\wedge\frac{\omega^{n-1}}{(n-1)!}=0$ for any Dolbeault class $[\eta]\in H^{0,1}(X)$, we know_$$\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta,\vartheta\rangle_{H}\frac{\omega^{n}}{n!} =\int_{X}\sqrt{-1}\partial\langle\beta^{0,1},\vartheta\rangle_{H}\wedge\frac{\omega^{n-1}}{(n-1)!} =0.$$_Suppose $\gamma\in\Gamma(X,E)$_is_a_solution of $$\sqrt{-1}\Lambda_{\omega}D^{'}_{H}D^{''}_{E}\gamma=-\sqrt{-1}\Lambda_{\omega}D^{'}_{H}\beta.$$ Let $\tilde{\beta}=\beta+D^{''}_{E}\gamma$,_then $\sqrt{-1}\Lambda_{\omega}D_{H}^{'}\tilde{\beta}=0$. According to (\[z:1\]),_one can_easily check that $$\sqrt{-1}[\Lambda_{\omega},D_{E}^{''}]=(D_{H}^{'})^{*}+\tau^{*},\ \ -\sqrt{-1}[\Lambda_{\omega},D_{H}^{'}]=(D_{E}^{''})^{*}+\bar{\tau}^{*}.$$ A simple_computation_gives $$\label{eq:61} \begin{split} 0&=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},F_{H,\theta}]\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}\Lambda_{\omega}D^{''}_{E}D^{'}_{H}\tilde{\beta},\tilde{\beta}\rangle_{H,\omega}\frac{\omega^{n}}{n!}\\ &=\int_{X}\langle\sqrt{-1}[\Lambda_{\omega},D^{
To get around these limitations, new, separate options must be defined, increasing the problem’s branching factor, and care must be taken to avoid loops (if so desired). An MMDP coarse action leaves the β€œdirection” of the action undecided: the same fine policy may be executed starting in several bottleneck states, and may take the agent in one of several directions until arriving at one of multiple destinations from which different successor coarse actions may be taken. In the context of MMDPs, if one wanted to be able to transfer policies on the same cluster which guide an agent in different particular directions, separate local policies would need to be stored in the β€œdatabase” of solved tasks. However, we would only need to transfer and plan with [*one*]{} of them. - A strength of the options framework is that multiple related queries, or tasks, may be solved essentially within the same SMDP. However, the tasks must be closely related in specific ways (e.g. tasks differing only in the goal state), and this strength comes at the expense of ignoring problem-specific information when one only wants to solve one problem. Our approach to the construction of MMDPs differs in that while we assume a particular problem when building a decomposition, we are able to consider a broader set of transfer possibilities. - Bottlenecks and partitioning do not explicitly enter the picture in options or SMDPs. Options may be defined on any subset of the statespace, and in applications may often take the form of a macro-action which directs the agent to an intermediate goal state starting from [*any*]{} state in a (possibly large) neighborhood. For example, an option may direct a robot to a hallway from any state in a room. We constrain our β€œinitiation” and β€œtermination” sets to be bottleneck states, however this means that learning policies at coarse scales is fast, and can be carried out completely independent of other scales. Coarse scale learning involves only the bottleneck states, giving a drastically reduced computational complexity. Provided the partitioning of a scale is well chosen, this construction allows one to capitalize on improved mixing times to accelerate convergence. - MMDPs are a representation for MDPs: we cannot solve problems that cannot be phrased as an MDP (i.e. problems whose solutions require non-Markov policies). A policy solving an MMDP, at any scale, is a Markov policy. SMDPs may in general have non-
To get around these limitations, new, separate choice must be define, increasing the problem ’s branching divisor, and care must be taken to debar loops (if so desire). An MMDP coarse action leaves the β€œ commission ” of the action undecided: the like fine policy may be executed starting in several bottleneck states, and may take the agentive role in one of several directions until arriving at one of multiple destination from which different successor coarse actions may be take. In the context of MMDPs, if one desire to be able to transfer policies on the same cluster which guide an agentive role in different particular directions, separate local policies would need to be store in the β€œ database ” of solved tasks. However, we would only need to transfer and plan with [ * one * ] { } of them. - A strength of the options framework is that multiple related queries, or undertaking, may be solved essentially within the like SMDP. However, the undertaking must be close related in specific ways (e.g. tasks differing only in the goal state), and this lastingness comes at the expense of ignoring problem - specific information when one only wants to clear one problem. Our approach to the construction of MMDPs differs in that while we wear a particular trouble when build a decomposition, we are able to consider a broader hardening of transfer possibilities. - Bottlenecks and partitioning do not explicitly record the picture in options or SMDPs. Options may be defined on any subset of the statespace, and in application may often consume the form of a macro - military action which directs the agentive role to an intermediate goal state starting from [ * any * ] { } department of state in a (possibly large) neighborhood. For exercise, an option may address a robot to a hallway from any state in a room. We constrain our β€œ initiation ” and β€œ termination ” sets to be bottleneck states, however this mean that learning policies at coarse scale is fast, and can be carried out completely independent of other scales. Coarse plate learning involves only the bottleneck states, giving a drastically reduced computational complexity. Provided the breakdown of a plate is well choose, this construction allows one to capitalize on improved mixing times to accelerate overlap. - MMDPs are a representation for MDPs: we cannot solve problems that cannot be phrased as an MDP (i.e. problems whose solutions require non - Markov policies). A policy solving an MMDP, at any scale, is a Markov policy. SMDPs may in cosmopolitan have non-
To get around these limitauions, new, separatg iptionv must be defived, increasing the problem’s uranxhing factor, and care must ce taken no avoid ooopw (if so desmded). An MMDP ckwrse ection leaves tme β€œdirectiot” of the actiot jnbecided: the same fine policy may be qxecutec dtarting in sederak botfleneck states, and may take the agsnt in mne of severak directions until arrivinh at one of multiple dfstinations froi which diffefent successor coarse zctions may be taken. In the congext pf MMDPs, id ine fanted to bx able to transfer policiev on thr same cluster whmch tuide an agent in difherent particular ditections, sapcrate local policies qoyld nged tm be wtofed ii tge β€œdatwbaae” of solvsd tasks. Hoqever, we would only nqvc to transfed and [lwn with [*one*]{} of them. - A strength of the opfions framework is that multiple related queties, or tafks, may be solved essentially within the same SMD[. Howxvdr, uhc gqsns must be closely related in specific ways (e.d. tssls differing okly in the goal ststf), sgd this streneth comes at the expense of ignorigg priblem-specyfic information when one only qants to solne obe problem. Our appxoach to the conxtrucyion of MMDPs differs iu that while we adsume a pzfticular problem whvn bgilding a decomposition, we are able to eonsider a btoader fet of trajsfer possibilities. - Bohtlengcks atd partitilning do not explicitly enter tix picture in ppdiots or SMBPs. Opbions may be desined on any sobset of che stxtespace, ahd in a'plications iay often taka the form of a macro-wctiin wyich difdcts the agent to an innexmediate toal state startinn froo [*any*]{} state in a (powsibly large) neoghcorrold. Fjs example, an optkon kay dkrect a robig to a hallway from any vtats in a room. We conxtvain our β€œunitiatijn” and β€œterminstion” sets to be blttleieck svates, nowgver this means that learning pklicies ah cjarse scales is nast, and can bz carried out completely independent of ither scales. Coarsw scale learning inrokves only tie boteleneck sdates, giving a drastucally reduced cokputational complexitg. Provhded hhe partitioning of a scale is well chosen, this construction allows one ti capivajize on impdovec mixpng cimes tj acreuerate convergenbe. - MMDPs are a representation hor MDPs: wa eannot solve problems that camnut be phrased as an MDP (i.e. problems wgose solitions require non-Markov policirs). A policy solving an MKDP, at aby sczle, is a Merkov policy. SMDPs msy in yenerao have bon-
To get around these limitations, new, separate be increasing the branching factor, and avoid (if so desired). MMDP coarse action the β€œdirection” of the action undecided: same fine policy may be executed starting in several bottleneck states, and may the agent in one of several directions until arriving at one of multiple from different coarse may be taken. In the context of MMDPs, if one wanted to be able to transfer on the same cluster which guide an agent different particular directions, separate policies would need to be in β€œdatabase” of tasks. we only need to and plan with [*one*]{} of them. - A strength of the options framework is that multiple related or tasks, solved essentially the SMDP. the tasks must related in specific ways (e.g. tasks the goal state), and this strength comes at expense of problem-specific information when one only wants solve one problem. Our approach to the construction MMDPs differs in that while we assume a particular problem when building a decomposition, we to consider a broader of transfer possibilities. Bottlenecks partitioning not enter the in options or SMDPs. Options may be defined on any subset the statespace, and in applications may often take the form macro-action directs the agent an intermediate goal state from state in a (possibly For an a to hallway from any state a room. We constrain our and β€œtermination” sets to means that learning policies at coarse scales is and can be carried out completely independent other scales. Coarse scale learning involves only the bottleneck states, giving a reduced computational the partitioning of a scale is well chosen, construction allows one to on improved mixing times to accelerate convergence. - MMDPs a for MDPs: cannot solve problems cannot be phrased an MDP (i.e. solutions require policies). policy at any scale, is a Markov SMDPs in general have non-
To get around these limitatioNs, new, separAte opTioNs mUsT be dEfinEd, increasing thE ProbLem’s branching factor, and Care mUsT Be taKEn To avoId loops (IF sO DEsiReD). AN MMdP COaRse acTioN leaves The β€œdirectiOn” oF tHe action undeCIdEd: the same fIne Policy may be eXecUted stArTinG In sevEraL bottLeneck STates, aNd may take ThE Agent iN One of seVERaL dirEctions until arrivINg AT one of multiple DestinAtIOnS FRom WhiCh differenT sUccesSOr coarsE AcTIONs mAY be taken. In the Context of MMdps, iF one waNtEd tO Be able To traNsFEr pOlicies on thE samE cluster wHich guIDe an ageNT in diffErent pArtIcuLar dIReCtIonS, sEParATe LocAL poLicies woUlD nEed to Be stORED In thE β€œdaTabaSe” of sOlved tasks. HowEveR, we wOUld Only nEed to TranSfEr and Plan wiTh [*one*]{} Of Them. - A strength of The oPtions fraMewOrK is ThAt mulTIple reLatEd qUeries, oR tasks, mAY be SoLVED eSsentially within thE sAME SmDP. HowevEr, the tASkS mUSt be closElY reLateD IN specIfic WAyS (e.g. tasks DifferINg OnLy in the GoAl statE), aNd tHis StrenGTh coMes at tHe expensE of igNOring problem-spECific informatIOn WHEn ONe onLy wAnts to solve One pRObleM. Our APpRoaCH to thE consTrUCtIOn of MMDPs differs in tHaT while We assUme a particulaR problem whEN BUilding a DecoMPoSItion, we are able To conSider a broaDEr set of tRansfEr possibIlities. - BoTTLenecks aNd pArtItiOniNG Do Not explicitly ENTer tHe Picture In oPtions oR SMdPs. optIonS mAy be definEd on any sUbSeT oF tHe sTatesPAce, and in ApPliCaTioNs may OFten taKe the Form Of A mACro-Action wHIcH DIrecTs ThE ageNt tO aN inteRmedIAte Goal staTe startinG frOM [*any*]{} StAtE in a (posSibly large) neiGhBorhood. For ExAmpLe, an opTIOn may dirEct a robot to a hallway from ANy state In a Room. WE conStrain our β€œIniTiatioN” anD β€œTerminAtion” sEts to Be BotTLEneck STAtEs, hOwEver this meANS thAt leaRnIng pOlicies At coarse scales is faST, anD can be carried Out CompLETeLy iNDePEndEnT Of oTHEr scales. Coarse sCale learniNg INvOlves only tHE boTtLeneck sTates, giVing a DRasticaLly reduceD computatIoNal cOMPleXity. ProvidEd the parTitioning OF a scaLE iS well ChoSen, thiS cOnsTructIon allOWs oNe to cApitalIzE on impRoved MiXing timeS to accelerate convergencE. - MMDPs Are a rEprEsentatioN foR mDPS: we cannot SolvE problems tHat CanNot be PhrASed as An MDp (I.e. ProBLems wHose SOlutions rEQuIre NON-MArkov policiES). a PolIcy soLviNG an MMDp, at aNy scale, is a Markov pOLicy. SMDPs may in GeneRAL haVe nON-
To get around these limit ations, ne w, se par ate o ptio ns m ust be defined , inc reasing the problem’sbranc hi n g fa c to r, an d carem us t beta ke n t oa vo id lo ops (if so desired). An M MDP coarse a c ti on leavesthe β€œdirection” of the a ct ion undec ide d: th e same fine p olicy may b e execu t ed star t i ng inseveral bottlenec k s t ates, and maytake t he ag e n t i n o ne of seve ra l dir e ctionsu nt i l arr i ving at one o f multipled est inatio ns fr o m whic h dif fe r ent successorcoar se action s mayb e taken . In the conte xtofMMDP s ,if on ew ant e dtob e a ble to t ra ns fer p olic i e s on t hesame clus ter which gui dean a g ent in d iffer entpa rticu lar di recti on s, separate loc al p olicies w oul dnee dto be stored in th e β€œdata base” o f so lv e d ta sks. However, we w ou l d o nly need to tr a ns fe r and pla nwit h [* o n e*]{} oft he m. - A stre n gt hof theop tionsfr ame wor k ist hatmultip le relat ed qu e ries, or tasks , may be solve d e s s en t iall y w ithin the s ameS MDP. How e ve r,t he ta sks m us t b e closely related in s pecifi c way s (e.g. tasks differing o n ly in th e go a ls tate), and thi s str ength come s at theexpen se of ig noring pr o b lem-spec ifi c i nfo rma t i on when one onl y want sto solv e o ne prob lem . O urapp ro ach to th e constr uc ti on o f M MDPsd iffers i ntha twhi le we assume a pa rtic ul ar pro blem wh e nb u ildi ng a dec omp os ition , we are able t o conside r a broa de rset oftransfer poss ib ilities. - B ottlen e c ks and p artitioning do not expl i citly e nte r the pic ture in o pti ons or SM D Ps. Op tionsmay b edef i n ed on a ny su bs et of thes t ate space ,andin appl ications may often tak e the form of amacr o - ac tio n w h ich d i rec t s the agent to a n intermed ia t egoal state sta rt ing fro m [*any *]{}s tate in a (possi bly large )neig h b orh ood. For e xample,an option may d i re ct arob ot toahal lwayfrom a n y s tatein a r oo m. Weconst ra in our β€œ initiation” and β€œtermin ation” sets to be bottl ene c k s tates, ho weve r this mea nstha t lea rni n g pol icie s a t c o arsescal e s is fast , a ndc a nbe carriedo u t co mplet ely indepe nden t of other scales . Coarse scalelear n i nginv o lves o nly the bottle nec ks t ates, gi vi ng a drasti cally re du c ed co mputat ionalcomplex i t y. Provid ed t hepartition ing o f a scal eis well c hose n, thisconstr u ctio n allows one to ca pital i z e oni mpr ovedmi xing ti m es t o accelera te converge nce. - MMDPs are are presen tat io n for MDPs : we canno t sol ve prob le ms t hat canno t be p hrase d as a n M DP (i.e.p r ob l em sw hos e so lutio ns req uire non- M arkov po lic i es). Apo lic y solvin g a n MMDP, at a nyscale , is a Marko v pol i cy . SMDP s mayin gen eral ha v e n on -
To_get around_these limitations, new, separate_options must_be_defined, increasing_the_problem’s branching factor,_and care must_be taken to avoid_loops (if so_desired)._An MMDP coarse action leaves the β€œdirection” of the action undecided: the same fine_policy_may be_executed_starting_in several bottleneck states, and_may take the agent in_one of_several directions until arriving at one of multiple_destinations_from which different_successor coarse actions may be taken. In the context_of MMDPs, if one wanted to_be able to_transfer_policies_on the same cluster_which guide an agent in different_particular directions, separate local policies would_need to be stored in the β€œdatabase”_of solved tasks. However, we would_only need to transfer and_plan with_[*one*]{} of them. - _A strength of_the options_framework is that_multiple related queries, or tasks, may_be solved essentially_within the same SMDP. However, the_tasks_must be closely_related_in_specific ways_(e.g. tasks differing_only_in the_goal_state), and this strength comes at_the_expense of ignoring problem-specific information when one_only wants to solve_one_problem. Our approach to_the construction of MMDPs differs_in that while we assume a_particular problem_when building_a decomposition, we are able to consider a broader set of_transfer possibilities. - Bottlenecks and_partitioning do not explicitly_enter the_picture_in options or_SMDPs._Options may_be defined on any subset of the_statespace, and_in applications may often take the_form of a macro-action_which_directs the agent to an intermediate_goal state starting from [*any*]{} state_in a (possibly large) neighborhood._For_example,_an option may direct a_robot to a hallway from any_state in a_room. We constrain our β€œinitiation” and β€œtermination”_sets_to be bottleneck states, however this_means_that learning policies at coarse scales_is_fast,_and can be carried out_completely independent of other scales. Coarse_scale learning involves only the bottleneck states, giving a_drastically reduced computational_complexity. Provided the partitioning of_a_scale_is well chosen, this construction allows one to capitalize on_improved mixing_times to accelerate_convergence. - MMDPs are a representation for MDPs: we_cannot solve problems that cannot be phrased_as an MDP (i.e. problems whose solutions require non-Markov policies). A_policy solving an MMDP, at any scale, is_a Markov policy. SMDPs may in general_have non-
$\Lambda^{2k-1}_n$ that is both cs and cs-$k$-neighborly. We then delete the cs-$(k-1)$-neighborly and $(k-1)$-stacked balls $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm B^{2k+1, k}_{n+2}\big)$ that are antipodal and share no common facets, and insert the cones over the boundary of these two balls. Thus, the resulting complex is also cs; furthermore, by Lemma \[lm: induction method\], it is cs-$k$-neighborly. In the case of $d=2k$, note that by Proposition \[prop: odd even sphere relation\], $\Delta^{2k+1}_{n+2}\subseteq \Delta^{2k+2}_{n+2}$. Hence $\Lambda^{2k}_n\supseteq \Lambda^{2k-1}_n$, and so $\Lambda^{2k}_n$ is also cs-$k$-neighborly. The proof that $\Lambda^{2k}_n$ is cs is identical to the proof in the odd-dimensional cases. Finally, to complete the proof of the first part for the case of $d=2k-1$ and $i=k$, note that, $$\operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2k-1, k}_n\big)=\operatorname{\mathrm{lk}}\big(\{1,2\}, \Delta^{2k-1}_n\big)\backslash \operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2k-1, k-1}_n\big).$$ We then conclude from the case of $d=2k-1, i=k-1$ and Lemma \[lm: complement\] that $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm B^{2k-1, k}_n\big)$ is indeed cs-$(k-1)$-neighborly and $(k-1)$-stacked. \[lm: neighborly stacked edge links\] Let $k\geq 3$ and let $n$ be sufficiently large. The only edges $e\subseteq V_{n}$ such that both $\operatorname{\mathrm{lk}}(e, B^{2k-1, k-1}_n)$ and $\operatorname{\mathrm{lk}}(e, -B^{2
$ \Lambda^{2k-1}_n$ that is both cs and cs-$k$-neighborly. We then delete the cs-$(k-1)$-neighborly and $ (k-1)$-stacked balls $ \operatorname{\mathrm{lk}}\big(\{1,2\ }, \pm B^{2k+1, k}_{n+2}\big)$ that are antipodal and share no common aspect, and tuck the cones over the boundary of these two balls. therefore, the resulting complex is also cs; furthermore, by Lemma \[lm: initiation method\ ], it is cs-$k$-neighborly. In the case of $ d=2k$, note that by Proposition \[prop: curious even sphere relation\ ], $ \Delta^{2k+1}_{n+2}\subseteq \Delta^{2k+2}_{n+2}$. Hence $ \Lambda^{2k}_n\supseteq \Lambda^{2k-1}_n$, and so $ \Lambda^{2k}_n$ is besides cs-$k$-neighborly. The proof that $ \Lambda^{2k}_n$ is cs is identical to the proof in the odd - dimensional cases. Finally, to dispatch the proof of the first character for the case of $ d=2k-1 $ and $ i = k$, note that, $ $ \operatorname{\mathrm{lk}}\big(\{1,2\ }, B^{2k-1, k}_n\big)=\operatorname{\mathrm{lk}}\big(\{1,2\ }, \Delta^{2k-1}_n\big)\backslash \operatorname{\mathrm{lk}}\big(\{1,2\ }, B^{2k-1, k-1}_n\big).$$ We then reason from the case of $ d=2k-1, i = k-1 $ and Lemma \[lm: complement\ ] that $ \operatorname{\mathrm{lk}}\big(\{1,2\ }, \pm B^{2k-1, k}_n\big)$ is indeed cs-$(k-1)$-neighborly and $ (k-1)$-stacked. \[lm: neighborly stacked edge links\ ] get $ k\geq 3 $ and let $ n$ be sufficiently large. The lone edges $ e\subseteq V_{n}$ such that both $ \operatorname{\mathrm{lk}}(e, B^{2k-1, k-1}_n)$ and $ \operatorname{\mathrm{lk}}(e, -B^{2
$\Lalbda^{2k-1}_n$ that is both cs akd cs-$k$-neighborly. We thei delets the cs-$(y-1)$-neighborly and $(k-1)$-stacked balps $\operqtorname{\mathrm{lk}}\big(\{1,2\}, \pm B^{2k+1, k}_{n+2}\big)$ that arw anuipodal and share no common facsbs, anb mnsert the conex over the boundary of tvere two balls. Thus, the resulting complqx is aksl cs; furthermote, by Jemmz \[lm: induction method\], it is cs-$k$-neifhborly. In the case pf $d=2k$, note that by Proposihion \[prop: odd even sphfre relatiob\], $\Dejra^{2k+1}_{n+2}\subseteq \Delta^{2k+2}_{n+2}$. Htnee $\Lambda^{2k}_n\aupseteq \Lambda^{2k-1}_n$, and so $\Lambda^{2y}_n$ is also cs-$k$-ngnthbltly. The prooh that $\Lambda^{2k}_n$ is cs is igenticak to the proof in thw odd-dimensional casev. Finally, to compleje the promf of the first parr dor tve cdse ud $d=2y-1$ ahd $i=i$, note thet, $$\operatorhame{\mathrm{lj}}\big(\{1,2\}, B^{2k-1, k}_n\big)=\operatprgqme{\mathrm{lk}}\bif(\{1,2\}, \Deltw^{2k-1}_g\big)\backslash \operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2n-1, k-1}_h\big).$$ We then conclude feom the case of $d=2k-1, i=k-1$ and Lemmw \[lm: complement\] that $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm B^{2n-1, k}_n\bmg)$ is ikdeea cd-$(k-1)$-neighborly and $(k-1)$-stacked. \[lm: neighborly stackeq ecgv links\] Let $k\geq 3$ and let $n$ ne sisficiently latge. The onmy edges $e\subseteq V_{n}$ sucr thar both $\optratotname{\mathrm{lk}}(e, B^{2k-1, k-1}_n)$ and $\opwratorname{\manhrm{ok}}(e, -B^{2
$\Lambda^{2k-1}_n$ that is both cs and cs-$k$-neighborly. delete cs-$(k-1)$-neighborly and balls $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm and no common facets, insert the cones the boundary of these two balls. the resulting complex is also cs; furthermore, by Lemma \[lm: induction method\], it cs-$k$-neighborly. In the case of $d=2k$, note that by Proposition \[prop: odd even relation\], \Delta^{2k+2}_{n+2}$. $\Lambda^{2k}_n\supseteq and so $\Lambda^{2k}_n$ is also cs-$k$-neighborly. The proof that $\Lambda^{2k}_n$ is cs is identical to the in the odd-dimensional cases. Finally, to complete the of the first part the case of $d=2k-1$ and note $$\operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2k-1, \Delta^{2k-1}_n\big)\backslash B^{2k-1, We then conclude the case of $d=2k-1, i=k-1$ and Lemma \[lm: complement\] that $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm B^{2k-1, k}_n\big)$ is indeed cs-$(k-1)$-neighborly $(k-1)$-stacked. \[lm: edge links\] $k\geq and $n$ be sufficiently only edges $e\subseteq V_{n}$ such that k-1}_n)$ and $\operatorname{\mathrm{lk}}(e, -B^{2
$\Lambda^{2k-1}_n$ that is both cs and cs-$K$-neighborlY. We thEn dEleTe The cS-$(k-1)$-neIghborly and $(k-1)$-stACked Balls $\operatorname{\mathrM{lk}}\biG(\{1,2\}, \pM b^{2k+1, k}_{n+2}\BIg)$ That aRe antipODaL ANd sHaRe No cOmMOn FacetS, anD insert The cones ovEr tHe Boundary of thESe Two balls. ThUs, tHe resulting cOmpLex is aLsO cs; FUrtheRmoRe, by LEmma \[lm: INductiOn method\], iT iS Cs-$k$-neiGHborly. IN THe Case Of $d=2k$, note that by ProPOsITion \[prop: odd eveN spherE rELaTIOn\], $\DEltA^{2k+1}_{n+2}\subseteQ \DElta^{2k+2}_{N+2}$. hence $\LaMBdA^{2K}_N\SupSEteq \Lambda^{2k-1}_n$, aNd so $\Lambda^{2k}_N$ Is aLso cs-$k$-NeIghBOrly. ThE prooF tHAt $\LAmbda^{2k}_n$ is cs Is idEntical to The proOF in the oDD-dimensIonal cAseS. FiNallY, To CoMplEtE The PRoOf oF The First parT fOr The caSe of $D=2K-1$ AND $i=k$, nOte That, $$\OperaTorname{\mathrm{Lk}}\bIg(\{1,2\}, B^{2k-1, K}_N\biG)=\operAtornAme{\mAtHrm{lk}}\Big(\{1,2\}, \DelTa^{2k-1}_n\bIg)\Backslash \operatOrnaMe{\mathrm{lK}}\biG(\{1,2\}, B^{2K-1, k-1}_n\BiG).$$ We thEN conclUde FroM the casE of $d=2k-1, i=k-1$ ANd LEmMA \[LM: cOmplement\] that $\operaToRNAmE{\mathrm{lK}}\big(\{1,2\}, \pm b^{2K-1, k}_N\bIG)$ is indeeD cS-$(k-1)$-nEighBORly anD $(k-1)$-stACkEd. \[lm: neigHborly STaCkEd edge lInKs\] Let $k\GeQ 3$ anD leT $n$ be sUFficIently Large. The Only eDGes $e\subseteq V_{n}$ SUch that both $\opERaTORnAMe{\maThrM{lk}}(e, B^{2k-1, k-1}_n)$ and $\OperATornAme{\mAThRm{lK}}(E, -B^{2
$\Lambda^{2k-1}_n$ that i s both csand c s-$ k$- ne ighb orly . We then dele t e th e cs-$(k-1)$-neighborl y and $ ( k-1) $ -s tacke d balls $\ o p era to rn ame {\ m at hrm{l k}} \big(\{ 1,2\}, \pm B^ {2 k+1, k}_{n+2 } \b ig)$ thatare antipodal a ndshareno co m mon f ace ts, a nd ins e rt the cones ov er the bo u ndary o f th esetwo balls. Thus,t he resulting comp lex is a l so c s;fur thermore,by Lemm a \[lm:i nd u c t ion method\], itis cs-$k$-n e igh borly. I n t h e case of $ d= 2 k$, note thatby P ropositio n \[pr o p: odde ven sph ere re lat ion \],$ \D el ta^ {2 k +1} _ {n +2} \ sub seteq \D el ta ^{2k+ 2}_{ n + 2 } $. H enc e $\ Lambd a^{2k}_n\sups ete q \L a mbd a^{2k -1}_n $, a nd so $ \Lambd a^{2k }_ n$ is also cs-$ k$-n eighborly . T he pr oo f tha t $\Lam bda ^{2 k}_n$ i s cs is ide nt i c a lto the proof in th eo d d- dimensio nal ca s es .Finally, t o c ompl e t e the pro o fof the f irst p a rt f or theca se of$d =2k -1$ and$ i=k$ , note that, $ $\ope r atorname{\math r m{lk}}\big(\{ 1 ,2 \ } ,B ^{2k -1, k}_n\big)= \ope r ator name { \m ath r m{lk} }\big (\ { 1, 2 \}, \Delta^{2k-1}_n \b ig)\ba cksla sh \operatorn ame{\mathr m { l k}}\big( \{1, 2 \} , B^{2k-1, k-1} _n\bi g).$$ We t h en concl ude f rom thecase of $ d = 2k-1, i= k-1 $ a ndLem m a \ [lm: compleme n t \] t ha t $\ope rat orname{ \ma thr m{l k}} \b ig(\{1,2\ }, \pm B ^{ 2k -1 ,k}_ n\big ) $ is ind ee d c s- $(k -1)$- n eighbo rly a nd $ (k -1 ) $-s tacked. \ [ l m: n ei gh borl y s ta ckededge lin ks\] Le t $k\geq3$a nd l et $ n$ be s ufficiently l ar ge. The on ly ed ges $e \ s ubseteqV_{n}$ such that both $ \ operato rna me{\m athr m{lk}}(e, B^ {2k-1, k- 1 }_n)$and $\ opera to rna m e {\mat h r m{ lk} }( e, -B^{2
$\Lambda^{2k-1}_n$_that is_both cs and cs-$k$-neighborly._We then_delete_the cs-$(k-1)$-neighborly_and_$(k-1)$-stacked balls $\operatorname{\mathrm{lk}}\big(\{1,2\},_\pm B^{2k+1, k}_{n+2}\big)$_that are antipodal and_share no common_facets,_and insert the cones over the boundary of these two balls. Thus, the resulting_complex_is also_cs;_furthermore,_by Lemma \[lm: induction method\],_it is cs-$k$-neighborly. In the_case of_$d=2k$, note that by Proposition \[prop: odd even_sphere_relation\], $\Delta^{2k+1}_{n+2}\subseteq \Delta^{2k+2}_{n+2}$._Hence $\Lambda^{2k}_n\supseteq \Lambda^{2k-1}_n$, and so $\Lambda^{2k}_n$ is also cs-$k$-neighborly._The proof that $\Lambda^{2k}_n$ is cs_is identical to_the_proof_in the odd-dimensional cases. Finally,_to complete the proof of the_first part for the case of_$d=2k-1$ and $i=k$, note that, $$\operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2k-1,_k}_n\big)=\operatorname{\mathrm{lk}}\big(\{1,2\}, \Delta^{2k-1}_n\big)\backslash \operatorname{\mathrm{lk}}\big(\{1,2\}, B^{2k-1, k-1}_n\big).$$ We_then conclude from the case_of $d=2k-1,_i=k-1$ and Lemma \[lm: complement\]_that $\operatorname{\mathrm{lk}}\big(\{1,2\}, \pm_B^{2k-1, k}_n\big)$_is indeed cs-$(k-1)$-neighborly_and $(k-1)$-stacked. \[lm: neighborly stacked edge links\]_Let $k\geq 3$_and let $n$ be sufficiently large._The_only edges $e\subseteq_V_{n}$_such_that both_$\operatorname{\mathrm{lk}}(e, B^{2k-1, k-1}_n)$_and_$\operatorname{\mathrm{lk}}(e, -B^{2
1$, while modules are indicated by $\tau = 0$. Mixtures correspond to groups with $0 <\tau < 1$. For the rest of the paper, we refer to groups with $\tau\approx 1$ as community-like and groups with $\tau\approx 0$ as module-like. Groups in networks are revealed by a sequential extraction procedure proposed inΒ [@ZLZ11; @SBB13; @Weiss]. One first finds the group $S$ and its linking pattern $T$ with random-restart hill climbingΒ [@RN03] that maximizes the objective function. Next, the revealed group $S$ is extracted from the network by removing the links between groups $S$ and $T$, and any node that becomes isolated. The procedure is then repeated on the remaining network until the objective function is larger than the $99$th percentile of the values obtained under the same framework in a corresponding ErdΕ‘s-R[Γ©]{}nyi random graphΒ [@ER59]. All groups reported in the paper are thus statistically significant at $1\%$Β level. Note that the above procedure allows for overlappingΒ [@PDFV05], hierarchicalΒ [@RSMOB02], nested and other classes of groups. \[sec:analys\]Analysis and discussion ===================================== SectionΒ \[subsec:nets\] introduces real-world networks considered in the study. SectionΒ \[subsec:orig\] reports the node group structure of the original networks extracted with the framework described in SectionΒ \[sec:nodegroups\]. The groups extracted from the sampled networks are analyzed in SectionΒ \[subsec:sampled\]. For a complete analysis, we also observe the node group structure of a large network with more than a million links in SectionΒ \[subsec:large\]. \[subsec:nets\]Network data --------------------------- [clrr]{} & & &\ *Collab* & High Energy Physics collaborationsΒ [@LKF05] & $9877$ & $25998$\ *PGP* & Pretty Good Privacy web-of-trustΒ [@BPDA04] & $10680$ & $24340$\ *P2P* & Gnutella peer-to-peer file sharingΒ [@LKF05] & $8717$ & $31525$\ *Citation* & High Energy Physics citationsΒ [@LKF05] & $27770$ & $352807$\ The empirical analysis in the
1 $, while modules are indicated by $ \tau = 0$. Mixtures correspond to group with $ 0 < \tau < 1$. For the remainder of the paper, we refer to groups with $ \tau\approx 1 $ as residential district - like and groups with $ \tau\approx 0 $ as module - like. group in networks are revealed by a consecutive origin procedure proposed in Β  [ @ZLZ11; @SBB13; @Weiss ]. One foremost finds the group $ S$ and its linking pattern $ T$ with random - restart mound climb Β  [ @RN03 ] that maximizes the objective affair. Next, the revealed group $ S$ is extracted from the net by removing the links between groups $ S$ and $ T$, and any lymph node that becomes isolated. The procedure is then repeated on the remaining net until the objective function is bombastic than the $ 99$th percentile of the values obtained under the same model in a corresponding ErdΕ‘s - R[Γ©]{}nyi random graph Β  [ @ER59 ]. All groups reported in the paper are thus statistically significant at $ 1\%$ Β  horizontal surface. notice that the above procedure allows for overlapping Β  [ @PDFV05 ], hierarchical Β  [ @RSMOB02 ], nested and other classes of groups. \[sec: analys\]Analysis and discussion = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = Section Β  \[subsec: nets\ ] introduce actual - world net view in the study. Section Β  \[subsec: orig\ ] report the node group social organization of the original networks extracted with the framework trace in Section Β  \[sec: nodegroups\ ]. The groups extracted from the sampled networks are analyzed in Section Β  \[subsec: sampled\ ]. For a complete analysis, we also observe the node group structure of a large net with more than a million links in Section Β  \[subsec: large\ ]. \[subsec: nets\]Network datum --------------------------- [ clrr ] { } & & & \ * Collab * & High Energy Physics collaborations Β  [ @LKF05 ] & $ 9877 $ & $ 25998$\ * PGP * & Pretty Good Privacy web - of - trust Β  [ @BPDA04 ] & $ 10680 $ & $ 24340$\ * P2P * & Gnutella peer - to - peer file sharing Β  [ @LKF05 ] & $ 8717 $ & $ 31525$\ * Citation * & High Energy Physics citations Β  [ @LKF05 ] & $ 27770 $ & $ 352807$\ The empiric analysis in the
1$, wjile modules are indicattd by $\tau = 0$. Mixtutew corrxspond fo groupr with $0 <\tau < 1$. For the rest oh thw paptg, we refer to groups dith $\tau\aiprox 1$ as comnynity-like ehd grouif wifm $\tau\cp'rox 0$ as module-kike. Groups in networks ase rzvealed by a sequential extraction pwocedurr oroposed inΒ [@ZLZ11; @SBB13; @Reisa]. One first finds the group $S$ and jts linning pattern $Y$ with random-restart hill flimhingΒ [@RN03] that maximixes the objgdtidw function. Ndxt, the renzaled group $S$ is extracted from the networy by xemoving thg mijns between jroups $S$ and $T$, and any noda that necomes isolatcd. Thx pricedure is then repeaved on the remaining network gncil the objective funxtuon iv lasger rhav tge $99$tg percfntmle of the balues obtauned under the same fwqmework in a dorres[ogding ErdΕ‘s-R[Γ©]{}nyi random graphΒ [@ER59]. All groupv rsported in the paper arw thus statistically dignificagt at $1\%$Β level. Note that the above procedure allows xor oteflakplkgΒ [@PDWC05], jierarchicalΒ [@RSMOB02], nested and other classes of frpuis. \[sec:analys\]Analysls and discussion ===================================== SrchipgΒ \[subsec:nets\] ivtroduess real-world networkd consiqered in the suudy. XectionΒ \[subsec:orig\] reports tye node groui steucture of the oriyinal networys ectracyed with the framework bescriged in SectlonΒ \[sec:nodseroups\]. The groupr eqtrawted from the sampled netwjrks are enalyved in SdctipnΒ \[subsqc:sampled\]. Vor a complete analysis, ae aldo observe tje node group structure of a lacje network wijh korv than a iillipn links in SqctionΒ \[subsec:latge\]. \[subsec:uets\]Negwork data --------------------------- [blrr]{} & & &\ *Conlab* & High Qnergy Physicv collaboratimnsΒ [@LKF05] & $9877$ & $25998$\ *PGP* & Peetty Guud Privacy web-pf-trustΒ [@BPBC04] & $10680$ & $24340$\ *P2P* & Tnutella peer-to-peev filg aharingΒ [@LKF05] & $8717$ & $31525$\ *Cnuatuon* & High Energu Pfysycd ritatymnsΒ [@LKF05] & $27770$ & $352807$\ Tha emoirkval avalysis in bhe
1$, while modules are indicated by $\tau Mixtures to groups $0 <\tau < the we refer to with $\tau\approx 1$ community-like and groups with $\tau\approx 0$ module-like. Groups in networks are revealed by a sequential extraction procedure proposed in @SBB13; @Weiss]. One first finds the group $S$ and its linking pattern $T$ random-restart climbing that the objective function. Next, the revealed group $S$ is extracted from the network by removing the between groups $S$ and $T$, and any node becomes isolated. The procedure then repeated on the remaining until objective function larger the percentile of the obtained under the same framework in a corresponding ErdΕ‘s-R[Γ©]{}nyi random graph [@ER59]. All groups reported in the are thus at $1\%$ Note the procedure allows for hierarchical [@RSMOB02], nested and other classes and discussion ===================================== Section \[subsec:nets\] introduces real-world networks in the Section \[subsec:orig\] reports the node group of the original networks extracted with the framework in Section \[sec:nodegroups\]. The groups extracted from the sampled networks are analyzed in Section \[subsec:sampled\]. complete analysis, we also the node group of large with than a links in Section \[subsec:large\]. \[subsec:nets\]Network data --------------------------- [clrr]{} & & &\ & High Energy Physics collaborations [@LKF05] & $9877$ & $25998$\ Pretty Privacy web-of-trust [@BPDA04] $10680$ & $24340$\ *P2P* Gnutella file sharing [@LKF05] & $31525$\ & citations & & $352807$\ The empirical in the
1$, while modules are indicated bY $\tau = 0$. MixturEs corResPonD tO groUps wIth $0 <\tau < 1$. For the reST of tHe paper, we refer to groups With $\tAu\APproX 1$ As CommuNity-likE AnD GRouPs WiTh $\tAu\APpRox 0$ as ModUle-like. groups in neTwoRkS are revealed BY a Sequential ExtRaction proceDurE propoSeD inΒ [@zlZ11; @SBB13; @weiSs]. One First fINds the Group $S$ and ItS LinkinG Pattern $t$ WItH ranDom-restart hill cliMBiNGΒ [@RN03] that maximizEs the oBjECtIVE fuNctIon. Next, the ReVealeD Group $S$ iS ExTRACteD From the networK by removing THe lInks beTwEen GRoups $S$ And $T$, aNd ANy nOde that becoMes iSolated. ThE proceDUre is thEN repeatEd on thE reMaiNing NEtWoRk uNtIL thE ObJecTIve Function Is LaRger tHan tHE $99$TH PercEntIle oF the vAlues obtained UndEr thE SamE framEwork In a cOrRespoNding ERdΕ‘s-R[Γ‰]{}nYi random graphΒ [@ER59]. all gRoups repoRteD iN thE pAper aRE thus sTatIstIcally sIgnificANt aT $1\%$Β lEVEL. NOte that the above proCeDURe Allows foR overlAPpInGΒ [@pDFV05], hierArChiCalΒ [@RsmoB02], nesTed aND oTher clasSes of gROuPs. \[Sec:analYs\]analysIs And DisCussiON ===================================== SecTionΒ \[suBsec:nets\] IntroDUces real-world nETworks consideREd IN ThE StudY. SeCtionΒ \[subsec:Orig\] REporTs thE NoDe gROup stRuctuRe OF tHE original networks exTrActed wIth thE framework desCribed in SeCTIOnΒ \[sec:nodEgroUPs\]. tHe groups extracTed frOm the samplED networkS are aNalyzed iN SectionΒ \[sUBSec:samplEd\]. FOr a ComPleTE AnAlysis, we also oBSErve ThE node grOup StructuRe oF a lArgE neTwOrk with moRe than a mIlLiOn LiNks In SecTIonΒ \[subseC:lArgE\]. \[sUbsEc:netS\]networK data --------------------------- [Clrr]{} & & &\ *coLlAB* & HiGh EnergY phYSIcs cOlLaBoraTioNsΒ [@lKF05] & $9877$ & $25998$\ *PGp* & PreTTy GOod PrivAcy web-of-tRusTΒ [@bPDA04] & $10680$ & $24340$\ *p2P* & gnUtella pEer-to-peer file ShAringΒ [@LKF05] & $8717$ & $31525$\ *CiTaTioN* & High ENERgy PhysiCs citationsΒ [@LKF05] & $27770$ & $352807$\ The empiriCAl analySis In the
1$, while modules are ind icated by$\tau =0$. M ixtu rescorrespond tog roup s with $0 <\tau < 1$.For t he rest of thepaper,w er e fer t ogro up s w ith $ \ta u\appro x 1$ as co mmu ni ty-like andg ro ups with $ \ta u\approx 0$asmodule -l ike . Gro ups in n etwork s are r evealed b ya seque n tial ex t r ac tion procedure propos e di nΒ [@ZLZ11; @SB B13; @ We i ss ] . On e f irst finds t he gr o up $S$a nd i t s l i nking pattern $T$ with r a ndo m-rest ar t h i ll cli mbing Β [ @ RN0 3] that max imiz es the ob jectiv e functi o n. Next , therev eal ed g r ou p$S$ i s ex t ra cte d fr om the n et wo rk by rem o v i n g th e l inks betw een groups $S $ a nd $ T $,and a ny no de t ha t bec omes i solat ed . The procedure isthen repe ate donth e rem a iningnet wor k until the ob j ect iv e f un ction is larger th an t he $99$thpercen t il eo f the va lu esobta i n ed un dert he same fr amewor k i na corre sp onding E rdΕ‘ s-R [Γ©]{} n yi r andomgraphΒ [@ ER59] . All groups re p orted in thep ap e r a r e th usstatistical ly s i gnif ican t a t $ 1 \%$Β l evel. N o te that the above proc ed ure al lowsfor overlappi ngΒ [@PDFV0 5 ] , hierarc hica l Β [ @ RSMOB02], nest ed an d other cl a sses ofgroup s. \[se c:analys\ ] A nalysisand di scu ssi o n = ============= = = ==== == ======= === ===== Sec tio nΒ \ [su bs ec:nets\] introdu ce sre al -wo rld n e tworks c on sid er edin th e study . Sec tion Β \ [s u bse c:orig\ ] r e p orts t he nod e g ro up st ruct u reof theoriginalnet w orks e xt ractedwith the fram ew ork descri be d i n Sect i o nΒ \[sec: nodegroups\]. The group s extrac ted from the samplednet worksare analyz ed inSecti on Β \[ s u bsec: s a mp led \] . For a co m p let e ana ly sis, we als o observe the node gro up structureofa la r g enet w or k wi th mor e than a millionlinks in S ec t io nΒ \[subsec : lar ge \]. \[ subsec: nets\ ] Network data --- --------- -- ---- - - --- ---- [clr r]{} & & &\ *Coll a b* &H ig h Ene rgy Physi cs co llabo ration s Β [@ LKF05 ] & $9 87 7$ & $ 25998 $\ *PGP* & Pretty Good Privacy we b-of-t rust[@B PDA04] &$10 6 80$ & $24340 $\ * P2P* & Gnu tel lapeer- to- p eer f iles ha rin g Β [@LK F05] & $8717$& $ 315 2 5 $\ *Citation* & Hig h Ene rgy Physic s ci tationsΒ [@LKF05]& $27770$ & $35 2807 $ \ T hee mpir ic al analysis in th e
1$,_while modules_are indicated by $\tau_= 0$._Mixtures_correspond to_groups_with $0 <\tau_< 1$. For_the rest of the_paper, we refer_to_groups with $\tau\approx 1$ as community-like and groups with $\tau\approx 0$ as module-like. Groups in_networks_are revealed_by_a_sequential extraction procedure proposed inΒ [@ZLZ11;_@SBB13; @Weiss]. One first finds_the group_$S$ and its linking pattern $T$ with random-restart_hill_climbingΒ [@RN03] that maximizes_the objective function. Next, the revealed group $S$ is_extracted from the network by removing_the links between_groups_$S$_and $T$, and any_node that becomes isolated. The procedure_is then repeated on the remaining_network until the objective function is larger_than the $99$th percentile of the_values obtained under the same_framework in_a corresponding ErdΕ‘s-R[Γ©]{}nyi random graphΒ [@ER59]._All groups reported_in the_paper are thus_statistically significant at $1\%$Β level. Note that_the above procedure_allows for overlappingΒ [@PDFV05], hierarchicalΒ [@RSMOB02], nested and_other_classes of groups. \[sec:analys\]Analysis_and_discussion ===================================== SectionΒ \[subsec:nets\]_introduces real-world_networks considered in_the_study. SectionΒ \[subsec:orig\]_reports_the node group structure of the_original_networks extracted with the framework described in_SectionΒ \[sec:nodegroups\]. The groups extracted_from_the sampled networks are_analyzed in SectionΒ \[subsec:sampled\]. For a_complete analysis, we also observe the_node group_structure of_a large network with more than a million links in SectionΒ \[subsec:large\]. \[subsec:nets\]Network_data --------------------------- [clrr]{} & & &\ *Collab* & High_Energy Physics collaborationsΒ [@LKF05] &_$9877$ &_$25998$\ *PGP*_& Pretty Good_Privacy_web-of-trustΒ [@BPDA04] &_$10680$ & $24340$\ *P2P* & Gnutella peer-to-peer file_sharingΒ [@LKF05] &_$8717$ & $31525$\ *Citation* & High Energy_Physics citationsΒ [@LKF05] & $27770$_&_$352807$\ The empirical analysis in the
β€œdata” at $x_0$ (i.e., derivatives $f^{(i)}(x_0)$). Our paper proceeds as follows. In SectionΒ \[sec:terl\], we start with a general result of applying Taylor expansions to Q-functions. When we apply the same technique to the RL objective, we reuse the general result and derive a higher-order policy optimization objective. This leads to SectionΒ \[sec:TayPO\], where we formally present the *Taylor Expansion Policy Optimization* (TayPO) and generalize prior work [@schulman2015trust; @schulman2017proximal] as a first-order special case. In SectionΒ \[sec:uni\], we make clear connection between Taylor expansions and $Q(\lambda)$ [@harutyunyan_QLambda_2016], a common return-based off-policy evaluation operator. Finally, in SectionΒ \[sec:exp\], we show the performance gains due to the higher-order objectives across a range of state-of-the-art distributed deep RL agents. Taylor expansion for reinforcement learning {#sec:terl} =========================================== Consider a Markov Decision Process (MDP) with state space $\mathcal{X}$ and action space $\mathcal{A}$. Let policy $\pi(\cdot|x)$ be a distribution over actions give state $x$. At a discrete time $t \geq 0$, the agent in state $x_t$ takes action $a_t \sim \pi(\cdot|x_t)$, receives reward $r_t \triangleq r(x_t,a_t)$, and transitions to a next state $x_{t+1} \sim p(\cdot|x_t,a_t)$. We assume a discount factor $\gamma \in [0,1)$. Let $Q^\pi(x,a)$ be the action value function (Q-function) from state $x,$ taking action $a,$ and following policy $\pi$. For convenience, we use $ d_\gamma^\pi(\cdot,\cdot|x_0,a_0,\tau)$ to denote the discounted visitation distribution starting from state-action pair $(x_0,a_0)$ and following $\pi$, such that $d_\gamma^\pi(x,a|x_0,a_0,\tau) = (1-\gamma)\gamma^{-\tau} \sum_{t\geq \tau} \
β€œ data ” at $ x_0 $ (i.e., derivatives $ f^{(i)}(x_0)$). Our paper proceeds as follows. In Section Β  \[sec: terl\ ], we depart with a cosmopolitan result of applying Taylor expansion to Q - function. When we apply the same proficiency to the RL objective, we reuse the general consequence and derive a higher - club policy optimization objective. This leads to Section Β  \[sec: TayPO\ ], where we formally award the * Taylor Expansion Policy Optimization * (TayPO) and generalize prior work [ @schulman2015trust; @schulman2017proximal ] as a first - order special event. In Section Β  \[sec: uni\ ], we make clear connection between Taylor expansion and $ Q(\lambda)$ [ @harutyunyan_QLambda_2016 ], a common return - based off - policy evaluation operator. ultimately, in Section Β  \[sec: exp\ ], we show the performance gains due to the higher - order aim across a range of state - of - the - art distributed deep RL agents. Taylor expansion for reinforcement learning { # sec: terl } = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = Consider a Markov Decision Process (MDP) with state space $ \mathcal{X}$ and action space $ \mathcal{A}$. Let policy $ \pi(\cdot|x)$ be a distribution over actions give state $ x$. At a discrete clock time $ t \geq 0 $, the agent in department of state $ x_t$ takes action $ a_t \sim \pi(\cdot|x_t)$, receives reward $ r_t \triangleq r(x_t, a_t)$, and conversion to a future state $ x_{t+1 } \sim p(\cdot|x_t, a_t)$. We assume a deduction factor $ \gamma \in [ 0,1)$. Let $ Q^\pi(x, a)$ be the action value function (Q - function) from state $ x,$ necessitate action $ a,$ and following policy $ \pi$. For convenience, we use $ d_\gamma^\pi(\cdot,\cdot|x_0,a_0,\tau)$ to announce the discounted trial distribution starting from state - action pair $ (x_0,a_0)$ and following $ \pi$, such that $ d_\gamma^\pi(x, a|x_0,a_0,\tau) = (1-\gamma)\gamma^{-\tau } \sum_{t\geq \tau } \
β€œdaha” at $x_0$ (i.e., derivatives $f^{(l)}(x_0)$). Our paper procgees as hollows. In SectkonΒ \[sec:terl\], we start with a gxnerql rewult of applying Taylof expansilns to Q-duncuions. When we apply the same tedmniquz vo the RL objecjive, we reusa the general serupt and derive a higher-order policy jptimizstlon objective. Jhis kqads no SectionΒ \[sec:TayPO\], where we forjally pgesent the *Taylor Expansion Policy Optimizahion* (TayPO) and generallze prior wirk [@fxhulman2015trust; @schulman2017pgmximal] as z first-order special case. In SeztionΒ \[xec:uni\], we naje fnear connecvion bvtween Taylor expansimns and $Q(\lambda)$ [@harutnunyai_QLanbda_2016], a common return-besed off-policy evaluwtion opesacor. Finally, in SectiobΒ \[swc:exp\], we vhow rhe pedfprjance haiis due to tge higher-oreer objectives acroxs q range of stzte-of-tre-wrt distributed deep RL agents. Taylor exkansikn for reinforcement leqrning {#sec:terl} =========================================== Considet a Markov Decision Process (MDP) with state space $\mathcal{X}$ atd acviun wpwzw $\lathcal{A}$. Let policy $\pi(\cdot|x)$ be a distribution kvtr sctions give sbate $x$. At a discreye toie $t \geq 0$, the agent in state $x_t$ takes achion $a_t \sim \pi(\cdot|x_t)$, weceoves reward $r_t \triangleq r(z_t,a_t)$, and trausirions to a next stcte $x_{t+1} \sim p(\edot|x_t,s_t)$. We assume a discount factur $\gzmma \in [0,1)$. Leh $Q^\pi(x,a)$ bs the action valud flncthon (Q-function) from state $x,$ taking artion $a,$ and wollpwing [olicy $\pi$. Vor convenience, we use $ f_\gammc^\pi(\cdmt,\cdot|x_0,a_0,\tak)$ to denote the discounted visivetion distribotimn vtarting from state-action [air $(x_0,a_0)$ and foklowing $\pi$, sjch that $d_\famma^\pi(e,a|x_0,a_0,\tau) = (1-\gamia)\gamma^{-\tau} \suk_{j\geq \tau} \
β€œdata” at $x_0$ (i.e., derivatives $f^{(i)}(x_0)$). Our as In Section we start with Taylor to Q-functions. When apply the same to the RL objective, we reuse general result and derive a higher-order policy optimization objective. This leads to Section where we formally present the *Taylor Expansion Policy Optimization* (TayPO) and generalize prior [@schulman2015trust; as first-order case. In Section \[sec:uni\], we make clear connection between Taylor expansions and $Q(\lambda)$ [@harutyunyan_QLambda_2016], a common off-policy evaluation operator. Finally, in Section \[sec:exp\], we the performance gains due the higher-order objectives across a of distributed deep agents. expansion reinforcement learning {#sec:terl} Consider a Markov Decision Process (MDP) with state space $\mathcal{X}$ and action space $\mathcal{A}$. Let policy $\pi(\cdot|x)$ a distribution give state At discrete $t \geq 0$, in state $x_t$ takes action $a_t reward $r_t \triangleq r(x_t,a_t)$, and transitions to a state $x_{t+1} p(\cdot|x_t,a_t)$. We assume a discount factor \in [0,1)$. Let $Q^\pi(x,a)$ be the action value (Q-function) from state $x,$ taking action $a,$ and following policy $\pi$. For convenience, we use to denote the discounted distribution starting from pair and $\pi$, that $d_\gamma^\pi(x,a|x_0,a_0,\tau) (1-\gamma)\gamma^{-\tau} \sum_{t\geq \tau} \
β€œdata” at $x_0$ (i.e., derivatives $f^{(i)}(x_0)$). OuR paper procEeds aS foLloWs. in SeCtioNΒ \[sec:terl\], we starT With A general result of applyiNg TayLoR ExpaNSiOns to q-functiONs. wHEn wE aPpLy tHe SAmE techNiqUe to the rL objectivE, we ReUse the generaL ReSult and derIve A higher-order PolIcy optImIzaTIon obJecTive. THis leaDS to SecTionΒ \[sec:TaYPo\], Where wE FormallY PReSent The *Taylor ExpansioN poLIcy OptimizatioN* (TayPO) AnD GeNERalIze Prior work [@sChUlman2015TRust; @schULmAN2017PRoxIMal] as a first-orDer special cASe. IN SectiOnΒ \[Sec:UNi\], we maKe cleAr COnnEction betweEn TaYlor expanSions aND $Q(\lambdA)$ [@HarutyuNyan_QLAmbDa_2016], a CommON rEtUrn-BaSEd oFF-pOliCY evAluation OpErAtor. FInalLY, IN sectIonΒ \[Sec:eXp\], we sHow the performAncE gaiNS duE to thE highEr-orDeR objeCtives AcrosS a Range of state-of-tHe-arT distribuTed DeEp Rl aGents. tAylor eXpaNsiOn for reInforceMEnt LeARNInG {#sec:terl} =========================================== Consider a MArKOV DEcision PRocess (mdP) WiTH state spAcE $\maThcaL{x}$ And acTion SPaCe $\mathcaL{A}$. Let pOLiCy $\Pi(\cdot|x)$ Be A distrIbUtiOn oVer acTIons Give stAte $x$. At a dIscreTE time $t \geq 0$, the agENt in state $x_t$ taKEs ACTiON $a_t \sIm \pI(\cdot|x_t)$, receIves REwarD $r_t \tRIaNglEQ r(x_t,a_T)$, and tRaNSiTIons to a next state $x_{t+1} \sIm P(\cdot|x_T,a_t)$. We Assume a discouNt factor $\gaMMA \In [0,1)$. Let $Q^\pi(X,a)$ be THe ACtion value funcTion (Q-Function) frOM state $x,$ tAking Action $a,$ aNd followiNG Policy $\pi$. for ConVenIenCE, We Use $ d_\gamma^\pi(\cdOT,\Cdot|X_0,a_0,\Tau)$ to deNotE the disCouNteD viSitAtIon distriBution stArTiNg FrOm sTate-aCTion pair $(X_0,a_0)$ And FoLloWing $\pI$, Such thAt $d_\gaMma^\pI(x,A|x_0,A_0,\Tau) = (1-\Gamma)\gaMMa^{-\TAU} \sum_{T\gEq \Tau} \
β€œdata” at $x_0$ (i.e., de rivatives$f^{( i)} (x_ 0) $). Our paper proceed s asfollows. In SectionΒ \[ sec:t er l \],w estart with a ge n e ral r es ult o f a pplyi ngTaylorexpansions to Q -functions.W he n we apply th e same techn iqu e to t he RL objec tiv e, we reuse the ge neral res ul t and d e rive ah i gh er-o rder policy optim i za t ion objective. Thisle a ds t o S ect ionΒ \[sec: Ta yPO\] , wherew ef o r mal l y present the *Taylor Ex p ans ion Po li cyO ptimiz ation *( Tay PO) and gen eral ize prior work[ @schulm a n2015tr ust; @ sch ulm an20 1 7p ro xim al ] as afir s t-o rder spe ci al case . In S e c tion Β \[ sec: uni\] , we make cle arconn e cti on be tween Tay lo r exp ansion s and $ Q(\lambda)$ [@h arut yunyan_QL amb da _20 16 ], ac ommonret urn -basedoff-pol i cyev a l u at ion operator. Fina ll y , i n Sectio nΒ \[se c :e xp \ ], we sh ow th e pe r f orman ce g a in s due to the h i gh er -orderob jectiv es ac ros s a r a ngeof sta te-of-th e-art distributed de e p RL agents.Ta y l or expa nsi on for rein forc e ment lea r ni ng{ #sec: terl} = = == = =================== == ====== ===== ====== Consi der a Mark o v Decision Pro c es s (MDP) with st ate s pace $\mat h cal{X}$and a ction sp ace $\mat h c al{A}$.Let po lic y $ \ p i( \cdot|x)$ bea dist ri butionove r actio nsgiv e s tat e$x$. At a discret eti me $ t \ geq 0 $ , the ag en t i nsta te $x _ t$ tak es ac tion $ a_ t \s im \pi( \ cd o t |x_t )$ ,rece ive srewar d $r _ t \ triangl eq r(x_t ,a_ t )$,an dtransit ions to a nex tstate $x_{ t+ 1}\sim p ( \ cdot|x_t ,a_t)$. We assume a dis c ount fa cto r $\g amma \in [0,1 )$. Let $ Q^\ p i(x,a) $ be t he ac ti onv a lue f u n ct ion ( Q-function ) fro m sta te $x, $ takin g action $a,$ andf oll owing policy$\p i$.F o rcon v en i enc e, weu s e $ d_\gamma^\p i(\cdot,\c do t |x _0,a_0,\ta u )$to denote the di scoun t ed visi tation di stributio nstar t i ngfrom state -actionpair $(x_ 0 ,a_0) $ a nd fo llo wing $ \p i$, such that$ d_\ gamma ^\pi(x ,a |x_0,a _0,\t au ) = (1-\ gamma)\gamma^{-\tau} \s um_{t\ geq \ tau } \
β€œdata”_at $x_0$_(i.e., derivatives $f^{(i)}(x_0)$). Our paper_proceeds as_follows._In SectionΒ \[sec:terl\],_we_start with a_general result of_applying Taylor expansions to_Q-functions. When we_apply_the same technique to the RL objective, we reuse the general result and derive_a_higher-order policy_optimization_objective._This leads to SectionΒ \[sec:TayPO\], where_we formally present the *Taylor_Expansion Policy_Optimization* (TayPO) and generalize prior work [@schulman2015trust; @schulman2017proximal]_as_a first-order special_case. In SectionΒ \[sec:uni\], we make clear connection between Taylor_expansions and $Q(\lambda)$ [@harutyunyan_QLambda_2016], a common_return-based off-policy evaluation_operator._Finally,_in SectionΒ \[sec:exp\], we show_the performance gains due to the_higher-order objectives across a range of_state-of-the-art distributed deep RL agents. Taylor expansion for_reinforcement learning {#sec:terl} =========================================== Consider a Markov Decision_Process (MDP) with state space_$\mathcal{X}$ and_action space $\mathcal{A}$. Let policy_$\pi(\cdot|x)$ be a_distribution over_actions give state_$x$. At a discrete time $t_\geq 0$, the_agent in state $x_t$ takes action_$a_t_\sim \pi(\cdot|x_t)$, receives_reward_$r_t_\triangleq _r(x_t,a_t)$, and transitions_to_a next_state_$x_{t+1} \sim p(\cdot|x_t,a_t)$. We assume a_discount_factor $\gamma \in [0,1)$. Let $Q^\pi(x,a)$ be_the action value function_(Q-function)_from state $x,$ taking_action $a,$ and following policy_$\pi$. For convenience, we use $_d_\gamma^\pi(\cdot,\cdot|x_0,a_0,\tau)$ to_denote the_discounted visitation distribution starting from state-action pair $(x_0,a_0)$ and following $\pi$,_such that $d_\gamma^\pi(x,a|x_0,a_0,\tau) = (1-\gamma)\gamma^{-\tau} \sum_{t\geq_\tau} \
+1} \binom{j}{r}\right\} \frac{z^{j+1}}{k} + (\mbox{polynomial of $k$})\\[8pt] & \qquad = \frac{z^{j+1}}{j+1}\frac{1}{k} + (\mbox{a polynomial of $k$}). \end{aligned}$$]{} Hence, if we put [ $$\begin{aligned} & B(k,z) := (I)_{k}+(II)_{k}+(III)_{k}+(IV)_{k}\\[4pt] & \qquad - \frac{1}{j+1} \left\{ \sum_{r=0}^{j+1}\binom{j+1}{r} \frac{(-1)^{r}z^{r+1}}{r+1}B_{j+1-r}(z)\right\} \left(\frac{1}{k}-\frac{1}{k+1}\right), \end{aligned}$$]{} then, $B(k,z)$ is a polynomial of $k$, $z$ and it satisfies $$\frac{B_{j+1}(z+k+1)}{j+1} \log\left(1+\frac{z}{k}\right) + B(k,z) = O(k^{-2}) \quad \mbox{as} \quad k\to\infty.$$ By Lemma \[eqn;35\], we have $$B(k.z)=P_{j}(z+k+1)-P_{j}(z+k). \label{eqn;317}$$ By (\[eqn;316\]) and (\[eqn;317\]), we can deduce (\[eqn;315\]). [ $$\begin{aligned} & K_{j}(z) = Q_{j}(z)+ \sum_{r=1}^{j}\binom{j}{r}z^{j-r} \zeta'(-r)-\frac{z^{j+1}}{j+1}\gamma\\[4pt] & \qquad + \sum_{k=1}^{\infty}\left\{ -(z+k)^{k}\log \left(1+\frac{z}{k}\right) + \sum_{r=0}^{j}\binom
+1 } \binom{j}{r}\right\ } \frac{z^{j+1}}{k } + (\mbox{polynomial of $ k$})\\[8pt ] & \qquad = \frac{z^{j+1}}{j+1}\frac{1}{k } + (\mbox{a polynomial of $ k$ }). \end{aligned}$$ ] { } Hence, if we put [ $ $ \begin{aligned } & B(k, z): = (I)_{k}+(II)_{k}+(III)_{k}+(IV)_{k}\\[4pt ] & \qquad - \frac{1}{j+1 } \left\ { \sum_{r=0}^{j+1}\binom{j+1}{r } \frac{(-1)^{r}z^{r+1}}{r+1}B_{j+1 - r}(z)\right\ } \left(\frac{1}{k}-\frac{1}{k+1}\right), \end{aligned}$$ ] { } then, $ B(k, z)$ is a polynomial of $ k$, $ z$ and it satisfies $ $ \frac{B_{j+1}(z+k+1)}{j+1 } \log\left(1+\frac{z}{k}\right) + B(k, z) = O(k^{-2 }) \quad \mbox{as } \quad k\to\infty.$$ By Lemma \[eqn;35\ ], we have $ $ B(k.z)=P_{j}(z+k+1)-P_{j}(z+k). \label{eqn;317}$$ By (\[eqn;316\ ]) and (\[eqn;317\ ]), we can deduce (\[eqn;315\ ]). [ $ $ \begin{aligned } & K_{j}(z) = Q_{j}(z)+ \sum_{r=1}^{j}\binom{j}{r}z^{j - radius } \zeta'(-r)-\frac{z^{j+1}}{j+1}\gamma\\[4pt ] & \qquad + \sum_{k=1}^{\infty}\left\ { -(z+k)^{k}\log \left(1+\frac{z}{k}\right) + \sum_{r=0}^{j}\binom
+1} \bijom{j}{r}\right\} \frac{z^{j+1}}{y} + (\mbox{kooynomiel of $k$})\\[8lt] & \qduad = \frac{z^{j+1}}{j+1}\frac{1}{k} + (\mvox{a kjlynomial of $k$}). \ena{aligned}$$]{} Jence, if we kut [ $$\begin{aligned} & B(k,z) := (I)_{k}+(II)_{i}+(LII)_{k}+(IR)_{k}\\[4't] & \qquad - \ftac{1}{j+1} \left\{ \sum_{r=0}^{j+1}\binom{j+1}{s} \frac{(-1)^{r}z^{r+1}}{r+1}B_{j+1-r}(z)\right\} \left(\frac{1}{k}-\srac{1}{k+1}\ribhh), \end{aligned}$$]{} jhen, $N(h,z)$ ia a polynomial of $k$, $z$ and it satisries $$\frec{B_{j+1}(z+k+1)}{j+1} \log\lrft(1+\frac{z}{k}\right) + B(k,z) = O(k^{-2}) \euad \mbox{as} \quad k\ho\infty.$$ By Oemmw \[eqn;35\], we have $$B(k.z)=P_{j}(z+k+1)-P_{j}(e+k). \label{eqn;317}$$ By (\[eqn;316\]) and (\[eqn;317\]), we can deduce (\[edn;315\]). [ $$\beyin{aligned} & K_{u}(e) = Q_{j}(z)+ \sum_{r=1}^{j}\uinom{j}{g}z^{j-r} \zeta'(-r)-\frac{z^{j+1}}{b+1}\gamma\\[4py] & \qquad + \smm_{k=1}^{\inhty}\lwft\{ -(z+k)^{k}\log \left(1+\fcac{z}{k}\right) + \sum_{t=0}^{j}\binom
+1} \binom{j}{r}\right\} \frac{z^{j+1}}{k} + (\mbox{polynomial of $k$})\\[8pt] = + (\mbox{a of $k$}). \end{aligned}$$]{} $$\begin{aligned} B(k,z) := (I)_{k}+(II)_{k}+(III)_{k}+(IV)_{k}\\[4pt] \qquad - \frac{1}{j+1} \sum_{r=0}^{j+1}\binom{j+1}{r} \frac{(-1)^{r}z^{r+1}}{r+1}B_{j+1-r}(z)\right\} \left(\frac{1}{k}-\frac{1}{k+1}\right), \end{aligned}$$]{} then, $B(k,z)$ a polynomial of $k$, $z$ and it satisfies $$\frac{B_{j+1}(z+k+1)}{j+1} \log\left(1+\frac{z}{k}\right) + B(k,z) = \quad \mbox{as} \quad k\to\infty.$$ By Lemma \[eqn;35\], we have $$B(k.z)=P_{j}(z+k+1)-P_{j}(z+k). \label{eqn;317}$$ By (\[eqn;316\]) (\[eqn;317\]), can (\[eqn;315\]). $$\begin{aligned} & K_{j}(z) = Q_{j}(z)+ \sum_{r=1}^{j}\binom{j}{r}z^{j-r} \zeta'(-r)-\frac{z^{j+1}}{j+1}\gamma\\[4pt] & \qquad + \sum_{k=1}^{\infty}\left\{ -(z+k)^{k}\log \left(1+\frac{z}{k}\right) + \sum_{r=0}^{j}\binom
+1} \binom{j}{r}\right\} \frac{z^{j+1}}{k} + (\mbox{poLynomial of $K$})\\[8pt] & \qqUad = \FraC{z^{J+1}}{j+1}\frAc{1}{k} + (\mBox{a polynomial OF $k$}). \enD{aligned}$$]{} Hence, if we put [ $$\beGin{alIgNEd} & B(k,Z) := (i)_{k}+(iI)_{k}+(IIi)_{k}+(IV)_{k}\\[4pt] & \QQuAD - \FraC{1}{j+1} \LeFt\{ \sUm_{R=0}^{J+1}\bInom{j+1}{R} \frAc{(-1)^{r}z^{r+1}}{r+1}B_{J+1-r}(z)\right\} \leFt(\fRaC{1}{k}-\frac{1}{k+1}\right), \ENd{Aligned}$$]{} theN, $B(k,Z)$ is a polynomiAl oF $k$, $z$ and It SatISfies $$\FraC{B_{j+1}(z+k+1)}{J+1} \log\leFT(1+\frac{z}{K}\right) + B(k,z) = o(k^{-2}) \QUad \mboX{As} \quad k\TO\InFty.$$ BY Lemma \[eqn;35\], we have $$B(k.Z)=p_{j}(Z+K+1)-P_{j}(z+k). \label{eqn;317}$$ BY (\[eqn;316\]) anD (\[eQN;317\]), wE CAn dEduCe (\[eqn;315\]). [ $$\begin{AlIgned} & k_{J}(z) = Q_{j}(z)+ \suM_{R=1}^{j}\BINOm{j}{R}Z^{j-r} \zeta'(-r)-\frac{z^{J+1}}{j+1}\gamma\\[4pt] & \qqUAd + \sUm_{k=1}^{\infTy}\LefT\{ -(Z+k)^{k}\log \Left(1+\fRaC{Z}{k}\rIght) + \sum_{r=0}^{j}\biNom
+1} \binom{j}{r}\right\} \fr ac{z^ {j+ 1}} {k } + (\mbox{po l ynom ial of $k$})\\[8pt] & \ qq u ad = \f rac{z ^{j+1}} { j+ 1 } \fr ac {1 }{k } + (\mbox {a polynom ial o f $k$}). \ e nd {aligned}$ $]{ } Hence, ifweput [$$ \be g in{al ign ed} & B ( k,z) : = (I)_{k} +( I I)_{k} + (III)_{ k } +( IV)_ {k}\\[4pt] &\ qq u ad - \frac{1}{ j+1} \ le f t\ { \sum_{r=0} ^{ j+1}\ b inom{j+ 1 }{ r } \frac{(-1)^ {r}z^{r+1}} { r+1 }B_{j+ 1- r}( z )\righ t\} \ left(\frac{ 1}{k }-\frac{1 }{k+1} \ right), \end{ aligne d}$ $]{ } th e n, $ B(k ,z ) $ i s a po l yno mial of$k $, $z$andi t s atis fie s $$ \frac {B_{j+1}(z+k+ 1)} {j+1 } \lo g\lef t(1+ \f rac{z }{k}\r ight) + B(k,z) = O (k^{ -2}) \qu ad \m bo x{as} \quadk\t o\i nfty.$$ By Lem m a \ [e q n ; 35 \], we have $$B(k. z) = P _{ j}(z+k+1 )-P_{j } (z +k ) . \lab el {eq n;31 7 } $$ By (\[ e qn ;316\])and (\ [ eq n; 317\]), w e cande duc e ( \[eqn ; 315\ ]). [ $$\begi n{ali g ned} & K_{ j }(z) = Q_{j}( z )+ \ su m _{r= 1}^ {j}\binom{j }{r} z ^{j- r} \ z eta'( -r)-\ fr a c{ z ^{j+1}}{j+1}\gamma\ \[ 4pt] &\qquad + \sum _{k=1}^{\i n f t y}\left\ { -(z+k)^{k}\log \lef t(1+\frac{ z }{k}\rig ht) + \ sum_{r=0} ^ { j}\binom
+1} \binom{j}{r}\right\} _ _ _ \frac{z^{j+1}}{k} __ __ _ + (\mbox{polynomial_of $k$})\\[8pt] _ & \qquad_=_\frac{z^{j+1}}{j+1}\frac{1}{k} + (\mbox{a polynomial of_$k$}). _ \end{aligned}$$]{}_Hence,_if_we put [ $$\begin{aligned} _ & B(k,z) :=_(I)_{k}+(II)_{k}+(III)_{k}+(IV)_{k}\\[4pt] _ & \qquad - \frac{1}{j+1} \left\{ __ _\sum_{r=0}^{j+1}\binom{j+1}{r} \frac{(-1)^{r}z^{r+1}}{r+1}B_{j+1-r}(z)\right\} _ \left(\frac{1}{k}-\frac{1}{k+1}\right), \end{aligned}$$]{}_then, $B(k,z)$ is_a_polynomial_of $k$, $z$ and_it satisfies $$\frac{B_{j+1}(z+k+1)}{j+1} _\log\left(1+\frac{z}{k}\right) + B(k,z)_= O(k^{-2}) \quad \mbox{as}_\quad k\to\infty.$$ By Lemma \[eqn;35\], we_have $$B(k.z)=P_{j}(z+k+1)-P_{j}(z+k). \label{eqn;317}$$ By_(\[eqn;316\]) and_(\[eqn;317\]), we can deduce (\[eqn;315\]). [_$$\begin{aligned} _ &_K_{j}(z) = Q_{j}(z)+_\sum_{r=1}^{j}\binom{j}{r}z^{j-r} _\zeta'(-r)-\frac{z^{j+1}}{j+1}\gamma\\[4pt] _ & \qquad + \sum_{k=1}^{\infty}\left\{ __ _-(z+k)^{k}\log_\left(1+\frac{z}{k}\right) _ _ _+_\sum_{r=0}^{j}\binom
stick anymore to the large dijet relative rapidity region in the BFKL Pomeron manifestations hunting, since, from the one hand, we include the region of the moderate rapidity intervals into our consideration and, from the other hand, the resummation effects are quite pronounced at the moderate rapidity region. We present also in Figs. 2,3 estimations for NLO BFKL effects using the results of Ref. [@Cor95], where conformal NLO contributions to the Lipatov’s eigenvalues were calculated. The estimations incorporate the NLO conformal corrections to the Lipatov’s eigenvalues (see Fig. 4) and the NLO CTEQ3M structure functionsΒ [@Lai94]. We should note here that the extraction of data on high-$k_{\perp}$ jets from the event samples in order to compare them with the BFKL Pomeron predictions should be different from the algorithms directed to a comparison with perturbative QCD predictions for the hard processes. These algorithms, motivated by the strong $k_{\perp}$-ordering of the hard QCD regime, employ hardest-$k_{\perp}$ jet selection (see, e.g., Ref. [@Alg94]). It is doubtful that one can reconcile these algorithms with the weak $k_{\perp}$-diffusion and the strong rapidity ordering of the semi-hard QCD regime, described by the BFKL resummation. We also note that our predictions should not be compared with the preliminary data [@Heu94] extracted by the most forward/backward jet selection criterion. Obviously, one should include for tagging all the registered pairs of jets (not only the most forward–backward pair) to compare with our predictions. In particular, to make a comparison with Figs. 2,3, one should sum up all the registered $x$-symmetric dijets ($x_1=x_2$) with transverse momenta harder than $k_{\perp min}$. We thank E.A.Kuraev and L.N.Lipatov for stimulating discussions. We are grateful to A.J.Sommerer, J.P.Vary, and B.-L.Young for their kind hospitality at the IITAP, Ames, Iowa and support. V.T.K. is indebted to S.Ahn, C.L.Kim, T.Lee, A.Petridis, J.Qiu, C.R.Schmidt, S.I.T
stick anymore to the large dijet relative rapidity area in the BFKL Pomeron materialization hunting, since, from the one hand, we admit the region of the moderate celerity intervals into our consideration and, from the early hand, the resummation effects are quite pronounced at the moderate celerity area. We present also in Figs. 2,3 estimate for NLO BFKL effects using the results of Ref. [ @Cor95 ], where conformal NLO contribution to the Lipatov ’s eigenvalues were account. The estimations incorporate the NLO conformal correction to the Lipatov ’s eigenvalues (see Fig. 4) and the NLO CTEQ3 M structure function Β  [ @Lai94 ]. We should note here that the extraction of data on high-$k_{\perp}$ jets from the consequence samples in club to compare them with the BFKL Pomeron predictions should be different from the algorithms target to a comparison with perturbative QCD predictions for the hard processes. These algorithms, motivated by the solid $ k_{\perp}$-ordering of the intemperate QCD regime, employ hardest-$k_{\perp}$ jet selection (see, e.g., Ref. [ @Alg94 ]). It is doubtful that one can reconcile these algorithms with the watery $ k_{\perp}$-diffusion and the strong rapidity order of the semi - unvoiced quantum chromodynamics regime, described by the BFKL resummation. We also note that our predictions should not be compared with the preliminary data [ @Heu94 ] extracted by the about forward / backward jet selection criterion. Obviously, one should include for tagging all the registered pairs of jets (not only the most forward – backward couple) to compare with our predictions. In finical, to do a comparison with Figs. 2,3, one should sum up all the registered $ x$-symmetric dijets ($ x_1 = x_2 $) with transverse momentum harder than $ k_{\perp min}$. We thank E.A.Kuraev and L.N.Lipatov for stimulating discussions. We are grateful to A.J.Sommerer, J.P.Vary, and B.-L.Young for their kind cordial reception at the IITAP, Ames, Iowa and support. V.T.K. is indebted to S.Ahn, C.L.Kim, T.Lee, A.Petridis, J.Qiu, C.R.Schmidt, S.I.T
stlck anymore to the large dijet relative rapidivy regikn in thd BFKL Pomeron manifestationd yuntibg, since, from the one fand, we ijclude tye rtgion of the modeczte raplbity jktervclw into our conxideration and, from the mtfex hand, the resummation effects are qtite prpnlunced at the iodegaee rziibity region. We present also in Fjgs. 2,3 esuimations for NLO NFKL effects using the resklts of Ref. [@Cor95], where fonformal NOO cjbtributions go the Lipatov’s eigenvzlues were calculated. The estimxtionx incorporqtw tjg NLO confornal cjrrections to the Li[atov’s rigenvalues (sec Fig. 4) abd the NLO CTEQ3M strurture functionsΒ [@Lai94]. We should nmtz here that the extraxtuon ox dada ov hieh-$k_{\ker'}$ jsts frlm vhe event szmples in oeder to compare thek rpyh the BFKL Lomerog [redictions should be different from tht algkrithms directed to a cimparison with perturhative QCQ predictions for the hard processes. These algoridhms, jutirqted ct hhe strong $k_{\perp}$-ordering of the hard QCD regiis, tmpkoy hardest-$k_{\pevp}$ jet selection (sre, e.b., Ref. [@Alg94]). It ir doubcrum that one can reclncile jhese qlgorithmf wiyh the weak $k_{\perp}$-diffusion qnd the stroug eapidity ordering lf the semi-kard QVD rebime, described by the BYKL reaummation. Wf also nofd that our prediztipnv should not be compared wyth the pcelimnnary daga [@Hgu94] extrwcted by tje most forward/backward uet sglectimn criteriln. Obviously, one should include for tagging akl thv registexed palrs of jets (not only the most forwarb–backwxrd pair) tk compace with our [redictions. It particular, vo make a compariwon witf Figs. 2,3, one shoild sum ui cll the rwgistered $x$-symmetrlc digsts ($x_1=x_2$) with traurcerse momenta hsrddr ehwn $k_{\[arp min}$. We thdnk D.A.Kjtaev xnd L.N.Lipatix fot stimulating discusviona. We are grateful yo A.J.Sommeter, J.P.Varr, and B.-L.Young for their kind hodpitanitb at tne YITAP, Ames, Iowa and support. V.T.I. is indehteq to S.Ahn, C.L.Him, B.Lee, A.Petridis, J.Qiu, C.R.Schmidt, S.I.T
stick anymore to the large dijet relative in BFKL Pomeron hunting, since, from the of the moderate intervals into our and, from the other hand, the effects are quite pronounced at the moderate rapidity region. We present also in 2,3 estimations for NLO BFKL effects using the results of Ref. [@Cor95], where NLO to Lipatov’s were calculated. The estimations incorporate the NLO conformal corrections to the Lipatov’s eigenvalues (see Fig. 4) the NLO CTEQ3M structure functions [@Lai94]. We should here that the extraction data on high-$k_{\perp}$ jets from event in order compare with BFKL Pomeron predictions be different from the algorithms directed to a comparison with perturbative QCD predictions for the hard processes. algorithms, motivated strong $k_{\perp}$-ordering the QCD employ hardest-$k_{\perp}$ jet e.g., Ref. [@Alg94]). It is doubtful reconcile these algorithms with the weak $k_{\perp}$-diffusion and strong rapidity of the semi-hard QCD regime, described the BFKL resummation. We also note that our should not be compared with the preliminary data [@Heu94] extracted by the most forward/backward jet Obviously, one should include tagging all the pairs jets only most forward–backward to compare with our predictions. In particular, to make a comparison Figs. 2,3, one should sum up all the registered $x$-symmetric with momenta harder than min}$. We thank E.A.Kuraev L.N.Lipatov stimulating discussions. We are A.J.Sommerer, and kind at IITAP, Ames, Iowa and V.T.K. is indebted to S.Ahn, T.Lee, A.Petridis, J.Qiu, C.R.Schmidt,
stick anymore to the large dijEt relative RapidIty RegIoN in tHe BFkL Pomeron manifEStatIons hunting, since, from thE one hAnD, We inCLuDe the Region oF ThE MOdeRaTe RapIdITy InterValS into ouR consideraTioN aNd, from the othER hAnd, the resuMmaTion effects aRe qUite prOnOunCEd at tHe mOderaTe rapiDIty regIon. We presEnT Also in fIgs. 2,3 estiMATiOns fOr NLO BFKL effects uSInG The results of ReF. [@Cor95], whErE CoNFOrmAl NlO contribuTiOns to THe LipatOV’s EIGEnvALues were calcuLated. The estIMatIons inCoRpoRAte the nLO coNfORmaL correctionS to tHe Lipatov’S eigenVAlues (seE fig. 4) and tHe NLO CtEQ3m stRuctURe FuNctIoNSΒ [@LaI94]. we ShoULd nOte here tHaT tHe extRactION OF datA on High-$K_{\perp}$ Jets from the evEnt SampLEs iN ordeR to coMparE tHem wiTh the BfKL PoMeRon predictions sHoulD be differEnt FrOm tHe AlgorIThms diRecTed To a compArison wITh pErTURBaTive QCD predictions FoR THe Hard procEsses. THEsE aLGorithms, MoTivAted BY The stRong $K_{\PeRp}$-orderiNg of thE HaRd qCD regiMe, Employ HaRdeSt-$k_{\Perp}$ jET selEction (See, e.g., Ref. [@alg94]). It IS doubtful that oNE can reconcile THeSE AlGOritHms With the weak $K_{\perP}$-DiffUsioN AnD thE StronG rapiDiTY oRDering of the semi-hard qCd regimE, descRibed by the BFKl resummatiON. wE also notE thaT OuR Predictions shoUld noT be compareD With the pRelimInary datA [@Heu94] extraCTEd by the mOst ForWarD/baCKWaRd jet selectioN CRiteRiOn. ObvioUslY, one shoUld IncLudE foR tAgging all The regisTeReD pAiRs oF jets (NOt only thE mOst FoRwaRd–bacKWard paIr) to cOmpaRe WiTH ouR predicTIoNS. in paRtIcUlar, To mAkE a comPariSOn wIth Figs. 2,3, One should Sum UP all ThE rEgisterEd $x$-symmetric dIjEts ($x_1=x_2$) with tRaNsvErse moMENta hardeR than $k_{\perp min}$. We thank E.A.KURaev and l.N.LIpatoV for StimulatiNg dIscussIonS. we are gRatefuL to A.J.soMmeRER, J.P.VaRY, AnD B.-L.yoUng for theiR KInd HospiTaLity At the IItAP, Ames, Iowa and suppORt. V.t.K. is indebted tO S.AHn, C.L.kIM, T.lee, a.peTRidIs, j.qiu, c.r.schmidt, S.I.T
stick anymore to the larg e dijet re lativ e r api di ty r egio n in the BFKLP omer on manifestations hunt ing,si n ce,f ro m the one ha n d, w e i nc lu deth e r egion of the mo derate rap idi ty intervals i n to our consi der ation and, f rom the o th erh and,the resu mmatio n effec ts are qu it e prono u nced at t he mod erate rapidity re g io n . We presentalso i nF ig s . 2, 3 e stimations f or NL O BFKL e f fe c t s us i ng the result s of Ref. [ @ Cor 95], w he rec onform al NL Oc ont ributions t o th e Lipatov ’s eig e nvalues were ca lculat ed. Th e es t im at ion si nco r po rat e th e NLO co nf or mal c orre c t i o ns t o t he L ipato v’s eigenvalu es(see Fig . 4)and t he N LO CTEQ 3M str uctur efunctionsΒ [@Lai 94]. We shou ldno tehe re th a t theext rac tion of data o n hi gh - $ k _{ \perp}$ jets fromth e ev ent samp les in or de r to comp ar e t hemw i th th e BF K LPomeronpredic t io ns should b e diff er ent fr om th e alg orithm s direct ed to a comparison w i th perturbati v eQ C Dp redi cti ons for the har d pro cess e s. Th e se al gorit hm s ,m otivated by the str on g $k_{ \perp }$-ordering o f the hard Q C D regime , em p lo y hardest-$k_{\ perp} $ jet sele c tion (se e, e. g., Ref. [@Alg94] ) . It is d oub tfu l t hat o ne can reconcil e thes ealgorit hms with t hewea k $ k_{ \p erp}$-dif fusion a nd t he s tro ng ra p idity or de rin gofthe s e mi-har d QCD reg im e, des cribedb yt h e BF KL r esum mat io n. We als o no te that our pred ict i onssh ou ld notbe compared w it h the prel im ina ry dat a [@Heu94] extracted by the mostf orward/ bac kward jet selectio n c riteri on. Obviou sly, o ne sh ou ldi n clude f or ta gg ing all th e reg ister ed pai rs of j ets (not only them ost forward–back war d pa i r )toc om p are w i tho u r predictions.In particu la r ,to make ac omp ar ison wi th Figs . 2,3 , one sh ould sumup all th eregi s t ere d $x$-symm etric di jets ($x_ 1 =x_2$ ) w ith t ran sverse m ome nta h ardert han $k_{ \perpmi n}$. We th an k E.A.Ku raev and L.N.Lipatov fo r stim ulati ngdiscussio ns. Weare grate fulto A.J.Som mer er, J.P. Var y , and B.- L .Y oun g forthei r kind hos p it ali t y a t the IITAP , A mes , Iow a a n d supp ort. V.T.K. is indebt e d to S.Ahn, C. L.Ki m , T. Lee , A.P et ridis, J.Qiu,C.R .S c h midt, S. I. T
stick_anymore to_the large dijet relative_rapidity region_in_the BFKL_Pomeron_manifestations hunting, since,_from the one_hand, we include the_region of the_moderate_rapidity intervals into our consideration and, from the other hand, the resummation effects are_quite_pronounced at_the_moderate_rapidity region. We present also in_Figs. 2,3 estimations for NLO_BFKL effects_using the results of Ref. [@Cor95], where conformal_NLO_contributions to the_Lipatov’s eigenvalues were calculated. The estimations incorporate the NLO_conformal corrections to the Lipatov’s eigenvalues_(see Fig. 4)_and_the_NLO CTEQ3M structure functionsΒ [@Lai94]. We_should note here that the extraction_of data on high-$k_{\perp}$ jets from_the event samples in order to compare_them with the BFKL Pomeron predictions_should be different from the_algorithms directed_to a comparison with perturbative_QCD predictions for_the hard_processes. These algorithms,_motivated by the strong $k_{\perp}$-ordering of_the hard QCD_regime, employ hardest-$k_{\perp}$ jet selection (see,_e.g.,_Ref. [@Alg94]). It_is_doubtful_that one_can reconcile these_algorithms_with the_weak_$k_{\perp}$-diffusion and the strong rapidity ordering_of_the semi-hard QCD regime, described by the_BFKL resummation. We also_note_that our predictions should_not be compared with the_preliminary data [@Heu94] extracted by the_most forward/backward_jet selection_criterion. Obviously, one should include for tagging all the registered pairs_of jets (not only the most_forward–backward pair) to compare_with our_predictions._In particular, to_make_a comparison_with Figs. 2,3, one should sum up_all the_registered $x$-symmetric dijets ($x_1=x_2$) with transverse_momenta harder than $k_{\perp_min}$. We_thank E.A.Kuraev and L.N.Lipatov for stimulating_discussions. We are grateful to A.J.Sommerer,_J.P.Vary, and B.-L.Young for their_kind_hospitality_at the IITAP, Ames, Iowa_and support. V.T.K. is indebted to_S.Ahn, C.L.Kim, T.Lee,_A.Petridis, J.Qiu, C.R.Schmidt, S.I.T
hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, Newey, and Robins (2018) introduce machine learning methods for choosing the functions to include in the vector $A(x)$. This method can be combined with machine learning methods for estimating $E[q_{i}|x_{i}]$ to construct a double machine learning estimator of average surplus, as shown in Chernozhukov, Hausman, and Newey (2018). In parametric models moment functions like those in equation (\[lr series\]) are used to β€œpartial out” nuisance parameters $\zeta.$ For maximum likelihood these moment functions are the basis of Neyman’s (1959) C-alpha test. Wooldridge (1991) generalized such moment conditions to nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015) to GMM. What is novel here is their use in the construction of semiparametric estimators and the interpretation of the estimated LR moment functions $\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zeta}_{\ell})$ and an influence adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$. Estimating the Influence Adjustment with First Step Smoothing ------------------------------------------------------------- The adjustment term can be estimated in a general way that allows for kernel density, locally linear regression, and other kernel smoothing estimators for the first step. The idea is to differentiate with respect to the effect of the $i^{th}$ observation on sample moments. Newey (1994b) used a special case of this approach to estimate the asymptotic variance of a functional of a kernel based semiparametric or nonparametric estimator. Here we extend this method to a wider class of first step estimators, such as locally linear regression, and apply it to estimate the adjustment term for construction of LR moments. We will describe this estimator for the case where $\gamma$ is a vector of functions of a vector of variables $x.$ Let $h(z,x,\gamma)$ be a vector of functions of a data observation $z$, $x$, and a possible realized value of $\gamma$ (i.e. a vector of real numbers $\gamma$). Also let $\hat{h}_{\ell }(x,\
hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, Newey, and Robins (2018) introduce machine learning methods for choose the function to include in the vector $ A(x)$. This method can be combined with car learning methods for estimating $ E[q_{i}|x_{i}]$ to manufacture a double machine learn estimator of average excess, as shown in Chernozhukov, Hausman, and Newey (2018). In parametric models moment functions like those in equality (\[lr series\ ]) are used to β€œ partial out ” nuisance argument $ \zeta.$ For maximum likelihood these moment function are the basis of Neyman ’s (1959) C - alpha test. Wooldridge (1991) generalized such consequence conditions to nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015) to GMM. What is novel here is their use in the structure of semiparametric estimators and the rendition of the estimated LR moment functions $ \psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original consequence function $ m(z_{i},\beta,\hat{\zeta}_{\ell})$ and an influence adjustment $ \phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$. Estimating the Influence Adjustment with First Step Smoothing ------------------------------------------------------------- The adjustment term can be estimate in a general way that allows for kernel density, locally linear regression, and other kernel smoothing estimators for the first measure. The idea is to differentiate with obedience to the impression of the $ i^{th}$ observation on sample moment. Newey (1994b) use a special case of this overture to estimate the asymptotic variance of a functional of a kernel based semiparametric or nonparametric estimator. Here we extend this method to a wider class of first step estimators, such as locally analogue regression, and give it to calculate the allowance term for construction of LR moments. We will identify this estimator for the case where $ \gamma$ is a vector of functions of a vector of variable star $ x.$ Let $ h(z, x,\gamma)$ be a vector of functions of a data notice $ z$, $ x$, and a possible realized value of $ \gamma$ (i.e. a vector of real numbers $ \gamma$). Also permit $ \hat{h}_{\ell } (x,\
hat{\Osi}_{\ell}a(x_{i}).$ Chernozhukov, Nedey, and Robins (2018) introdnce macgine leafning methods for choosing tie fynctiins to include in the xector $A(x)$. This merhod xan be comujned wibk macglne lzacning methods fpr estimathng $E[q_{i}|x_{i}]$ to cmnrtxuct a double machine learning estimwtor of agerage surplus, as xrown pn Chernozhukov, Hausman, and Newsy (2018). In perametric modelx moment functions like thlse ln equation (\[lr serles\]) are usee to β€œpartial out” vuisance parameters $\zeja.$ For maximum likelihood these oomenc functions aee hve basis of Neymwn’s (1959) C-alpha bvst. Woondridge (1991) generalized xuci monent conditions to noilinear least squaref and Lee (2005), Yera et al. (2010), and Chernizyukov et dl. (2015) gi GOM. Shet js novfl iere is thejr use in tye construction of xeiplarametric eatimatjrf and the interpretation of the estimattd LR moment functions $\psi(z_{i},\veta,\hat{\zeta}_{\ell},\hat{\Psi}_{\epl})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zetd}_{\ell})$ ena au infljwnfe adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$. Estyjauinn the Influence Cdjustment with Gigsy Step Smoothivg ------------------------------------------------------------- The csjhstment term can bf estimwted un a genewal eay that allows for kernel eensity, locajoy linear regressiln, and othex kerngl smopthing estimators for tke firat step. The idea is fu differentiate ditm raspect to the effect of thq $i^{th}$ obsxrvatnon on sxmplg momenes. Newey (1994b) used a special case of hhis cpprodch to estlmate the asymptotic variance oh a functionak mf d kernel basec semiparametwic or nonparaketric zstimagor. Here wv extend vhis method eo a wider cldds of first vtep estymatirs, wuch as uocally linear regression, and appoy it to estimate bhe aanustment term fir xonstruction of LR moiejtx. Wq will descrite tfis rstimxtor for thc cxse ehere $\gamma$ is a vecdor kf functions of a fegtor of vqriables $x.$ Let $h(z,x,\gamka)$ be a vector of vunctmons oh a daya jbservation $z$, $x$, and a possible realized vajue of $\gamma$ (i.e. q vector of xeal numbers $\gamma$). Also let $\hat{h}_{\ell }(x,\
hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, Newey, and Robins (2018) introduce methods choosing the to include in can combined with machine methods for estimating to construct a double machine learning of average surplus, as shown in Chernozhukov, Hausman, and Newey (2018). In parametric moment functions like those in equation (\[lr series\]) are used to β€œpartial out” parameters For likelihood moment functions are the basis of Neyman’s (1959) C-alpha test. Wooldridge (1991) generalized such moment conditions nonlinear least squares and Lee (2005), Bera et (2010), and Chernozhukov et (2015) to GMM. What is here their use the of estimators and the of the estimated LR moment functions $\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zeta}_{\ell})$ and influence adjustment the Influence with Step ------------------------------------------------------------- The adjustment be estimated in a general way kernel density, locally linear regression, and other kernel estimators for first step. The idea is to with respect to the effect of the $i^{th}$ on sample moments. Newey (1994b) used a special case of this approach to estimate the of a functional of kernel based semiparametric nonparametric Here extend method to wider class of first step estimators, such as locally linear regression, apply it to estimate the adjustment term for construction of We describe this estimator the case where $\gamma$ a of functions of a variables Let vector functions a data observation $z$, and a possible realized value $\gamma$ (i.e. a vector let $\hat{h}_{\ell }(x,\
hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, NEwey, and RobIns (2018) inTroDucE mAchiNe leArning methods fOR choOsing the functions to incLude iN tHE vecTOr $a(x)$. ThiS method CAn BE ComBiNeD wiTh MAcHine lEarNing metHods for estImaTiNg $E[q_{i}|x_{i}]$ to conSTrUct a double MacHine learning EstImator Of AveRAge suRplUs, as sHown in cHernozHukov, HausMaN, And NewEY (2018). In paraMETrIc moDels moment functioNS lIKe those in equatIon (\[lr sErIEs\]) ARE usEd tO β€œpartial ouT” nUisanCE parameTErS $\ZETa.$ FOR maximum likelIhood these mOMenT functIoNs aRE the baSis of neYMan’S (1959) C-alpha test. woolDridge (1991) genEralizED such moMEnt condItions To nOnlIneaR LeAsT sqUaREs aND LEe (2005), BERa eT al. (2010), and ChErNoZhukoV et aL. (2015) TO gmM. WhAt iS novEl herE is their use in The ConsTRucTion oF semiParaMeTric eStimatOrs anD tHe interpretatioN of tHe estimatEd Lr mOmeNt FunctIOns $\psi(Z_{i},\bEta,\Hat{\zeta}_{\Ell},\hat{\PSI}_{\elL})$ aS THE sUm of an original momeNt FUNcTion $m(z_{i},\bEta,\hat{\ZEtA}_{\eLL})$ and an inFlUenCe adJUStmenT $\phi(Z_{I},\bEta,\hat{\zeTa}_{\ell},\hAT{\PSi}_{\Ell})$. EstiMaTing thE INflUenCe AdjUStmeNt with first SteP SmooTHing ------------------------------------------------------------- The adjustmENt term can be esTImATEd IN a geNerAl way that alLows FOr keRnel DEnSitY, LocalLy linEaR ReGRession, and other kernEl SmoothIng esTimators for thE first step. tHE Idea is to DiffEReNTiate with respeCt to tHe effect of THe $i^{th}$ obsErvatIon on samPle momentS. nEwey (1994b) useD a sPecIal CasE OF tHis approach to ESTimaTe The asymPtoTic variAncE of A fuNctIoNal of a kerNel based SeMiPaRaMetRic or NOnparameTrIc eStImaTor. HeRE we extEnd thIs meThOd TO a wIder claSS oF FIrst StEp EstiMatOrS, such As loCAllY linear RegressioN, anD ApplY iT tO estimaTe the adjustmeNt Term for conStRucTion of lr Moments. WE will describe this estimaTOr for thE caSe wheRe $\gaMma$ is a vecTor Of funcTioNS of a veCtor of VariaBlEs $x.$ lET $h(z,x,\gAMMa)$ Be a VeCtor of funcTIOns Of a daTa ObseRvation $Z$, $x$, and a possible realIZed Value of $\gamma$ (i.E. a vEctoR OF rEal NUmBErs $\GaMMa$). ALSO let $\hat{h}_{\ell }(x,\
hat{\Psi}_{\ell}a(x_{i}).$ Chernozh ukov, Ne wey ,andRobi ns (2018) intr o duce machine learning meth ods f or choo s in g the functi o ns t o i nc lu dein th e vec tor $A(x)$ . This met hod c an be combin e dwith machi nelearning met hod s fores tim a ting$E[ q_{i} |x_{i} ] $ to c onstructad oublem achinel e ar ning estimator of ave r ag e surplus, as s hown i nC he r n ozh uko v, Hausman ,and N e wey (20 1 8) . Inp arametric mod els momentf unc tionsli ket hose i n equ at i on(\[lr serie s\]) are used to β€œp a rtial o u t” nuis ance p ara met ers$ \z et a.$ F o r m a xi mum lik elihoodth es e mom entf u n c tion s a re t he ba sis of Neyman ’s(195 9 ) C -alph a tes t. W oo ldrid ge (19 91) g en eralized such m omen t conditi ons t o n on linea r least sq uar es andLee (20 0 5), B e r a e t al. (2010), andCh e r no zhukov e t al.( 20 15 ) to GMM. W hat isn o vel h erei stheir us e in t h eco nstruct io n of s em ipa ram etric esti mators and the inte r pretation of t h e estimated L R m o m en t fun cti ons $\psi(z _{i} , \bet a,\h a t{ \ze t a}_{\ ell}, \h a t{ \ Psi}_{\ell})$ as th esum of an o riginal momen t function $ m (z_{i},\ beta , \h a t{\zeta}_{\ell })$ a nd an infl u ence adj ustme nt $\phi (z_{i},\b e t a,\hat{\ zet a}_ {\e ll} , \ ha t{\Psi}_{\ell } ) $. Es timatin g t he Infl uen ceAdj ust me nt with F irst Ste pSm oo th ing ---- - -------- -- --- -- --- ----- - ------ ----- ---- -- -- - --- ------- - -T he a dj us tmen t t er m can bee sti mated i n a gener alw ay t ha tallowsfor kernel de ns ity, local ly li near r e g ression, and other kernel smoot h ing est ima torsforthe first st ep. Th e i d ea isto dif feren ti ate w ith r e s pe ctto the effec t ofthe $ i^ {th} $ obser vation on sample m o men ts. Newey (19 94b ) us e d a sp e ci a l c as e of t his approach to estimateth e a symptoticv ari an ce of a functi onalo f a ker nel based semipara me tric o r n onparametr ic estim ator. Her e we e x te nd th ismethod t o a wide r clas s of firs t step e stimat ors,su ch as lo cally linear regression , andapply it to estim ate the adjustme nt t erm for co nst ruc tionofL R mom ents . Wew ill d escr i be this e s ti mat o r f or the case w h ere $\ga mma $ is avect or of functions o f a vector of v aria b l es$x. $ Let $ h(z,x,\gamma)$ be a v ector of f unctions of a dataob s ervat ion $z $, $x$ , and a p os s ible r eali zed value of $\ ga m ma$ (i. e. a vector ofre al num bers $ \ gamm a $ ). Also let $\ha t{h}_ { \ ell } ( x,\
hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, Newey,_and Robins_(2018) introduce machine learning_methods for_choosing_the functions_to_include in the_vector $A(x)$. This_method can be combined_with machine learning_methods_for estimating $E[q_{i}|x_{i}]$ to construct a double machine learning estimator of average surplus, as_shown_in Chernozhukov,_Hausman,_and_Newey (2018). In parametric models moment_functions like those in equation_(\[lr series\])_are used to β€œpartial out” nuisance parameters $\zeta.$_For_maximum likelihood these_moment functions are the basis of Neyman’s (1959) C-alpha_test. Wooldridge (1991) generalized such moment_conditions to nonlinear_least_squares_and Lee (2005), Bera_et al. (2010), and Chernozhukov et_al. (2015) to GMM. What is_novel here is their use in the_construction of semiparametric estimators and the_interpretation of the estimated LR_moment functions_$\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of_an original moment_function $m(z_{i},\beta,\hat{\zeta}_{\ell})$_and an influence_adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$. Estimating the Influence Adjustment with_First Step Smoothing ------------------------------------------------------------- The_adjustment term can be estimated in_a_general way that_allows_for_kernel density,_locally linear regression,_and_other kernel_smoothing_estimators for the first step. The_idea_is to differentiate with respect to the_effect of the $i^{th}$_observation_on sample moments. Newey_(1994b) used a special case_of this approach to estimate the_asymptotic variance_of a_functional of a kernel based semiparametric or nonparametric estimator. Here we_extend this method to a wider_class of first step_estimators, such_as_locally linear regression,_and_apply it_to estimate the adjustment term for construction_of LR_moments. We will describe this estimator for_the case where $\gamma$_is_a vector of functions of a_vector of variables $x.$ Let $h(z,x,\gamma)$_be a vector of functions_of_a_data observation $z$, $x$, and_a possible realized value of $\gamma$_(i.e. a vector_of real numbers $\gamma$). Also let $\hat{h}_{\ell }(x,\
'}$ $D^{(\mathrm{e}) \pm}_{m m'} := 0, D^{(\mathrm{h}) \pm}_{m m'} := 0$ Input $| \Psi^N_{\mathrm{gs}} \rangle$ to $\mathcal{C}_{m m'}$ and measure the ancillae $| q_1^{\mathrm{A}} \rangle \otimes | q_0^{\mathrm{A}} \rangle :=$ observed ancillary state $E :=$ QPE$(| \widetilde{\Psi} \rangle, \mathcal{H})$ Find $E$ among $\{ E_\lambda^{N - 1} \}_\lambda$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{h}) +}_{\lambda m m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{h}) -}_{\lambda m m'} += 1$ Find $E$ among $\{ E_\lambda^{N + 1} \}_\lambda$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) +}_{\lambda m m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) -}_{\lambda m m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) \pm}_{m m'} *= 1/N_{\mathrm{meas}}, D^{(\mathrm{h}) \pm}_{m m'} *= 1/N_{\mathrm{meas}} $ $D^{(\mathrm{e}) \pm}_{m m'}, D^{(\mathrm{h}) \pm}_{m m'}$ --- abstract: 'Let $\Pi$ and $\Gamma$ be homogeneous Poisson point processes on a fixed set of finite
' } $ $ D^{(\mathrm{e }) \pm}_{m m' }: = 0, D^{(\mathrm{h }) \pm}_{m m' }: = 0 $ Input $ | \Psi^N_{\mathrm{gs } } \rangle$ to $ \mathcal{C}_{m m'}$ and measure the ancillae $ | q_1^{\mathrm{A } } \rangle \otimes | q_0^{\mathrm{A } } \rangle: = $ observed ancillary state $ east: = $ QPE$(| \widetilde{\Psi } \rangle, \mathcal{H})$ receive $ E$ among $ \ { E_\lambda^{N - 1 } \}_\lambda$ $ { \mathcode`+=\numexpr\mathcode`+ + " 1000\relax \mathcode`*=\numexpr\mathcode '* + " 1000\relax } D^{(\mathrm{h }) + } _ { \lambda m m' } + = 1 $ $ { \mathcode`+=\numexpr\mathcode`+ + " 1000\relax \mathcode`*=\numexpr\mathcode '* + " 1000\relax } D^{(\mathrm{h }) -}_{\lambda m m' } + = 1 $ Find $ E$ among $ \ { E_\lambda^{N + 1 } \}_\lambda$ $ { \mathcode`+=\numexpr\mathcode`+ + " 1000\relax \mathcode`*=\numexpr\mathcode '* + " 1000\relax } D^{(\mathrm{e }) + } _ { \lambda m m' } + = 1 $ $ { \mathcode`+=\numexpr\mathcode`+ + " 1000\relax \mathcode`*=\numexpr\mathcode '* + " 1000\relax } D^{(\mathrm{e }) -}_{\lambda m m' } + = 1 $ $ { \mathcode`+=\numexpr\mathcode`+ + " 1000\relax \mathcode`*=\numexpr\mathcode '* + " 1000\relax } D^{(\mathrm{e }) \pm}_{m m' } * = 1 / N_{\mathrm{meas } }, D^{(\mathrm{h }) \pm}_{m m' } * = 1 / N_{\mathrm{meas } } $ $ D^{(\mathrm{e }) \pm}_{m m' }, D^{(\mathrm{h }) \pm}_{m m'}$ --- abstract:' lease $ \Pi$ and $ \Gamma$ be homogeneous Poisson degree processes on a fixed stage set of finite
'}$ $D^{(\mahhrm{e}) \pm}_{m m'} := 0, D^{(\mathrm{h}) \po}_{m m'} := 0$ Input $| \Psi^N_{\mathcm{gs}} \rahgle$ to $\oathcal{C}_{m m'}$ and measure the enciolae $| q_1^{\mathrm{A}} \rangle \otimer | q_0^{\mathrl{A}} \ranglw :=$ ouserved ancillarb state $C :=$ QPS$(| \widztmlde{\Psi} \rangle, \kathcal{H})$ Fhnd $E$ among $\{ E_\naobba^{N - 1} \}_\lambda$ ${ \mathcode`+=\numexpr\mathcoqe`+ + "1000\relsx \mathcode`*=\numgxpr\msehcosv`* + "1000\relax }D^{(\mathrm{h}) +}_{\lambda m m'} += 1$ ${ \mathcove`+=\numexpr\mathcoce`+ + "1000\relax \mathcode`*=\numexpg\matjcode`* + "1000\relax }D^{(\mathrl{h}) -}_{\lambda m m'} += 1$ Dind $E$ among $\{ E_\lambda^{N + 1} \}_\lambda$ ${ \jathcode`+=\numexpr\mathcode`+ + "1000\relax \matkcode`*=\numexpt\mztjwode`* + "1000\relax }V^{(\mathri{e}) +}_{\lambda m m'} += 1$ ${ \madhcode`+=\nimexpr\mathcode`+ + "1000\rxlax \mathcode`*=\numexpr\maticode`* + "1000\relax }D^{(\mathrm{e}) -}_{\lambda m m'} += 1$ ${ \mathcode`+=\numezpe\mathwode`+ + "1000\reuqx \mzticose`*=\numedpr\jathcode`* + "1000\delax }D^{(\mathrn{e}) \pm}_{m m'} *= 1/N_{\mathrm{mess}}, D^{(\matrri{h}) \pm}_{m m'} *= 1/N_{\mathrm{meas}} $ $D^{(\katgrm{e}) \pm}_{m m'}, D^{(\mathrm{h}) \pm}_{m m'}$ --- abstract: 'Let $\Pi$ anf $\Gamma$ bq homogeneous Poisson point processes on a fixed vet oh winnbc
'}$ $D^{(\mathrm{e}) \pm}_{m m'} := 0, D^{(\mathrm{h}) := Input $| \rangle$ to $\mathcal{C}_{m $| \rangle \otimes | \rangle :=$ observed state $E :=$ QPE$(| \widetilde{\Psi} \rangle, Find $E$ among $\{ E_\lambda^{N - 1} \}_\lambda$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* "1000\relax }D^{(\mathrm{h}) +}_{\lambda m m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + }D^{(\mathrm{h}) m += Find $E$ among $\{ E_\lambda^{N + 1} \}_\lambda$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) m m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax + "1000\relax }D^{(\mathrm{e}) -}_{\lambda m'} += 1$ ${ \mathcode`+=\numexpr\mathcode`+ "1000\relax + "1000\relax \pm}_{m *= D^{(\mathrm{h}) \pm}_{m m'} 1/N_{\mathrm{meas}} $ $D^{(\mathrm{e}) \pm}_{m m'}, D^{(\mathrm{h}) \pm}_{m m'}$ --- abstract: 'Let $\Pi$ and $\Gamma$ be homogeneous Poisson processes on set of
'}$ $D^{(\mathrm{e}) \pm}_{m m'} := 0, D^{(\mathrm{h}) \pm}_{m m'} := 0$ INput $| \Psi^N_{\maThrm{gS}} \raNglE$ tO $\matHcal{c}_{m m'}$ and measure tHE ancIllae $| q_1^{\mathrm{A}} \rangle \otiMes | q_0^{\mAtHRm{A}} \rANgLe :=$ obsErved anCIlLARy sTaTe $e :=$ QPe$(| \wIDeTilde{\psi} \Rangle, \mAthcal{H})$ FinD $E$ aMoNg $\{ E_\lambda^{N - 1} \}_\laMBdA$ ${ \mathcode`+=\nUmeXpr\mathcode`+ + "1000\rElaX \mathcOdE`*=\nuMExpr\mAthCode`* + "1000\rElax }D^{(\mAThrm{h}) +}_{\lAmbda m m'} += 1$ ${ \maThCOde`+=\numEXpr\mathCODe`+ + "1000\RelaX \mathcode`*=\numexpr\mAThCOde`* + "1000\relax }D^{(\mathrM{h}) -}_{\lambDa M M'} += 1$ FIND $E$ aMonG $\{ E_\lambda^{N + 1} \}_\lAmBda$ ${ \maTHcode`+=\nuMExPR\MAthCOde`+ + "1000\relax \mathcOde`*=\numexpr\mAThcOde`* + "1000\relAx }d^{(\maTHrm{e}) +}_{\laMbda m M'} += 1$ ${ \mAThcOde`+=\numexpr\mAthcOde`+ + "1000\relax \mAthcodE`*=\Numexpr\MAthcode`* + "1000\Relax }D^{(\MatHrm{E}) -}_{\lamBDa M m'} += 1$ ${ \MatHcODe`+=\nUMeXpr\MAthCode`+ + "1000\relaX \mAtHcode`*=\NumeXPR\MAthcOde`* + "1000\RelaX }D^{(\matHrm{e}) \pm}_{m m'} *= 1/N_{\mathRm{mEas}}, D^{(\MAthRm{h}) \pm}_{M m'} *= 1/N_{\maThrm{MeAs}} $ $D^{(\maThrm{e}) \pM}_{m m'}, D^{(\mAtHrm{h}) \pm}_{m m'}$ --- abstracT: 'Let $\pi$ and $\GammA$ be HoMogEnEous POIsson pOinT prOcesses On a fixeD Set Of FINItE
'}$ $D^{(\mathrm{e}) \pm} _{m m'} := 0, D ^{( \ma th rm{h }) \ pm}_{m m'} :=0 $ In put $| \Psi^N_{\mathrm {gs}} \ r angl e $to $\ mathcal { C} _ { m m '} $and m e as ure t heancilla e $| q_1^{ \ma th rm{A}} \rang l e\otimes |q_0 ^{\mathrm{A} } \ rangle : =$o bserv edancil lary s t ate $E :=$ QPE$ (| \widet i lde{\Ps i } \ rang le, \mathcal{H})$ Fi n d $E$ among $\ { E_\l am b da ^ { N - 1} \}_\lambd a$ ${ \ mathcod e `+ = \ n ume x pr\mathcode`+ + "1000\re l ax \mat hc ode ` *=\num expr\ ma t hco de`* + "100 0\re lax }D^{( \mathr m {h}) +} _ {\lambd a m m' } + = 1 $ ${ \ ma thc od e `+= \ nu mex p r\m athcode` ++"1000 \rel a x \ma thc ode` *=\nu mexpr\mathcod e`* + " 1 000 \rela x }D^ {(\m at hrm{h }) -}_ {\lam bd a m m'} += 1$ F ind$E$ among $\ {E_\ la mbda^ { N + 1} \} _\l ambda$${ \ma t hco de ` + = \n umexpr\mathcode`++" 1 00 0\relax \mat h co de ` *=\numex pr \ma thco d e `* +"100 0 \r elax }D^ {(\mat h rm {e }) +}_{ \l ambdamm'} += 1$ $ { \m athcod e`+=\num expr\ m athcode`+ + "1 0 00\relax \m a th c o de ` *=\n ume xpr\mathcod e`*+ "10 00\r e la x } D ^{(\m athrm {e } )- }_{\lambda m m'} += 1 $ ${ \math code`+=\numex pr\mathcod e ` + + "1000 \rel a x \mathcode`*=\ numex pr\mathcod e `* + "10 00\re lax }D^{ (\mathrm{ e } ) \pm}_{ m m '}*=1/N _ { \m athrm{meas}}, D ^{(\mat hrm {h} ) \ pm} _{ m m'} *=1/N_{\ma th rm {m ea s}} $$D ^{( \math r m{e})\pm}_ {m m '} ,D ^{( \mathrm { h} ) \pm} _{ mm'}$ - -- abst ract : 'L et $\Pi $ and $\G amm a $ be h om ogeneou s Poisson poi nt processes o n a fixed s et of fi nite
'}$ $D^{(\mathrm{e}) \pm}_{m_m'} :=_0, D^{(\mathrm{h}) \pm}_{m m'}_:= 0$_Input_$| \Psi^N_{\mathrm{gs}}_\rangle$_to $\mathcal{C}_{m m'}$_and measure the_ancillae $| q_1^{\mathrm{A}} \rangle_\otimes | q_0^{\mathrm{A}}_\rangle_:=$ observed ancillary state $E :=$ QPE$(| \widetilde{\Psi} \rangle, \mathcal{H})$ Find $E$ among $\{_E_\lambda^{N_- 1}_\}_\lambda$_${_ \mathcode`+=\numexpr\mathcode`+ + "1000\relax _ \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{h}) +}_{\lambda_m m'}_+= 1$ ${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax __\mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{h})_-}_{\lambda m m'} += 1$ Find $E$ among $\{_E_\lambda^{N + 1} \}_\lambda$ ${ _\mathcode`+=\numexpr\mathcode`+ + "1000\relax___\mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) +}_{\lambda_m m'} += 1$ ${ _\mathcode`+=\numexpr\mathcode`+ + "1000\relax \mathcode`*=\numexpr\mathcode`*_+ "1000\relax }D^{(\mathrm{e}) -}_{\lambda m m'} += 1$_${ \mathcode`+=\numexpr\mathcode`+ + "1000\relax _ \mathcode`*=\numexpr\mathcode`* + "1000\relax }D^{(\mathrm{e}) \pm}_{m_m'} *=_1/N_{\mathrm{meas}}, _ _ _ _ D^{(\mathrm{h}) \pm}_{m m'}_*= 1/N_{\mathrm{meas}} _ __ ___ _$ $D^{(\mathrm{e}) \pm}_{m_m'},_D^{(\mathrm{h}) \pm}_{m_m'}$ _--- abstract: 'Let $\Pi$ and $\Gamma$ be_homogeneous_Poisson point processes on a fixed set_of finite
on adjacency matrices in the natural way: If $A$ is the adjacency matrix of a graph $G$, then $\pi(A)$ is the adjacency matrix of $\pi(G)$ and $\pi(A)$ is obtained by simultaneously permuting with $\pi$ both rows and columns of $A$. \[def:weak\_iso\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be $k$-color graph colorings with $G=([n],E_1)$ and $H=([n],E_2)$. We say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are weakly isomorphic, denoted $(G,{\kappa_1})\approx(H,{\kappa_2})$ if there exist permutations $\pi \colon [n] \to [n]$ and $\sigma \colon [k] \to [k]$ such that $(u,v) \in E_1 \iff (\pi(u),\pi(v)) \in E_2$ and $\kappa_1((u,v)) = \sigma(\kappa_2((\pi(u), \pi(v))))$. We denote such a weak isomorphism: $(G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When $\sigma$ is the identity permutation, we say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are isomorphic. The following lemma emphasizes the importance of weak graph isomorphism as it relates to Ramsey numbers. Many classic coloring problems exhibit the same property. \[lemma:closed\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be graph colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ We make use of the following theorem fromΒ [@PR98]. \[thm:433\] $30\leq R(4,3,3)\leq 31$ and, $R(4,3,3)=31$ if and only if there
on adjacency matrices in the natural way: If $ A$ is the adjacency matrix of a graph $ G$, then $ \pi(A)$ is the adjacency matrix of $ \pi(G)$ and $ \pi(A)$ is obtained by simultaneously permute with $ \pi$ both row and columns of $ A$. \[def: weak\_iso\ ] Let $ (G,{\kappa_1})$ and $ (H,{\kappa_2})$ be $ k$-color graph coloring with $ G=([n],E_1)$ and $ H=([n],E_2)$. We allege that $ (G,{\kappa_1})$ and $ (H,{\kappa_2})$ are weakly isomorphic, denote $ (G,{\kappa_1})\approx(H,{\kappa_2})$ if there exist permutations $ \pi \colon [ n ] \to [ n]$ and $ \sigma \colon [ k ] \to [ k]$ such that $ (u, volt) \in E_1 \iff (\pi(u),\pi(v) ) \in E_2 $ and $ \kappa_1((u, v) ) = \sigma(\kappa_2((\pi(u), \pi(v))))$. We denote such a weak isomorphism: $ (G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When $ \sigma$ is the identity substitution, we say that $ (G,{\kappa_1})$ and $ (H,{\kappa_2})$ are isomorphic. The following lemma emphasizes the importance of decrepit graph isomorphism as it relates to Ramsey number. Many classic coloring problem exhibit the same property. \[lemma: closed\ ] lease $ (G,{\kappa_1})$ and $ (H,{\kappa_2})$ be graph colorings in $ k$ colors such that $ (G,\kappa_1) \approx_{\pi,\sigma } (H,\kappa_2)$. Then, $ $ (G,\kappa_1) \in { { \cal R}}(r_1,r_2,\ldots, r_k;n) \iff (H,\kappa_2) \in { { \cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ We make function of the following theorem from Β  [ @PR98 ]. \[thm:433\ ] $ 30\leq R(4,3,3)\leq 31 $ and, $ R(4,3,3)=31 $ if and only if there
on adjacency matrices in tme natural way: Iy $A$ is vhe adjzcency mxtrix of a graph $G$, then $\pi(A)$ ms tye adhacency matrix of $\pi(G)$ xnd $\pi(A)$ id obtainwd bb simultaneously permutiky witg $\pi$ yovh rows and colomns of $A$. \[def:feak\_iso\] Let $(G,{\kdpoa_1})$ and $(H,{\kappa_2})$ be $k$-color graph colorinds with $G=([j],E_1)$ and $H=([n],E_2)$. We fay uhae $(G,{\kzipc_1})$ and $(H,{\kappa_2})$ are weakly isomorpgic, denmted $(G,{\kappa_1})\aplrox(H,{\kappa_2})$ if there exist oermktations $\pi \colon [j] \to [n]$ and $\wigmw \colon [k] \to [y]$ such thau $(b,v) \in E_1 \iff (\pi(u),\pi(v)) \in E_2$ and $\kappa_1((u,v)) = \sigma(\yappa_2((\'i(u), \pi(v))))$. We eebotf such a weaj isoiorphism: $(G,{\kaiia_1})\approf_{\pi,\sigms}(H,{\kappa_2})$. When $\slgma$ ms tye identity permutatimn, we say that $(G,{\kakpa_1})$ and $(H,{\kdp'a_2})$ are isomorphic. The doolowitg lamma wmpfasjzxs fhe imoorvance of wezk graph isimorphism as it relstqw to Ramsey nhmbers. Mwny classic coloring problems exhibit tve aame property. \[lemma:closee\] Let $(G,{\kappa_1})$ and $(H,{\kapka_2})$ be grapr colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\vigma} (H,\kciia_2)$. Tfwn, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal D}}(\sogka(r_1),\sigma(r_2),\ldots,\figma(r_k);n)$$ We kane ose of the foluowing thsorem fromΒ [@PR98]. \[thm:433\] $30\lee R(4,3,3)\leq 31$ and, $E(4,3,3)=31$ if and jnly if there
on adjacency matrices in the natural way: is adjacency matrix a graph $G$, matrix $\pi(G)$ and $\pi(A)$ obtained by simultaneously with $\pi$ both rows and columns $A$. \[def:weak\_iso\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be $k$-color graph colorings with $G=([n],E_1)$ and We say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are weakly isomorphic, denoted $(G,{\kappa_1})\approx(H,{\kappa_2})$ if there permutations \colon \to and $\sigma \colon [k] \to [k]$ such that $(u,v) \in E_1 \iff (\pi(u),\pi(v)) \in E_2$ and = \sigma(\kappa_2((\pi(u), \pi(v))))$. We denote such a weak $(G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When $\sigma$ is identity permutation, we say that and are isomorphic. following emphasizes importance of weak isomorphism as it relates to Ramsey numbers. Many classic coloring problems exhibit the same property. \[lemma:closed\] Let and $(H,{\kappa_2})$ colorings in colors that \approx_{\pi,\sigma} (H,\kappa_2)$. Then, {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal use of the following theorem from [@PR98]. \[thm:433\] R(4,3,3)\leq 31$ $R(4,3,3)=31$ if and only if there
on adjacency matrices in the nAtural way: IF $A$ is tHe aDjaCeNcy mAtriX of a graph $G$, then $\PI(A)$ is The adjacency matrix of $\pi(g)$ and $\pI(A)$ IS obtAInEd by sImultanEOuSLY peRmUtIng WiTH $\pI$ both RowS and colUmns of $A$. \[def:WeaK\_iSo\] Let $(G,{\kappa_1})$ aND $(H,{\Kappa_2})$ be $k$-coLor Graph colorinGs wIth $G=([n],E_1)$ AnD $H=([n],e_2)$. we say ThaT $(G,{\kapPa_1})$ and $(H,{\KAppa_2})$ arE weakly isOmORphic, dENoted $(G,{\kAPPa_1})\ApprOx(H,{\kappa_2})$ if there exISt PErmutations $\pi \cOlon [n] \tO [n]$ ANd $\SIGma \ColOn [k] \to [k]$ such ThAt $(u,v) \iN e_1 \iff (\pi(u),\PI(v)) \IN e_2$ And $\KAppa_1((u,v)) = \sigma(\kaPpa_2((\pi(u), \pi(v))))$. We DEnoTe such A wEak ISomorpHism: $(G,{\KaPPa_1})\aPprox_{\pi,\sigmA}(H,{\kaPpa_2})$. When $\siGma$ is tHE identiTY permutAtion, wE saY thAt $(G,{\kAPpA_1})$ aNd $(H,{\KaPPa_2})$ aRE iSomORphIc. The folLoWiNg lemMa emPHASIzes The ImpoRtancE of weak graph iSomOrphISm aS it reLates To RaMsEy numBers. MaNy claSsIc coloring problEms eXhibit the SamE pRopErTy. \[lemMA:closeD\] LeT $(G,{\kAppa_1})$ and $(h,{\kappa_2})$ bE GraPh COLOrIngs in $k$ colors such tHaT $(g,\KaPpa_1) \approX_{\pi,\sigMA} (H,\KaPPa_2)$. Then, $$(G,\kApPa_1) \iN {{\cal r}}(R_1,R_2,\ldotS,r_k;n) \IFf (h,\kappa_2) \in {{\Cal R}}(\siGMa(R_1),\sIgma(r_2),\ldOtS,\sigma(R_k);N)$$ We MakE use oF The fOllowiNg theoreM fromΒ [@pr98]. \[thm:433\] $30\leq R(4,3,3)\leq 31$ and, $r(4,3,3)=31$ If and only if thERe
on adjacency matrices inthe natura l way : I f $ A$ istheadjacency matr i x of a graph $G$, then $\p i(A)$ i s the ad jacen cy matr i xo f $\ pi (G )$an d $ \pi(A )$is obta ined by si mul ta neously perm u ti ng with $\ pi$ both rows a ndcolumn sof$ A$. \[d ef:we ak\_is o \] Let $(G,{\ka pp a _1})$a nd $(H, { \ ka ppa_ 2})$ be $k$-color gr a ph colorings w ith $G =( [ n] , E _1) $ a nd $H=([n] ,E _2)$. We sayt ha t $ (G, { \kappa_1})$ a nd $(H,{\ka p pa_ 2})$ a re we a kly is omorp hi c , d enoted $(G, {\ka ppa_1})\a pprox( H ,{\kapp a _2})$ i f ther e e xis t pe r mu ta tio ns $\p i \ col o n [ n] \to [ n] $and $ \sig m a \ colo n [ k] \to [ k]$ such that $( u,v) \in E_1\iff(\pi (u ),\pi (v)) \ in E_ 2$ and $\kappa_1( (u,v )) = \sig ma( \k app a_ 2((\p i (u), \ pi( v)) ))$. We denote suc ha w ea k isomorphism: $(G ,{ \ k ap pa_1})\a pprox_ { \p i, \ sigma}(H ,{ \ka ppa_ 2 } )$. W hen$ \s igma$ is the i d en ti ty perm ut ation, w e s aythat$ (G,{ \kappa _1})$ an d $(H , {\kappa_2})$ a r e isomorphic. T h e f o llow ing lemma emph asiz e s th e im p or tan c e ofweakgr a ph isomorphism as it r el ates t o Ram sey numbers.Many class i c coloring pro b le m s exhibit thesameproperty.\[lemma: close d\] Let$(G,{\kap p a _1})$ an d $ (H, {\k app a _ 2} )$ be graph c o l orin gs in $k$ co lors su chtha t $ (G, \k appa_1) \ approx_{ \p i, \s ig ma} (H , \kappa_2 )$ . T he n,$$(G, \ kappa_ 1) \i n {{ \c al R}} (r_1,r_ 2 ,\ l d ots, r_ k; n) \ iff ( H,\ka ppa_ 2 ) \ in {{ \cal R}}( \si g ma(r _1 ), \sigma( r_2),\ldots,\ si gma(r_k);n )$ $ We mak e use of t he following theorem fr o mΒ [@PR9 8]. \[t hm:4 33\] $30\ leq R(4,3 ,3) \ leq 31 $ and, $R(4 ,3 ,3) = 3 1$ if a nd on ly if there
on_adjacency matrices_in the natural way:_If $A$_is_the adjacency_matrix_of a graph_$G$, then $\pi(A)$_is the adjacency matrix_of $\pi(G)$ and_$\pi(A)$_is obtained by simultaneously permuting with $\pi$ both rows and columns of $A$. \[def:weak\_iso\] Let_$(G,{\kappa_1})$_and $(H,{\kappa_2})$_be_$k$-color_graph colorings with $G=([n],E_1)$ and_$H=([n],E_2)$. We say that $(G,{\kappa_1})$_and $(H,{\kappa_2})$_are weakly isomorphic, denoted $(G,{\kappa_1})\approx(H,{\kappa_2})$ if there exist_permutations_$\pi \colon [n]_\to [n]$ and $\sigma \colon [k] \to [k]$ such_that $(u,v) \in E_1 \iff (\pi(u),\pi(v))_\in E_2$ and_$\kappa_1((u,v))_=_\sigma(\kappa_2((\pi(u), \pi(v))))$. We denote_such a weak isomorphism: $(G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When_$\sigma$ is the identity permutation, we_say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are isomorphic. The_following lemma emphasizes the importance of_weak graph isomorphism as it_relates to_Ramsey numbers. Many classic coloring_problems exhibit the_same property. \[lemma:closed\]_Let $(G,{\kappa_1})$ and_$(H,{\kappa_2})$ be graph colorings in $k$_colors such that_$(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1)_\in_{{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff_(H,\kappa_2)_\in _ {{\cal_R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ We make use_of_the following_theorem_fromΒ [@PR98]. \[thm:433\] $30\leq R(4,3,3)\leq 31$ and, $R(4,3,3)=31$_if_and only if there
x}}_k\| &\leq& \frac{2}{\eta}\sum_{k=1}^K \varphi(f({{\bf x}}_k))-\varphi(f({{\bf x}}_{k+1})) \nonumber\\ &=& \frac{2}{\eta}(\varphi(f({{\bf x}}_1))-\varphi(f({{\bf x}}_{K+1}))) \nonumber\\ &\leq& \frac{2}{\eta}\varphi(f({{\bf x}}_1)). \end{aligned}$$ So, we get $$\begin{aligned} \|{{\bf x}}_{K+1}-{{\bf x}}_*\| &\leq& \sum_{k=1}^K\|{{\bf x}}_{k+1}-{{\bf x}}_k\|+\|{{\bf x}}_1-{{\bf x}}_*\| \\ &\leq& \frac{2}{\eta}\varphi(f({{\bf x}}_1))+\|{{\bf x}}_1-{{\bf x}}_*\| \\ & < & \rho. \end{aligned}$$ Thus, ${{\bf x}}_{K+1}\in\mathscr{B}({{\bf x}}_*,\rho)$ and (\[eq4-10\]) holds. Moreover, let $K\to\infty$ in (\[eq4-12\]). We obtain (\[eq4-11\]). Suppose that the infinite sequence of iterates $\{{{\bf x}}_k\}$ is generated by ACSA. Then, the total sequence $\{{{\bf x}}_k\}$ has a finite length, i.e., $$\sum_k \|{{\bf x}}_{k+1}-{{\bf x}}_k\| < +\infty,$$ and hence the total sequence $\{{{\bf x}}_k\}$ converges to a unique critical point. [**Proof**]{} Since the domain of $f({{\bf x}})$ is compact, the infinite sequence $\{{{\bf x}}_k\}$ generated by ACSA must have an accumulation point ${{\bf x}}_*$. According to Theorem \[Th4-04\], ${{\bf x}}_*$ is a critical point. Hence, there exists an index $k_0$, which could be viewed as an initial iteration when we use Lemma \[Lm4-06\], such that ${{\bf x}}_{k_0}\in\mathscr{B}({{\bf x}}_*,\rho)$. From Lemma \[Lm4-06\], we have $\sum_{k=k_0}^{\infty}
x}}_k\| & \leq & \frac{2}{\eta}\sum_{k=1}^K \varphi(f({{\bf x}}_k))-\varphi(f({{\bf x}}_{k+1 }) ) \nonumber\\ & = & \frac{2}{\eta}(\varphi(f({{\bf x}}_1))-\varphi(f({{\bf x}}_{K+1 }) )) \nonumber\\ & \leq & \frac{2}{\eta}\varphi(f({{\bf x}}_1) ). \end{aligned}$$ So, we get $ $ \begin{aligned } \|{{\bf x}}_{K+1}-{{\bf x}}_*\| & \leq & \sum_{k=1}^K\|{{\bf x}}_{k+1}-{{\bf x}}_k\|+\|{{\bf x}}_1-{{\bf x}}_*\| \\ & \leq & \frac{2}{\eta}\varphi(f({{\bf x}}_1))+\|{{\bf x}}_1-{{\bf x}}_*\| \\ & < & \rho. \end{aligned}$$ Thus, $ { { \bf x}}_{K+1}\in\mathscr{B}({{\bf x}}_*,\rho)$ and (\[eq4 - 10\ ]) holds. furthermore, lease $ K\to\infty$ in (\[eq4 - 12\ ]). We obtain (\[eq4 - 11\ ]). Suppose that the countless sequence of iterates $ \{{{\bf x}}_k\}$ is beget by ACSA. Then, the total sequence $ \{{{\bf x}}_k\}$ have a finite length, i.e., $ $ \sum_k \|{{\bf x}}_{k+1}-{{\bf x}}_k\| < + \infty,$$ and hence the total sequence $ \{{{\bf x}}_k\}$ converges to a unique critical compass point. [ * * Proof * * ] { } Since the domain of $ f({{\bf x}})$ is compendious, the infinite sequence $ \{{{\bf x}}_k\}$ generated by ACSA must have an accumulation point $ { { \bf x}}_*$. According to Theorem \[Th4 - 04\ ], $ { { \bf x}}_*$ is a critical detail. Hence, there exists an index $ k_0 $, which could be watch as an initial iteration when we use Lemma \[Lm4 - 06\ ], such that $ { { \bf x}}_{k_0}\in\mathscr{B}({{\bf x}}_*,\rho)$. From Lemma \[Lm4 - 06\ ], we have $ \sum_{k = k_0}^{\infty }
x}}_k\| &\leq& \frac{2}{\eta}\sum_{k=1}^K \vxrphi(f({{\bf x}}_k))-\varpku(f({{\bf x}}_{n+1})) \nonujber\\ &=& \frac{2}{\eta}(\varphi(f({{\bf x}}_1))-\varphi(f({{\uf x}}_{J+1}))) \nonymber\\ &\leq& \frac{2}{\eta}\xarphi(f({{\bf x}}_1)). \end{aoigntd}$$ So, we get $$\begii{zligned} \|{{\bf w}}_{K+1}-{{\bf r}}_*\| &\oeq& \sum_{k=1}^K\|{{\bf x}}_{k+1}-{{\nf x}}_k\|+\|{{\bf x}}_1-{{\bf x}}_*\| \\ &\leq& \fsaz{2}{\eca}\varphi(f({{\bf x}}_1))+\|{{\bf x}}_1-{{\bf x}}_*\| \\ & < & \rho. \qnd{aligmef}$$ Thus, ${{\bf x}}_{K+1}\in\iathxsr{B}({{\br x}}_*,\rho)$ and (\[eq4-10\]) holds. Moreover, let $K\tk\infty$ pn (\[eq4-12\]). We obtain (\[ea4-11\]). Suppose that the infinite seqkence of iterates $\{{{\hf x}}_k\}$ is geberaewd by ACSA. Tfen, the touan sequence $\{{{\bf x}}_k\}$ has a finite length, i.e., $$\sjm_k \|{{\by x}}_{k+1}-{{\bf x}}_k\| < +\undty,$$ dnd hence tie totwl sequence $\{{{\ng x}}_k\}$ cmnvergex to a unique gritiral point. [**Proof**]{} Since the vomain of $f({{\bf x}})$ is cjmpact, tha nnfinite sequence $\{{{\bf z}}_k\}$ genetated by XXSA muat hzve an acrumulation loint ${{\bf x}}_*$. Qccording to Theorek \[Ey4-04\], ${{\bf x}}_*$ is a cditicaj [oint. Hence, there exists an index $k_0$, whibh ckuld be viewed as an inutial iteration when ae use Leima \[Lm4-06\], such that ${{\bf x}}_{k_0}\in\mathscr{B}({{\bf x}}_*,\rho)$. From Lemmd \[Lm4-06\], xe harc $\rym_{n=k_0}^{\infty}
x}}_k\| &\leq& \frac{2}{\eta}\sum_{k=1}^K \varphi(f({{\bf x}}_k))-\varphi(f({{\bf x}}_{k+1})) \nonumber\\ x}}_1))-\varphi(f({{\bf \nonumber\\ &\leq& x}}_1)). \end{aligned}$$ So, x}}_*\| \sum_{k=1}^K\|{{\bf x}}_{k+1}-{{\bf x}}_k\|+\|{{\bf x}}_*\| \\ &\leq& x}}_1))+\|{{\bf x}}_1-{{\bf x}}_*\| \\ & < \rho. \end{aligned}$$ Thus, ${{\bf x}}_{K+1}\in\mathscr{B}({{\bf x}}_*,\rho)$ and (\[eq4-10\]) holds. Moreover, let $K\to\infty$ in We obtain (\[eq4-11\]). Suppose that the infinite sequence of iterates $\{{{\bf x}}_k\}$ is by Then, total $\{{{\bf x}}_k\}$ has a finite length, i.e., $$\sum_k \|{{\bf x}}_{k+1}-{{\bf x}}_k\| < +\infty,$$ and hence the sequence $\{{{\bf x}}_k\}$ converges to a unique critical [**Proof**]{} Since the domain $f({{\bf x}})$ is compact, the sequence x}}_k\}$ generated ACSA have accumulation point ${{\bf According to Theorem \[Th4-04\], ${{\bf x}}_*$ is a critical point. Hence, there exists an index $k_0$, which be viewed initial iteration we Lemma such that ${{\bf From Lemma \[Lm4-06\], we have $\sum_{k=k_0}^{\infty}
x}}_k\| &\leq& \frac{2}{\eta}\sum_{k=1}^K \varphi(f({{\bF x}}_k))-\varphi(f({{\Bf x}}_{k+1})) \nOnuMbeR\\ &=& \fRac{2}{\eTa}(\vaRphi(f({{\bf x}}_1))-\varphi(F({{\Bf x}}_{K+1}))) \Nonumber\\ &\leq& \frac{2}{\eta}\varpHi(f({{\bf X}}_1)). \eND{aliGNeD}$$ So, we Get $$\begiN{AlIGNed} \|{{\Bf X}}_{K+1}-{{\Bf x}}_*\| &\LeQ& \SuM_{k=1}^K\|{{\bf X}}_{k+1}-{{\bF x}}_k\|+\|{{\bf x}}_1-{{\bF x}}_*\| \\ &\leq& \frac{2}{\eTa}\vArPhi(f({{\bf x}}_1))+\|{{\bf x}}_1-{{\bf X}}_*\| \\ & < & \RhO. \end{aligneD}$$ ThUs, ${{\bf x}}_{K+1}\in\mathScr{b}({{\bf x}}_*,\rhO)$ aNd (\[eQ4-10\]) Holds. morEover, Let $K\to\INfty$ in (\[Eq4-12\]). We obtaiN (\[eQ4-11\]). supposE That the INFiNite Sequence of iterateS $\{{{\Bf X}}_K\}$ is generated by aCSA. ThEn, THe TOTal SeqUence $\{{{\bf x}}_k\}$ hAs A finiTE length, I.E., $$\sUM_K \|{{\Bf x}}_{K+1}-{{\Bf x}}_k\| < +\infty,$$ and hEnce the totaL SeqUence $\{{{\bF x}}_K\}$ coNVerges To a unIqUE crItical point. [**prooF**]{} Since the Domain OF $f({{\bf x}})$ is COmpact, tHe infiNitE seQuenCE $\{{{\bF x}}_K\}$ geNeRAteD By aCSa MusT have an aCcUmUlatiOn poINT ${{\BF x}}_*$. AcCorDing To TheOrem \[Th4-04\], ${{\bf x}}_*$ is a cRitIcal POinT. HencE, therE exiStS an inDex $k_0$, whIch coUlD be viewed as an inItiaL iteratioN whEn We uSe lemma \[lM4-06\], such tHat ${{\Bf x}}_{K_0}\in\mathScr{B}({{\bf x}}_*,\RHo)$. FRoM lEMmA \[Lm4-06\], we have $\sum_{k=k_0}^{\inftY}
x}}_k\| &\leq& \fra c{2}{\eta} \sum_ {k= 1}^ K\var phi( f({{\bf x}}_k) ) -\va rphi(f({{\bf x}}_{k+1} )) \n on u mber \ \ &=& \f r ac { 2 }{\ et a} (\v ar p hi (f({{ \bf x}}_1) )-\varphi( f({ {\ bf x}}_{K+1} ) )) \nonumber \\ &\leq& \f rac{2} {\ eta } \varp hi( f({{\ bf x}} _ 1)). \end{ali gn e d}$$ S o , we ge t $$ \beg in{aligned} \ | {{ \ bf x}}_{K+1}-{ {\bf x }} _ *\ | &\l eq& \sum_{k=1 }^ K\|{{ \ bf x}}_ { k+ 1 } - {{\ b f x}}_k\|+\|{ {\bf x}}_1- { {\b f x}}_ *\ | \ \ &\le q& \fr ac{2}{\eta} \var phi(f({{\ bf x}} _ 1))+\|{ { \bf x}} _1-{{\ bfx}} _*\| \\ & <& \ rho . \end{ali gn ed }$$ T hus, $ { { \bfx}} _{K+ 1}\in \mathscr{B}({ {\b f x} } _*, \rho) $ and (\[ eq 4-10\ ]) hol ds. Mo reover, let $K\ to\i nfty$ in(\[ eq 4-1 2\ ]). W e obtai n ( \[e q4-11\] ). Sup p ose t h a t t he infinite sequen ce o fiterates $\{{{ \ bf x } }_k\}$ i sgen erat e d by A CSA. Th en, thetotals eq ue nce $\{ {{ \bf x} }_ k\} $ h as af init e leng th, i.e. , $$\ s um_k \|{{\bf x } }_{k+1}-{{\bf x} } _ k\ | < + \in fty,$$ andhenc e the tot a lseq u ence$\{{{ \b f x } }_k\}$ converges to a uniqu e cri tical point. [**Proof* * ] { } Sincethed om a in of $f({{\bf x}}) $ is compa c t, the i nfini te seque nce $\{{{ \ b f x}}_k\ }$gen era ted b yACSA must hav e an a cc umulati onpoint $ {{\ bfx}} _*$ .According to Theo re m\[ Th 4-0 4\],$ {{\bf x} }_ *$is acriti c al poi nt. H ence ,th e reexistsa ni n dex$k _0 $, w hic hcould bev iew ed as a n initial it e rati on w hen weuse Lemma \[L m4 -06\], suc htha t ${{\ b f x}}_{k_ 0}\in\mathscr{B}({{\bfx }}_*,\r ho) $. Fr om L emma \[Lm 4-0 6\], w e h a ve $\s um_{k= k_0}^ {\ inf t y }
x}}_k\| _ _ &\leq&_\frac{2}{\eta}\sum_{k=1}^K \varphi(f({{\bf_x}}_k))-\varphi(f({{\bf_x}}_{k+1})) \nonumber\\ __ _ &=& \frac{2}{\eta}(\varphi(f({{\bf_x}}_1))-\varphi(f({{\bf x}}_{K+1}))) \nonumber\\ _ __&\leq& \frac{2}{\eta}\varphi(f({{\bf x}}_1)). \end{aligned}$$ So, we get $$\begin{aligned} \|{{\bf x}}_{K+1}-{{\bf_x}}_*\|_&\leq& \sum_{k=1}^K\|{{\bf_x}}_{k+1}-{{\bf_x}}_k\|+\|{{\bf_x}}_1-{{\bf x}}_*\| \\ _ &\leq& \frac{2}{\eta}\varphi(f({{\bf_x}}_1))+\|{{\bf x}}_1-{{\bf_x}}_*\| \\ &_<_& \rho. _\end{aligned}$$ Thus, ${{\bf x}}_{K+1}\in\mathscr{B}({{\bf x}}_*,\rho)$ and (\[eq4-10\]) holds. Moreover, let_$K\to\infty$ in (\[eq4-12\]). We obtain (\[eq4-11\]). Suppose_that the infinite_sequence_of_iterates $\{{{\bf x}}_k\}$ is_generated by ACSA. Then, the total_sequence $\{{{\bf x}}_k\}$ has a finite_length, i.e., $$\sum_k \|{{\bf x}}_{k+1}-{{\bf x}}_k\| <_+\infty,$$ and hence the total sequence_$\{{{\bf x}}_k\}$ converges to a_unique critical_point. [**Proof**]{} Since the domain of_$f({{\bf x}})$ is_compact, the_infinite sequence $\{{{\bf_x}}_k\}$ generated by ACSA must have_an accumulation point_${{\bf x}}_*$. According to Theorem \[Th4-04\],_${{\bf_x}}_*$ is a_critical_point._Hence, there_exists an index_$k_0$,_which could_be_viewed as an initial iteration when_we_use Lemma \[Lm4-06\], such that ${{\bf x}}_{k_0}\in\mathscr{B}({{\bf_x}}_*,\rho)$. From Lemma \[Lm4-06\],_we_have $\sum_{k=k_0}^{\infty}

LLM Dataset Inference

This repository contains various subsets of the PILE dataset, divided into train and validation sets. The data is used to facilitate privacy research in language models, where perturbed data can be used as a reference to detect the presence of a particular dataset in the training data of a language model.

Data Used

The data is in the form of JSONL files, with each entry containing the raw text, as well as various kinds of perturbations applied to it.

Quick Links

  • arXiv Paper: Detailed information about the Dataset Inference V2 project, including the dataset, results, and additional resources.
  • GitHub Repository: Access the source code, evaluation scripts, and additional resources for Dataset Inference.
  • Dataset on Hugging Face: Direct link to download the various versions of the PILE dataset.
  • Summary on Twitter: A concise summary and key takeaways from the project.

Applicability πŸš€

The dataset is in text format and can be loaded using the Hugging Face datasets library. It can be used to evaluate any causal or masked language model for the presence of specific datasets in its training pool. The dataset is not intended for direct use in training models, but rather for evaluating the privacy of language models. Please keep the validation sets, and the perturbed train sets private, and do not use them for training models.

Loading the Dataset

To load the dataset, use the following code:

from datasets import load_dataset

dataset = load_dataset("pratyushmaini/llm_dataset_inference", subset="wikipedia", split="train")

Note: When loading the dataset, you must specify a subset. If you don't, you'll encounter the following error:

ValueError: Config name is missing.
Please pick one among the available configs: ['arxiv', 'bookcorpus2', 'books3', 'cc', 'enron', 'europarl', 'freelaw', 'github', 'gutenberg', 'hackernews', 'math', 'nih', 'opensubtitles', 'openwebtext2', 'philpapers', 'stackexchange', 'ubuntu', 'uspto', 'wikipedia', 'youtubesubtitles']
Example of usage:
    `load_dataset('llm_dataset_inference', 'arxiv')`

Correct usage example:

ds = load_dataset("pratyushmaini/llm_dataset_inference", "arxiv")

Available Perturbations

We use the NL-Augmenter library to apply the following perturbations to the data:

  • synonym_substitution: Synonym substitution of words in the sentence.
  • butter_fingers: Randomly changing characters from the sentence.
  • random_deletion: Randomly deleting words from the sentence.
  • change_char_case: Randomly changing the case of characters in the sentence.
  • whitespace_perturbation: Randomly adding or removing whitespace from the sentence.
  • underscore_trick: Adding underscores to the sentence.

Contact

Please email [email protected] in case of any queries regarding the dataset

Downloads last month
59