As observed in Section 3, the activations of large language models change slowly across model depth. This property can be leveraged to increase the efficiency of a trained model by parallelizing, reordering, or skipping certain intermediate sub-blocks without significantly impacting the overall accuracy.

Setting                 Wiki (ppl)   C4 (ppl)
Baseline                     11.57      10.17
Skip every 2 layers          21.16      16.58
Skip every 4 layers          13.45      11.37
Improving the inference efficiency of Transformer models is challenging because of the sequential execution of Transformer layers: each sub-block depends on the output of the previous one, leading to low hardware efficiency, particularly during the token generation phase where each forward pass is computed for only one token. Moreover, the sequential execution of blocks and sub-blocks yields computation bubbles, and the latter involves a large amount of communication overhead.

Here, we present an observation that can potentially alleviate these challenges: the activations of the model change slowly across blocks. Specifically, the cosine similarity of activations between adjacent blocks is often above 0.99. This suggests that a block could take the activation from an earlier block as input, i.e., that blocks could be parallelized or reordered, without significantly affecting the output. Slowly changing activations also suggest that it may be possible to skip blocks entirely while maintaining a similar output. Some existing models, such as GPT-J (Wang & Komatsuzaki, 2021), GPT-NeoX (Black et al., 2022), and PaLM (Chowdhery et al., 2022), already place the Attention block and the MLP block in parallel during training to facilitate parallel computation and reduce communication overhead.
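As a rough illustration of how this similarity can be measured (not the paper's code), the following sketch computes the cosine similarity between hidden states of adjacent Transformer blocks of a Hugging Face causal LM; the model name and the reliance on `output_hidden_states=True` are illustrative assumptions.

```python
# Minimal sketch: cosine similarity of activations between adjacent blocks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # illustrative choice; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("Deja Vu studies contextual sparsity for efficient inference.",
             return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hs[0] is the embedding output; hs[i] is the output of block i.
hs = out.hidden_states
for i in range(len(hs) - 1):
    sim = torch.nn.functional.cosine_similarity(hs[i], hs[i + 1], dim=-1).mean()
    print(f"block {i:2d} -> {i + 1:2d}: mean cosine similarity = {sim.item():.4f}")
```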
Here we investigate this possibility at inference time. Surprisingly, we find that parallelizing these blocks in models that were trained sequentially does not significantly hurt performance on downstream tasks. Table C.3 presents preliminary results for OPT-175B and BLOOM.

Given the activation $y_l$ and Transformer layer $l$, the standard sequential computation is
$$\tilde{y}_l \leftarrow y_l + \mathrm{MHA}_l(y_l), \qquad \hat{y}_l \leftarrow \tilde{y}_l + \mathrm{MLP}_l(\tilde{y}_l).$$
Parallelizing two blocks refers to placing the Attention and MLP blocks of a layer in parallel, i.e.,
$$\hat{y}_l \leftarrow y_l + \mathrm{MHA}_l(y_l) + \mathrm{MLP}_l(y_l).$$
Parallelizing four blocks then parallelizes the blocks of two consecutive Transformer layers:
$$\hat{y}_{l+1} \leftarrow y_l + \mathrm{MHA}_l(y_l) + \mathrm{MLP}_l(y_l) + \mathrm{MHA}_{l+1}(y_l) + \mathrm{MLP}_{l+1}(y_l).$$
Skipping layers is straightforward: we drop an entire Transformer layer every $n$ layers.

We are surprised to find that parallelizing two blocks preserves accuracy on a series of tasks across models. Moreover, randomly skipping 25% of the layers does not lead to catastrophic quality loss. From the downstream-task perspective, these findings suggest that the activation patterns within the model are relatively consistent across different blocks, providing a potential avenue for future research on model compression and optimization.
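To make the three variants above concrete, here is a minimal PyTorch sketch contrasting the sequential, two-block-parallel, and layer-skipping forward passes; `mha` and `mlp` are placeholder linear modules standing in for the real sub-blocks, not the paper's implementation.

```python
# Minimal sketch of the block compositions above (placeholder sub-blocks).
import torch
import torch.nn as nn

d, L = 64, 8
mha = nn.ModuleList(nn.Linear(d, d) for _ in range(L))  # stand-in for MHA_l
mlp = nn.ModuleList(nn.Linear(d, d) for _ in range(L))  # stand-in for MLP_l

def sequential(y):
    for l in range(L):
        y_t = y + mha[l](y)            # ~y_l <- y_l + MHA_l(y_l)
        y = y_t + mlp[l](y_t)          # ^y_l <- ~y_l + MLP_l(~y_l)
    return y

def parallel_two_blocks(y):
    for l in range(L):
        y = y + mha[l](y) + mlp[l](y)  # ^y_l <- y_l + MHA_l(y_l) + MLP_l(y_l)
    return y

def skip_every_n(y, n=4):
    for l in range(L):
        if (l + 1) % n == 0:           # drop one entire layer every n layers
            continue
        y_t = y + mha[l](y)
        y = y_t + mlp[l](y_t)
    return y

x = torch.randn(1, d)
print(sequential(x).shape, parallel_two_blocks(x).shape, skip_every_n(x).shape)
```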
D Implementation Details

Figure 12 presents a more detailed workflow of DEJAVU. The left diagram shows how an input $y$ performs the sparse MHA with selected head indices $\{0, 3\}$, predicted by the head predictor. Similarly, the right diagram shows how an input $y$ performs the sparse MLP with selected neuron indices $\{0, 2\}$, predicted by the neuron predictor of that layer.

[Figure 12. Detailed diagram of the sparsified computation process of the MLP and Attention blocks (left: sparsified attention with selected head indices; right: sparsified MLP with selected neuron indices). Notation refers to Section 2.3.]
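The following sketch illustrates, at a high level, what the sparsified MLP computation in Figure 12 does: only the rows of the first MLP weight matrix and the corresponding columns of the second one that match the predicted neuron indices participate in the computation. The learned predictor is replaced by a fixed index set here, and the shapes and names are illustrative assumptions rather than the released DEJAVU code.

```python
# Minimal sketch of a sparsified MLP forward pass given predicted neuron indices.
# In DEJAVU the index set S_M comes from a learned neuron predictor; here it is
# a fixed placeholder so the example stays self-contained.
import torch

d, h = 16, 64                      # hidden size and MLP intermediate size
W1 = torch.randn(h, d)             # first MLP weight  (rows  = neurons)
W2 = torch.randn(d, h)             # second MLP weight (columns = neurons)
y = torch.randn(d)

S_M = torch.tensor([0, 2, 5, 9])   # predicted neuron indices (placeholder)

def dense_mlp(y):
    return W2 @ torch.relu(W1 @ y)

def sparse_mlp(y, idx):
    # Only the selected neurons are computed: index W1 by rows, W2 by columns.
    hidden = torch.relu(W1[idx, :] @ y)
    return W2[:, idx] @ hidden

print(sparse_mlp(y, S_M).shape)    # torch.Size([16])
```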
Next, we present a general explanation of two optimizations used in the DEJAVU implementation.

Kernel fusion: A standard implementation of a sparse matrix-vector multiply (e.g., $Wx$ in PyTorch) that separately indexes a subset of the matrix, $W[\mathrm{idx},:]$, before multiplying with the input $x$ incurs 3x the amount of memory IO: one pass to load the subset of $W$ from GPU memory, one to write that subset to a different contiguous region of memory, and one to load the (now contiguous) subset again to multiply with $x$. Similarly, using sparse matrix-multiply routines (e.g., cuSPARSE) would first require converting $W[\mathrm{idx},:]$ to a sparse format, again incurring more memory IO. We instead fuse the indexing and the multiplication: we load the subset $W[\mathrm{idx},:]$ along with $x$, perform the multiply, and write out the result. This fused implementation (in Triton (Tillet et al., 2019)) yields up to 4x speedup over a standard PyTorch implementation (Section E).

Memory coalescing: The weight matrices are conventionally stored in row-major format, which allows us to load $W[\mathrm{idx},:]$ optimally (the second dimension is contiguous in memory). However, when we need to load $W[:,\mathrm{idx}]$ (the attention output projection and the second weight matrix in the MLP), this format significantly slows down memory loading, since idx may contain indices pointing to non-contiguous memory. A simple solution is to store these matrices in column-major format (i.e., store $W^\top$ in contiguous row-major format) and then use the same fused kernel as above. This transposition is done once when loading the model and incurs no added cost during generation.
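As a rough illustration of the fusion idea (a minimal sketch assuming a CUDA GPU and the `triton` package; this is not the paper's released kernel), each program instance below loads one selected row of $W$ directly from global memory, multiplies it with $x$, and writes a single output element, avoiding the extra write/read of a gathered copy of $W[\mathrm{idx},:]$.

```python
# Minimal sketch of a fused gather-and-multiply kernel (illustrative only).
import torch
import triton
import triton.language as tl

@triton.jit
def gather_matvec_kernel(W_ptr, x_ptr, idx_ptr, out_ptr, d, stride_w,
                         BLOCK_D: tl.constexpr):
    row = tl.program_id(0)                    # one program per selected row
    w_row = tl.load(idx_ptr + row)            # which row of W to use
    offs = tl.arange(0, BLOCK_D)
    mask = offs < d
    w = tl.load(W_ptr + w_row * stride_w + offs, mask=mask, other=0.0)
    x = tl.load(x_ptr + offs, mask=mask, other=0.0)
    tl.store(out_ptr + row, tl.sum(w * x, axis=0))

def fused_gather_matvec(W, x, idx):
    out = torch.empty(idx.numel(), device=W.device, dtype=W.dtype)
    BLOCK_D = triton.next_power_of_2(W.shape[1])
    gather_matvec_kernel[(idx.numel(),)](W, x, idx, out, W.shape[1],
                                         W.stride(0), BLOCK_D=BLOCK_D)
    return out

if torch.cuda.is_available():
    W = torch.randn(4096, 1024, device="cuda")
    x = torch.randn(1024, device="cuda")
    idx = torch.randint(0, 4096, (512,), device="cuda")
    baseline = W[idx, :] @ x                  # index-then-multiply baseline
    print(torch.allclose(baseline, fused_gather_matvec(W, x, idx), atol=1e-2))
```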
E Benchmarking Sparse MLP and Sparse Attention

[Figure 13. Speed benchmark of the MLP layer of OPT-175B on 8xA100s. Our sparse implementation is up to 4.5x faster than the baseline implementation in PyTorch, and remains faster than the dense MLP for density up to 0.8.]

[Figure 14. Speed benchmark of the attention layer of OPT-175B on 8xA100s. Our sparse implementation is up to 5x faster than the baseline implementation in PyTorch, and remains faster than dense attention for density up to 0.8.]

We validate that our hardware-aware implementation of the sparse MLP and sparse attention (Section 4.4) yields wall-clock speedups compared to both the dense MLP/attention and the standard sparse implementation in PyTorch.

Recall that our implementation fuses the sparse indexing and the multiplication $(W^1_{S_M})^\top y$ for the weight matrix $(W^1)^\top$ and input vector $y$. In cases where we need to index $W^2_{S_M}$, we store the transpose of $W^2$ to ensure memory coalescing. The baseline implementation in PyTorch indexes $(W^1_{S_M})^\top$ as a separate operation before multiplying with $y$, which incurs more memory reads/writes.

Similarly, we fuse the sparse indexing and the multiplications $(W^Q_{S_A})^\top y$, $(W^K_{S_A})^\top y$, and $(W^V_{S_A})^\top y$ for the weight matrices $(W^Q)^\top$, $(W^K)^\top$, $(W^V)^\top$ and input vector $y$ (the three matrices are usually concatenated in the standard implementation; we separate them here for clarity). In cases where we need to index $W^O_{S_A}$, we store the transpose of $W^O$ to ensure memory coalescing.

As shown in Figure 13 and Figure 14, our sparse MLP and attention implementations are 4-5x faster than the baseline implementation in PyTorch, and remain faster than the dense versions for density up to 0.8.

F Notations and Basic Definitions
For a positive integer $n$, let $[n] := \{1, 2, \cdots, n\}$. For a matrix $A \in \mathbb{R}^{n\times n}$, let $A_{i,:}$ and $A_{:,j}$ be the two column vectors corresponding to the $i$-th row and the $j$-th column of $A$ respectively, and let $A_{i,j}$ be the entry at the $i$-th row and $j$-th column. For a vector $x \in \mathbb{R}^n$, let $\sqrt{x} \in \mathbb{R}^n$ denote the vector whose $i$-th entry is $\sqrt{x_i}$, and let $\mathrm{diag}(x) \in \mathbb{R}^{n\times n}$ denote the diagonal matrix whose $i$-th diagonal entry is $x_i$. For two matrices $A, W \in \mathbb{R}^{n\times n}$, let $\|A\|_W := (\sum_{i=1}^n \sum_{j=1}^n W_{i,j} A_{i,j}^2)^{1/2}$ and let $W \circ A$ denote the matrix with $(W \circ A)_{i,j} = W_{i,j} A_{i,j}$. For a matrix $W \in \mathbb{R}^{n\times n}$, let $D_{W_i} := \mathrm{diag}(W_{i,:})$ for $i \in [n]$.

For two vectors $x \in \mathbb{R}^n$ and $w \in \mathbb{R}^n_{\geq 0}$, let $\|x\|_w := (\sum_{i=1}^n w_i x_i^2)^{1/2}$. For a vector $x$, we denote its $\ell_2$ norm by $\|x\|_2 := (\sum_{i=1}^n x_i^2)^{1/2}$ and its $\ell_p$ norm by $\|x\|_p := (\sum_{i=1}^n |x_i|^p)^{1/p}$. For a square matrix $A$, we denote its trace by $\mathrm{tr}[A]$. For a matrix $A \in \mathbb{R}^{n\times k}$ (with $n \geq k$), we use $\|A\|$ to denote its spectral norm, i.e., $\|A\| = \sup_x \|Ax\|_2/\|x\|_2$, and $\|A\|_F := (\sum_{i=1}^n \sum_{j=1}^k A_{i,j}^2)^{1/2}$ to denote its Frobenius norm.

Suppose the matrix $A \in \mathbb{R}^{n\times k}$ has the SVD decomposition $U\Sigma V^\top$, where $U \in \mathbb{R}^{n\times k}$ has orthonormal columns, $\Sigma \in \mathbb{R}^{k\times k}$ is a diagonal matrix, and $V \in \mathbb{R}^{k\times k}$. We call the columns of $U$ singular vectors. We use $A^\dagger \in \mathbb{R}^{k\times n}$ to denote the Moore-Penrose pseudoinverse, given by $A^\dagger = V\Sigma^{-1}U^\top$. Suppose $\Sigma \in \mathbb{R}^{k\times k}$ is a sorted diagonal matrix, and let $\sigma_1, \cdots, \sigma_k$ denote its diagonal entries. We call $\sigma_i$ the $i$-th singular value of the matrix and write it as $\sigma_i(A)$.

For any symmetric matrix $B \in \mathbb{R}^{k\times k}$, we define its eigenvalue decomposition as $U\Lambda U^\top$, where $\Lambda$ is a diagonal matrix. Let $\lambda_1, \cdots, \lambda_k$ denote the diagonal entries of $\Lambda \in \mathbb{R}^{k\times k}$. We say $\lambda_i$ is the $i$-th eigenvalue and usually write it as $\lambda_i(B)$. The connection between eigenvalues and singular values is
$$\sigma_i^2(A) = \lambda_i(A^\top A).$$
We use the notation $A \succeq 0$ to denote that the matrix $A$ is positive semidefinite (psd), i.e., $x^\top A x \geq 0$ for all vectors $x$. Similarly, for two square matrices $A$ and $B$, we write $A \succeq B$ when $x^\top A x \geq x^\top B x$ for all vectors $x$. We use $\Pr[\cdot]$ and $\mathbb{E}[\cdot]$ for probability and expectation. We denote $\min\{a, b\}$ (resp. $\max\{a, b\}$) as the minimum (resp. maximum) of $a$ and $b$.
Throughout, for non-negative real numbers $a$ and $b$, we write $a = (1\pm\epsilon)b$ if $a \in [(1-\epsilon)b, (1+\epsilon)b]$.

G Subspace Embeddings and Norm Preserving

In Section G.1, we show the norm-preserving property of softmax functions. In Section G.2, we show the norm-preserving property of the ReLU function. In Section G.3, we introduce the folded Gaussian distribution. In Section G.4, we introduce the $\ell_2$ subspace embedding. In Section G.5, we introduce the $\ell_1$ subspace embedding. In Section G.6, we introduce different sketching matrices for subspace embedding.
G.1 Softmax Functions

Let $K \in \mathbb{R}^{s\times d}$ and $V \in \mathbb{R}^{d\times s}$. This setting is inspired by the softmax unit in the attention scheme of large language models. Softmax-related regression has been studied in many settings (Zandieh et al., 2023; Alman & Song, 2023; Brand et al., 2023; Li et al., 2023b; Deng et al., 2023b;a; Gao et al., 2023a; Li et al., 2023a; Gao et al., 2023b). In this work, we follow the standard softmax definition. We define $\sigma_1 : \mathbb{R}^s \to \mathbb{R}^s$ to be the softmax function, i.e., for any vector $y \in \mathbb{R}^s$,
$$\sigma_1(y)_i = \frac{\exp(y_i)}{\sum_{j=1}^{s}\exp(y_j)}, \quad \forall i \in [s].$$
The standard softmax is the $\ell_1$ version. In this work, we also consider an $\ell_2$ generalization. We define $\sigma_2 : \mathbb{R}^s \to \mathbb{R}^s$ to be the softmax function ($\ell_2$ version), i.e., for any vector $y \in \mathbb{R}^s$,
$$\sigma_2(y)_i = \frac{\exp(y_i)}{(\sum_{j=1}^{s}\exp(2 y_j))^{1/2}}, \quad \forall i \in [s].$$
We define the function $f : \mathbb{R}^d \to \mathbb{R}^d$ as
$$f(x) = V\cdot(\sigma(K\cdot x)). \qquad (3)$$
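As a quick illustration of the two softmax variants and of $f$ in Eq. (3), here is a small NumPy sketch (the dimensions are arbitrary illustrative choices). Note that $\|\sigma_1(y)\|_1 = 1$ while $\|\sigma_2(y)\|_2 = 1$, which is the property used in the lemmas below.

```python
# Minimal sketch of the l1 and l2 softmax variants and the map f(x) = V sigma(Kx).
import numpy as np

def softmax_l1(y):
    e = np.exp(y - y.max())                 # shifted for numerical stability
    return e / e.sum()                      # ||sigma_1(y)||_1 = 1

def softmax_l2(y):
    e = np.exp(y - y.max())
    return e / np.sqrt((e ** 2).sum())      # ||sigma_2(y)||_2 = 1

rng = np.random.default_rng(0)
s, d = 8, 16                                # illustrative dimensions
K = rng.standard_normal((s, d))
V = rng.standard_normal((d, s))
x = rng.standard_normal(d)

y = K @ x
print(np.linalg.norm(softmax_l1(y), 1))     # ~1.0
print(np.linalg.norm(softmax_l2(y), 2))     # ~1.0
print((V @ softmax_l2(y)).shape)            # f(x) in Eq. (3), shape (16,)
```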
Definition G.1. We say $\mathcal{X} \subset \mathbb{R}^d$ is a rank-$k$ subspace if there is an orthonormal basis $U \in \mathbb{R}^{d\times k}$ such that for any $x \in \mathcal{X}$ there is a $y \in \mathbb{R}^k$ with $x = Uy$.

We can then show the following.

Lemma G.2. Let $\tau \in (0,1)$. Let $\mathcal{X} \subset \mathbb{R}^d$ denote a subspace with rank $k$. Let $f$ be defined based on the $\sigma_2$ function, where $V$ is $\tau$ times a random Gaussian matrix with $d \geq \Omega(\epsilon^{-2}(k+\log(1/\delta)))$ rows. Then with probability $1-\delta$,
$$(1-\epsilon)\tau\|x\|_2 \leq \|f(x)\|_2 \leq (1+\epsilon)\tau\|x\|_2$$
for all unit vectors $x \in \mathcal{X}$. Further, if $d = O(k+\log(1/\delta))$, then
$$0.5\,\tau\|x\|_2 \leq \|f(x)\|_2 \leq 2\,\tau\|x\|_2.$$

Remark G.3. The above condition implies that $f$ is a shrinking operator, but one that does not shrink arbitrarily.

Proof. Given $d \geq \Omega(\epsilon^{-2}(k+\log(1/\delta)))$, Lemma G.11 gives, for the unscaled Gaussian matrix $V$,
$$(1-\epsilon)\|y\|_2 \leq \|Vy\|_2 \leq (1+\epsilon)\|y\|_2.$$
Since the input to $V$ in $f$ is the output of the softmax function ($\ell_2$ version), we know that $\|y\|_2 = 1$. Thus,
$$(1-\epsilon) \leq \|Vy\|_2 \leq (1+\epsilon).$$
Rescaling $V$ by $\tau$ and using $\|x\|_2 = 1$, we obtain
$$(1-\epsilon)\tau\|x\|_2 \leq \|f(x)\|_2 \leq (1+\epsilon)\tau\|x\|_2.$$
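A quick numerical sanity check of Lemma G.2 (not part of the paper): draw a random Gaussian $V$ with many rows, feed it $\ell_2$-normalized softmax outputs, and verify that $\|f(x)\|_2$ stays within a constant factor of $\tau\|x\|_2$. All sizes below are illustrative.

```python
# Empirical sanity check of the norm-preservation claim in Lemma G.2.
import numpy as np

rng = np.random.default_rng(1)
s, k, tau, d = 32, 4, 0.5, 400                      # d plays the role of Omega(eps^-2 (k+log(1/delta)))

U = np.linalg.qr(rng.standard_normal((64, k)))[0]   # rank-k subspace of R^64
K = rng.standard_normal((s, 64))
V = tau * rng.standard_normal((d, s)) / np.sqrt(d)  # scaled Gaussian

def softmax_l2(y):
    e = np.exp(y - y.max())
    return e / np.sqrt((e ** 2).sum())

ratios = []
for _ in range(100):
    x = U @ rng.standard_normal(k)
    x /= np.linalg.norm(x)                          # unit vector in the subspace
    fx = V @ softmax_l2(K @ x)                      # f(x) = V sigma_2(K x)
    ratios.append(np.linalg.norm(fx) / (tau * np.linalg.norm(x)))

print(min(ratios), max(ratios))                     # should stay close to 1
```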
Lemma G.4. Let $\tau \in (0,1)$. Let $\mathcal{X} \subset \mathbb{R}^d$ denote a subspace with rank $k$. Let $f$ be defined based on the $\sigma_1$ function, where $V$ is $\frac{1}{2}\tau$ times a random Gaussian matrix with $d \geq \Omega(k+\log(1/\delta))$ rows. Then we have
$$\frac{1}{4\sqrt{s}}\tau\cdot\|x\|_2 \leq \|f(x)\|_2 \leq \tau\cdot\|x\|_2$$
for all unit vectors $x$.

Proof. By the subspace embedding property (Lemma G.11), if $d \geq \Omega(\epsilon^{-2}(s+\log(1/\delta)))$, then for the unscaled Gaussian matrix $V$,
$$(1-\epsilon)\|y\|_2 \leq \|Vy\|_2 \leq (1+\epsilon)\|y\|_2.$$
By the property of $f$, we only need to consider $y$ with $\|y\|_1 = 1$, which implies
$$\frac{1}{\sqrt{s}}\|y\|_1 \leq \|y\|_2 \leq \|y\|_1.$$
On one hand, we have
$$\|Vy\|_2 \leq (1+\epsilon)\cdot\|y\|_2 \leq (1+\epsilon)\cdot\|y\|_1 = 1+\epsilon, \qquad (4)$$
where the first step follows from $\|Vy\|_2 \leq (1+\epsilon)\|y\|_2$, the second step follows from $\|y\|_2 \leq \|y\|_1$, and the last step follows from $\|y\|_1 = 1$.
On the other hand, we have
$$\|Vy\|_2 \geq (1-\epsilon)\|y\|_2 \geq \frac{1}{\sqrt{s}}(1-\epsilon)\|y\|_1 = \frac{1}{\sqrt{s}}(1-\epsilon), \qquad (5)$$
where the first step follows from $(1-\epsilon)\|y\|_2 \leq \|Vy\|_2$, the second step follows from $\frac{1}{\sqrt{s}}\|y\|_1 \leq \|y\|_2$, and the last step follows from $\|y\|_1 = 1$.
Combining Eq. (5) and Eq. (4), we have
$$(1-\epsilon)\frac{1}{\sqrt{s}} \leq \|Vy\|_2 \leq 1+\epsilon.$$
Choosing $\epsilon = 1/2$, we have
$$\frac{1}{2\sqrt{s}} \leq \|Vy\|_2 \leq 2.$$
Rescaling $V$ by $\frac{1}{2}\tau$ and using $\|x\|_2 = 1$, we have
$$\frac{1}{4\sqrt{s}}\tau\|x\|_2 \leq \|f(x)\|_2 \leq \tau\|x\|_2.$$

G.2 ReLU Functions

We use $\phi : \mathbb{R} \to \mathbb{R}$ to denote the ReLU function, i.e., $\phi(z) = \max\{z, 0\}$. Let $K \in \mathbb{R}^{s\times d}$ and $V \in \mathbb{R}^{d\times s}$. We define the function $g : \mathbb{R}^d \to \mathbb{R}^d$ as
$$g(x) = V\cdot(\phi(K\cdot x)). \qquad (6)$$

Lemma G.5. Let $\mathcal{X} \subset \mathbb{R}^d$ denote a rank-$k$ subspace. Let $K$ and $V$ denote random Gaussian matrices. Let $s \geq \Omega(\epsilon^{-2}k\log(1/(\delta\epsilon)))$ and $d \geq \Omega(\epsilon^{-2}(k+\log(1/\delta)))$. Then with high probability $1-\delta$, for all unit vectors $x \in \mathcal{X}$,
$$(1-\epsilon)\|x\|_2 \leq \|g(x)\|_2 \leq (1+\epsilon)\|x\|_2.$$
Proof. Suppose $s \geq \Omega(\epsilon^{-2}\log(1/\delta))$. Using Lemma G.6 and Fact G.7, we can show that for each fixed $x$,
$$(1-\epsilon)\|x\|_2 \leq \|\phi(Kx)\|_2 \leq (1+\epsilon)\|x\|_2$$
holds with probability $1-\delta$. By a standard $\epsilon$-net argument (Lemma G.9), the number of net points in $\mathcal{X}$ is at most $(10/\epsilon)^{O(k)}$. Taking a union bound over all the net points, we can show that for all $x \in \mathcal{X}$,
$$(1-\epsilon)\|x\|_2 \leq \|\phi(Kx)\|_2 \leq (1+\epsilon)\|x\|_2$$
holds with probability $1-\delta/2$, provided $s \geq \Omega(\epsilon^{-2}k\log(1/(\delta\epsilon)))$. Further, using Lemma G.11, we can show that
$$(1-\epsilon)\|\phi(Kx)\|_2 \leq \|g(x)\|_2 \leq (1+\epsilon)\|\phi(Kx)\|_2$$
holds with probability $1-\delta/2$. Combining the two,
$$(1-\epsilon)^2\|x\|_2 \leq \|g(x)\|_2 \leq (1+\epsilon)^2\|x\|_2$$
holds with probability $1-\delta$. Rescaling $\epsilon$ completes the proof.

G.3 Folded Gaussian Distribution

We state a standard tool from the literature.

Lemma G.6 (Lemma 1 on page 1325 of Laurent and Massart (Laurent & Massart, 2000)). Let $X \sim \chi^2_k$ be a chi-squared distributed random variable with $k$ degrees of freedom, i.e., $X = \sum_{i=1}^k Z_i^2$ where each $Z_i$ has zero mean and variance $\sigma^2$. Then,
$$\Pr[X - k\sigma^2 \geq (2\sqrt{kt}+2t)\sigma^2] \leq \exp(-t),$$
$$\Pr[k\sigma^2 - X \geq 2\sqrt{kt}\,\sigma^2] \leq \exp(-t).$$
Further, if $k \geq \Omega(\epsilon^{-2}t)$ and $t \geq \Omega(\log(1/\delta))$, then we have
$$\Pr[|X - k\sigma^2| \geq \epsilon k\sigma^2] \leq \delta.$$

We prove the following property.

Fact G.7. Let $h, q \in \mathbb{R}^p$ be fixed vectors with $h \neq 0$, let $W \in \mathbb{R}^{m\times p}$ be a random matrix with i.i.d. entries $W_{i,j} \sim \mathcal{N}(0, \frac{2}{m})$, and let the vector $v \in \mathbb{R}^m$ be defined as $v_i = \mathbf{1}_{(W(h+q))_i \geq 0}\,(Wh)_i$. Then:
• $|v_i|$ follows i.i.d. the following distribution: with probability $1/2$, $|v_i| = 0$, and with the other half probability $|v_i|$ follows the folded Gaussian distribution $|\mathcal{N}(0, \frac{2\|h\|^2}{m})|$;
• $\frac{m\|v\|^2}{2\|h\|^2}$ is in distribution identical to $\chi^2_\omega$ (a chi-square distribution of order $\omega$), where $\omega$ follows the binomial distribution $B(m, 1/2)$.

Proof. We assume each row $W_i$ is generated by first generating a Gaussian vector $g \sim \mathcal{N}(0, \frac{2I}{m})$ and then setting $W_i = \pm g$, where the sign is chosen with half-half probability. Now, $|\langle W_i, h\rangle| = |\langle g, h\rangle|$ only depends on $g$ and is in distribution identical to $|\mathcal{N}(0, \frac{2\|h\|^2}{m})|$. Next, after the sign is determined, the indicator $\mathbf{1}_{\langle W_i, h+q\rangle \geq 0}$ is $1$ with half probability and $0$ with the other half. Therefore, $|v_i|$ satisfies the aforementioned distribution. As for $\|v\|^2$, letting $\omega \in \{0, 1, \ldots, m\}$ be the variable indicating how many indicators are $1$, we have $\omega \sim B(m, 1/2)$ and $\frac{m\|v\|^2}{2\|h\|^2} \sim \chi^2_\omega$.

G.4 $\ell_2$ Subspace Embedding

We define a standard notion from the sketching technique. (Sketching has been widely applied to problems such as linear regression and low-rank approximation (Clarkson & Woodruff, 2013; Nelson & Nguyên, 2013; Lu et al., 2013; Boutsidis et al., 2016; Cohen, 2016; Razenshteyn et al., 2016; Song et al., 2017; 2019), linear programming (Song & Yu, 2021; Dong et al., 2021; Jiang et al., 2021; Gu & Song, 2022), semi-definite programming (Gu & Song, 2022; Song et al., 2023b), empirical risk minimization (Lee et al., 2019; Qin et al., 2023b), and training over-parameterized neural networks (Brand et al., 2021; Song et al., 2021; Alman et al., 2022; Hu et al., 2022; Zhang, 2022).)

Definition G.8 ($\ell_2$ subspace embedding (Sarlos, 2006)). An $(\epsilon, \delta, \ell_2)$-subspace embedding for the column space of an $n\times d$ matrix $A$ is a matrix $S$ for which
$$\Pr[\forall x \in \mathbb{R}^d,\ \|SAx\|_2^2 = (1\pm\epsilon)\|Ax\|_2^2] \geq 1-\delta.$$
The above condition is equivalent to
$$\Pr[\|U^\top U - U^\top S^\top S U\| \leq \epsilon] \geq 1-\delta,$$
where $U$ is an orthonormal basis of the column space of $A$. For why these two conditions are equivalent, we refer the reader to the survey (Woodruff, 2014).

We state a standard tool from the literature.

Lemma G.9 (Lemma 5 in (Woodruff, 2014)). Let $\mathcal{X} \subset \mathbb{R}^d$ be of rank $k$. For any $\gamma \in (0,1)$, there is a $\gamma$-net $N$ of $\mathcal{X}$ for which $|N| \leq (1+4/\gamma)^k$.
G.5 $\ell_1$ Subspace Embedding

When $p = 1$, using Cauchy random variables, Sohler and Woodruff (Sohler & Woodruff, 2011) showed that there exist $\ell_1$ oblivious subspace embeddings with $O(d\log d)$ rows and distortion $\kappa = O(d\log d)$. This approach was generalized using $p$-stable random variables by Meng and Mahoney (Meng & Mahoney, 2013) to $\ell_p$-norms with $1 < p < 2$, where they showed there exist $\ell_p$ oblivious subspace embeddings with $O(d\log d)$ rows and $\kappa = O((d\log d)^{1/p})$. Unlike the case $p = 2$, these embeddings incur a large distortion.

In (Wang & Woodruff, 2018), it is shown that for every $1 \leq p < 2$, any oblivious subspace embedding with dimension $r$ has distortion
$$\kappa = \Omega\Big(\frac{1}{(\frac{1}{d})^{1/p}\cdot\log^{2/p} r + (\frac{r}{n})^{1/p-1/2}}\Big).$$
They also give sparse oblivious subspace embeddings for every $1 \leq p < 2$ which are optimal in dimension and distortion, up to $\mathrm{poly}(\log d)$ factors. Importantly, for $p = 1$ they achieve $r = O(d\log d)$, $\kappa = O(d\log d)$, and
$s = O(\log d)$ non-zero entries per column.

Definition G.10 ($\ell_1$ subspace embedding). Let $0 < \alpha < \beta$ be parameters. We say a matrix $S$ is an $\ell_1$ subspace embedding for an $n\times d$ matrix $A$ if there are constants $c_1, c_2 > 0$ so that for all $x \in \mathbb{R}^d$,
$$\|Ax\|_1 \leq \|SAx\|_1 \leq d^{c_1}\|Ax\|_1,$$
and $S$ has at most $d^{c_2}$ rows.

G.6 Random Matrices

Matrices            b                                Time for R·A                        Reference
Random Gaussian     ϵ⁻²(d + log(1/δ))                Tmat(b, n, d)                       Thm. 6 of (Woodruff, 2014)
SRHT                ϵ⁻²(√d + √(log n))² log(d/δ)     n d log(ϵ⁻¹ d (log n))              Thm. 7 of (Woodruff, 2014)
AMS                 ϵ⁻²(d + log(1/δ))                Tmat(b, n, d)                       Follows from JL guarantee
Count-sketch        ϵ⁻² δ⁻¹ d²                       nnz(A)                              Thm. 9 of (Woodruff, 2014)
Sparse embedding    ϵ⁻² d · polylog(d/(ϵδ))          ϵ⁻¹ nnz(A) polylog(d/(ϵδ))          Thm. 10 (2) of (Woodruff, 2014)
Sparse embedding    ϵ⁻² d^{1+γ}                      ϵ⁻¹ nnz(A) poly(1/γ)                Thm. 10 (1) of (Woodruff, 2014)

Table 9. Summary of different sketching matrices for subspace embedding. The sketching matrix $R$ has size $b\times n$. The vectors are from the column subspace of a matrix $A$ of size $n\times d$. $\epsilon \in (0,1)$ is the error parameter and $\delta \in (0,1)$ is the failure probability parameter. $\mathrm{Tmat}(a,b,c)$ denotes the running time of fast matrix multiplication of two matrices of size $a\times b$ and $b\times c$. In the first sparse embedding matrix, each column has $s \geq \epsilon^{-1}\mathrm{polylog}(d/(\epsilon\delta))$ non-zero entries; in the second sparse embedding matrix, each column has $s \geq \epsilon^{-1}\mathrm{poly}(1/\gamma)$ non-zero entries, $\gamma > 0$ is a tunable parameter that gives different trade-offs, and $\delta$ can be as small as $1/\mathrm{poly}(d)$. For count-sketch matrices, the subspace embedding guarantee is proved via the JL moment property instead of directly from the JL guarantee.

Lemma G.11 (Theorem 6 of (Woodruff, 2014)). Let $0 < \epsilon, \delta < 1$ and $S = \frac{1}{\sqrt{k}}R \in \mathbb{R}^{k\times n}$, where the entries $R_{i,j}$ of $R$ are independent standard normal random variables. If $k = \Theta(\epsilon^{-2}(d+\log(1/\delta)))$, then for any fixed $n\times d$ matrix $A$, with probability $1-\delta$, $S$ is a $(1\pm\epsilon)$ $\ell_2$-subspace embedding for $A$; that is, simultaneously for all $x \in \mathbb{R}^d$, $\|SAx\|_2 = (1\pm\epsilon)\|Ax\|_2$.

We consider several standard sketching matrices:
1. Random Gaussian matrices.
2. Subsampled randomized Hadamard/Fourier transform (SRHT) matrices (Lu et al., 2013).
3. AMS sketch matrices (Alon et al., 1996), with random $\{-1,+1\}$ entries.
4. Count-sketch matrices (Charikar et al., 2002), where each column has exactly one non-zero entry, equal to $-1$ or $+1$ with probability $1/2$ each.
5. Sparse embedding matrices (Nelson & Nguyên, 2013), where each column has $s$ non-zero entries, each equal to $-\frac{1}{\sqrt{s}}$ or $+\frac{1}{\sqrt{s}}$ with probability $1/2$ each.
6. Uniform sampling matrices.

Definition G.12 (Random Gaussian matrix). We say $R \in \mathbb{R}^{b\times n}$ is a random Gaussian matrix if all entries are sampled from $\mathcal{N}(0, 1/b)$ independently.
Definition G.13 (Subsampled randomized Hadamard/Fourier transform matrix (Lu et al., 2013)). We say $R \in \mathbb{R}^{b\times n}$ is a subsampled randomized Hadamard transform (SRHT) matrix if it is of the form $R = \sqrt{n/b}\,SHD$, where $S \in \mathbb{R}^{b\times n}$ is a random matrix whose rows are $b$ uniform samples (without replacement) from the standard basis of $\mathbb{R}^n$, $H \in \mathbb{R}^{n\times n}$ is a normalized Walsh-Hadamard matrix, and $D \in \mathbb{R}^{n\times n}$ is a diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables. (In this case, we require $\log n$ to be an integer.)

Definition G.14 (AMS sketch matrix (Alon et al., 1996)). Let $h_1, h_2, \cdots, h_b$ be $b$ random hash functions picked from a 4-wise independent hash family $\mathcal{H} = \{h : [n] \to \{-\frac{1}{\sqrt{b}}, +\frac{1}{\sqrt{b}}\}\}$. Then $R \in \mathbb{R}^{b\times n}$ is an AMS sketch matrix if we set $R_{i,j} = h_i(j)$.

Definition G.15 (Count-sketch matrix (Charikar et al., 2002)). Let $h : [n] \to [b]$ be a random 2-wise independent hash function and $\sigma : [n] \to \{-1, +1\}$ be a random 4-wise independent hash function. Then $R \in \mathbb{R}^{b\times n}$ is a count-sketch matrix if we set $R_{h(i),i} = \sigma(i)$ for all $i \in [n]$ and all other entries to zero.

Definition G.16 (Sparse embedding matrix I (Nelson & Nguyên, 2013)). We say $R \in \mathbb{R}^{b\times n}$ is a sparse embedding matrix with parameter $s$ if each column has exactly $s$ non-zero elements, equal to $\pm 1/\sqrt{s}$ uniformly at random, whose locations are picked uniformly at random without replacement (and independently across columns). (For our purposes, the signs need only be $O(\log d)$-wise independent, each column can be specified by an $O(\log d)$-wise independent permutation, and the seeds specifying the permutations in different columns need only be $O(\log d)$-wise independent.)

Definition G.17 (Sparse embedding matrix II (Nelson & Nguyên, 2013)). Let $h : [n]\times[s] \to [b/s]$ be a random 2-wise independent hash function and $\sigma : [n]\times[s] \to \{-1, 1\}$ be 4-wise independent. Then $R \in \mathbb{R}^{b\times n}$ is a sparse embedding matrix II with parameter $s$ if we set $R_{(j-1)b/s + h(i,j),\,i} = \sigma(i,j)/\sqrt{s}$ for all $(i,j) \in [n]\times[s]$ and all other entries to zero. (This definition has the same behavior as sparse embedding matrix I for our purposes.)

Definition G.18 (Uniform sampling matrix). We say $R \in \mathbb{R}^{b\times n}$ is a uniform sampling matrix if it is of the form $R = \sqrt{n/b}\,SD$, where $S \in \mathbb{R}^{b\times n}$ is a random matrix whose rows are $b$ uniform samples (without replacement) from the standard basis of $\mathbb{R}^n$, and $D \in \mathbb{R}^{n\times n}$ is a diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables.
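To make these constructions concrete, here is a small NumPy sketch (not from the paper) that builds a random Gaussian sketch and a count-sketch matrix as defined above and empirically checks the subspace-embedding property $\|SAx\|_2 \approx \|Ax\|_2$ on random test vectors; all sizes are arbitrary.

```python
# Minimal sketch of two sketching matrices from Section G.6 and an empirical
# check of the l2 subspace-embedding property on random test vectors.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
A = rng.standard_normal((n, d))

def gaussian_sketch(b, n):
    # Definition G.12 / Lemma G.11: i.i.d. N(0, 1/b) entries.
    return rng.standard_normal((b, n)) / np.sqrt(b)

def count_sketch(b, n):
    # Definition G.15: one nonzero (+-1) per column, at a random row.
    R = np.zeros((b, n))
    rows = rng.integers(0, b, size=n)          # h(i)
    signs = rng.choice([-1.0, 1.0], size=n)    # sigma(i)
    R[rows, np.arange(n)] = signs
    return R

for name, S in [("gaussian", gaussian_sketch(400, n)),
                ("count-sketch", count_sketch(400, n))]:
    ratios = []
    for _ in range(200):
        x = rng.standard_normal(d)
        ratios.append(np.linalg.norm(S @ (A @ x)) / np.linalg.norm(A @ x))
    print(f"{name:12s}: ratio in [{min(ratios):.3f}, {max(ratios):.3f}]")
```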
H Distances, Angles, and Inner Product

Most of the properties in this section are very standard in the literature, e.g., see (Gu et al., 2023).

Let $U \in \mathbb{R}^{n\times k}$ denote an orthonormal basis; we use $U_\perp \in \mathbb{R}^{n\times(n-k)}$ to denote a matrix such that $UU^\top + U_\perp U_\perp^\top = I_n$.

Definition H.1. Let $X \in \mathbb{R}^{n\times k}$ and $Y \in \mathbb{R}^{n\times k}$. For any matrix $X$, and for an orthogonal matrix $Y$ ($Y^\top Y = I_k$), we define
• $\tan\theta(Y, X) := \|Y_\perp^\top X (Y^\top X)^{-1}\|$.
For orthogonal matrices $Y$ and $X$ ($Y^\top Y = I_k$ and $X^\top X = I_k$), we define
• $\cos\theta(Y, X) := \sigma_{\min}(Y^\top X)$.
  – It is clear that $\cos\theta(Y, X) = 1/\|(Y^\top X)^{-1}\|$ and $\cos\theta(Y, X) \leq 1$.
• $\sin\theta(Y, X) := \|(I - YY^\top)X\|$.
  – It is clear that $\sin\theta(Y, X) = \|Y_\perp Y_\perp^\top X\| = \|Y_\perp^\top X\|$ and $\sin\theta(Y, X) \leq 1$.
• $\mathrm{dist}(Y, X) := \min_{Q\in O_k}\|YQ - X\|$, where $O_k$ is the set of $k\times k$ orthogonal matrices.

Lemma H.2 (Structural lemma for orthogonal matrices). Let $X, Y \in \mathbb{R}^{n\times k}$ be orthogonal matrices. Then
$$(Y^\top X)_\perp = Y_\perp^\top X.$$
Proof. Let us first compute the Gram matrix of $Y^\top X$:
$$X^\top YY^\top X = X^\top(I - Y_\perp Y_\perp^\top)X = X^\top X - X^\top Y_\perp Y_\perp^\top X = I_k - X^\top Y_\perp Y_\perp^\top X,$$
where the first step follows from $Y_\perp Y_\perp^\top + YY^\top = I$, the second step follows from simple algebra, and the last step follows from $X$ being an orthogonal matrix, so $X^\top X = I_k$. This means that $(Y^\top X)_\perp = Y_\perp^\top X$.

Lemma H.3 (Orthogonal and inverse share singular vectors). Let $A \in \mathbb{R}^{k\times k}$ be non-singular. Then $A_\perp$ and $A^{-1}$ have the same set of singular vectors. Consequently, $\|A_\perp A^{-1}\| = \|A_\perp\|\|A^{-1}\|$.

Proof. Let $A \in \mathbb{R}^{k\times k}$ with $A^\top A + A_\perp^\top A_\perp = I_k$; we will show that $\|A_\perp A^{-1}\| = \|A_\perp\|\|A^{-1}\|$. Let $x \in \mathbb{R}^k$ be the unit singular vector of $A$ that realizes the spectral norm, and note that
$$\|A_\perp x\|_2^2 = 1 - \|A\|^2.$$
We argue that $x$ corresponds to the smallest singular value of $A_\perp$ via contradiction. Suppose there exists some unit vector $y$ with $\|A_\perp y\|_2 < \|A_\perp x\|_2$. By definition, we know that $\|A_\perp y\|_2^2 + \|Ay\|_2^2 = 1$, which means that $\|Ay\|_2 > \|Ax\|_2 = \|A\|$, contradicting the definition of the spectral norm. Similarly, if $z$ is the unit vector that realizes the spectral norm of $A_\perp$, then it is also the singular vector corresponding to the smallest singular value of $A$, or equivalently, to the spectral norm of $A^{-1}$. Our argument above essentially implies that $A_\perp$ and $A^{-1}$ have the same set of singular vectors. The rest of the proof is then straightforward: suppose $A_\perp z = \lambda z$ and $A^{-1}z = \mu z$; then
$$A_\perp A^{-1}z = A_\perp \mu z = \mu(A_\perp z) = \lambda\mu z,$$
where the first step follows from our assumption, the second step follows from the fact that $\mu$ is a real number and multiplying a matrix by a real number is commutative and associative, and the third step follows from our assumption.
Thus, we have $\|A_\perp A^{-1}\| = \|A_\perp\|\|A^{-1}\|$, and we have proved the assertion.

Lemma H.4. Let $X, Y \in \mathbb{R}^{n\times k}$ be orthogonal matrices. Then
$$\tan\theta(Y, X) = \frac{\sin\theta(Y, X)}{\cos\theta(Y, X)}.$$

Proof. By Lemma H.2, we have $(Y^\top X)_\perp = Y_\perp^\top X$. Thus, $\tan\theta(Y, X) = \|(Y^\top X)_\perp(Y^\top X)^{-1}\|$. The claim then follows straightforwardly from Lemma H.3.

Lemma H.5. Let $X, Y \in \mathbb{R}^{n\times k}$ be orthogonal matrices. Then $\sin^2\theta(Y, X) + \cos^2\theta(Y, X) = 1$.
Proof. Recall that $\cos\theta(Y, X) = \frac{1}{\|(Y^\top X)^{-1}\|}$ and $\sin\theta(Y, X) = \|Y_\perp^\top X\|$. By Lemma H.2, we know that $(Y^\top X)_\perp = Y_\perp^\top X$, therefore $\sin\theta(Y, X) = \|(Y^\top X)_\perp\|$. Let $A := Y^\top X$. By Lemma H.3, we know that $A_\perp$ and $A^{-1}$ have the same singular vectors, or equivalently, the singular vector realizing $\|A_\perp\|$ corresponds to the smallest singular value of $A$. Let $z \in \mathbb{R}^k$ be the unit singular vector with singular value $\|A_\perp\|$; then
$$z^\top A^\top A z + z^\top A_\perp^\top A_\perp z = 1,$$
$$\|A_\perp\|^2 + \sigma_{\min}^2(A) = 1,$$
$$\cos^2\theta(Y, X) + \sin^2\theta(Y, X) = 1.$$
This completes the proof.
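The following NumPy sketch (illustrative, not from the paper) numerically checks the identities of Lemma H.4 and Lemma H.5 for two random orthonormal bases.

```python
# Numerical check of tan = sin / cos and sin^2 + cos^2 = 1 for the angles
# between the column spaces of two random orthonormal matrices X, Y (n x k).
import numpy as np

rng = np.random.default_rng(2)
n, k = 20, 5
X, _ = np.linalg.qr(rng.standard_normal((n, k)))
Y, _ = np.linalg.qr(rng.standard_normal((n, k)))

P = np.eye(n) - Y @ Y.T                            # projector onto range(Y)^perp
cos_t = np.linalg.svd(Y.T @ X, compute_uv=False).min()     # sigma_min(Y^T X)
sin_t = np.linalg.norm(P @ X, 2)                            # ||(I - Y Y^T) X||
tan_t = np.linalg.norm(P @ X @ np.linalg.inv(Y.T @ X), 2)   # ||Y_perp^T X (Y^T X)^{-1}||

print(abs(tan_t - sin_t / cos_t))      # ~0  (Lemma H.4)
print(abs(sin_t**2 + cos_t**2 - 1.0))  # ~0  (Lemma H.5)
```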
H.1 Angle is Close

Lemma H.6. Let $\epsilon \in (0, 0.1)$. Let $x$ denote a unit vector, i.e., $\|x\|_2 = 1$, and let $z = (x+y)/\|x+y\|_2$. If $\|y\|_2 \leq \epsilon\cdot\|x\|_2$, then
$$\sqrt{1 - \langle x, z\rangle^2} \leq 2\sqrt{\epsilon}.$$

Proof. We have
$$\|x+y\|_2 \geq \|x\|_2 - \|y\|_2 \geq 1-\epsilon,$$
where the first step follows from the triangle inequality. We also have
$$\|x+y\|_2 \leq \|x\|_2 + \|y\|_2 \leq 1+\epsilon. \qquad (7)$$
We have
$$(1-\epsilon)^2 \geq 1-2\epsilon \qquad (8)$$
and, since $\epsilon \in (0, 0.1)$,
$$\frac{1}{(1+\epsilon)^2} \geq 1-2\epsilon. \qquad (9)$$
Combining Eq. (8) and Eq. (9), we have
$$\frac{1}{(1+\epsilon)^2}\cdot(1-\epsilon)^2 \geq (1-2\epsilon)\cdot(1-2\epsilon) = 1-4\epsilon+4\epsilon^2 \geq 1-4\epsilon, \qquad (10)$$
where the first step follows from Eq. (8) and Eq. (9) and the rest follows from simple algebra.
Finally, we have
$$\begin{aligned}
1 - \langle x, z\rangle^2 &= 1 - \Big\langle x, \frac{x+y}{\|x+y\|_2}\Big\rangle^2
= 1 - \frac{1}{\|x+y\|_2^2}\langle x, x+y\rangle^2
= 1 - \frac{1}{\|x+y\|_2^2}(\|x\|_2^2 + \langle x, y\rangle)^2
= 1 - \frac{1}{\|x+y\|_2^2}(1 + \langle x, y\rangle)^2 \\
&\leq 1 - \frac{1}{(1+\epsilon)^2}(1 + \langle x, y\rangle)^2
\leq 1 - \frac{1}{(1+\epsilon)^2}(1-\epsilon)^2
\leq 1 - (1-4\epsilon)
= 4\epsilon,
\end{aligned}$$
where the first step follows from the definition of $z$, the second step follows from reorganization, the third step follows from the definition of the inner product, the fourth step follows from $\|x\|_2 = 1$, the fifth step follows from Eq. (7), the sixth step follows from $1 + \langle x, y\rangle \geq 1 - |\langle x, y\rangle| \geq 1 - \|x\|_2\cdot\|y\|_2 \geq 1-\epsilon$, the seventh step follows from Eq. (10), and the last step follows from simple algebra.
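A one-off numerical check of Lemma H.6 (illustrative only): perturb a unit vector $x$ by a small $y$ and compare $\sqrt{1-\langle x,z\rangle^2}$ against the bound $2\sqrt{\epsilon}$.

```python
# Empirical check of Lemma H.6: the sine of the angle between x and the
# normalized perturbed vector z = (x+y)/||x+y|| is at most 2*sqrt(eps).
import numpy as np

rng = np.random.default_rng(3)
d, eps = 50, 0.05

for _ in range(5):
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)                          # unit vector
    y = rng.standard_normal(d)
    y *= eps / np.linalg.norm(y)                    # ||y||_2 = eps * ||x||_2
    z = (x + y) / np.linalg.norm(x + y)
    lhs = np.sqrt(max(0.0, 1.0 - np.dot(x, z) ** 2))
    print(lhs, "<=", 2 * np.sqrt(eps))              # bound from Lemma H.6
```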
I Function Approximations

We first show the function approximation results for two operators (i.e., two functions) in Section I.1, and then for four operators in Section I.2.

I.1 Function Approximations for Two Operators

Lemma I.1. Let $f_1 : \mathbb{R}^d \to \mathbb{R}^d$ and $f_2 : \mathbb{R}^d \to \mathbb{R}^d$. Assume the following conditions:
• Condition 1a. $f_1$ is a linear function.
• Condition 1b. $\|f_1(x)\|_2 \leq \epsilon_1\|x\|_2$ ($f_1$ is shrinking).
• Condition 1c. $\|f_1(x) - f_1(y)\|_2 \leq L_1\|x - y\|_2$ ($f_1$ is Lipschitz).
• Condition 2a. $f_2$ is a linear function.
• Condition 2b. $\|f_2(x)\|_2 \leq \epsilon_2\|x\|_2$ ($f_2$ is shrinking).
• Condition 2c. $\|f_2(x) - f_2(y)\|_2 \leq L_2\|x - y\|_2$ ($f_2$ is Lipschitz).
We define three functions
$$g_1(x) := (I+f_1)\cdot(I+f_2)(x) = x + f_2(x) + f_1(x + f_2(x)),$$
$$g_2(x) := (I+f_2)\cdot(I+f_1)(x) = x + f_1(x) + f_2(x + f_1(x)),$$
$$g_3(x) := (I+f_1+f_2)(x) = x + f_1(x) + f_2(x).$$
Then we can show that
• Part 1. $\|g_1(x) - g_2(x)\|_2 \leq 2\epsilon_1\epsilon_2\|x\|_2$ (if $f_1$ and $f_2$ are linear functions).
• Part 2. $\|g_1(x) - g_2(x)\|_2 \leq (\epsilon_2\cdot L_1 + \epsilon_1\cdot L_2)\|x\|_2$ (if $f_1$ and $f_2$ are Lipschitz functions).
• Part 3. $\|g_1(x) - g_3(x)\|_2 \leq \epsilon_1\epsilon_2\|x\|_2$ (if $f_1$ is a linear function).
• Part 4. $\|g_1(x) - g_3(x)\|_2 \leq \epsilon_2\cdot L_1\|x\|_2$ (if $f_1$ is a Lipschitz function).
• Part 5. $\|g_2(x) - g_3(x)\|_2 \leq \epsilon_1\epsilon_2\|x\|_2$ (if $f_2$ is a linear function).
• Part 6. $\|g_2(x) - g_3(x)\|_2 \leq \epsilon_1\cdot L_2\|x\|_2$ (if $f_2$ is a Lipschitz function).

Proof. Part 1. We have
$$\|g_1(x) - g_2(x)\|_2 \leq \|g_1(x) - g_3(x)\|_2 + \|g_3(x) - g_2(x)\|_2 \leq \epsilon_1\epsilon_2\|x\|_2 + \epsilon_1\epsilon_2\|x\|_2 = 2\epsilon_1\epsilon_2\|x\|_2,$$
where the first step follows from the triangle inequality, the second step follows from Part 3 and Part 5, and the last step follows from simple algebra.

Part 2. We have
$$\|g_1(x) - g_2(x)\|_2 \leq \|g_1(x) - g_3(x)\|_2 + \|g_3(x) - g_2(x)\|_2 \leq \epsilon_2\cdot L_1\|x\|_2 + \epsilon_1\cdot L_2\|x\|_2 = (\epsilon_2\cdot L_1 + \epsilon_1\cdot L_2)\|x\|_2,$$
where the first step follows from the triangle inequality, the second step follows from Part 4 and Part 6, and the last step follows from simple algebra.
Part 3. We have
$$\|g_1(x) - g_3(x)\|_2 = \|f_1(x + f_2(x)) - f_1(x)\|_2 = \|f_1(x + f_2(x) - x)\|_2 = \|f_1(f_2(x))\|_2 \leq \epsilon_1\cdot\|f_2(x)\|_2 \leq \epsilon_1\cdot\epsilon_2\cdot\|x\|_2,$$
where the first step follows from the definitions of $g_1$ and $g_3$, the second step follows from the fact that $f_1$ is a linear function, the third step follows from simple algebra, the fourth step follows from Condition 1b, and the last step follows from Condition 2b.

Part 4. We have
$$\|g_1(x) - g_3(x)\|_2 = \|f_1(x + f_2(x)) - f_1(x)\|_2 \leq L_1\cdot\|x + f_2(x) - x\|_2 = L_1\cdot\|f_2(x)\|_2 \leq L_1\cdot\epsilon_2\|x\|_2,$$
where the first step follows from the definitions of $g_1$ and $g_3$, the second step follows from Condition 1c, the third step follows from simple algebra, and the last step follows from Condition 2b.

Part 5. We have
$$\|g_2(x) - g_3(x)\|_2 = \|f_2(x + f_1(x)) - f_2(x)\|_2 = \|f_2(x + f_1(x) - x)\|_2 = \|f_2(f_1(x))\|_2 \leq \epsilon_2\cdot\|f_1(x)\|_2 \leq \epsilon_2\cdot\epsilon_1\cdot\|x\|_2,$$
where the first step follows from the definitions of $g_2$ and $g_3$, the second step follows from the fact that $f_2$ is a linear function, the third step follows from simple algebra, the fourth step follows from Condition 2b, and the last step follows from Condition 1b.

Part 6. We have
$$\|g_2(x) - g_3(x)\|_2 = \|f_2(x + f_1(x)) - f_2(x)\|_2 \leq L_2\cdot\|x + f_1(x) - x\|_2 = L_2\cdot\|f_1(x)\|_2 \leq L_2\cdot\epsilon_1\|x\|_2,$$
where the first step follows from the definitions of $g_2$ and $g_3$, the second step follows from Condition 2c, the third step follows from simple algebra, and the last step follows from Condition 1b.
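Since the parallelization argument earlier in the appendix treats the attention and MLP sub-blocks as approximately shrinking maps, it may help to see Lemma I.1 numerically. The sketch below (illustrative, not from the paper) instantiates $f_1, f_2$ as random linear maps scaled to have operator norm $\epsilon_1, \epsilon_2$ and checks Parts 1, 3, and 5.

```python
# Numerical check of Lemma I.1 (Parts 1, 3, 5) for linear shrinking maps.
import numpy as np

rng = np.random.default_rng(4)
d, eps1, eps2 = 32, 0.1, 0.2

def random_contraction(eps):
    A = rng.standard_normal((d, d))
    return A * (eps / np.linalg.norm(A, 2))   # operator norm exactly eps

A1, A2 = random_contraction(eps1), random_contraction(eps2)
f1, f2 = (lambda v: A1 @ v), (lambda v: A2 @ v)

x = rng.standard_normal(d)
g1 = x + f2(x) + f1(x + f2(x))     # (I + f1)(I + f2) x
g2 = x + f1(x) + f2(x + f1(x))     # (I + f2)(I + f1) x
g3 = x + f1(x) + f2(x)             # (I + f1 + f2) x

nx = np.linalg.norm(x)
print(np.linalg.norm(g1 - g2), "<=", 2 * eps1 * eps2 * nx)   # Part 1
print(np.linalg.norm(g1 - g3), "<=", eps1 * eps2 * nx)       # Part 3
print(np.linalg.norm(g2 - g3), "<=", eps1 * eps2 * nx)       # Part 5
```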
I.2 Function Approximations for Four Operators

Lemma I.2. For each $i \in [4]$, assume the following conditions:
• $i$(a): $f_i$ is a linear function;
• $i$(b): $\|f_i(x)\|_2 \leq \epsilon_i\|x\|_2$ ($f_i$ is shrinking);
• $i$(c): $\|f_i(x) - f_i(y)\|_2 \leq L_i\|x - y\|_2$ ($f_i$ is Lipschitz).
We define three functions
$$g_1(x) := (I+f_1)\cdot(I+f_2)\cdot(I+f_3)\cdot(I+f_4)(x),$$
$$g_2(x) := (I+f_1)\cdot(I+f_3)\cdot(I+f_2)\cdot(I+f_4)(x),$$
$$g_3(x) := (I+f_1+f_2+f_3+f_4)(x).$$
Then, we can show that
• Part 1. $\|g_1(x)-g_2(x)\|_2 \leq 2(\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are linear functions).
• Part 2. $\|g_1(x)-g_2(x)\|_2 \leq (2L_1\epsilon_2+2L_1\epsilon_3+2L_1\epsilon_4+L_2\epsilon_3+2L_2\epsilon_4+2L_3\epsilon_4+2L_1\epsilon_2\epsilon_3+2L_1\epsilon_2\epsilon_4+2L_1\epsilon_3\epsilon_4+L_2\epsilon_3\epsilon_4+2L_1\epsilon_2\epsilon_3\epsilon_4+L_3\epsilon_2+L_3\epsilon_2\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are Lipschitz functions).
• Part 3. $\|g_1(x)-g_3(x)\|_2 \leq (\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are linear functions).
• Part 4. $\|g_1(x)-g_3(x)\|_2 \leq (L_1\epsilon_2+L_1\epsilon_3+L_1\epsilon_4+L_2\epsilon_3+L_2\epsilon_4+L_3\epsilon_4+L_1\epsilon_2\epsilon_3+L_1\epsilon_2\epsilon_4+L_1\epsilon_3\epsilon_4+L_2\epsilon_3\epsilon_4+L_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are Lipschitz functions).
• Part 5. $\|g_2(x)-g_3(x)\|_2 \leq (\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are linear functions).
• Part 6. $\|g_2(x)-g_3(x)\|_2 \leq (L_1\epsilon_2+L_1\epsilon_3+L_1\epsilon_4+L_2\epsilon_4+L_3\epsilon_2+L_3\epsilon_4+L_1\epsilon_2\epsilon_3+L_1\epsilon_2\epsilon_4+L_1\epsilon_3\epsilon_4+L_3\epsilon_2\epsilon_4+L_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2$ (if $f_i$, $\forall i\in[4]$, are Lipschitz functions).

Proof. Part 1. We have
$$\|g_1(x)-g_2(x)\|_2 \leq \|g_1(x)-g_3(x)\|_2 + \|g_3(x)-g_2(x)\|_2 \leq 2(\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2,$$
where the first step follows from the triangle inequality and the last step follows from Part 3 and Part 5.

Part 2. We have
$$\|g_1(x)-g_2(x)\|_2 \leq \|g_1(x)-g_3(x)\|_2 + \|g_3(x)-g_2(x)\|_2 \leq (2L_1\epsilon_2+2L_1\epsilon_3+2L_1\epsilon_4+L_2\epsilon_3+2L_2\epsilon_4+2L_3\epsilon_4+2L_1\epsilon_2\epsilon_3+2L_1\epsilon_2\epsilon_4+2L_1\epsilon_3\epsilon_4+L_2\epsilon_3\epsilon_4+2L_1\epsilon_2\epsilon_3\epsilon_4+L_3\epsilon_2+L_3\epsilon_2\epsilon_4)\|x\|_2,$$
where the first step follows from the triangle inequality and the last step follows from Part 4 and Part 6.
Part 3. We have
$$\begin{aligned}
\|g_1(x)-g_3(x)\|_2 &= \|(I+f_1)\cdot(I+f_2)\cdot(I+f_3)\cdot(x+f_4(x)) - (I+f_1+f_2+f_3+f_4)(x)\|_2 \\
&= \big\| x + f_4(x) + f_3(x+f_4(x)) + f_2\big(x+f_4(x)+f_3(x+f_4(x))\big) \\
&\quad + f_1\big(x+f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\big) - (I+f_1+f_2+f_3+f_4)(x)\big\|_2 \\
&= \big\| f_3(f_4(x)) + f_2\big(f_4(x)+f_3(x+f_4(x))\big) + f_1\big(f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\big)\big\|_2 \\
&= \big\| f_3(f_4(x)) + f_2(f_4(x)) + f_2(f_3(x)) + f_2(f_3(f_4(x))) + f_1(f_4(x)) + f_1(f_3(x)) + f_1(f_3(f_4(x))) \\
&\quad + f_1(f_2(x)) + f_1(f_2(f_4(x))) + f_1(f_2(f_3(x))) + f_1(f_2(f_3(f_4(x))))\big\|_2 \\
&\leq \|f_3(f_4(x))\|_2 + \|f_2(f_4(x))\|_2 + \|f_2(f_3(x))\|_2 + \|f_2(f_3(f_4(x)))\|_2 + \|f_1(f_4(x))\|_2 + \|f_1(f_3(x))\|_2 \\
&\quad + \|f_1(f_3(f_4(x)))\|_2 + \|f_1(f_2(x))\|_2 + \|f_1(f_2(f_4(x)))\|_2 + \|f_1(f_2(f_3(x)))\|_2 + \|f_1(f_2(f_3(f_4(x))))\|_2 \\
&\leq (\epsilon_3\epsilon_4+\epsilon_2\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_4+\epsilon_1\epsilon_3+\epsilon_1\epsilon_3\epsilon_4+\epsilon_1\epsilon_2+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2 \\
&= (\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2,
\end{aligned}$$
where the first step follows from the definitions of $g_1$ and $g_3$, the second step follows from simple algebra, the third step follows from reorganization, the fourth step follows from the fact that all $f_i$, $i \in [4]$, are linear functions, the fifth step follows from the triangle inequality, the sixth step follows from $i$(b), and the last step follows from reorganization.
Part 4. We have
$$\begin{aligned}
\|g_1(x)-g_3(x)\|_2 &= \|(I+f_1)\cdot(I+f_2)\cdot(I+f_3)\cdot(x+f_4(x)) - (I+f_1+f_2+f_3+f_4)(x)\|_2 \\
&= \big\| x + f_4(x) + f_3(x+f_4(x)) + f_2\big(x+f_4(x)+f_3(x+f_4(x))\big) \\
&\quad + f_1\big(x+f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\big) - (I+f_1+f_2+f_3+f_4)(x)\big\|_2 \\
&= \big\| f_3(x+f_4(x)) + f_2\big(x+f_4(x)+f_3(x+f_4(x))\big) \\
&\quad + f_1\big(x+f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\big) - f_1(x) - f_2(x) - f_3(x)\big\|_2 \\
&= \big\| f_3(x+f_4(x)) - f_3(x) + f_2\big(x+f_4(x)+f_3(x+f_4(x))\big) - f_2(x) \\
&\quad + f_1\big(x+f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\big) - f_1(x)\big\|_2 \\
&\leq L_3\|x+f_4(x)-x\|_2 + L_2\|x+f_4(x)+f_3(x+f_4(x))-x\|_2 \\
&\quad + L_1\|x+f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))-x\|_2 \\
&\leq L_3\|f_4(x)\|_2 + L_2\|f_4(x)+f_3(x+f_4(x))\|_2 + L_1\|f_4(x)+f_3(x+f_4(x))+f_2(x+f_4(x)+f_3(x+f_4(x)))\|_2 \\
&\leq L_3\epsilon_4\|x\|_2 + L_2\epsilon_4\|x\|_2 + L_2\epsilon_3\|x+f_4(x)\|_2 + L_1\epsilon_4\|x\|_2 + L_1\epsilon_3\|x+f_4(x)\|_2 + L_1\epsilon_2\|x+f_4(x)+f_3(x+f_4(x))\|_2 \\
&\leq L_3\epsilon_4\|x\|_2 + L_2\epsilon_4\|x\|_2 + L_2\epsilon_3\|x\|_2 + L_2\epsilon_3\epsilon_4\|x\|_2 \\
&\quad + L_1\epsilon_4\|x\|_2 + L_1\epsilon_3\|x\|_2 + L_1\epsilon_3\epsilon_4\|x\|_2 + L_1\epsilon_2\|x\|_2 + L_1\epsilon_2\epsilon_4\|x\|_2 + L_1\epsilon_2\epsilon_3\|x+f_4(x)\|_2 \\
&\leq L_3\epsilon_4\|x\|_2 + L_2\epsilon_4\|x\|_2 + L_2\epsilon_3\|x\|_2 + L_2\epsilon_3\epsilon_4\|x\|_2 \\
&\quad + L_1\epsilon_4\|x\|_2 + L_1\epsilon_3\|x\|_2 + L_1\epsilon_3\epsilon_4\|x\|_2 + L_1\epsilon_2\|x\|_2 + L_1\epsilon_2\epsilon_4\|x\|_2 + L_1\epsilon_2\epsilon_3\|x\|_2 + L_1\epsilon_2\epsilon_3\epsilon_4\|x\|_2 \\
&= (L_3\epsilon_4+L_2\epsilon_4+L_2\epsilon_3+L_2\epsilon_3\epsilon_4+L_1\epsilon_4+L_1\epsilon_3+L_1\epsilon_3\epsilon_4+L_1\epsilon_2+L_1\epsilon_2\epsilon_4+L_1\epsilon_2\epsilon_3+L_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2 \\
&= (L_1\epsilon_2+L_1\epsilon_3+L_1\epsilon_4+L_2\epsilon_3+L_2\epsilon_4+L_3\epsilon_4+L_1\epsilon_2\epsilon_3+L_1\epsilon_2\epsilon_4+L_1\epsilon_3\epsilon_4+L_2\epsilon_3\epsilon_4+L_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2,
\end{aligned}$$
where the first step follows from the definitions of $g_1$ and $g_3$, the second step follows from simple algebra, the third step follows from simple algebra, the fourth step follows from reorganization, the fifth step follows from the fact that all $f_i$, $\forall i \in [4]$, are Lipschitz functions, the sixth step follows from simple algebra, the seventh step follows from $i$(b), the eighth step follows from the triangle inequality, the ninth step follows from $i$(b), the tenth step follows from $i$(b), and the last step follows from reorganization.

Part 5. We have
$$\begin{aligned}
\|g_2(x)-g_3(x)\|_2 &= \|(I+f_1)\cdot(I+f_3)\cdot(I+f_2)\cdot(x+f_4(x)) - (I+f_1+f_2+f_3+f_4)(x)\|_2 \\
&= \big\| x + f_4(x) + f_2(x+f_4(x)) + f_3\big(x+f_4(x)+f_2(x+f_4(x))\big) \\
&\quad + f_1\big(x+f_4(x)+f_2(x+f_4(x))+f_3(x+f_4(x)+f_2(x+f_4(x)))\big) - (I+f_1+f_2+f_3+f_4)(x)\big\|_2 \\
&= \big\| f_2(f_4(x)) + f_3(f_4(x)) + f_3\big(f_2(x+f_4(x))\big) + f_1(f_4(x)) + f_1\big(f_2(x+f_4(x))\big) + f_1\big(f_3(x+f_4(x)+f_2(x+f_4(x)))\big)\big\|_2 \\
&\leq (\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_3\epsilon_2+\epsilon_3\epsilon_2\epsilon_4+\epsilon_1\epsilon_4+\epsilon_1\epsilon_2+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3+\epsilon_1\epsilon_3\epsilon_4+\epsilon_1\epsilon_3\epsilon_2+\epsilon_1\epsilon_3\epsilon_2\epsilon_4)\|x\|_2 \\
&= (\epsilon_1\epsilon_2+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4+\epsilon_2\epsilon_3+\epsilon_2\epsilon_4+\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3+\epsilon_1\epsilon_2\epsilon_4+\epsilon_1\epsilon_3\epsilon_4+\epsilon_2\epsilon_3\epsilon_4+\epsilon_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2,
\end{aligned}$$
where the first step follows from the definitions of $g_2$ and $g_3$, the second step follows from simple algebra, the third step follows from the fact that all $f_i$, $\forall i \in [4]$, are linear functions, the fourth step follows from the triangle inequality and $i$(b), and the last
step follows from reorganization.
Part 6. We have
$$\begin{aligned}
\|g_2(x)-g_3(x)\|_2 &= \|(I+f_1)\cdot(I+f_3)\cdot(I+f_2)\cdot(x+f_4(x)) - (I+f_1+f_2+f_3+f_4)(x)\|_2 \\
&= \big\| x + f_4(x) + f_2(x+f_4(x)) + f_3\big(x+f_4(x)+f_2(x+f_4(x))\big) \\
&\quad + f_1\big(x+f_4(x)+f_2(x+f_4(x))+f_3(x+f_4(x)+f_2(x+f_4(x)))\big) - (I+f_1+f_2+f_3+f_4)(x)\big\|_2 \\
&= \big\| f_2(x+f_4(x)) - f_2(x) + f_3\big(x+f_4(x)+f_2(x+f_4(x))\big) - f_3(x) \\
&\quad + f_1\big(x+f_4(x)+f_2(x+f_4(x))+f_3(x+f_4(x)+f_2(x+f_4(x)))\big) - f_1(x)\big\|_2 \\
&\leq \|f_2(x+f_4(x)) - f_2(x)\|_2 + \|f_3\big(x+f_4(x)+f_2(x+f_4(x))\big) - f_3(x)\|_2 \\
&\quad + \|f_1\big(x+f_4(x)+f_2(x+f_4(x))+f_3(x+f_4(x)+f_2(x+f_4(x)))\big) - f_1(x)\|_2 \\
&\leq L_2\epsilon_4\|x\|_2 + L_3\epsilon_4\|x\|_2 + L_3\epsilon_2\|x+f_4(x)\|_2 + L_1\epsilon_4\|x\|_2 + L_1\epsilon_2\|x+f_4(x)\|_2 + L_1\epsilon_3\|x+f_4(x)+f_2(x+f_4(x))\|_2 \\
&\leq L_2\epsilon_4\|x\|_2 + L_3\epsilon_4\|x\|_2 + L_3\epsilon_2\|x\|_2 + L_3\epsilon_2\epsilon_4\|x\|_2 \\
&\quad + L_1\epsilon_4\|x\|_2 + L_1\epsilon_2\|x\|_2 + L_1\epsilon_2\epsilon_4\|x\|_2 + L_1\epsilon_3\|x\|_2 + L_1\epsilon_3\epsilon_4\|x\|_2 + L_1\epsilon_3\epsilon_2\|x\|_2 + L_1\epsilon_3\epsilon_2\epsilon_4\|x\|_2 \\
&= (L_1\epsilon_2+L_1\epsilon_3+L_1\epsilon_4+L_2\epsilon_4+L_3\epsilon_2+L_3\epsilon_4+L_1\epsilon_2\epsilon_3+L_1\epsilon_2\epsilon_4+L_1\epsilon_3\epsilon_4+L_3\epsilon_2\epsilon_4+L_1\epsilon_2\epsilon_3\epsilon_4)\|x\|_2,
\end{aligned}$$
where the first step follows from the definitions of $g_2$ and $g_3$, the second step follows from simple algebra, the third step follows from reorganization, the fourth step follows from the triangle inequality, the fifth step follows from the fact that all $f_i$, $\forall i \in [4]$, are Lipschitz functions and $i$(b), the sixth step follows from the triangle inequality and $i$(b), and the last step follows from reorganization.

J Nearest Neighbor Search Data Structure

We use the reduction-based approximate MaxIP method with an LSH data structure to achieve sublinear iteration cost.
Note that we choose this method due to its clear theoretical guarantees on the retrieval results. It is well known that LSH data structures are used for the approximate nearest neighbor problem. The following definition of approximate nearest neighbor search is standard in the literature (Arya & Mount, 1993; Indyk & Motwani, 1998a; Datar et al., 2004; Andoni et al., 2014; 2015; Andoni & Razenshteyn, 2015; Indyk & Wagner, 2018; Andoni et al., 2017; 2018; Dong et al., 2019; Chen et al., 2020b; Li & Li, 2022; Li et al., 2019).

J.1 LSH and MaxIP

We start by defining the Approximate Nearest Neighbor (ANN) problem (Arya & Mount, 1993; Indyk & Motwani, 1998a; Datar et al., 2004; Andoni et al., 2014; 2015; Andoni & Razenshteyn, 2015; Indyk & Wagner, 2018; Andoni et al., 2017; 2018; Dong et al., 2019; Chen et al., 2020b):

Definition J.1 (Approximate Nearest Neighbor (ANN)). Let $c > 1$ and $r \in (0, 2)$ denote two parameters. Given an $n$-vector set $Y \subset \mathbb{S}^{d-1}$ on the unit sphere, the objective of the $(c, r)$-Approximate Nearest Neighbor (ANN) problem is to construct a data structure that, for any query $x \in \mathbb{S}^{d-1}$ such that $\min_{y\in Y}\|y - x\|_2 \leq r$, returns a vector $z$ from $Y$ satisfying $\|z - x\|_2 \leq c\cdot r$.

The ANN problem can be solved via locality sensitive hashing (LSH) (Indyk & Motwani, 1998a; Datar et al., 2004; Indyk & Wagner, 2018). In this paper, we use the standard definition of LSH (see Indyk and Motwani (Indyk & Motwani, 1998a)).

Definition J.2 (Locality Sensitive Hashing). Let $c > 1$ denote a parameter. Let $p_1, p_2 \in (0, 1)$ denote two parameters with $p_1 > p_2$. We say a function family $\mathcal{H}$ is $(r, c\cdot r, p_1, p_2)$-sensitive if and only if, for any vectors $x, y \in \mathbb{R}^d$ and any $h$ chosen uniformly at random from $\mathcal{H}$, we have:
• if $\|x - y\|_2 \leq r$, then $\Pr_{h\sim\mathcal{H}}[h(x) = h(y)] \geq p_1$;
• if $\|x - y\|_2 \geq c\cdot r$, then $\Pr_{h\sim\mathcal{H}}[h(x) = h(y)] \leq p_2$.

Next, we show that LSH solves the ANN problem with sublinear query time complexity.

Theorem J.3 (Andoni, Laarhoven, Razenshteyn and Waingarten (Andoni et al., 2017)). Let $c > 1$ and $r \in (0, 2)$ denote two parameters. One can solve $(c, r)$-ANN on the unit sphere in query time $O(d\cdot n^\rho)$ using preprocessing time $O(dn^{1+o(1)})$ and space $O(n^{1+o(1)} + dn)$, where $\rho = \frac{2}{c^2} - \frac{1}{c^4} + o(1)$.
Here, $o(1)$ is equivalent to $O(1/\sqrt{\log n})$. Note that we could reduce $d$ to $n^{o(1)}$ with the Johnson-Lindenstrauss Lemma (Johnson & Lindenstrauss, 1984). Moreover, we could achieve a better $\rho$ using the LSH of (Andoni & Razenshteyn, 2015) if we were allowed more preprocessing time.

In this work, we focus on a well-known problem in computational complexity: approximate MaxIP. We follow the standard notation of (Chen, 2018) and define the approximate MaxIP problem as follows:

Definition J.4 (Approximate MaxIP). Let $c \in (0,1)$ and $\tau \in (0,1)$ denote two parameters. Given an $n$-vector dataset $Y \subset \mathbb{S}^{d-1}$ on the unit sphere, the objective of the $(c, \tau)$-MaxIP problem is to construct a data structure that, given a query $x \in \mathbb{S}^{d-1}$ such that $\max_{y\in Y}\langle x, y\rangle \geq \tau$, retrieves a vector $z$ from $Y$ satisfying $\langle x, z\rangle \geq c\cdot\max_{y\in Y}\langle x, y\rangle$.

In many applications, it is more convenient to perform inner product search in a transformed/projected space rather than in the original space. Thus, we propose the following definitions (Definition J.5 and Definition J.6).

Definition J.5 (Projected MaxIP). Let $\phi, \psi : \mathbb{R}^d \to \mathbb{R}^k$ denote two transforms. Given a data set $Y \subseteq \mathbb{R}^d$ and a point $x \in \mathbb{R}^d$, we define $(\phi, \psi)$-MaxIP as
$$(\phi, \psi)\text{-MaxIP}(x, Y) := \max_{y\in Y}\langle\phi(x), \psi(y)\rangle.$$
Definition J.6 (Projected approximate MaxIP). Let $\phi, \psi : \mathbb{R}^d \to \mathbb{R}^k$ denote two transforms. Given an $n$-vector dataset $Y \subset \mathbb{R}^d$ such that $\psi(Y) \subset \mathbb{S}^{k-1}$, the goal of the $(c, \phi, \psi, \tau)$-MaxIP problem is to construct a data structure that, given a query $x \in \mathbb{R}^d$ with $\phi(x) \in \mathbb{S}^{k-1}$ such that $\max_{y\in Y}\langle\phi(x), \psi(y)\rangle \geq \tau$, retrieves a vector $z \in Y$ satisfying $\langle\phi(x), \psi(z)\rangle \geq c\cdot(\phi, \psi)\text{-MaxIP}(x, Y)$.

J.2 Connections

Fact J.7. Let $\widetilde{x}$ denote a vector such that $\langle\widetilde{x}, x\rangle \geq 1 - \frac{1}{2}\epsilon^2$, where both $\widetilde{x}$ and $x$ are unit vectors. Then $\|\widetilde{x} - x\|_2 \leq \epsilon$.

Proof.
$$\|\widetilde{x} - x\|_2 = (\|\widetilde{x}\|_2^2 + \|x\|_2^2 - 2\langle x, \widetilde{x}\rangle)^{1/2} = (2 - 2\langle x, \widetilde{x}\rangle)^{1/2} \leq \big(2 - 2(1 - \tfrac{1}{2}\epsilon^2)\big)^{1/2} = \epsilon.$$
This completes the proof.

Lemma J.8. Let $\widetilde{x}$ denote a vector such that $\langle\widetilde{x}, x\rangle \geq 1 - \frac{1}{2}\epsilon^2$, where both $\widetilde{x}$ and $x$ are unit vectors. Let $0.01\,c\cdot\tau > \epsilon$. Suppose there is a $z \in Y$ with $\|z\|_2 = 1$ such that
$$\langle x, z\rangle \geq c\cdot\max_{y\in Y}\langle x, y\rangle.$$
Note that $\max_{y\in Y}\langle x, y\rangle \geq \tau$. Then we can find a $z \in Y$ such that
$$\langle\widetilde{x}, z\rangle \geq \frac{1}{2}c\cdot\max_{y\in Y}\langle x, y\rangle.$$

Proof. We have
$$\langle\widetilde{x}, z\rangle = \langle x, z\rangle + \langle\widetilde{x} - x, z\rangle
\geq \langle x, z\rangle - |\langle\widetilde{x} - x, z\rangle|
\geq \langle x, z\rangle - \|\widetilde{x} - x\|_2\cdot\|z\|_2
\geq \langle x, z\rangle - \epsilon
\geq c\cdot\max_{y\in Y}\langle x, y\rangle - \epsilon
\geq 0.99\,c\cdot\max_{y\in Y}\langle x, y\rangle,$$
where the first step follows from simple algebra, the second step follows from $\langle\widetilde{x} - x, z\rangle \geq -|\langle\widetilde{x} - x, z\rangle|$, the third step follows from the property of the inner product, the fourth step follows from Fact J.7 and $\|z\|_2 = 1$, the fifth step follows from $\langle x, z\rangle \geq c\cdot\max_{y\in Y}\langle x, y\rangle$, and the final step follows from $0.01\,c\cdot\tau > \epsilon$ and $\max_{y\in Y}\langle x, y\rangle \geq \tau$.
J.3 Efficient Transformations

We have seen that $(c, \tau)$-MaxIP on the unit sphere $\mathbb{S}^{d-1}$ can be solved using LSH for ANN. Therefore, the next step is to transform the direction search procedure in an iterative optimization algorithm into a MaxIP problem on the unit sphere. To achieve this, we formulate the direction search as a projected approximate MaxIP (see Definition J.5). We start by presenting a pair of transformations $\phi_0, \psi_0 : \mathbb{R}^d \to \mathbb{R}^{d+1}$ such that, given a function $g : \mathbb{R}^d \to \mathbb{R}$, for any $x, y$ in a convex set $\mathcal{K}$, we have
$$\phi_0(x) := [\nabla g(x)^\top,\ x^\top\nabla g(x)]^\top, \qquad \psi_0(y) := [-y^\top,\ 1]^\top. \qquad (11)$$
In this way, we have
$$\langle y - x, \nabla g(x)\rangle = -\langle\phi_0(x), \psi_0(y)\rangle, \qquad
\operatorname*{arg\,min}_{y\in Y}\langle y - x, \nabla g(x)\rangle = \operatorname*{arg\,max}_{y\in Y}\langle\phi_0(x), \psi_0(y)\rangle. \qquad (12)$$
Therefore, we can transform the direction search problem into a MaxIP problem.

Next, we present a standard transformation (Neyshabur & Srebro, 2015) that connects MaxIP to ANN on the unit sphere.
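To illustrate Eq. (11)-(12) (a toy verification, not the paper's implementation), the sketch below applies $\phi_0, \psi_0$ to a simple quadratic $g$ and checks that maximizing $\langle\phi_0(x), \psi_0(y)\rangle$ over a candidate set selects the same $y$ as minimizing $\langle y - x, \nabla g(x)\rangle$.

```python
# Toy check of the direction-search-to-MaxIP transform in Eq. (11)-(12).
import numpy as np

rng = np.random.default_rng(5)
d, n = 8, 100
Y = rng.standard_normal((n, d))              # candidate directions (rows)
x = rng.standard_normal(d)

A = rng.standard_normal((d, d))
A = A @ A.T + np.eye(d)                      # a simple convex quadratic g(x) = 0.5 x^T A x
grad = A @ x                                 # nabla g(x)

phi0 = np.concatenate([grad, [x @ grad]])    # phi_0(x) = [grad; x^T grad]
psi0 = np.hstack([-Y, np.ones((n, 1))])      # psi_0(y) = [-y; 1] for each row y

direction_obj = (Y - x) @ grad               # <y - x, grad g(x)> for each y
maxip_obj = psi0 @ phi0                      # <phi_0(x), psi_0(y)> for each y

assert np.allclose(direction_obj, -maxip_obj)        # first identity in Eq. (12)
assert direction_obj.argmin() == maxip_obj.argmax()  # same y is selected
print("argmin of direction search == argmax of MaxIP:", maxip_obj.argmax())
```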