arxiv:2404.16710

LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding

Published on Apr 25
· Submitted by akhaliq on Apr 26
#1 Paper of the day
Abstract

We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, together with an early exit loss in which all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution in which we exit at early layers and verify and correct with the remaining layers of the model. Our proposed self-speculative decoding approach has a smaller memory footprint than other speculative decoding approaches and benefits from shared compute and activations between the draft and verification stages. We run experiments on different Llama model sizes and different types of training: pretraining from scratch, continual pretraining, finetuning on a specific data domain, and finetuning on a specific task. We implement our inference solution and show speedups of up to 2.16x on summarization of CNN/DM documents, 1.82x on coding, and 2.0x on the TOPv2 semantic parsing task. We open source our code and checkpoints at https://github.com/facebookresearch/LayerSkip.
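
To make the decoding procedure described above more concrete, here is a minimal, self-contained sketch of greedy self-speculative decoding with early exit. This is not the authors' implementation (their code is in the linked repository and, among other things, reuses the draft stage's KV cache and activations during verification); the toy TinyDecoder model, the exit_layer and draft_len parameters, and the longest-matching-prefix acceptance rule are illustrative assumptions.

```python
# Minimal sketch of greedy self-speculative decoding with early exit.
# NOT the authors' implementation: TinyDecoder, exit_layer, draft_len and the
# acceptance rule below are illustrative assumptions; the real code also reuses
# the draft stage's KV cache instead of recomputing hidden states.
import torch
import torch.nn as nn


class TinyDecoder(nn.Module):
    """Toy stand-in for a decoder-only LLM: a stack of layers plus a single
    LM head shared by every exit point (as in LayerSkip's shared early exit)."""

    def __init__(self, vocab_size=100, d_model=32, n_layers=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_layers)])
        self.lm_head = nn.Linear(d_model, vocab_size)

    def hidden(self, tokens, n_layers=None):
        """Run the first n_layers layers (all of them if None)."""
        h = self.embed(tokens)
        limit = len(self.layers) if n_layers is None else n_layers
        for layer in self.layers[:limit]:
            h = torch.tanh(layer(h))
        return h

    def logits(self, h):
        return self.lm_head(h)


@torch.no_grad()
def self_speculative_decode(model, prompt, max_new_tokens=20, exit_layer=4, draft_len=4):
    """1. Draft draft_len tokens greedily using only the first exit_layer layers.
    2. Verify all drafted positions with the full model in a single pass.
    3. Accept the longest agreeing prefix, plus one corrected token."""
    tokens = prompt.clone()
    target_len = prompt.shape[-1] + max_new_tokens
    while tokens.shape[-1] < target_len:
        # --- Draft stage: early exit after exit_layer layers ---
        draft = tokens.clone()
        for _ in range(draft_len):
            h = model.hidden(draft, n_layers=exit_layer)
            next_tok = model.logits(h[-1:]).argmax(-1)       # greedy draft token
            draft = torch.cat([draft, next_tok])

        # --- Verify stage: full model scores every position at once ---
        full_next = model.logits(model.hidden(draft[:-1])).argmax(-1)

        # Accept drafted tokens while they match the full model; on the first
        # mismatch, take the full model's token instead and stop.
        n_ctx = tokens.shape[-1]
        accepted = []
        for i in range(draft_len):
            accepted.append(full_next[n_ctx - 1 + i])
            if full_next[n_ctx - 1 + i] != draft[n_ctx + i]:
                break
        tokens = torch.cat([tokens, torch.stack(accepted)])
    return tokens[:target_len]


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyDecoder()
    print(self_speculative_decode(model, torch.tensor([1, 2, 3])))
```

In this sketch the draft stage runs only the first exit_layer layers to propose draft_len tokens, the full model then scores all drafted positions in one forward pass, and every accepted token comes from the full model's greedy prediction, so the output matches what full-model greedy decoding would produce; the speedup comes from cheap drafts that are usually accepted.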

Community

Wow, this is very good.

Paper author
•
edited Apr 26

Author here. Thanks for posting. I have created a thread on X to explain the paper: https://twitter.com/m_elhoushi/status/1783800052986655203

Happy to answer any questions


Plain English rewrite of the paper here; would love your feedback as an author! https://www.aimodels.fyi/papers/arxiv/layer-skip-enabling-early-exit-inference-self

This is a lot like Mixture-of-Depths.


How so? Because of its adaptive-computation nature?


Supercharging AI: How LayerSkip Enhances Language Model Speed and Efficiency


By Arxflix


Models citing this paper 7


Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 21