arxiv:2503.02130

Forgetting Transformer: Softmax Attention with a Forget Gate

Published on Mar 3 · Submitted by zhixuan-lin on Mar 10

Abstract

An essential component of modern recurrent sequence models is the forget gate. While Transformers do not have an explicit recurrent form, we show that a forget gate can be naturally incorporated into Transformers by down-weighting the unnormalized attention scores in a data-dependent way. We name this attention mechanism the Forgetting Attention and the resulting model the Forgetting Transformer (FoX). We show that FoX outperforms the Transformer on long-context language modeling, length extrapolation, and short-context downstream tasks, while performing on par with the Transformer on long-context downstream tasks. Moreover, it is compatible with the FlashAttention algorithm and does not require any positional embeddings. Several analyses, including the needle-in-the-haystack test, show that FoX also retains the Transformer's superior long-context capabilities over recurrent sequence models such as Mamba-2, HGRN2, and DeltaNet. We also introduce a "Pro" block design that incorporates some common architectural components in recurrent sequence models and find it significantly improves the performance of both FoX and the Transformer. Our code is available at https://github.com/zhixuan-lin/forgetting-transformer.
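
For intuition, the data-dependent down-weighting described above can be written as an additive bias on the causal attention logits: the score between query position i and key position j is shifted by the sum of log forget-gate values between j and i. Below is a minimal, unoptimized single-head PyTorch sketch of that idea; the function name, argument layout, and sqrt(d) scaling are illustrative assumptions rather than the authors' API, and the actual implementation in the repository is a FlashAttention-compatible Triton kernel.

import torch
import torch.nn.functional as F

def forgetting_attention(q, k, v, forget_logits):
    # q, k, v: (batch, seq_len, head_dim) for a single head.
    # forget_logits: (batch, seq_len) pre-sigmoid forget-gate logits,
    # e.g. a learned linear projection of the layer input (illustrative).
    _, T, d = q.shape
    log_f = F.logsigmoid(forget_logits)       # log of a forget gate in (0, 1)
    c = torch.cumsum(log_f, dim=-1)           # c[t] = sum of log f up to position t
    # bias[i, j] = c[i] - c[j] = sum of log f over positions j+1 .. i,
    # i.e. the log of the product of forget gates between key j and query i.
    bias = c.unsqueeze(-1) - c.unsqueeze(-2)  # (batch, seq_len, seq_len)
    scores = q @ k.transpose(-1, -2) / d ** 0.5 + bias
    causal = torch.ones(T, T, dtype=torch.bool, device=q.device).tril()
    scores = scores.masked_fill(~causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

If every forget gate saturates at 1, the bias is zero and this reduces to standard softmax attention, which is how the model can learn not to forget when the context requires it. Because the bias decomposes as a difference of per-position cumulative sums, it can be added inside blockwise attention kernels, consistent with the FlashAttention compatibility noted above.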

Community

Paper author and submitter:

[Figure: attention.png — summary of the core Forgetting Attention method]

The core method is summarized above. Highlights:
• No need for RoPE
• Hyperparameter-free
• FlashAttention-compatible
• Consistently better than or on par with the (RoPE-based) Transformer
• Great long-context capabilities, similar to the standard Transformer (yes, it learns not to forget when necessary!)

You can also see our post on X for an extended summary of our work. The code is available at https://github.com/zhixuan-lin/forgetting-transformer. We provide a plug-and-play Triton kernel with minimal dependencies. Try it today!


Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0

Collections including this paper: 5