arxiv:2411.03884

Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models

Published on Nov 6 · Submitted by Akeeper on Nov 7 · #3 Paper of the day

Abstract

Transformers have found extensive applications across various domains due to their powerful fitting capabilities. This success can be partially attributed to their inherent nonlinearity. Thus, in addition to the ReLU function employed in the original transformer architecture, researchers have explored alternative modules such as GeLU and SwishGLU to enhance nonlinearity and thereby augment representational capacity. In this paper, we propose a novel category of polynomial composition activations (PolyCom), designed to optimize the dynamics of transformers. Theoretically, we provide a comprehensive mathematical analysis of PolyCom, highlighting its enhanced expressivity and efficacy relative to other activation functions. Notably, we demonstrate that networks incorporating PolyCom achieve the optimal approximation rate, indicating that PolyCom networks require minimal parameters to approximate general smooth functions in Sobolev spaces. We conduct empirical experiments on the pre-training configurations of large language models (LLMs), including both dense and sparse architectures. By substituting conventional activation functions with PolyCom, we enable LLMs to capture higher-order interactions within the data, thus improving performance in terms of accuracy and convergence rates. Extensive experimental results demonstrate the effectiveness of our method, showing substantial improvements over other activation functions. Code is available at https://github.com/BryceZhuo/PolyCom.
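
To make the idea of a "polynomial composition activation" concrete, a minimal sketch is to apply a learnable polynomial to an inner activation, e.g. PolyReLU(x) = Σᵢ aᵢ ReLU(x)ⁱ. The PyTorch code below assumes this form with one trainable coefficient per power; it is an illustration only, and the official implementation at https://github.com/BryceZhuo/PolyCom may differ in composition order, normalization, and initialization.

```python
import torch
import torch.nn as nn


class PolyReLU(nn.Module):
    """Hypothetical PolyCom-style activation: sum_{i=0}^{r} a_i * ReLU(x)**i.

    Sketch of the general idea with learnable coefficients a_i; the official
    implementation (github.com/BryceZhuo/PolyCom) may use a different
    composition, normalization, or initialization.
    """

    def __init__(self, order: int = 3):
        super().__init__()
        self.order = order
        # One learnable coefficient per power, including the constant term
        # (uniform initialization is an assumption, not taken from the paper).
        self.coeffs = nn.Parameter(torch.full((order + 1,), 1.0 / (order + 1)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = torch.relu(x)
        out = torch.zeros_like(x)
        for i in range(self.order + 1):
            # Compose the inner activation with a polynomial of degree `order`.
            out = out + self.coeffs[i] * r.pow(i)
        return out
```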

Community

Paper submitter

TL;DR: The paper introduces PolyCom, a new category of polynomial composition activations that surpasses GELU and SwishGLU by improving transformer dynamics. Through rigorous mathematical analysis, the authors show PolyCom's superior expressivity and efficiency, achieving the optimal approximation rate with minimal parameters. Pre-training experiments on large language models (LLMs) demonstrate that replacing traditional activation functions with PolyCom substantially improves accuracy and convergence speed.
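
To show what "replacing traditional activation functions" might look like in practice, here is an illustrative transformer feed-forward block that reuses the hypothetical PolyReLU module sketched after the abstract. The layer sizes and the plain up/down projection are assumptions for the example, not the paper's training configuration.

```python
import torch
import torch.nn as nn


class PolyComMLP(nn.Module):
    """Illustrative transformer feed-forward block using PolyReLU as the activation.

    Dimensions and the simple up/down projection are assumptions for this
    sketch, not the configuration used in the paper.
    """

    def __init__(self, d_model: int = 1024, d_ff: int = 4096):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.act = PolyReLU(order=3)  # hypothetical module from the earlier sketch
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))


# Usage: a (batch, sequence, d_model) tensor passes through with unchanged shape.
mlp = PolyComMLP()
y = mlp(torch.randn(2, 16, 1024))
```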


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2411.03884 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2411.03884 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2411.03884 in a Space README.md to link it from this page.

Collections including this paper 2