arXiv:2112.06540

A Study on Token Pruning for ColBERT

Published on Dec 13, 2021
Authors: Carlos Lassance, Maroua Maachou, Joohee Park, Stéphane Clinchant

Abstract

The ColBERT model has recently been proposed as an effective BERT-based ranker. By adopting a late interaction mechanism, a major advantage of ColBERT is that document representations can be precomputed in advance. However, a major downside of the model is its index size, which scales linearly with the number of tokens in the collection. In this paper, we study various designs for ColBERT models to address this problem. While compression techniques have been explored to reduce the index size, here we study token pruning techniques for ColBERT. We compare simple heuristics, as well as a single attention layer, to select the tokens to keep at indexing time. Our experiments show that ColBERT indexes can be pruned by up to 30% on the MS MARCO passage collection without a significant drop in performance. Finally, we experiment on MS MARCO documents, which reveals several challenges for such mechanisms.
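
As a rough illustration of the two ideas the abstract combines, the sketch below implements late-interaction (MaxSim) scoring and top-k token pruning at indexing time in PyTorch. It is a minimal sketch, not the authors' code: the embedding dimension, the keep_ratio parameter, and the attention-style importance scores (a random projection standing in for the paper's learned single attention layer) are all illustrative assumptions.

import torch

def maxsim_score(query_emb, doc_emb):
    # ColBERT late interaction: for each query token, take the maximum
    # similarity over all document tokens, then sum over query tokens.
    sim = query_emb @ doc_emb.T          # (n_query_tokens, n_doc_tokens)
    return sim.max(dim=1).values.sum()

def prune_doc_tokens(doc_emb, importance, keep_ratio=0.7):
    # Keep only the top-scoring document tokens at indexing time;
    # keep_ratio=0.7 mirrors the "pruned by up to 30%" setting above.
    k = max(1, int(keep_ratio * doc_emb.size(0)))
    kept = importance.topk(k).indices
    return doc_emb[kept]

torch.manual_seed(0)
dim = 128                                # assumed, as in the original ColBERT
q = torch.nn.functional.normalize(torch.randn(32, dim), dim=-1)
d = torch.nn.functional.normalize(torch.randn(180, dim), dim=-1)

# Stand-in per-token importance: a random projection plus softmax, loosely
# mimicking a learned single attention layer; the simple heuristics the
# paper compares (e.g. token position) would slot in the same way.
w = torch.randn(dim)
importance = torch.softmax(d @ w, dim=0)

full_score = maxsim_score(q, d)
pruned_score = maxsim_score(q, prune_doc_tokens(d, importance))
print(f"full index: {full_score:.3f}   70% of tokens kept: {pruned_score:.3f}")

In a real system the importance scores would be computed once per document and only the kept token embeddings stored, which is where the index-size saving comes from.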
