Abstract
The ColBERT model has recently been proposed as an effective BERT-based ranker. By adopting a late interaction mechanism, a major advantage of ColBERT is that document representations can be precomputed in advance. However, a major drawback of the model is its index size, which scales linearly with the number of tokens in the collection. In this paper, we study various designs for ColBERT models in order to address this problem. While compression techniques have been explored to reduce the index size, we instead study token pruning techniques for ColBERT. We compare simple heuristics, as well as a single attention layer, for selecting the tokens to keep at indexing time. Our experiments show that ColBERT indexes can be pruned by up to 30% on the MS MARCO passage collection without a significant drop in performance. Finally, we experiment on MS MARCO documents, which reveals several challenges for such mechanisms.
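To make the token pruning idea concrete, here is a minimal PyTorch sketch, not the paper's exact implementation: it scores each document token with a single self-attention layer and keeps the top-scoring fraction of per-token embeddings before they are written to the index. The function name `prune_token_embeddings`, the scoring scheme (mean attention received), and the 0.7 keep ratio are illustrative assumptions.

```python
import torch

def prune_token_embeddings(doc_embeddings: torch.Tensor,
                           token_scores: torch.Tensor,
                           keep_ratio: float = 0.7) -> torch.Tensor:
    """Keep the top-scoring fraction of a document's token embeddings.

    doc_embeddings: (num_tokens, dim) ColBERT-style per-token vectors.
    token_scores:   (num_tokens,) importance scores, e.g. from a simple
                    heuristic or a single attention layer.
    keep_ratio:     fraction of tokens retained in the index (0.7 here,
                    mirroring the ~30% pruning reported on MS MARCO).
    """
    num_tokens = doc_embeddings.size(0)
    k = max(1, int(num_tokens * keep_ratio))
    top_idx = token_scores.topk(k).indices.sort().values  # preserve token order
    return doc_embeddings[top_idx]

# Illustrative usage: score tokens with one self-attention layer, then prune.
num_tokens, dim = 128, 128
embs = torch.randn(num_tokens, dim)  # stand-in for ColBERT token embeddings
attn = torch.nn.MultiheadAttention(embed_dim=dim, num_heads=1, batch_first=True)
_, weights = attn(embs[None], embs[None], embs[None])  # (1, T, T) attention map
scores = weights[0].mean(dim=0)  # average attention each token receives
pruned = prune_token_embeddings(embs, scores, keep_ratio=0.7)
print(pruned.shape)  # roughly 70% of the tokens are kept
```

Because pruning happens at indexing time, query-time late interaction is unchanged; only the set of document vectors it matches against shrinks.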