arxiv:2101.06983

Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup

Published on Jan 18, 2021
Abstract

Contrastive learning has been applied successfully to learn vector representations of text. Previous research demonstrated that learning high-quality representations benefits from a batch-wise contrastive loss with a large number of negatives. In practice, the technique of in-batch negatives is used: for each example in a batch, the positives of the other batch examples are taken as its negatives, which avoids encoding extra negatives. This, however, still conditions each example's loss on all batch examples and requires fitting the entire large batch into GPU memory. This paper introduces a gradient caching technique that decouples backpropagation between the contrastive loss and the encoder, removing the encoder's backward-pass data dependency along the batch dimension. As a result, gradients can be computed for one subset of the batch at a time, leading to almost constant memory usage.
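The gradient caching idea described in the abstract lends itself to a short sketch. The following PyTorch-style example is an illustration, not the authors' reference implementation; the names `encoder`, `chunk_size`, and the use of a single shared encoder for queries and passages are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def grad_cache_step(encoder, optimizer, queries, passages, chunk_size=8):
    """One training step with gradient caching (illustrative sketch).

    `queries` and `passages` are batched, pre-tokenized inputs aligned so
    that passages[i] is the positive for queries[i]; every other in-batch
    passage serves as a negative.
    """
    # 1) Representation pass: encode all sub-batches WITHOUT building
    #    computation graphs, so memory stays bounded by chunk_size.
    with torch.no_grad():
        q_reps = torch.cat([encoder(c) for c in queries.split(chunk_size)])
        p_reps = torch.cat([encoder(c) for c in passages.split(chunk_size)])

    # 2) Contrastive loss over the FULL batch of detached representations;
    #    backprop only to the representations to cache their gradients.
    q_reps = q_reps.detach().requires_grad_()
    p_reps = p_reps.detach().requires_grad_()
    scores = q_reps @ p_reps.T                       # in-batch similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    loss = F.cross_entropy(scores, labels)
    loss.backward()                                  # fills q_reps.grad / p_reps.grad

    # 3) Sub-batch gradient pass: re-encode one chunk at a time with a
    #    graph, and push the cached representation gradients into the
    #    encoder via a surrogate objective (reps * cached_grad).sum().
    for inputs, cached in ((queries, q_reps.grad), (passages, p_reps.grad)):
        for chunk, g in zip(inputs.split(chunk_size), cached.split(chunk_size)):
            reps = encoder(chunk)                    # graph exists for this chunk only
            (reps * g).sum().backward()              # chain rule: accumulates encoder grads

    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because only one chunk's computation graph is alive at any time, peak activation memory is governed by `chunk_size` rather than the full batch, which is what allows the effective contrastive batch to grow under a fixed memory budget.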
