arxiv:2210.05557

OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions

Published on Oct 11, 2022
Abstract

The pretrain-finetune paradigm in modern computer vision facilitates the success of self-supervised learning, which tends to achieve better transferability than supervised learning. However, with the availability of massive labeled data, a natural question emerges: how to train a better model with both self and full supervision signals? In this paper, we propose Omni-suPErvised Representation leArning with hierarchical supervisions (OPERA) as a solution. We provide a unified perspective of supervisions from labeled and unlabeled data and propose a unified framework of fully supervised and self-supervised learning. We extract a set of hierarchical proxy representations for each image and impose self and full supervisions on the corresponding proxy representations. Extensive experiments on both convolutional neural networks and vision transformers demonstrate the superiority of OPERA in image classification, segmentation, and object detection. Code is available at: https://github.com/wangck20/OPERA.
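The abstract's core idea, imposing self and full supervisions on separate proxy representations of the same backbone feature, can be sketched roughly as below. This is a hypothetical minimal illustration, not the authors' implementation (see their repository for the real code): the model names, head structure, and the choice of InfoNCE for the self-supervised term are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of omni-supervised training: a shared backbone feeds
# two "proxy" heads, one per supervision signal. Names and dimensions are
# illustrative assumptions, not the paper's actual architecture.
class OmniSupervisedModel(nn.Module):
    def __init__(self, in_dim=32, feat_dim=128, proxy_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Hierarchical proxy representations: one head per supervision level.
        self.self_head = nn.Linear(feat_dim, proxy_dim)    # self-supervised space
        self.full_head = nn.Linear(feat_dim, num_classes)  # fully supervised space

    def forward(self, x):
        h = self.backbone(x)
        return self.self_head(h), self.full_head(h)

def omni_loss(model, view1, view2, labels, temperature=0.1):
    z1, logits1 = model(view1)
    z2, _ = model(view2)
    # Self-supervision: InfoNCE between two augmented views of the same
    # images; matching pairs sit on the diagonal of the similarity matrix.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature
    targets = torch.arange(sim.size(0))
    loss_self = F.cross_entropy(sim, targets)
    # Full supervision: standard classification loss on the labeled head.
    loss_full = F.cross_entropy(logits1, labels)
    return loss_self + loss_full
```

The key design point the abstract describes is that the two losses act on *different* proxy representations of the same feature rather than competing on one embedding, which is what lets the label-driven and augmentation-driven signals coexist.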
