arXiv:2111.10601

Deep Safe Multi-Task Learning

Published on Nov 20, 2021
Abstract

In recent years, Multi-Task Learning (MTL) has attracted much attention due to its good performance in many applications. However, many existing MTL models cannot guarantee that their performance is no worse than their single-task counterparts on each task. Though some works have empirically observed this phenomenon, little work aims to handle the resulting problem. In this paper, we formally define this phenomenon as negative sharing and define safe multi-task learning where no negative sharing occurs. To achieve safe multi-task learning, we propose a Deep Safe Multi-Task Learning (DSMTL) model with two learning strategies: individual learning and joint learning. We theoretically study the safeness of both learning strategies in the DSMTL model to show that the proposed methods can achieve some versions of safe multi-task learning. Moreover, to improve the scalability of the DSMTL model, we propose an extension, which automatically learns a compact architecture and empirically achieves safe multi-task learning. Extensive experiments on benchmark datasets verify the safeness of the proposed methods.
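To make the gating idea concrete, here is a minimal sketch of a DSMTL-style architecture: a shared (public) encoder, one private encoder per task, and a learnable per-task gate that mixes the two representations. This is an illustrative assumption based on the abstract, not the authors' released implementation; all module names, dimensions, and the sigmoid parameterization of the gate are hypothetical.

```python
import torch
import torch.nn as nn


class DSMTLSketch(nn.Module):
    """Illustrative DSMTL-style model (not the authors' code).

    Each task combines a shared (public) encoder with its own private
    encoder via a learnable gate alpha_t in (0, 1). If alpha_t -> 0,
    the task falls back to a purely private, single-task-like path,
    which is the intuition behind avoiding negative sharing.
    """

    def __init__(self, in_dim, hidden_dim, num_tasks, out_dims):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
            for _ in range(num_tasks)
        )
        # One raw gate parameter per task; sigmoid keeps alpha_t in (0, 1).
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks))
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, out_dims[t]) for t in range(num_tasks)
        )

    def forward(self, x, task_id):
        alpha = torch.sigmoid(self.gate_logits[task_id])
        # Convex combination of shared and private representations.
        h = alpha * self.shared(x) + (1 - alpha) * self.private[task_id](x)
        return self.heads[task_id](h)


if __name__ == "__main__":
    # Toy usage: 3 tasks over 16-dim inputs with different output sizes.
    model = DSMTLSketch(in_dim=16, hidden_dim=32, num_tasks=3,
                        out_dims=[2, 2, 5])
    y = model(torch.randn(8, 16), task_id=0)
    print(y.shape)  # torch.Size([8, 2])
```

Under this reading, "individual learning" would train each task's private path separately while "joint learning" trains all parameters together; the gates give each task an escape hatch from harmful sharing, which is what the paper's safeness guarantees formalize.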
