arxiv:1904.03493

VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research

Published on Apr 6, 2019
Abstract

We present a new large-scale multilingual video description dataset, VATEX, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. Compared to the widely-used MSR-VTT dataset, VATEX is multilingual, larger, more linguistically complex, and more diverse in terms of both video content and natural language descriptions. We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in multiple languages with a compact unified captioning model, and (2) Video-guided Machine Translation, which translates a source-language description into the target language using the video as additional spatiotemporal context. Extensive experiments on the VATEX dataset show, first, that the unified multilingual model not only produces both English and Chinese descriptions for a video more efficiently, but also offers improved performance over monolingual models. Furthermore, we demonstrate that the spatiotemporal video context can be effectively utilized to align source and target languages and thus assist machine translation. Finally, we discuss the potential of using VATEX for other video-and-language research.
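The abstract notes that a subset of the captions form English-Chinese parallel translation pairs, which is what the Video-guided Machine Translation task builds on. Below is a minimal sketch of extracting such pairs from VATEX-style caption records. The field names (`videoID`, `enCap`, `chCap`) and the convention that the last few captions per language are mutual translations follow the dataset's public release, but treat them as assumptions if your copy differs; the sample record and its video ID are hypothetical.

```python
import json

# Hypothetical VATEX-style record: each video carries English captions
# (enCap) and Chinese captions (chCap); the trailing captions in each
# list are assumed to be aligned English-Chinese translations.
sample = json.loads("""
[
  {
    "videoID": "Ckj6Uc3g4s4_000012_000022",
    "enCap": ["A man strums a guitar on a stage.",
              "A man is playing a guitar."],
    "chCap": ["\\u4e00\\u4e2a\\u7537\\u4eba\\u5728\\u821e\\u53f0\\u4e0a\\u5f39\\u5409\\u4ed6\\u3002",
              "\\u4e00\\u4e2a\\u7537\\u4eba\\u5728\\u5f39\\u5409\\u4ed6\\u3002"]
  }
]
""")

def parallel_pairs(records, n_parallel=5):
    """Yield (videoID, English, Chinese) triples for the aligned captions.

    Taking the final n_parallel captions in each language as translations
    of one another is an assumption about the release format; the paper
    reports over 206,000 such pairs across the dataset.
    """
    for rec in records:
        en = rec["enCap"][-n_parallel:]
        zh = rec["chCap"][-n_parallel:]
        for e, z in zip(en, zh):
            yield rec["videoID"], e, z

for vid, en, zh in parallel_pairs(sample, n_parallel=2):
    print(vid, "|", en, "|", zh)
```

For Video-guided Machine Translation, these pairs would then be joined with the corresponding video features keyed by `videoID`.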
