---
language:
- multilingual
- en
- zh
- de
- fr
library_name: sentence-transformers
license: apache-2.0
---

# ZeroNLG

Without any labeled downstream pairs for training, ZeroNLG is a unified framework that handles multiple natural language generation (NLG) tasks in a zero-shot manner, including image-to-text, video-to-text, and text-to-text generation across English, Chinese, German, and French.

Pre-training data: a machine-translated version of [CC3M](https://huggingface.co/datasets/conceptual_captions) (a loading sketch for the source corpus follows this list), including
- 1.1M English sentences
- 1.1M English-Chinese pairs
- 1.1M English-German pairs
- 1.1M English-French pairs

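If you want to browse the source captions that were translated, the snippet below is a minimal sketch that loads the original English CC3M from the Hugging Face Hub. It assumes a `datasets` version that still supports script-based loaders and that the columns are named `image_url` and `caption`; it does not include the machine-translated pairs.

```python
from datasets import load_dataset

# Load the original English Conceptual Captions (CC3M) training split.
# Assumption: script-based dataset loading is available (datasets < 3.0)
# and the columns are named `image_url` and `caption`.
cc3m = load_dataset("conceptual_captions", split="train")

for example in cc3m.select(range(3)):
    print(example["caption"], example["image_url"])
```
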

Paper: [ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation](https://arxiv.org/abs/2303.06458)

Authors: *Bang Yang\*, Fenglin Liu\*, Yuexian Zou, Xian Wu, Yaowei Wang, David A. Clifton*

## Quick Start
Please follow our [GitHub repo](https://github.com/yangbang18/ZeroNLG) to set up the environment first.

```python
from zeronlg import ZeroNLG

# Automatically download the model from the Hugging Face Hub
# Note: this model is pre-trained specifically for visual captioning
model = ZeroNLG('zeronlg-4langs-vc')

# `images` can be a remote image URL, a local image/video file, etc.
# `lang` should be one of English ('en'), Chinese ('zh'), German ('de'), and French ('fr')
url = 'https://img2.baidu.com/it/u=1856500011,1563285204&fm=253&fmt=auto&app=138&f=JPEG?w=667&h=500'
caption = model.forward(images=url, lang='en', num_beams=3, task='caption')
# caption = "dogs play in the snow"

caption = model.forward(images=url, lang='zh', num_beams=3, task='caption')
# caption = "狗 在 雪 地 里 玩 耍"

# Alternatively, you can call the task-specific forward function
caption = model.forward_caption(images=url, lang='en', num_beams=3)
```

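Since `images` also accepts a local video file, the same calls can be used for zero-shot video captioning in any of the four languages. The file path below is only a placeholder; substitute your own video.

```python
# Hypothetical local video file; replace with an actual path.
video_path = 'assets/demo.mp4'

# Same API as above, just pointed at a video and other target languages.
caption_de = model.forward_caption(images=video_path, lang='de', num_beams=3)
caption_fr = model.forward_caption(images=video_path, lang='fr', num_beams=3)
```
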
## Zero-Shot Performance
### Visual captioning
Model: [zeronlg-4langs-vc](https://huggingface.co/yangbang18/zeronlg-4langs-vc)'s multilingual decoder + CLIP's ViT-B-32 image encoder.

| Dataset | Language | Type | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | METEOR | ROUGE-L | CIDEr-D | SPICE |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| [Flickr30K](https://paperswithcode.com/paper/from-image-descriptions-to-visual-denotations) | English | Image | 46.4 | 27.2 | 15.5 | 8.9 | 13.0 | 31.3 | 21.0 | 7.6 |
| Flickr30K | [Chinese](https://dl.acm.org/doi/abs/10.1145/3123266.3123366) | Image | 45.3 | 25.5 | 14.6 | 8.4 | - | 31.8 | 18.0 | - |
| Flickr30K | [German](https://github.com/multi30k/dataset) | Image | 41.9 | 21.1 | 11.2 | 5.7 | - | 21.2 | 17.1 | - |
| Flickr30K | [French](https://github.com/multi30k/dataset) | Image | 19.8 | 9.5 | 5.0 | 2.8 | - | 18.6 | 24.8 | - |
| [COCO](https://paperswithcode.com/paper/microsoft-coco-captions-data-collection-and) | English | Image | 47.5 | 29.0 | 16.8 | 9.6 | 14.4 | 34.9 | 29.9 | 8.7 |
| [MSR-VTT](https://paperswithcode.com/paper/msr-vtt-a-large-video-description-dataset-for) | English | Video | 52.2 | 31.9 | 16.6 | 8.7 | 15.0 | 35.4 | 9.9 | - |
| [VATEX](https://paperswithcode.com/paper/vatex-a-large-scale-high-quality-multilingual) | English | Video | 42.2 | 24.6 | 12.5 | 6.3 | 11.7 | 29.3 | 9.1 | - |
| VATEX | Chinese | Video | 41.9 | 24.3 | 13.7 | 7.1 | - | 29.6 | 9.8 | - |

**Notes:**
- For non-English visual captioning, we do not report METEOR and SPICE because they rely on English synonym matching and named entity recognition by default.
- For video captioning in English, we do not report SPICE, following common practice.
- `Flickr30K-Chinese` is also known as `Flickr30K-CN`.
- `Flickr30K-German` and `Flickr30K-French` are introduced in `Multi30K`.

73
+ Model: [zeronlg-4langs-vc](https://huggingface.co/yangbang18/zeronlg-4langs-vc)'s multilingual encoder + CLIP's ViT-B-32 image encoder
74
+ | Dataset | Language | Type | I2T R@1 | I2T R@5 | I2T R@10 | I2T Mean | T2I R@1 | T2I R@5 | T2I R@10 | T2I Mean | Avg.|
75
+ | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
76
+ | [Flickr30K](https://paperswithcode.com/paper/from-image-descriptions-to-visual-denotations) | English | Image | 75.2 | 93.9 | 97.1 | 88.7 | 57.1 | 82.2 | 89.1 | 76.1 | 82.4|
77
+ | Flickr30K | [Chinese](https://dl.acm.org/doi/abs/10.1145/3123266.3123366) | Image | 75.0 | 93.0 | 96.7 | 88.2 | 53.8 | 79.8 | 87.1 | 73.6 | 80.9|
78
+ | Flickr30K | [German](https://github.com/multi30k/dataset) | Image | 70.9 | 91.1 | 95.7 | 85.9 | 47.5 | 74.1 | 83.1 | 68.2 | 77.1|
79
+ | Flickr30K | [French](https://github.com/multi30k/dataset) | Image | 55.8 | 83.4 | 91.5 | 76.9 | 56.6 | 81.2 | 88.4 | 75.4 | 76.2|
80
+ | [COCO 5K](https://paperswithcode.com/paper/microsoft-coco-captions-data-collection-and) | English | Image | 45.0 | 71.1 | 80.3 | 65.5 | 28.2 | 53.3 | 64.5 | 48.7 | 57.1
81
+ | COCO 1K | English | Image | 66.0 | 89.1 | 94.6 | 83.2 | 47.5 | 77.5 | 87.9 | 71.0 | 77.1 |
82
+ | [MSR-VTT](https://paperswithcode.com/paper/msr-vtt-a-large-video-description-dataset-for) | English | Video | 32.0 | 55.5 | 65.8 | 51.1 | 17.9 | 36.4 | 45.5 | 33.3 | 42.2
83
+ | [VATEX](https://paperswithcode.com/paper/vatex-a-large-scale-high-quality-multilingual) | English | Video | 26.9 | 52.8 | 64.2 | 48.0 | 19.2 | 41.2 | 52.7 | 37.7 | 42.8
84
+ | VATEX | Chinese | Video | 40.6 | 70.9 | 82.7 | 64.7 | 28.8 | 58.0 | 70.1 | 52.3 | 58.5 |
85
+
86
+ **Notes:**
87
+ - `I2T`: image-to-text retrieval, image as the query, search similar texts
88
+ - `T2I`: text-to-image retrieval, text as the query, search similar images
89
+ - `R@K`: Recall rate at top-K candidates
90
+ - `Avg.`: Average of `R@{1,5,10}` on both directions
91
+ - Retrieval uses the same testing sets as those for visual captioning, except `COCO-1K`, which splits the original testing set into 5 folds and report performance averaged over 5 folds.
92
+
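For reference, the snippet below is a minimal sketch of how `R@K` can be computed from an image-text similarity matrix; the variable names are illustrative, it assumes one ground-truth match per query (row i matches column i), and it is not the evaluation code used in our repo.

```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """R@K for a similarity matrix where row i's ground truth is column i."""
    # Rank candidates for each query (row) and check whether the
    # ground-truth index appears among the top-K.
    top_k = np.argsort(-similarity, axis=1)[:, :k]
    hits = (top_k == np.arange(similarity.shape[0])[:, None]).any(axis=1)
    return 100.0 * hits.mean()

# Toy example: 4 images x 4 texts, ground truth on the diagonal.
sim = np.random.rand(4, 4)
i2t = [recall_at_k(sim, k) for k in (1, 5, 10)]    # image-to-text
t2i = [recall_at_k(sim.T, k) for k in (1, 5, 10)]  # text-to-image
avg = float(np.mean(i2t + t2i))                    # `Avg.` in the table
```
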
## Citation
```bibtex
@article{Yang2023ZeroNLG,
  title={ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation},
  author={Yang, Bang and Liu, Fenglin and Zou, Yuexian and Wu, Xian and Wang, Yaowei and Clifton, David A.},
  journal={arXiv preprint arXiv:2303.06458},
  year={2023}
}
```