Commit 5f082be by Citaman (verified) · 1 parent: bab98fc

Update README.md

Files changed (1): README.md (+7 −2)
@@ -1,3 +1,8 @@
+---
+license: apache-2.0
+language:
+- en
+---
 # VeCLIP: Improving CLIP Training via Visual-enriched Captions
 
 *A novel CLIP training scheme that achieves SoTA performance on zero-shot ImageNet classification and COCO image-text retrieval using limited visual-enriched captions.* [[Paper](https://arxiv.org/abs/2310.07699)]
@@ -6,7 +11,7 @@
 
 
 <p align="center">
-<img src="figs/veclip_diagram.jpg" width="100%"> <br>
+<img src="veclip_diagram.jpg" width="100%"> <br>
 Diagram of VeCap.
 </p>
 
@@ -248,4 +253,4 @@ If you find VeCLIP useful, please cite using this BibTeX:
 ## Acknowledgement
 
 - [axlearn](https://github.com/apple/axlearn): the codebase we use to train the models.
-- [huggingface transformers](https://huggingface.co/docs/transformers/en/index): Transformers provides APIs to load our trained models.
+- [huggingface transformers](https://huggingface.co/docs/transformers/en/index): Transformers provides APIs to load our trained models.
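
For reference, a minimal sketch of loading a trained checkpoint through the standard transformers CLIP API, as the acknowledgement above suggests. The model path below is a placeholder, not an official model ID, and the released VeCLIP weights may first need conversion to the transformers CLIP format; the image and captions are illustrative only.

```python
# Minimal sketch: load a CLIP-style checkpoint with Hugging Face
# transformers and score one image against candidate captions.
# NOTE: MODEL_ID is a hypothetical placeholder; substitute the path
# of a VeCLIP checkpoint converted to the transformers CLIP format.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "path/to/veclip-checkpoint"  # placeholder, not an official ID

model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

image = Image.open("veclip_diagram.jpg")
captions = ["a diagram of a training pipeline", "a photo of a dog"]

# Tokenize the captions and preprocess the image in one call.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-to-text similarity scores, one row per image;
# softmax turns them into a probability over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```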