Add library name, pipeline tag, and link to code
#2
by nielsr (HF staff) - opened

README.md CHANGED
@@ -1,5 +1,7 @@
 ---
 license: mit
+library_name: pytorch
+pipeline_tag: image-classification
 ---
 
 <div align="center">
@@ -15,6 +17,7 @@ We launch **EVA**, a vision-centric foundation model to **E**xplore the limits o
 
 ***EVA is the first open-sourced billion-scale vision foundation model that achieves state-of-the-art performance on a broad range of downstream tasks.***
 
+This repository hosts the pretrained weights for the models described in the paper [EVA: Exploring the Limits of Masked Visual Representation Learning at Scale](https://arxiv.org/abs/2211.07636). You can find a basic usage example in the GitHub repository at https://github.com/baaivision/EVA.
 </div>
 
 
@@ -180,5 +183,4 @@ The content of this project itself is licensed under the MIT License.
 For help or issues using EVA, please open a GitHub [issue](https://github.com/baaivision/EVA/issues/new).
 
 **We are hiring** at all levels at BAAI Vision Team, including full-time researchers, engineers and interns.
-If you are interested in working with us on **foundation model, self-supervised learning and multimodal learning**, please contact [Yue Cao](http://yue-cao.me/) (`[email protected]`) and [Xinlong Wang](https://www.xloong.wang/) (`[email protected]`).
-
+If you are interested in working with us on **foundation model, self-supervised learning and multimodal learning**, please contact [Yue Cao](http://yue-cao.me/) (`[email protected]`) and [Xinlong Wang](https://www.xloong.wang/) (`[email protected]`).
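For context on why this change matters: the Hub reads the YAML block between the two `---` fences at the top of README.md as model card metadata. `library_name` tells the Hub which library the weights target, and `pipeline_tag` assigns the task used for the inference widget and search filters. A minimal sketch (the `parse_front_matter` helper is hypothetical, not a Hub API) of how such a `key: value` frontmatter block is delimited and read:

```python
# Example README.md content with the frontmatter this PR produces.
readme = """---
license: mit
library_name: pytorch
pipeline_tag: image-classification
---

<div align="center">
"""

def parse_front_matter(text: str) -> dict:
    """Parse flat `key: value` pairs between the opening and closing `---` fences."""
    lines = text.splitlines()
    assert lines[0] == "---", "frontmatter must start on the first line"
    meta = {}
    for line in lines[1:]:
        if line == "---":  # closing fence ends the metadata block
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

meta = parse_front_matter(readme)
print(meta["library_name"])   # pytorch
print(meta["pipeline_tag"])   # image-classification
```

This only handles flat scalar keys, which is all this PR adds; real model cards can also contain nested YAML (tags, datasets), for which a full YAML parser would be needed.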