silwals committed
Commit 8a47f31
1 Parent(s): f23b1d7

Update README.md

Files changed (1):
  1. README.md (+1, -0)
README.md CHANGED
@@ -8,6 +8,7 @@ Last updated: 2023-04-07
 Version: 1.0
 
 Code: https://github.com/facebookresearch/eai-vc
+
 Other Links: VC-1 Website, VC-1 Blogpost, VC-1 Paper, VC-1 Demo
 The VC-1 model is a vision transformer (ViT) pre-trained on over 4,000 hours of egocentric videos from 7 different sources, together with ImageNet. The model is trained using Masked Auto-Encoding (MAE) and is available in two sizes: ViT-B and ViT-L. The model is intended for use for EmbodiedAI tasks, such as object manipulation and indoor navigation.
 The VC-1 model is a vision transformer (ViT) pre-trained on over 4,000 hours of egocentric videos from 7 different sources, together with ImageNet. The model is trained using Masked Auto-Encoding (MAE) and is available in two sizes: ViT-B and ViT-L. The model is intended for use for EmbodiedAI tasks, such as object manipulation and indoor navigation.
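For context, the model described in the README above can be loaded and used as a frozen visual backbone roughly as follows. This is a minimal sketch, assuming the `vc_models` package from https://github.com/facebookresearch/eai-vc exposes `model_utils.load_model` and the `VC1_BASE_NAME` constant as in that repository's documentation; verify the exact names, input resolution, and return values against the repo.

```python
# Minimal, unverified sketch of loading a VC-1 (ViT-B) backbone with the
# vc_models package from https://github.com/facebookresearch/eai-vc.
# model_utils.load_model and VC1_BASE_NAME are assumptions taken from that
# repo's README; a VC1_LARGE_NAME constant is assumed for the ViT-L variant.
import torch
from vc_models.models.vit import model_utils

# load_model is assumed to return the frozen encoder, its embedding size,
# the preprocessing transforms, and a metadata dict.
model, embd_size, model_transforms, model_info = model_utils.load_model(
    model_utils.VC1_BASE_NAME
)

# Dummy batch of RGB observations (e.g. egocentric camera frames).
# The transforms are assumed to resize/normalize to the ViT's expected input.
img = torch.rand(1, 3, 250, 250)
x = model_transforms(img)

# Forward pass yields one embedding per image (embd_size-dimensional,
# e.g. 768 for ViT-B) that a downstream EmbodiedAI policy can consume.
with torch.no_grad():
    embedding = model(x)

print(embedding.shape, embd_size)
```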