---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Vision Transformer (base-sized model) fine-tuned for trash classification

Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who had already converted the weights from JAX to PyTorch. Credits go to him.

Disclaimer: The team releasing ViT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The model was then fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
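
To make the sequence layout concrete, here is a small arithmetic sketch (an illustration added for this card, not text from the original paper):

```python
# Sketch: token-sequence shape for a base-sized ViT at 224x224 with 16x16 patches.
image_size = 224
patch_size = 16
hidden_size = 768  # ViT-base embedding dimension

num_patches = (image_size // patch_size) ** 2  # 14 x 14 = 196 patches
seq_len = num_patches + 1                      # +1 for the [CLS] token

print(num_patches, seq_len)  # 196 197
# Each 16x16x3 patch (768 pixel values) is linearly projected to a
# hidden_size-dimensional embedding; absolute position embeddings of shape
# (seq_len, hidden_size) are then added before the Transformer encoder.
```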

By pre-training, the model learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
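
As a minimal sketch of that linear-probe idea (the checkpoint name and the 6-class head below are illustrative assumptions, not this model's training code):

```python
# Sketch: a linear classifier on top of the [CLS] token of a pre-trained ViT.
import torch
from transformers import ViTModel

backbone = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
head = torch.nn.Linear(backbone.config.hidden_size, 6)  # e.g. the six classes below

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for one preprocessed image
with torch.no_grad():
    cls_state = backbone(pixel_values).last_hidden_state[:, 0]  # [CLS] hidden state
logits = head(cls_state)  # shape (1, 6)
```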

## Dataset

The fine-tuning dataset spans six classes: glass, paper, cardboard, plastic, metal, and trash. Currently, it consists of 2527 images:

* 501 glass
* 594 paper
* 403 cardboard
* 482 plastic
* 410 metal
* 137 trash

## Fine-tuning notebook

This [notebook](https://colab.research.google.com/drive/1RbmRPJ9bFLA_qK9RGgPoHZRnUTy_md5O?usp=sharing) outlines the steps from preparing the data in a ViT-compatible format to training the model.
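
As a hedged sketch of the data-preparation step (the folder layout and checkpoint name are assumptions for illustration; see the notebook for the actual pipeline):

```python
# Sketch: load a folder-per-class image tree and preprocess it for ViT on the fly.
# Assumes a hypothetical layout like data/train/glass/*.jpg, data/train/paper/*.jpg, ...
from datasets import load_dataset
from transformers import AutoFeatureExtractor

dataset = load_dataset("imagefolder", data_dir="data")
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

def preprocess(batch):
    # Resize and normalize each image to the 224x224 tensors ViT expects
    inputs = feature_extractor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(preprocess)
```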

### How to use

Here is how to use this model to classify an image (here, one from the COCO 2017 dataset) into one of the six trash classes:

```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image
import requests

# Load a test image from the COCO 2017 validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the feature extractor and the fine-tuned classification model
feature_extractor = AutoFeatureExtractor.from_pretrained("Aalaa/Fine_tuned_Vit_trash_classification")
model = AutoModelForImageClassification.from_pretrained("Aalaa/Fine_tuned_Vit_trash_classification")

# Preprocess the image and run a forward pass
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the six trash classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
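
If you want class probabilities rather than just the top label, you can apply a softmax to the logits (a small addition to the snippet above, reusing its `logits` and `model`):

```python
# Convert the logits from the snippet above into per-class probabilities
probs = logits.softmax(-1)[0]
for idx, p in sorted(enumerate(probs.tolist()), key=lambda t: -t[1]):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```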

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html).

## Training data

The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
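
That recipe can be written down directly; here is a hedged torchvision equivalent (an illustration of the description above, not the exact training pipeline):

```python
# Sketch: the resize + normalize recipe described above, in torchvision.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # scales pixel values to [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5),  # maps [0, 1] to [-1, 1]
                         std=(0.5, 0.5, 0.5)),
])
```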

### Pretraining

The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
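
As a hedged PyTorch sketch of the warmup-plus-clipping recipe (the parameters, optimizer, and learning rate are placeholders, not the paper's values):

```python
# Sketch: linear learning-rate warmup over 10k steps plus gradient clipping
# at global norm 1, as described above. Placeholder parameters and LR.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for the ViT parameters
optimizer = torch.optim.SGD(params, lr=1e-3)   # placeholder optimizer/LR
warmup_steps = 10_000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))

# Inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```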

## Evaluation results

For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.

### BibTeX entry and citation info

```bibtex
@misc{dosovitskiy2020image,
      title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
      author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
      year={2020},
      eprint={2010.11929},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

```bibtex
@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
  pages={248--255},
  year={2009},
  organization={IEEE}
}
```