Image processors
An image processor converts images into pixel values, tensors that represent image colors and size. The pixel values are the inputs to a vision or video model. To ensure a pretrained model receives the correct input, an image processor can perform the following operations to make an image match the images the model was pretrained on.
- center_crop() to crop an image to the size the model expects
- normalize() or rescale() the pixel values
Use from_pretrained() to load an image processor's configuration (image size, whether to normalize and rescale, etc.) from a vision model on the Hugging Face Hub or a local directory. The configuration for each pretrained model is saved in a preprocessor_config.json file.
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
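The center_crop(), rescale(), and normalize() operations listed above can also be called individually. The snippet below is a minimal sketch using the processor loaded above on a dummy NumPy image; the 256x256 input and 224x224 crop size are assumptions for illustration.
import numpy as np

# a dummy 256x256 RGB image with pixel values in [0, 255] (illustrative only)
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# crop the image to an assumed 224x224 target size
cropped = image_processor.center_crop(image, size={"height": 224, "width": 224})
# scale pixel values from [0, 255] down to [0, 1]
rescaled = image_processor.rescale(cropped, scale=1 / 255)
# standardize with the checkpoint's mean and std
normalized = image_processor.normalize(rescaled, mean=image_processor.image_mean, std=image_processor.image_std)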
Pass an image to the image processor to transform it into pixel values, and set return_tensors="pt" to return PyTorch tensors. Feel free to print out the inputs to see what the image looks like as a tensor.
from PIL import Image
import requests

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/image_processor_example.png"
# download the example image and make sure it is in RGB format
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = image_processor(image, return_tensors="pt")
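For example, printing the shape of the pixel values shows the batched format (this checkpoint processes images to 224x224).
print(inputs["pixel_values"].shape)
# torch.Size([1, 3, 224, 224]) -> (batch_size, num_channels, height, width)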
This guide covers the image processor class and how to preprocess images for vision models.
Image processor classes
Image processors inherit from the BaseImageProcessor class which provides the center_crop(), normalize(), and rescale() functions. There are two types of image processors.
- BaseImageProcessor is a Python implementation.
- BaseImageProcessorFast is a faster, torchvision-backed version. For a batch of torch.Tensor inputs, it can be up to 33x faster. BaseImageProcessorFast is not available for all vision models at the moment. Refer to a model's API documentation to check if it is supported.
Each image processor subclasses the ImageProcessingMixin class which provides the from_pretrained() and save_pretrained() methods for loading and saving image processors.
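For example, a loaded image processor can be saved locally and reloaded from the saved directory (the directory name here is hypothetical).
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
# save_pretrained() writes the configuration to preprocessor_config.json
image_processor.save_pretrained("./my-image-processor")
# from_pretrained() also accepts a local directory
image_processor = AutoImageProcessor.from_pretrained("./my-image-processor")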
There are two ways to load an image processor: with AutoImageProcessor or with a model-specific image processor class.
The AutoClass API provides a convenient method to load an image processor without directly specifying the model the image processor is associated with.
Use from_pretrained() to load an image processor, and set use_fast=True to load a fast image processor if it's supported.
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)
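Alternatively, load the same configuration with the model-specific class, which for this checkpoint is ViTImageProcessor.
from transformers import ViTImageProcessor

image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")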
Fast image processors
BaseImageProcessorFast is based on torchvision and is significantly faster, especially when processing on a GPU. This class can be used as a drop-in replacement for BaseImageProcessor if it's available for a model because it has the same design. Make sure torchvision is installed, and set the use_fast parameter to True.
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)
Control which device processing is performed on with the device parameter. By default, processing is performed on the same device as the input if the inputs are tensors; otherwise the images are processed on the CPU. The example below places the fast processor on a GPU.
from torchvision.io import read_image
from transformers import DetrImageProcessorFast

# read_image loads the image as a torch.Tensor
images = read_image("image.jpg")
processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50")
# process on the GPU by setting the device parameter
images_processed = processor(images, return_tensors="pt", device="cuda")
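Since the pixel values are already on the GPU, they can be passed directly to a model on the same device. The snippet below is a minimal sketch, assuming the DETR checkpoint above.
import torch
from transformers import DetrForObjectDetection

# place the model on the same device as the processed inputs
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
with torch.no_grad():
    outputs = model(**images_processed)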
Benchmarks
The benchmarks were obtained on an AWS EC2 g5.2xlarge instance with an NVIDIA A10G Tensor Core GPU.
Preprocess
Transformers' vision models expect the input as PyTorch tensors of pixel values. An image processor handles the conversion of images to pixel values, a tensor with batch size, number of channels, height, and width dimensions. To achieve this, an image is resized (center cropped) and the pixel values are normalized and rescaled to the values the model expects.
Image preprocessing is not the same as image augmentation. Image augmentation makes changes (brightness, colors, rotation, etc.) to an image for the purpose of either creating new training examples or preventing overfitting. Image preprocessing makes changes to an image for the purpose of matching a pretrained model's expected input format.
Typically, images are augmented (to increase performance) and then preprocessed before being passed to a model. You can use any library (Albumentations, Kornia) for augmentation and an image processor for preprocessing.
This guide uses the torchvision transforms module for augmentation.
Start by loading a small sample of the food101 dataset.
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:100]")
From the transforms module, use the Compose API to chain together RandomResizedCrop and ColorJitter. These transforms randomly crop and resize an image, and randomly adjust an image's colors.
The image size to randomly crop to can be retrieved from the image processor. For some models, an exact height and width are expected, while for others only the shortest_edge is required.
from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose

size = (
    image_processor.size["shortest_edge"]
    if "shortest_edge" in image_processor.size
    else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
Apply the transforms to the images and convert them to the RGB format. Then pass the augmented images to the image processor to return the pixel values.
The do_resize parameter is set to False because the images have already been resized in the augmentation step by RandomResizedCrop. If you don't augment the images, the image processor automatically resizes and normalizes the images with the image_mean and image_std values. These values are found in the preprocessor configuration file.
def transforms(examples):
    # augment each image and convert it to RGB
    images = [_transforms(img.convert("RGB")) for img in examples["image"]]
    # skip resizing since RandomResizedCrop already resized the images
    examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
    return examples
Apply the combined augmentation and preprocessing function to the entire dataset on the fly with set_transform.
dataset.set_transform(transforms)
Convert the pixel values back into an image to see how the image has been augmented and preprocessed.
import matplotlib.pyplot as plt

img = dataset[0]["pixel_values"]
# permute from (channels, height, width) to (height, width, channels) for matplotlib
plt.imshow(img.permute(1, 2, 0))
For other vision tasks like object detection or segmentation, the image processor includes post-processing methods to convert a model's raw output into meaningful predictions like bounding boxes or segmentation maps.
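For example, DETR's image processor provides a post_process_object_detection() method. The sketch below assumes the processor and model outputs from the fast image processor example above; the 0.5 confidence threshold is an arbitrary choice.
# the original (height, width) of each image, used to map the predicted
# boxes back to the input image's coordinate system
target_sizes = [images.shape[1:]]
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)
# each entry in results is a dict with "scores", "labels", and "boxes" keys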
Padding
Some models, like DETR, apply scale augmentation during training, which can cause images in a batch to have different sizes. Images with different sizes can't be batched together.
To fix this, pad the images with the special padding value 0. Use the pad() method to pad the images, and define a custom collate function to batch them together.
def collate_fn(batch):
    pixel_values = [item["pixel_values"] for item in batch]
    # pad() pads the images to the largest size in the batch and returns
    # a pixel_mask marking real pixels (1) versus padding (0)
    encoding = image_processor.pad(pixel_values, return_tensors="pt")
    labels = [item["labels"] for item in batch]
    batch = {}
    batch["pixel_values"] = encoding["pixel_values"]
    batch["pixel_mask"] = encoding["pixel_mask"]
    batch["labels"] = labels
    return batch
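The collate function can then be passed to a PyTorch DataLoader, assuming a dataset whose items already contain pixel_values and labels.
from torch.utils.data import DataLoader

# batch examples of different sizes with the custom collate function
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)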