# InstructPix2Pix: Learning to Follow Image Editing Instructions

## Overview

[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) by Tim Brooks, Aleksander Holynski and Alexei A. Efros.

The abstract of the paper is the following:

*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*

Resources:

* [Project Page](https://www.timothybrooks.com/instruct-pix2pix).
* [Paper](https://arxiv.org/abs/2211.09800).
* [Original Code](https://github.com/timothybrooks/instruct-pix2pix).
* [Demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).

## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [StableDiffusionInstructPix2PixPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py) | *Text-Based Image Editing* | [🤗 Space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix) |

## Usage example

```python
import requests
import torch
from PIL import Image, ImageOps

from diffusers import StableDiffusionInstructPix2PixPipeline

# Load the pretrained InstructPix2Pix checkpoint in half precision and move it to the GPU.
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"


def download_image(url):
    # Download the input image, apply its EXIF orientation, and convert it to RGB.
    image = Image.open(requests.get(url, stream=True).raw)
    image = ImageOps.exif_transpose(image)
    image = image.convert("RGB")
    return image


image = download_image(url)

# Edit the image by passing a written instruction together with the input image.
prompt = "make the mountains snowy"
images = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images
images[0].save("snowy_mountains.png")
```

## StableDiffusionInstructPix2PixPipeline
[[autodoc]] StableDiffusionInstructPix2PixPipeline
	- __call__
	- all
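## Guidance scale tips

The two guidance arguments pull the edit in different directions: `guidance_scale` controls how strongly the output follows the text instruction, while `image_guidance_scale` controls how closely the output stays to the input image. Below is a minimal sketch, not a prescription, that reuses the same `timbrooks/instruct-pix2pix` checkpoint and mountain image from the usage example above and sweeps a few combinations so the results can be compared side by side; the specific value grids are illustrative choices, not recommended defaults.

```python
import requests
import torch
from PIL import Image, ImageOps

from diffusers import StableDiffusionInstructPix2PixPipeline

# Same checkpoint and input image as in the usage example above.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
image = ImageOps.exif_transpose(Image.open(requests.get(url, stream=True).raw)).convert("RGB")

# Sweep a few (illustrative) guidance settings and save every result for visual comparison.
# Higher guidance_scale -> stronger adherence to the instruction;
# higher image_guidance_scale -> output stays closer to the input image.
for image_guidance_scale in (1.0, 1.5, 2.0):
    for guidance_scale in (5.0, 7.5, 10.0):
        edited = pipe(
            "make the mountains snowy",
            image=image,
            num_inference_steps=20,
            image_guidance_scale=image_guidance_scale,
            guidance_scale=guidance_scale,
        ).images[0]
        edited.save(f"snowy_ig{image_guidance_scale}_g{guidance_scale}.png")
```

The values used in the usage example (`image_guidance_scale=1.5`, `guidance_scale=7`) are a reasonable starting point; the best trade-off varies with the input image and the instruction.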