"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True |
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
text2image

Now you can pass this generated image to the image-to-image pipeline:

pipeline = AutoPipelineForImage2Image.from_pretrained(
"kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True |
) |
pipeline.enable_model_cpu_offload() |
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed |
pipeline.enable_xformers_memory_efficient_attention() |
image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] |
make_image_grid([text2image, image2image], rows=1, cols=2)

Image-to-image-to-image

You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image.

Start by generating an image:

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, output_type="latent").images[0]

It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.
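
If you mix checkpoints and want to guarantee that the second pipeline can decode the first pipeline's latents, one option is to hand the already-loaded VAE to the next from_pretrained call. This is a minimal sketch rather than part of the original guide; it assumes pipeline is the Stable Diffusion v1-5 image-to-image pipeline created above:

import torch
from diffusers import AutoPipelineForImage2Image

# sketch: load the next checkpoint but reuse the first pipeline's VAE so the
# latent output above can be decoded without an extra decode-encode round trip
comic_pipeline = AutoPipelineForImage2Image.from_pretrained(
    "ogkalu/Comic-Diffusion", torch_dtype=torch.float16, vae=pipeline.vae
)

In this particular chain both checkpoints are Stable Diffusion v1 fine-tunes, so their VAEs should already be compatible; the override just makes that assumption explicit.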
"ogkalu/Comic-Diffusion", torch_dtype=torch.float16 |
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# need to include the token "charliebo artstyle" in the prompt to use this checkpoint
image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( |
"kohbanye/pixel-art-style", torch_dtype=torch.float16 |
) |
pipeline.enable_model_cpu_offload() |
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed |
pipeline.enable_xformers_memory_efficient_attention() |
# need to include the token "pixelartstyle" in the prompt to use this checkpoint |
image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] |
make_image_grid([init_image, image], rows=1, cols=2)
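
As noted above, one use for chaining is assembling the intermediate results into a short GIF. This is a minimal sketch, not part of the original guide: it assumes you kept the decoded PIL image from each stage (that is, you ran the styled stages without output_type="latent"), and comic_image and pixel_image are hypothetical names for those outputs.

# sketch: write the intermediate images out as an animated GIF with Pillow
frames = [init_image, comic_image, pixel_image]  # hypothetical per-stage PIL images
frames = [frame.resize(frames[0].size) for frame in frames]  # keep every frame the same size
frames[0].save(
    "style_transfer.gif",
    save_all=True,             # write all frames, not just the first
    append_images=frames[1:],  # remaining frames of the animation
    duration=500,              # show each frame for 500 ms
    loop=0,                    # loop forever
)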

Image-to-upscaler-to-super-resolution

Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of detail in an image.

Start with an image-to-image pipeline:

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0]

It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.

Chain it to an upscaler pipeline to increase the image resolution:

from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
upscaler.enable_model_cpu_offload()
upscaler.enable_xformers_memory_efficient_attention()

image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0]

Finally, chain it to a super-resolution pipeline to further enhance the resolution:

from diffusers import StableDiffusionUpscalePipeline

super_res = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
super_res.enable_model_cpu_offload()
super_res.enable_xformers_memory_efficient_attention()

image_3 = super_res(prompt, image=image_2).images[0]
make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2)

Control image generation

Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets.
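
For example, a negative prompt is just an extra argument to the pipeline call. The snippet below is a minimal sketch rather than part of the original guide; it reuses the same checkpoint and initial image as the earlier examples, and the negative prompt text is only illustrative:

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")

image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    negative_prompt="blurry, low quality, distorted anatomy",  # concepts to push the generation away from
    image=init_image,
).images[0]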

Prompt weighting

Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide.

AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds, if you’re using a negative prompt) parameter where you can pass the embeddings, which replaces the prompt parameter.

from diffusers import AutoPipelineForImage2Image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
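
# --- sketch, not part of the original guide: one way to create the embeddings ---
# Compel builds weighted embeddings from the pipeline's own tokenizer and text encoder;
# "++" upweights a concept (see the Compel documentation for the full syntax)
from compel import Compel

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)
prompt_embeds = compel("Astronaut in a jungle++, cold color palette, muted colors, detailed, 8k")
negative_prompt_embeds = compel("blurry, low quality")
# pad both tensors to the same length so the pipeline accepts them together
[prompt_embeds, negative_prompt_embeds] = compel.pad_conditioning_tensors_to_same_length(
    [prompt_embeds, negative_prompt_embeds]
)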

image = pipeline(prompt_embeds=prompt_embeds,  # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds,  # generated from Compel
    image=init_image,
).images[0]

ControlNet

ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it.

For example, let’s condition an image with a depth map to keep the spatial information in the image.

from diffusers.utils import load_image, make_image_grid