Diffusers documentation

Flux

Flux is a series of text-to-image generation models based on diffusion transformers. To learn more about Flux, check out the original blog post by its creators, Black Forest Labs.

Original model checkpoints for Flux can be found here. Original inference code can be found here.

Flux can be quite expensive to run on consumer hardware. However, you can apply a suite of optimizations to run it faster and in a more memory-friendly manner. Check out this section for more details. Additionally, Flux can benefit from quantization for memory efficiency, with a trade-off in inference latency. Refer to this blog post to learn more. For an exhaustive list of resources, check out this gist.
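
For example, several of these optimizations can be combined directly on the pipeline. The following is a minimal sketch, not a benchmark; the exact memory savings depend on your GPU and the checkpoint you load:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # move submodules to the GPU only when they are needed
pipe.vae.enable_slicing()        # decode the batch one image at a time
pipe.vae.enable_tiling()         # decode each image tile by tile

image = pipe(
    "A cat holding a sign that says hello world",
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]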

Flux comes in two variants:

  • Timestep-distilled (black-forest-labs/FLUX.1-schnell)
  • Guidance-distilled (black-forest-labs/FLUX.1-dev)

Both checkpoints have slightly different usage, which we detail below.

Timestep-distilled

  • max_sequence_length cannot be more than 256.
  • guidance_scale needs to be 0.
  • As this is a timestep-distilled model, it benefits from fewer sampling steps.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
out.save("image.png")

Guidance-distilled

  • The guidance-distilled variant takes about 50 sampling steps for good-quality generation.
  • It doesn’t have any limitations around the max_sequence_length.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "a tiny astronaut hatching from an egg on the moon"
out = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=768,
    width=1360,
    num_inference_steps=50,
).images[0]
out.save("image.png")

Running FP16 inference

Flux can generate high-quality images with FP16 (for example, to accelerate inference on Turing/Volta GPUs), but it produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing the text encoders to run in FP32 removes this output difference. See here for details.

FP16 inference code:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16) # can replace schnell with dev
# to run on low vram GPUs (i.e. between 4 and 32 GB VRAM)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

pipe.to(torch.float16) # casting here instead of in the pipeline constructor because doing so in the constructor loads all models into CPU memory at once

prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
out.save("image.png")
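
To force the text encoders to run in FP32 as described above, one option is to cast only them back to float32. The following is a minimal sketch using plain model CPU offloading; combining the cast with the sequential offloading calls above may require a different call order:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float16)
# Keep only the text encoders in FP32 so their activations are not clipped in FP16.
pipe.text_encoder.to(torch.float32)
pipe.text_encoder_2.to(torch.float32)
pipe.enable_model_cpu_offload()
# generate as usual (see the example above)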

Single File Loading for the FluxTransformer2DModel

The FluxTransformer2DModel supports loading checkpoints in the original format shipped by Black Forest Labs. This is also useful when trying to load finetunes or quantized versions of the models that have been published by the community.
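
For example, a plain (non-quantized) single-file load might look like the sketch below; the checkpoint path is a placeholder for whichever original-format .safetensors file you want to load:

import torch
from diffusers import FluxTransformer2DModel, FluxPipeline

# Placeholder path: point this at an original-format Flux transformer checkpoint (local file or Hub URL).
transformer = FluxTransformer2DModel.from_single_file("flux1-dev.safetensors", torch_dtype=torch.bfloat16)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)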

`FP8` inference can be brittle depending on the GPU type, CUDA version, and `torch` version that you are using. It is recommended that you use the `optimum-quanto` library in order to run FP8 inference on your machine.

The following example demonstrates how to run Flux with less than 16GB of VRAM.

First, install optimum-quanto:

pip install optimum-quanto

Then run the following example:

import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel, CLIPTextModel
from optimum.quanto import freeze, qfloat8, quantize

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)

text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2

pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=3.5,
    output_type="pil",
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]

image.save("flux-fp8-dev.png")

FluxPipeline

class diffusers.FluxPipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel )

Parameters

  • transformer (FluxTransformer2DModel) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
  • scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — CLIP, specifically the clip-vit-large-patch14 variant.
  • text_encoder_2 (T5EncoderModel) — T5, specifically the google/t5-v1_1-xxl variant.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • tokenizer_2 (T5TokenizerFast) — Second Tokenizer of class T5TokenizerFast.

The Flux pipeline for text-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/

__call__

( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 28 timesteps: List = None guidance_scale: float = 3.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True joint_attention_kwargs: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] max_sequence_length: int = 512 ) ~pipelines.flux.FluxPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt will be used instead.
  • height (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
  • width (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
  • num_inference_steps (int, optional, defaults to 28) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
  • guidance_scale (float, optional, defaults to 3.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux.FluxPipelineOutput instead of a plain tuple.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
  • callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
  • max_sequence_length (int, optional, defaults to 512) — Maximum sequence length to use with the prompt.

Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import FluxPipeline

>>> pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
>>> image.save("flux.png")

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling

( )

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling

( )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
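
A typical pattern is to enable both slicing and tiling before decoding large or batched outputs; a minimal sketch:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()  # decode the batch one image at a time
pipe.enable_vae_tiling()   # decode each image tile by tile
# generate as usual; call disable_vae_slicing()/disable_vae_tiling() to restore one-step decoding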

encode_prompt

( prompt: Union prompt_2: Union device: Optional = None num_images_per_prompt: int = 1 prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None max_sequence_length: int = 512 lora_scale: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in all text-encoders.
  • device (torch.device, optional) — The torch device.
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
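
A sketch of precomputing the embeddings once and reusing them across calls; this assumes encode_prompt returns the prompt embeddings, the pooled embeddings, and the text IDs, in that order:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cuda")

# Encode the prompt once; assumed return order is (prompt_embeds, pooled_prompt_embeds, text_ids).
prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
    prompt="A cat holding a sign that says hello world",
    prompt_2=None,
    max_sequence_length=256,
)

# Reuse the precomputed embeddings instead of passing the raw prompt.
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    guidance_scale=0.0,
    num_inference_steps=4,
).images[0]
image.save("flux-precomputed.png")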

FluxImg2ImgPipeline

class diffusers.FluxImg2ImgPipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel )

Parameters

  • transformer (FluxTransformer2DModel) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
  • scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — CLIP, specifically the clip-vit-large-patch14 variant.
  • text_encoder_2 (T5EncoderModel) — T5, specifically the google/t5-v1_1-xxl variant.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • tokenizer_2 (T5TokenizerFast) — Second Tokenizer of class T5TokenizerFast.

The Flux pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/

__call__

( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None strength: float = 0.6 num_inference_steps: int = 28 timesteps: List = None guidance_scale: float = 7.0 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True joint_attention_kwargs: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] max_sequence_length: int = 512 ) ~pipelines.flux.FluxPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt will be used instead.
  • image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to be used as the starting point. For both numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but if passing latents directly they are not encoded again.
  • height (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
  • width (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
  • strength (float, optional, defaults to 0.6) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
  • num_inference_steps (int, optional, defaults to 28) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
  • guidance_scale (float, optional, defaults to 7.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux.FluxPipelineOutput instead of a plain tuple.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
  • callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
  • max_sequence_length (int, optional, defaults to 512) — Maximum sequence length to use with the prompt.

Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch

>>> from diffusers import FluxImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> device = "cuda"
>>> pipe = FluxImg2ImgPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_image = load_image(url).resize((1024, 1024))

>>> prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

>>> images = pipe(
...     prompt=prompt, image=init_image, num_inference_steps=4, strength=0.95, guidance_scale=0.0
... ).images[0]

encode_prompt

( prompt: Union prompt_2: Union device: Optional = None num_images_per_prompt: int = 1 prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None max_sequence_length: int = 512 lora_scale: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in all text-encoders.
  • device (torch.device, optional) — The torch device.
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

FluxInpaintPipeline

class diffusers.FluxInpaintPipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel )

Parameters

  • transformer (FluxTransformer2DModel) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
  • scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — CLIP, specifically the clip-vit-large-patch14 variant.
  • text_encoder_2 (T5EncoderModel) — T5, specifically the google/t5-v1_1-xxl variant.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • tokenizer_2 (T5TokenizerFast) — Second Tokenizer of class T5TokenizerFast.

The Flux pipeline for image inpainting.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/

__call__

( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.6 num_inference_steps: int = 28 timesteps: List = None guidance_scale: float = 7.0 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True joint_attention_kwargs: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] max_sequence_length: int = 512 ) ~pipelines.flux.FluxPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt will be used instead.
  • image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to be used as the starting point. For both numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but if passing latents directly they are not encoded again.
  • mask_image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W).
  • masked_image_latents (torch.Tensor, List[torch.Tensor]) — Tensor representing an image batch of the masked image generated by the VAE. If not provided, the masked image latents will be generated from mask_image.
  • height (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
  • width (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
  • padding_mask_crop (int, optional, defaults to None) — The size of margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that contains all masked areas, and then expand that region based on padding_mask_crop. The image and mask_image will then be cropped based on the expanded region before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large and contains information irrelevant to inpainting, such as background.
  • strength (float, optional, defaults to 0.6) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
  • num_inference_steps (int, optional, defaults to 28) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
  • guidance_scale (float, optional, defaults to 7.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux.FluxPipelineOutput instead of a plain tuple.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
  • callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
  • max_sequence_length (int, optional, defaults to 512) — Maximum sequence length to use with the prompt.

Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import FluxInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxInpaintPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image = pipe(prompt=prompt, image=source, mask_image=mask).images[0]
>>> image.save("flux_inpainting.png")

encode_prompt

( prompt: Union prompt_2: Union device: Optional = None num_images_per_prompt: int = 1 prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None max_sequence_length: int = 512 lora_scale: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in all text-encoders.
  • device (torch.device, optional) — The torch device.
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

FluxControlNetInpaintPipeline

class diffusers.FluxControlNetInpaintPipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel controlnet: Union )

Parameters

  • transformer (FluxTransformer2DModel) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
  • scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — CLIP, specifically the clip-vit-large-patch14 variant.
  • text_encoder_2 (T5EncoderModel) — T5, specifically the google/t5-v1_1-xxl variant.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • tokenizer_2 (T5TokenizerFast) — Second Tokenizer of class T5TokenizerFast.

The Flux controlnet pipeline for inpainting.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/

__call__

( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.6 padding_mask_crop: Optional = None timesteps: List = None num_inference_steps: int = 28 guidance_scale: float = 7.0 control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 control_mode: Union = None controlnet_conditioning_scale: Union = 1.0 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True joint_attention_kwargs: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] max_sequence_length: int = 512 ) ~pipelines.flux.FluxPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation.
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2.
  • image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — The image(s) to inpaint.
  • mask_image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — The mask image(s) to use for inpainting. White pixels in the mask will be repainted, while black pixels will be preserved.
  • masked_image_latents (torch.FloatTensor, optional) — Pre-generated masked image latents.
  • control_image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — The ControlNet input condition. Image to control the generation.
  • height (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
  • width (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
  • strength (float, optional, defaults to 0.6) — Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1.
  • padding_mask_crop (int, optional) — The size of the padding to use when cropping the mask.
  • num_inference_steps (int, optional, defaults to 28) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • timesteps (List[int], optional) — Custom timesteps to use for the denoising process.
  • guidance_scale (float, optional, defaults to 7.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance.
  • control_guidance_start (float or List[float], optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
  • control_guidance_end (float or List[float], optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
  • control_mode (int or List[int], optional) — The mode for the ControlNet. If multiple ControlNets are used, this should be a list.
  • controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original transformer.
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • generator (torch.Generator or List[torch.Generator], optional) — One or more torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux.FluxPipelineOutput instead of a plain tuple.
  • joint_attention_kwargs (dict, optional) — Additional keyword arguments to be passed to the joint attention mechanism.
  • callback_on_step_end (Callable, optional) — A function that calls at the end of each denoising step during the inference.
  • callback_on_step_end_tensor_inputs (List[str], optional) — The list of tensor inputs for the callback_on_step_end function.
  • max_sequence_length (int, optional, defaults to 512) — Maximum sequence length to use with the prompt.

Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import FluxControlNetInpaintPipeline
>>> from diffusers.models import FluxControlNetModel
>>> from diffusers.utils import load_image

>>> controlnet = FluxControlNetModel.from_pretrained(
...     "InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.float16
... )
>>> pipe = FluxControlNetInpaintPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-schnell", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> control_image = load_image(
...     "https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny-alpha/resolve/main/canny.jpg"
... )
>>> init_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
... )
>>> mask_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
... )

>>> prompt = "A girl holding a sign that says InstantX"
>>> image = pipe(
...     prompt,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     control_guidance_start=0.2,
...     control_guidance_end=0.8,
...     controlnet_conditioning_scale=0.7,
...     strength=0.7,
...     num_inference_steps=28,
...     guidance_scale=3.5,
... ).images[0]
>>> image.save("flux_controlnet_inpaint.png")

encode_prompt

( prompt: Union prompt_2: Union device: Optional = None num_images_per_prompt: int = 1 prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None max_sequence_length: int = 512 lora_scale: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in all text-encoders.
  • device (torch.device, optional) — The torch device.
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

FluxControlNetImg2ImgPipeline

class diffusers.FluxControlNetImg2ImgPipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel controlnet: Union )

Parameters

  • transformer (FluxTransformer2DModel) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
  • scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — CLIP, specifically the clip-vit-large-patch14 variant.
  • text_encoder_2 (T5EncoderModel) — T5, specifically the google/t5-v1_1-xxl variant.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • tokenizer_2 (T5TokenizerFast) — Second Tokenizer of class T5TokenizerFast.

The Flux controlnet pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/

__call__

( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.6 num_inference_steps: int = 28 timesteps: List = None guidance_scale: float = 7.0 control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 control_mode: Union = None controlnet_conditioning_scale: Union = 1.0 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True joint_attention_kwargs: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] max_sequence_length: int = 512 ) ~pipelines.flux.FluxPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation.
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2.
  • image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — The image(s) to modify with the pipeline.
  • control_image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — The ControlNet input condition. Image to control the generation.
  • height (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
  • width (int, optional, defaults to self.default_sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
  • strength (float, optional, defaults to 0.6) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1.
  • num_inference_steps (int, optional, defaults to 28) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • timesteps (List[int], optional) — Custom timesteps to use for the denoising process.
  • guidance_scale (float, optional, defaults to 7.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance.
  • control_mode (int or List[int], optional) — The mode for the ControlNet. If multiple ControlNets are used, this should be a list.
  • controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original transformer.
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • generator (torch.Generator or List[torch.Generator], optional) — One or more torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux.FluxPipelineOutput instead of a plain tuple.
  • joint_attention_kwargs (dict, optional) — Additional keyword arguments to be passed to the joint attention mechanism.
  • callback_on_step_end (Callable, optional) — A function that calls at the end of each denoising step during the inference.
  • callback_on_step_end_tensor_inputs (List[str], optional) — The list of tensor inputs for the callback_on_step_end function.
  • max_sequence_length (int, optional, defaults to 512) — Maximum sequence length to use with the prompt.

Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetModel
>>> from diffusers.utils import load_image

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> controlnet = FluxControlNetModel.from_pretrained(
...     "InstantX/FLUX.1-dev-Controlnet-Canny-alpha", torch_dtype=torch.bfloat16
... )

>>> pipe = FluxControlNetImg2ImgPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-schnell", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> pipe.text_encoder.to(torch.float16)
>>> pipe.controlnet.to(torch.float16)
>>> pipe.to("cuda")

>>> control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")
>>> init_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
... )

>>> prompt = "A girl in city, 25 years old, cool, futuristic"
>>> image = pipe(
...     prompt,
...     image=init_image,
...     control_image=control_image,
...     control_guidance_start=0.2,
...     control_guidance_end=0.8,
...     controlnet_conditioning_scale=1.0,
...     strength=0.7,
...     num_inference_steps=2,
...     guidance_scale=3.5,
... ).images[0]
>>> image.save("flux_controlnet_img2img.png")

encode_prompt

( prompt: Union prompt_2: Union device: Optional = None num_images_per_prompt: int = 1 prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None max_sequence_length: int = 512 lora_scale: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in all text-encoders.
  • device (torch.device, optional) — The torch device.
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.