---
license: other
license_name: bria-legal-lobby
license_link: https://bria.ai/legal-lobby
---
# BRIA 3.0 ControlNet Union Model Card
BRIA-3.0 ControlNet-Union is trained on top of the BRIA-3.0 Text-to-Image foundation model.
BRIA 3.0 was trained from scratch exclusively on licensed data from our esteemed data partners. It is therefore safe for commercial use and comes with full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
Join our Discord community for more information, tutorials, tools, and to connect with other users!
## Model Description
- **Developed by:** BRIA AI
- **Model type:** ControlNet for latent diffusion
- **License:** bria-3.0
- **Model description:** ControlNet-Union for the BRIA 3.0 Text-to-Image model. The model generates images guided by a text prompt and a conditioning image.
- **Resources for more information:** BRIA AI
## Get Access
BRIA 3.0 ControlNet-Union requires access to BRIA 3.0 Text-to-Image. For more information, click here.
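Since access to the base model is gated, you may need to authenticate with your Hugging Face token before running the download steps below. A minimal sketch using `huggingface_hub`'s standard `login` helper (the token placeholder is yours to fill in):

```python
from huggingface_hub import login

# Authenticate once per environment; the token must belong to an account
# that has been granted access to the BRIA repos
login(token="hf_...")  # placeholder - use your own token
```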
## Control Mode
| Control Mode | Description |
|---|---|
| 0 | depth |
| 1 | canny |
| 2 | colorgrid |
| 3 | recolor |
| 4 | tile |
| 5 | pose |
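The pipeline expects the conditioning image to already be preprocessed to match the chosen mode. As a minimal sketch for mode 1 (assuming OpenCV and Pillow are installed; the exact preprocessing used during training may differ), a canny edge map can be produced like this:

```python
import cv2
import numpy as np
from PIL import Image

def make_canny_condition(image_path: str, low: int = 100, high: int = 200) -> Image.Image:
    """Turn an input photo into a 3-channel canny edge map (illustrative helper)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    # Replicate the single edge channel to RGB, as ControlNets expect an RGB image
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

control_image = make_canny_condition("input.jpg")  # hypothetical local file
```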
## Inference
```bash
pip install diffusers==0.30.2 huggingface_hub
```
First, download the custom pipeline and model files from the Hugging Face Hub:

```python
from huggingface_hub import hf_hub_download
import os

# Save the files next to this script; fall back to the working directory
# when __file__ is undefined (e.g. in a notebook)
try:
    local_dir = os.path.dirname(__file__)
except NameError:
    local_dir = '.'

hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='pipeline_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='transformer_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='bria_utils.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-ControlNet-Union", filename='pipeline_bria_controlnet.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-ControlNet-Union", filename='controlnet_bria.py', local_dir=local_dir)
```
Then run the pipeline:

```python
import torch
from diffusers.utils import load_image

from controlnet_bria import BriaControlNetModel, BriaMultiControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline
base_model = 'briaai/BRIA-3.0-TOUCAN'
controlnet_model = 'briaai/BRIA-3.0-ControlNet-Union'

controlnet = BriaControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipe = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Conditioning image: a pre-computed canny edge map
control_image = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union-alpha/resolve/main/images/canny.jpg")
controlnet_conditioning_scale = 0.5
control_mode = 1  # canny, per the Control Mode table above

width, height = control_image.size

prompt = 'A bohemian-style female travel blogger with sun-kissed skin and messy beach waves.'

image = pipe(
    prompt,
    control_image=control_image,
    control_mode=control_mode,
    width=width,
    height=height,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
image.save("image.jpg")
```
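On GPUs with limited VRAM, diffusers' usual offloading helper can replace the `pipe.to("cuda")` call, assuming the custom pipeline inherits from `DiffusionPipeline` as community pipelines typically do (a sketch; it trades throughput for memory):

```python
# Instead of pipe.to("cuda"):
pipe.enable_model_cpu_offload()  # moves each submodule to the GPU only while it runs
```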
## Multi-Controls Inference
```python
import torch
from diffusers.utils import load_image

from controlnet_bria import BriaControlNetModel, BriaMultiControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline

base_model = 'briaai/BRIA-3.0-TOUCAN'
controlnet_model_union = 'briaai/BRIA-3.0-ControlNet-Union'

controlnet_union = BriaControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
controlnet = BriaMultiControlNetModel([controlnet_union])  # we always recommend loading via BriaMultiControlNetModel
pipe = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = 'A bohemian-style female travel blogger with sun-kissed skin and messy beach waves.'

control_image_depth = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/resolve/main/images/depth.jpg")
control_mode_depth = 0  # depth, per the Control Mode table above
control_image_canny = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/resolve/main/images/canny.jpg")
control_mode_canny = 1  # canny

width, height = control_image_depth.size

image = pipe(
    prompt,
    control_image=[control_image_depth, control_image_canny],
    control_mode=[control_mode_depth, control_mode_canny],
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.2, 0.4],
    num_inference_steps=24,
    guidance_scale=3.5,
    generator=torch.manual_seed(42),
).images[0]
image.save("image.jpg")
```
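If you do not already have a depth map for mode 0, a monocular depth estimator is one way to build the conditioning image. A minimal sketch using transformers' `depth-estimation` pipeline (the `Intel/dpt-large` checkpoint is an illustrative choice, not necessarily what the ControlNet was trained against):

```python
from transformers import pipeline
from diffusers.utils import load_image

# Illustrative checkpoint; any model supported by the
# "depth-estimation" pipeline works the same way
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

source = load_image("input.jpg")  # hypothetical local file
control_image_depth = depth_estimator(source)["depth"].convert("RGB")
```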
## Resources
- InstantX/FLUX.1-dev-Controlnet-Canny
- InstantX/FLUX.1-dev-Controlnet-Union
- Shakker-Labs/FLUX.1-dev-ControlNet-Depth
- Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro
## Acknowledgements
Thanks to zzzzzero for helping us point out some bugs in the training.