|
--- |
|
base_model: black-forest-labs/FLUX.1-dev |
|
library_name: diffusers |
|
tags: |
|
- flux
- flux-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
|
inference: true |
|
--- |
|
|
|
|
|
|
|
|
# promeai/FLUX.1-controlnet-lineart-promeai |
|
|
|
`promeai/FLUX.1-controlnet-lineart-promeai` holds ControlNet weights trained on `black-forest-labs/FLUX.1-dev` with a line-art conditioning input.
|
|
|
|
|
Here are some example images. |
|
|
|
prompt: cute anime girl with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress and a white apron mouth open holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere |
|
| input control | result image |
| :-: | :-: |
| ![input control](./images/example-control.jpg) | ![output](./images/example-output.jpg) |
|
|
|
|
|
|
|
## Intended uses & limitations

This model is intended for line-art-conditioned text-to-image generation: given a line-art control image and a text prompt, it guides FLUX.1-dev to follow the structure of the line art.
|
|
|
|
|
## How to use |
|
|
|
### with diffusers |
|
|
|
```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

base_model = "black-forest-labs/FLUX.1-dev"
controlnet_model = "promeai/FLUX.1-controlnet-lineart-promeai"

# Load the ControlNet weights, then build the FLUX pipeline around them.
controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The control image should be a line-art drawing of the target composition.
control_image = load_image("./images/example-control.jpg")
prompt = "cute anime girl with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress and a white apron mouth open holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere"

image = pipe(
    prompt,
    control_image=control_image,
    # How strongly the line art constrains the output; lower values give the model more freedom.
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("./image.jpg")
```
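
If you only have an ordinary image rather than line art, a control image can be extracted with a line-art annotator. Below is a minimal sketch, assuming the community `controlnet_aux` package and its `LineartDetector` (neither is part of this repository, and the input path is a placeholder):

```python
from controlnet_aux import LineartDetector
from diffusers.utils import load_image

# Placeholder input image; substitute your own photo or render.
source = load_image("./my_photo.jpg")

# Downloads the line-art annotator weights from the lllyasviel/Annotators repo.
processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(source)
control_image.save("./my_lineart.jpg")
```

The saved line art can then be passed as `control_image` to the pipeline above.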
|
|
|
### with comfyui |
|
An [example ComfyUI workflow](./example_workflow.json) is also provided.
|
|
|
|
|
## Limitations and bias |
|
|
|
[TODO: provide examples of latent issues and potential remediations] |
|
|
|
## Training details |
|
|
|
This ControlNet was trained on a single A100-80G GPU on a carefully selected proprietary dataset of real-world images, first at image size 512 with batch size 3, then at image size 1024 with batch size 1. With this configuration, training used about 70 GB of GPU memory and took around 3 days to reach this 14,000-step checkpoint.
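
Inference with the full bf16 pipeline is also memory-hungry. If it does not fit on your GPU, diffusers' standard model offloading can reduce peak VRAM at some speed cost; a minimal sketch (this is a generic diffusers feature, not specific to this checkpoint):

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline

controlnet = FluxControlNetModel.from_pretrained(
    "promeai/FLUX.1-controlnet-lineart-promeai", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)

# Instead of pipe.to("cuda"): keep submodules on the CPU and move each to the
# GPU only while it runs, trading throughput for much lower peak VRAM.
pipe.enable_model_cpu_offload()
```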