---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
pipeline_tag: text-to-image
tags:
- Stable Diffusion
- image-generation
- Flux
- diffusers
- controlnet
---
This repository provides an IP-Adapter checkpoint for the FLUX.1-dev model by Black Forest Labs.

See our GitHub repository for ComfyUI workflows.
## Models

IP-Adapter is trained at 512x512 resolution for 50k steps and at 1024x1024 resolution for 25k steps, and works at both 512x512 and 1024x1024. We release the v1 version, an improved and more realistic version that can be used directly in ComfyUI!

Please see our ComfyUI custom nodes installation guide.
## Examples

See example results from our models below. Some generation results with the corresponding input images are also provided under "Files and versions".
## Inference

To try our models, you have two options (a diffusers-based sketch also follows this list):
- Use main.py from our official repo
- Use our custom nodes for ComfyUI and test them with the provided workflows (check out the /workflows folder)
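If you prefer to stay in plain Python, recent versions of diffusers also support Flux IP-Adapters. The snippet below is a minimal, unofficial sketch rather than our reference pipeline; the repo id, weight filename, and CLIP image encoder id are assumptions, so check the diffusers documentation and the files in this repository for the exact names.

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

# Load the base FLUX.1-dev pipeline (requires accepting its license on the Hub).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the IP-Adapter. The repo id, weight filename, and image encoder id
# below are assumptions; replace them with the actual names from this repository.
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)

# Reference image that conditions the generation.
ip_image = load_image("statue.jpg")

image = pipe(
    prompt="a photo of a statue wearing sunglasses",
    ip_adapter_image=ip_image,
    width=1024,
    height=1024,
    num_inference_steps=25,
    guidance_scale=3.5,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("result.png")
```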
### Instructions for ComfyUI

- Go to `ComfyUI/custom_nodes`.
- Clone x-flux-comfyui, so the path is `ComfyUI/custom_nodes/x-flux-comfyui/*`, where `*` is all the files in that repo.
- Go to `ComfyUI/custom_nodes/x-flux-comfyui/` and run `python setup.py`.
- Update x-flux-comfyui with `git pull`, or reinstall it.
- Download the CLIP-L `model.safetensors` from OpenAI ViT CLIP Large and put it into `ComfyUI/models/clip_vision/*`.
- Download our IP-Adapter from Hugging Face and put it into `ComfyUI/models/xlabs/ipadapters/*` (see the download sketch after this list).
- Use the `Flux Load IPAdapter` and `Apply Flux IPAdapter` nodes, choose the right CLIP model, and enjoy your generations.
- You can find an example workflow in the `workflows` folder of this repo.
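For the two download steps, here is a small Python sketch using `huggingface_hub`. It assumes a default ComfyUI folder layout and guesses the checkpoint filename; check the "Files and versions" tab of this repository for the actual file name.

```python
# Sketch of the two download steps above using huggingface_hub.
# Paths assume a default ComfyUI layout; the IP-Adapter filename is a guess,
# so check "Files and versions" in this repo for the real one.
from huggingface_hub import hf_hub_download

# CLIP-L vision encoder used by the Flux IP-Adapter nodes.
hf_hub_download(
    repo_id="openai/clip-vit-large-patch14",
    filename="model.safetensors",
    local_dir="ComfyUI/models/clip_vision",
)

# XLabs Flux IP-Adapter checkpoint (repo id and filename are assumptions).
hf_hub_download(
    repo_id="XLabs-AI/flux-ip-adapter",
    filename="ip_adapter.safetensors",
    local_dir="ComfyUI/models/xlabs/ipadapters",
)
```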
## Limitations

The IP-Adapter is currently in beta. We do not guarantee that you will get a good result right away; it may take several attempts to find a prompt and settings that work well.
## License

Our weights fall under the FLUX.1 [dev] Non-Commercial License.