---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: in the style of TOK
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: a koi fish. in the style of TOK
output:
url: images/example_3buowzz1z.png
- text: a panda on eucalyptus branch, sleeping. in the style of TOK
output:
url: images/example_7wb9y3owz.png
- text: a boy snowboarding. in the style of TOK
output:
url: images/example_0s2fzx9kd.png
- text: a man tennis player playing tennis on tennis court. in the style of TOK
output:
url: images/example_u2i3ke60c.png
---
# sweet-brush
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `in the style of TOK` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/fffiloni/sweet-brush/tree/main) them in the Files & versions tab.
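You can also fetch the weights programmatically, for example to drop them into a ComfyUI or AUTOMATIC1111 LoRA folder. A minimal sketch using the `huggingface_hub` client is shown below; the `local_dir` value is only a placeholder and should point at your UI's LoRA directory:

```py
from huggingface_hub import hf_hub_download

# Download the LoRA safetensors file from this repository.
# Point local_dir at your UI's LoRA folder, e.g. ComfyUI/models/loras
# or stable-diffusion-webui/models/Lora (placeholder path below).
lora_path = hf_hub_download(
    repo_id="fffiloni/sweet-brush",
    filename="sweet-brush.safetensors",
    local_dir="./loras",
)
print(lora_path)
```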
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
# Attach the sweet-brush LoRA weights from this repository
pipeline.load_lora_weights('fffiloni/sweet-brush', weight_name='sweet-brush.safetensors')
# Generate an image; note the trigger phrase "in the style of TOK" appended to the prompt
image = pipeline('A person in a bustling cafe. in the style of TOK').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
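For example, one pattern that documentation covers is fusing a LoRA into the base weights at a chosen strength. The sketch below assumes a recent diffusers version with the PEFT backend; the `lora_scale=0.8` value is only illustrative:

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
pipeline.load_lora_weights('fffiloni/sweet-brush', weight_name='sweet-brush.safetensors')

# Fuse the LoRA into the base weights at a reduced strength (0.8 is an example value);
# pipeline.unfuse_lora() undoes the fusion if you want the base model back.
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('A person in a bustling cafe. in the style of TOK').images[0]
image.save("my_image_fused.png")
```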