confidence_be_closed_front
Model trained with AI Toolkit by Ostris

Sample prompts:
- back of beige conf_be colostomy pouch on the forest floor.
- front of white conf_be colostomy pouch on an office desk
- a black conf_be colostomy pouch and a beige conf_be colostomy pouch in an animated style
- Front of a black conf_be colostomy pouch displayed on a 1960s television system
- An image of a white conf_be colostomy pouch on a billboard in Times Square in New York City in the evening
Trigger words
The trigger word is "conf_be colostomy pouch"; include it in your prompt to activate this LoRA.
Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
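If you prefer to fetch the .safetensors file programmatically instead of through the web UI, the sketch below uses the huggingface_hub client; the lora_path variable name is just an illustrative choice.

```python
from huggingface_hub import hf_hub_download

# Download the LoRA weights from the Hub; returns the local file path.
lora_path = hf_hub_download(
    repo_id='salts-models/confidence-be-closed-front',
    filename='confidence_be_closed_front.safetensors',
)
print(lora_path)
```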
Use it with the 🧨 diffusers library
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
# Attach this LoRA to the pipeline.
pipeline.load_lora_weights('salts-models/confidence-be-closed-front', weight_name='confidence_be_closed_front.safetensors')
# Generate an image; the prompt contains the trigger phrase.
image = pipeline('back of beige conf_be colostomy pouch on the forest floor.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
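As a rough sketch of weighting, the example below loads the LoRA under a named adapter and scales down its influence; the adapter name "confidence_be" and the 0.8 scale are illustrative choices for this sketch, not values prescribed by this model.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
# Register the LoRA under an adapter name so its strength can be adjusted later.
# "confidence_be" is an arbitrary label chosen for this sketch.
pipeline.load_lora_weights('salts-models/confidence-be-closed-front', weight_name='confidence_be_closed_front.safetensors', adapter_name='confidence_be')

# Apply the LoRA at 0.8 strength instead of the default 1.0.
pipeline.set_adapters(['confidence_be'], adapter_weights=[0.8])
image = pipeline('front of white conf_be colostomy pouch on an office desk').images[0]
image.save('weighted.png')
```

For merging the LoRA into the base weights, diffusers also provides pipeline.fuse_lora() and pipeline.unfuse_lora(); see the documentation linked above for details.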
Base model: black-forest-labs/FLUX.1-dev