Upload 18 files
- README.md +74 -0
- Screenshot 2024-09-08 101527.png +0 -0
- aodai (1).png +0 -0
- aodai (10).png +0 -0
- aodai (11).png +0 -0
- aodai (12).png +0 -0
- aodai (13).png +0 -0
- aodai (14).png +0 -0
- aodai (2).png +0 -0
- aodai (3).png +0 -0
- aodai (4).png +0 -0
- aodai (5).png +0 -0
- aodai (6).png +0 -0
- aodai (7).png +0 -0
- aodai (8).png +0 -0
- aodai (9).png +0 -0
- train_lora_flux.yaml +90 -0
- workflow_lora_aodai_v2.json +913 -0
README.md
ADDED
@@ -0,0 +1,74 @@
---
tags:
- text-to-image
- lora
- diffusers
- flux
base_model: black-forest-labs/FLUX.1-dev
license: creativeml-openrail-m
library_name: diffusers
---

# Flux.1-Dev LoRA Adapter Trained on Ao Dai

LoRA adapter for [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), trained on 22 pictures of young women wearing the traditional Vietnamese dress `ao dai`, using [ai-toolkit](https://github.com/ostris/ai-toolkit/tree/main).

# Model Details

**Some Amusing Examples**
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(1).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(3).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(4).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(5).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(6).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(7).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(8).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(9).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(10).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(11).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(12).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(13).png" width=576>
<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/aodai%20(14).png" width=576>

The LoRA was trained with the trigger phrase `a0da1`.

Full training config available at [train_lora_flux.yaml](./train_lora_flux.yaml)

# Usage

With the diffusers package

*Note: FLUX uses ~70 GB of VRAM when loaded directly with diffusers*

*Note: it is recommended to load the LoRA at ~70% scale for best results*

```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev")
pipeline.load_lora_weights("dtthanh/aodai_v2", weight_name="aodai_v2.safetensors")
pipeline.to("cuda")

prompt = "a photo of a young Asian woman dressed in traditional Vietnamese dress called a0da1."

out = pipeline(
    prompt=prompt,
    guidance_scale=3.5,
    num_inference_steps=20,
    joint_attention_kwargs={"scale": 0.7},  # apply the LoRA at ~70% strength
).images[0]

out.save("aodai.png")
```
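If ~70 GB of VRAM is not available, a lower-memory variant is possible. The sketch below is not part of the original card; it assumes a recent diffusers release where the FLUX pipeline supports `torch_dtype=torch.bfloat16` and model CPU offload:

```python
import torch
from diffusers import DiffusionPipeline

# Same pipeline as above, but loaded in bfloat16 with model-level CPU offload:
# each submodule is moved to the GPU only while it runs, cutting peak VRAM
# substantially at the cost of speed.
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipeline.load_lora_weights("dtthanh/aodai_v2", weight_name="aodai_v2.safetensors")
pipeline.enable_model_cpu_offload()  # use this instead of pipeline.to("cuda")

out = pipeline(
    prompt="a photo of a young Asian woman dressed in traditional Vietnamese dress called a0da1.",
    guidance_scale=3.5,
    num_inference_steps=20,
    joint_attention_kwargs={"scale": 0.7},  # ~70% LoRA strength, per the note above
).images[0]
out.save("aodai.png")
```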

ComfyUI Workflow

trigger: `a0da1`

prompt: "a photo of a young [ethnicity] woman dressed in traditional Vietnamese dress called a0da1."

<img src="https://huggingface.co/dtthanh/aodai_v2/resolve/main/Screenshot%202024-09-08%20101527.png" width=800>

File available at [workflow_lora_aodai_v2.json](workflow_lora_aodai_v2.json)
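For local ComfyUI use, the workflow and the LoRA weights can be fetched with `huggingface_hub`. A minimal sketch, assuming the weight file is named `aodai_v2.safetensors` as in the diffusers snippet above (that file is not part of this commit, so confirm it exists in the repo):

```python
from huggingface_hub import hf_hub_download

# Download the ComfyUI workflow added in this commit and the LoRA weights
# referenced in the usage example above (weight filename is assumed).
workflow_path = hf_hub_download("dtthanh/aodai_v2", "workflow_lora_aodai_v2.json")
lora_path = hf_hub_download("dtthanh/aodai_v2", "aodai_v2.safetensors")
print(workflow_path, lora_path)
```

Place the LoRA file in ComfyUI's `models/loras` directory and load the workflow JSON from the UI.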
# Additional Details

Please see the base model page [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) for all details on appropriate usage, licensing, and more.
Screenshot 2024-09-08 101527.png
ADDED
aodai (1).png
ADDED
aodai (10).png
ADDED
aodai (11).png
ADDED
aodai (12).png
ADDED
aodai (13).png
ADDED
aodai (14).png
ADDED
aodai (2).png
ADDED
aodai (3).png
ADDED
aodai (4).png
ADDED
aodai (5).png
ADDED
aodai (6).png
ADDED
aodai (7).png
ADDED
aodai (8).png
ADDED
aodai (9).png
ADDED
train_lora_flux.yaml
ADDED
@@ -0,0 +1,90 @@
---
job: extension
config:
  # this name will be the folder and filename name
  name: "aodai_v1"
  process:
    - type: 'sd_trainer'
      # root folder to save training sessions/samples/weights
      training_folder: "output"
      # uncomment to see performance stats in the terminal every N steps
      # performance_log_every: 1000
      device: cuda:0
      # if a trigger word is specified, it will be added to captions of training data if it does not already exist
      # alternatively, in your captions you can add [trigger] and it will be replaced with the trigger word
      trigger_word: "a0da1"
      network:
        type: "lora"
        linear: 32
        linear_alpha: 32
      save:
        dtype: float16 # precision to save
        save_every: 250 # save every this many steps
        max_step_saves_to_keep: 4 # how many intermittent saves to keep
        push_to_hub: false # change this to true to push your trained model to Hugging Face.
        # You can either set up a HF_TOKEN env variable or you'll be prompted to log in
        # hf_repo_id: your-username/your-model-slug
        # hf_private: true # whether the repo is private or public
      datasets:
        # datasets are a folder of images. captions need to be txt files with the same name as the image
        # for instance image2.jpg and image2.txt. Only jpg, jpeg, and png are supported currently
        # images will automatically be resized and bucketed into the resolution specified
        # on windows, escape back slashes with another backslash so
        # "C:\\path\\to\\images\\folder"
        - folder_path: "./aodai"
          caption_ext: "txt"
          caption_dropout_rate: 0.05 # will drop out the caption 5% of the time
          shuffle_tokens: false # shuffle caption order, split by commas
          cache_latents_to_disk: true # leave this true unless you know what you're doing
          resolution: [ 512, 768, 1024 ] # flux enjoys multiple resolutions
      train:
        batch_size: 1
        steps: 1000 # total number of steps to train; 500 - 4000 is a good range
        gradient_accumulation_steps: 1
        train_unet: true
        train_text_encoder: false # probably won't work with flux
        gradient_checkpointing: true # need this on unless you have a ton of vram
        noise_scheduler: "flowmatch" # for training only
        optimizer: "adamw8bit"
        lr: 1e-4
        # uncomment this to skip the pre-training sample
        # skip_first_sample: true
        # uncomment to completely disable sampling
        # disable_sampling: true
        # uncomment to use new bell curved weighting. Experimental but may produce better results
        # linear_timesteps: true

        # ema will smooth out learning, but could slow it down. Recommended to leave on.
        ema_config:
          use_ema: true
          ema_decay: 0.99

        # will probably need this if gpu supports it for flux, other dtypes may not work correctly
        dtype: bf16
      model:
        # huggingface model name or path
        name_or_path: "black-forest-labs/FLUX.1-dev"
        is_flux: true
        quantize: true # run 8bit mixed precision
        # low_vram: true # uncomment this if the GPU is connected to your monitors. It will use less vram to quantize, but is slower.
      sample:
        sampler: "flowmatch" # must match train.noise_scheduler
        sample_every: 250 # sample every this many steps
        width: 576
        height: 1024
        prompts:
          # you can add [trigger] to the prompts here and it will be replaced with the trigger word
          # - "[trigger] holding a sign that says 'I LOVE PROMPTS!'"
          - "a photo of a young woman wearing a traditional Vietnamese dress called [trigger]"
          - "a photo of a young Asian woman wearing a traditional Vietnamese dress called [trigger]"
          - "a photo of a young Vietnamese woman wearing a traditional Vietnamese dress called [trigger]"
          - "a photo of a young Western woman wearing a traditional Vietnamese dress called [trigger]"
        neg: "" # not used on flux
        seed: 42
        walk_seed: true
        guidance_scale: 4
        sample_steps: 20
# you can add any additional meta info here. [name] is replaced with config name at top
meta:
  name: "[name]"
  version: '1.0'
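ai-toolkit is normally launched by pointing its run script at a config like the one above. A minimal sketch for sanity-checking the key fields before starting a run, assuming PyYAML is installed and the file is saved as `train_lora_flux.yaml`:

```python
import yaml

# Load the training config and print the fields most likely to need editing.
with open("train_lora_flux.yaml") as f:
    cfg = yaml.safe_load(f)

proc = cfg["config"]["process"][0]
print("trigger word:", proc["trigger_word"])
print("dataset folder:", proc["datasets"][0]["folder_path"])
print("steps:", proc["train"]["steps"], "lr:", proc["train"]["lr"])
```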
workflow_lora_aodai_v2.json
ADDED
@@ -0,0 +1,913 @@
{
  "last_node_id": 69,
  "last_link_id": 90,
  "nodes": [
    {
      "id": 8,
      "type": "VAEDecode",
      "pos": {"0": 1083, "1": 19},
      "size": {"0": 210, "1": 46},
      "flags": {},
      "order": 14,
      "mode": 0,
      "inputs": [
        {"name": "samples", "type": "LATENT", "link": 24},
        {"name": "vae", "type": "VAE", "link": 12}
      ],
      "outputs": [
        {"name": "IMAGE", "type": "IMAGE", "links": [60], "slot_index": 0}
      ],
      "properties": {"Node name for S&R": "VAEDecode"}
    },
    {
      "id": 22,
      "type": "BasicGuider",
      "pos": {"0": 783, "1": 212},
      "size": {"0": 241.79998779296875, "1": 46},
      "flags": {},
      "order": 12,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 85, "slot_index": 0},
        {"name": "conditioning", "type": "CONDITIONING", "link": 74, "slot_index": 1}
      ],
      "outputs": [
        {"name": "GUIDER", "type": "GUIDER", "links": [30], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "BasicGuider"}
    },
    {
      "id": 16,
      "type": "KSamplerSelect",
      "pos": {"0": 1110, "1": 648},
      "size": {"0": 315, "1": 58},
      "flags": {},
      "order": 0,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "SAMPLER", "type": "SAMPLER", "links": [19], "shape": 3}
      ],
      "properties": {"Node name for S&R": "KSamplerSelect"},
      "widgets_values": ["euler"]
    },
    {
      "id": 25,
      "type": "RandomNoise",
      "pos": {"0": 1128, "1": 353},
      "size": {"0": 315, "1": 82},
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "NOISE", "type": "NOISE", "links": [37], "shape": 3}
      ],
      "properties": {"Node name for S&R": "RandomNoise"},
      "widgets_values": [983245457248207, "randomize"]
    },
    {
      "id": 11,
      "type": "DualCLIPLoader",
      "pos": {"0": -473, "1": 356},
      "size": {"0": 315, "1": 106},
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "CLIP", "type": "CLIP", "links": [83], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "DualCLIPLoader"},
      "widgets_values": ["t5xxl_fp8_e4m3fn.safetensors", "clip_l.safetensors", "flux"]
    },
    {
      "id": 10,
      "type": "VAELoader",
      "pos": {"0": 611, "1": -42},
      "size": {"0": 315, "1": 58},
      "flags": {},
      "order": 3,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "VAE", "type": "VAE", "links": [12, 64], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "VAELoader"},
      "widgets_values": ["ae.safetensors"]
    },
    {
      "id": 41,
      "type": "UpscaleModelLoader",
      "pos": {"0": 1188, "1": -183},
      "size": {"0": 315, "1": 58},
      "flags": {},
      "order": 4,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "UPSCALE_MODEL", "type": "UPSCALE_MODEL", "links": [65], "shape": 3}
      ],
      "properties": {"Node name for S&R": "UpscaleModelLoader"},
      "widgets_values": ["4x_foolhardy_Remacri.pth"]
    },
    {
      "id": 39,
      "type": "UltimateSDUpscale",
      "pos": {"0": 1462, "1": -59},
      "size": {"0": 315, "1": 614},
      "flags": {},
      "order": 15,
      "mode": 4,
      "inputs": [
        {"name": "image", "type": "IMAGE", "link": 60},
        {"name": "model", "type": "MODEL", "link": 61},
        {"name": "positive", "type": "CONDITIONING", "link": null, "slot_index": 2},
        {"name": "negative", "type": "CONDITIONING", "link": null, "slot_index": 3},
        {"name": "vae", "type": "VAE", "link": 64, "slot_index": 4},
        {"name": "upscale_model", "type": "UPSCALE_MODEL", "link": 65, "slot_index": 5}
      ],
      "outputs": [
        {"name": "IMAGE", "type": "IMAGE", "links": [66], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "UltimateSDUpscale"},
      "widgets_values": [2, 439606151521400, "randomize", 20, 8, "euler", "simple", 0.2, "Linear", 512, 512, 8, 32, "None", 0, 64, 8, 16, true, false]
    },
    {
      "id": 66,
      "type": "Note",
      "pos": {"0": -145, "1": 971},
      "size": {"0": 445.3786315917969, "1": 251.9473419189453},
      "flags": {},
      "order": 5,
      "mode": 0,
      "inputs": [],
      "outputs": [],
      "properties": {"text": ""},
      "widgets_values": ["young man, actor, intense expression, muddy clothes, t-shirt, jeans shorts, gym setting, dirt on body, hands clasped in front of body, brooding pose \n\nThe image shows a young man sitting on a wooden bench in a dark room. He is wearing a dark green t-shirt and grey pants. His hands are covered in mud and he appears to be deep in thought. His hair is messy and unkempt, and he has a serious expression on his face. The background is blurred, but it seems like he is in a workshop or garage. The overall mood of the image is somber and contemplative.\n"],
      "color": "#432",
      "bgcolor": "#653"
    },
    {
      "id": 13,
      "type": "SamplerCustomAdvanced",
      "pos": {"0": 1099, "1": 171},
      "size": {"0": 355.20001220703125, "1": 106},
      "flags": {},
      "order": 13,
      "mode": 0,
      "inputs": [
        {"name": "noise", "type": "NOISE", "link": 37, "slot_index": 0},
        {"name": "guider", "type": "GUIDER", "link": 30, "slot_index": 1},
        {"name": "sampler", "type": "SAMPLER", "link": 19, "slot_index": 2},
        {"name": "sigmas", "type": "SIGMAS", "link": 20, "slot_index": 3},
        {"name": "latent_image", "type": "LATENT", "link": 89, "slot_index": 4}
      ],
      "outputs": [
        {"name": "output", "type": "LATENT", "links": [24], "slot_index": 0, "shape": 3},
        {"name": "denoised_output", "type": "LATENT", "links": null, "shape": 3}
      ],
      "properties": {"Node name for S&R": "SamplerCustomAdvanced"}
    },
    {
      "id": 68,
      "type": "Reroute",
      "pos": {"0": 1541.0958251953125, "1": 690.804443359375},
      "size": [75, 26],
      "flags": {},
      "order": 8,
      "mode": 0,
      "inputs": [
        {"name": "", "type": "*", "link": 90}
      ],
      "outputs": [
        {"name": "", "type": "LATENT", "links": [89]}
      ],
      "properties": {"showOutputText": false, "horizontal": false}
    },
    {
      "id": 17,
      "type": "BasicScheduler",
      "pos": {"0": 1122, "1": 485},
      "size": {"0": 315, "1": 106},
      "flags": {},
      "order": 9,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 38, "slot_index": 0}
      ],
      "outputs": [
        {"name": "SIGMAS", "type": "SIGMAS", "links": [20], "shape": 3}
      ],
      "properties": {"Node name for S&R": "BasicScheduler"},
      "widgets_values": ["simple", 28, 1]
    },
    {
      "id": 67,
      "type": "Empty Latent by Ratio (WLSH)",
      "pos": {"0": 1112, "1": 783},
      "size": {"0": 352.79998779296875, "1": 170},
      "flags": {},
      "order": 6,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "latent", "type": "LATENT", "links": [90], "shape": 3},
        {"name": "width", "type": "INT", "links": null, "shape": 3},
        {"name": "height", "type": "INT", "links": null, "shape": 3}
      ],
      "properties": {"Node name for S&R": "Empty Latent by Ratio (WLSH)"},
      "widgets_values": ["16:10", "portrait", 576, 2]
    },
    {
      "id": 12,
      "type": "UNETLoader",
      "pos": {"0": -493, "1": 204},
      "size": {"0": 315, "1": 82},
      "flags": {},
      "order": 7,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "MODEL", "type": "MODEL", "links": [38, 61, 82], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "UNETLoader"},
      "widgets_values": ["flux1-dev.safetensors", "default"]
    },
    {
      "id": 57,
      "type": "LoraLoader",
      "pos": {"0": -132, "1": 706},
      "size": {"0": 394.9391174316406, "1": 130.8064422607422},
      "flags": {},
      "order": 10,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 82},
        {"name": "clip", "type": "CLIP", "link": 83}
      ],
      "outputs": [
        {"name": "MODEL", "type": "MODEL", "links": [85], "slot_index": 0, "shape": 3},
        {"name": "CLIP", "type": "CLIP", "links": [87], "slot_index": 1, "shape": 3}
      ],
      "properties": {"Node name for S&R": "LoraLoader"},
      "widgets_values": ["aodai_v2_000001250.safetensors", 0.8, 1]
    },
    {
      "id": 43,
      "type": "CLIPTextEncodeFlux",
      "pos": {"0": 2126, "1": 576},
      "size": {"0": 432.8548278808594, "1": 348.077392578125},
      "flags": {},
      "order": 11,
      "mode": 0,
      "inputs": [
        {"name": "clip", "type": "CLIP", "link": 87}
      ],
      "outputs": [
        {"name": "CONDITIONING", "type": "CONDITIONING", "links": [74], "slot_index": 0, "shape": 3}
      ],
      "properties": {"Node name for S&R": "CLIPTextEncodeFlux"},
      "widgets_values": [
        "a photo of a young woman that is dressed in traditional Vietnamese dress called a0da1 with gold embroidery on the sleeves and neckline, looking to the left. She has long, straight, dark brown hair cascading down her back, she stands on a tiled floor in a grand, ornate room with classical columns and intricate carvings, suggesting an opulent or historical setting, The background features a large window allowing natural light to flood in, casting soft shadows and highlighting the contours of her body, the floor is tiled in a rich, reddish-brown color.",
        "a photo of a young woman that is dressed in traditional Vietnamese dress called a0da1 with gold embroidery on the sleeves and neckline, looking to the left. She has long, straight, dark brown hair cascading down her back, she stands on a tiled floor in a grand, ornate room with classical columns and intricate carvings, suggesting an opulent or historical setting, The background features a large window allowing natural light to flood in, casting soft shadows and highlighting the contours of her body, the floor is tiled in a rich, reddish-brown color.",
        3.5
      ],
      "color": "#232",
      "bgcolor": "#353"
    },
    {
      "id": 29,
      "type": "Image Save",
      "pos": {"0": 1681, "1": 153},
      "size": {"0": 424.9637756347656, "1": 841.1275634765625},
      "flags": {},
      "order": 16,
      "mode": 0,
      "inputs": [
        {"name": "images", "type": "IMAGE", "link": 66}
      ],
      "outputs": [
        {"name": "images", "type": "IMAGE", "links": null, "shape": 3},
        {"name": "files", "type": "STRING", "links": null, "shape": 3}
      ],
      "properties": {"Node name for S&R": "Image Save"},
      "widgets_values": ["flux[time(%Y-%m-%d)]", "ComfyUI", "_", 4, "false", "png", 300, 100, "true", "false", "false", "false", "true", "true", "true"]
    }
  ],
  "links": [
    [12, 10, 0, 8, 1, "VAE"],
    [19, 16, 0, 13, 2, "SAMPLER"],
    [20, 17, 0, 13, 3, "SIGMAS"],
    [24, 13, 0, 8, 0, "LATENT"],
    [30, 22, 0, 13, 1, "GUIDER"],
    [37, 25, 0, 13, 0, "NOISE"],
    [38, 12, 0, 17, 0, "MODEL"],
    [60, 8, 0, 39, 0, "IMAGE"],
    [61, 12, 0, 39, 1, "MODEL"],
    [64, 10, 0, 39, 4, "VAE"],
    [65, 41, 0, 39, 5, "UPSCALE_MODEL"],
    [66, 39, 0, 29, 0, "IMAGE"],
    [74, 43, 0, 22, 1, "CONDITIONING"],
    [82, 12, 0, 57, 0, "MODEL"],
    [83, 11, 0, 57, 1, "CLIP"],
    [85, 57, 0, 22, 0, "MODEL"],
    [87, 57, 1, 43, 0, "CLIP"],
    [89, 68, 0, 13, 4, "LATENT"],
    [90, 67, 0, 68, 0, "*"]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "ds": {
      "scale": 1,
      "offset": [-1099.5907249450684, -557.818227594549]
    }
  },
  "version": 0.4
}
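A minimal sketch for inspecting the workflow before loading it into ComfyUI, e.g. to confirm which LoRA checkpoint and strengths the `LoraLoader` node expects (uses only the standard library; node type names come from the JSON above):

```python
import json

with open("workflow_lora_aodai_v2.json") as f:
    wf = json.load(f)

# The workflow stores the LoRA filename and the model/clip strengths in the
# LoraLoader node's widgets_values.
for node in wf["nodes"]:
    if node["type"] == "LoraLoader":
        lora_name, model_strength, clip_strength = node["widgets_values"]
        print(lora_name, model_strength, clip_strength)
```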