michaeltrs committed on
Commit
ad05ce1
1 Parent(s): cb25637

adding code

Files changed (8)
  1. .gitignore +4 -0
  2. README.md +74 -0
  3. download.py +7 -0
  4. finetune_lora.py +975 -0
  5. gen_w_lora.py +0 -65
  6. generate.py +41 -0
  7. main.py +0 -14
  8. requirements.txt +7 -0
.gitignore ADDED
@@ -0,0 +1,4 @@
1
+ *.png
2
+ **/*.png
3
+ **/**/*.png
4
+ .idea
README.md ADDED
@@ -0,0 +1,74 @@
1
+ # Text2Face-LoRa
2
+ ![Python version](https://img.shields.io/badge/python-3.8+-blue.svg)
3
+ ![License](https://img.shields.io/badge/license-MIT-green)
4
+
5
+ This repository provides the code for a LoRA-finetuned version of the Stable Diffusion 2.1 model, specifically optimized
6
+ for generating face images. The package includes both training and inference capabilities, along with a pretrained model
7
+ and the synthetic annotations used for finetuning.
8
+
9
+ ## Features
10
+ - **Finetuning Script:** `finetune_lora.py` applies LoRA adapters to both the UNet denoiser and the text encoder of Stable Diffusion.
11
+ - **Inference Script:** `generate.py` is a ready-to-use script for generating images with the pretrained model.
12
+ - **Pretrained Model:** `download.py` downloads our pretrained LoRA weights from Hugging Face.
13
+
14
+ ## Environment Setup
15
+ Set up a conda environment to run the model using the following commands:
16
+ ```bash
17
+ conda create -n text2face python=3.8
18
+ conda activate text2face
19
+
20
+ # Install requirements
21
+ pip install -r requirements.txt
22
+ ```
23
+
24
+ ## Checkpoints
25
+ You can download the pretrained LoRA weights for the diffusion model and text encoder using the provided Python script `download.py`, which wraps the following `huggingface_hub` call:
26
+
27
+ ```python
28
+ from huggingface_hub import hf_hub_download
29
+
30
+ hf_hub_download(repo_id="michaeltrs/text2face", filename="checkpoints/lora30k/pytorch_lora_weights.safetensors", local_dir="checkpoints")
31
+ ```
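+
+ Running `python download.py` does the same and saves the weights under the local `checkpoints/` directory.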
32
+
33
+ ## Inference
34
+ Generate images using the `generate.py` script, which loads the SD 2.1 foundation model from Hugging Face and applies the LoRA weights.
35
+ Generation is driven by a prompt and, optionally, a negative prompt, as in the sketch below.
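+
+ A minimal sketch using the `Model` wrapper from `generate.py` (it assumes a CUDA device, that you run from the repository root, and that the LoRA weights are available at `checkpoints/lora30k`; the prompts are only examples):
+
+ ```python
+ from generate import Model
+
+ # Loads SD 2.1 in fp16 and injects the LoRA weights into the UNet and text encoder
+ model = Model(checkpoint="checkpoints/lora30k")
+
+ prompt = "A happy 25 year old woman with blond hair and green eyes."
+ negative = "blurry, low resolution"  # optional negative prompt, may be left empty
+
+ # Saves the image next to the checkpoint (or pass savedir=...) and returns it
+ image = model.generate(prompt, negprompt=negative, steps=50, seed=42)
+ ```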
36
+
37
+ ## Finetuning
38
+ Use `finetune_lora.py` to finetune a Stable Diffusion model with LoRA adapters for the UNet denoiser and the text encoder.
39
+ Example command for training:
40
+
41
+ ```bash
42
+ accelerate config
43
+ accelerate config default
44
+
45
+ export MODEL_NAME="stabilityai/stable-diffusion-2-1"
46
+ export TRAIN_DIR="<root dir for training data>"
47
+
48
+ accelerate launch finetune_lora.py --pretrained_model_name_or_path=$MODEL_NAME \
49
+ --train_data_dir=$TRAIN_DIR \
50
+ --train_text_encoder \
51
+ --checkpointing_steps 5000 \
52
+ --resolution=768 \
53
+ --center_crop \
54
+ --train_batch_size=4 \
55
+ --num_train_epochs 20 \
56
+ --gradient_accumulation_steps=1 \
57
+ --gradient_checkpointing \
58
+ --num_validation_images 5 \
59
+ --learning_rate=1e-05 \
60
+ --learning_rate_text_encoder=1e-05 \
61
+ --max_grad_norm=1 \
62
+ --rank 8 \
63
+ --text_encoder_rank 8 \
64
+ --lr_scheduler="constant" \
65
+ --lr_warmup_steps=0 \
66
+ --output_dir="<output directory for trained model>" \
67
+ --resume_from_checkpoint "latest" \
68
+ --validation_prompts "A young Latina woman, around 27 years old, with long hair and pale skin, expressing a mix of happiness and neutral emotions. She has fully open eyes and arched eyebrows." "The person is a 44-year-old Asian male with gray hair and a receding hairline. He has a big nose, closed mouth and is feeling a mix of anger and sadness." "A Latino Hispanic male, 22 years old, with straight hair, an oval face, and eyes fully open. His emotion is sad and partly neutral." "A white male, 28 years old, with a neutral emotion, sideburns, pale skin, little hair, an attractive appearance, a 5 o'clock shadow, and pointy nose." "A young, black, female individual with an oval face and big eyes, with a happy and partly surprised expression."
69
+ ```
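+
+ Every `--checkpointing_steps` optimizer updates, the script saves a checkpoint to `<output_dir>/checkpoint-<step>` containing the accelerator state and the LoRA weights (`pytorch_lora_weights.safetensors`); when `--validation_prompts` is set, a grid of validation images and the prompts used to generate them are written to the same folder.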
70
+
71
+ ## Datasets
72
+ Details on the dataset format and preparation will be available soon.
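+
+ In the meantime, `finetune_lora.py` loads `--train_data_dir` through the 🤗 Datasets `imagefolder` builder, which expects a `metadata.jsonl` file mapping each image to a caption (column `text` by default, configurable via `--caption_column`). A purely illustrative sketch of producing such a file (the file names and captions below are hypothetical):
+
+ ```python
+ import json
+ import os
+
+ # Hypothetical (image file, caption) pairs; the images live in the same folder
+ records = [
+     {"file_name": "00001.png", "text": "A young woman with long hair and pale skin, smiling."},
+     {"file_name": "00002.png", "text": "A 44-year-old man with gray hair and a receding hairline."},
+ ]
+
+ os.makedirs("train_data", exist_ok=True)
+ with open(os.path.join("train_data", "metadata.jsonl"), "w") as f:
+     for record in records:
+         f.write(json.dumps(record) + "\n")  # one JSON object per line
+ ```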
73
+
74
+
download.py ADDED
@@ -0,0 +1,7 @@
1
+ from huggingface_hub import hf_hub_download
2
+
3
+ if __name__ == "__main__":
4
+ hf_hub_download(repo_id="michaeltrs/text2face",
5
+ filename="checkpoints/lora30k/pytorch_lora_weights.safetensors",
6
+ local_dir="checkpoints")
7
+
finetune_lora.py ADDED
@@ -0,0 +1,975 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 The HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Fine-tuning script for Stable Diffusion for text2image with support for LoRA.
17
+
18
+ Modified from: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py
19
+ """
20
+
21
+ import argparse
22
+ import logging
23
+ import math
24
+ import os
25
+ import random
26
+ import shutil
27
+ from pathlib import Path
28
+
29
+ import datasets
30
+ import numpy as np
31
+ import torch
32
+ import torch.nn.functional as F
33
+ import torch.utils.checkpoint
34
+ import transformers
35
+ from accelerate import Accelerator
36
+ from accelerate.logging import get_logger
37
+ from accelerate.utils import ProjectConfiguration, set_seed
38
+ from datasets import load_dataset
39
+ from huggingface_hub import create_repo, upload_folder
40
+ from packaging import version
41
+ from peft import LoraConfig
42
+ from peft.utils import get_peft_model_state_dict
43
+ from torchvision import transforms
44
+ from tqdm.auto import tqdm
45
+ from transformers import CLIPTextModel, CLIPTokenizer
46
+
47
+ import diffusers
48
+ from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, StableDiffusionPipeline, UNet2DConditionModel
49
+ from diffusers.optimization import get_scheduler
50
+ from diffusers.training_utils import compute_snr
51
+ from diffusers.utils import check_min_version, is_wandb_available, make_image_grid
52
+ from diffusers.utils.import_utils import is_xformers_available
53
+
54
+ # import traceback
55
+
56
+
57
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
58
+ check_min_version("0.25.0.dev0")
59
+
60
+ logger = get_logger(__name__, log_level="INFO")
61
+
62
+
63
+ # TODO: This function should be removed once training scripts are rewritten in PEFT
64
+ def text_encoder_lora_state_dict(text_encoder):
65
+ state_dict = {}
66
+
67
+ def text_encoder_attn_modules(text_encoder):
68
+ from transformers import CLIPTextModel, CLIPTextModelWithProjection
69
+
70
+ attn_modules = []
71
+
72
+ if isinstance(text_encoder, (CLIPTextModel, CLIPTextModelWithProjection)):
73
+ for i, layer in enumerate(text_encoder.text_model.encoder.layers):
74
+ name = f"text_model.encoder.layers.{i}.self_attn"
75
+ mod = layer.self_attn
76
+ attn_modules.append((name, mod))
77
+
78
+ return attn_modules
79
+
80
+ for name, module in text_encoder_attn_modules(text_encoder):
81
+ for k, v in module.q_proj.lora_linear_layer.state_dict().items():
82
+ state_dict[f"{name}.q_proj.lora_linear_layer.{k}"] = v
83
+
84
+ for k, v in module.k_proj.lora_linear_layer.state_dict().items():
85
+ state_dict[f"{name}.k_proj.lora_linear_layer.{k}"] = v
86
+
87
+ for k, v in module.v_proj.lora_linear_layer.state_dict().items():
88
+ state_dict[f"{name}.v_proj.lora_linear_layer.{k}"] = v
89
+
90
+ for k, v in module.out_proj.lora_linear_layer.state_dict().items():
91
+ state_dict[f"{name}.out_proj.lora_linear_layer.{k}"] = v
92
+
93
+ return state_dict
94
+
95
+
96
+ def save_model_card(args, repo_id: str, images=None, base_model: str = "", dataset_name: str = "", repo_folder=None):
97
+ img_str = ""
98
+ if len(images) > 0:
99
+ image_grid = make_image_grid(images, 1, len(args.validation_prompts))
100
+ image_grid.save(os.path.join(repo_folder, "val_imgs_grid.png"))
101
+ img_str += "![val_imgs_grid](./val_imgs_grid.png)\n"
102
+
103
+ yaml = f"""
104
+ ---
105
+ license: creativeml-openrail-m
106
+ base_model: {base_model}
107
+ tags:
108
+ - stable-diffusion
109
+ - stable-diffusion-diffusers
110
+ - text-to-image
111
+ - diffusers
112
+ - lora
113
+ inference: true
114
+ ---
115
+ """
116
+ model_card = f"""
117
+ # LoRA text2image fine-tuning - {repo_id}
118
+ These are LoRA adaptation weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images below. \n
119
+ {img_str}
120
+ """
121
+ with open(os.path.join(repo_folder, "README.md"), "w") as f:
122
+ f.write(yaml + model_card)
123
+
124
+
125
+ def parse_args():
126
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
127
+ parser.add_argument(
128
+ "--pretrained_model_name_or_path",
129
+ type=str,
130
+ default=None,
131
+ required=True,
132
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
133
+ )
134
+ parser.add_argument(
135
+ "--revision",
136
+ type=str,
137
+ default=None,
138
+ required=False,
139
+ help="Revision of pretrained model identifier from huggingface.co/models.",
140
+ )
141
+ parser.add_argument(
142
+ "--variant",
143
+ type=str,
144
+ default=None,
145
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16",
146
+ )
147
+ parser.add_argument(
148
+ "--dataset_name",
149
+ type=str,
150
+ default=None,
151
+ help=(
152
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
153
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
154
+ " or to a folder containing files that �� Datasets can understand."
155
+ ),
156
+ )
157
+ parser.add_argument(
158
+ "--dataset_config_name",
159
+ type=str,
160
+ default=None,
161
+ help="The config of the Dataset, leave as None if there's only one config.",
162
+ )
163
+ parser.add_argument(
164
+ "--train_data_dir",
165
+ type=str,
166
+ default=None,
167
+ help=(
168
+ "A folder containing the training data. Folder contents must follow the structure described in"
169
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
170
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
171
+ ),
172
+ )
173
+ parser.add_argument(
174
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
175
+ )
176
+ parser.add_argument(
177
+ "--caption_column",
178
+ type=str,
179
+ default="text",
180
+ help="The column of the dataset containing a caption or a list of captions.",
181
+ )
182
+ # parser.add_argument(
183
+ # "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference."
184
+ # )
185
+ parser.add_argument(
186
+ "--validation_prompts",
187
+ type=str,
188
+ default=None,
189
+ nargs="+",
190
+ help=("A set of prompts evaluated every `--validation_epochs` and logged to `--report_to`."),
191
+ )
192
+ parser.add_argument(
193
+ "--num_validation_images",
194
+ type=int,
195
+ default=4,
196
+ help="Number of images that should be generated during validation with `validation_prompt`.",
197
+ )
198
+ parser.add_argument(
199
+ "--validation_epochs",
200
+ type=int,
201
+ default=1,
202
+ help=(
203
+ "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
204
+ " `args.validation_prompt` multiple times: `args.num_validation_images`."
205
+ ),
206
+ )
207
+ parser.add_argument(
208
+ "--max_train_samples",
209
+ type=int,
210
+ default=None,
211
+ help=(
212
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
213
+ "value if set."
214
+ ),
215
+ )
216
+ parser.add_argument(
217
+ "--output_dir",
218
+ type=str,
219
+ default="sd-model-finetuned-lora",
220
+ help="The output directory where the model predictions and checkpoints will be written.",
221
+ )
222
+ parser.add_argument(
223
+ "--cache_dir",
224
+ type=str,
225
+ default=None,
226
+ help="The directory where the downloaded models and datasets will be stored.",
227
+ )
228
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
229
+ parser.add_argument(
230
+ "--resolution",
231
+ type=int,
232
+ default=512,
233
+ help=(
234
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
235
+ " resolution"
236
+ ),
237
+ )
238
+ parser.add_argument(
239
+ "--center_crop",
240
+ default=False,
241
+ action="store_true",
242
+ help=(
243
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
244
+ " cropped. The images will be resized to the resolution first before cropping."
245
+ ),
246
+ )
247
+ parser.add_argument(
248
+ "--random_flip",
249
+ action="store_true",
250
+ help="whether to randomly flip images horizontally",
251
+ )
252
+ parser.add_argument(
253
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
254
+ )
255
+ parser.add_argument("--num_train_epochs", type=int, default=100)
256
+ parser.add_argument(
257
+ "--max_train_steps",
258
+ type=int,
259
+ default=None,
260
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
261
+ )
262
+ parser.add_argument(
263
+ "--gradient_accumulation_steps",
264
+ type=int,
265
+ default=1,
266
+ help="Number of updates steps to accumulate before performing a backward/update pass.",
267
+ )
268
+ parser.add_argument(
269
+ "--gradient_checkpointing",
270
+ action="store_true",
271
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
272
+ )
273
+ parser.add_argument(
274
+ "--learning_rate",
275
+ type=float,
276
+ default=1e-4,
277
+ help="Initial learning rate (after the potential warmup period) to use.",
278
+ )
279
+ parser.add_argument(
280
+ "--learning_rate_text_encoder",
281
+ type=float,
282
+ default=1e-4,
283
+ help="Initial learning rate (after the potential warmup period) to use.",
284
+ )
285
+ parser.add_argument(
286
+ "--scale_lr",
287
+ action="store_true",
288
+ default=False,
289
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
290
+ )
291
+ parser.add_argument(
292
+ "--lr_scheduler",
293
+ type=str,
294
+ default="constant",
295
+ help=(
296
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
297
+ ' "constant", "constant_with_warmup"]'
298
+ ),
299
+ )
300
+ parser.add_argument(
301
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
302
+ )
303
+ parser.add_argument(
304
+ "--snr_gamma",
305
+ type=float,
306
+ default=None,
307
+ help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. "
308
+ "More details here: https://arxiv.org/abs/2303.09556.",
309
+ )
310
+ parser.add_argument(
311
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
312
+ )
313
+ parser.add_argument(
314
+ "--allow_tf32",
315
+ action="store_true",
316
+ help=(
317
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
318
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
319
+ ),
320
+ )
321
+ parser.add_argument(
322
+ "--dataloader_num_workers",
323
+ type=int,
324
+ default=0,
325
+ help=(
326
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
327
+ ),
328
+ )
329
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
330
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
331
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
332
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
333
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
334
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
335
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
336
+ parser.add_argument(
337
+ "--prediction_type",
338
+ type=str,
339
+ default=None,
340
+ help="The prediction_type that shall be used for training. Choose between 'epsilon' or 'v_prediction' or leave `None`. If left to `None` the default prediction type of the scheduler: `noise_scheduler.config.prediciton_type` is chosen.",
341
+ )
342
+ parser.add_argument(
343
+ "--hub_model_id",
344
+ type=str,
345
+ default=None,
346
+ help="The name of the repository to keep in sync with the local `output_dir`.",
347
+ )
348
+ parser.add_argument(
349
+ "--logging_dir",
350
+ type=str,
351
+ default="logs",
352
+ help=(
353
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
354
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
355
+ ),
356
+ )
357
+ parser.add_argument(
358
+ "--mixed_precision",
359
+ type=str,
360
+ default=None,
361
+ choices=["no", "fp16", "bf16"],
362
+ help=(
363
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
364
+ " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
365
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
366
+ ),
367
+ )
368
+ parser.add_argument(
369
+ "--report_to",
370
+ type=str,
371
+ default="tensorboard",
372
+ help=(
373
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
374
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
375
+ ),
376
+ )
377
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
378
+ parser.add_argument(
379
+ "--checkpointing_steps",
380
+ type=int,
381
+ default=500,
382
+ help=(
383
+ "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
384
+ " training using `--resume_from_checkpoint`."
385
+ ),
386
+ )
387
+ parser.add_argument(
388
+ "--checkpoints_total_limit",
389
+ type=int,
390
+ default=None,
391
+ help=("Max number of checkpoints to store."),
392
+ )
393
+ parser.add_argument(
394
+ "--resume_from_checkpoint",
395
+ type=str,
396
+ default=None,
397
+ help=(
398
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
399
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
400
+ ),
401
+ )
402
+ parser.add_argument(
403
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
404
+ )
405
+ parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.")
406
+ parser.add_argument(
407
+ "--rank",
408
+ type=int,
409
+ default=4,
410
+ help=("The dimension of the LoRA update matrices."),
411
+ )
412
+ parser.add_argument(
413
+ "--text_encoder_rank",
414
+ type=int,
415
+ default=4,
416
+ help=("The dimension of the LoRA update matrices for the text encoder."),
417
+ )
418
+ parser.add_argument("--train_text_encoder",
419
+ action="store_true", help="Whether to train the text encoder")
420
+
421
+ args = parser.parse_args()
422
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
423
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
424
+ args.local_rank = env_local_rank
425
+
426
+ # Sanity checks
427
+ if args.dataset_name is None and args.train_data_dir is None:
428
+ raise ValueError("Need either a dataset name or a training folder.")
429
+
430
+ return args
431
+
432
+
433
+ DATASET_NAME_MAPPING = {
434
+ "lambdalabs/pokemon-blip-captions": ("image", "text"),
435
+ }
436
+
437
+
438
+ def main():
439
+ args = parse_args()
440
+ logging_dir = Path(args.output_dir, args.logging_dir)
441
+
442
+ accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
443
+
444
+ accelerator = Accelerator(
445
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
446
+ mixed_precision=args.mixed_precision,
447
+ log_with=args.report_to,
448
+ project_config=accelerator_project_config,
449
+ )
450
+ if args.report_to == "wandb":
451
+ if not is_wandb_available():
452
+ raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
453
+ import wandb
454
+
455
+ # Make one log on every process with the configuration for debugging.
456
+ logging.basicConfig(
457
+ # filename=f'{args.output_dir}/error_logs.txt',
458
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
459
+ datefmt="%m/%d/%Y %H:%M:%S",
460
+ level=logging.ERROR,
461
+ )
462
+ logger.info(accelerator.state, main_process_only=False)
463
+ if accelerator.is_local_main_process:
464
+ datasets.utils.logging.set_verbosity_warning()
465
+ transformers.utils.logging.set_verbosity_warning()
466
+ diffusers.utils.logging.set_verbosity_info()
467
+ else:
468
+ datasets.utils.logging.set_verbosity_error()
469
+ transformers.utils.logging.set_verbosity_error()
470
+ diffusers.utils.logging.set_verbosity_error()
471
+
472
+ # If passed along, set the training seed now.
473
+ if args.seed is not None:
474
+ set_seed(args.seed)
475
+
476
+ # Handle the repository creation
477
+ if accelerator.is_main_process:
478
+ if args.output_dir is not None:
479
+ os.makedirs(args.output_dir, exist_ok=True)
480
+
481
+ if args.push_to_hub:
482
+ repo_id = create_repo(
483
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
484
+ ).repo_id
485
+ # Load scheduler, tokenizer and models.
486
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
487
+ tokenizer = CLIPTokenizer.from_pretrained(
488
+ args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
489
+ )
490
+ text_encoder = CLIPTextModel.from_pretrained(
491
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
492
+ )
493
+ vae = AutoencoderKL.from_pretrained(
494
+ args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant
495
+ )
496
+ unet = UNet2DConditionModel.from_pretrained(
497
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
498
+ )
499
+ # freeze parameters of models to save more memory
500
+ unet.requires_grad_(False)
501
+ vae.requires_grad_(False)
502
+ text_encoder.requires_grad_(False)
503
+
504
+ # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision
505
+ # as these weights are only used for inference, keeping weights in full precision is not required.
506
+ weight_dtype = torch.float32
507
+ if accelerator.mixed_precision == "fp16":
508
+ weight_dtype = torch.float16
509
+ elif accelerator.mixed_precision == "bf16":
510
+ weight_dtype = torch.bfloat16
511
+
512
+ # Freeze the unet parameters before adding adapters
513
+ for param in unet.parameters():
514
+ param.requires_grad_(False)
515
+
516
+ unet_lora_config = LoraConfig(
517
+ r=args.rank,
518
+ init_lora_weights="gaussian",
519
+ target_modules=["to_k", "to_q", "to_v", "to_out.0"]
520
+ )
521
+
522
+ # Freeze the text encoder parameters before adding adapters
523
+ if args.train_text_encoder:
524
+ for param in text_encoder.parameters():
525
+ param.requires_grad_(False)
526
+
527
+ text_encoder_lora_config = LoraConfig(
528
+ r=args.text_encoder_rank,
529
+ init_lora_weights="gaussian",
530
+ target_modules=["q_proj", "v_proj", "k_proj", "out_proj"]
531
+ )
532
+
533
+ # Move unet, vae and text_encoder to device and cast to weight_dtype
534
+ unet.to(accelerator.device, dtype=weight_dtype)
535
+ vae.to(accelerator.device, dtype=weight_dtype)
536
+ text_encoder.to(accelerator.device, dtype=weight_dtype)
537
+
538
+ unet.add_adapter(unet_lora_config)
539
+ if args.train_text_encoder:
540
+ text_encoder.add_adapter(text_encoder_lora_config)
541
+
542
+ if args.enable_xformers_memory_efficient_attention:
543
+ if is_xformers_available():
544
+ import xformers
545
+
546
+ xformers_version = version.parse(xformers.__version__)
547
+ if xformers_version == version.parse("0.0.16"):
548
+ logger.warn(
549
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
550
+ )
551
+ unet.enable_xformers_memory_efficient_attention()
552
+ text_encoder.enable_xformers_memory_efficient_attention()
553
+
554
+ else:
555
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
556
+
557
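+ # Only the LoRA adapter parameters require gradients; collect them for the optimizer and for gradient clipping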
+ lora_layers = list(filter(lambda p: p.requires_grad, unet.parameters()))
558
+ lora_layers_text_encoder = list(filter(lambda p: p.requires_grad, text_encoder.parameters()))
559
+
560
+ # Enable TF32 for faster training on Ampere GPUs,
561
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
562
+ if args.allow_tf32:
563
+ torch.backends.cuda.matmul.allow_tf32 = True
564
+
565
+ if args.scale_lr:
566
+ args.learning_rate = (
567
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
568
+ )
569
+
570
+ # Initialize the optimizer
571
+ if args.use_8bit_adam:
572
+ try:
573
+ import bitsandbytes as bnb
574
+ except ImportError:
575
+ raise ImportError(
576
+ "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
577
+ )
578
+
579
+ optimizer_cls = bnb.optim.AdamW8bit
580
+ else:
581
+ optimizer_cls = torch.optim.AdamW
582
+
583
+ optimizer = optimizer_cls(
584
+ # Use the separate --learning_rate_text_encoder for the text-encoder LoRA parameters
+ [{"params": lora_layers, "lr": args.learning_rate},
+ {"params": lora_layers_text_encoder, "lr": args.learning_rate_text_encoder}],
585
+ lr=args.learning_rate,
586
+ betas=(args.adam_beta1, args.adam_beta2),
587
+ weight_decay=args.adam_weight_decay,
588
+ eps=args.adam_epsilon,
589
+ )
590
+
591
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
592
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
593
+
594
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
595
+ # download the dataset.
596
+ if args.dataset_name is not None:
597
+ # Downloading and loading a dataset from the hub.
598
+ dataset = load_dataset(
599
+ args.dataset_name,
600
+ args.dataset_config_name,
601
+ cache_dir=args.cache_dir,
602
+ data_dir=args.train_data_dir,
603
+ )
604
+ else:
605
+ data_files = {}
606
+ if args.train_data_dir is not None:
607
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
608
+ dataset = load_dataset(
609
+ "imagefolder",
610
+ data_files=data_files,
611
+ cache_dir=args.cache_dir,
612
+ )
613
+ # See more about loading custom images at
614
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
615
+
616
+ # Preprocessing the datasets.
617
+ # We need to tokenize inputs and targets.
618
+ column_names = dataset["train"].column_names
619
+
620
+ # 6. Get the column names for input/target.
621
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
622
+ if args.image_column is None:
623
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
624
+ else:
625
+ image_column = args.image_column
626
+ if image_column not in column_names:
627
+ raise ValueError(
628
+ f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
629
+ )
630
+ if args.caption_column is None:
631
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
632
+ else:
633
+ caption_column = args.caption_column
634
+ if caption_column not in column_names:
635
+ raise ValueError(
636
+ f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
637
+ )
638
+
639
+ # Preprocessing the datasets.
640
+ # We need to tokenize input captions and transform the images.
641
+ def tokenize_captions(examples, is_train=True):
642
+ captions = []
643
+ for caption in examples[caption_column]:
644
+ if isinstance(caption, str):
645
+ captions.append(caption)
646
+ elif isinstance(caption, (list, np.ndarray)):
647
+ # take a random caption if there are multiple
648
+ captions.append(random.choice(caption) if is_train else caption[0])
649
+ else:
650
+ raise ValueError(
651
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
652
+ )
653
+ inputs = tokenizer(
654
+ captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
655
+ )
656
+ return inputs.input_ids
657
+
658
+ # Preprocessing the datasets.
659
+ train_transforms = transforms.Compose(
660
+ [
661
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
662
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
663
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
664
+ transforms.ToTensor(),
665
+ transforms.Normalize([0.5], [0.5]),
666
+ ]
667
+ )
668
+
669
+ def preprocess_train(examples):
670
+ images = [image.convert("RGB") for image in examples[image_column]]
671
+ examples["pixel_values"] = [train_transforms(image) for image in images]
672
+ examples["input_ids"] = tokenize_captions(examples)
673
+ return examples
674
+
675
+ with accelerator.main_process_first():
676
+ if args.max_train_samples is not None:
677
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
678
+ # Set the training transforms
679
+ train_dataset = dataset["train"].with_transform(preprocess_train)
680
+
681
+ def collate_fn(examples):
682
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
683
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
684
+ input_ids = torch.stack([example["input_ids"] for example in examples])
685
+ return {"pixel_values": pixel_values, "input_ids": input_ids}
686
+
687
+ def save_validation_images(global_step):
688
+ # Run a round of inference.
689
+ # Create the pipeline using the trained modules and save it.
690
+ # accelerator.wait_for_everyone()
691
+ if accelerator.is_main_process:
692
+
693
+ pipeline = StableDiffusionPipeline.from_pretrained(
694
+ args.pretrained_model_name_or_path,
695
+ text_encoder=accelerator.unwrap_model(text_encoder),
696
+ vae=vae, #
697
+ unet=accelerator.unwrap_model(unet),
698
+ revision=args.revision,
699
+ variant=args.variant,
700
+ torch_dtype=weight_dtype,
701
+
702
+ )
703
+ pipeline.save_pretrained(args.output_dir)
704
+
705
+ # Run a final round of inference.
706
+
707
+ images = []
708
+ for rep in range(args.num_validation_images):
709
+ images.append([])
710
+
711
+ logger.info("Running inference for collecting generated images...")
712
+ pipeline = pipeline.to(accelerator.device)
713
+ pipeline.torch_dtype = weight_dtype
714
+ pipeline.set_progress_bar_config(disable=True)
715
+
716
+ if args.enable_xformers_memory_efficient_attention:
717
+ pipeline.enable_xformers_memory_efficient_attention()
718
+
719
+ if args.seed is None:
720
+ generator = None
721
+ else:
722
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
723
+
724
+ for i in range(len(args.validation_prompts)):
725
+ with torch.autocast("cuda"):
726
+ image = pipeline(args.validation_prompts[i], num_inference_steps=20,
727
+ generator=generator).images[0]
728
+ images[rep].append(image)
729
+
730
+ images = [i for im_ in images for i in im_]
731
+ image_grid = make_image_grid(images, args.num_validation_images, len(args.validation_prompts))
732
+ if global_step == 0:
733
+ img_savename = os.path.join(f'{args.output_dir}', f"val_imgs_grid_init.png")
734
+ txt_savename = os.path.join(f'{args.output_dir}', f"val_prompts_init.txt")
735
+ else:
736
+ img_savename = os.path.join(f'{args.output_dir}/checkpoint-{global_step}', f"val_imgs_grid_{global_step}.png")
737
+ txt_savename = os.path.join(f'{args.output_dir}/checkpoint-{global_step}', f"val_prompts_{global_step}.txt")
738
+
739
+ image_grid.save(img_savename)
740
+
741
+ with open(txt_savename, 'w') as f:
742
+ for line in args.validation_prompts:
743
+ f.write(f"{line}\n")
744
+
745
+ del pipeline
746
+
747
+ # DataLoaders creation:
748
+ train_dataloader = torch.utils.data.DataLoader(
749
+ train_dataset,
750
+ shuffle=True,
751
+ collate_fn=collate_fn,
752
+ batch_size=args.train_batch_size,
753
+ num_workers=args.dataloader_num_workers,
754
+ )
755
+
756
+ # Scheduler and math around the number of training steps.
757
+ overrode_max_train_steps = False
758
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
759
+ if args.max_train_steps is None:
760
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
761
+ overrode_max_train_steps = True
762
+
763
+ lr_scheduler = get_scheduler(
764
+ args.lr_scheduler,
765
+ optimizer=optimizer,
766
+ num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
767
+ num_training_steps=args.max_train_steps * accelerator.num_processes,
768
+ )
769
+
770
+ # Prepare everything with our `accelerator`.
771
+ unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
772
+ unet, text_encoder, optimizer, train_dataloader, lr_scheduler
773
+ )
774
+
775
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
776
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
777
+ if overrode_max_train_steps:
778
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
779
+ # Afterwards we recalculate our number of training epochs
780
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
781
+
782
+ # We need to initialize the trackers we use, and also store our configuration.
783
+ # The trackers initializes automatically on the main process.
784
+ if accelerator.is_main_process:
785
+ tracker_config = dict(vars(args))
786
+ tracker_config.pop("validation_prompts")
787
+ accelerator.init_trackers(args.output_dir, tracker_config)
788
+
789
+ # Train!
790
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
791
+
792
+ logger.info("***** Running training *****")
793
+ logger.info(f" Num examples = {len(train_dataset)}")
794
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
795
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
796
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
797
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
798
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
799
+ global_step = 0
800
+ first_epoch = 0
801
+
802
+ # Potentially load in the weights and states from a previous save
803
+ if args.resume_from_checkpoint:
804
+ if args.resume_from_checkpoint != "latest":
805
+ path = os.path.basename(args.resume_from_checkpoint)
806
+ else:
807
+ # Get the most recent checkpoint
808
+ dirs = os.listdir(args.output_dir)
809
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
810
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
811
+ path = dirs[-1] if len(dirs) > 0 else None
812
+
813
+ if path is None:
814
+ accelerator.print(
815
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
816
+ )
817
+ args.resume_from_checkpoint = None
818
+ initial_global_step = 0
819
+ else:
820
+ accelerator.print(f"Resuming from checkpoint {path}")
821
+ accelerator.load_state(os.path.join(args.output_dir, path))
822
+ global_step = int(path.split("-")[1])
823
+
824
+ initial_global_step = global_step
825
+ first_epoch = global_step // num_update_steps_per_epoch
826
+ else:
827
+ initial_global_step = 0
828
+
829
+ progress_bar = tqdm(
830
+ range(0, args.max_train_steps),
831
+ initial=initial_global_step,
832
+ desc="Steps",
833
+ # Only show the progress bar once on each machine.
834
+ disable=not accelerator.is_local_main_process,
835
+ )
836
+
837
+ if args.validation_prompts is not None:
838
+ save_validation_images(0)
839
+
840
+ for epoch in range(first_epoch, args.num_train_epochs):
841
+ unet.train()
842
+ if args.train_text_encoder:
843
+ text_encoder.train()
844
+ train_loss = 0.0
845
+ for step, batch in enumerate(train_dataloader):
846
+ with accelerator.accumulate(unet):
847
+ # Convert images to latent space
848
+ latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
849
+ latents = latents * vae.config.scaling_factor
850
+
851
+ # Sample noise that we'll add to the latents
852
+ noise = torch.randn_like(latents)
853
+ if args.noise_offset:
854
+ # https://www.crosslabs.org//blog/diffusion-with-offset-noise
855
+ noise += args.noise_offset * torch.randn(
856
+ (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
857
+ )
858
+
859
+ bsz = latents.shape[0]
860
+ # Sample a random timestep for each image
861
+ timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
862
+ timesteps = timesteps.long()
863
+
864
+ # Add noise to the latents according to the noise magnitude at each timestep
865
+ # (this is the forward diffusion process)
866
+ noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
867
+
868
+ # Get the text embedding for conditioning
869
+ encoder_hidden_states = text_encoder(batch["input_ids"])[0]
870
+
871
+ # Get the target for loss depending on the prediction type
872
+ if args.prediction_type is not None:
873
+ # set prediction_type of scheduler if defined
874
+ noise_scheduler.register_to_config(prediction_type=args.prediction_type)
875
+
876
+ if noise_scheduler.config.prediction_type == "epsilon":
877
+ target = noise
878
+ elif noise_scheduler.config.prediction_type == "v_prediction":
879
+ target = noise_scheduler.get_velocity(latents, noise, timesteps)
880
+ else:
881
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
882
+
883
+ # Predict the noise residual and compute loss
884
+ model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
885
+
886
+ if args.snr_gamma is None:
887
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
888
+ else:
889
+ # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
890
+ # Since we predict the noise instead of x_0, the original formulation is slightly changed.
891
+ # This is discussed in Section 4.2 of the same paper.
892
+ snr = compute_snr(noise_scheduler, timesteps)
893
+ if noise_scheduler.config.prediction_type == "v_prediction":
894
+ # Velocity objective requires that we add one to SNR values before we divide by them.
895
+ snr = snr + 1
896
+ mse_loss_weights = (
897
+ torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(dim=1)[0] / snr
898
+ )
899
+
900
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
901
+ loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
902
+ loss = loss.mean()
903
+
904
+ # Gather the losses across all processes for logging (if we use distributed training).
905
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
906
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
907
+
908
+ # Backpropagate
909
+ accelerator.backward(loss)
910
+ if accelerator.sync_gradients:
911
+ if args.train_text_encoder:
912
+ params_to_clip = lora_layers + lora_layers_text_encoder
913
+ else:
914
+ params_to_clip = lora_layers
915
+ accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
916
+ optimizer.step()
917
+ lr_scheduler.step()
918
+ optimizer.zero_grad()
919
+
920
+ # Checks if the accelerator has performed an optimization step behind the scenes
921
+ if accelerator.sync_gradients:
922
+
923
+ progress_bar.update(1)
924
+ global_step += 1
925
+ accelerator.log({"train_loss": train_loss}, step=global_step)
926
+ train_loss = 0.0
927
+
928
+ if global_step % args.checkpointing_steps == 0:
929
+ if accelerator.is_main_process:
930
+ # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
931
+ if args.checkpoints_total_limit is not None:
932
+ checkpoints = os.listdir(args.output_dir)
933
+ checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
934
+ checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
935
+
936
+ # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
937
+ if len(checkpoints) >= args.checkpoints_total_limit:
938
+ num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
939
+ removing_checkpoints = checkpoints[0:num_to_remove]
940
+
941
+ logger.info(
942
+ f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
943
+ )
944
+ logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
945
+
946
+ for removing_checkpoint in removing_checkpoints:
947
+ removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
948
+ shutil.rmtree(removing_checkpoint)
949
+
950
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
951
+ accelerator.save_state(save_path)
952
+
953
+ unet_lora_state_dict = get_peft_model_state_dict(accelerator.unwrap_model(unet))
954
+ text_encoder_lora_state_dict = get_peft_model_state_dict(accelerator.unwrap_model(text_encoder))
955
+
956
+ StableDiffusionPipeline.save_lora_weights(
957
+ save_directory=save_path,
958
+ unet_lora_layers=unet_lora_state_dict,
959
+ text_encoder_lora_layers=text_encoder_lora_state_dict,
960
+ safe_serialization=True,
961
+ )
962
+
963
+ logger.info(f"Saved state to {save_path}")
964
+
965
+ if args.validation_prompts is not None:
966
+ save_validation_images(global_step)
967
+
968
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
969
+ progress_bar.set_postfix(**logs)
970
+
971
+ accelerator.end_training()
972
+
973
+
974
+ if __name__ == "__main__":
975
+ main()
gen_w_lora.py DELETED
@@ -1,65 +0,0 @@
1
- from diffusers import StableDiffusionPipeline
2
- import torch
3
- from transformers import CLIPTextModel
4
-
5
-
6
- pipe_id = "stabilityai/stable-diffusion-2-1"
7
- # checkpoint_dir = "/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-LoRA-test5/checkpoint-2800/"
8
- # checkpoint_dir = "/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-wText-LoRA-lr1e5-r8/checkpoint-15500/"
9
- # checkpoint_dir = '/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-LoRA-ffhq-easyportr-2/checkpoint-100/'
10
- # checkpoint_dir = "/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-wText-LoRA-EasyPortait_lr1e5-r8/checkpoint-22000/"
11
- # checkpoint_dir = "/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-wText-LoRA-FFHQ-EasyPortrait_lr1e5-r8_768/checkpoint-30000/"
12
- checkpoint_dir = "checkpoints/lora30k"
13
-
14
- pipe = StableDiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")
15
-
16
- # pipe.load_lora_weights("/home/michaila/Projects/github/diffusers/examples/text_to_image/sd-2-1-train-finetune-LoRA-ffhq-easyportr-2/checkpoint-500", weight_name="pytorch_lora_weights.safetensors") # , adapter_name="toy")
17
- # pipe.load_lora_weights(checkpoint_dir, weight_name="pytorch_lora_weights.safetensors") # , adapter_name="toy")
18
- # pipe.text_encoder.load_lora_weights(checkpoint_dir, weight_name="pytorch_lora_weights.safetensors") # , adapter_name="toy")
19
- state_dict, network_alphas = StableDiffusionPipeline.lora_state_dict(
20
- # Path to my trained lora output_dir
21
- checkpoint_dir,
22
- weight_name="pytorch_lora_weights.safetensors"
23
- )
24
- pipe.load_lora_into_unet(state_dict, network_alphas, pipe.unet, adapter_name='test_lora')
25
- pipe.load_lora_into_text_encoder(state_dict, network_alphas, pipe.text_encoder, adapter_name='test_lora')
26
- pipe.set_adapters(["test_lora"], adapter_weights=[1.0])
27
- # pipe.set_adapters(["text_lora"], adapter_weights=[1.0])
28
-
29
- # def generate(prompt, name='example', seed=1):
30
- # lora_scale = 1.0
31
- # image = pipe(
32
- # prompt, num_inference_steps=50, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(seed)
33
- # ).images[0]
34
- # image.save(f"{checkpoint_dir}/{name}.png")
35
-
36
-
37
- def generate(prompt, negprompt='', steps=50, name='example', seed=1):
38
- lora_scale = 1.0
39
- image = pipe(
40
- prompt, negative_prompt=negprompt, num_inference_steps=steps, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(seed)
41
- ).images[0]
42
- image.save(f"{checkpoint_dir}/{'_'.join(prompt.replace('.', ' ').split(' '))}.png")
43
-
44
-
45
- # prompt = "a color photo of a 30 year old man with a sad expression, beard, very little hair, a slightly open mouth, his eyes look directly at the camera."
46
- # prompt = "a color photo of a 30 year old man with a sad expression, beard, very little hair, a fully open mouth, his eyes look directly at the camera."
47
- # prompt = "a 50 year old asian woman with a neutral expression, little hair, a slightly open mouth and visible teeth."
48
- # prompt = "a 50 year old asian woman smiling."
49
- # prompt = "an 20 year old white man with slightly open mouth, visible teeth. His tongue is out, clearly visible."
50
- # prompt = "A baby with fully closed mouth."
51
- # prompt = "A 25 year old female with long, blonde hair, green eyes and neutral expression looking at the camera."
52
- # prompt = "A black african female with long, straight blond hair and happy expression."
53
- # prompt = "A black female with blonde hair."
54
- # prompt = 'An attractive blond male'
55
- # prompt = 'A happy 55 year old black woman with a hat, sunglasses, earrings and visible teeth. High resolution, sharp image.' #at the camera.'
56
- prompt = 'A happy 25 year old woman with blond hair. Her head is looking significantly to the right.'
57
-
58
- negprompt = '' #'bad teeth'
59
- # generate(prompt, name='example', seed=4)
60
-
61
- generate(prompt, negprompt=negprompt, steps=50, name='example', seed=200)
62
-
63
-
64
-
65
-
generate.py ADDED
@@ -0,0 +1,41 @@
1
+ from diffusers import StableDiffusionPipeline
2
+ import torch
3
+
4
+
5
+ class Model:
6
+ def __init__(self, checkpoint="checkpoints/lora30k", weight_name="pytorch_lora_weights.safetensors", device="cuda"):
7
+ self.checkpoint = checkpoint
8
+ state_dict, network_alphas = StableDiffusionPipeline.lora_state_dict(
9
+ # Path to the trained LoRA checkpoint directory
10
+ checkpoint,
11
+ weight_name=weight_name
12
+ )
13
+ self.pipe = StableDiffusionPipeline.from_pretrained(
14
+ "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16).to(device)
15
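+ # Inject the LoRA weights into both the UNet and the text encoder, then enable the adapter at full strength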
+ self.pipe.load_lora_into_unet(state_dict, network_alphas, self.pipe.unet, adapter_name='test_lora')
16
+ self.pipe.load_lora_into_text_encoder(state_dict, network_alphas, self.pipe.text_encoder, adapter_name='test_lora')
17
+ self.pipe.set_adapters(["test_lora"], adapter_weights=[1.0])
18
+
19
+
20
+ def generate(self, prompt, negprompt='', steps=50, savedir=None, seed=1):
21
+ lora_scale = 1.0
22
+ image = self.pipe(prompt,
23
+ negative_prompt=negprompt,
24
+ num_inference_steps=steps,
25
+ cross_attention_kwargs={"scale": lora_scale},
26
+ generator=torch.manual_seed(seed)).images[0]
27
+ if savedir is None:
28
+ image.save(f"{self.checkpoint}/{'_'.join(prompt.replace('.', ' ').split(' '))}.png")
29
+ else:
30
+ image.save(f"{savedir}/{'_'.join(prompt.replace('.', ' ').split(' '))}.png")
31
+ return image
32
+
33
+
34
+ if __name__ == "__main__":
35
+
36
+ model = Model()
37
+
38
+ prompt = 'A happy 55 year old male with blond hair and a goatee. Visible teeth.'
39
+ negprompt = ''
40
+
41
+ image = model.generate(prompt, negprompt=negprompt, steps=50, seed=42)
main.py DELETED
@@ -1,14 +0,0 @@
1
- import torch
2
- from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
3
-
4
- model_id = "stabilityai/stable-diffusion-2-1"
5
-
6
- # Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
7
- pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
8
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
9
- pipe = pipe.to("cuda")
10
-
11
- prompt = "a photo of an astronaut riding a horse on mars"
12
- image = pipe(prompt).images[0]
13
-
14
- image.save("astronaut_rides_horse.png")
requirements.txt ADDED
@@ -0,0 +1,7 @@
1
+ numpy<1.24.0
2
+ torch==2.0.1
3
+ torchvision==0.15.2
4
+ diffusers==0.23.0
5
+ transformers==4.34.1
6
+ peft
7
+ accelerate