---
license: creativeml-openrail-m
language:
- en
tags:
- di.ffusion.ai
- stable-diffusion
- LyCORIS
- LoRA
---

![textenc2.jpg](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/FdHQj5OTJFwvpGeBmUrcp.jpeg)

# Model Card for di.FFUSION.ai Text Encoder - SD 2.1 LyCORIS

<!-- Provide a quick summary of what the model is/does. [Optional] -->
di.FFUSION.ai-tXe-FXAA
Trained on 121,361 images.

Enhance your model's quality and sharpness using your own pre-trained UNet.

![Screenshot_1282.jpg](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/7LMvF7XgNCkTkqaGlnSt_.jpeg)

The text encoder (without the UNet) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))

Network dimension/rank: 768.0, alpha: 768.0, module: lycoris.kohya {'conv_dim': '256', 'conv_alpha': '256', 'algo': 'loha'}

The file is large because of the LyCORIS conv dimension of 256.
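
For readers unfamiliar with the "loha" algorithm named above, the snippet below sketches the low-rank Hadamard-product update that LoHa applies to a weight matrix. This is a minimal illustration of the idea, not the actual lycoris.kohya implementation; the rank and alpha mirror the settings above, and the layer shape is illustrative.

```python
import torch

# Minimal LoHa sketch: the weight delta is the element-wise (Hadamard)
# product of two low-rank factorizations, scaled by alpha / dim.
out_features, in_features = 1024, 768  # illustrative layer shape
rank, alpha = 768, 768.0               # mirrors network_dim / alpha above

w1_a = torch.randn(out_features, rank)
w1_b = torch.randn(rank, in_features)
w2_a = torch.randn(out_features, rank)
w2_b = torch.randn(rank, in_features)

delta_w = (w1_a @ w1_b) * (w2_a @ w2_b) * (alpha / rank)

# The adapted layer computes with W0 + delta_w instead of the frozen W0.
w0 = torch.randn(out_features, in_features)
w = w0 + delta_w
print(w.shape)  # torch.Size([1024, 768])
```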

![textenco2.jpg](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/wST8mxFasiu8TJijqdHH_.jpeg)

This is a heavily experimental version that we tested even with sloppy captions (quick WD tags and terrible CLIP captions), yet the results were satisfying.

Note: This is not the text encoder used in the official FFUSION AI model.

# SAMPLES

**Also available at https://civitai.com/models/83622**

![image.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/VpGDgNlC_AYotzUVxe9t2.png)
![xyz_grid-0069-3538254854.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/FYxXTe-BL8bIHWuPPOtkp.png)
![xyz_grid-0090-2371661606.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/PqE7af2LdKaT-vSq634BB.png)
![xyz_grid-0133-887882152.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/2Oft5bU40hcScDFfHJnlZ.png)

For a1111:

Install https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris

Download di.FFUSION.ai-tXe-FXAA to /models/Lycoris

Option 1:

Insert <lyco:di.FFUSION.ai-tXe-FXAA:1.0> into the prompt.
There is no need to split the UNet and text encoder, since this file contains only the text encoder.

You can go up to 2x weight.

Option 2: If you need it always on (e.g., when running a batch from a txt file), go to Settings / Quicksettings list:

![image.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/3I8yV3dvL0W2cqT1WxI6F.png)

Add sd_lyco, restart, and you should have a drop-down 🤟 🥃

![image.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/6COn2V-f3npFPuXCpn2uA.png)
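
The same <lyco:...> syntax works when driving the WebUI through its HTTP API. A minimal sketch, assuming the WebUI was launched with the --api flag on the default port; the prompt text and sampler settings are illustrative placeholders:

```python
import base64
import requests

# txt2img through the AUTOMATIC1111 API, with the LyCORIS tag inline
# in the prompt, exactly as in the UI.
payload = {
    "prompt": "cinematic photo of a lighthouse at dawn "
              "<lyco:di.FFUSION.ai-tXe-FXAA:1.0>",
    "negative_prompt": "blurry, low quality",
    "steps": 25,
    "width": 768,
    "height": 768,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded PNGs in the "images" list.
with open("sample.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```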

# Table of Contents

- [Model Card for di.FFUSION.ai Text Encoder - SD 2.1 LyCORIS](#model-card-for-diffusionai-text-encoder---sd-21-lycoris)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
  - [Model Description](#model-description)
- [Uses](#uses)
  - [Direct Use](#direct-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
  - [Recommendations](#recommendations)
- [Training Details](#training-details)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
    - [Preprocessing](#preprocessing)
    - [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
  - [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
    - [Testing Data](#testing-data)
    - [Factors](#factors)
    - [Metrics](#metrics)
  - [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
  - [Model Architecture and Objective](#model-architecture-and-objective)
  - [Compute Infrastructure](#compute-infrastructure)
    - [Hardware](#hardware)
    - [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is/does. -->
di.FFUSION.ai-tXe-FXAA
Trained on 121,361 images.

Enhance your model's quality and sharpness using your own pre-trained UNet.

The text encoder (without the UNet) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))

Network dimension/rank: 768.0, alpha: 768.0, module: lycoris.kohya {'conv_dim': '256', 'conv_alpha': '256', 'algo': 'loha'}

The file is large because of the LyCORIS conv dimension of 256.

This is a heavily experimental version that we tested even with sloppy captions (quick WD tags and terrible CLIP captions), yet the results were satisfying.

Note: This is not the text encoder used in the official FFUSION AI model.

- **Developed by:** FFusion.ai
- **Shared by [Optional]:** idle stoev
- **Model type:** Language model (Stable Diffusion 2.1 text encoder)
- **Language(s) (NLP):** en
- **License:** creativeml-openrail-m
- **Parent Model:** stabilityai/stable-diffusion-2-1-base
- **Resources for more information:** More information needed

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The text encoder (without the UNet) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))

Network dimension/rank: 768.0, alpha: 768.0, module: lycoris.kohya {'conv_dim': '256', 'conv_alpha': '256', 'algo': 'loha'}

The file is large because of the LyCORIS conv dimension of 256.

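Outside the WebUI, the file can in principle be applied to an SD 2.1 text encoder with the LyCORIS wrapper API. The sketch below is assumption-heavy: it assumes lycoris exposes create_lycoris_from_weights with a (multiplier, file, module) signature as described in the LyCORIS README, so verify against the version you have installed.

```python
from transformers import CLIPTextModel

# Assumption: this wrapper entry point and signature match your installed
# lycoris version; check the LyCORIS README before relying on it.
from lycoris import create_lycoris_from_weights

# The SD 2.1 text encoder this LyCORIS file targets.
text_encoder = CLIPTextModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", subfolder="text_encoder"
)

# multiplier=1.0 matches the <lyco:di.FFUSION.ai-tXe-FXAA:1.0> prompt weight.
lyco_net, state_dict = create_lycoris_from_weights(
    1.0, "di.FFUSION.ai-tXe-FXAA.safetensors", text_encoder
)
lyco_net.merge_to()  # bake the LoHa deltas into the base weights
```
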
# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

More information needed for further recommendations.

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

Trained on 121,361 images.

- ss_caption_tag_dropout_rate: "0.0"
- ss_multires_noise_discount: "0.3"
- ss_mixed_precision: "bf16"
- ss_text_encoder_lr: "1e-07"
- ss_keep_tokens: "3"
- ss_network_args: {"conv_dim": "256", "conv_alpha": "256", "algo": "loha"}
- ss_caption_dropout_rate: "0.02"
- ss_flip_aug: "False"
- ss_learning_rate: "2e-07"
- ss_sd_model_name: "stabilityai/stable-diffusion-2-1-base"
- ss_max_grad_norm: "1.0"
- ss_num_epochs: "2"
- ss_gradient_checkpointing: "False"
- ss_face_crop_aug_range: "None"
- ss_epoch: "2"
- ss_num_train_images: "121361"
- ss_color_aug: "False"
- ss_gradient_accumulation_steps: "1"
- ss_total_batch_size: "100"
- ss_prior_loss_weight: "1.0"
- ss_training_comment: "None"
- ss_network_dim: "768"
- ss_output_name: "FusionaMEGA1tX"
- ss_max_bucket_reso: "1024"
- ss_network_alpha: "768.0"
- ss_steps: "2444"
- ss_shuffle_caption: "True"
- ss_training_finished_at: "1684158038.0763328"
- ss_min_bucket_reso: "256"
- ss_noise_offset: "0.09"
- ss_enable_bucket: "True"
- ss_batch_size_per_device: "20"
- ss_max_train_steps: "2444"
- ss_network_module: "lycoris.kohya"

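These ss_* entries are the training metadata that kohya-style trainers embed in the safetensors header, so they can be read straight from the downloaded file. A minimal sketch (the filename is a placeholder for wherever you saved the file):

```python
from safetensors import safe_open

# The ss_* metadata lives in the safetensors header, so no tensors
# need to be loaded to inspect it.
with safe_open("di.FFUSION.ai-tXe-FXAA.safetensors", framework="pt") as f:
    metadata = f.metadata() or {}

for key in ("ss_network_dim", "ss_network_alpha",
            "ss_network_args", "ss_num_train_images"):
    print(key, "=", metadata.get(key))
```
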
## Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

### Preprocessing

Aspect-ratio bucket distribution used during training:

{"buckets": {"0": {"resolution": [192, 256], "count": 1}, "1": {"resolution": [192, 320], "count": 1}, "2": {"resolution": [256, 384], "count": 1}, "3": {"resolution": [256, 512], "count": 1}, "4": {"resolution": [384, 576], "count": 2}, "5": {"resolution": [384, 640], "count": 2}, "6": {"resolution": [384, 704], "count": 1}, "7": {"resolution": [384, 1088], "count": 15}, "8": {"resolution": [448, 448], "count": 5}, "9": {"resolution": [448, 576], "count": 1}, "10": {"resolution": [448, 640], "count": 1}, "11": {"resolution": [448, 768], "count": 1}, "12": {"resolution": [448, 832], "count": 1}, "13": {"resolution": [448, 1088], "count": 25}, "14": {"resolution": [448, 1216], "count": 1}, "15": {"resolution": [512, 640], "count": 2}, "16": {"resolution": [512, 768], "count": 10}, "17": {"resolution": [512, 832], "count": 3}, "18": {"resolution": [512, 896], "count": 1525}, "19": {"resolution": [512, 960], "count": 2}, "20": {"resolution": [512, 1024], "count": 665}, "21": {"resolution": [512, 1088], "count": 8}, "22": {"resolution": [576, 576], "count": 5}, "23": {"resolution": [576, 768], "count": 1}, "24": {"resolution": [576, 832], "count": 667}, "25": {"resolution": [576, 896], "count": 9601}, "26": {"resolution": [576, 960], "count": 872}, "27": {"resolution": [576, 1024], "count": 17}, "28": {"resolution": [640, 640], "count": 3}, "29": {"resolution": [640, 768], "count": 7}, "30": {"resolution": [640, 832], "count": 608}, "31": {"resolution": [640, 896], "count": 90}, "32": {"resolution": [704, 640], "count": 1}, "33": {"resolution": [704, 704], "count": 11}, "34": {"resolution": [704, 768], "count": 1}, "35": {"resolution": [704, 832], "count": 1}, "36": {"resolution": [768, 640], "count": 225}, "37": {"resolution": [768, 704], "count": 6}, "38": {"resolution": [768, 768], "count": 74442}, "39": {"resolution": [832, 576], "count": 23784}, "40": {"resolution": [832, 640], "count": 554}, "41": {"resolution": [896, 512], "count": 1235}, "42": {"resolution": [896, 576], "count": 50}, "43": {"resolution": [896, 640], "count": 88}, "44": {"resolution": [960, 512], "count": 165}, "45": {"resolution": [960, 576], "count": 5246}, "46": {"resolution": [1024, 448], "count": 5}, "47": {"resolution": [1024, 512], "count": 1187}, "48": {"resolution": [1024, 576], "count": 40}, "49": {"resolution": [1088, 384], "count": 70}, "50": {"resolution": [1088, 448], "count": 36}, "51": {"resolution": [1088, 512], "count": 3}, "52": {"resolution": [1216, 448], "count": 36}, "53": {"resolution": [1344, 320], "count": 29}, "54": {"resolution": [1536, 384], "count": 1}}, "mean_img_ar_error": 0.01693107810697896}

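A quick way to sanity-check this dump is to parse it and aggregate the counts. A minimal sketch (the dict below is truncated for space; paste the full JSON from above):

```python
import json

# Truncated excerpt of the bucket JSON above; use the full dump in practice.
bucket_info = json.loads("""
{"buckets": {"18": {"resolution": [512, 896], "count": 1525},
             "38": {"resolution": [768, 768], "count": 74442},
             "39": {"resolution": [832, 576], "count": 23784}},
 "mean_img_ar_error": 0.01693107810697896}
""")

total = sum(b["count"] for b in bucket_info["buckets"].values())
largest = max(bucket_info["buckets"].values(), key=lambda b: b["count"])
# With the full dump, total should match ss_num_train_images (121361).
print("images counted:", total)
print("largest bucket:", largest["resolution"], "x", largest["count"])
```
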
### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- ss_resolution: "(768, 768)"
- ss_v2: "True"
- ss_cache_latents: "False"
- ss_unet_lr: "2e-07"
- ss_num_reg_images: "0"
- ss_max_token_length: "225"
- ss_lr_scheduler: "linear"
- ss_reg_dataset_dirs: "{}"
- ss_lr_warmup_steps: "303"
- ss_num_batches_per_epoch: "1222"
- ss_lowram: "False"
- ss_multires_noise_iterations: "None"
- ss_optimizer: "torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))"

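Reconstructed in code, the logged optimizer and "linear" warmup schedule look roughly like this. A sketch only: params stands in for the text-encoder parameters being trained, and the step counts mirror ss_lr_warmup_steps / ss_max_train_steps above.

```python
import torch

# Placeholder for the trainable text-encoder parameters.
params = [torch.nn.Parameter(torch.zeros(8))]

# Mirrors ss_optimizer above, with the text-encoder LR (ss_text_encoder_lr).
optimizer = torch.optim.AdamW(params, lr=1e-7,
                              weight_decay=0.01, betas=(0.9, 0.99))

# Linear warmup for 303 steps, then linear decay to zero at step 2444.
warmup, total = 303, 2444

def lr_lambda(step: int) -> float:
    if step < warmup:
        return step / max(1, warmup)
    return max(0.0, (total - step) / max(1, total - warmup))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for _ in range(total):
    optimizer.step()
    scheduler.step()
```
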
# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

More information needed

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

More information needed

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8x A100
- **Hours used:** 64
- **Cloud Provider:** CoreWeave
- **Compute Region:** US Main
- **Carbon Emitted:** 6.72

# Technical Specifications [optional]

## Model Architecture and Objective

Enhance your model's quality and sharpness using your own pre-trained UNet.

## Compute Infrastructure

More information needed

### Hardware

8x A100

### Software

Fully trained using only the tools of Kohya S. & Shih-Ying Yeh (Kohaku-BlueLeaf).
FedPara, the basis of the LoHa algorithm: https://arxiv.org/abs/2108.06098

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

@misc{LyCORIS,
  author = "Shih-Ying Yeh (Kohaku-BlueLeaf), Yu-Guan Hsieh, Zhidong Gao",
  title = "LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion",
  howpublished = "\url{https://github.com/KohakuBlueleaf/LyCORIS}",
  month = "March",
  year = "2023"
}

**APA:**

More information needed

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->

idle stoev

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

For a1111:

Install https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris

Download di.FFUSION.ai-tXe-FXAA to /models/Lycoris

Option 1:

Insert <lyco:di.FFUSION.ai-tXe-FXAA:1.0> into the prompt.
There is no need to split the UNet and text encoder, since this file contains only the text encoder (see the quick check at the end of this section).

You can go up to 2x weight.

Option 2: If you need it always on (e.g., when running a batch from a txt file), go to Settings / Quicksettings list:

![image.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/3I8yV3dvL0W2cqT1WxI6F.png)

Add sd_lyco, restart, and you should have a drop-down 🤟 🥃

![image.png](https://s3.amazonaws.com/moonup/production/uploads/6380cf05f496d57325c12194/6COn2V-f3npFPuXCpn2uA.png)
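
To confirm the file really is text-encoder-only, list the tensor key prefixes. A minimal sketch, assuming the standard kohya-style naming where text-encoder modules are prefixed lora_te and UNet modules lora_unet:

```python
from safetensors import safe_open

# Kohya-style LyCORIS files prefix text-encoder tensors with "lora_te"
# and UNet tensors with "lora_unet"; a text-encoder-only file should
# contain no "lora_unet" keys.
with safe_open("di.FFUSION.ai-tXe-FXAA.safetensors", framework="pt") as f:
    keys = list(f.keys())

te = sum(k.startswith("lora_te") for k in keys)
unet = sum(k.startswith("lora_unet") for k in keys)
print(f"text-encoder tensors: {te}, unet tensors: {unet}")
```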

</details>