PennyJX committed on
Commit 4304e49 · verified · 1 Parent(s): 983d4ef

Upload 22 files

Files changed (22)
  1. extensions/sd-webui-regional-prompter/.github/FUNDING.yml +13 -0
  2. extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/bug_report.md +18 -0
  3. extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/feature_request.md +10 -0
  4. extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/others.md +10 -0
  5. extensions/sd-webui-regional-prompter/LICENCE +663 -0
  6. extensions/sd-webui-regional-prompter/README.JP.md +297 -0
  7. extensions/sd-webui-regional-prompter/README.md +419 -0
  8. extensions/sd-webui-regional-prompter/differential_ja.md +141 -0
  9. extensions/sd-webui-regional-prompter/prompt_en.md +137 -0
  10. extensions/sd-webui-regional-prompter/prompt_ja.md +136 -0
  11. extensions/sd-webui-regional-prompter/regional_prompter_presets.json +54 -0
  12. extensions/sd-webui-regional-prompter/scripts/__pycache__/attention.cpython-310.pyc +0 -0
  13. extensions/sd-webui-regional-prompter/scripts/__pycache__/latent.cpython-310.pyc +0 -0
  14. extensions/sd-webui-regional-prompter/scripts/__pycache__/regions.cpython-310.pyc +0 -0
  15. extensions/sd-webui-regional-prompter/scripts/__pycache__/rp.cpython-310.pyc +0 -0
  16. extensions/sd-webui-regional-prompter/scripts/__pycache__/rps.cpython-310.pyc +0 -0
  17. extensions/sd-webui-regional-prompter/scripts/attention.py +594 -0
  18. extensions/sd-webui-regional-prompter/scripts/latent.py +576 -0
  19. extensions/sd-webui-regional-prompter/scripts/regions.py +846 -0
  20. extensions/sd-webui-regional-prompter/scripts/rp.py +1154 -0
  21. extensions/sd-webui-regional-prompter/scripts/rps.py +284 -0
  22. extensions/sd-webui-regional-prompter/style.css +6 -0
extensions/sd-webui-regional-prompter/.github/FUNDING.yml ADDED
@@ -0,0 +1,13 @@
+ # These are supported funding model platforms
+
+ github: [hako-mikan] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
+ patreon: # Replace with a single Patreon username
+ open_collective: # Replace with a single Open Collective username
+ ko_fi: # Replace with a single Ko-fi username
+ tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
+ community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
+ liberapay: # Replace with a single Liberapay username
+ issuehunt: # Replace with a single IssueHunt username
+ otechie: # Replace with a single Otechie username
+ lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
+ custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,18 @@
+ ---
+ name: Bug report
+ about: Create a report to help us improve
+ title: ''
+ labels: ''
+ assignees: ''
+
+ ---
+
+ **Describe the bug**
+ A clear and concise description of what the bug is.
+
+ **Environment**
+ Web-UI version:
+ SD Version:
+ LoRA/LoCon/LoHa
+
+ **Other Enabled Extensions**
extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ name: Feature request
+ about: Suggest an idea for this project
+ title: ''
+ labels: ''
+ assignees: ''
+
+ ---
+
+
extensions/sd-webui-regional-prompter/.github/ISSUE_TEMPLATE/others.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ name: Others
+ about: Describe this issue template's purpose here.
+ title: ''
+ labels: ''
+ assignees: ''
+
+ ---
+
+
extensions/sd-webui-regional-prompter/LICENCE ADDED
@@ -0,0 +1,663 @@
1
+ GNU AFFERO GENERAL PUBLIC LICENSE
2
+ Version 3, 19 November 2007
3
+
4
+ Copyright (c) 2023 hako-mikan
5
+
6
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
7
+ Everyone is permitted to copy and distribute verbatim copies
8
+ of this license document, but changing it is not allowed.
9
+
10
+ Preamble
11
+
12
+ The GNU Affero General Public License is a free, copyleft license for
13
+ software and other kinds of works, specifically designed to ensure
14
+ cooperation with the community in the case of network server software.
15
+
16
+ The licenses for most software and other practical works are designed
17
+ to take away your freedom to share and change the works. By contrast,
18
+ our General Public Licenses are intended to guarantee your freedom to
19
+ share and change all versions of a program--to make sure it remains free
20
+ software for all its users.
21
+
22
+ When we speak of free software, we are referring to freedom, not
23
+ price. Our General Public Licenses are designed to make sure that you
24
+ have the freedom to distribute copies of free software (and charge for
25
+ them if you wish), that you receive source code or can get it if you
26
+ want it, that you can change the software or use pieces of it in new
27
+ free programs, and that you know you can do these things.
28
+
29
+ Developers that use our General Public Licenses protect your rights
30
+ with two steps: (1) assert copyright on the software, and (2) offer
31
+ you this License which gives you legal permission to copy, distribute
32
+ and/or modify the software.
33
+
34
+ A secondary benefit of defending all users' freedom is that
35
+ improvements made in alternate versions of the program, if they
36
+ receive widespread use, become available for other developers to
37
+ incorporate. Many developers of free software are heartened and
38
+ encouraged by the resulting cooperation. However, in the case of
39
+ software used on network servers, this result may fail to come about.
40
+ The GNU General Public License permits making a modified version and
41
+ letting the public access it on a server without ever releasing its
42
+ source code to the public.
43
+
44
+ The GNU Affero General Public License is designed specifically to
45
+ ensure that, in such cases, the modified source code becomes available
46
+ to the community. It requires the operator of a network server to
47
+ provide the source code of the modified version running there to the
48
+ users of that server. Therefore, public use of a modified version, on
49
+ a publicly accessible server, gives the public access to the source
50
+ code of the modified version.
51
+
52
+ An older license, called the Affero General Public License and
53
+ published by Affero, was designed to accomplish similar goals. This is
54
+ a different license, not a version of the Affero GPL, but Affero has
55
+ released a new version of the Affero GPL which permits relicensing under
56
+ this license.
57
+
58
+ The precise terms and conditions for copying, distribution and
59
+ modification follow.
60
+
61
+ TERMS AND CONDITIONS
62
+
63
+ 0. Definitions.
64
+
65
+ "This License" refers to version 3 of the GNU Affero General Public License.
66
+
67
+ "Copyright" also means copyright-like laws that apply to other kinds of
68
+ works, such as semiconductor masks.
69
+
70
+ "The Program" refers to any copyrightable work licensed under this
71
+ License. Each licensee is addressed as "you". "Licensees" and
72
+ "recipients" may be individuals or organizations.
73
+
74
+ To "modify" a work means to copy from or adapt all or part of the work
75
+ in a fashion requiring copyright permission, other than the making of an
76
+ exact copy. The resulting work is called a "modified version" of the
77
+ earlier work or a work "based on" the earlier work.
78
+
79
+ A "covered work" means either the unmodified Program or a work based
80
+ on the Program.
81
+
82
+ To "propagate" a work means to do anything with it that, without
83
+ permission, would make you directly or secondarily liable for
84
+ infringement under applicable copyright law, except executing it on a
85
+ computer or modifying a private copy. Propagation includes copying,
86
+ distribution (with or without modification), making available to the
87
+ public, and in some countries other activities as well.
88
+
89
+ To "convey" a work means any kind of propagation that enables other
90
+ parties to make or receive copies. Mere interaction with a user through
91
+ a computer network, with no transfer of a copy, is not conveying.
92
+
93
+ An interactive user interface displays "Appropriate Legal Notices"
94
+ to the extent that it includes a convenient and prominently visible
95
+ feature that (1) displays an appropriate copyright notice, and (2)
96
+ tells the user that there is no warranty for the work (except to the
97
+ extent that warranties are provided), that licensees may convey the
98
+ work under this License, and how to view a copy of this License. If
99
+ the interface presents a list of user commands or options, such as a
100
+ menu, a prominent item in the list meets this criterion.
101
+
102
+ 1. Source Code.
103
+
104
+ The "source code" for a work means the preferred form of the work
105
+ for making modifications to it. "Object code" means any non-source
106
+ form of a work.
107
+
108
+ A "Standard Interface" means an interface that either is an official
109
+ standard defined by a recognized standards body, or, in the case of
110
+ interfaces specified for a particular programming language, one that
111
+ is widely used among developers working in that language.
112
+
113
+ The "System Libraries" of an executable work include anything, other
114
+ than the work as a whole, that (a) is included in the normal form of
115
+ packaging a Major Component, but which is not part of that Major
116
+ Component, and (b) serves only to enable use of the work with that
117
+ Major Component, or to implement a Standard Interface for which an
118
+ implementation is available to the public in source code form. A
119
+ "Major Component", in this context, means a major essential component
120
+ (kernel, window system, and so on) of the specific operating system
121
+ (if any) on which the executable work runs, or a compiler used to
122
+ produce the work, or an object code interpreter used to run it.
123
+
124
+ The "Corresponding Source" for a work in object code form means all
125
+ the source code needed to generate, install, and (for an executable
126
+ work) run the object code and to modify the work, including scripts to
127
+ control those activities. However, it does not include the work's
128
+ System Libraries, or general-purpose tools or generally available free
129
+ programs which are used unmodified in performing those activities but
130
+ which are not part of the work. For example, Corresponding Source
131
+ includes interface definition files associated with source files for
132
+ the work, and the source code for shared libraries and dynamically
133
+ linked subprograms that the work is specifically designed to require,
134
+ such as by intimate data communication or control flow between those
135
+ subprograms and other parts of the work.
136
+
137
+ The Corresponding Source need not include anything that users
138
+ can regenerate automatically from other parts of the Corresponding
139
+ Source.
140
+
141
+ The Corresponding Source for a work in source code form is that
142
+ same work.
143
+
144
+ 2. Basic Permissions.
145
+
146
+ All rights granted under this License are granted for the term of
147
+ copyright on the Program, and are irrevocable provided the stated
148
+ conditions are met. This License explicitly affirms your unlimited
149
+ permission to run the unmodified Program. The output from running a
150
+ covered work is covered by this License only if the output, given its
151
+ content, constitutes a covered work. This License acknowledges your
152
+ rights of fair use or other equivalent, as provided by copyright law.
153
+
154
+ You may make, run and propagate covered works that you do not
155
+ convey, without conditions so long as your license otherwise remains
156
+ in force. You may convey covered works to others for the sole purpose
157
+ of having them make modifications exclusively for you, or provide you
158
+ with facilities for running those works, provided that you comply with
159
+ the terms of this License in conveying all material for which you do
160
+ not control copyright. Those thus making or running the covered works
161
+ for you must do so exclusively on your behalf, under your direction
162
+ and control, on terms that prohibit them from making any copies of
163
+ your copyrighted material outside their relationship with you.
164
+
165
+ Conveying under any other circumstances is permitted solely under
166
+ the conditions stated below. Sublicensing is not allowed; section 10
167
+ makes it unnecessary.
168
+
169
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
170
+
171
+ No covered work shall be deemed part of an effective technological
172
+ measure under any applicable law fulfilling obligations under article
173
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
174
+ similar laws prohibiting or restricting circumvention of such
175
+ measures.
176
+
177
+ When you convey a covered work, you waive any legal power to forbid
178
+ circumvention of technological measures to the extent such circumvention
179
+ is effected by exercising rights under this License with respect to
180
+ the covered work, and you disclaim any intention to limit operation or
181
+ modification of the work as a means of enforcing, against the work's
182
+ users, your or third parties' legal rights to forbid circumvention of
183
+ technological measures.
184
+
185
+ 4. Conveying Verbatim Copies.
186
+
187
+ You may convey verbatim copies of the Program's source code as you
188
+ receive it, in any medium, provided that you conspicuously and
189
+ appropriately publish on each copy an appropriate copyright notice;
190
+ keep intact all notices stating that this License and any
191
+ non-permissive terms added in accord with section 7 apply to the code;
192
+ keep intact all notices of the absence of any warranty; and give all
193
+ recipients a copy of this License along with the Program.
194
+
195
+ You may charge any price or no price for each copy that you convey,
196
+ and you may offer support or warranty protection for a fee.
197
+
198
+ 5. Conveying Modified Source Versions.
199
+
200
+ You may convey a work based on the Program, or the modifications to
201
+ produce it from the Program, in the form of source code under the
202
+ terms of section 4, provided that you also meet all of these conditions:
203
+
204
+ a) The work must carry prominent notices stating that you modified
205
+ it, and giving a relevant date.
206
+
207
+ b) The work must carry prominent notices stating that it is
208
+ released under this License and any conditions added under section
209
+ 7. This requirement modifies the requirement in section 4 to
210
+ "keep intact all notices".
211
+
212
+ c) You must license the entire work, as a whole, under this
213
+ License to anyone who comes into possession of a copy. This
214
+ License will therefore apply, along with any applicable section 7
215
+ additional terms, to the whole of the work, and all its parts,
216
+ regardless of how they are packaged. This License gives no
217
+ permission to license the work in any other way, but it does not
218
+ invalidate such permission if you have separately received it.
219
+
220
+ d) If the work has interactive user interfaces, each must display
221
+ Appropriate Legal Notices; however, if the Program has interactive
222
+ interfaces that do not display Appropriate Legal Notices, your
223
+ work need not make them do so.
224
+
225
+ A compilation of a covered work with other separate and independent
226
+ works, which are not by their nature extensions of the covered work,
227
+ and which are not combined with it such as to form a larger program,
228
+ in or on a volume of a storage or distribution medium, is called an
229
+ "aggregate" if the compilation and its resulting copyright are not
230
+ used to limit the access or legal rights of the compilation's users
231
+ beyond what the individual works permit. Inclusion of a covered work
232
+ in an aggregate does not cause this License to apply to the other
233
+ parts of the aggregate.
234
+
235
+ 6. Conveying Non-Source Forms.
236
+
237
+ You may convey a covered work in object code form under the terms
238
+ of sections 4 and 5, provided that you also convey the
239
+ machine-readable Corresponding Source under the terms of this License,
240
+ in one of these ways:
241
+
242
+ a) Convey the object code in, or embodied in, a physical product
243
+ (including a physical distribution medium), accompanied by the
244
+ Corresponding Source fixed on a durable physical medium
245
+ customarily used for software interchange.
246
+
247
+ b) Convey the object code in, or embodied in, a physical product
248
+ (including a physical distribution medium), accompanied by a
249
+ written offer, valid for at least three years and valid for as
250
+ long as you offer spare parts or customer support for that product
251
+ model, to give anyone who possesses the object code either (1) a
252
+ copy of the Corresponding Source for all the software in the
253
+ product that is covered by this License, on a durable physical
254
+ medium customarily used for software interchange, for a price no
255
+ more than your reasonable cost of physically performing this
256
+ conveying of source, or (2) access to copy the
257
+ Corresponding Source from a network server at no charge.
258
+
259
+ c) Convey individual copies of the object code with a copy of the
260
+ written offer to provide the Corresponding Source. This
261
+ alternative is allowed only occasionally and noncommercially, and
262
+ only if you received the object code with such an offer, in accord
263
+ with subsection 6b.
264
+
265
+ d) Convey the object code by offering access from a designated
266
+ place (gratis or for a charge), and offer equivalent access to the
267
+ Corresponding Source in the same way through the same place at no
268
+ further charge. You need not require recipients to copy the
269
+ Corresponding Source along with the object code. If the place to
270
+ copy the object code is a network server, the Corresponding Source
271
+ may be on a different server (operated by you or a third party)
272
+ that supports equivalent copying facilities, provided you maintain
273
+ clear directions next to the object code saying where to find the
274
+ Corresponding Source. Regardless of what server hosts the
275
+ Corresponding Source, you remain obligated to ensure that it is
276
+ available for as long as needed to satisfy these requirements.
277
+
278
+ e) Convey the object code using peer-to-peer transmission, provided
279
+ you inform other peers where the object code and Corresponding
280
+ Source of the work are being offered to the general public at no
281
+ charge under subsection 6d.
282
+
283
+ A separable portion of the object code, whose source code is excluded
284
+ from the Corresponding Source as a System Library, need not be
285
+ included in conveying the object code work.
286
+
287
+ A "User Product" is either (1) a "consumer product", which means any
288
+ tangible personal property which is normally used for personal, family,
289
+ or household purposes, or (2) anything designed or sold for incorporation
290
+ into a dwelling. In determining whether a product is a consumer product,
291
+ doubtful cases shall be resolved in favor of coverage. For a particular
292
+ product received by a particular user, "normally used" refers to a
293
+ typical or common use of that class of product, regardless of the status
294
+ of the particular user or of the way in which the particular user
295
+ actually uses, or expects or is expected to use, the product. A product
296
+ is a consumer product regardless of whether the product has substantial
297
+ commercial, industrial or non-consumer uses, unless such uses represent
298
+ the only significant mode of use of the product.
299
+
300
+ "Installation Information" for a User Product means any methods,
301
+ procedures, authorization keys, or other information required to install
302
+ and execute modified versions of a covered work in that User Product from
303
+ a modified version of its Corresponding Source. The information must
304
+ suffice to ensure that the continued functioning of the modified object
305
+ code is in no case prevented or interfered with solely because
306
+ modification has been made.
307
+
308
+ If you convey an object code work under this section in, or with, or
309
+ specifically for use in, a User Product, and the conveying occurs as
310
+ part of a transaction in which the right of possession and use of the
311
+ User Product is transferred to the recipient in perpetuity or for a
312
+ fixed term (regardless of how the transaction is characterized), the
313
+ Corresponding Source conveyed under this section must be accompanied
314
+ by the Installation Information. But this requirement does not apply
315
+ if neither you nor any third party retains the ability to install
316
+ modified object code on the User Product (for example, the work has
317
+ been installed in ROM).
318
+
319
+ The requirement to provide Installation Information does not include a
320
+ requirement to continue to provide support service, warranty, or updates
321
+ for a work that has been modified or installed by the recipient, or for
322
+ the User Product in which it has been modified or installed. Access to a
323
+ network may be denied when the modification itself materially and
324
+ adversely affects the operation of the network or violates the rules and
325
+ protocols for communication across the network.
326
+
327
+ Corresponding Source conveyed, and Installation Information provided,
328
+ in accord with this section must be in a format that is publicly
329
+ documented (and with an implementation available to the public in
330
+ source code form), and must require no special password or key for
331
+ unpacking, reading or copying.
332
+
333
+ 7. Additional Terms.
334
+
335
+ "Additional permissions" are terms that supplement the terms of this
336
+ License by making exceptions from one or more of its conditions.
337
+ Additional permissions that are applicable to the entire Program shall
338
+ be treated as though they were included in this License, to the extent
339
+ that they are valid under applicable law. If additional permissions
340
+ apply only to part of the Program, that part may be used separately
341
+ under those permissions, but the entire Program remains governed by
342
+ this License without regard to the additional permissions.
343
+
344
+ When you convey a copy of a covered work, you may at your option
345
+ remove any additional permissions from that copy, or from any part of
346
+ it. (Additional permissions may be written to require their own
347
+ removal in certain cases when you modify the work.) You may place
348
+ additional permissions on material, added by you to a covered work,
349
+ for which you have or can give appropriate copyright permission.
350
+
351
+ Notwithstanding any other provision of this License, for material you
352
+ add to a covered work, you may (if authorized by the copyright holders of
353
+ that material) supplement the terms of this License with terms:
354
+
355
+ a) Disclaiming warranty or limiting liability differently from the
356
+ terms of sections 15 and 16 of this License; or
357
+
358
+ b) Requiring preservation of specified reasonable legal notices or
359
+ author attributions in that material or in the Appropriate Legal
360
+ Notices displayed by works containing it; or
361
+
362
+ c) Prohibiting misrepresentation of the origin of that material, or
363
+ requiring that modified versions of such material be marked in
364
+ reasonable ways as different from the original version; or
365
+
366
+ d) Limiting the use for publicity purposes of names of licensors or
367
+ authors of the material; or
368
+
369
+ e) Declining to grant rights under trademark law for use of some
370
+ trade names, trademarks, or service marks; or
371
+
372
+ f) Requiring indemnification of licensors and authors of that
373
+ material by anyone who conveys the material (or modified versions of
374
+ it) with contractual assumptions of liability to the recipient, for
375
+ any liability that these contractual assumptions directly impose on
376
+ those licensors and authors.
377
+
378
+ All other non-permissive additional terms are considered "further
379
+ restrictions" within the meaning of section 10. If the Program as you
380
+ received it, or any part of it, contains a notice stating that it is
381
+ governed by this License along with a term that is a further
382
+ restriction, you may remove that term. If a license document contains
383
+ a further restriction but permits relicensing or conveying under this
384
+ License, you may add to a covered work material governed by the terms
385
+ of that license document, provided that the further restriction does
386
+ not survive such relicensing or conveying.
387
+
388
+ If you add terms to a covered work in accord with this section, you
389
+ must place, in the relevant source files, a statement of the
390
+ additional terms that apply to those files, or a notice indicating
391
+ where to find the applicable terms.
392
+
393
+ Additional terms, permissive or non-permissive, may be stated in the
394
+ form of a separately written license, or stated as exceptions;
395
+ the above requirements apply either way.
396
+
397
+ 8. Termination.
398
+
399
+ You may not propagate or modify a covered work except as expressly
400
+ provided under this License. Any attempt otherwise to propagate or
401
+ modify it is void, and will automatically terminate your rights under
402
+ this License (including any patent licenses granted under the third
403
+ paragraph of section 11).
404
+
405
+ However, if you cease all violation of this License, then your
406
+ license from a particular copyright holder is reinstated (a)
407
+ provisionally, unless and until the copyright holder explicitly and
408
+ finally terminates your license, and (b) permanently, if the copyright
409
+ holder fails to notify you of the violation by some reasonable means
410
+ prior to 60 days after the cessation.
411
+
412
+ Moreover, your license from a particular copyright holder is
413
+ reinstated permanently if the copyright holder notifies you of the
414
+ violation by some reasonable means, this is the first time you have
415
+ received notice of violation of this License (for any work) from that
416
+ copyright holder, and you cure the violation prior to 30 days after
417
+ your receipt of the notice.
418
+
419
+ Termination of your rights under this section does not terminate the
420
+ licenses of parties who have received copies or rights from you under
421
+ this License. If your rights have been terminated and not permanently
422
+ reinstated, you do not qualify to receive new licenses for the same
423
+ material under section 10.
424
+
425
+ 9. Acceptance Not Required for Having Copies.
426
+
427
+ You are not required to accept this License in order to receive or
428
+ run a copy of the Program. Ancillary propagation of a covered work
429
+ occurring solely as a consequence of using peer-to-peer transmission
430
+ to receive a copy likewise does not require acceptance. However,
431
+ nothing other than this License grants you permission to propagate or
432
+ modify any covered work. These actions infringe copyright if you do
433
+ not accept this License. Therefore, by modifying or propagating a
434
+ covered work, you indicate your acceptance of this License to do so.
435
+
436
+ 10. Automatic Licensing of Downstream Recipients.
437
+
438
+ Each time you convey a covered work, the recipient automatically
439
+ receives a license from the original licensors, to run, modify and
440
+ propagate that work, subject to this License. You are not responsible
441
+ for enforcing compliance by third parties with this License.
442
+
443
+ An "entity transaction" is a transaction transferring control of an
444
+ organization, or substantially all assets of one, or subdividing an
445
+ organization, or merging organizations. If propagation of a covered
446
+ work results from an entity transaction, each party to that
447
+ transaction who receives a copy of the work also receives whatever
448
+ licenses to the work the party's predecessor in interest had or could
449
+ give under the previous paragraph, plus a right to possession of the
450
+ Corresponding Source of the work from the predecessor in interest, if
451
+ the predecessor has it or can get it with reasonable efforts.
452
+
453
+ You may not impose any further restrictions on the exercise of the
454
+ rights granted or affirmed under this License. For example, you may
455
+ not impose a license fee, royalty, or other charge for exercise of
456
+ rights granted under this License, and you may not initiate litigation
457
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
458
+ any patent claim is infringed by making, using, selling, offering for
459
+ sale, or importing the Program or any portion of it.
460
+
461
+ 11. Patents.
462
+
463
+ A "contributor" is a copyright holder who authorizes use under this
464
+ License of the Program or a work on which the Program is based. The
465
+ work thus licensed is called the contributor's "contributor version".
466
+
467
+ A contributor's "essential patent claims" are all patent claims
468
+ owned or controlled by the contributor, whether already acquired or
469
+ hereafter acquired, that would be infringed by some manner, permitted
470
+ by this License, of making, using, or selling its contributor version,
471
+ but do not include claims that would be infringed only as a
472
+ consequence of further modification of the contributor version. For
473
+ purposes of this definition, "control" includes the right to grant
474
+ patent sublicenses in a manner consistent with the requirements of
475
+ this License.
476
+
477
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
478
+ patent license under the contributor's essential patent claims, to
479
+ make, use, sell, offer for sale, import and otherwise run, modify and
480
+ propagate the contents of its contributor version.
481
+
482
+ In the following three paragraphs, a "patent license" is any express
483
+ agreement or commitment, however denominated, not to enforce a patent
484
+ (such as an express permission to practice a patent or covenant not to
485
+ sue for patent infringement). To "grant" such a patent license to a
486
+ party means to make such an agreement or commitment not to enforce a
487
+ patent against the party.
488
+
489
+ If you convey a covered work, knowingly relying on a patent license,
490
+ and the Corresponding Source of the work is not available for anyone
491
+ to copy, free of charge and under the terms of this License, through a
492
+ publicly available network server or other readily accessible means,
493
+ then you must either (1) cause the Corresponding Source to be so
494
+ available, or (2) arrange to deprive yourself of the benefit of the
495
+ patent license for this particular work, or (3) arrange, in a manner
496
+ consistent with the requirements of this License, to extend the patent
497
+ license to downstream recipients. "Knowingly relying" means you have
498
+ actual knowledge that, but for the patent license, your conveying the
499
+ covered work in a country, or your recipient's use of the covered work
500
+ in a country, would infringe one or more identifiable patents in that
501
+ country that you have reason to believe are valid.
502
+
503
+ If, pursuant to or in connection with a single transaction or
504
+ arrangement, you convey, or propagate by procuring conveyance of, a
505
+ covered work, and grant a patent license to some of the parties
506
+ receiving the covered work authorizing them to use, propagate, modify
507
+ or convey a specific copy of the covered work, then the patent license
508
+ you grant is automatically extended to all recipients of the covered
509
+ work and works based on it.
510
+
511
+ A patent license is "discriminatory" if it does not include within
512
+ the scope of its coverage, prohibits the exercise of, or is
513
+ conditioned on the non-exercise of one or more of the rights that are
514
+ specifically granted under this License. You may not convey a covered
515
+ work if you are a party to an arrangement with a third party that is
516
+ in the business of distributing software, under which you make payment
517
+ to the third party based on the extent of your activity of conveying
518
+ the work, and under which the third party grants, to any of the
519
+ parties who would receive the covered work from you, a discriminatory
520
+ patent license (a) in connection with copies of the covered work
521
+ conveyed by you (or copies made from those copies), or (b) primarily
522
+ for and in connection with specific products or compilations that
523
+ contain the covered work, unless you entered into that arrangement,
524
+ or that patent license was granted, prior to 28 March 2007.
525
+
526
+ Nothing in this License shall be construed as excluding or limiting
527
+ any implied license or other defenses to infringement that may
528
+ otherwise be available to you under applicable patent law.
529
+
530
+ 12. No Surrender of Others' Freedom.
531
+
532
+ If conditions are imposed on you (whether by court order, agreement or
533
+ otherwise) that contradict the conditions of this License, they do not
534
+ excuse you from the conditions of this License. If you cannot convey a
535
+ covered work so as to satisfy simultaneously your obligations under this
536
+ License and any other pertinent obligations, then as a consequence you may
537
+ not convey it at all. For example, if you agree to terms that obligate you
538
+ to collect a royalty for further conveying from those to whom you convey
539
+ the Program, the only way you could satisfy both those terms and this
540
+ License would be to refrain entirely from conveying the Program.
541
+
542
+ 13. Remote Network Interaction; Use with the GNU General Public License.
543
+
544
+ Notwithstanding any other provision of this License, if you modify the
545
+ Program, your modified version must prominently offer all users
546
+ interacting with it remotely through a computer network (if your version
547
+ supports such interaction) an opportunity to receive the Corresponding
548
+ Source of your version by providing access to the Corresponding Source
549
+ from a network server at no charge, through some standard or customary
550
+ means of facilitating copying of software. This Corresponding Source
551
+ shall include the Corresponding Source for any work covered by version 3
552
+ of the GNU General Public License that is incorporated pursuant to the
553
+ following paragraph.
554
+
555
+ Notwithstanding any other provision of this License, you have
556
+ permission to link or combine any covered work with a work licensed
557
+ under version 3 of the GNU General Public License into a single
558
+ combined work, and to convey the resulting work. The terms of this
559
+ License will continue to apply to the part which is the covered work,
560
+ but the work with which it is combined will remain governed by version
561
+ 3 of the GNU General Public License.
562
+
563
+ 14. Revised Versions of this License.
564
+
565
+ The Free Software Foundation may publish revised and/or new versions of
566
+ the GNU Affero General Public License from time to time. Such new versions
567
+ will be similar in spirit to the present version, but may differ in detail to
568
+ address new problems or concerns.
569
+
570
+ Each version is given a distinguishing version number. If the
571
+ Program specifies that a certain numbered version of the GNU Affero General
572
+ Public License "or any later version" applies to it, you have the
573
+ option of following the terms and conditions either of that numbered
574
+ version or of any later version published by the Free Software
575
+ Foundation. If the Program does not specify a version number of the
576
+ GNU Affero General Public License, you may choose any version ever published
577
+ by the Free Software Foundation.
578
+
579
+ If the Program specifies that a proxy can decide which future
580
+ versions of the GNU Affero General Public License can be used, that proxy's
581
+ public statement of acceptance of a version permanently authorizes you
582
+ to choose that version for the Program.
583
+
584
+ Later license versions may give you additional or different
585
+ permissions. However, no additional obligations are imposed on any
586
+ author or copyright holder as a result of your choosing to follow a
587
+ later version.
588
+
589
+ 15. Disclaimer of Warranty.
590
+
591
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599
+
600
+ 16. Limitation of Liability.
601
+
602
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610
+ SUCH DAMAGES.
611
+
612
+ 17. Interpretation of Sections 15 and 16.
613
+
614
+ If the disclaimer of warranty and limitation of liability provided
615
+ above cannot be given local legal effect according to their terms,
616
+ reviewing courts shall apply local law that most closely approximates
617
+ an absolute waiver of all civil liability in connection with the
618
+ Program, unless a warranty or assumption of liability accompanies a
619
+ copy of the Program in return for a fee.
620
+
621
+ END OF TERMS AND CONDITIONS
622
+
623
+ How to Apply These Terms to Your New Programs
624
+
625
+ If you develop a new program, and you want it to be of the greatest
626
+ possible use to the public, the best way to achieve this is to make it
627
+ free software which everyone can redistribute and change under these terms.
628
+
629
+ To do so, attach the following notices to the program. It is safest
630
+ to attach them to the start of each source file to most effectively
631
+ state the exclusion of warranty; and each file should have at least
632
+ the "copyright" line and a pointer to where the full notice is found.
633
+
634
+ <one line to give the program's name and a brief idea of what it does.>
635
+ Copyright (C) <year> <name of author>
636
+
637
+ This program is free software: you can redistribute it and/or modify
638
+ it under the terms of the GNU Affero General Public License as published
639
+ by the Free Software Foundation, either version 3 of the License, or
640
+ (at your option) any later version.
641
+
642
+ This program is distributed in the hope that it will be useful,
643
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
644
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645
+ GNU Affero General Public License for more details.
646
+
647
+ You should have received a copy of the GNU Affero General Public License
648
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
649
+
650
+ Also add information on how to contact you by electronic and paper mail.
651
+
652
+ If your software can interact with users remotely through a computer
653
+ network, you should also make sure that it provides a way for users to
654
+ get its source. For example, if your program is a web application, its
655
+ interface could display a "Source" link that leads users to an archive
656
+ of the code. There are many ways you could offer source, and different
657
+ solutions will be better for different programs; see section 13 for the
658
+ specific requirements.
659
+
660
+ You should also get your employer (if you work as a programmer) or school,
661
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
662
+ For more information on this, and how to apply and follow the GNU AGPL, see
663
+ <https://www.gnu.org/licenses/>.
extensions/sd-webui-regional-prompter/README.JP.md ADDED
@@ -0,0 +1,297 @@
1
+ # Regional Prompter
2
+ ![top](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/top.jpg)
3
+ - custom script for [AUTOMATIC1111's stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
4
+ - Different prompts can be specified for different regions
5
+
6
+ - [AUTOMATIC1111's stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) 用のスクリプトです
7
+ 垂直/水平方向に分割された領域ごとに異なるプロンプトを指定できます
8
+
9
+ ## Language control / 言語制御
10
+ ENGLISH: [![en](https://img.shields.io/badge/lang-en-red.svg)](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/README.md)
11
+
12
+ ## 更新情報
13
+ - 新機能「[差分生成・差分アニメ](differential_ja.md)」
14
+ - [APIを通しての利用について](#apiを通した利用方法)
15
+ - プロンプトによる領域指定の[チュートリアル](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_ja.md)
16
+ - 新機能 : [インペイントによる領域指定](#inpaint) (thanks [Symbiomatrix](https://github.com/Symbiomatrix))
17
+ - 新機能 : [プロンプトによる領域指定](#divprompt)
18
+
19
+
20
+ [Symbiomatrix](https://github.com/Symbiomatrix)氏の協力によりより[柔軟な領域指定](#2次元領域指定実験的機能)が可能になりました。
21
+
22
+
23
+ # 概要
24
+ Latent couple extensionではプロンプトごとにU-Netの計算を行っていますが、このエクステンションではU-Netの内部でプロンプトごとの計算を行います。詳しくは[こちら](https://note.com/gcem156/n/nb3d516e376d7)をご参照ください。アイデアを発案されたfurusu様に感謝いたします。
25
+
26
+ ## 使い方
27
+ 次の画像の作り方を解説しつつ、使い方を説明します。
28
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample.jpg)
29
+ 以下がプロンプトです。
30
+ ```
31
+ green hair twintail BREAK
32
+ red blouse BREAK
33
+ blue skirt
34
+ ```
35
+ 設定
36
+ ```
37
+ Active : On
38
+ Use base prompt : Off
39
+ Divide mode : Vertical
40
+ Divide Ratio : 1,1,1
41
+ Base Ratio :
42
+ ```
43
+ この設定では縦方向に三分割し、上から順にgreen hair twintail ,red blouse ,blue skirtというプロンプトを適用しています。
44
+ ### Active
45
+ ここにチェックが入っている場合有効化します。
46
+
47
+ ### Prompt
48
+ 領域別のプロンプト同士はBREAKで区切ります。水平の場合は左から、垂直の場合は上から順にプロンプトを入力します。
49
+ ネガティブプロンプトもBREAKで区切ることで領域ごとに設定できますが、BREAKを入力しない場合すべての領域に同一のネガティブプロンプトが設定されます。
50
+
51
+ ### Use base prompt
52
+ すべての領域に共通のプロンプト(ベースプロンプト)を使用したい場合にチェックを入れます。領域間で一貫した場面にしたい場合などに使ってください。
53
+ ベースプロンプトを使用する場合、BREAKで区切られた最初のプロンプトがベースとして扱われます。
54
+ ADDBASEが入力された場合、自動的にオンになります。
55
+
56
+ ### Base ratio
57
+ ベースプロンプトの比率を設定します。0.2と入力された場合、ベースの割合が0.2になります。領域ごとにも指定可能で、0.2,0.3,0.5などと入力できます。単一の値を入力した場合はすべての領域に同じ値が適応されます。
58
+
59
+ ### Divide ratio
60
+ 領域の広さを指定します。1,1,1と入力した場合、三等分されます(33.3%, 33.3%, 33.3%)。3,1,1と入力した場合は60%, 20%, 20%となります。小数点でも入力可能です。0.1,0.1,0.1は1,1,1と同じ結果になります。
61
+
62
+ ### calculation mode
63
+ 内部ではAttention modeではBREAKを使用し、Latent modeではANDを使用しています。AND/BREAKは使用するmodeに応じて自動的に変換されますが、BREAK,ANDどちらをプロンプトに入力していても問題ありません。
64
+ #### Attention
65
+ 通常はこちらを使用して下さい
66
+ #### Latent
67
+ LoRAを分離したい場合こちらを使用して下さい。生成時間は長くなりますが、ある程度LoRAを分離できます。
68
+
69
+ [ねんどろいど](https://civitai.com/models/7269/nendoroid-figures-lora),
70
+ [figma](https://civitai.com/models/7984/figma-anime-figures)LoRAを左右に分離して作成した例。
71
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample2.jpg" width="400">
72
+
73
+ ### Split mode
74
+ 分割方向を指定します。水平、垂直方向が指定できます。
75
+
76
+ ### Use common prompt
77
+ このオプションを有効化すると最初のプロンプトをすべてのプロンプトに加算します。
78
+ `ADDCOMM`が入力された場合自動的にオンになります。
79
+ ```
80
+ best quality, 20yo lady in garden BREAK
81
+ green hair twintail BREAK
82
+ red blouse BREAK
83
+ blue skirt
84
+ ```
85
+ このようなプロンプトがあるときに、この機能を有効化すると以下のように扱われます。
86
+ ```
87
+ best quality, 20yo lady in garden, green hair twintail BREAK
88
+ best quality, 20yo lady in garden, red blouse BREAK
89
+ best quality, 20yo lady in garden, blue skirt
90
+ ```
91
+ よって、3つの領域に分ける場合4つのプロンプトをセットする必要があります。Use base promptが有効になっている場合は5つ必要になります。設定順はcommon,base, prompt1,prompt2,...となります。
92
+
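For illustration, here is a minimal Python sketch of the common-prompt expansion described above (the names `expand_common` etc. are illustrative only; the extension's own handling lives in scripts/rp.py and also covers ADDBASE, ADDCOL, ADDROW and other keywords):

```python
def expand_common(prompt: str) -> list[str]:
    # Sketch only: when "Use common prompt" is on, the first BREAK-separated
    # chunk is prepended to every remaining region prompt.
    chunks = [p.strip() for p in prompt.split("BREAK")]
    common, regions = chunks[0], chunks[1:]
    return [f"{common}, {r}" for r in regions]

print(expand_common(
    "best quality, 20yo lady in garden BREAK "
    "green hair twintail BREAK red blouse BREAK blue skirt"
))
# ['best quality, 20yo lady in garden, green hair twintail',
#  'best quality, 20yo lady in garden, red blouse',
#  'best quality, 20yo lady in garden, blue skirt']
```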
93
+ ### 2次元領域指定(実験的機能)
94
+ 領域を2次元的に指定できます。特別なセパレイター(`ADDCOL/ADDROW`)を用いることで領域を縦横に分割することができます。左上を始点として、`ADDCOL`で区切ると横方向、`ADDROW`で区切ると縦方向に分割されます。分割の比率はセミコロンで区切られた比率で指定します。以下に例を示します。`BREAK`のみで記述し、比率のみで記述することも可能ですが、明示的にCOL/ROWを指定した方がわかりやすいです。最初のセパレーターとして`ADDBASE`を使用すると、ベースプロンプトになります。比率を指定しない場合や比率がセパレーターの数と一致しないときは自動的にすべて等倍として処理されます。`ADDCOMM`を最初のセパレーターとして入力した場合共通プロンプトになります。Divide modeで選択された方向は有効であり、上から/左から順に`ADDCOL/ADDROW`が処理されます。
95
+
96
+ ```
97
+ (blue sky:1.2) ADDCOL
98
+ green hair twintail ADDCOL
99
+ (aquarium:1.3) ADDROW
100
+ (messy desk:1.2) ADDCOL
101
+ orange dress and sofa
102
+ ```
103
+
104
+ ```
105
+ Active : On
106
+ Use base prompt : Off
107
+ Divide mode : Columns
108
+ Divide Ratio : 1,2,1,1;2,4,6
109
+ Base Ratio :
110
+ ```
111
+
112
+ ![2d](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/2d.jpg)
113
+
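To make the ratio string concrete, here is a small parsing sketch under one plausible reading (an assumption for illustration; the authoritative parser is in scripts/regions.py): each `;`-separated group describes one main split, its first number being that split's share of the main direction and the remaining numbers being its sub-splits.

```python
def parse_ratio(spec: str):
    # Sketch only: normalise a "1,2,1,1;2,4,6"-style string into fractions.
    groups = []
    for part in spec.split(";"):
        nums = [float(x) for x in part.split(",")]
        main, subs = (nums[0], nums[1:]) if len(nums) > 1 else (nums[0], [1.0])
        total = sum(subs)
        groups.append((main, [s / total for s in subs]))
    main_total = sum(m for m, _ in groups)
    return [(m / main_total, subs) for m, subs in groups]

for share, subs in parse_ratio("1,2,1,1;2,4,6"):
    print(f"main share {share:.2f}, sub-splits {[round(s, 2) for s in subs]}")
# main share 0.33, sub-splits [0.5, 0.25, 0.25]
# main share 0.67, sub-splits [0.4, 0.6]
```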
114
+ ## <a id="inpaint">Mask regions aka inpaint+ (experimental function)</a>
115
+ 手描きマスク、またはアップロードされたマスクを使って領域を指定することができるようになりました。
116
+ - まず、`Columns` / `Rows` の横にある `mask divide mode` に切り替えていることを確認してください。そうしないと、マスクは無視され、領域は通常通り分割されます。
117
+ - キャンバスの幅と高さを希望する画像に合わせて設定し、`create mask area`を押してください。異なる比率やサイズを指定すると、マスクが正確に適用されないことがあります。(インペイントの「リサイズだけ」のように)。
118
+ - キャンバス領域に必要な領域の輪郭を描くか、完全に塗りつぶした後、`draw region`を押してください。これにより、マスクに対応する塗りつぶし多角形が追加され、`region` の番号に従って色が付けられます。
119
+ - `draw region` を押すと、region が +1 ずつ増えていき、次のregion を素早く描画することができます。また、後でマスクを作るためにどのリージョンが使われたかのリストも保持されます。現在、最大で ~~360~~ 256 のリージョンが使用できます。
120
+ - 既存のリージョンに追加するには、以前に使用された色を選択し、通常通り描画することが可能です。現在のところ、新しいマスク領域以外の領域をクリアする方法はありません(そのうちクリア機能は追加されるかもしれません)。
121
+ - `make mask`ボタンは、以前に描いたリージョンについて、`region`の番号で指定されたマスクを表示します。マスクはリージョン固有の色によって検出されます。
122
+ - リージョンマスクの準備ができたら、いつも通りプロンプトを書きます: 分割比率は無視されます。`base ratio`は各リージョンに適用されます。すべてのオプションがサポートされ、すべての BREAK / ADDX キーワード (ROW/COL は BREAK に変換されるだけです)。アテンションモードとレイテンモードがサポートされています。
123
+ - ベースはマスクモードでは特別な変化をします: base が off のとき、色がついていない領域は最初のマスクに追加されます (したがって、最初のプロンプトで埋められるべきです)。base がオンのとき、色のついていないリージョンは base のプロンプトを反映します、色のついたリージョンは通常の base のウェイトを受け取ります。このため、baseはbase weight = 0で、シーン/背景を指定するのに特に便利なツールです。
124
+ - 描画の代わりにマスクをアップロードしたい人向けです: この機能はまだ **非常に多くのWIP** であることに注意してください。マスクを適用するためには、すべての色に何らかのタグを付ける必要があります(コードでLCOLOUR変数を変更するか、手動で各色を画像に追加してください)。色はすべて `HSV(degree,50%,50%)` の変形で、 degree (0:360) は以前のすべての色から最大に離れた値として計算されます(そのため、色は容易に区別できます)。最初のいくつかの値は、基本的に 0、180、90、270、45、135、225、315、22.5などです。色の選択によって、どの領域に対応するかが決まります(この色相の並びの生成例は、下のスケッチも参照してください)。
125
+
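The sketch below reproduces the hue sequence described in the list above and converts it to RGB with HSV(degree, 50%, 50%). It is illustrative only; the exact palette used for mask matching is defined by the LCOLOUR logic in the scripts, so treat this as an approximation.

```python
import colorsys

def region_hues(n: int) -> list[float]:
    # One way to generate hues maximally separated from all previous ones:
    # 0, then 180, then 90/270, then 45/135/225/315, then 22.5, ...
    degs, denom = [0.0], 2
    while len(degs) < n:
        degs += [360.0 * k / denom for k in range(1, denom, 2)]
        denom *= 2
    return degs[:n]

def region_colours(n: int):
    # HSV(degree, 50%, 50%) converted to 8-bit RGB, e.g. for painting mask images.
    return [tuple(round(c * 255) for c in colorsys.hsv_to_rgb(d / 360.0, 0.5, 0.5))
            for d in region_hues(n)]

print(region_hues(9))    # hue order: 0, 180, 90, 270, 45, 135, 225, 315, 22.5
print(region_colours(3)) # RGB triplets for regions 1..3
```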
126
+ ![RegionalMaskGuideB](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/RegionalMaskGuideB.jpg)
127
+
128
+ ### visualise and make template
129
+ 複雑な領域指定をする場合など領域を可視化して、テンプレートを作成します。
130
+
131
+ ![tutorial](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/tutorial.jpg)
132
+
133
+ 入力を終えてボタンを押すと、画像のように領域とテンプレートが出力されます。テンプレートをコピペして使用して下さい。以下は入力例と出力結果です。
134
+
135
+ ```
136
+ fantasy ADDCOMM
137
+ sky ADDROW
138
+ castle ADDROW
139
+ street stalls ADDCOL
140
+ 2girls eating and walking on street ADDCOL
141
+ street stalls
142
+ ```
143
+
144
+ ![tutorial](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample3.jpg
145
+ )
146
+
147
+
148
+ '1,1;2,3,2;3,2,3'を指定してColumnsを選んだ場合、
149
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle1.png)
150
+ Rowsに変えると
151
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle2.png)
152
+ flipを有効にすると
153
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle3.png)
154
+
155
+ ## <a id="divprompt">region specification by prompt (experimental)</a>
156
+ プロンプトによる領域指定です。これまでの領域指定では分割された領域に対してプロンプトを設定していました。この領域指定にはいくつかの問題があり、例えば縦に分割した場合、指定したオブジェクトがそこに限定されてしまいます。プロンプトによる領域指定では指定したプロンプトを反映した領域が画像生成中に作成され、そこに対応したプロンプトが適用されます。よって、より柔軟な領域指定が可能になります。以下に例を示します。`apple printed`は`shirt`にだけ効果が反映されて欲しいわけですが、shirtには反映されず、林檎の現物が出てきたりするわけです。
157
+ ```
158
+ lady smiling and sitting, twintails green hair, white skirt, apple printed shirt
159
+ ```
160
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample1.png)
161
+ そこで`apple printed`の強度を1.4にするとこうなるわけです。
162
+
163
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample4.png)
164
+ プロンプトによる領域指定ではshirtに対して領域を計算して、そこに`apple printed`を適用します。
165
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample6.png)
166
+ ```
167
+ lady smiling and sitting, twintails green hair, white skirt, shirt BREAK
168
+ (apple printed:1.4),shirt
169
+ ```
170
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample2.png)
171
+ すると、目的の効果が得られるわけです。これまでの領域指定ではshirtの位置を詳細に指定しなければいけなかったわけですが、その必要がなくなりました。
172
+ ### つかいかた
173
+ ### 書式
174
+ ```
175
+ baseprompt target1 target2 BREAK
176
+ effect1, target1 BREAK
177
+ effect2 ,target2
178
+ ```
179
+ まず、ベースプロンプトを書きます。ベースプロンプトにはマスクを作成する単語(target1、target2)を書きます。次にBREAKで区切ります。次に、target1に対応するプロンプトを書きます。そしてカンマを入力しtarget1を記載します。ベースプロンプトのtargetの順番とBREAKで区切られたtargetの順番は前後しても問題ありません。targetは大まかな単語でも問題なく、例えば`tops`と指定して、`effect`に`red camisole`などと書いてもいいわけです。
180
+
181
+ ```
182
+ target2 baseprompt target1 BREAK
183
+ effect1, target1 BREAK
184
+ effect2 ,target2
185
+ ```
186
+ ベースプロンプトの順番は考慮されません。effectの順番は考慮されます。
187
+
188
+ ### threshold
189
+ プロンプトによって作られるマスクの判定に使われる閾値です。これは対象となるプロンプトによって範囲が大きく異なるのでマスクの数だけ設定できます。複数の領域を使うときはカンマで区切って入力して下さい。例えば髪は領域が曖昧になりがちなので小さな値が必要ですが、顔は領域が大きくなりがちなので大きめの値で問題ありません。これはBREAKで区切られた順に並べて下さい。
190
+
191
+ ```
192
+ a lady ,hair, face BREAK
193
+ red, hair BREAK
194
+ tanned ,face
195
+ ```
196
+ `threshold : 0.4,0.6`
197
+ 単一の値が入力された場合、すべての領域に同じ値が適用されます。
198
+
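As a rough sketch of what the threshold does (assumed behaviour for illustration: the attention map of each target word is normalised and binarised, with one threshold per BREAK-separated region and a single value broadcast to all regions):

```python
import numpy as np

def masks_from_attention(attn_maps, thresholds):
    # attn_maps  : list of 2-D arrays (one per BREAK-separated region)
    # thresholds : list of floats, or a single float applied to every region
    if isinstance(thresholds, (int, float)):
        thresholds = [float(thresholds)] * len(attn_maps)
    masks = []
    for amap, th in zip(attn_maps, thresholds):
        amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)  # normalise to 0..1
        masks.append(amap >= th)
    return masks

# e.g. threshold "0.4,0.6" for the hair / face example above:
hair_attn = np.random.rand(64, 64)
face_attn = np.random.rand(64, 64)
hair_mask, face_mask = masks_from_attention([hair_attn, face_attn], [0.4, 0.6])
```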
199
+ ### Prompt and Prompt-EX
200
+ 領域がかぶった場合の計算方式です。Promptだと加算されます。Prompt-EXだと順番に上書きされます。つまり、target1とtarget2の領域が重複していた場合、target2の領域が優先されます。target1にtopsを指定してthretholdを小さくして大きな領域にして、target2をbottomsとしてthresholdを大きくすれば良い分離が得られます。この場合、targetは領域が大きい順に記載されるべきです。
201
+
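A minimal sketch of the difference, under the assumed semantics described above (Prompt lets overlapping masks add up, Prompt-EX lets later targets overwrite earlier ones); names are illustrative:

```python
import numpy as np

def combine_masks(masks, mode="Prompt"):
    # Toy illustration of overlap handling for boolean region masks.
    masks = [m.copy() for m in masks]
    if mode == "Prompt-EX":
        # Later targets win: remove every later region from earlier masks.
        for i in range(len(masks)):
            for later in masks[i + 1:]:
                masks[i] &= ~later
    # In plain "Prompt" mode overlapping areas simply receive both prompts.
    return masks

tops = np.zeros((4, 4), bool)
tops[:3, :] = True            # large, loosely thresholded region
bottoms = np.zeros((4, 4), bool)
bottoms[2:, :] = True         # overlaps the last row of `tops`
a, b = combine_masks([tops, bottoms], mode="Prompt-EX")
print(a.astype(int))          # row 2 now belongs only to `bottoms`
```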
202
+ ### Accuracy
203
+ 512 x 512 サイズの場合、Attention modeではU-Netの深い層では 8 x 8 で計算されます。これでは小さい領域に対しては意味をなしません。よって領域の浸食が起きやすくなります。Latentモードでは 64 x 64 で計算されるため領域が厳密になります。
204
+ ```
205
+ girl hair twintail frills,ribbons, dress, face BREAK
206
+ girl, ,face
207
+ ```
208
+ Prompt-EX/Attention
209
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample5.png)
210
+ Prompt-EX/Latent
211
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample3.png)
212
+
213
+
214
+
215
+ ### ベースと共通の違い
216
+ ```
217
+ a girl ADDCOMM (or ADDBASE)
218
+ red hair BREAK
219
+ green dress
220
+ ```
221
+ と言うプロンプトがあった場合、共通の場合には領域1は`a girl red hair`というプロンプトで生成されます。ベースの場合で比率が0.2の場合には` (a girl) * 0.2 + (red hair) * 0.8`というプロンプトで生成されます。基本的には共通プロンプトで問題ありません。共通プロンプトの効きが強いという場合などはベースにしてみてもいいかもしれません。
222
+
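In other words, the base prompt is blended into each region by weight rather than concatenated. A minimal sketch of that interpolation, assuming the weights are applied to the text-conditioning tensors (names and shapes are illustrative only):

```python
import torch

def mix_base(cond_base: torch.Tensor, cond_region: torch.Tensor, base_ratio: float) -> torch.Tensor:
    # Sketch: blend base and region conditioning, e.g. 0.2 * base + 0.8 * region.
    return base_ratio * cond_base + (1.0 - base_ratio) * cond_region

cond_girl = torch.randn(77, 768)   # "a girl"  (toy CLIP-sized embedding)
cond_hair = torch.randn(77, 768)   # "red hair"
region1 = mix_base(cond_girl, cond_hair, base_ratio=0.2)
```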
223
+ ## APIを通した利用方法
224
+ APIを通してこの拡張を利用する場合には次の書式を使います。
225
+ ```
226
+ "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
227
+ "alwayson_scripts": {
228
+ "Regional Prompter": {
229
+ "args": [True,False,"Matrix","Vertical","Mask","Prompt","1,1,1","",False,False,False,"Attention",False,"0","0","0",""]
230
+ }}
231
+ ```
232
+ `args`の各設定は下の表を参照して下さい。No.は順番に対応します。typeがtextになっている場合は`""`で囲って下さい。3-6のモード設定は3.のモードで選択したモードに対応するサブモード以外は無視されます。17.のマスクは画像データのアドレスを指定して下さい。アドレスは絶対パスか、web-uiルートからの相対パスが利用できます。マスクはマスクの項で指定された色を使用して作成して下さい。
233
+
234
+ | No. | setting |choice| type | default |
235
+ | ---- | ---- |---- |----| ----|
236
+ | 1 | Active |True, False|Bool|False|
237
+ | 2 | debug |True, False|Bool|False|
238
+ | 3 | Mode |Matrix, Mask, Prompt|Text| Matrix|
239
+ | 4 | Mode (Matrix)|Horizontal, Vertical, Columns, Rows|Text|Columns
240
+ | 5 | Mode (Mask)| Mask |Text|Mask
241
+ | 6 | Mode (Prompt)| Prompt, Prompt-Ex |Text|Prompt
242
+ | 7 | Ratios||Text|1,1,1
243
+ | 8 | Base Ratios | |Text| 0
244
+ | 9 | Use Base |True, False|Bool|False|
245
+ | 10 | Use Common |True, False|Bool|False|
246
+ | 11 | Use Neg-Common |True, False|Bool| False|
247
+ | 12 | Calcmode| Attention, Latent | Text | Attention
248
+ | 13 | Not Change AND |True, False|Bool|False|
249
+ | 14 | LoRA Textencoder ||Text|0|
250
+ | 15 | LoRA U-Net | | Text | 0
251
+ | 16 | Threshold | |Text| 0
252
+ | 17 | Mask | | Text |
253
+
254
+ ### 設定例
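For reference, the same kind of request can be sent from Python. A minimal sketch using `requests` against a local Web-UI started with `--api` (the `/sdapi/v1/txt2img` endpoint and base64 image handling follow the standard Web-UI API; adjust host, port, and the args list to your setup):

```python
import base64
import requests

payload = {
    "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
    "negative_prompt": "",
    "steps": 20,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "Regional Prompter": {
            # Argument order follows the table above (Active, debug, Mode, ...).
            "args": [True, False, "Matrix", "Vertical", "Mask", "Prompt",
                     "1,1,1", "", False, False, False, "Attention",
                     False, "0", "0", "0", ""]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
with open("regional_prompter_sample.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```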
255
+ #### Matrix
256
+ ```
257
+ "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
258
+ "alwayson_scripts": {
259
+ "Regional Prompter": {
260
+ "args": [True,False,"Matrix","Vertical","Mask","Prompt","1,1,1","",False,False,False,"Attention",False,"0","0","0",""]
261
+ }}
262
+ ```
263
+ 結果
264
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample1.png)
265
+
266
+ #### Mask
267
+ ```
268
+ "prompt": "masterpiece,best quality 8k photo of BREAK (red:1.2) forest BREAK yellow chair BREAK blue dress girl",
269
+ "alwayson_scripts": {
270
+ "Regional Prompter": {
271
+ "args": [True,False,"Mask","Vertical","Mask","Prompt","1,1,1","",False,True,False,"Attention",False,"0","0","0","mask.png"]
272
+ ```
273
+ 使用したマスク
274
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/mask.png)
275
+ 結果
276
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample2.png)
277
+
278
+ ### Prompt
279
+ ```
280
+ "prompt": "masterpiece,best quality 8k photo of BREAK a girl hair blouse skirt with bag BREAK (red:1.8) ,hair BREAK (green:1.5),blouse BREAK,(blue:1.7), skirt BREAK (yellow:1.7), bag",
281
+ "alwayson_scripts": {
282
+ "Regional Prompter": {
283
+ "args": [True,False,"Prompt","Vertical","Mask","Prompt-EX","1,1,1","",False,True,False,"Attention",False,"0","0","0.5,0.6,0.5",""]
284
+ }}
285
+ ```
286
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample3.png)
287
+
288
+ ### 謝辞
289
+ Attention coupleを提案された[furusu](https://note.com/gcem156)氏、Latent coupleを提案された[opparco](https://github.com/opparco)氏、2D生成のコード作成に協力して頂いた[Symbiomatrix](https://github.com/Symbiomatrix)に感謝します。
290
+
291
+
292
+
293
+ - 新機能2D領域を追加しました
294
+ - 新しい計算方式「Latent」を追加しました。生成が遅くなりますがLoRAをある程度分離できます
295
+ - 75トークン以上を入力できるようになりました
296
+ - 共通プロンプトを設定できるようになりました
297
+ - 設定がPNG infoに保存されるようになりました
extensions/sd-webui-regional-prompter/README.md ADDED
@@ -0,0 +1,419 @@
1
+ # Regional Prompter
2
+ ![top](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/top.jpg)
3
+ - custom script for [AUTOMATIC1111's stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
4
+ - Different prompts can be specified for different regions
5
+
6
+ - [AUTOMATIC1111's stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) 用のスクリプトです
7
+ - 垂直/平行方向に分割された領域ごとに異なるプロンプトを指定できます
8
+
9
+ [<img src="https://img.shields.io/badge/言語-日本語-green.svg?style=plastic" height="25" />](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/README.JP.md)
10
+ [<img src="https://img.shields.io/badge/Support-%E2%99%A5-magenta.svg?logo=github&style=plastic" height="25" />](https://github.com/sponsors/hako-mikan)
11
+
12
+ ## for LoHa, LoCon users
13
+ **About LoRA/LoCon/LoHa**
14
+ The following constraints come from the Web-UI's specifications:
15
+ The Web-UI cannot apply certain optimizations when these LoRA variants are used, and it does not support changing LoRA strength mid-generation.
16
+ - **LoRA**: Can be applied without a decrease in speed.
17
+ - **LoCon/LoHa**: It can be used when the "Use LoHa or other" option is enabled, but this results in a slower generation speed. This constraint is based on the Web-UI's specifications.
18
+
19
+ **LoRA/LoCon/LoHaについて**
20
+ LoRAの種類別の使用条件です。
21
+ - **LoRA**: 速度低下なく適用可能です。
22
+ - **LoCon/LoHa**: "Use LoHa or other" オプションを有効にすると使用できますが、生成速度が遅くなります。この制約はWeb-UIの仕様に基づいています。
23
+
24
+ ### Updates
25
+ - モード名が変更になりました。`Horizontal` -> `columns`, `Vertical` -> `Rows`
26
+ (日本語で横に分割を英訳したSplit Horizontalは英語圏では逆の意味になるようです。水平線「で」分割するという意味になるそう)
27
+ - `,`,`;`を入れ替えるオプションを追加
28
+
29
+ - Split mode names changed: `Horizontal` -> `Columns`, `Vertical` -> `Rows`
30
+ - Added an option to flip `,` and `;`
31
+
32
+ - add LoRA stop step
33
+ LoRAを適用するのをやめるstepを指定できます。10 step程度で停止することで浸食、ノイズ等の防止、生成速度の向上を期待できます。
34
+ You can specify the step at which to stop applying LoRA. Stopping at around 10 steps helps prevent erosion and noise and improves generation speed.
35
+ (0に設定すると無効になります。Set to 0 to disable.)
36
+
37
+ - support SDXL
38
+ - support web-ui 1.5
39
+
40
+ - add [guide for API users](#how-to-use-via-api)
41
+
42
+ - prompt mode improved
43
+ - プロンプトモードの動作が改善しました
44
+ (The process has been adjusted to generate masks in three steps, and to recommence generation from the first stage./3ステップでマスクを生成し、そこから生成を1stepからやり直すよう修正しました)
45
+
46
+ - New feature : [regions by inpaint](#inpaint) (thanks [Symbiomatrix](https://github.com/Symbiomatrix))
47
+ - New feature : [regions by prompt](#divprompt) ([Tutorial](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_en.md))
48
+ - 新機能 : [インペイントによる領域指定](#inpaint) (thanks [Symbiomatrix](https://github.com/Symbiomatrix))
49
+ - 新機能 : [プロンプトによる領域指定](#divprompt) ([チュートリアル](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_ja.md))
50
+
51
+
52
+ # Overview
53
+ The Latent couple extension performs U-Net calculations on a per-prompt basis, but this extension performs per-prompt calculations inside the U-Net. See [here (Japanese)](https://note.com/gcem156/n/nb3d516e376d7) for details. Thanks to furusu for initiating the idea. Additionally, a Latent mode is also supported.
54
+
55
+ ## index
56
+ - [2D regions](#2D)
57
+ - [Latent mode(LoRA)](#latent)
58
+ - [regions by inpaint](#inpaint)
59
+ - [regions by prompt](#divprompt)
60
+
61
+
62
+
63
+
64
+ ## Usage
65
+ This section explains how to create the following image.
66
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample.jpg)
67
+ Here is the prompt.
68
+ ```
69
+ green hair twintail BREAK
70
+ red blouse BREAK
71
+ blue skirt
72
+ ```
73
+ setting
74
+ ```
75
+ Active : On
76
+ Use base prompt : Off
77
+ Divide mode : Vertical
78
+ Divide Ratio : 1,1,1
79
+ Base Ratio :
80
+ ```
81
+ This setting divides the image vertically into three parts and applies the prompts "green hair twintail" ,"red blouse" ,"blue skirt", from top to bottom in order.
82
+
83
+ ### Active
84
+ This extension is enabled only if "Active" is checked.
85
+
86
+ ### Prompt
87
+ Prompts for different regions are separated by `BREAK` keywords.
88
+ Negative prompts can also be set for each area by separating them with `BREAK`, but if `BREAK` is not entered, the same negative prompt will be set for all areas.
89
+
90
+ Using `ADDROW` or `ADDCOL` anywhere in the prompt will automatically activate [2D region mode](#2D).
91
+
92
+ ### Use base prompt
93
+ Check this if you want to use the base prompt, which is the same prompt for all areas. Use this option if you want the prompt to be consistent across all areas.
94
+ When using base prompt, the first prompt separated by `BREAK` is treated as the base prompt.
95
+ Therefore, when this option is enabled, one extra `BREAK`-separated prompt is required compared to Divide ratios.
96
+
97
+ Automatically turned on when `ADDBASE` is entered.
98
+
99
+
100
+ ### Divide ratio
101
+ If you enter 1,1,1, the image will be divided into three equal regions (33.3%, 33.3%, 33.3%); if you enter 3,1,1, the image will be divided into 60%, 20%, and 20%. Fractions can also be entered: 0.1,0.1,0.1 is equivalent to 1,1,1. For greatest accuracy, enter pixel values corresponding to the height / width (vertical / horizontal mode respectively), e.g. 300,100,112 -> 512.
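+
+ For illustration, here is a minimal sketch (not part of the extension) of how such ratio strings reduce to fractions of the image side:
+ ```python
+ def normalize(ratios):
+     # Ratios are relative; only their proportions matter.
+     total = sum(ratios)
+     return [r / total for r in ratios]
+
+ print(normalize([1, 1, 1]))        # [0.333..., 0.333..., 0.333...]
+ print(normalize([3, 1, 1]))        # [0.6, 0.2, 0.2]
+ print(normalize([300, 100, 112]))  # pixel heights that sum to a 512 px side
+ ```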
102
+
103
+
104
+
105
+
106
+
107
+
108
+
109
+ Using a `;` separator will automatically activate 2D region mode.
110
+
111
+
112
+
113
+ ### Base ratio
114
+ Sets the ratio of the base prompt; if base ratio is set to 0.2, then resulting images will consist of `20%*BASE_PROMPT + 80%*REGION_PROMPT`. It can also be specified for each region, in the same way as "Divide ratio" - 0.2, 0.3, 0.5, etc. If a single value is entered, the same value will be applied to all areas.
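+
+ Conceptually, the mixing works like the sketch below; this only restates the description above (inside the extension the blend is applied to attention outputs, not to finished images):
+ ```python
+ def blend(base_out, region_out, base_ratio=0.2):
+     # 20% base prompt + 80% region prompt, applied per region.
+     return base_ratio * base_out + (1 - base_ratio) * region_out
+ ```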
115
+
116
+ ### split mode
117
+ Specifies the direction of division. Horizontal and vertical directions can be specified.
118
+ In order to specify both horizontal and vertical regions, see 2D region mode.
119
+
120
+ ## Calculation mode
121
+ Internally, the system uses BREAK in Attention mode and AND in Latent mode. The keyword is converted automatically depending on the mode in use, so it does not matter whether you write BREAK or AND in the prompt.
122
+ ### Attention
123
+ Normally, use this one.
124
+ ### Latent
125
+ Slower, but allows separating LoRAs to some extent. The generation time is the number of regions times the generation time of a single image. See [known issues](#knownissues).
126
+
127
+ Example of Latent mode: the [nendoroid](https://civitai.com/models/7269/nendoroid-figures-lora) and
128
+ [figma](https://civitai.com/models/7984/figma-anime-figures) LoRAs are separated into the left and right sides.
129
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample2.jpg" width="400">
130
+
131
+ ### Use common prompt
132
+ If this option is enabled, the first part of the prompt is added to all region prompts.
133
+
134
+ Automatically turned on when `ADDCOMM` is entered.
135
+ ```
136
+ best quality, 20yo lady in garden BREAK
137
+ green hair twintail BREAK
138
+ red blouse BREAK
139
+ blue skirt
140
+ ```
141
+ If common is enabled, this prompt is converted to the following:
142
+ ```
143
+ best quality, 20yo lady in garden, green hair twintail BREAK
144
+ best quality, 20yo lady in garden, red blouse BREAK
145
+ best quality, 20yo lady in garden, blue skirt
146
+ ```
147
+ So you must set 4 prompts for 3 regions. If `Use base prompt` is also enabled, 5 prompts are needed. The order is: common, base, prompt1, prompt2, ... A small sketch of this transformation follows.
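+
+ As an illustration only, the expansion described above can be sketched as a simple string transformation (the extension itself works on the tokenized conditioning rather than on raw strings):
+ ```python
+ def apply_common(prompt):
+     # The first BREAK-separated clause is the common part; prepend it to every region.
+     parts = [p.strip() for p in prompt.split("BREAK")]
+     common, regions = parts[0], parts[1:]
+     return " BREAK\n".join(f"{common}, {r}" for r in regions)
+
+ print(apply_common("best quality, 20yo lady in garden BREAK green hair twintail BREAK red blouse BREAK blue skirt"))
+ ```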
148
+
149
+ ## <a id="2D">2D region assignment</a>
150
+ You can specify a region in two dimensions. Using a special separator (`ADDCOL/ADDROW`), the area can be divided both horizontally and vertically. Starting at the upper left corner, the area is split into columns when separated by `ADDCOL` and into rows when separated by `ADDROW`. The division ratio is specified as a semicolon-separated list. An example is shown below; although it is possible to use `BREAK` alone and describe only the ratio, it is easier to understand if COL/ROW is explicitly specified. Using `ADDBASE` as the first separator makes the first clause the base prompt. If no ratio is specified, or if the ratio does not match the number of separators, all regions are automatically treated as equal.
151
+ In this mode, the direction selected in `Divide mode` changes which separator is applied first:
152
+ - In `Columns` mode, the image is first split into rows with `ADDROW` or `;` in the Divide ratio, then each row is split into regions with `ADDCOL` or `,` in the Divide ratio.
153
+ - In `Rows` mode, the image is first split into columns with `ADDCOL` or `,` in the Divide ratio, then each column is split into regions with `ADDROW` or `;` in the Divide ratio.
154
+ - When the flip option is enabled, it swaps `,` and `;`. This allows you to obtain a layout rotated 90 degrees while keeping the same ratios used in Columns/Rows.
155
+
156
+ In any case, the conversion of prompt clauses to rows and columns is from top to bottom, left to right.
157
+
158
+ ```
159
+ (blue sky:1.2) ADDCOL
160
+ green hair twintail ADDCOL
161
+ (aquarium:1.3) ADDROW
162
+ (messy desk:1.2) ADDCOL
163
+ orange dress and sofa
164
+ ```
165
+
166
+ ```
167
+ Active : On
168
+ Use base prompt : Off
169
+ Main splitting : Columns
170
+ Divide Ratio : 1,2,1,1;2,4,6
171
+ Base Ratio :
172
+ ```
173
+
174
+ ![2d](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/2d.jpg)
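+
+ For reference, here is a minimal sketch of how a ratio string like the one above can be read. It only illustrates the outer/inner convention described in this section; the extension's own parser in `regions.py` is the authoritative implementation and handles more cases (single values, base ratios, and so on):
+ ```python
+ def parse_ratios(spec):
+     # Each ';' group is one outer strip; its first number is that strip's share,
+     # the remaining numbers are the inner splits within the strip.
+     rows = []
+     for group in spec.split(";"):
+         nums = [float(x) for x in group.split(",")]
+         rows.append((nums[0], nums[1:] or [1.0]))
+     return rows
+
+ print(parse_ratios("1,2,1,1;2,4,6"))
+ # [(1.0, [2.0, 1.0, 1.0]), (2.0, [4.0, 6.0])] -> two strips with heights 1:2, split 2:1:1 and 4:6
+ ```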
175
+
176
+
177
+
178
+
179
+
180
+
181
+
182
+
183
+
184
+
185
+
186
+
187
+
188
+
189
+ ## <a id="visualize">Visualise and make template</a>
190
+ Areas can be visualized and templates for prompts can be created.
191
+
192
+ ![tutorial](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/tutorial.jpg)
193
+
194
+ Enter the area ratio and press the button to make the area appear. Next, copy and paste the prompt template into the prompt input field.
195
+
196
+ ```
197
+ fantasy ADDCOMM
198
+ sky ADDROW
199
+ castle ADDROW
200
+ street stalls ADDCOL
201
+ 2girls eating and walking on street ADDCOL
202
+ street stalls
203
+ ```
204
+ Result is following,
205
+ ![tutorial](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/sample3.jpg)
206
+
207
+
208
+ This is an example of an area using 1,1;2,3,2;3,2,3. In Columns, it would look like this:
209
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle1.png)
210
+ In Rows, it would appear as follows:
211
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle2.png)
212
+ When the flip option is enabled in Rows, it would appear as follows:
213
+ ![flip](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/msapmle3.png)
214
+
215
+
216
+ ## <a id="inpaint">Mask regions aka inpaint+ (experimental function)</a>
217
+ It is now possible to specify regions using either multiple hand drawn masks or an uploaded image containing said masks (more on that later).
218
+ - First, make sure you switch to `mask divide mode` next to `Columns` / `Rows`. Otherwise the mask will be ignored and regions will be split by ratios as usual.
219
+ - Set `canvas width and height` according to the desired image's dimensions, then press `create mask area`. If a different ratio or size is specified, the masks may be applied inaccurately (like inpaint "just resize").
220
+ - Draw an outline / area of the region desired on the canvas, then press `draw region`. This will fill out the area, and colour it according to the `region` number you picked. **Note that the drawing is in black only, filling and colouring are performed automatically.** The region mask will be displayed below, to the right.
221
+ - Pressing `draw region` will automatically advance to the next region. It will also keep a list of which regions were used for building the masks later. Up to 360 regions can be used currently, but note that a few of them on the higher end are identical.
222
+ - It's possible to add to existing regions by reselecting the same number and drawing as usual.
223
+ - The special region number -1 will clear out (colour white) any drawn areas, and display which parts still contain regions in mask.
224
+ - Once the region masks are ready, write your prompt as usual: Divide ratios are ignored. Base ratios still apply to each region. All flags are supported, and all BREAK / ADDX keywords (ROW/COL will just be converted to BREAK). Attention and latent mode supported (loras maybe).
225
+ - `Base` has unique rules in mask mode: When base is off, any non coloured regions are added to the first mask (therefore should be filled with the first prompt). When base is on, any non coloured regions will receive the base prompt in full, whilst coloured regions will receive the usual base weight. This makes base a particularly useful tool for specifying scene / background, with base weight = 0.
226
+ - Masks are saved to and loaded from presets whose divide mode is `mask`. The mask is saved in the extension directory, under the folder `regional_masks`, as {preset}.png file.
227
+ - Masks can be uploaded from any image by using the empty component labelled `upload mask here`. It will automatically filter and tag the colours approximating matching those used for regions, and ignore the rest. The region / nonregion sections will be displayed under mask. **Do not upload directly to sketch area, and read the [known issues](#knownissues) section.**
228
+ - If you wish to draw masks in an image editor, this is how the colours correspond to regions: the colours are all variants of `HSV(degree,50%,50%)`, where degree (0:360) is chosen as the value maximally distant from all previously used hues (so colours are easily distinguishable). The first few values are essentially: 0, 180, 90, 270, 45, 135, 225, 315, 22.5 and so on. The choice of colour decides which region it corresponds to (see the sketch after this list).
229
+ - Protip: You may upload an openpose / depthmap / any other image, then trace the regions accordingly. Masking will ignore colours which don't belong to the expected colour standard.
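+
+ If you prefer to generate those colours programmatically (for example, to paint masks from a script instead of an image editor), the rough sketch below reproduces the hue order listed above. Treat it only as an approximation; the authoritative colour values are computed inside the extension (`regions.py`):
+ ```python
+ import colorsys
+
+ def region_hues(n):
+     # 0, 180, 90, 270, 45, 135, 225, 315, 22.5, ... : each pass inserts the
+     # midpoints between all hues placed so far.
+     hues, step = [0.0], 180.0
+     while len(hues) < n:
+         hues += [step * k for k in range(1, int(360 / step), 2)]
+         step /= 2
+     return hues[:n]
+
+ def region_colours(n):
+     # Approximate RGB (0-255) triplets for HSV(hue, 50%, 50%).
+     return [tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h / 360, 0.5, 0.5))
+             for h in region_hues(n)]
+
+ print(region_hues(9))     # [0.0, 180.0, 90.0, 270.0, 45.0, 135.0, 225.0, 315.0, 22.5]
+ print(region_colours(3))  # approximate colours for regions 1-3
+ ```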
230
+
231
+ ![RegionalMaskGuide2](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/RegionalMaskGuide2.jpg)
232
+ ![RegionalMaskGuide2B](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/RegionalMaskGuide2B.jpg)
233
+
234
+
235
+ Here is a sample
236
+ using a mask and the prompt `landscape BREAK moon BREAK girl`.
237
+ Using XYZ plot prompt S/R, `moon BREAK girl` was replaced with other prompts.
238
+ ![RegionalMaskSample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/isample1.png)
239
+
240
+
241
+
242
+ ## <a id="divprompt">region specification by prompt (experimental)</a>
243
+ The region is specified by the prompt. The picture below was created with the following prompt; `apple printed` should only affect the shirt, but actual apples appear in the scene as well.
244
+ ```
245
+ lady smiling and sitting, twintails green hair, white skirt, apple printed shirt
246
+ ```
247
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample1.png)
248
+ If you enhance the effect of `apple printed` to `:1.4`, you get,
249
+
250
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample4.png)
251
+ The prompt region specification lets you compute the region for the "shirt" and apply "apple printed" only within it.
252
+
253
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample6.png)
254
+ ```
255
+ lady smiling and sitting, twintails green hair, white skirt, shirt BREAK
256
+ (apple printed:1.4),shirt
257
+ ```
258
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample2.png)
259
+
260
+ ### How to use
261
+ ### syntax
262
+ ```
263
+ baseprompt target1 target2 BREAK
264
+ effect1, target1 BREAK
265
+ effect2 ,target2
266
+ ```
267
+
268
+
269
+ First, write the base prompt. In the base prompt, write the words (target1, target2) for which you want to create a mask. Next, separate them with BREAK. Then write the prompt corresponding to target1, followed by a comma and target1 itself. The order of the targets in the base prompt and the order of the BREAK-separated targets does not need to match.
270
+
271
+ ```
272
+ target2 baseprompt target1 BREAK
273
+ effect1, target1 BREAK
274
+ effect2 ,target2
275
+ ```
276
+ is also effective.
277
+
278
+ ### threshold
279
+ The threshold used to determine the mask created by the prompt. It can be set once per mask, since the useful range varies widely depending on the target prompt. If multiple regions are used, enter the values separated by commas. For example, hair tends to be ambiguous and requires a small value, while a face tends to be large and works with a larger value. The values should be ordered to match the BREAK-separated regions.
280
+
281
+ ```
282
+ a lady ,hair, face BREAK
283
+ red, hair BREAK
284
+ tanned ,face
285
+ ```
286
+ `threshold : 0.4,0.6`
287
+ If only one input is given for multiple regions, they are all assumed to be the same value.
288
+
289
+ ### Prompt and Prompt-EX
290
+ The difference is that in Prompt, overlapping areas are added together, whereas in Prompt-EX, overlapping areas are overwritten in order. Since they are processed sequentially, specifying targets with larger areas first makes it easier for the effect on smaller areas to be preserved.
291
+
292
+ ### Accuracy
293
+ In the case of a 512 x 512 image, Attention mode reduces the region to about 8 x 8 deep inside the U-Net, so small areas get mixed together; Latent mode calculates at 64 x 64, so the region is exact.
294
+ ```
295
+ girl hair twintail frills,ribbons, dress, face BREAK
296
+ girl, ,face
297
+ ```
298
+ Prompt-EX/Attention
299
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample5.png)
300
+ Prompt-EX/Latent
301
+ ![prompt](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/psample3.png)
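+
+ The sizes mentioned above can be worked out directly, assuming the usual 8x VAE downscale and a further reduction inside the deepest U-Net blocks:
+ ```python
+ image_px = 512
+ latent = image_px // 8           # 64 x 64: the resolution Latent mode masks at
+ deepest_attention = latent // 8  # 8 x 8: the coarsest maps Attention mode works with
+ print(latent, deepest_attention)
+ ```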
302
+
303
+
304
+ ### Mask
305
+ When an image is generated, the generated mask is displayed. It is generated at the same size as the image, but is actually used at a much smaller size.
306
+
307
+ ## Difference between base and common
308
+ ```
309
+ a girl ADDCOMM (or ADDBASE)
310
+ red hair BREAK
311
+ green dress
312
+ ```
313
+ If there is a prompt that says `a girl` in the common clause, region 1 is generated with the prompt `a girl , red hair`. In the base clause, if the base ratio is 0.2, it is generated with the prompt `a girl` * 0.2 + `red hair` * 0.8. Basically, common clause combines prompts, and base clause combines weights (like img2img denoising strength). You may want to try the base if the common prompt is too strong, or fine tune the (emphasis).
314
+ The prompt strength applied to the target should be stronger than usual; even 1.6 does not break anything.
315
+
316
+ ## <a id="knownissues">Known issues</a>
317
+ - Due to an [issue with gradio](https://github.com/gradio-app/gradio/issues/4088), uploading a mask or loading a mask preset more than twice in a row will fail. There are two workarounds for this:
318
+ 1) Before EVERY upload / load, press `create mask area`.
319
+ 2) Modify the code in gradio.components.Image.preprocess; add the following at the beginning of the function (temporarily):
320
+ ```
321
+ if self.tool == "sketch" and self.source in ["upload", "webcam"]:
322
+ if x is not None and isinstance(x, str):
323
+ x = {"image":x, "mask": x[:]}
324
+ ```
325
+ The extension cannot perform this override automatically, because gradio doesn't currently support [custom components](https://github.com/gradio-app/gradio/issues/1432). Attempting to override the component / method in the extension causes the application to not load at all.
326
+
327
+ 3) Wait until a fix is published.
328
+
329
+ - Lora corruption in latent mode. Some attempts have been made to improve the output, but no solution as of yet. Suggestions below.
330
+ 1) Reduce cfg, reduce lora weight, increase sampling steps.
331
+ 2) Use the `negative textencoder` + `negative U-net` parameters: these are weights between 0 and 1, comma separated like base. One is applied to each lora in order of appearance in the prompt. A value of 0 (the default) will negate the effect of the lora on other regions, but may cause it to be corrupted. A value of 1 should be closer to the natural effect, but may corrupt other regions (greenout, blackout, SBAHJified etc), even if they don't contain any loras. In both cases, a higher lora weight amplifies the effect. The effect seems to vary per lora, possibly per combination.
332
+ 3) It has been suggested that [lora block weight](https://github.com/hako-mikan/sd-webui-lora-block-weight) can help.
333
+ 4) If all else fails, inpaint.
334
+
335
+ Here are samples of a simple prompt, two loras with negative te/unet values per lora of: (0,0) default, (1,0), (0,1), (1,1).
336
+ ![MeguminMigurdiaCmp](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/MeguminMigurdiaCmp.jpg)
337
+
338
+ If you come across any useful insights on the phenomenon, do share.
339
+
340
+ ## How to Use via API
341
+ The following format is used when utilizing this extension via the API.
342
+
343
+ ```
344
+ "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
345
+ "alwayson_scripts": {
346
+ "Regional Prompter": {
347
+ "args": [True,False,"Matrix","Vertical","Mask","Prompt","1,1,1","",False,False,False,"Attention",False,"0","0","0",""]
348
+ }}
349
+ ```
350
+ Please refer to the table below for each setting in `args`. No. corresponds to the position in the list. When the type is text, enclose the value in `""`. Of the mode settings 3-6, only the submode corresponding to the mode selected in 3 is used; the others are ignored. For the mask in No. 17, specify the path of the image file; an absolute path or a path relative to the web-ui root can be used. Create the mask using the colours described in the mask section. A minimal Python example of sending such a request follows the table.
351
+
352
+ | No. | setting |choice| type | default |
353
+ | ---- | ---- |---- |----| ----|
354
+ | 1 | Active |True, False|Bool|False|
355
+ | 2 | debug |True, False|Bool|False|
356
+ | 3 | Mode |Matrix, Mask, Prompt|Text| Matrix|
357
+ | 4 | Mode (Matrix)|Horizontal, Vertical, Columns, Rows|Text|Columns
358
+ | 5 | Mode (Mask)| Mask |Text|Mask
359
+ | 6 | Mode (Prompt)| Prompt, Prompt-Ex |Text|Prompt
360
+ | 7 | Ratios||Text|1,1,1
361
+ | 8 | Base Ratios | |Text| 0
362
+ | 9 | Use Base |True, False|Bool|False|
363
+ | 10 | Use Common |True, False|Bool|False|
364
+ | 11 | Use Neg-Common |True, False|Bool| False|
365
+ | 12 | Calcmode| Attention, Latent | Text | Attention
366
+ | 13 | Not Change AND |True, False|Bool|False|
367
+ | 14 | LoRA Textencoder ||Text|0|
368
+ | 15 | LoRA U-Net | | Text | 0
369
+ | 16 | Threshold | |Text| 0
370
+ | 17 | Mask | | Text |
371
+ | 18 | LoRA stop step | | Text | 0
372
+ | 19 | LoRA Hires stop step | | Text | 0
373
+ | 20 | flip |True, False| Bool | False
374
+
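+
+ For reference, here is a minimal Python sketch of posting such a payload to a locally running web-ui. It assumes the web-ui was started with the `--api` flag on the default port; adjust the URL and generation parameters to your setup:
+ ```python
+ import requests
+
+ payload = {
+     "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
+     "steps": 20,
+     "alwayson_scripts": {
+         "Regional Prompter": {
+             # Same args list as in the table above (Python booleans become JSON true/false).
+             "args": [True, False, "Matrix", "Vertical", "Mask", "Prompt", "1,1,1", "",
+                      False, False, False, "Attention", False, "0", "0", "0", ""]
+         }
+     },
+ }
+
+ r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
+ r.raise_for_status()
+ images_base64 = r.json()["images"]  # list of base64-encoded result images
+ ```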
375
+ ### Example Settings
376
+ #### Matrix
377
+ ```
378
+ "prompt": "green hair twintail BREAK red blouse BREAK blue skirt",
379
+ "alwayson_scripts": {
380
+ "Regional Prompter": {
381
+ "args": [True,False,"Matrix","Vertical","Mask","Prompt","1,1,1","",False,False,False,"Attention",False,"0","0","0",""]
382
+ }}
383
+ ```
384
+ Result
385
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample1.png)
386
+
387
+ #### Mask
388
+ ```
389
+ "prompt": "masterpiece,best quality 8k photo of BREAK (red:1.2) forest BREAK yellow chair BREAK blue dress girl",
390
+ "alwayson_scripts": {
391
+ "Regional Prompter": {
392
+ "args": [True,False,"Mask","Vertical","Mask","Prompt","1,1,1","",False,True,False,"Attention",False,"0","0","0","mask.png"]
+ }}
393
+ ```
394
+ Mask used
395
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/mask.png)
396
+ Result
397
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample2.png)
398
+
399
+ #### Prompt
400
+ ```
401
+ "prompt": "masterpiece,best quality 8k photo of BREAK a girl hair blouse skirt with bag BREAK (red:1.8) ,hair BREAK (green:1.5),blouse BREAK,(blue:1.7), skirt BREAK (yellow:1.7), bag",
402
+ "alwayson_scripts": {
403
+ "Regional Prompter": {
404
+ "args": [True,False,"Prompt","Vertical","Mask","Prompt-EX","1,1,1","",False,True,False,"Attention",False,"0","0","0.5,0.6,0.5",""]
405
+ }}
406
+ ```
407
+ ![sample](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/asample3.png)
408
+
409
+
410
+ ## Acknowledgments
411
+ I thank [furusu](https://note.com/gcem156) for suggesting the Attention couple, [opparco](https://github.com/opparco) for suggesting the Latent couple, and [Symbiomatrix](https://github.com/Symbiomatrix) for helping to create the 2D generation code.
412
+
413
+
414
+ ## Updates
415
+ - New feature, "2D-Region"
416
+ - New generation method "Latent" added. Generation is slower, but LoRA can be separated to some extent.
417
+ - Supports over 75 tokens
418
+ - Common prompts can be set
419
+ - Setting parameters saved in PNG info
extensions/sd-webui-regional-prompter/differential_ja.md ADDED
@@ -0,0 +1,141 @@
1
+ # Differential Regional Prompter
2
+ このスクリプトはRegional Prompterの補助スクリプトであり、Stable Diffusion web-uiのカスタムscript(XYZ plotなどと同じ)として動作します。
3
+ このスクリプトではRegional Prompterのpromptによる領域指定を利用して、差分画像の作成や、一貫性を保ったアニメーションなどの作成が可能です。従来のpromptによる領域指定でもある程度の一貫性を保った差分は作成可能でした。しかし、denoiseの課程において指定した領域外でも差が発生してしまい、完全な差分にはなりません。このスクリプトでは初期画像から、promptで指定した領域のみの差分だけを反映させることが可能です。差分を連続しして変化させることでなめらかなアニメーションも作成できます。
4
+ 次の画像は`closed eyes` を`eyes`から計算された領域にのみ適用して、このスクリプトを用いて作られた差分です。3枚目はanime gifにしたものです。
5
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/dsample1.jpg" width="400">
6
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/dsample2.jpg" width="400">
7
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/dsample.gif" width="400">
8
+
9
+ このように、変化させる場所以外は一貫性を保ったような画像を作成することができます。LoRAのコピー機学習法用の画像など様々な用途に使えるのではないかと思います。
10
+ また、スケジュール機能を使用して簡単なアニメーションを作成することができます。これはRegional Prompter単体で動作し、追加のモジュールなどを必要としません。
11
+
12
+ ## 動作原理
13
+ 内部ではPrompt EdittingとRegional PrompterのPromptによる領域指定を使用して差分を作成しています。これは元画像との整合性をより高くするために用いています。例えば目を閉じた差分を作りたいときに、closed eyesを追加したプロンプトすると全体が大きく変わってしまう可能性があります。そこで[:closed eyes:4]として4ステップ目からclosed eyesを効かせることで元画像との整合性を得ます。設定画面のstepはこのprompt edittingの開始ステップを示しています。
14
+
15
+ ## 使い方
16
+ scriptの中にあるDifferential Regional Prompterを選択します。Regional Prompterはインストールされていれば他に設定する必要はありません。
17
+
18
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/gamen.jpg" width="800">
19
+
20
+ ### Options
21
+ Reverseを有効にすると動画を逆再生して生成します。必要な理由は後述します。
22
+
23
+ ### Additional Output
24
+ 動画(mp4)やAnime Gifを生成するかどうかを選択します。選択した場合はOutputフォルダ直下に生成されます。
25
+
26
+ ### Step
27
+ Prompt Edittingで使用される開始ステップを指定します。通常4~6ぐらいがちょうどいいです。
28
+ ### FPS
29
+ 動画作成時のフレームレートを設定します。anime gif のdurationは1000/FPSで計算されます。
30
+ ### Schedule
31
+ 差分プロンプトを入力します。
32
+ 詳しい解説は使用例を見て下さい。
33
+ ### mp4 output directory
34
+ mp4を出力するディレクトリを記入します。空欄の場合にはoutput/txt2img-imagesフォルダ直下になります。ここに値を入力すると、output直下に指定のディレクトリが作成されます。
35
+ ### mp4 output filename
36
+ mp4のファイルネームを指定します。空欄の場合`drp.mp4`,`drp_1.mp4`...と連番のファイルが作成されます。ここに`test`と記入すると`test.mp4`,`test_1.mp4`のような連番のファイルが作成されます。上書きはされません。
37
+ ### anime gif output directory
38
+ Anime gifを出力するディレクトリを記入します。空欄の場合にはoutput/txt2img-imagesフォルダ直下になります。ここに値を入力すると、output直下に指定のディレクトリが作成されます。
39
+ ### anime gif output filename
40
+ Anime gifのファイルネームを指定します。空欄の場合`drp.gif`,`drp_1.gif`...と連番のファイルが作成されます。ここに`test`と記入すると`test.gif`,`test_1.gif`のような連番のファイルが作成されます。上書きはされません。
41
+
42
+ ## 使用例
43
+ ### 瞬きをする
44
+ ここでは目を閉じた差分を作ることを想定して解説します。
45
+ まずはメインプロンプトを通常のプロンプト入力欄に入力します。ここでは
46
+ ```
47
+ a girl in garden face close up, eyes
48
+ ```
49
+ としましょう。ここで重要なのは`eyes`が入力されていることです。これは差分の領域を計算する際に必要です。
50
+ 次に、Scheduleに次のように入力します。Regional Prompterの設定欄のthresholdには0.6程度を入力します。
51
+
52
+ ```
53
+ 0
54
+ closed eyes;eyes;1.3
55
+ ```
56
+ この状態でGenerateすると最初に紹介したふたつの画像ができあがります。Anime gifオプションを有効にしているとanime gifもできます。
57
+ では各設定値について説明します。
58
+ ```
59
+ prompt;prompt for region calculation;weight;step
60
+ ```
61
+
62
+ のように「`;`」で区切られた各設定値を各行に入力します。
63
+ #### prompt
64
+ 差分を作成するプロンプト
65
+ #### prompt for region
66
+ 領域計算用のプロンプト
67
+ #### weight
68
+ プロンプトの強さ
69
+ #### step(省略可)
70
+ 差分のプロンプトが有効になるステップ数
71
+
72
+ closed eyes;eyes;1.3;4の場合、実行時には[:(closed eyes:1.3):4]というプロンプトが入力されています。
73
+
74
+ ### 笑顔になる
75
+ 次は連続変化によるアニメーションを作ってみましょう。
76
+ プロンプトは変えずにScheduleに次のように入力します。
77
+ ```
78
+ 0*10             
79
+ smile;face;1.2;20-6(2)
80
+ smile;face;1.2*10
81
+ ```
82
+
83
+ これは1行目から順に、
84
+ ```
85
+ 初期画像を10フレーム
86
+ face領域に対してsmileの強度を1.2にしてstepを20から2ずつ6まで減らす。
87
+ face領域に対してsmileの強度1.5を10フレーム
88
+ ```
89
+ という意味があります。
90
+ 20-6と入力すると、20,19,...6とステップを1ずつ減らしながら連続したプロンプトが自動で入力され生成されます。このときstepの場合は1ずつ減ったり増えたりします。(2)はその増減を2に指定しています。よって20-6(2)の場合には20,18,16...6というステップのプロンプトで生成が行われます。このとき全20stepで生成しているとすると、step = 20ではsmileはプロンプトに反映されません。その状態から少しずつ反映されるステップを増やしていくことでsmileの強度を強めていっているのです。よって段々smileしていくようなアニメーションができあがります。
91
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/dsample5.gif" width="400">
92
+ この書式はweightにも有効で、1.0-1.3と入力すると、1.0,1.1,1.2,1.3と連続した値が自動的に入力されます。このとき、増え方は小数点以下の桁数に依存します。1.00-1.10と書くと0.01刻みになります。刻み幅を指定したいときには`()`を使用します。1.0-1.3(0.05)は1.0から1.3まで0.05刻みで増やすという意味です。この場合、1.00,1.05,1.10,1.15,1.20,1.25,1.30となり7フレーム作られます。
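+
+ 参考までに、上記の「開始-終了(刻み)」書式を展開する処理の最小スケッチを示します(拡張本体の実装とは異なる可能性があります)。
+ ```python
+ import re
+
+ def expand_range(spec):
+     # "20-6(2)" -> [20, 18, ..., 6] / "1.0-1.3" -> [1.0, 1.1, 1.2, 1.3] のスケッチ
+     m = re.fullmatch(r"([\d.]+)-([\d.]+)(?:\(([\d.]+)\))?", spec)
+     if not m:
+         return [spec]  # 書式に合わない行はそのまま返す
+     start, end = float(m.group(1)), float(m.group(2))
+     if m.group(3):
+         step = float(m.group(3))
+     else:
+         # 刻みが省略された場合は小数点以下の桁数から推定(整数なら1)
+         decimals = max(len(p.split(".")[1]) if "." in p else 0 for p in (m.group(1), m.group(2)))
+         step = 10 ** -decimals if decimals else 1.0
+     step = step if end >= start else -step
+     out, v = [], start
+     while (v <= end + 1e-9) if step > 0 else (v >= end - 1e-9):
+         out.append(round(v, 6))
+         v += step
+     return out
+
+ print(expand_range("20-6(2)"))        # [20.0, 18.0, ..., 6.0]
+ print(expand_range("1.0-1.3"))        # [1.0, 1.1, 1.2, 1.3]
+ print(expand_range("1.0-1.3(0.05)"))  # [1.0, 1.05, ..., 1.3]
+ ```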
93
+
94
+ #### 特殊な指定
95
+ step=5
96
+ th=0.45
97
+ ex-on,0.01
98
+ ex-off
99
+ などの指示を入力可能です。
100
+ それぞれstep,領域指定用の閾値などを途中で変更可能です。
101
+
102
+ ex-onとex-offはextra seedの設定です。なんやねんそれはと言う方もいると思うので説明しますが、seedは1違うと全く異なる画像になることは知っているかと思います。seedは整数値なので、1以下の値をずらすことはできませんが、それを可能にするのがextra seedで、これを使用すると、ほんの少しだけことなる画像を作ることができます。それがなんの役にたつかというと、背景やエフェクトなどに対して有効に働きます。
103
+ 次の画像は以下の指示によって作られました。
104
+ ```
105
+ 0
106
+ ex-on,0.005
107
+ ;lightning_thunder;1.00-1.05
108
+ ```
109
+ 0.005はエクストラシードの変化量です。これぐらいに設定すると雷がほとばしるようなエフェクトになります。もっと強くすると全く別のシードから作られたような画像になってしまい意味が無くなってしまうので注意して下さい。
110
+ <img src="https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/dsample4.gif" width="400">
111
+ lightning_thunderのようにつなげることで複数の単語に対しての領域を指定できます。プロンプト入力欄にも同じくつないだ言葉が入っている必要があります。
112
+ 1.00-1.05は5パターン描くように指示するために0.01刻みにしています。
113
+ ```
114
+ ;lightning_thunder;1.00
115
+ ```
116
+ を5回記述しても同じ指示なので1回しか計算されないためです。
117
+
118
+
119
+ ```
120
+ #smile and blink    
121
+ 0*20
122
+ smile;face;1.2;13-6  
123
+ smile;face;1.2*10   
124
+ smile;face;1.2;6-13  
125
+ 0*20
126
+ closed eyes;eyes;1.4*3
127
+ 0*20
128
+ ```
129
+ これは1行目から順に、
130
+ ```
131
+ 書式にマッチしない行は無視される
132
+ 20フレーム初期画像を表示
133
+ Step 13から6まで減らしながらface領域に(smile:1.2)を指定
134
+ 10フレームface領域にsmileを指定(stepはデフォルト値)
135
+ Step 6から13まで増やしながらface領域に(smile:1.2)を指定
137
+ 20フレーム初期画像を表示
138
+ eyes領域に(closed eyes:1.4)を指定
139
+ 20フレーム初期画像を表示
140
+ ```
141
+ と言う効果です。
extensions/sd-webui-regional-prompter/prompt_en.md ADDED
@@ -0,0 +1,137 @@
1
+ # Tutorial on specifying areas with prompts
2
+
3
+ There are limitations to methods of specifying areas in advance. This is because specifying areas can be a hindrance when designating complex shapes or dynamic compositions. In the region specified by the prompt, the area is determined after the image generation has begun. This allows us to accommodate compositions and complex areas.
4
+
5
+ Let's take a look at an example.
6
+ The following image was created with the prompt below. The colours bleed into each other quite badly.
7
+ ```
8
+ sfw (8k realistic masterpiece:1.3) a Asian girl ,dark green dress,pink belt,yellow bag,
9
+ blond hair, in rainy street, holding red umbrella
10
+ ```
11
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial9.png)
12
+ Now, if we try to manage this with the usual area designation, we will have trouble specifying the area for the umbrella and the bag. They appear in various places, and some areas are initially small.
13
+
14
+ With prompt-based area specification, we calculate the area corresponding to each word.
15
+
16
+ Let's first turn the umbrella red. We change the prompt as follows: we change red umbrella to umbrella and after BREAK, we add (red:1.7), umbrella. This is because the system calculates the area of the word written after the last comma in the prompt following BREAK. In the case of (red:1.7), umbrella, the area of umbrella is calculated, and (red:1.7) is applied to that area.
17
+
18
+ Intensity adjustment is very important. Normally, a weight of 1.7 tends to make the image fall apart, but with prompt-based area specification it doesn't work unless you use a value around this. It's especially worth increasing the intensity when specifying a colour the model doesn't seem to have learned well for that object.
19
+ ```
20
+ sfw (8k realistic masterpiece:1.3) a girl, (dress:1.2), belt, bag, hair, in rainy street, holding umbrella BREAK
21
+ (red:1.7), umbrella
22
+ ```
23
+
24
+ ```
25
+ Divide mode : Prompt-EX
26
+ Calcmode : Attention
27
+ threshold : 0.7
28
+ negative common prompt : Enable
29
+ ```
30
+ Prompt-EX mode is an effective mode for specifying multiple areas and has the effect of overwriting areas with the ones that come later. Therefore, it is effective to specify the areas in a larger order.
31
+
32
+ Then the umbrella became properly red. The second image is the calculated area. It's properly shaped like an umbrella, and the head part is out of the area. This area varies depending on the prompt, so it's necessary to adjust it with the Threshold. If the Threshold is small, the area will be wider.
33
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial11.png)
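+
+ Conceptually, the region comes from thresholding the (normalized) attention map accumulated for the target word; the sketch below only illustrates that relationship and is not the extension's actual code:
+ ```python
+ import numpy as np
+
+ def region_from_attention(attn_map, threshold):
+     # attn_map: 2-D array of attention weights for the target token.
+     # A lower threshold lets more cells pass, so the region gets wider.
+     norm = attn_map / attn_map.max()
+     return norm >= threshold
+
+ attn = np.random.rand(8, 12)
+ print(region_from_attention(attn, 0.7).sum())  # fewer cells
+ print(region_from_attention(attn, 0.4).sum())  # more cells -> wider region
+ ```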
34
+
35
+ Here, negative prompts are also set up.
36
+ ```
37
+ nsfw, (worst quality:1.6), (low quality:1.6), (normal quality:1.6), monochrome
38
+ [(black:1.5)::3] BREAK BREAK (dark,transparent, black, blue:2)
39
+ ```
40
+ Umbrellas and bags tend to be predominantly black in the data the model was trained on, so caution is required when specifying colours for them. In this case, we've added a prompt to prevent the umbrella area from becoming black. The reason there are two BREAKs is that the negative common prompt is enabled. [(black:1.5)::3] prevents the image from becoming black before the region specification by the prompt begins; in prompt-based region specification, the region calculation is not valid until the third step.
41
+
42
+ Now, with similar region specification, the prompt becomes as follows, and we were able to obtain a result where the colors were properly separated.
43
+
44
+ ```
45
+ sfw (8k realistic masterpiece:1.3) a girl, (dress:1.2), belt, bag, hair, in rainy street, holding umbrella BREAK
46
+ (red:1.7), umbrella BREAK
47
+ (dark green:1.7) ,dress BREAK
48
+ (blond:1.7), hair BREAK
49
+ (pink:1.7), belt BREAK
50
+ (yellow:1.7), bag
51
+ ```
52
+
53
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial10.png)
54
+
55
+
56
+
57
+ The following image was created with the prompt below. Although it would be ideal if 'forest' was only applied to the T-shirt, the background has also become a forest.
58
+ ```
59
+ girl in street (forest printed:1.3) T-shirt, shortshorts daytime
60
+ ```
61
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial1.png)
62
+ To apply the 'forest' print only to the T-shirt, we configure the prompt and the Regional Prompter as follows.
63
+ ```
64
+ lady in street shirt,shortshorts daytime BREAK
65
+ (forest printed:1.3) T-shirt ,shirt
66
+ ```
67
+ What's important here is that `shirt` appears both before the `BREAK` and at the end after the `BREAK`, separated by a comma. In prompt mode, the word after the last comma is the target for region calculation.
68
+ ```
69
+ Divide mode : Prompt
70
+ Calcmode : Attention
71
+ threshold : 0.7
72
+ ```
73
+
74
+ With these settings, the generation will look like this.
75
+ ![2](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial2.png)
76
+ mask
77
+ ![3](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial3.png)
78
+ This is a property of the Attention mode, where it actually shrinks to about 12×8, making it more ambiguous, which might be better.
79
+
80
+ Now, there seems to be a strange strap attached to the shortshorts, so let's change that next. We rewrite the prompt as follows.
81
+ ```
82
+ girl in street shirt,shortshorts daytime BREAK
83
+ (forest printed:1.3) T-shirt ,shirt BREAK
84
+ (skirt:1.7) ,shortshorts
85
+ ```
86
+ ```
87
+ Divide mode : Prompt
88
+ Calcmode : Attention
89
+ threshold : 0.7,0.75
90
+ ```
91
+ It turns out like this.
92
+ ![4](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial4.png)
93
+ The `shortshorts` became a `skirt`. I wrote it as `shortshorts` because that's what it was initially, but using a word like `bottoms` can make region selection easier. The reason why I kept it as `shortshorts` this time is because I didn't want to change the base prompt. If you change `shortshorts` to `bottoms`, it changes the initial image itself.
94
+
95
+ ```
96
+ girl in street shirt,bottoms daytime BREAK
97
+ (forest printed:1.3) T-shirt ,shirt BREAK
98
+ (red skirt:1.7) ,bottoms
99
+ ```
100
+ The third image was made from this prompt.
101
+ ![5](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial5.png)
102
+ Keeping it as `shortshorts` only changes the area around the `shortshorts`, but if you change `shortshorts` to `bottoms`, it changes the base, which changes the whole image. In other words, with prompt-based specifications, you can do something like inpainting where only a part is changed.
103
+
104
+ Let's change the settings a bit and try dressing up. The reason why I set it as` (shortshorts:0.5)` is to prioritize the `skirt`. Weakening the `shortshorts` for the region won't be a problem. Normally, you target the same item, but if the word is too strong, it will have too much impact, so weakening it is an option.
105
+
106
+ ```
107
+ girl in street shirt,shortshorts daytime BREAK
108
+ (forest printed:1.3) T-shirt ,shirt BREAK
109
+ (red skirt:1.5) ,(shortshorts:0.5)
110
+ ```
111
+ ```
112
+ Divide mode : Prompt
113
+ Calcmode : Attention
114
+ threshold : 0.7,0.55
115
+ ```
116
+ I'm widening the region for `shortshorts` (by lowering its `threshold`) to accommodate things like a long skirt. Using bottoms or lower body would probably be easier.
117
+ ![6](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial6.png)
118
+ We were able to change the clothes, but there's some erosion in the pink. This is because the intensity is set uniformly. If we refine the settings, the erosion should disappear.
119
+
120
+ ```
121
+ girl in street shirt,shortshorts daytime BREAK
122
+ (forest printed:1.3) T-shirt ,shirt BREAK
123
+ (red skirt:1.5) ,(shortshorts:0.5) BREAK
124
+ (Japan landscape:1.6),street
125
+ ```
126
+ ```
127
+ Divide mode : Prompt
128
+ Calcmode : Attention
129
+ threshold : 0.7,0.55,0.7
130
+ ```
131
+ Let's change the `Japan` part.
132
+ ![7](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial7.png)
133
+ The word `landscape` is a bit tricky. It's quite strong and often cancels out the effect of other words. However, by calculating the region using street instead of `landscape`, we can prevent it from becoming overly dominant. By calculating regions with different words like this, you might be able to expand the range of your expressions.
134
+
135
+ When the Regional Prompter is disabled, it looks like this.
136
+ This is pretty good as it is.
137
+ ![8](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial8.png)
extensions/sd-webui-regional-prompter/prompt_ja.md ADDED
@@ -0,0 +1,136 @@
1
+ ## promptで指定する領域のチュートリアル
2
+ あらかじめ領域を指定するタイプの方法には限界があります。複雑な形状や動的な構図を指定する場合には領域指定が足かせになるためです。promptで指定する領域では画像を生成し始めたあとで領域を決定します。これにより構図や複雑な領域にも対応できるようになります。
3
+
4
+ では例を見てみましょう。
5
+ 下記の画像は次のプロンプトによって作成されました。まぁ盛大に色移りするわけです。
6
+ ```
7
+ sfw (8k realistic masterpiece:1.3) a Asian girl ,dark green dress,pink belt,yellow bag,
8
+ blond hair, in rainy street, holding red umbrella
9
+ ```
10
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial9.png)
11
+ さてこれを通常の領域指定でなんとかしようとすると傘やバッグの領域指定に困ってしまうわけです。色々な場所に出てくるし、そもそも領域が小さかったりします。
12
+
13
+ プロンプトによる領域指定では1単語に対応する領域を計算します。
14
+
15
+ まずは傘から赤くしてみましょう。プロンプトを以下のように変更します。`red umbrella`を`umbrella`に変更し、`BREAK`のあとに`(red:1.7)`, `umbrella`を追記します。これは`BREAK`のあとに続くプロンプトでは最後のカンマのあとに書かれた単語の領域を計算する仕組みだからです。`(red:1.7), umbrella`の場合、`umbrella`の領域が計算され、その領域に`(red:1.7)`が掛かります。
16
+ 強度調節はとても大切で、通常1.7を入力すると崩壊気味になるわけですが、プロンプトによる領域指定ではこれぐらいの値を入れないと効きません。特にあまり学習していないような色を指定しようとするなら強度を高めたほうが良いです。
17
+ ```
18
+ sfw (8k realistic masterpiece:1.3) a girl, (dress:1.2), belt, bag, hair, in rainy street, holding umbrella BREAK
19
+ (red:1.7), umbrella
20
+ ```
21
+
22
+ ```
23
+ Divide mode : Prompt-EX
24
+ Calcmode : Attention
25
+ threshold : 0.7
26
+ negative common prompt : Enable
27
+ ```
28
+ Prompt-EXモードは複数の領域を指定する場合に有効なモードで、あとに来る領域によって領域を上書きする効果があります。よって大きな順に領域を指定すると効果的です。
29
+ するとちゃんと傘が赤くなりました。2枚目の画像は計算された領域でです。ちゃんと傘の形になっていて、かつ頭の部分は領域外になっているわけです。この領域はpromptによってまちまちなのでThresholdで調節してあげる必要があります。Thresholdは小さいと領域が広くなります。
30
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial11.png)
31
+
32
+ ここではネガティブプロンプトも設定しています。
33
+ ```
34
+ nsfw, (worst quality:1.6), (low quality:1.6), (normal quality:1.6), monochrome
35
+ [(black:1.5)::3] BREAK BREAK (dark,transparent, black, blue:2)
36
+ ```
37
+ 傘やバッグなどはもともと黒いものが多く学習されている傾向があるので色を指定するときには注意が必要です。ここでは傘の領域に黒くなるのを防ぐpromptを入れています。`BREAK`が2つ並んでいるのはnegative commom promptを有効にしているためです。`[(black:1.5)::3]`はpromptによる領域指定が始まる前の段階で黒くなるのを防いでいます。promptによる領域指定では3stepまでは領域計算ができていないので有効になっていません。
38
+
39
+ さて、同様の領域指定を行うことでプロンプトは下記のようになり、ちゃんと色分けできた結果が得られました。
40
+
41
+ ```
42
+ sfw (8k realistic masterpiece:1.3) a girl, (dress:1.2), belt, bag, hair, in rainy street, holding umbrella BREAK
43
+ (red:1.7), umbrella BREAK
44
+ (dark green:1.7) ,dress BREAK
45
+ (blond:1.7), hair BREAK
46
+ (pink:1.7), belt BREAK
47
+ (yellow:1.7), bag
48
+ ```
49
+
50
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial10.png)
51
+
52
+  
53
+
54
+ 次の絵は以下のプロンプトで作られました。forestはT-シャツにだけ書いてくれればいいものの、背景まで森になっています。
55
+ ```
56
+ girl in street (forest printed:1.3) T-shirt, shortshorts daytime
57
+ ```
58
+ ![1](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial1.png)
59
+ そこで、Tシャツのみにforest printetを効かせるためにプロンプトとRegional Prompterを以下のように設定します。
60
+ ```
61
+ lady in street shirt,shortshorts daytime BREAK
62
+ (forest printed:1.3) T-shirt ,shirt
63
+ ```
64
+ ここで大切なのはshirtがBREAKの前に入っていることと、BREAKのあと、最後にあることとカンマで区切られていることです。promptモードでは最後にカンマで区切られた単語を領域計算の対象にします。
65
+ ```
66
+ Divide mode : Prompt
67
+ Calcmode : Attention
68
+ threshold : 0.7
69
+ ```
70
+ divide ratioとbase ratioはいまのところ使用しません。
71
+ この設定で生成すると次のようになります。
72
+ ![2](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial2.png)
73
+ 実際に生成されたマスクはこんな感じです。
74
+ ![3](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial3.png)
75
+ これはAttention modeの性質で、実際には12×8程度にまで縮小されて曖昧になるのでむしろこれくらいの方がよかったりします。
76
+
77
+ さて、なんだかshortshortにへんなひもがついているので次はここを変えてみましょう。プロンプトを以下のように書き換えます。
78
+ ```
79
+ girl in street shirt,shortshorts daytime BREAK
80
+ (forest printed:1.3) T-shirt ,shirt BREAK
81
+ (skirt:1.7) ,shortshorts
82
+ ```
83
+ ```
84
+ Divide mode : Prompt
85
+ Calcmode : Attention
86
+ threshold : 0.7,0.75
87
+ ```
88
+ するとこうなります。
89
+ ![4](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial4.png)
90
+ `shortshorts`が`skirt`になりました。今回は最初がshortshortsだったのでそのまま書き換えましたが、`bottoms`などのような単語を使った方が領域選択が簡単です。なぜ今回`shortshors`のままにしたかというと、ベースとなるプロンプトを変えたくなかったからです。`shortshorts`を`bottoms`に変えてしまうと、初期画像そのものが変わってしまうのです。
91
+
92
+ ```
93
+ girl in street shirt,bottoms daytime BREAK
94
+ (forest printed:1.3) T-shirt ,shirt BREAK
95
+ (red skirt:1.7) ,bottoms
96
+ ```
97
+ で作った画像を三番目に置きました。
98
+ ![5](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial5.png)
99
+ shortshortsのままで作成した場合はshortshorts周辺だけが変わっていますが、shortshortsをbottomsに変えた場合はベースが変わることになるので全体が変わってしまいます。つまり、プロンプト指定では一部だけを変更するインペイントのようなことができるというわけです。
100
+
101
+ 少し設定を変えて着せ替えてみましょう。`(shortshorts:0.5)`としているのは、skirtなどを優先するためです。領域用のshortshortsは弱めても問題ありません。普通は同じものを対象としますが、強い単語だと影響が出すぎてしまうので弱めるのも手です。
102
+
103
+ ```
104
+ girl in street shirt,shortshorts daytime BREAK
105
+ (forest printed:1.3) T-shirt ,shirt BREAK
106
+ (red skirt:1.5) ,(shortshorts:0.5)
107
+ ```
108
+ ```
109
+ Divide mode : Prompt
110
+ Calcmode : Attention
111
+ threshold : 0.7,0.55
112
+ ```
113
+ `long skirt`などに対応するために`shortshorts`の`threshold`を広くしています。bottoms やlower bodyの方が楽だと思います。
114
+ ![6](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial6.png)
115
+ 着せ替えできましたが、pinkは浸食が出ていますね。これは強度の指定を一括で行っているからで、設定を詰めれば浸食はなくなりはずです。
116
+
117
+ さて次は背景を変えてみましょう。
118
+
119
+ ```
120
+ girl in street shirt,shortshorts daytime BREAK
121
+ (forest printed:1.3) T-shirt ,shirt BREAK
122
+ (red skirt:1.5) ,(shortshorts:0.5) BREAK
123
+ (Japan landscape:1.6),street
124
+ ```
125
+ ```
126
+ Divide mode : Prompt
127
+ Calcmode : Attention
128
+ threshold : 0.7,0.55,0.7
129
+ ```
130
+ Japanの部分を変えてみます。
131
+ ![7](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial7.png)
132
+ landscapeという単語はなかなかくせ者で、この単語はかなり強いので他の単語の効果を打ち消すことが多々あるわけですが、landscapeの代わりにstreetで領域計算して適用することで強く出過ぎることを抑えることができるわけです。このように、別な単語で領域を計算するということで表現の幅が広がるのではないでしょうか。
133
+
134
+ Regional Prompterを無効にするとこうなります。
135
+ これはこれでいいですね。
136
+ ![8](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/imgs/ptutorial8.png)
extensions/sd-webui-regional-prompter/regional_prompter_presets.json ADDED
@@ -0,0 +1,54 @@
1
+ [
2
+ {
3
+ "name": "Vertical-3",
4
+ "mode": "Vertical",
5
+ "ratios": "1,1,1",
6
+ "baseratios": "",
7
+ "usebase": false,
8
+ "usecom": false,
9
+ "usencom": false,
10
+ "calcmode": "Attention",
11
+ "nchangeand": false,
12
+ "lnter": "0",
13
+ "lnur": "0"
14
+ },
15
+ {
16
+ "name": "Horizontal-3",
17
+ "mode": "Horizontal",
18
+ "ratios": "1,1,1",
19
+ "baseratios": "",
20
+ "usebase": false,
21
+ "usecom": false,
22
+ "usencom": false,
23
+ "calcmode": "Attention",
24
+ "nchangeand": false,
25
+ "lnter": "0",
26
+ "lnur": "0"
27
+ },
28
+ {
29
+ "name": "Horizontal-7",
30
+ "mode": "Horizontal",
31
+ "ratios": "1,1,1,1,1,1,1",
32
+ "baseratios": "0.2",
33
+ "usebase": true,
34
+ "usecom": false,
35
+ "usencom": false,
36
+ "calcmode": "Attention",
37
+ "nchangeand": false,
38
+ "lnter": "0",
39
+ "lnur": "0"
40
+ },
41
+ {
42
+ "name": "Twod-2-1",
43
+ "mode": "Horizontal",
44
+ "ratios": "1,2,3;1,1",
45
+ "baseratios": "0.2",
46
+ "usebase": false,
47
+ "usecom": false,
48
+ "usencom": false,
49
+ "calcmode": "Attention",
50
+ "nchangeand": false,
51
+ "lnter": "0",
52
+ "lnur": "0"
53
+ }
54
+ ]
extensions/sd-webui-regional-prompter/scripts/__pycache__/attention.cpython-310.pyc ADDED
Binary file (14.9 kB). View file
 
extensions/sd-webui-regional-prompter/scripts/__pycache__/latent.cpython-310.pyc ADDED
Binary file (16.4 kB). View file
 
extensions/sd-webui-regional-prompter/scripts/__pycache__/regions.cpython-310.pyc ADDED
Binary file (23.4 kB). View file
 
extensions/sd-webui-regional-prompter/scripts/__pycache__/rp.cpython-310.pyc ADDED
Binary file (37.1 kB). View file
 
extensions/sd-webui-regional-prompter/scripts/__pycache__/rps.cpython-310.pyc ADDED
Binary file (7.76 kB). View file
 
extensions/sd-webui-regional-prompter/scripts/attention.py ADDED
@@ -0,0 +1,594 @@
1
+ import math
2
+ from pprint import pprint
3
+ import ldm.modules.attention as atm
4
+ import torch
5
+ import torchvision
6
+ import torchvision.transforms.functional as F
7
+ from torchvision.transforms import InterpolationMode, Resize # Mask.
8
+
9
+ TOKENSCON = 77
10
+ TOKENS = 75
11
+
12
+ def db(self,text):
13
+ if self.debug:
14
+ print(text)
15
+
16
+ def main_forward(module,x,context,mask,divide,isvanilla = False,userpp = False,tokens=[],width = 64,height = 64,step = 0, isxl = False, negpip = None, inhr = None):
17
+
18
+ # Forward.
19
+
20
+ if negpip:
21
+ conds, contokens = negpip
22
+ context = torch.cat((context,conds),1)
23
+
24
+ h = module.heads
25
+ if isvanilla: # SBM Ddim / plms have the context split ahead along with x.
26
+ pass
27
+ else: # SBM I think divide may be redundant.
28
+ h = h // divide
29
+ q = module.to_q(x)
30
+
31
+ context = atm.default(context, x)
32
+ k = module.to_k(context)
33
+ v = module.to_v(context)
34
+
35
+ q, k, v = map(lambda t: atm.rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
36
+
37
+ sim = atm.einsum('b i d, b j d -> b i j', q, k) * module.scale
38
+
39
+ if negpip:
40
+ conds, contokens = negpip
41
+ if contokens:
42
+ for contoken in contokens:
43
+ start = (v.shape[1]//77 - len(contokens)) * 77
44
+ v[:,start+1:start+contoken,:] = -v[:,start+1:start+contoken,:]
45
+
46
+ if atm.exists(mask):
47
+ mask = atm.rearrange(mask, 'b ... -> b (...)')
48
+ max_neg_value = -torch.finfo(sim.dtype).max
49
+ mask = atm.repeat(mask, 'b j -> (b h) () j', h=h)
50
+ sim.masked_fill_(~mask, max_neg_value)
51
+
52
+ attn = sim.softmax(dim=-1)
53
+
54
+ ## for prompt mode make basemask from attention maps
55
+
56
+ global pmaskshw,pmasks
57
+
58
+ if inhr and not hiresfinished: hiresscaler(height,width,attn)
59
+
60
+ if userpp and step > 0:
61
+ for b in range(attn.shape[0] // 8):
62
+ if pmaskshw == []:
63
+ pmaskshw = [(height,width)]
64
+ elif (height,width) not in pmaskshw:
65
+ pmaskshw.append((height,width))
66
+
67
+ for t in tokens:
68
+ power = 4 if isxl else 1.2
69
+ add = attn[8*b:8*(b+1),:,t[0]:t[0]+len(t)]**power
70
+ add = torch.sum(add,dim = 2)
71
+ t = f"{t}-{b}"
72
+ if t not in pmasks:
73
+ pmasks[t] = add
74
+ else:
75
+ if pmasks[t].shape[1] != add.shape[1]:
76
+ add = add.view(8,height,width)
77
+ add = F.resize(add,pmaskshw[0])
78
+ add = add.reshape_as(pmasks[t])
79
+
80
+ pmasks[t] = pmasks[t] + add
81
+
82
+ out = atm.einsum('b i j, b j d -> b i d', attn, v)
83
+ out = atm.rearrange(out, '(b h) n d -> b n (h d)', h=h)
84
+ out = module.to_out(out)
85
+
86
+ return out
87
+
88
+ def hook_forwards(self, root_module: torch.nn.Module, remove=False):
89
+ self.hooked = True if not remove else False
90
+ for name, module in root_module.named_modules():
91
+ if "attn2" in name and module.__class__.__name__ == "CrossAttention":
92
+ module.forward = hook_forward(self, module)
93
+ if remove:
94
+ del module.forward
95
+
96
+ ################################################################################
97
+ ##### Attention mode
98
+
99
+ def hook_forward(self, module):
100
+ def forward(x, context=None, mask=None, additional_tokens=None, n_times_crossframe_attn_in_self=0):
101
+ if self.debug:
102
+ print("input : ", x.size())
103
+ print("tokens : ", context.size())
104
+ print("module : ", getattr(module, self.layer_name,None))
105
+ if "conds" in self.log:
106
+ if self.log["conds"] != context.size():
107
+ self.log["conds2"] = context.size()
108
+ else:
109
+ self.log["conds"] = context.size()
110
+
111
+ if self.xsize == 0: self.xsize = x.shape[1]
112
+ if "input" in getattr(module, self.layer_name,""):
113
+ if x.shape[1] > self.xsize:
114
+ self.in_hr = True
115
+
116
+ height = self.hr_h if self.in_hr and self.hr else self.h
117
+ width = self.hr_w if self.in_hr and self.hr else self.w
118
+
119
+ xs = x.size()[1]
120
+ scale = round(math.sqrt(height * width / xs))
121
+
122
+ dsh = round(height / scale)
123
+ dsw = round(width / scale)
124
+ ha, wa = xs % dsh, xs % dsw
125
+ if ha == 0:
126
+ dsw = int(xs / dsh)
127
+ elif wa == 0:
128
+ dsh = int(xs / dsw)
129
+
130
+ contexts = context.clone()
131
+
132
+ # SBM Matrix mode.
133
+ def matsepcalc(x,contexts,mask,pn,divide):
134
+ db(self,f"in MatSepCalc")
135
+ h_states = []
136
+ xs = x.size()[1]
137
+ (dsh,dsw) = split_dims(xs, height, width, self)
138
+
139
+ if "Horizontal" in self.mode: # Map columns / rows first to outer / inner.
140
+ dsout = dsw
141
+ dsin = dsh
142
+ elif "Vertical" in self.mode:
143
+ dsout = dsh
144
+ dsin = dsw
145
+
146
+ tll = self.pt if pn else self.nt
147
+
148
+ i = 0
149
+ outb = None
150
+ if self.usebase:
151
+ context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
152
+ # SBM Controlnet sends extra conds at the end of context, apply it to all regions.
153
+ cnet_ext = contexts.shape[1] - (contexts.shape[1] // TOKENSCON) * TOKENSCON
154
+ if cnet_ext > 0:
155
+ context = torch.cat([context,contexts[:,-cnet_ext:,:]],dim = 1)
156
+
157
+ negpip = negpipdealer(i,pn)
158
+
159
+ i = i + 1
160
+
161
+ out = main_forward(module, x, context, mask, divide, self.isvanilla,userpp =True,step = self.step, isxl = self.isxl, negpip = negpip)
162
+
163
+ if len(self.nt) == 1 and not pn:
164
+ db(self,"return out for NP")
165
+ return out
166
+ # if self.usebase:
167
+ outb = out.clone()
168
+ outb = outb.reshape(outb.size()[0], dsh, dsw, outb.size()[2]) if "Ran" not in self.mode else outb
169
+
170
+ sumout = 0
171
+ db(self,f"tokens : {tll},pn : {pn}")
172
+ db(self,[r for r in self.aratios])
173
+
174
+ for drow in self.aratios:
175
+ v_states = []
176
+ sumin = 0
177
+ for dcell in drow.cols:
178
+ # Grabs a set of tokens depending on number of unrelated breaks.
179
+ context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
180
+ # SBM Controlnet sends extra conds at the end of context, apply it to all regions.
181
+ cnet_ext = contexts.shape[1] - (contexts.shape[1] // TOKENSCON) * TOKENSCON
182
+ if cnet_ext > 0:
183
+ context = torch.cat([context,contexts[:,-cnet_ext:,:]],dim = 1)
184
+
185
+ negpip = negpipdealer(i,pn)
186
+
187
+ db(self,f"tokens : {tll[i][0]*TOKENSCON}-{tll[i][1]*TOKENSCON}")
188
+ i = i + 1 + dcell.breaks
189
+ # if i >= contexts.size()[1]:
190
+ # indlast = True
191
+
192
+ out = main_forward(module, x, context, mask, divide, self.isvanilla,userpp = self.pn, step = self.step, isxl = self.isxl,negpip = negpip)
193
+ db(self,f" dcell.breaks : {dcell.breaks}, dcell.ed : {dcell.ed}, dcell.st : {dcell.st}")
194
+ if len(self.nt) == 1 and not pn:
195
+ db(self,"return out for NP")
196
+ return out
197
+ # Actual matrix split by region.
198
+ if "Ran" in self.mode:
199
+ v_states.append(out)
200
+ continue
201
+
202
+ out = out.reshape(out.size()[0], dsh, dsw, out.size()[2]) # convert to main shape.
203
+ # if indlast:
204
+ addout = 0
205
+ addin = 0
206
+ sumin = sumin + int(dsin*dcell.ed) - int(dsin*dcell.st)
207
+ if dcell.ed >= 0.999:
208
+ addin = sumin - dsin
209
+ sumout = sumout + int(dsout*drow.ed) - int(dsout*drow.st)
210
+ if drow.ed >= 0.999:
211
+ addout = sumout - dsout
212
+ if "Horizontal" in self.mode:
213
+ out = out[:,int(dsh*drow.st) + addout:int(dsh*drow.ed),
214
+ int(dsw*dcell.st) + addin:int(dsw*dcell.ed),:]
215
+ if self.debug : print(f"{int(dsh*drow.st) + addout}:{int(dsh*drow.ed)},{int(dsw*dcell.st) + addin}:{int(dsw*dcell.ed)}")
216
+ if self.usebase :
217
+ # outb_t = outb[:,:,int(dsw*drow.st):int(dsw*drow.ed),:].clone()
218
+ outb_t = outb[:,int(dsh*drow.st) + addout:int(dsh*drow.ed),
219
+ int(dsw*dcell.st) + addin:int(dsw*dcell.ed),:].clone()
220
+ out = out * (1 - dcell.base) + outb_t * dcell.base
221
+ elif "Vertical" in self.mode: # Cols are the outer list, rows are cells.
222
+ out = out[:,int(dsh*dcell.st) + addin:int(dsh*dcell.ed),
223
+ int(dsw*drow.st) + addout:int(dsw*drow.ed),:]
224
+ db(self,f"{int(dsh*dcell.st) + addin}:{int(dsh*dcell.ed)}-{int(dsw*drow.st) + addout}:{int(dsw*drow.ed)}")
225
+ if self.usebase :
226
+ # outb_t = outb[:,:,int(dsw*drow.st):int(dsw*drow.ed),:].clone()
227
+ outb_t = outb[:,int(dsh*dcell.st) + addin:int(dsh*dcell.ed),
228
+ int(dsw*drow.st) + addout:int(dsw*drow.ed),:].clone()
229
+ out = out * (1 - dcell.base) + outb_t * dcell.base
230
+ db(self,f"sumin:{sumin},sumout:{sumout},dsh:{dsh},dsw:{dsw}")
231
+
232
+ v_states.append(out)
233
+ if self.debug :
234
+ for h in v_states:
235
+ print(h.size())
236
+
237
+ if "Horizontal" in self.mode:
238
+ ox = torch.cat(v_states,dim = 2) # First concat the cells to rows.
239
+ elif "Vertical" in self.mode:
240
+ ox = torch.cat(v_states,dim = 1) # Cols first mode, concat to cols.
241
+ elif "Ran" in self.mode:
242
+ if self.usebase:
243
+ ox = outb * makerrandman(self.ranbase,dsh,dsw).view(-1, 1)
244
+ ox = torch.zeros_like(v_states[0])
245
+ for state, filter in zip(v_states, self.ransors):
246
+ filter = makerrandman(filter,dsh,dsw)
247
+ ox = ox + state * filter.view(-1, 1)
248
+ return ox
249
+
250
+ h_states.append(ox)
251
+ if "Horizontal" in self.mode:
252
+ ox = torch.cat(h_states,dim = 1) # Second, concat rows to layer.
253
+ elif "Vertical" in self.mode:
254
+ ox = torch.cat(h_states,dim = 2) # Or cols.
255
+ ox = ox.reshape(x.size()[0],x.size()[1],x.size()[2]) # Restore to 3d source.
256
+ return ox
257
+
258
+ def masksepcalc(x,contexts,mask,pn,divide):
259
+ db(self,f"in MaskSepCalc")
260
+ xs = x.size()[1]
261
+ (dsh,dsw) = split_dims(xs, height, width, self)
262
+
263
+ tll = self.pt if pn else self.nt
264
+
265
+ # Base forward.
266
+ i = 0
267
+ outb = None
268
+ if self.usebase:
269
+ context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
270
+ # SBM Controlnet sends extra conds at the end of context, apply it to all regions.
271
+ cnet_ext = contexts.shape[1] - (contexts.shape[1] // TOKENSCON) * TOKENSCON
272
+ if cnet_ext > 0:
273
+ context = torch.cat([context,contexts[:,-cnet_ext:,:]],dim = 1)
274
+
275
+ negpip = negpipdealer(i,pn)
276
+
277
+ i = i + 1
278
+ out = main_forward(module, x, context, mask, divide, self.isvanilla, isxl = self.isxl, negpip = negpip)
279
+
280
+ if len(self.nt) == 1 and not pn:
281
+ db(self,"return out for NP")
282
+ return out
283
+ # if self.usebase:
284
+ outb = out.clone()
285
+ outb = outb.reshape(outb.size()[0], dsh, dsw, outb.size()[2])
286
+
287
+ db(self,f"tokens : {tll},pn : {pn}")
288
+
289
+ ox = torch.zeros_like(x)
290
+ ox = ox.reshape(ox.shape[0], dsh, dsw, ox.shape[2])
291
+ ftrans = Resize((dsh, dsw), interpolation = InterpolationMode("nearest"))
292
+ for rmask in self.regmasks:
293
+ # Need to delay mask tensoring so it's on the correct gpu.
294
+ # Dunno if caching masks would be an improvement.
295
+ if self.usebase:
296
+ bweight = self.bratios[0][i - 1]
297
+ # Resize mask to current dims.
298
+ # Since it's a mask, we prefer a binary value, nearest is the only option.
299
+ rmask2 = ftrans(rmask.reshape([1, *rmask.shape])) # Requires dimensions N,C,{d}.
300
+ rmask2 = rmask2.reshape(1, dsh, dsw, 1)
301
+
302
+ # Grabs a set of tokens depending on number of unrelated breaks.
303
+ context = contexts[:,tll[i][0] * TOKENSCON:tll[i][1] * TOKENSCON,:]
304
+ # SBM Controlnet sends extra conds at the end of context, apply it to all regions.
305
+ cnet_ext = contexts.shape[1] - (contexts.shape[1] // TOKENSCON) * TOKENSCON
306
+ if cnet_ext > 0:
307
+ context = torch.cat([context,contexts[:,-cnet_ext:,:]],dim = 1)
308
+
309
+ db(self,f"tokens : {tll[i][0]*TOKENSCON}-{tll[i][1]*TOKENSCON}")
310
+ i = i + 1
311
+ # if i >= contexts.size()[1]:
312
+ # indlast = True
313
+ out = main_forward(module, x, context, mask, divide, self.isvanilla, isxl = self.isxl)
314
+ if len(self.nt) == 1 and not pn:
315
+ db(self,"return out for NP")
316
+ return out
317
+
318
+ out = out.reshape(out.size()[0], dsh, dsw, out.size()[2]) # convert to main shape.
319
+ if self.usebase:
320
+ out = out * (1 - bweight) + outb * bweight
321
+ ox = ox + out * rmask2
322
+
323
+ if self.usebase:
324
+ rmask = self.regbase
325
+ rmask2 = ftrans(rmask.reshape([1, *rmask.shape])) # Requires dimensions N,C,{d}.
326
+ rmask2 = rmask2.reshape(1, dsh, dsw, 1)
327
+ ox = ox + outb * rmask2
328
+ ox = ox.reshape(x.size()[0],x.size()[1],x.size()[2]) # Restore to 3d source.
329
+ return ox
330
+
331
+ def promptsepcalc(x, contexts, mask, pn,divide):
332
+ h_states = []
333
+
334
+ tll = self.pt if pn else self.nt
335
+ db(self,f"in PromptSepCalc")
336
+ db(self,f"tokens : {tll},pn : {pn}")
337
+
338
+ for i, tl in enumerate(tll):
339
+ context = contexts[:, tl[0] * TOKENSCON : tl[1] * TOKENSCON, :]
340
+ # SBM Controlnet sends extra conds at the end of context, apply it to all regions.
341
+ cnet_ext = contexts.shape[1] - (contexts.shape[1] // TOKENSCON) * TOKENSCON
342
+ if cnet_ext > 0:
343
+ context = torch.cat([context,contexts[:,-cnet_ext:,:]],dim = 1)
344
+
345
+ db(self,f"tokens3 : {tl[0]*TOKENSCON}-{tl[1]*TOKENSCON}")
346
+ db(self,f"extra-tokens : {cnet_ext}")
347
+
348
+ userpp = self.pn and i == 0 and self.pfirst
349
+
350
+ negpip = negpipdealer(self.condi,pn) if "La" in self.calc else negpipdealer(i,pn)
351
+
352
+ out = main_forward(module, x, context, mask, divide, self.isvanilla, userpp = userpp, width = dsw, height = dsh,
353
+ tokens = self.pe, step = self.step, isxl = self.isxl, negpip = negpip, inhr = self.in_hr)
354
+
355
+ if (len(self.nt) == 1 and not pn) or ("Pro" in self.mode and "La" in self.calc):
356
+ db(self,"return out for NP or Latent")
357
+ return out
358
+
359
+ db(self,[scale, dsh, dsw, dsh * dsw, x.size()[1]])
360
+
361
+ if i == 0:
362
+ outb = out.clone()
363
+ continue
364
+ else:
365
+ h_states.append(out)
366
+
367
+ if self.debug:
368
+ for h in h_states :
369
+ print(f"divided : {h.size()}")
370
+ print(pmaskshw)
371
+
372
+ if pmaskshw == []:
373
+ return outb
374
+
375
+ ox = outb.clone() if self.ex else outb * 0
376
+
377
+ db(self,[pmaskshw,maskready,(dsh,dsw) in pmaskshw and maskready,len(pmasksf),len(h_states)])
378
+
379
+ if (dsh,dsw) in pmaskshw and maskready:
380
+ depth = pmaskshw.index((dsh,dsw))
381
+ maskb = None
382
+ for masks , state in zip(pmasksf.values(),h_states):
383
+ mask = masks[depth]
384
+ masked = torch.multiply(state, mask)
385
+ if self.ex:
386
+ ox = torch.where(masked !=0 , masked, ox)
387
+ else:
388
+ ox = ox + masked
389
+ maskb = maskb + mask if maskb is not None else mask
390
+ maskb = 1 - maskb
391
+ if not self.ex : ox = ox + torch.multiply(outb, maskb)
392
+ return ox
393
+ else:
394
+ return outb
395
+
396
+ if self.eq:
397
+ db(self,"same token size and divisions")
398
+ if "Mas" in self.mode:
399
+ ox = masksepcalc(x, contexts, mask, True, 1)
400
+ elif "Pro" in self.mode:
401
+ ox = promptsepcalc(x, contexts, mask, True, 1)
402
+ else:
403
+ ox = matsepcalc(x, contexts, mask, True, 1)
404
+ elif x.size()[0] == 1 * self.batch_size:
405
+ db(self,"different tokens size")
406
+ if "Mas" in self.mode:
407
+ ox = masksepcalc(x, contexts, mask, self.pn, 1)
408
+ elif "Pro" in self.mode:
409
+ ox = promptsepcalc(x, contexts, mask, self.pn, 1)
410
+ else:
411
+ ox = matsepcalc(x, contexts, mask, self.pn, 1)
412
+ else:
413
+ db(self,"same token size and different divisions")
414
+ # SBM You get 2 layers of x, context for pos/neg.
415
+ # Each should be forwarded separately, pairing them up together.
416
+ if self.isvanilla: # SBM Ddim reverses cond/uncond.
417
+ nx, px = x.chunk(2)
418
+ conn,conp = contexts.chunk(2)
419
+ else:
420
+ px, nx = x.chunk(2)
421
+ conp,conn = contexts.chunk(2)
422
+ if "Mas" in self.mode:
423
+ opx = masksepcalc(px, conp, mask, True, 2)
424
+ onx = masksepcalc(nx, conn, mask, False, 2)
425
+ elif "Pro" in self.mode:
426
+ opx = promptsepcalc(px, conp, mask, True, 2)
427
+ onx = promptsepcalc(nx, conn, mask, False, 2)
428
+ else:
429
+ # SBM I think division may have been an incorrect patch.
430
+ # But I'm not sure, haven't tested beyond DDIM / PLMS.
431
+ opx = matsepcalc(px, conp, mask, True, 2)
432
+ onx = matsepcalc(nx, conn, mask, False, 2)
433
+ if self.isvanilla: # SBM Ddim reverses cond/uncond.
434
+ ox = torch.cat([onx, opx])
435
+ else:
436
+ ox = torch.cat([opx, onx])
437
+
438
+ self.count += 1
439
+
440
+ limit = 70 if self.isxl else 16
441
+
442
+ if self.count == limit:
443
+ self.pn = not self.pn
444
+ self.count = 0
445
+ self.pfirst = False
446
+ self.condi += 1
447
+ db(self,f"output : {ox.size()}")
448
+ return ox
449
+
450
+ return forward
451
+
452
+ def split_dims(xs, height, width, self = None):
453
+ """Split an attention layer dimension to height + width.
454
+
455
+ Originally, the estimate was dsh = sqrt(hw_ratio*xs),
456
+ rounding to the nearest value. But this proved inaccurate.
457
+ What seems to be the actual operation is as follows:
458
+ - Divide h,w by 8, rounding DOWN.
459
+ (However, webui forces dims to be divisible by 8 unless set explicitly.)
460
+ - For every new layer (of 4), divide both by 2 and round UP (then back up)
461
+ - Multiply h*w to yield xs.
462
+ There is no inverse function to this set of operations,
463
+ so instead we mimic them sans the multiplication part with orig h+w.
464
+ The only alternative is brute forcing integer guesses,
465
+ which might be inaccurate too.
466
+ No known checkpoints follow a different system of layering,
467
+ but it's theoretically possible. Please report if encountered.
468
+ """
469
+ # OLD METHOD.
470
+ # scale = round(math.sqrt(height*width/xs))
471
+ # dsh = round_dim(height, scale)
472
+ # dsw = round_dim(width, scale)
473
+ scale = math.ceil(math.log2(math.sqrt(height * width / xs)))
474
+ dsh = repeat_div(height,scale)
475
+ dsw = repeat_div(width,scale)
476
+ if xs > dsh * dsw and hasattr(self,"nei_multi"):
477
+ dsh, dsw = self.nei_multi[1], self.nei_multi[0]
478
+ while dsh*dsw != xs:
479
+ dsh, dsw = dsh//2, dsw//2
480
+
481
+ if self is not None:
482
+ if self.debug : print(scale,dsh,dsw,dsh*dsw,xs, height, width)
483
+
484
+ return dsh,dsw
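As a quick check of the dimension recovery described in the docstring above — a minimal sketch assuming a 512x768 generation (halve_up is a stand-in that mirrors repeat_div; this is illustrative and not part of the uploaded file):

    import math
    def halve_up(x, times):                     # same operation as repeat_div below
        for _ in range(times):
            x = math.ceil(x / 2)
        return x
    xs, h, w = 6144, 512, 768                   # 6144 tokens = 64*96 latent at the first depth
    scale = math.ceil(math.log2(math.sqrt(h * w / xs)))    # -> 3
    print(halve_up(h, scale), halve_up(w, scale))          # -> 64 96, and 64*96 == xs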
485
+
486
+ def repeat_div(x,y):
487
+ """Imitates dimension halving common in convolution operations.
488
+
489
+ This is a pretty big assumption of the model,
490
+ but then if some model doesn't work like that it will be easy to spot.
491
+ """
492
+ while y > 0:
493
+ x = math.ceil(x / 2)
494
+ y = y - 1
495
+ return x
496
+
497
+ #################################################################################
498
+ ##### for Prompt mode
499
+ pmasks = {} # made from attention maps
500
+ pmaskshw =[] # (height, width) pairs of the U-Net blocks
501
+ pmasksf = {} # made from pmasks for regions
502
+ maskready = False
503
+ hiresfinished = False
504
+
505
+ def reset_pmasks(self): # init parameters in every batch
506
+ global pmasks, pmaskshw, pmasksf, maskready, hiresfinished, pmaskshw_o
507
+ self.step = 0
508
+ pmasks = {}
509
+ pmaskshw =[]
510
+ pmaskshw_o =[]
511
+ pmasksf = {}
512
+ maskready = False
513
+ hiresfinished = False
514
+ self.x = None
515
+ self.rebacked = False
516
+
517
+ def savepmasks(self,processed):
518
+ for mask ,th in zip(pmasks.values(),self.th):
519
+ img, _ , _= makepmask(mask, self.h, self.w,th, self.step)
520
+ processed.images.append(img)
521
+ return processed
522
+
523
+ def hiresscaler(new_h,new_w,attn):
524
+ global pmaskshw,pmasks,pmasksf,pmaskshw_o, hiresfinished
525
+ nset = (new_h,new_w)
526
+ (old_h, old_w) = pmaskshw[0]
527
+ if new_h > pmaskshw[0][0]:
528
+ pmaskshw_o = pmaskshw.copy()
529
+ del pmaskshw
530
+ pmaskshw = [nset]
531
+ hiresmask(pmasks,old_h, old_w, new_h, new_w,at = attn[:,:,0])
532
+ hiresmask(pmasksf,old_h, old_w, new_h, new_w,i = 0)
533
+ if nset not in pmaskshw:
534
+ index = len(pmaskshw)
535
+ pmaskshw.append(nset)
536
+ old_h, old_w = pmaskshw_o[index]
537
+ hiresmask(pmasksf,old_h, old_w, new_h, new_w,i = index)
538
+ if index == 3: hiresfinished = True
539
+
540
+ def hiresmask(masks,oh,ow,nh,nw,at = None,i = None):
541
+ for key in masks.keys():
542
+ mask = masks[key] if i is None else masks[key][i]
543
+ mask = mask.view(8 if i is None else 1,oh,ow)
544
+ mask = F.resize(mask,(nh,nw))
545
+ mask = mask.reshape_as(at) if at is not None else mask.reshape(1,mask.shape[1] * mask.shape[2],1)
546
+ if i is None:
547
+ masks[key] = mask
548
+ else:
549
+ masks[key][i] = mask
550
+
551
+ def makepmask(mask, h, w, th, step, bratio = 1): # make masks from the attention cache; returns [for preview, for attention, for Latent]
552
+ th = th - step * 0.005
553
+ bratio = 1 - bratio
554
+ mask = torch.mean(mask,dim=0)
555
+ mask = mask / mask.max().item()
556
+ mask = torch.where(mask > th ,1,0)
557
+ mask = mask.float()
558
+ mask = mask.view(1,pmaskshw[0][0],pmaskshw[0][1])
559
+ img = torchvision.transforms.functional.to_pil_image(mask)
560
+ img = img.resize((w,h))
561
+ mask = F.resize(mask,(h,w),interpolation=F.InterpolationMode.NEAREST)
562
+ lmask = mask
563
+ mask = mask.reshape(h*w)
564
+ mask = torch.where(mask > 0.1 ,1,0)
565
+ return img,mask * bratio , lmask * bratio
566
+
567
+ def makerrandman(mask, h, w, latent = False): # resize a random-region mask to (h, w); returns a rounded 1-D mask, or the 2-D mask when latent=True
568
+ mask = mask.float()
569
+ mask = mask.view(1,mask.shape[0],mask.shape[1])
570
+ img = torchvision.transforms.functional.to_pil_image(mask)
571
+ img = img.resize((w,h))
572
+ mask = F.resize(mask,(h,w),interpolation=F.InterpolationMode.NEAREST)
573
+ if latent: return mask
574
+ mask = mask.reshape(h*w)
575
+ mask = torch.round(mask).long()
576
+ return mask
577
+
578
+ def negpipdealer(i,pn):
579
+ negpip = None
580
+ from modules.scripts import scripts_txt2img
581
+ for script in scripts_txt2img.alwayson_scripts:
582
+ if "negpip.py" in script.filename:
583
+ negpip = script
584
+
585
+ if negpip:
586
+ conds = negpip.conds if pn else negpip.unconds
587
+ tokens = negpip.contokens if pn else negpip.untokens
588
+ if conds and len(conds) >= i + 1:
589
+ if conds[i] is not None:
590
+ return [conds[i],tokens[i]]
591
+ else:
592
+ return None
593
+ else:
594
+ return None
extensions/sd-webui-regional-prompter/scripts/latent.py ADDED
@@ -0,0 +1,576 @@
1
+ from difflib import restore
2
+ import random
3
+ import copy
4
+ from pprint import pprint
5
+ import re
6
+ from typing import Union
7
+ import torch
8
+ from modules import devices, shared, extra_networks, sd_hijack
9
+ from modules.script_callbacks import CFGDenoisedParams, CFGDenoiserParams
10
+ from torchvision.transforms import InterpolationMode, Resize # Mask.
11
+ import scripts.attention as att
12
+ from scripts.regions import floatdef
13
+ from scripts.attention import makerrandman
14
+
15
+ islora = True
16
+ in_hr = False
17
+ layer_name = "lora_layer_name"
18
+ orig_Linear_forward = None
19
+
20
+ orig_lora_functional = False
21
+
22
+ lactive = False
23
+ labug =False
24
+ MINID = 1000
25
+ MAXID = 10000
26
+ LORAID = MINID # Discriminator for repeated lora usage / across gens, presumably.
27
+
28
+ def setuploras(self):
29
+ global lactive, labug, islora, orig_Linear_forward, orig_lora_functional, layer_name
30
+ lactive = True
31
+ labug = self.debug
32
+ islora = self.isbefore15
33
+ layer_name = self.layer_name
34
+ orig_lora_functional = shared.opts.lora_functional if hasattr(shared.opts,"lora_functional") else False
35
+
36
+ try:
37
+ if 150 <= self.ui_version <= 159 or self.slowlora:
38
+ shared.opts.lora_functional = False
39
+ else:
40
+ shared.opts.lora_functional = True
41
+ except:
42
+ pass
43
+
44
+ is15 = 150 <= self.ui_version <= 159
45
+ orig_Linear_forward = torch.nn.Linear.forward
46
+ torch.nn.Linear.forward = h15_Linear_forward if is15 else h_Linear_forward
47
+
48
+ def cloneparams(orig,target):
49
+ target.x = orig.x.clone()
50
+ target.image_cond = orig.image_cond.clone()
51
+ target.sigma = orig.sigma.clone()
52
+
53
+ ###################################################
54
+ ###### Latent Method denoise call back
55
+ # Using the AND syntax with shared.batch_cond_uncond = False
56
+ # the U-Net is evaluated (the number of prompts separated by AND) + 1 times.
57
+ # This means that the calculation is performed for the area + 1 times.
58
+ # This mechanism is used to apply LoRA by region by changing the LoRA application rate for each U-NET calculation.
59
+ # The problem here is that in the web-ui system, if more than two batch sizes are set,
60
+ # a problem will occur if the number of areas and the batch size are not the same.
61
+ # If the batch is 1 for 3 areas, the calculation is performed 4 times: Area1, Area2, Area3, and Negative.
62
+ # However, if the batch is 2,
63
+ # [Batch1-Area1, Batch1-Area2]
64
+ # [Batch1-Area3, Batch2-Area1]
65
+ # [Batch2-Area2, Batch2-Area3]
66
+ # [Batch1-Negative, Batch2-Negative]
67
+ # and the areas of simultaneous computation will be different.
68
+ # Therefore, it is necessary to change the order in advance.
69
+ # [Batch1-Area1, Batch1-Area2] -> [Batch1-Area1, Batch2-Area1]
70
+ # [Batch1-Area3, Batch2-Area1] -> [Batch1-Area2, Batch2-Area2]
71
+ # [Batch2-Area2, Batch2-Area3] -> [Batch1-Area3, Batch2-Area3]
72
+
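To make the reordering above concrete, here is a minimal sketch of the same index permutation used in the callback below, for a batch of 2 with three regions plus the negative (the labels are purely illustrative):

    batch, areas = 2, 3
    xt = ["B1-A1", "B1-A2", "B1-A3", "B2-A1", "B2-A2", "B2-A3", "B1-Neg", "B2-Neg"]
    x = list(xt)
    for a in range(areas):
        for b in range(batch):
            x[b + a * batch] = xt[a + b * areas]
    print(x)  # ['B1-A1', 'B2-A1', 'B1-A2', 'B2-A2', 'B1-A3', 'B2-A3', 'B1-Neg', 'B2-Neg']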
73
+ def denoiser_callback_s(self, params: CFGDenoiserParams):
74
+ if "Pro" in self.mode: # in Prompt mode, make masks from the sum of attention maps
75
+ if self.x is None : cloneparams(params,self) # return to step 0 if mask is ready
76
+ self.step = params.sampling_step
77
+ self.pfirst = True
78
+
79
+ lim = 1 if self.isxl else 3
80
+
81
+ if len(att.pmaskshw) > lim:
82
+ self.filters = []
83
+ for b in range(self.batch_size):
84
+
85
+ allmask = []
86
+ basemask = None
87
+ for t, th, bratio in zip(self.pe, self.th, self.bratios):
88
+ key = f"{t}-{b}"
89
+ _, _, mask = att.makepmask(att.pmasks[key], params.x.shape[2], params.x.shape[3], th, self.step, bratio = bratio)
90
+ mask = mask.repeat(params.x.shape[1],1,1)
91
+ basemask = 1 - mask if basemask is None else basemask - mask
92
+ if self.ex:
93
+ for l in range(len(allmask)):
94
+ mt = allmask[l] - mask
95
+ allmask[l] = torch.where(mt > 0, 1,0)
96
+ allmask.append(mask)
97
+ if not self.ex:
98
+ sum = torch.stack(allmask, dim=0).sum(dim=0)
99
+ sum = torch.where(sum == 0, 1 , sum)
100
+ allmask = [mask / sum for mask in allmask]
101
+ basemask = torch.where(basemask > 0, 1, 0)
102
+ allmask.insert(0,basemask)
103
+ self.filters.extend(allmask)
104
+ att.maskready = True
105
+
106
+ for t, th, bratio in zip(self.pe, self.th, self.bratios):
107
+ allmask = []
108
+ for hw in att.pmaskshw:
109
+ masks = None
110
+ for b in range(self.batch_size):
111
+ key = f"{t}-{b}"
112
+ _, mask, _ = att.makepmask(att.pmasks[key], hw[0], hw[1], th, self.step, bratio = bratio)
113
+ mask = mask.unsqueeze(0).unsqueeze(-1)
114
+ masks = mask if b ==0 else torch.cat((masks,mask),dim=0)
115
+ allmask.append(mask)
116
+ att.pmasksf[key] = allmask
117
+ att.maskready = True
118
+
119
+ if not self.rebacked:
120
+ cloneparams(self,params)
121
+ params.sampling_step = 0
122
+ self.rebacked = True
123
+
124
+ if "La" in self.calc:
125
+ self.condi = 0
126
+ global in_hr, regioner
127
+ regioner.step = params.sampling_step
128
+ in_hr = self.in_hr
129
+ regioner.u_count = 0
130
+ if "u_list" not in self.log.keys() and hasattr(regioner,"u_llist"):
131
+ self.log["u_list"] = regioner.u_llist.copy()
132
+ if "u_list_hr" not in self.log.keys() and hasattr(regioner,"u_llist") and in_hr:
133
+ self.log["u_list_hr"] = regioner.u_llist.copy()
134
+ xt = params.x.clone()
135
+ ict = params.image_cond.clone()
136
+ st = params.sigma.clone()
137
+ batch = self.batch_size
138
+ areas = xt.shape[0] // batch -1
139
+ # SBM Stale version workaround.
140
+ if hasattr(params,"text_cond"):
141
+ if "DictWithShape" in params.text_cond.__class__.__name__:
142
+ ct = {}
143
+ for key in params.text_cond.keys():
144
+ ct[key] = params.text_cond[key].clone()
145
+ else:
146
+ ct = params.text_cond.clone()
147
+
148
+ for a in range(areas):
149
+ for b in range(batch):
150
+ params.x[b+a*batch] = xt[a + b * areas]
151
+ params.image_cond[b+a*batch] = ict[a + b * areas]
152
+ params.sigma[b+a*batch] = st[a + b * areas]
153
+ # SBM Stale version workaround.
154
+ if hasattr(params,"text_cond"):
155
+ if "DictWithShape" in params.text_cond.__class__.__name__:
156
+ for key in params.text_cond.keys():
157
+ params.text_cond[key][b+a*batch] = ct[key][a + b * areas]
158
+ else:
159
+ params.text_cond[b+a*batch] = ct[a + b * areas]
160
+
161
+ def denoised_callback_s(self, params: CFGDenoisedParams):
162
+ batch = self.batch_size
163
+ x = params.x
164
+ xt = params.x.clone()
165
+ areas = xt.shape[0] // batch - 1
166
+
167
+ if "La" in self.calc:
168
+ # x.shape = [batch_size, C, H // 8, W // 8]
169
+
170
+ if not "Pro" in self.mode:
171
+ indrebuild = self.filters == [] or self.filters[0].size() != x[0].size()
172
+
173
+ if indrebuild:
174
+ if "Ran" in self.mode:
175
+ if self.filters == []:
176
+ self.filters = [self.ranbase] + self.ransors if self.usebase else self.ransors
177
+ elif self.filters[0][:,:].size() != x[0,0,:,:].size():
178
+ self.filters = hrchange(self.ransors,x.shape[2], x.shape[3])
179
+ else:
180
+ if "Mask" in self.mode:
181
+ masks = (self.regmasks,self.regbase)
182
+ else:
183
+ masks = self.aratios #makefilters(c,h,w,masks,mode,usebase,bratios,indmask = None)
184
+ self.filters = makefilters(x.shape[1], x.shape[2], x.shape[3],masks, self.mode, self.usebase, self.bratios, "Mas" in self.mode)
185
+ self.filters = [f for f in self.filters]*batch
186
+ else:
187
+ if not att.maskready:
188
+ self.filters = [1,*[0 for a in range(areas - 1)]] * batch
189
+
190
+ if self.debug : print("filterlength : ",len(self.filters))
191
+
192
+ for b in range(batch):
193
+ for a in range(areas) :
194
+ fil = self.filters[a + b*areas]
195
+ if self.debug : print(f"x = {x.size()}i = {a + b*areas}, j = {b + a*batch}, cond = {a + b*areas},filsum = {fil if type(fil) is int else torch.sum(fil)}, uncon = {x.size()[0]+(b-batch)}")
196
+ x[a + b * areas, :, :, :] = xt[b + a*batch, :, :, :] * fil + x[x.size()[0]+(b-batch), :, :, :] * (1 - fil)
197
+
198
+ if params.total_sampling_steps == params.sampling_step + 2:
199
+ if self.rps is not None and self.diff:
200
+ if self.rps.latent is None:
201
+ self.rps.latent = x.clone()
202
+ return
203
+ elif self.rps.latent.shape[2:] != x.shape[2:] and self.rps.latent_hr is None:
204
+ self.rps.latent_hr = x.clone()
205
+ return
206
+ else:
207
+ for b in range(batch):
208
+ for a in range(areas) :
209
+ fil = self.filters[a+1]
210
+ orig = self.rps.latent if self.rps.latent.shape[2:] == x.shape[2:] else self.rps.latent_hr
211
+ if self.debug : print(f"x = {x.size()}i = {a + b*areas}, j = {b + a*batch}, cond = {a + b*areas},filsum = {fil if type(fil) is int else torch.sum(fil)}, uncon = {x.size()[0]+(b-batch)}")
212
+ #print("1",type(self.rps.latent),type(fil))
213
+ x[:,:,:,:] = orig[:,:,:,:] * (1 - fil) + x[:,:,:,:] * fil
214
+
215
+ #if params.total_sampling_steps - 7 == params.sampling_step + 2:
216
+ if att.maskready:
217
+ if self.rps is not None and self.diff:
218
+ if self.rps.latent is not None:
219
+ if self.rps.latent.shape[2:] != x.shape[2:]:
220
+ if self.rps.latent_hr is None: return
221
+ for b in range(batch):
222
+ for a in range(areas) :
223
+ fil = self.filters[a+1]
224
+ orig = self.rps.latent if self.rps.latent.shape[2:] == x.shape[2:] else self.rps.latent_hr
225
+ if self.debug : print(f"x = {x.size()}i = {a + b*areas}, j = {b + a*batch}, cond = {a + b*areas},filsum = {fil if type(fil) is int else torch.sum(fil)}, uncon = {x.size()[0]+(b-batch)}")
226
+ #print("2",type(self.rps.latent),type(fil))
227
+ x[:,:,:,:] = orig[:,:,:,:] * (1 - fil) + x[:,:,:,:] * fil
228
+
229
+ if params.sampling_step == 0 and self.in_hr:
230
+ if self.rps is not None and self.diff:
231
+ if self.rps.latent is not None:
232
+ if self.rps.latent.shape[2:] != x.shape[2:] and self.rps.latent_hr is None: return
233
+ for b in range(batch):
234
+ for a in range(areas) :
235
+ fil = self.filters[a+1]
236
+ orig = self.rps.latent if self.rps.latent.shape[2:] == x.shape[2:] else self.rps.latent_hr
237
+ if self.debug : print(f"x = {x.size()}i = {a + b*areas}, j = {b + a*batch}, cond = {a + b*areas},filsum = {fil if type(fil) is int else torch.sum(fil)}, uncon = {x.size()[0]+(b-batch)}")
238
+ #print("3",type(self.rps.latent),type(fil))
239
+ x[:,:,:,:] = orig[:,:,:,:] * (1 - fil) + x[:,:,:,:] * fil
240
+
241
+ ######################################################
242
+ ##### Latent Method
243
+
244
+ def hrchange(filters,h, w):
245
+ out = []
246
+ for filter in filters:
247
+ out.append(makerrandman(filter,h,w,True))
248
+ return out
249
+
250
+ # Remove tags from called lora names.
251
+ flokey = lambda x: (x.split("added_by_regional_prompter")[0]
252
+ .split("added_by_lora_block_weight")[0].split("_in_LBW")[0].split("_in_RP")[0])
253
+
254
+ def lora_namer(self, p, lnter, lnur):
255
+ ldict_u = {}
256
+ ldict_te = {}
257
+ lorder = [] # Loras call order for matching with u/te lists.
258
+ import lora as loraclass
259
+ for lora in loraclass.loaded_loras:
260
+ ldict_u[lora.name] =lora.multiplier if self.isbefore15 else lora.unet_multiplier
261
+ ldict_te[lora.name] =lora.multiplier if self.isbefore15 else lora.te_multiplier
262
+
263
+ subprompts = self.current_prompts[0].split("AND")
264
+ ldictlist_u =[ldict_u.copy() for i in range(len(subprompts)+1)]
265
+ ldictlist_te =[ldict_te.copy() for i in range(len(subprompts)+1)]
266
+
267
+ for i, prompt in enumerate(subprompts):
268
+ _, extranets = extra_networks.parse_prompts([prompt])
269
+ calledloras = extranets["lora"]
270
+
271
+ names = ""
272
+ tdict = {}
273
+
274
+ for called in calledloras:
275
+ names = names + called.items[0]
276
+ tdict[called.items[0]] = syntaxdealer(called.items,"unet=",1)
277
+
278
+ for key in ldictlist_u[i].keys():
279
+ shin_key = flokey(key)
280
+ if shin_key in names:
281
+ ldictlist_u[i+1][key] = float(tdict[shin_key])
282
+ ldictlist_te[i+1][key] = float(tdict[shin_key])
283
+ if key not in lorder:
284
+ lorder.append(key)
285
+ else:
286
+ ldictlist_u[i+1][key] = 0
287
+ ldictlist_te[i+1][key] = 0
288
+
289
+ if self.debug: print("Regioner lorder: ",lorder)
290
+ global regioner
291
+ regioner.__init__(self.lstop,self.lstop_hr)
292
+ u_llist = [d.copy() for d in ldictlist_u[1:]]
293
+ u_llist.append(ldictlist_u[0].copy())
294
+ regioner.te_llist = ldictlist_te
295
+ regioner.u_llist = u_llist
296
+ regioner.ndeleter(lnter, lnur, lorder)
297
+ if self.debug:
298
+ print("LoRA regioner : TE list",regioner.te_llist)
299
+ print("LoRA regioner : U list",regioner.u_llist)
300
+
301
+ def syntaxdealer(items,type,index): #type "unet=", "x=", "lwbe="
302
+ for item in items:
303
+ if type in item:
304
+ if "@" in item:return 1 #for loractl
305
+ return item.replace(type,"")
306
+ return items[index] if "@" not in items[index] else 1
307
+
308
+ def makefilters(c,h,w,masks,mode,usebase,bratios,indmask):
309
+ if indmask:
310
+ (regmasks, regbase) = masks
311
+
312
+ filters = []
313
+ x = torch.zeros(c, h, w).to(devices.device)
314
+ if usebase:
315
+ x0 = torch.zeros(c, h, w).to(devices.device)
316
+ i=0
317
+ if indmask:
318
+ ftrans = Resize((h, w), interpolation = InterpolationMode("nearest"))
319
+ for rmask, bratio in zip(regmasks,bratios[0]):
320
+ # Resize mask to current dims.
321
+ # Since it's a mask, we prefer a binary value, nearest is the only option.
322
+ rmask2 = ftrans(rmask.reshape([1, *rmask.shape])) # Requires dimensions N,C,{d}.
323
+ rmask2 = rmask2.reshape([1, h, w])
324
+ fx = x.clone()
325
+ if usebase:
326
+ fx[:,:,:] = fx + rmask2 * (1 - bratio)
327
+ x0[:,:,:] = x0 + rmask2 * bratio
328
+ else:
329
+ fx[:,:,:] = fx + rmask2 * 1
330
+ filters.append(fx)
331
+
332
+ if usebase: # Add base to x0.
333
+ rmask = regbase
334
+ rmask2 = ftrans(rmask.reshape([1, *rmask.shape])) # Requires dimensions N,C,{d}.
335
+ rmask2 = rmask2.reshape([1, h, w])
336
+ x0 = x0 + rmask2
337
+ else:
338
+ for drow in masks:
339
+ for dcell in drow.cols:
340
+ fx = x.clone()
341
+ if "Horizontal" in mode:
342
+ if usebase:
343
+ fx[:,int(h*drow.st):int(h*drow.ed),int(w*dcell.st):int(w*dcell.ed)] = 1 - dcell.base
344
+ x0[:,int(h*drow.st):int(h*drow.ed),int(w*dcell.st):int(w*dcell.ed)] = dcell.base
345
+ else:
346
+ fx[:,int(h*drow.st):int(h*drow.ed),int(w*dcell.st):int(w*dcell.ed)] = 1
347
+ elif "Vertical" in mode:
348
+ if usebase:
349
+ fx[:,int(h*dcell.st):int(h*dcell.ed),int(w*drow.st):int(w*drow.ed)] = 1 - dcell.base
350
+ x0[:,int(h*dcell.st):int(h*dcell.ed),int(w*drow.st):int(w*drow.ed)] = dcell.base
351
+ else:
352
+ fx[:,int(h*dcell.st):int(h*dcell.ed),int(w*drow.st):int(w*drow.ed)] = 1
353
+ filters.append(fx)
354
+ i +=1
355
+ if usebase : filters.insert(0,x0)
356
+ if labug : print(i,len(filters))
357
+
358
+ return filters
359
+
360
+ ######################################################
361
+ ##### Latent Method LoRA changer
362
+
363
+ TE_START_NAME = "transformer_text_model_encoder_layers_0_self_attn_q_proj"
364
+ UNET_START_NAME = "diffusion_model_time_embed_0"
365
+
366
+ TE_START_NAME_XL = "0_transformer_text_model_encoder_layers_0_self_attn_q_proj"
367
+
368
+ class LoRARegioner:
369
+
370
+ def __init__(self,stop=0,stop_hr=0):
371
+ self.te_count = 0
372
+ self.u_count = 0
373
+ self.te_llist = [{}]
374
+ self.u_llist = [{}]
375
+ self.mlist = {}
376
+ self.ctl = False
377
+ self.step = 0
378
+ self.stop = stop
379
+ self.stop_hr = stop_hr
380
+
381
+ try:
382
+ import lora_ctl_network as ctl
383
+ self.ctlweight = copy.deepcopy(ctl.lora_weights)
384
+ for set in self.ctlweight.values():
385
+ for weight in set.values():
386
+ if type(weight) == list:
387
+ self.ctl = True
388
+ except:
389
+ pass
390
+
391
+ def expand_del(self, val, lorder):
392
+ """Broadcast single / comma separated val to lora list.
393
+
394
+ """
395
+ lval = val.split(",")
396
+ if len(lval) > len(lorder):
397
+ lval = lval[:len(lorder)]
398
+ lval = [floatdef(v, 0) for v in lval]
399
+ if len(lval) < len(lorder): # Propagate difference.
400
+ lval.extend([lval[-1]] * (len(lorder) - len(lval)))
401
+ return lval
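A couple of hypothetical calls showing the broadcast behaviour of expand_del (the LoRA names are invented; regioner is the module-level instance defined further down):

    lorder = ["styleA", "styleB"]
    regioner.expand_del("0.5", lorder)          # -> [0.5, 0.5]  (single value propagated)
    regioner.expand_del("0.7,0.3,0.1", lorder)  # -> [0.7, 0.3]  (extra values dropped)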
402
+
403
+ def ndeleter(self, lnter, lnur, lorder = None):
404
+ """Multiply global weights by 0:1 factor.
405
+
406
+ Can be any value, negative too, but doesn't help much.
407
+ """
408
+ if lorder is None:
409
+ lkeys = self.te_llist[0].keys()
410
+ else:
411
+ lkeys = lorder
412
+ lnter = self.expand_del(lnter, lkeys)
413
+ for (key, val) in zip(lkeys, lnter):
414
+ self.te_llist[0][key] *= val
415
+ if lorder is None:
416
+ lkeys = self.u_llist[-1].keys()
417
+ else:
418
+ lkeys = lorder
419
+ lnur = self.expand_del(lnur, lkeys)
420
+ for (key, val) in zip(lkeys, lnur):
421
+ self.u_llist[-1][key] *= val
422
+
423
+ def search_key(self,lora,i,xlist):
424
+ lorakey = lora.loaded_loras[i].name
425
+ if lorakey not in xlist.keys():
426
+ shin_key = flokey(lorakey)
427
+ picked = False
428
+ for mlkey in xlist.keys():
429
+ if mlkey.startswith(shin_key):
430
+ lorakey = mlkey
431
+ picked = True
432
+ if not picked:
433
+ print(f"key is not found in:{xlist.keys()}")
434
+ return lorakey
435
+
436
+ def te_start(self):
437
+ self.mlist = self.te_llist[self.te_count % len(self.te_llist)]
438
+ if self.mlist == {}: return
439
+ self.te_count += 1
440
+ import lora
441
+ for i in range(len(lora.loaded_loras)):
442
+ lorakey = self.search_key(lora,i,self.mlist)
443
+ lora.loaded_loras[i].multiplier = self.mlist[lorakey]
444
+ lora.loaded_loras[i].te_multiplier = self.mlist[lorakey]
445
+
446
+ def u_start(self):
447
+ if labug : print("u_count",self.u_count ,"u_count '%' divide", self.u_count % len(self.u_llist))
448
+ self.mlist = self.u_llist[self.u_count % len(self.u_llist)]
449
+ if self.mlist == {}: return
450
+ self.u_count += 1
451
+
452
+ stopstep = self.stop_hr if in_hr else self.stop
453
+
454
+ import lora
455
+ for i in range(len(lora.loaded_loras)):
456
+ lorakey = self.search_key(lora,i,self.mlist)
457
+ lora.loaded_loras[i].multiplier = 0 if self.step + 2 > stopstep and stopstep else self.mlist[lorakey]
458
+ lora.loaded_loras[i].unet_multiplier = 0 if self.step + 2 > stopstep and stopstep else self.mlist[lorakey]
459
+ if labug :print(lorakey,lora.loaded_loras[i].multiplier,lora.loaded_loras[i].multiplier )
460
+ if self.ctl:
461
+ import lora_ctl_network as ctl
462
+ key = "hrunet" if in_hr else "unet"
463
+ if self.mlist[lorakey] == 0 or (self.step + 2 > stopstep and stopstep):
464
+ ctl.lora_weights[lorakey][key] = [[0],[0]]
465
+ if labug :print(ctl.lora_weights[lorakey])
466
+ else:
467
+ if key in self.ctlweight[lorakey].keys():
468
+ ctl.lora_weights[lorakey][key] = self.ctlweight[lorakey][key]
469
+ else:
470
+ ctl.lora_weights[lorakey][key] = self.ctlweight[lorakey]["unet"]
471
+ if labug :print(ctl.lora_weights[lorakey])
472
+
473
+ def reset(self):
474
+ self.te_count = 0
475
+ self.u_count = 0
476
+
477
+ regioner = LoRARegioner()
478
+
479
+ ############################################################
480
+ ##### for new lora apply method in web-ui
481
+
482
+ def h_Linear_forward(self, input):
483
+ changethelora(getattr(self, layer_name, None))
484
+ if islora:
485
+ import lora
486
+ return lora.lora_forward(self, input, torch.nn.Linear_forward_before_lora)
487
+ else:
488
+ import networks
489
+ if shared.opts.lora_functional:
490
+ return networks.network_forward(self, input, networks.originals.Linear_forward)
491
+ networks.network_apply_weights(self)
492
+ return networks.originals.Linear_forward(self, input)
493
+
494
+ def h15_Linear_forward(self, input):
495
+ changethelora(getattr(self, layer_name, None))
496
+ if islora:
497
+ import lora
498
+ return lora.lora_forward(self, input, torch.nn.Linear_forward_before_lora)
499
+ else:
500
+ import networks
501
+ if shared.opts.lora_functional:
502
+ return networks.network_forward(self, input, networks.network_Linear_forward)
503
+ networks.network_apply_weights(self)
504
+ return torch.nn.Linear_forward_before_network(self, input)
505
+
506
+ def changethelora(name):
507
+ if lactive:
508
+ global regioner
509
+ if name == TE_START_NAME or name == TE_START_NAME_XL:
510
+ regioner.te_start()
511
+ elif name == UNET_START_NAME:
512
+ regioner.u_start()
513
+
514
+ LORAANDSOON = {
515
+ "LoraHadaModule" : "w1a",
516
+ "LycoHadaModule" : "w1a",
517
+ "NetworkModuleHada": "w1a",
518
+ "FullModule" : "weight",
519
+ "NetworkModuleFull": "weight",
520
+ "IA3Module" : "w",
521
+ "NetworkModuleIa3" : "w",
522
+ "LoraKronModule" : "w1",
523
+ "LycoKronModule" : "w1",
524
+ "NetworkModuleLokr": "w1",
525
+ }
526
+
527
+ def changethedevice(module):
528
+ ltype = type(module).__name__
529
+ if ltype == "LoraUpDownModule" or ltype == "LycoUpDownModule" :
530
+ if hasattr(module,"up_model") :
531
+ module.up_model.weight = torch.nn.Parameter(module.up_model.weight.to(devices.device, dtype = torch.float))
532
+ module.down_model.weight = torch.nn.Parameter(module.down_model.weight.to(devices.device, dtype=torch.float))
533
+ else:
534
+ module.up.weight = torch.nn.Parameter(module.up.weight.to(devices.device, dtype = torch.float))
535
+ if hasattr(module.down, "weight"):
536
+ module.down.weight = torch.nn.Parameter(module.down.weight.to(devices.device, dtype=torch.float))
537
+
538
+ elif ltype == "LoraHadaModule" or ltype == "LycoHadaModule" or ltype == "NetworkModuleHada":
539
+ module.w1a = torch.nn.Parameter(module.w1a.to(devices.device, dtype=torch.float))
540
+ module.w1b = torch.nn.Parameter(module.w1b.to(devices.device, dtype=torch.float))
541
+ module.w2a = torch.nn.Parameter(module.w2a.to(devices.device, dtype=torch.float))
542
+ module.w2b = torch.nn.Parameter(module.w2b.to(devices.device, dtype=torch.float))
543
+
544
+ if module.t1 is not None:
545
+ module.t1 = torch.nn.Parameter(module.t1.to(devices.device, dtype=torch.float))
546
+
547
+ if module.t2 is not None:
548
+ module.t2 = torch.nn.Parameter(module.t2.to(devices.device, dtype=torch.float))
549
+
550
+ elif ltype == "FullModule" or ltype == "NetworkModuleFull":
551
+ module.weight = torch.nn.Parameter(module.weight.to(devices.device, dtype=torch.float))
552
+
553
+ if hasattr(module, 'bias') and module.bias != None:
554
+ module.bias = torch.nn.Parameter(module.bias.to(devices.device, dtype=torch.float))
555
+
556
+ def unloadlorafowards(p):
557
+ global orig_Linear_forward, lactive, labug
558
+ lactive = labug = False
559
+
560
+ try:
561
+ shared.opts.lora_functional = orig_lora_functional
562
+ except:
563
+ pass
564
+
565
+ emb_db = sd_hijack.model_hijack.embedding_db
566
+ import lora
567
+ for net in lora.loaded_loras:
568
+ if hasattr(net,"bundle_embeddings"):
569
+ for emb_name, embedding in net.bundle_embeddings.items():
570
+ if embedding.loaded:
571
+ emb_db.register_embedding_by_name(None, shared.sd_model, emb_name)
572
+
573
+ lora.loaded_loras.clear()
574
+ if orig_Linear_forward != None :
575
+ torch.nn.Linear.forward = orig_Linear_forward
576
+ orig_Linear_forward = None
extensions/sd-webui-regional-prompter/scripts/regions.py ADDED
@@ -0,0 +1,846 @@
1
+ import colorsys # Polygon regions.
2
+ from PIL import Image, ImageChops
3
+ from pprint import pprint
4
+ import cv2 # Polygon regions.
5
+ import gradio as gr
6
+ import numpy as np
7
+ import PIL
8
+ import torch
9
+ from modules import devices
10
+
11
+ def lange(l):
12
+ return range(len(l))
13
+
14
+ # SBM Keywords and delimiters for region breaks, following matlab rules.
15
+ # BREAK keyword is now passed through,
16
+ KEYROW = "ADDROW"
17
+ KEYCOL = "ADDCOL"
18
+ KEYBASE = "ADDBASE"
19
+ KEYCOMM = "ADDCOMM"
20
+ KEYBRK = "BREAK"
21
+ KEYPROMPT = "ADDP"
22
+ DELIMROW = ";"
23
+ DELIMCOL = ","
24
+ MCOLOUR = 256
25
+ NLN = "\n"
26
+ DKEYINOUT = { # Out/in, horizontal/vertical or row/col first.
27
+ ("out",False): KEYROW,
28
+ ("in",False): KEYCOL,
29
+ ("out",True): KEYCOL,
30
+ ("in",True): KEYROW,
31
+ }
32
+
33
+ ALLKEYS = [KEYCOMM,KEYROW, KEYCOL, KEYBASE, KEYPROMPT]
34
+ ALLALLKEYS = [KEYCOMM,KEYROW, KEYCOL, KEYBASE, KEYPROMPT, KEYBRK, "AND"]
35
+
36
+ fidentity = lambda x: x
37
+ ffloatd = lambda c: (lambda x: floatdef(x,c))
38
+ fcolourise = lambda: np.random.randint(0,MCOLOUR,size = 3)
39
+ fspace = lambda x: " {} ".format(x)
40
+
41
+ """
42
+ SBM mod: Two dimensional regions (of variable size, NOT a matrix).
43
+ - Adds keywords ADDROW, ADDCOL and respective delimiters for aratios.
44
+ - A/bratios become list dicts: Inner dict of cols (varying length list) + start/end + number of breaks,
45
+ outer layer is rows list.
46
+ First value in each row is the row's ratio, the rest are col ratios.
47
+ This fits prompts going left -> right, top -> down.
48
+ - Unrelated BREAKS are counted per cell, and later extracted as multiple context indices.
49
+ - Each layer is cut up by both row + col ratios.
50
+ - Style improvements: Created classes for rows + cells and functions for some of the splitting.
51
+ - Base prompt overhaul: Added keyword ADDBASE, when present will trigger "use_base" automatically;
52
+ base is excluded from the main prompt for dim calcs; returned to start before hook (+ base break count);
53
+ during hook, context index skips base break count + 1. Rest is applied normally.
54
+ - To specify cols first, use "vertical" mode. eg 1st col:2 rows, 2nd col:1 row.
55
+ In effect, this merely reverses the order of iteration for every row/col loop and whatnot.
56
+ """
57
+
58
+ class RegionCell():
59
+ """Cell used to split a layer to single prompts."""
60
+ def __init__(self, st, ed, base, breaks):
61
+ """Range with start and end values, base weight and breaks count for context splitting."""
62
+ self.st = st # Range for the cell (cols only).
63
+ self.ed = ed
64
+ self.base = base # How much of the base prompt is applied (difference).
65
+ self.breaks = breaks # How many unrelated breaks the prompt contains.
66
+
67
+ def __repr__(self):
68
+ """Debug print."""
69
+ return "({:.2f}:{:.2f})".format(self.st,self.ed)
70
+
71
+ class RegionRow():
72
+ """Row containing cell refs and its own ratio range."""
73
+ def __init__(self, st, ed, cols):
74
+ """Range with start and end values, holding the row's cells."""
75
+ self.st = st # Range for the row.
76
+ self.ed = ed
77
+ self.cols = cols # List of cells.
78
+
79
+ def __repr__(self):
80
+ """Debug print."""
81
+ return "Outer ({:.2f}:{:.2f}), contains {}".format(self.st, self.ed, self.cols) + NLN
82
+
83
+ def floatdef(x, vdef):
84
+ """Attempt conversion to float, use default value on error.
85
+
86
+ Mainly for empty ratios, double commas.
87
+ """
88
+ try:
89
+ return float(x)
90
+ except ValueError:
91
+ print("'{}' is not a number, converted to {}".format(x,vdef))
92
+ return vdef
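For instance (values invented for illustration):

    floatdef("0.4", 1)   # -> 0.4
    floatdef("", 1)      # -> prints "'' is not a number, converted to 1" and returns 1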
93
+
94
+ def split_l2(s, kr, kc, indsingles = False, fmap = fidentity, basestruct = None, indflip = False):
95
+ """Split string to 2d list (ie L2) per row and col keys.
96
+
97
+ The output is a list of lists, each of varying length.
98
+ If a L2 basestruct is provided,
99
+ will adhere to its structure using the following broadcast rules:
100
+ - Basically matches row by row of base and new.
101
+ - If a new row is shorter than base, the last value is repeated to fill the row.
102
+ - If both are the same length, copied as is.
103
+ - If new row is longer, then additional values will overflow to the next row.
104
+ This might be unintended sometimes, but allows making all items col separated,
105
+ then the new structure is simply adapted to the base structure.
106
+ - If there are too many values in new, they will be ignored.
107
+ - If there are too few values in new, the last one is repeated to fill base.
108
+ For mixed row + col ratios, singles flag is provided -
109
+ will extract the first value of each row to a separate list,
110
+ and output structure is (row L1,cell L2).
111
+ There MUST be at least one value for row, one value for col when singles is on;
112
+ to prevent errors, the row value is copied to col if it's alone (shouldn't affect results).
113
+ Singles still respects base broadcast rules, and repeats its own last value.
114
+ The fmap function is applied to each cell before insertion to L2;
115
+ if it fails, a default value is used.
116
+ If flipped, the keyword for columns is applied before rows.
117
+ TODO: Needs to be a case insensitive split. Use re.split.
118
+ """
119
+ if indflip:
120
+ tmp = kr
121
+ kr = kc
122
+ kc = tmp
123
+ lret = []
124
+ if basestruct is None:
125
+ lrows = s.split(kr)
126
+ lrows = [row.split(kc) for row in lrows]
127
+ for r in lrows:
128
+ cell = [fmap(x) for x in r]
129
+ lret.append(cell)
130
+ if indsingles:
131
+ lsingles = [row[0] for row in lret]
132
+ lcells = [row[1:] if len(row) > 1 else row for row in lret]
133
+ lret = (lsingles,lcells)
134
+ else:
135
+ lrows = s.split(kr)
136
+ r = 0
137
+ lcells = []
138
+ lsingles = []
139
+ vlast = 1
140
+ for row in lrows:
141
+ row2 = row.split(kc)
142
+ row2 = [fmap(x) for x in row2]
143
+ vlast = row2[-1]
144
+ indstop = False
145
+ while not indstop:
146
+ if (r >= len(basestruct) # Too many cell values, ignore.
147
+ or (len(row2) == 0 and len(basestruct) > 0)): # Cell exhausted.
148
+ indstop = True
149
+ if not indstop:
150
+ if indsingles: # Singles split.
151
+ lsingles.append(row2[0]) # Row ratio.
152
+ if len(row2) > 1:
153
+ row2 = row2[1:]
154
+ if len(basestruct[r]) >= len(row2): # Repeat last value.
155
+ indstop = True
156
+ broadrow = row2 + [row2[-1]] * (len(basestruct[r]) - len(row2))
157
+ r = r + 1
158
+ lcells.append(broadrow)
159
+ else: # Overfilled this row, cut and move to next.
160
+ broadrow = row2[:len(basestruct[r])]
161
+ row2 = row2[len(basestruct[r]):]
162
+ r = r + 1
163
+ lcells.append(broadrow)
164
+ # If not enough new rows, repeat the last one for entire base, preserving structure.
165
+ cur = len(lcells)
166
+ while cur < len(basestruct):
167
+ lcells.append([vlast] * len(basestruct[cur]))
168
+ cur = cur + 1
169
+ lret = lcells
170
+ if indsingles:
171
+ lsingles = lsingles + [lsingles[-1]] * (len(basestruct) - len(lsingles))
172
+ lret = (lsingles,lcells)
173
+ return lret
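Two small examples of the split and broadcast rules described in the docstring (the ratio strings are invented for illustration):

    # Mixed row/col ratios, no base structure:
    split_l2("2,1,1;3,2", DELIMROW, DELIMCOL, indsingles = True, fmap = ffloatd(1))
    # -> ([2.0, 3.0], [[1.0, 1.0], [2.0]])   row ratios 2:3; cols 1:1 in row 1, single col in row 2
    # Broadcasting a single value over that structure (similar to how bratios are handled further down):
    split_l2("0.2", DELIMROW, DELIMCOL, fmap = ffloatd(0), basestruct = [[1.0, 1.0], [2.0]])
    # -> [[0.2, 0.2], [0.2]]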
174
+
175
+ def is_l2(l):
176
+ return isinstance(l[0],list)
177
+
178
+ def l2_count(l):
179
+ cnt = 0
180
+ for row in l:
181
+ cnt = cnt + len(row)
182
+ return cnt
183
+
184
+ def list_percentify(l):
185
+ """Convert each row in L2 to relative part of 100%.
186
+
187
+ Also works on L1, applying once globally.
188
+ """
189
+ lret = []
190
+ if is_l2(l):
191
+ for row in l:
192
+ # row2 = [float(v) for v in row]
193
+ row2 = [v / sum(row) for v in row]
194
+ lret.append(row2)
195
+ else:
196
+ row = l[:]
197
+ # row2 = [float(v) for v in row]
198
+ row2 = [v / sum(row) for v in row]
199
+ lret = row2
200
+ return lret
201
+
202
+ def list_cumsum(l):
203
+ """Apply cumsum to L2 per row, ie newl[n] = l[0:n].sum .
204
+
205
+ Works with L1.
206
+ Actually edits l inplace, idc.
207
+ """
208
+ lret = []
209
+ if is_l2(l):
210
+ for row in l:
211
+ for (i,v) in enumerate(row):
212
+ if i > 0:
213
+ row[i] = v + row[i - 1]
214
+ lret.append(row)
215
+ else:
216
+ row = l[:]
217
+ for (i,v) in enumerate(row):
218
+ if i > 0:
219
+ row[i] = v + row[i - 1]
220
+ lret = row
221
+ return lret
222
+
223
+ def list_rangify(l):
224
+ """Merge every 2 elems in L2 to a range, starting from 0.
225
+
226
+ """
227
+ lret = []
228
+ if is_l2(l):
229
+ for row in l:
230
+ row2 = [0] + row
231
+ row3 = []
232
+ for i in range(len(row2) - 1):
233
+ row3.append([row2[i],row2[i + 1]])
234
+ lret.append(row3)
235
+ else:
236
+ row2 = [0] + l
237
+ row3 = []
238
+ for i in range(len(row2) - 1):
239
+ row3.append([row2[i],row2[i + 1]])
240
+ lret = row3
241
+ return lret
242
+
243
+ def round_dim(x,y):
244
+ """Return division of two numbers, rounding 0.5 up.
245
+
246
+ Seems that dimensions which are exactly 0.5 are rounded up - see 680x488, second iter.
247
+ A simple mod check should get the job done.
248
+ If not, can always brute force the divisor with +-1 on each of h/w.
249
+ """
250
+ return x // y + (x % y >= y // 2)
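The 680x488 case mentioned in the docstring, checked by hand:

    # 680/8 = 85 and 488/8 = 61; the second halving rounds the .5 cases up:
    round_dim(85, 2)   # -> 43
    round_dim(61, 2)   # -> 31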
251
+
252
+
253
+ def isfloat(t):
254
+ try:
255
+ float(t)
256
+ return True
257
+ except Exception:
258
+ return False
259
+
260
+ def ratiosdealer(aratios2,aratios2r):
261
+ aratios2 = list_percentify(aratios2)
262
+ aratios2 = list_cumsum(aratios2)
263
+ aratios2 = list_rangify(aratios2)
264
+ aratios2r = list_percentify(aratios2r)
265
+ aratios2r = list_cumsum(aratios2r)
266
+ aratios2r = list_rangify(aratios2r)
267
+ return aratios2,aratios2r
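A worked pass through the percentify -> cumsum -> rangify chain, using input that would come from the hypothetical ratio string "1,2,1;1,1,1" (two equal rows, the first split 2:1 into columns):

    ratiosdealer([[2.0, 1.0], [1.0, 1.0]], [1.0, 1.0])
    # columns -> [[[0, 0.67], [0.67, 1.0]],      (row 1 split 2:1, approx.)
    #             [[0, 0.5],  [0.5, 1.0]]]       (row 2 split 1:1)
    # rows    -> [[0, 0.5], [0.5, 1.0]]          (two rows of equal height)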
268
+
269
+ def changecs(ratios):
270
+ ratios = ratios.replace(",","_")
271
+ ratios = ratios.replace(";",",")
272
+ ratios = ratios.replace("_",";")
273
+ return ratios
274
+
275
+ def makeimgtmp(aratios,mode,usecom,usebase, flipper,ho,wo, image = None, alpha = 0.5,inprocess = False):
276
+ if image is not None:
277
+ wo, ho = image.size
278
+ if mode == "Columns":mode = "Horizontal"
279
+ if mode == "Rows":mode = "Vertical"
280
+
281
+ if flipper: aratios = changecs(aratios)
282
+
283
+ indflip = ("Ver" in mode)
284
+ if DELIMROW not in aratios: # Commas only - interpret as 1d.
285
+ aratios2 = split_l2(aratios, DELIMROW, DELIMCOL, fmap = ffloatd(1), indflip = False)
286
+ aratios2r = [1]
287
+ else:
288
+ (aratios2r,aratios2) = split_l2(aratios, DELIMROW, DELIMCOL,
289
+ indsingles = True, fmap = ffloatd(1), indflip = indflip)
290
+
291
+ (aratios2,aratios2r) = ratiosdealer(aratios2,aratios2r)
292
+
293
+ size = ho * wo
294
+
295
+ if 262144 >= size: div = 4
296
+ elif 1048576 >= size: div = 8
297
+ else :div = 16
298
+
299
+ h, w = ho // div, wo // div
300
+
301
+ fx = np.zeros((h,w, 3), np.uint8)
302
+ # Base image is coloured according to region divisions, roughly.
303
+ for (i,ocell) in enumerate(aratios2r):
304
+ for icell in aratios2[i]:
305
+ # SBM Creep: Colour by delta so that distinction is more reliable.
306
+ if not indflip:
307
+ fx[int(h*ocell[0]):int(h*ocell[1]),int(w*icell[0]):int(w*icell[1]),:] = fcolourise()
308
+ else:
309
+ fx[int(h*icell[0]):int(h*icell[1]),int(w*ocell[0]):int(w*ocell[1]),:] = fcolourise()
310
+ regions = PIL.Image.fromarray(fx)
311
+ draw = PIL.ImageDraw.Draw(regions)
312
+ c = 0
313
+ def coldealer(col):
314
+ if sum(col) > 380:return "black"
315
+ else:return "white"
316
+ # Add region counters at the top left corner, coloured according to hue.
317
+ for (i,ocell) in enumerate(aratios2r):
318
+ for icell in aratios2[i]:
319
+ if not indflip:
320
+ draw.text((int(w*icell[0]),int(h*ocell[0])),f"{c}",coldealer(fx[int(h*ocell[0]),int(w*icell[0])]))
321
+ else:
322
+ draw.text((int(w*ocell[0]),int(h*icell[0])),f"{c}",coldealer(fx[int(h*icell[0]),int(w*ocell[0])]))
323
+ c += 1
324
+
325
+ regions = regions.resize((wo, ho))
326
+
327
+ if image is not None:
328
+ regions = ImageChops.blend(regions, image, alpha)
329
+
330
+ # Create ROW+COL template from regions.
331
+ txtkey = fspace(DKEYINOUT[("in", indflip)]) + NLN
332
+ lkeys = [txtkey.join([""] * len(cell)) for cell in aratios2]
333
+ txtkey = fspace(DKEYINOUT[("out", indflip)]) + NLN
334
+ template = txtkey.join(lkeys)
335
+ if usebase:
336
+ template = fspace(KEYBASE) + NLN + template
337
+ if usecom:
338
+ template = fspace(KEYCOMM) + NLN + template
339
+
340
+ if inprocess:
341
+ changer = template.split(NLN)
342
+ changer = [l.strip() for l in changer]
343
+ return changer
344
+
345
+ return regions, gr.update(value = template)
346
+
347
+ ################################################################
348
+ ##### matrix
349
+ fcountbrk = lambda x: x.count(KEYBRK)
350
+ fint = lambda x: int(x)
351
+
352
+ def matrixdealer(self, p, aratios, bratios, mode):
353
+ print(aratios, bratios, mode)
354
+ if "Ran" in mode:
355
+ randdealer(self,p,aratios,bratios)
356
+ return
357
+ # The addrow/addcol syntax is better, cannot detect regular breaks without it.
358
+ # In any case, the preferred method will anchor the L2 structure.
359
+ # No prompt formatting is performed. Used only for region calculations
360
+ prompt = p.prompt
361
+ if self.debug: print("in matrixdealer",prompt)
362
+ if KEYCOMM in prompt: prompt = prompt.split(KEYCOMM,1)[1]
363
+ if KEYBASE in prompt: prompt = prompt.split(KEYBASE,1)[1]
364
+
365
+ indflip = ("Ver" in mode)
366
+ if (KEYCOL in prompt.upper() or KEYROW in prompt.upper()):
367
+ breaks = prompt.count(KEYROW) + prompt.count(KEYCOL) + int(self.usebase)
368
+ # Prompt anchors, count breaks between special keywords.
369
+ lbreaks = split_l2(prompt, KEYROW, KEYCOL, fmap = fcountbrk, indflip = indflip)
370
+ if (DELIMROW not in aratios
371
+ and (KEYROW in prompt.upper()) != (KEYCOL in prompt.upper())):
372
+ # By popular demand, 1d integrated into 2d.
373
+ # This works by either adding a single row value (inner),
374
+ # or setting flip to the reverse (outer).
375
+ # Only applies when using just ADDROW / ADDCOL keys, and commas in ratio.
376
+ indflip2 = False
377
+ if (KEYROW in prompt.upper()) == indflip:
378
+ aratios = "1" + DELIMCOL + aratios
379
+ else:
380
+ indflip2 = True
381
+ (aratios2r,aratios2) = split_l2(aratios, DELIMROW, DELIMCOL, indsingles = True,
382
+ fmap = ffloatd(1), basestruct = lbreaks,
383
+ indflip = indflip2)
384
+ else: # Standard ratios, split to rows and cols.
385
+ (aratios2r,aratios2) = split_l2(aratios, DELIMROW, DELIMCOL, indsingles = True,
386
+ fmap = ffloatd(1), basestruct = lbreaks, indflip = indflip)
387
+ # More like "bweights", applied per cell only.
388
+ bratios2 = split_l2(bratios, DELIMROW, DELIMCOL, fmap = ffloatd(0), basestruct = lbreaks, indflip = indflip)
389
+ else:
390
+ breaks = prompt.count(KEYBRK) + int(self.usebase)
391
+ (aratios2r,aratios2) = split_l2(aratios, DELIMROW, DELIMCOL, indsingles = True, fmap = ffloatd(1), indflip = indflip)
392
+ # Cannot determine which breaks matter.
393
+ lbreaks = split_l2("0", KEYROW, KEYCOL, fmap = fint, basestruct = aratios2, indflip = indflip)
394
+ bratios2 = split_l2(bratios, DELIMROW, DELIMCOL, fmap = ffloatd(0), basestruct = lbreaks, indflip = indflip)
395
+ # If insufficient breaks, try to broadcast prompt - a bit dumb.
396
+ breaks = fcountbrk(prompt)
397
+ lastprompt = prompt.rsplit(KEYBRK)[-1]
398
+ if l2_count(aratios2) > breaks:
399
+ prompt = prompt + (fspace(KEYBRK) + lastprompt) * (l2_count(aratios2) - breaks)
400
+ (aratios,aratiosr) = ratiosdealer(aratios2,aratios2r)
401
+ bratios = bratios2
402
+
403
+ # Merge various L2s to cells and rows.
404
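+ # Each RegionCell bundles the cell's two ratio bounds, its base weight and break count; each RegionRow bundles the row bounds plus its cells.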
+ drows = []
405
+ for r,_ in enumerate(lbreaks):
406
+ dcells = []
407
+ for c,_ in enumerate(lbreaks[r]):
408
+ d = RegionCell(aratios[r][c][0], aratios[r][c][1], bratios[r][c], lbreaks[r][c])
409
+ dcells.append(d)
410
+ drow = RegionRow(aratiosr[r][0], aratiosr[r][1], dcells)
411
+ drows.append(drow)
412
+
413
+ self.aratios = drows
414
+ self.bratios = bratios
415
+
416
+ ################################################################
417
+ ##### inpaint
418
+
419
+ """
420
+ SBM mod: Mask polygon region.
421
+ - Basically a version of inpainting, where polygon outlines are drawn and added to a coloured image.
422
+ - Colours from the image are picked apart for masks corresponding to regions.
423
+ - In new mask mode, masks are stored instead of aratios, and applied to each region forward.
424
+ - Mask can be uploaded (alpha, no save), and standard colours are detected from it.
425
+ - Uncoloured regions default to the first colour detected;
426
+ however, if base mode is used, the base will instead be applied to the remainder at 100% strength.
427
+ I think this makes it far more useful. At 0 strength, it will apply ONLY to said regions.
428
+ - V2: Corrects and detects colours from upload.
429
+ - Mask mode presets save mask to a file, which is loaded with the preset.
430
+ - Added -1 colour to clear sections, an eraser.
431
+ """
432
+
433
+ POLYFACTOR = 1.5 # Small lines are detected as shapes.
434
+ COLREG = None # Computed colour regions cache. Array. Extended whenever a new colour is requested.
435
+ REGUSE = dict() # Used regions. Reset on new canvas / upload (preset).
436
+ IDIM = 512
437
+ CBLACK = 255
438
+ MAXCOLREG = 360 - 1 # Hsv goes by degrees.
439
+ VARIANT = 0 # Ensures that the sketch canvas is actually refreshed.
440
+ # Permitted hsv error range for mask upload (due to compression).
441
+ # Mind, wrong hue might throw off the mask entirely and is not corrected.
442
+ # HSV_RANGE = (125,130)
443
+ # HSV_VAL = 128
444
+ HSV_RANGE = (0.49,0.51)
445
+ HSV_VAL = 0.5
446
+ CCHANNELS = 3
447
+ COLWHITE = (255,255,255)
448
+ # Optional mode to replace nonstandard colours from the mask with white during upload.
449
+ # Pros: Clear and obvious display of regions.
450
+ # Cons: Cannot use the image as a background for tracing (eg openpose or depthmap).
451
+ # Compromise: Do not replace, but show the used regions.
452
+ INDCOLREPL = False
453
+
454
+ def get_colours(img):
455
+ """List colours used in image (as nxc array).
456
+
457
+ """
458
+ return np.unique(img.reshape(-1, img.shape[-1]), axis=0)
459
+
460
+ def generate_unique_colours(n):
461
+ """Generate n visually distinct colors as a list of RGB tuples.
462
+
463
+ Uses the hue of hsv, with balanced saturation & value.
464
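+ Example: generate_unique_colours(3) -> [(127, 63, 63), (63, 127, 63), (63, 63, 127)].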
+ """
465
+ hsv_colors = [(x*1.0/n, 0.5, 0.5) for x in range(n)]
466
+ rgb_colors = [tuple(int(i * CBLACK) for i in colorsys.hsv_to_rgb(*hsv)) for hsv in hsv_colors]
467
+ return rgb_colors
468
+
469
+ def deterministic_colours(n, lcol = None):
470
+ """Generate n visually distinct & consistent colours as a list of RGB tuples.
471
+
472
+ Uses the hue of hsv, with balanced saturation & value.
473
+ Goes around the cyclical 0-256 and picks each /2 value for every round.
474
+ Continuation rules: If pcyc != ccyc in the next round, then we don't care.
475
+ If pcyc == ccyc, we want to get the cval + delta of the last elem.
476
+ If lcol > n, will return it as is.
477
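+ Example hue sequence (as fractions of the cycle): 0, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, ...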
+ """
478
+ if n <= 0:
479
+ return None
480
+ pcyc = -1
481
+ cval = 0
482
+ if lcol is None:
483
+ st = 0
484
+ elif n <= len(lcol):
485
+ # return lcol[:n] # Truncating the list is accurate, but pointless.
486
+ return lcol
487
+ else:
488
+ st = len(lcol)
489
+ if st > 0:
490
+ pcyc = np.ceil(np.log2(st))
491
+ # This is erroneous on st=2^n, but we don't care.
492
+ dlt = 1 / (2 ** pcyc)
493
+ cval = dlt + 2 * dlt * (st % (2 ** (pcyc - 1)) - 1)
494
+
495
+ lhsv = []
496
+ for i in range(st,n):
497
+ ccyc = np.ceil(np.log2(i + 1))
498
+ if ccyc == 0: # First col = 0.
499
+ cval = 0
500
+ pcyc = ccyc
501
+ elif pcyc != ccyc: # New cycle, start from the half point between 0 and first point.
502
+ dlt = 1 / (2 ** ccyc)
503
+ cval = dlt
504
+ pcyc = ccyc
505
+ else:
506
+ cval = cval + 2 * dlt # Jumps over existing vals.
507
+ lhsv.append(cval)
508
+ lhsv = [(v, 0.5, 0.5) for v in lhsv] # Hsv conversion only works 0:1.
509
+ lrgb = [colorsys.hsv_to_rgb(*hsv) for hsv in lhsv]
510
+ lrgb = (np.array(lrgb) * (CBLACK + 1)).astype(np.uint8) # Convert to colour uints.
511
+ lrgb = lrgb.reshape(-1, CCHANNELS)
512
+ if lcol is not None:
513
+ lrgb = np.concatenate([lcol, lrgb])
514
+ return lrgb
515
+
516
+ def index_rows(mat):
517
+ """In 2D matrix, add column containing row number.
518
+
519
+ Pandas stuff, can't find a clever way to find first row in np.
520
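+ Example: [[10, 20], [30, 40]] -> [[0, 10, 20], [1, 30, 40]].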
+ """
521
+ return np.concatenate([np.arange(len(mat)).reshape(-1,1),mat],axis = 1)
522
+
523
+ def detect_image_colours(img, inddict = False):
524
+ """Detect relevant hsv colours in image and clean up the standard mask.
525
+
526
+ Basically, converts colours to hsv, checks which ones are within range,
527
+ converts them to the exact sv value we need, deletes irrelevant colours,
528
+ and creates a list of used colours via a form of np first row lookup.
529
+ Problem: Rgb->hsb and back is not lossless in np / cv. Getting 128->127.
530
+ Looks like the only option is to use colorsys which is contiguous.
531
+ To maximise efficiency, I've applied it to the unique colours instead of entire image,
532
+ and then each colour is mapped via np masking (propagation),
533
+ by adding a third fake dim to each of the colours and the flattened image.
534
+ It might be possible to use cv2 one way for the filter, but I think that's risky,
535
+ and likely doesn't save much processing (heaviest op is get_colours for large image).
536
+ Creep: Apply erosion so thin regions are ignored. This would need to be applied on processing as well.
537
+ """
538
+ global REGUSE
539
+ global COLREG
540
+ global VARIANT
541
+ if img is None: # Do nothing if no image passed.
542
+ return None, None
543
+ VARIANT = 0 # Upload doesn't need variance, it refreshes automatically.
544
+ (h,w,c) = img.shape
545
+ # Get unique colours, create rgb-hsv mapping and filtering.
546
+ # hsv_img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
547
+ # skimg = cv2.cvtColor(hsv_img, cv2.COLOR_HSV2RGB)
548
+ lrgb = get_colours(img)
549
+ lhsv = np.apply_along_axis(lambda x: colorsys.rgb_to_hsv(*x), axis=-1, arr = lrgb / CBLACK)
550
+ msk = ((lhsv[:,1] >= HSV_RANGE[0]) & (lhsv[:,1] <= HSV_RANGE[1]) &
551
+ (lhsv[:,2] >= HSV_RANGE[0]) & (lhsv[:,2] <= HSV_RANGE[1]))
552
+ lfltrgb = lrgb[msk]
553
+ lflthsv = lhsv[msk]
554
+ lflthsv[:,1:] = HSV_VAL
555
+ if len(lfltrgb) > 0:
556
+ lfltfix = np.apply_along_axis(lambda x: colorsys.hsv_to_rgb(*x), axis=-1, arr=lflthsv)
557
+ lfltfix = (lfltfix * (CBLACK + 1)).astype(np.uint8)
558
+ else: # No relevant colours.
559
+ lfltfix = lfltrgb
560
+ # Mask update each colour in the image.
561
+ # I tried to use isin, but it seems to detect any permutation.
562
+ # It's better to roll colour channel to the front, add extra fake dims,
563
+ # then use direct comparison, relying on np broadcasting.
564
+ # Shape: colour x search x img
565
+ cnt = len(lfltrgb)
566
+ img2 = img.reshape(-1,c,1)
567
+ img2 = np.moveaxis(img2,0,-1)
568
+ lfltrgb2 = np.moveaxis(lfltrgb,-1,0)
569
+ lfltrgb2 = lfltrgb2.reshape(c,-1,1)
570
+ msk2 = (img2 == lfltrgb2).all(axis = 0).reshape(cnt,h,w)
571
+ for i,_ in enumerate(lfltrgb):
572
+ img[msk2[i]] = lfltfix[i]
573
+ # Empty all nonfiltered regions.
574
+ msk3 = ~(msk2.any(axis = 0))
575
+ if INDCOLREPL: # Don't remove nonstandard.
576
+ img[msk3] = COLWHITE
577
+ # Gen all colours, match with the fixed filtered list.
578
+ # I can think of no mathematical function to inverse the colour gen function.
579
+ # Also, imperfect hash, so ~60 colours go over the edge. Should have 100% matches at x2.
580
+ COLREG = deterministic_colours(2 * MAXCOLREG, COLREG)
581
+ cow = index_rows(COLREG)
582
+ regrows = [cow[(COLREG == f).all(axis = 1)] for f in lfltfix]
583
+ # MAX_KEY_VALUE provides a threshold value. Only those colors are added to REGUSE, for which the key values
584
+ # (i.e., the indices of the colors in the 'regrows' array) are less than this threshold.
585
+ # Colors with indices greater than MAX_KEY_VALUE are considered "similar colors" and are not treated as separate masks.
586
+ unique_keys = set(reg[0,0] for reg in regrows if len(reg) > 0)
587
+ # The purpose of this is to reduce the number of colors being processed, particularly for colors that are
588
+ # close to each other, which may be slightly different due to noise in the image or minor differences in color encoding.
589
+ # By setting an appropriate MAX_KEY_VALUE, these minor color differences can be effectively filtered out,
590
+ # thereby reducing the number of colors being processed and making color processing more accurate and efficient.
591
+ MAX_KEY_VALUE = len(unique_keys) + 20
592
+ REGUSE = {reg[0,0]: reg[0,1:].tolist() for reg in regrows if len(reg) > 0 and reg[0,0] <= MAX_KEY_VALUE}
593
+ # REGUSE.discard(COLWHITE)
594
+
595
+ # Must set to dict due to gradio preprocess assertion, in preset load.
596
+ # CONT: This doesn't work. Postprocess expects image. Maybe use dict for preset, not upload.
597
+ if inddict:
598
+ img = {"image":img, "mask":None}
599
+
600
+ return img, None # Clears the upload area. A bit cleaner.
601
+
602
+ def save_mask(img, flpath):
603
+ """Save mask to file.
604
+
605
+ These will be loaded as part of a preset.
606
+ Cv's colour scheme is an annoyance, but avoiding yet another import.
607
+ """
608
+ # Cv's colour scheme is annoying.
609
+ try:
610
+ img = img["image"]
611
+ except Exception:
612
+ pass
613
+ if VARIANT != 0: # Always save without variance.
614
+ img = img[:-VARIANT,:-VARIANT,:]
615
+ img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
616
+ cv2.imwrite(flpath, img)
617
+
618
+ def load_mask(flpath):
619
+ """Load mask from file.
620
+
621
+ Does not edit mask automatically (detect colours).
622
+ """
623
+ try:
624
+ img = cv2.imread(flpath)
625
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
626
+ except Exception: # Could not load mask.
627
+ img = None
628
+ return img
629
+
630
+ def detect_polygons(img,num):
631
+ """Convert stroke + region to standard coloured mask.
632
+
633
+ Negative colours will clear the mask instead, and not ++.
634
+ """
635
+ global COLREG
636
+ global VARIANT
637
+ global REGUSE
638
+
639
+ # I dunno why, but mask has a 4th colour channel, which contains nothing. Alpha?
640
+ if VARIANT != 0:
641
+ out = img["image"][:-VARIANT,:-VARIANT,:CCHANNELS]
642
+ img = img["mask"][:-VARIANT,:-VARIANT,:CCHANNELS]
643
+ else:
644
+ out = img["image"][:,:,:CCHANNELS]
645
+ img = img["mask"][:,:,:CCHANNELS]
646
+
647
+ # Convert the binary image to grayscale
648
+ if img is None:
649
+ img = np.zeros([IDIM,IDIM,CCHANNELS],dtype = np.uint8) + CBLACK # Stupid cv.
650
+ if out is None:
651
+ out = np.zeros_like(img) + CBLACK # Stupid cv.
652
+ bimg = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
653
+
654
+ # Find contours in the image
655
+ # Must reverse colours, otherwise draws an outer box (0->255). Dunno why gradio uses 255 for white anyway.
656
+ contours, hierarchy = cv2.findContours(bimg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
657
+
658
+ #img2 = np.zeros_like(img) + 255 # Fresh image.
659
+ img2 = out # Update current image.
660
+
661
+ if num < 0:
662
+ color = COLWHITE
663
+ else:
664
+ COLREG = deterministic_colours(int(num) + 1, COLREG)
665
+ color = COLREG[int(num),:]
666
+ REGUSE[num] = color.tolist()
667
+ # Loop through each contour and detect polygons
668
+ for cnt in contours:
669
+ # Approximate the contour to a polygon
670
+ approx = cv2.approxPolyDP(cnt, 0.0001 * cv2.arcLength(cnt, True), True)
671
+
672
+ # If the polygon has 3 or more sides and is fully enclosed, fill it with a random color
673
+ # if len(approx) >= 3: # BAD test.
674
+ if cv2.contourArea(cnt) > cv2.arcLength(cnt, True) * POLYFACTOR: # Better, still messes up on large brush.
675
+ #SBM BUGGY, prevents contours from . cv2.pointPolygonTest(approx, (approx[0][0][0], approx[0][0][1]), False) >= 0:
676
+
677
+ # Draw the polygon on the image with a new random color
678
+ color = [int(v) for v in color] # Opencv is dumb / C based and can't handle an int64 array.
679
+ #cv2.drawContours(img2, [approx], 0, color = color) # Only outer sketch.
680
+ cv2.fillPoly(img2,[approx],color = color)
681
+
682
+ # Convert the grayscale image back to RGB
683
+ #img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2RGB) # Converting to grayscale is dumb.
684
+
685
+ skimg = create_canvas(img2.shape[0], img2.shape[1], indwipe = False)
686
+ if VARIANT != 0:
687
+ skimg[:-VARIANT,:-VARIANT,:] = img2
688
+ else:
689
+ skimg[:,:,:] = img2
690
+ print("Region sketch size", skimg.shape)
691
+ return skimg, num + 1 if (num >= 0 and num + 1 <= CBLACK) else num
692
+
693
+ def detect_mask(img, num, mult = CBLACK):
694
+ """Extract specific colour and return mask.
695
+
696
+ Multiplier for correct display.
697
+ Also tags colour in case someone uses the upload interface.
698
+ """
699
+ global REGUSE
700
+ try:
701
+ img = img["image"]
702
+ except Exception:
703
+ pass
704
+ if img is None:
705
+ return None
706
+ indnot = False
707
+ if num < 0: # Detect unmasked region.
708
+ if INDCOLREPL: # In replacement mode, all colours are either region or white.
709
+ color = np.array(COLWHITE).reshape([1,1,CCHANNELS])
710
+ else: # In nonrepl mode, mask all the regions and invert.
711
+ color = np.array(list(REGUSE.values())) # nx3
712
+ color = np.moveaxis(color,-1,0) # 3xn
713
+ color = color.reshape(1,1,*color.shape) # 1x1x3xn
714
+ img = img.reshape(*img.shape,1) # Same.
715
+ indnot = True
716
+ else:
717
+ color = deterministic_colours(int(num) + 1)[-1]
718
+ color = color.reshape([1,1,CCHANNELS])
719
+ if indnot: # Negation of a list of regions.
720
+ mask = (~(img == color)).all(-1).all(-1)
721
+ mask = mask * mult
722
+ else:
723
+ mask = ((img == color).all(-1)) * mult
724
+ if mask.sum() > 0 and num >= 0:
725
+ REGUSE[num] = color.reshape(-1).tolist()
726
+ return mask
727
+
728
+ def draw_region(img, num):
729
+ """Simply runs polygon detection, followed by mask on result.
730
+
731
+ Saves extra inconvenient button. Since num is auto incremented, we take the old val.
732
+ """
733
+ img, num2 = detect_polygons(img, num)
734
+ mask = detect_mask(img, num)
735
+ # Gradio is stupid, I have to force feed it a dict so preprocess doesn't break.
736
+ # Disabled here, can only be fixed reliably in preprocess.
737
+ # dimg = {"image":img, "mask": None}
738
+ dimg = img
739
+ return dimg, num2, mask
740
+
741
+ def draw_image(img, inddict = False):
742
+ """Runs colour detection followed by mask on -1 to show which colours are regions.
743
+
744
+ """
745
+ img, clearer = detect_image_colours(img,inddict)
746
+ mask = detect_mask(img, -1)
747
+ dimg = img
748
+ return dimg, clearer, mask
749
+
750
+ def create_canvas(h, w, indwipe = True):
751
+ """New region sketch area.
752
+
753
+ Small variant value is added (and ignored later) due to gradio refresh bug.
754
+ Meant to be used only to start over or when the image dims change.
755
+ """
756
+ global VARIANT
757
+ global REGUSE
758
+ VARIANT = 1 - VARIANT
759
+ if indwipe:
760
+ REGUSE = dict()
761
+ vret = np.zeros(shape = (h + VARIANT, w + VARIANT, CCHANNELS), dtype = np.uint8) + CBLACK
762
+ return vret
763
+
764
+ # SBM In mask mode, grabs each mask from coloured mask image.
765
+ # If there's no base, remainder goes to first mask.
766
+ # If there's a base, it will receive its own remainder mask, applied at 100%.
767
+ def inpaintmaskdealer(self, p, bratios, usebase, polymask):
768
+ prompt = p.prompt
769
+ if self.debug: print("in inpaintmaskdealer",prompt)
770
+ if KEYCOMM in prompt: prompt = prompt.split(KEYCOMM,1)[1]
771
+ if KEYBASE in prompt: prompt = prompt.split(KEYBASE,1)[1]
772
+ # Prep masks.
773
+ self.regmasks = []
774
+ tm = None
775
+ # Sort colour dict by key, return value for masking.
776
+ #for _,c in sorted(REGUSE.items(), key = lambda x: x[0]):
777
+ for c in sorted(REGUSE.keys()):
778
+ m = detect_mask(polymask, c, 1)
779
+ if VARIANT != 0:
780
+ m = m[:-VARIANT,:-VARIANT]
781
+ if m.any():
782
+ if tm is None:
783
+ tm = np.zeros_like(m) # First mask is ignored deliberately.
784
+ if self.usebase: # In base mode, base gets the outer regions.
785
+ tm = tm + m
786
+ else:
787
+ tm = tm + m
788
+ m = m.reshape([1, *m.shape]).astype(np.float16)
789
+ t = torch.from_numpy(m).to(devices.device)
790
+ self.regmasks.append(t)
791
+ # First mask applies to all unmasked regions.
792
+ m = 1 - tm
793
+ m = m.reshape([1, *m.shape]).astype(np.float16)
794
+ t = torch.from_numpy(m).to(devices.device)
795
+ if self.usebase:
796
+ self.regbase = t
797
+ else:
798
+ self.regbase = None
799
+ self.regmasks[0] = t
800
+
801
+ # Simulated region anchoring for base weights.
802
+ breaks = prompt.count(KEYBRK)
803
+ self.bratios = split_l2(bratios, DELIMROW, DELIMCOL, fmap = ffloatd(0),
804
+ basestruct = [[0] * (breaks + 1)], indflip = False)
805
+
806
+ def randdealer(self,p,aratios,bratios):
807
+ # Create a tensor of size h*w
808
+ tensor = torch.zeros((p.height//8, p.width//8)).to("cuda")
809
+ x,y = int(aratios.split(",")[0]),int(aratios.split(",")[1])
810
+
811
+
812
+
813
+ # Calculate the size of each region
814
+ dh, dw = p.height//8 // x, p.width//8 // y
815
+ lbreaks = p.prompt.count(KEYBRK) + 1
816
+
817
+ bratios = bratios.split(",") if self.usebase else [0]
818
+ bratios = [float(b) for b in bratios]
819
+ while len(bratios) <= lbreaks:
820
+ bratios.append(bratios[0])
821
+
822
+ # Assign a random region index (0 to lbreaks-1) to each cell
823
+ for i in range(x):
824
+ for j in range(y):
825
+ random_value = torch.randint(0, lbreaks, (1,))
826
+ tensor[i*dh:(i+1)*dh, j*dw:(j+1)*dw] = random_value
827
+ tensors = []
828
+
829
+ ranbase = torch.ones_like(tensor)
830
+
831
+ for i in range(lbreaks):
832
+ add = torch.where(tensor==i, 1*(1-bratios[i]),0)
833
+ tensors.append(add)
834
+ ranbase = ranbase - add
835
+
836
+ drows = []
837
+ dcells = []
838
+ for c in range(lbreaks):
839
+ d = RegionCell(0,0 , 0, 0)
840
+ dcells.append(d)
841
+ drow = RegionRow(0, 1, dcells)
842
+ drows.append(drow)
843
+
844
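+ # Store the dummy matrix structure plus the per-region weight maps; ranbase holds the weight left over for the base prompt.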
+ self.aratios = drows
845
+ self.ransors = tensors
846
+ self.ranbase = ranbase
extensions/sd-webui-regional-prompter/scripts/rp.py ADDED
@@ -0,0 +1,1154 @@
1
+ import inspect
2
+ import os.path
3
+ from importlib import reload
4
+ import launch
5
+ from pprint import pprint
6
+ import gradio as gr
7
+ import numpy as np
8
+ from PIL import Image
9
+ import modules.ui
10
+ import modules # SBM Apparently, basedir only works when accessed directly.
11
+ from modules import paths, scripts, shared, extra_networks, prompt_parser
12
+ from modules.processing import Processed
13
+ from modules.script_callbacks import (on_ui_settings,
14
+ CFGDenoisedParams, CFGDenoiserParams, on_cfg_denoised, on_cfg_denoiser)
15
+ import scripts.attention
16
+ import scripts.latent
17
+ import scripts.regions
18
+ try:
19
+ reload(scripts.regions) # update without restarting web-ui.bat
20
+ reload(scripts.attention)
21
+ reload(scripts.latent)
22
+ except:
23
+ pass
24
+ import json # Presets.
25
+ from json.decoder import JSONDecodeError
26
+ from scripts.attention import (TOKENS, hook_forwards, reset_pmasks, savepmasks)
27
+ from scripts.latent import (denoised_callback_s, denoiser_callback_s, lora_namer, setuploras, unloadlorafowards)
28
+ from scripts.regions import (MAXCOLREG, IDIM, KEYBRK, KEYBASE, KEYCOMM, KEYPROMPT, ALLKEYS, ALLALLKEYS,
29
+ create_canvas, draw_region, #detect_mask, detect_polygons,
30
+ draw_image, save_mask, load_mask, changecs,
31
+ floatdef, inpaintmaskdealer, makeimgtmp, matrixdealer)
32
+
33
+ FLJSON = "regional_prompter_presets.json"
34
+ OPTAND = "disable convert 'AND' to 'BREAK'"
35
+ OPTUSEL = "Use LoHa or other"
36
+ # Modules.basedir points to extension's dir. script_path or scripts.basedir points to root.
37
+ PTPRESET = modules.scripts.basedir()
38
+ PTPRESETALT = os.path.join(paths.script_path, "scripts")
39
+
40
+
41
+
42
+ def lange(l):
43
+ return range(len(l))
44
+
45
+ orig_batch_cond_uncond = shared.opts.batch_cond_uncond if hasattr(shared.opts,"batch_cond_uncond") else shared.batch_cond_uncond
46
+
47
+ PRESETSDEF =[
48
+ ["Vertical-3", "Vertical",'1,1,1',"",False,False,False,"Attention",False,"0","0"],
49
+ ["Horizontal-3", "Horizontal",'1,1,1',"",False,False,False,"Attention",False,"0","0"],
50
+ ["Horizontal-7", "Horizontal",'1,1,1,1,1,1,1',"0.2",True,False,False,"Attention",False,"0","0"],
51
+ ["Twod-2-1", "Horizontal",'1,2,3;1,1',"0.2",False,False,False,"Attention",False,"0","0"],
52
+ ]
53
+
54
+ ATTNSCALE = 8 # Initial image compression in attention layers.
55
+
56
+ fhurl = lambda url, label: r"""<a href="{}">{}</a>""".format(url, label)
57
+ GUIDEURL = r"https://github.com/hako-mikan/sd-webui-regional-prompter"
58
+ MATRIXURL = GUIDEURL + r"#2d-region-assignment"
59
+ MASKURL = GUIDEURL + r"#mask-regions-aka-inpaint-experimental-function"
60
+ PROMPTURL = GUIDEURL + r"/blob/main/prompt_en.md"
61
+ PROMPTURL2 = GUIDEURL + r"/blob/main/prompt_ja.md"
62
+
63
+
64
+ def ui_tab(mode, submode):
65
+ """Structures components for mode tab.
66
+
67
+ Semi-hardcoded, but it's clearer this way.
68
+ """
69
+ vret = None
70
+ if mode == "Matrix":
71
+ with gr.Row():
72
+ mguide = gr.HTML(value = fhurl(MATRIXURL, "Matrix mode guide"))
73
+ with gr.Row():
74
+ mmode = gr.Radio(label="Main Splitting", choices=submode, value="Columns", type="value", interactive=True,elem_id="RP_main_splitting")
75
+ ratios = gr.Textbox(label="Divide Ratio",lines=1,value="1,1",interactive=True,elem_id="RP_divide_ratio",visible=True)
76
+ with gr.Row():
77
+ with gr.Column():
78
+ with gr.Row():
79
+ twid = gr.Slider(label="Width", minimum=64, maximum=2048, value=512, step=8,elem_id="RP_matrix_width")
80
+ thei = gr.Slider(label="Height", minimum=64, maximum=2048, value=512, step=8,elem_id="RP_matrix_height")
81
+ maketemp = gr.Button(value="visualize and make template")
82
+
83
+ template = gr.Textbox(label="template",interactive=True,visible=True,elem_id="RP_matrix_template")
84
+ flipper = gr.Checkbox(label = 'flip "," and ";"', value = False,elem_id="RP_matrix_flip")
85
+ overlay = gr.Slider(label="Overlay Ratio", minimum=0, maximum=1, step=0.1, value=0.5,elem_id="RP_matrix_overlay")
86
+
87
+ with gr.Column():
88
+ areasimg = gr.Image(type="pil", show_label = False, height=256, width=256,source = "upload", interactive=True)
89
+ # Need to add maketemp function based on base / common checks.
90
+ vret = [mmode, ratios, maketemp, template, areasimg, flipper, thei, twid, overlay]
91
+ elif mode == "Mask":
92
+ with gr.Row():
93
+ xguide = gr.HTML(value = fhurl(MASKURL, "Inpaint+ mode guide"))
94
+ with gr.Row(): # Creep: Placeholder, should probably make this invisible.
95
+ xmode = gr.Radio(label="Mask mode", choices=submode, value="Mask", type="value", interactive=True,elem_id="RP_mask_mode")
96
+ with gr.Row(): # CREEP: Css magic to make the canvas bigger? I think it's in style.css: #img2maskimg -> height.
97
+ polymask = gr.Image(label = "Do not upload here until bugfix",elem_id="polymask",
98
+ source = "upload", mirror_webcam = False, type = "numpy", tool = "sketch")#.style(height=480)
99
+ with gr.Row():
100
+ with gr.Column():
101
+ num = gr.Slider(label="Region", minimum=-1, maximum=MAXCOLREG, step=1, value=1,elem_id="RP_mask_region")
102
+ canvas_width = gr.Slider(label="Inpaint+ Width", minimum=64, maximum=2048, value=512, step=8,elem_id="RP_mask_width")
103
+ canvas_height = gr.Slider(label="Inpaint+ Height", minimum=64, maximum=2048, value=512, step=8,elem_id="RP_mask_height")
104
+ btn = gr.Button(value = "Draw region + show mask")
105
+ # btn2 = gr.Button(value = "Display mask") # Not needed.
106
+ cbtn = gr.Button(value="Create mask area")
107
+ with gr.Column():
108
+ showmask = gr.Image(label = "Mask", shape=(IDIM, IDIM))
109
+ # CONT: Awaiting fix for https://github.com/gradio-app/gradio/issues/4088.
110
+ uploadmask = gr.Image(label="Upload mask here cus gradio",source = "upload", type = "numpy")
111
+ # btn.click(detect_polygons, inputs = [polymask,num], outputs = [polymask,num])
112
+ btn.click(draw_region, inputs = [polymask, num], outputs = [polymask, num, showmask])
113
+ # btn2.click(detect_mask, inputs = [polymask,num], outputs = [showmask])
114
+ cbtn.click(fn=create_canvas, inputs=[canvas_height, canvas_width], outputs=[polymask])
115
+ uploadmask.upload(fn = draw_image, inputs = [uploadmask], outputs = [polymask, uploadmask, showmask])
116
+
117
+ vret = [xmode, polymask, num, canvas_width, canvas_height, btn, cbtn, showmask, uploadmask]
118
+ elif mode == "Prompt":
119
+ with gr.Row():
120
+ pguide = gr.HTML(value = fhurl(PROMPTURL, "Prompt mode guide"))
121
+ pguide2 = gr.HTML(value = fhurl(PROMPTURL2, "Extended prompt guide (jp)"))
122
+ with gr.Row():
123
+ pmode = gr.Radio(label="Prompt mode", choices=submode, value="Prompt", type="value", interactive=True, elem_id="RP_prompt_mode")
124
+ threshold = gr.Textbox(label = "threshold", value = 0.4, interactive=True, elem_id="RP_prompt_threshold")
125
+
126
+ vret = [pmode, threshold]
127
+
128
+ return vret
129
+
130
+ # modes, submodes. Order must be maintained so dict is inadequate. Must have submode for component consistency.
131
+ RPMODES = [
132
+ ("Matrix", ("Columns","Rows","Random")),
133
+ ("Mask", ("Mask",)),
134
+ ("Prompt", ("Prompt", "Prompt-Ex")),
135
+ ]
136
+ fgrprop = lambda x: {"label": x, "id": "t" + x, "elem_id": "RP_" + x}
137
+
138
+ def mode2tabs(mode):
139
+ """Converts mode (in preset) to gradio tab + submodes.
140
+
141
+ I dunno if it's possible to nest components or make them optional (probably not),
142
+ so this is the best we can do.
143
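+ Example: mode2tabs("Columns") -> ["Matrix", "Columns", None, None].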
+ """
144
+ vret = ["Nope"] + [None] * len(RPMODES)
145
+ for (i,(k,v)) in enumerate(RPMODES):
146
+ if mode in v:
147
+ vret[0] = k
148
+ vret[i + 1] = mode
149
+ return vret
150
+
151
+ def tabs2mode(tab, *submode):
152
+ """Converts ui tab + submode list to a single value mode.
153
+
154
+ Picks current submode based on tab, nothing clever. Submodes must be unique.
155
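+ Example: tabs2mode("Matrix", "Columns", "Mask", "Prompt") -> "Columns".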
+ """
156
+ for (i,(k,_)) in enumerate(RPMODES):
157
+ if tab == k:
158
+ return submode[i]
159
+ return "Nope"
160
+
161
+ def expand_components(l):
162
+ """Converts json preset to component format.
163
+
164
+ Assumes mode is the first value in list.
165
+ """
166
+ l = list(l) # Tuples cannot be altered.
167
+ tabs = mode2tabs(l[0])
168
+ return tabs + l[1:]
169
+
170
+ def compress_components(l):
171
+ """Converts component values to preset format.
172
+
173
+ Assumes tab + submodes are the first values in list.
174
+ """
175
+ l = list(l)
176
+ mode = tabs2mode(*l[:len(RPMODES) + 1])
177
+ return [mode] + l[len(RPMODES) + 1:]
178
+
179
+ class Script(modules.scripts.Script):
180
+ def __init__(self,active = False,mode = "Matrix",calc = "Attention",h = 0, w =0, debug = False, debug2 = False, usebase = False,
181
+ usecom = False, usencom = False, batch = 1,isxl = False, lstop=0, lstop_hr=0, diff = None):
182
+ self.active = active
183
+ if mode == "Columns": mode = "Horizontal"
184
+ if mode == "Rows": mode = "Vertical"
185
+ self.mode = mode
186
+ self.calc = calc
187
+ self.h = h
188
+ self.w = w
189
+ self.debug = debug
190
+ self.debug2 = debug2
191
+ self.usebase = usebase
192
+ self.usecom = usecom
193
+ self.usencom = usencom
194
+ self.batch_size = batch
195
+ self.isxl = isxl
196
+
197
+ self.aratios = []
198
+ self.bratios = []
199
+ self.divide = 0
200
+ self.count = 0
201
+ self.eq = True
202
+ self.pn = True
203
+ self.hr = False
204
+ self.hr_scale = 0
205
+ self.hr_w = 0
206
+ self.hr_h = 0
207
+ self.in_hr = False
208
+ self.xsize = 0
209
+ self.imgcount = 0
210
+ # for latent mode
211
+ self.filters = []
212
+ self.lora_applied = False
213
+ self.lstop = int(lstop)
214
+ self.lstop_hr = int(lstop_hr)
215
+ # for inpaintmask
216
+ self.regmasks = None
217
+ self.regbase = None
218
+ #for prompt region
219
+ self.pe = []
220
+ self.step = 0
221
+
222
+ #for Differential
223
+ self.diff = diff
224
+ self.rps = None
225
+
226
+ #script communicator
227
+ self.hooked = False
228
+ self.condi = 0
229
+
230
+ self.used_prompt = ""
231
+ self.logprops = ["active","mode","usebase","usecom","usencom","batch_size","isxl","h","w","aratios",
232
+ "divide","count","eq","pn","hr","pe","step","diff","used_prompt"]
233
+ self.log = {}
234
+
235
+ def logger(self):
236
+ for prop in self.logprops:
237
+ print(f"{prop} = {getattr(self,prop,None)}")
238
+ for key in self.log.keys():
239
+ print(f"{key} = {self.log[key]}")
240
+
241
+ def title(self):
242
+ return "Regional Prompter"
243
+
244
+ def show(self, is_img2img):
245
+ return modules.scripts.AlwaysVisible
246
+
247
+ infotext_fields = None
248
+ paste_field_names = []
249
+
250
+ def ui(self, is_img2img):
251
+ filepath = os.path.join(PTPRESET, FLJSON)
252
+
253
+ presets = []
254
+
255
+ presets = loadpresets(filepath)
256
+ presets = LPRESET.update(presets)
257
+
258
+ with gr.Accordion("Regional Prompter", open=False, elem_id="RP_main"):
259
+ with gr.Row():
260
+ active = gr.Checkbox(value=False, label="Active",interactive=True,elem_id="RP_active")
261
+ urlguide = gr.HTML(value = fhurl(GUIDEURL, "Usage guide"))
262
+ with gr.Row():
263
+ # mode = gr.Radio(label="Divide mode", choices=["Horizontal", "Vertical","Mask","Prompt","Prompt-Ex"], value="Horizontal", type="value", interactive=True)
264
+ calcmode = gr.Radio(label="Generation mode", choices=["Attention", "Latent"], value="Attention", type="value", interactive=True, elem_id="RP_generation_mode",)
265
+ with gr.Row(visible=True):
266
+ # ratios = gr.Textbox(label="Divide Ratio",lines=1,value="1,1",interactive=True,elem_id="RP_divide_ratio",visible=True)
267
+ baseratios = gr.Textbox(label="Base Ratio", lines=1,value="0.2",interactive=True, elem_id="RP_base_ratio", visible=True)
268
+ with gr.Row():
269
+ usebase = gr.Checkbox(value=False, label="Use base prompt",interactive=True, elem_id="RP_usebase")
270
+ usecom = gr.Checkbox(value=False, label="Use common prompt",interactive=True,elem_id="RP_usecommon")
271
+ usencom = gr.Checkbox(value=False, label="Use common negative prompt",interactive=True,elem_id="RP_usecommon_negative")
272
+
273
+ # Tabbed modes.
274
+ with gr.Tabs(elem_id="RP_mode") as tabs:
275
+ rp_selected_tab = gr.State("Matrix") # State component to document current tab for gen.
276
+ # ltabs = []
277
+ ltabp = []
278
+ for (i, (md,smd)) in enumerate(RPMODES):
279
+ with gr.TabItem(**fgrprop(md)) as tab: # Tabs with a formatted id.
280
+ # ltabs.append(tab)
281
+ ltabp.append(ui_tab(md, smd))
282
+ # Tab switch tags state component.
283
+ tab.select(fn = lambda tabnum = i: RPMODES[tabnum][0], inputs=[], outputs=[rp_selected_tab])
284
+
285
+ # Hardcode expansion back to components for any specific events.
286
+ (mmode, ratios, maketemp, template, areasimg, flipper, thei, twid, overlay) = ltabp[0]
287
+ (xmode, polymask, num, canvas_width, canvas_height, btn, cbtn, showmask, uploadmask) = ltabp[1]
288
+ (pmode, threshold) = ltabp[2]
289
+
290
+ with gr.Accordion("Presets",open = False):
291
+ with gr.Row():
292
+ availablepresets = gr.Dropdown(label="Presets", choices=presets, type="index")
293
+ applypresets = gr.Button(value="Apply Presets",variant='primary',elem_id="RP_applysetting")
294
+ with gr.Row():
295
+ presetname = gr.Textbox(label="Preset Name",lines=1,value="",interactive=True,elem_id="RP_preset_name",visible=True)
296
+ savesets = gr.Button(value="Save to Presets",variant='primary',elem_id="RP_savesetting")
297
+ with gr.Row():
298
+ lstop = gr.Textbox(label="LoRA stop step",value="0",interactive=True,elem_id="RP_ne_tenc_ratio",visible=True)
299
+ lstop_hr = gr.Textbox(label="LoRA Hires stop step",value="0",interactive=True,elem_id="RP_ne_unet_ratio",visible=True)
300
+ lnter = gr.Textbox(label="LoRA in negative textencoder",value="0",interactive=True,elem_id="RP_ne_tenc_ratio_negative",visible=True)
301
+ lnur = gr.Textbox(label="LoRA in negative U-net",value="0",interactive=True,elem_id="RP_ne_unet_ratio_negative",visible=True)
302
+ with gr.Row():
303
+ options = gr.CheckboxGroup(value=False, label="Options",choices=[OPTAND, OPTUSEL, "debug", "debug2"], interactive=True, elem_id="RP_options")
304
+ mode = gr.Textbox(value = "Matrix",visible = False, elem_id="RP_divide_mode")
305
+
306
+ dummy_img = gr.Image(type="pil", show_label = False, height=256, width=256,source = "upload", interactive=True, visible = False)
307
+
308
+ dummy_false = gr.Checkbox(value=False, visible=False)
309
+
310
+ areasimg.upload(fn=lambda x: x,inputs=[areasimg],outputs = [dummy_img])
311
+ areasimg.clear(fn=lambda x: None,outputs = [dummy_img])
312
+
313
+ def changetabs(mode):
314
+ modes = ["Matrix", "Mask", "Prompt"]
315
+ if mode not in modes: mode = "Matrix"
316
+ return gr.Tabs.update(selected="t"+mode)
317
+
318
+ mode.change(fn = changetabs,inputs=[mode],outputs=[tabs])
319
+ settings = [rp_selected_tab, mmode, xmode, pmode, ratios, baseratios, usebase, usecom, usencom, calcmode, options, lnter, lnur, threshold, polymask, lstop, lstop_hr, flipper]
320
+
321
+ self.infotext_fields = [
322
+ (active, "RP Active"),
323
+ # (mode, "RP Divide mode"),
324
+ (mode, "RP Divide mode"),
325
+ (mmode, "RP Matrix submode"),
326
+ (xmode, "RP Mask submode"),
327
+ (pmode, "RP Prompt submode"),
328
+ (calcmode, "RP Calc Mode"),
329
+ (ratios, "RP Ratios"),
330
+ (baseratios, "RP Base Ratios"),
331
+ (usebase, "RP Use Base"),
332
+ (usecom, "RP Use Common"),
333
+ (usencom, "RP Use Ncommon"),
334
+ (options,"RP Options"),
335
+ (lnter,"RP LoRA Neg Te Ratios"),
336
+ (lnur,"RP LoRA Neg U Ratios"),
337
+ (threshold,"RP threshold"),
338
+ (lstop,"RP LoRA Stop Step"),
339
+ (lstop_hr,"RP LoRA Hires Stop Step"),
340
+ (flipper, "RP Flip")
341
+ ]
342
+
343
+ for _,name in self.infotext_fields:
344
+ self.paste_field_names.append(name)
345
+
346
+ def setpreset(select, *settings):
347
+ """Load preset from list.
348
+
349
+ SBM: The only way I know how to get the old values in gradio,
350
+ is to pass them all as input.
351
+ Tab mode converts ui to single value.
352
+ """
353
+ # Need to swap all masked images to the source,
354
+ # getting "valueerror: cannot process this value as image".
355
+ # Gradio bug in components.postprocess, most likely.
356
+ settings = [s["image"] if (isinstance(s,dict) and "image" in s) else s for s in settings]
357
+ presets = loadpresets(filepath)
358
+ preset = presets[select]
359
+ preset = loadblob(preset)
360
+ preset = [fmt(preset.get(k, vdef)) for (k,fmt,vdef) in PRESET_KEYS]
361
+ preset = preset[1:] # Remove name.
362
+ preset = expand_components(preset)
363
+ # Change nulls to original value.
364
+ preset = [settings[i] if p is None else p for (i,p) in enumerate(preset)]
365
+ while len(settings) >= len(preset):
366
+ preset.append(0)
367
+ # return [gr.update(value = pr) for pr in preset] # SBM Why update? Shouldn't regular return do the job?
368
+ if preset[0] == "Vertical":preset[0] = "Rows"
369
+ if preset[0] == "Horizontal":preset[0] = "Columns"
370
+ return preset
371
+
372
+ maketemp.click(fn=makeimgtmp, inputs =[ratios,mmode,usecom,usebase,flipper,thei,twid,dummy_img,overlay],outputs = [areasimg,template])
373
+ applypresets.click(fn=setpreset, inputs = [availablepresets, *settings], outputs=settings)
374
+ savesets.click(fn=savepresets, inputs = [presetname,*settings],outputs=availablepresets)
375
+
376
+ return [active, dummy_false, rp_selected_tab, mmode, xmode, pmode, ratios, baseratios,
377
+ usebase, usecom, usencom, calcmode, options, lnter, lnur, threshold, polymask, lstop, lstop_hr, flipper]
378
+
379
+ def process(self, p, active, a_debug , rp_selected_tab, mmode, xmode, pmode, aratios, bratios,
380
+ usebase, usecom, usencom, calcmode, options, lnter, lnur, threshold, polymask, lstop, lstop_hr, flipper):
381
+
382
+ if type(options) is bool:
383
+ options = ["disable convert 'AND' to 'BREAK'"] if options else []
384
+ elif type(options) is str:
385
+ options = options.split(",")
386
+
387
+ if a_debug == True:
388
+ options.append("debug")
389
+
390
+ debug = "debug" in options
391
+ debug2 = "debug2" in options
392
+ self.slowlora = OPTUSEL in options
393
+
394
+ if type(polymask) == str:
395
+ try:
396
+ polymask,_,_ = draw_image(np.array(Image.open(polymask)))
397
+ except:
398
+ pass
399
+
400
+ if rp_selected_tab == "Nope": rp_selected_tab = "Matrix"
401
+
402
+ if debug: pprint([active, debug, rp_selected_tab, mmode, xmode, pmode, aratios, bratios,
403
+ usebase, usecom, usencom, calcmode, options, lnter, lnur, threshold, polymask, lstop, lstop_hr, flipper])
404
+
405
+ tprompt = p.prompt[0] if type(p.prompt) == list else p.prompt
406
+
407
+ if hasattr(p,"rps_diff"):
408
+ if p.rps_diff:
409
+ active = True
410
+ mmode = "Prompt"
411
+ xmode = "Prompt-Ex"
412
+ diff = p.rps_diff
413
+ if hasattr(p, "all_prompts_rps"):
414
+ p.all_prompts = p.all_prompts_rps
415
+ if hasattr(p,"threshold"):
416
+ if p.threshold is not None:threshold = str(p.threshold)
417
+ else:
418
+ diff = False
419
+
420
+ if not any(key in tprompt for key in ALLALLKEYS) or not active:
421
+ return unloader(self,p)
422
+
423
+ p.extra_generation_params.update({
424
+ "RP Active":active,
425
+ "RP Divide mode": rp_selected_tab,
426
+ "RP Matrix submode": mmode,
427
+ "RP Mask submode": xmode,
428
+ "RP Prompt submode": pmode,
429
+ "RP Calc Mode":calcmode,
430
+ "RP Ratios": aratios,
431
+ "RP Base Ratios": bratios,
432
+ "RP Use Base":usebase,
433
+ "RP Use Common":usecom,
434
+ "RP Use Ncommon": usencom,
435
+ "RP Options" : options,
436
+ "RP LoRA Neg Te Ratios": lnter,
437
+ "RP LoRA Neg U Ratios": lnur,
438
+ "RP threshold": threshold,
439
+ "RP LoRA Stop Step":lstop,
440
+ "RP LoRA Hires Stop Step":lstop_hr,
441
+ "RP Flip": flipper,
442
+ })
443
+
444
+ savepresets("lastrun",rp_selected_tab, mmode, xmode, pmode, aratios,bratios,
445
+ usebase, usecom, usencom, calcmode, options, lnter, lnur, threshold, polymask,lstop, lstop_hr, flipper)
446
+
447
+ if flipper:aratios = changecs(aratios)
448
+
449
+ self.__init__(active, tabs2mode(rp_selected_tab, mmode, xmode, pmode) ,calcmode ,p.height, p.width, debug, debug2,
450
+ usebase, usecom, usencom, p.batch_size, hasattr(shared.sd_model,"conditioner"),lstop, lstop_hr, diff = diff)
451
+
452
+ self.all_prompts = p.all_prompts.copy()
453
+ self.all_negative_prompts = p.all_negative_prompts.copy()
454
+
455
+ # SBM ddim / plms detection.
456
+ self.isvanilla = p.sampler_name in ["DDIM", "PLMS", "UniPC"]
457
+
458
+ if self.h % ATTNSCALE != 0 or self.w % ATTNSCALE != 0:
459
+ # Testing shows a round down occurs in model.
460
+ print("Warning: Nonstandard height / width.")
461
+ self.h = self.h - self.h % ATTNSCALE
462
+ self.w = self.w - self.w % ATTNSCALE
463
+
464
+ if hasattr(p,"enable_hr"): # Img2img doesn't have it.
465
+ self.hr = p.enable_hr
466
+ self.hr_w = (p.hr_resize_x if p.hr_resize_x > p.width else p.width * p.hr_scale)
467
+ self.hr_h = (p.hr_resize_y if p.hr_resize_y > p.height else p.height * p.hr_scale)
468
+ if self.hr_h % ATTNSCALE != 0 or self.hr_w % ATTNSCALE != 0:
469
+ # Testing shows a round down occurs in model.
470
+ print("Warning: Nonstandard height / width for ulscaled size")
471
+ self.hr_h = self.hr_h - self.hr_h % ATTNSCALE
472
+ self.hr_w = self.hr_w - self.hr_w % ATTNSCALE
473
+
474
+ loraverchekcer(self) #check web-ui version
475
+ if OPTAND not in options: allchanger(p, "AND", KEYBRK) #Change AND to BREAK
476
+ if any(x in self.mode for x in ["Ver","Hor"]):
477
+ keyconverter(aratios, self.mode, usecom, usebase, p) #convert BREAKs to ADDCOMM/ADDCOL/ADDROW
478
+ bckeydealer(self, p) #detect COMM/BASE keys
479
+ keycounter(self, p) #count keys and set to self.divide
480
+
481
+ if "Pro" not in self.mode: # skip region assign in prompt mode
482
+ ##### region mode
483
+ if "Mask" in self.mode:
484
+ keyreplacer(p) #change all keys to BREAK
485
+ inpaintmaskdealer(self, p, bratios, usebase, polymask)
486
+
487
+ elif any(x in self.mode for x in ["Ver","Hor","Ran"]):
488
+ matrixdealer(self, p, aratios, bratios, self.mode)
489
+
490
+ ##### calcmode
491
+ if "Att" in calcmode:
492
+ self.handle = hook_forwards(self, p.sd_model.model.diffusion_model)
493
+ if hasattr(shared.opts,"batch_cond_uncond"):
494
+ shared.opts.batch_cond_uncond = orig_batch_cond_uncond
495
+ else:
496
+ shared.batch_cond_uncond = orig_batch_cond_uncond
497
+ unloadlorafowards(p)
498
+ else:
499
+ self.handle = hook_forwards(self, p.sd_model.model.diffusion_model,remove = True)
500
+ setuploras(self)
501
+ # SBM It is vital to use local activation because callback registration is permanent,
502
+ # and there are multiple script instances (txt2img / img2img).
503
+
504
+ elif "Pro" in self.mode: #Prompt mode use both calcmode
505
+ self.ex = "Ex" in self.mode
506
+ if not usebase : bratios = "0"
507
+ self.handle = hook_forwards(self, p.sd_model.model.diffusion_model)
508
+ denoiserdealer(self)
509
+
510
+ neighbor(self,p) #detect other extensions
511
+ keyreplacer(p) #replace all keys with BREAK
512
+ blankdealer(self, p) #add "_" if the prompt of the last region is blank
513
+ commondealer(p, self.usecom, self.usencom) #add the common prompt to all regions
514
+ if "La" in self.calc: allchanger(p, KEYBRK,"AND") #replace BREAK with AND in Latent mode
515
+ if tokendealer(self, p): return unloader(self,p) #count tokens and calculate target tokens
516
+ thresholddealer(self, p, threshold) #set threshold
517
+
518
+ bratioprompt(self, bratios)
519
+ if not self.diff: hrdealer(p)
520
+
521
+ print(f"Regional Prompter Active, Pos tokens : {self.ppt}, Neg tokens : {self.pnt}")
522
+ self.used_prompt = p.all_prompts[0]
523
+
524
+ if debug : debugall(self)
525
+
526
+ def before_process_batch(self, p, *args, **kwargs):
527
+ if self.active:
528
+ self.current_prompts = kwargs["prompts"].copy()
529
+ p.disable_extra_networks = False
530
+
531
+ def before_hr(self, p, active, _, rp_selected_tab, mmode, xmode, pmode, aratios, bratios,
532
+ usebase, usecom, usencom, calcmode,nchangeand, lnter, lnur, threshold, polymask,lstop, lstop_hr, flipper):
533
+ if self.active:
534
+ self.in_hr = True
535
+ if "La" in self.calc:
536
+ lora_namer(self, p, lnter, lnur)
537
+ self.log["before_hr"] = "passed"
538
+ try:
539
+ import lora
540
+ self.log["before_hr_loralist"] = [x.name for x in lora.loaded_loras]
541
+ except:
542
+ pass
543
+
544
+ def process_batch(self, p, active, _, rp_selected_tab, mmode, xmode, pmode, aratios, bratios,
545
+ usebase, usecom, usencom, calcmode,nchangeand, lnter, lnur, threshold, polymask,lstop, lstop_hr,flipper,**kwargs):
546
+ # print(kwargs["prompts"])
547
+
548
+ if self.active:
549
+ resetpcache(p)
550
+ self.in_hr = False
551
+ self.xsize = 0
552
+ # SBM Before_process_batch was added in feb-mar, adding fallback.
553
+ if not hasattr(self,"current_prompts"):
554
+ self.current_prompts = kwargs["prompts"].copy()
555
+ p.all_prompts[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size] = self.all_prompts[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size]
556
+ p.all_negative_prompts[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size] = self.all_negative_prompts[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size]
557
+ if "Pro" in self.mode:
558
+ reset_pmasks(self)
559
+ if "La" in self.calc:
560
+ lora_namer(self, p, lnter, lnur)
561
+ try:
562
+ import lora
563
+ self.log["loralist"] = [x.name for x in lora.loaded_loras]
564
+ except:
565
+ pass
566
+
567
+ if self.lora_applied: # SBM Don't override orig twice on batch calls.
568
+ pass
569
+ else:
570
+ denoiserdealer(self)
571
+ self.lora_applied = True
572
+ #avoid reloading loras in hires-fix
573
+
574
+ def postprocess(self, p, processed, *args):
575
+ if self.active :
576
+ with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
577
+ processedx = Processed(p, [], p.seed, "")
578
+ file.write(processedx.infotext(p, 0))
579
+
580
+ if "Pro" in self.mode and not fseti("hidepmask"):
581
+ savepmasks(self, processed)
582
+
583
+ if self.debug or self.debug2 : self.logger()
584
+
585
+ unloader(self, p)
586
+
587
+ def denoiser_callback(self, params: CFGDenoiserParams):
588
+ denoiser_callback_s(self, params)
589
+
590
+ def denoised_callback(self, params: CFGDenoisedParams):
591
+ denoised_callback_s(self, params)
592
+
593
+
594
+ def unloader(self,p):
595
+ if hasattr(self,"handle"):
596
+ #print("unloaded")
597
+ hook_forwards(self, p.sd_model.model.diffusion_model, remove=True)
598
+ del self.handle
599
+
600
+ self.__init__()
601
+
602
+ if hasattr(shared.opts,"batch_cond_uncond"):
603
+ shared.opts.batch_cond_uncond = orig_batch_cond_uncond
604
+ else:
605
+ shared.batch_cond_uncond = orig_batch_cond_uncond
606
+
607
+ unloadlorafowards(p)
608
+
609
+ def denoiserdealer(self):
610
+ if self.calc =="Latent": # prompt mode use only denoiser callbacks
611
+ if not hasattr(self,"dd_callbacks"):
612
+ self.dd_callbacks = on_cfg_denoised(self.denoised_callback)
613
+ if hasattr(shared.opts,"batch_cond_uncond"):
614
+ shared.opts.batch_cond_uncond = False
615
+ else:
616
+ shared.batch_cond_uncond = False
617
+
618
+ if not hasattr(self,"dr_callbacks"):
619
+ self.dr_callbacks = on_cfg_denoiser(self.denoiser_callback)
620
+
621
+ if self.diff:
622
+ if not hasattr(self,"dd_callbacks"):
623
+ self.dd_callbacks = on_cfg_denoised(self.denoised_callback)
624
+
625
+
626
+ ############################################################
627
+ ##### prompts, tokens
628
+ def blankdealer(self, p):
629
+ seps = "AND" if "La" in self.calc else KEYBRK
630
+ all_prompts=[]
631
+ for prompt in p.all_prompts:
632
+ regions = prompt.split(seps)
633
+ if regions[-1].strip() in ["",","]:
634
+ prompt = prompt + " _"
635
+ all_prompts.append(prompt)
636
+ p.all_prompts = all_prompts
637
+
638
+ def commondealer(p, usecom, usencom):
639
+ all_prompts = []
640
+ all_negative_prompts = []
641
+
642
+ def comadder(prompt):
643
+ ppl = prompt.split(KEYBRK)
644
+ for i in range(len(ppl)):
645
+ if i == 0:
646
+ continue
647
+ ppl[i] = ppl[0] + ", " + ppl[i]
648
+ ppl = ppl[1:]
649
+ prompt = f"{KEYBRK} ".join(ppl)
650
+ return prompt
651
+
652
+ if usecom:
653
+ for pr in p.all_prompts:
654
+ all_prompts.append(comadder(pr))
655
+ p.all_prompts = all_prompts
656
+ p.prompt = all_prompts[0]
657
+
658
+ if usencom:
659
+ for pr in p.all_negative_prompts:
660
+ all_negative_prompts.append(comadder(pr))
661
+ p.all_negative_prompts = all_negative_prompts
662
+ p.negative_prompt = all_negative_prompts[0]
663
+
664
+ def hrdealer(p):
665
+ p.hr_prompt = p.prompt
666
+ p.hr_negative_prompt = p.negative_prompt
667
+ p.all_hr_prompts = p.all_prompts
668
+ p.all_hr_negative_prompts = p.all_negative_prompts
669
+
670
+ def allchanger(p, a, b):
671
+ p.prompt = p.prompt.replace(a, b)
672
+ for i in lange(p.all_prompts):
673
+ p.all_prompts[i] = p.all_prompts[i].replace(a, b)
674
+ p.negative_prompt = p.negative_prompt.replace(a, b)
675
+ for i in lange(p.all_negative_prompts):
676
+ p.all_negative_prompts[i] = p.all_negative_prompts[i].replace(a, b)
677
+
678
+ def tokendealer(self, p):
679
+ seps = "AND" if "La" in self.calc else KEYBRK
680
+ self.seps = seps
681
+ text, _ = extra_networks.parse_prompt(p.all_prompts[0]) # SBM From update_token_counter.
682
+ text = prompt_parser.get_learned_conditioning_prompt_schedules([text],p.steps)[0][0][1]
683
+ ppl = text.split(seps)
684
+ ntext, _ = extra_networks.parse_prompt(p.all_negative_prompts[0])
685
+ npl = ntext.split(seps)
686
+ eqb = len(ppl) == len(npl)
687
+ targets =[p.split(",")[-1] for p in ppl[1:]]
688
+ pt, nt, ppt, pnt, tt = [], [], [], [], []
689
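+ # pt/nt collect [start, end) chunk ranges (in blocks of TOKENS) per positive/negative sub-prompt, ppt/pnt the raw token counts, tt the target token positions used in Prompt mode.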
+
690
+ padd = 0
691
+
692
+ tokenizer = shared.sd_model.conditioner.embedders[0].tokenize_line if self.isxl else shared.sd_model.cond_stage_model.tokenize_line
693
+
694
+ for pp in ppl:
695
+ tokens, tokensnum = tokenizer(pp)
696
+ pt.append([padd, tokensnum // TOKENS + 1 + padd])
697
+ ppt.append(tokensnum)
698
+ padd = tokensnum // TOKENS + 1 + padd
699
+
700
+ if "Pro" in self.mode:
701
+ for target in targets:
702
+ ptokens, tokensnum = tokenizer(ppl[0])
703
+ ttokens, _ = tokenizer(target)
704
+
705
+ i = 1
706
+ tlist = []
707
+ while ttokens[0].tokens[i] != 49407:
708
+ for (j, maintok) in enumerate(ptokens): # SBM Long prompt.
709
+ if ttokens[0].tokens[i] in maintok.tokens:
710
+ tlist.append(maintok.tokens.index(ttokens[0].tokens[i]) + 75 * j)
711
+ i += 1
712
+ if tlist != [] : tt.append(tlist)
713
+
714
+ paddp = padd
715
+ padd = 0
716
+ for np in npl:
717
+ _, tokensnum = tokenizer(np)
718
+ nt.append([padd, tokensnum // TOKENS + 1 + padd])
719
+ pnt.append(tokensnum)
720
+ padd = tokensnum // TOKENS + 1 + padd
721
+
722
+ self.eq = paddp == padd and eqb
723
+
724
+ self.pt = pt
725
+ self.nt = nt
726
+ self.pe = tt
727
+ self.ppt = ppt
728
+ self.pnt = pnt
729
+
730
+ notarget = "Pro" in self.mode and tt == []
731
+ if notarget:
732
+ print("No target word is detected in Prompt mode")
733
+ return notarget
734
+
735
+ def thresholddealer(self, p ,threshold):
736
+ if "Pro" in self.mode:
737
+ threshold = threshold.split(",")
738
+ while len(self.pe) >= len(threshold) + 1:
739
+ threshold.append(threshold[0])
740
+ self.th = [floatdef(t, 0.4) for t in threshold] * self.batch_size
741
+ if self.debug :print ("threshold", self.th)
742
+
743
+ def bratioprompt(self, bratios):
744
+ if not "Pro" in self.mode: return self
745
+ bratios = bratios.split(",")
746
+ bratios = [floatdef(b, 0) for b in bratios]
747
+ while len(self.pe) >= len(bratios) + 1:
748
+ bratios.append(bratios[0])
749
+ self.bratios = bratios
750
+
751
+ def neighbor(self,p):
752
+ from modules.scripts import scripts_txt2img
753
+ for script in scripts_txt2img.alwayson_scripts:
754
+ if "negpip.py" in script.filename:
755
+ self.negpip = script
756
+
757
+ for script in scripts_txt2img.selectable_scripts:
758
+ if "rps.py" in script.filename:
759
+ self.rps = script
760
+ #print(dir(script))
761
+ #script.test1 = "kawattayone?"
762
+ #script.settest1("kawatta?")
763
+
764
+ try:
765
+ args = p.script_args
766
+ multi = ["MultiDiffusion",'Mixture of Diffusers']
767
+ if any(x in args for x in multi):
768
+ for key in multi:
769
+ if key in args:
770
+ self.nei_multi = [args[args.index(key)+5],args[args.index(key)+6]]
771
+ except:
772
+ pass
773
+
774
+ #####################################################
775
+ ##### Presets - Save and Load Settings
776
+
777
+ fimgpt = lambda flnm, fext, *dirparts: os.path.join(*dirparts, flnm + fext)
778
+
779
+ class PresetList():
780
+ """Preset list must be the same object throughout its lifetime, otherwise updates will err.
781
+
782
+ See gradio issue #4210 for details.
783
+ """
784
+ def __init__(self):
785
+ self.lpr = []
786
+
787
+ def update(self, newpr):
788
+ """Replace all values, return the reference.
789
+
790
+ Will convert dicts to the names only.
791
+ Might be more efficient to add the new names only, but meh.
792
+ """
793
+ if len(newpr) > 0 and isinstance(newpr[0],dict):
794
+ newpr = [pr["name"] for pr in newpr]
795
+ self.lpr.clear()
796
+ self.lpr.extend(newpr)
797
+ return self.lpr
798
+
799
+ def get(self):
800
+ return self.lpr
801
+
802
+ class JsonMask():
803
+ """Mask saved as image with some editing work.
804
+
805
+ """
806
+ blobdir = "regional_masks"
807
+ ext = ".png"
808
+
809
+ def __init__(self, img):
810
+ self.img = img
811
+
812
+ def makepath(self, name):
813
+ pt = fimgpt(name, self.ext, PTPRESET, self.blobdir)
814
+ os.makedirs(os.path.dirname(pt), exist_ok = True)
815
+ return pt
816
+
817
+ def save(self, name, preset = None):
818
+ """Save image to subdir.
819
+
820
+ Only saved when in mask mode - Hardcoded, don't have a better idea atm.
821
+ """
822
+ if (preset is None) or (preset[1] == "Mask"): # Check mode.
823
+ save_mask(self.img, self.makepath(name))
824
+ return name
825
+ return None
826
+
827
+ def load(self, name, preset = None):
828
+ """Load image from subdir (no editing, that comes later).
829
+
830
+ Prefer to use the given key, rather than name. SBM CONT: Load / save in dict mode? Debugging needed.
831
+ """
832
+ if name is None or self.img is None:
833
+ return None
834
+ return load_mask(self.makepath(self.img))
835
+
836
+ LPRESET = PresetList()
837
+
838
+ fcountbrk = lambda x: x.count(KEYBRK)
839
+ fint = lambda x: int(x)
840
+
841
+ # Json formatters.
842
+ fjstr = lambda x: x.strip()
843
+ #fjbool = lambda x: (x.upper() == "TRUE" or x.upper() == "T")
844
+ fjbool = lambda x: x # Json can store booleans reliably.
845
+ fjmask = lambda x: draw_image(x, inddict = False)[0] # Ignore mask reset value.
846
+
847
+ # (json_name, value_format, default)
848
+ # If default = none then will use current gradio value.
849
+ PRESET_KEYS = [
850
+ ("name",fjstr,"") , # Name is special, preset's key.
851
+ ("mode", fjstr, None) ,
852
+ ("ratios", fjstr, None) ,
853
+ ("baseratios", fjstr, None) ,
854
+ ("usebase", fjbool, None) ,
855
+ ("usecom", fjbool, False) ,
856
+ ("usencom", fjbool, False) ,
857
+ ("calcmode", fjstr, "Attention") , # Generation mode.
858
+ ("nchangeand", fjbool, False) ,
859
+ ("lnter", fjstr, "0") ,
860
+ ("lnur", fjstr, "0") ,
861
+ ("threshold", fjstr, "0") ,
862
+ ("polymask", fjmask, "") , # Mask has special corrections and logging.
863
+ ]
864
+ # (json_name,blob_class)
865
+ # Handles save + lazy load of blob data outside of presets.
866
+ BLOB_KEYS = {
867
+ "polymask": JsonMask
868
+ }
869
+
870
+ def saveblob(preset):
871
+ """Preset variables saved externally (blob).
872
+
873
+ Returns a modified list containing the references instead of the data.
874
+ Currently, this includes polymask, which is saved as an image,
875
+ with a filename = preset.
876
+ A blob class should contain a save method which returns the reference.
877
+ """
878
+ preset = list(preset) # Tuples don't have copy.
879
+ for (i,(vkey,vfun,vdef)) in enumerate(PRESET_KEYS):
880
+ if vkey in BLOB_KEYS:
881
+ # Func should accept raw form and convert it to a class.
882
+ x = BLOB_KEYS[vkey](preset[i])
883
+ # Class should have a save func given identifier, returning an access key.
884
+ x = x.save(preset[0], preset)
885
+ # Update the preset.
886
+ preset[i] = x
887
+ return preset
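+ # Illustrative walk-through: with BLOB_KEYS = {"polymask": JsonMask}, a preset named
+ # "example" saved in Mask mode gets its polymask image written to
+ # <PTPRESET>/regional_masks/example.png and the in-memory value replaced by "example";
+ # in any other mode JsonMask.save() returns None and the entry stays empty.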
888
+
889
+ def loadblob(preset):
890
+ """Load blob presets based on key.
891
+
892
+ Returns the modified preset with the blob data loaded in place of the references.
893
+ Currently, this includes polymask, which is saved as an image,
894
+ with a filename = preset.
895
+ A blob class should contain a load method which retrieves the data based on reference.
896
+ """
897
+ for (vkey,vval) in BLOB_KEYS.items():
898
+ # Func should accept reference form and convert it to a class.
899
+ x = vval(preset.get(vkey))
900
+ # Class should have a load func given identifier, returning data.
901
+ x = x.load(preset["name"], preset)
902
+ # Update the preset.
903
+ preset[vkey] = x
904
+ return preset
905
+
906
+ def savepresets(*settings):
907
+ # NAME must come first.
908
+ name = settings[0]
909
+ settings = [name] + compress_components(settings[1:])
910
+ settings = saveblob(settings)
911
+
912
+ # path_root = modules.scripts.basedir()
913
+ # filepath = os.path.join(path_root, "scripts", "regional_prompter_presets.json")
914
+ filepath = os.path.join(PTPRESET, FLJSON)
915
+
916
+ try:
917
+ with open(filepath, mode='r', encoding="utf-8") as f:
918
+ # presets = json.loads(json.load(f))
919
+ presets = json.load(f)
920
+ pr = {PRESET_KEYS[i][0]:settings[i] for i,_ in enumerate(PRESET_KEYS)}
921
+ # SBM Ordereddict might be better than list, quick search.
922
+ written = False
923
+ # if name == "lastrun": # SBM We should check the preset is unique in any case.
924
+ for i, preset in enumerate(presets):
925
+ if name == preset["name"]:
926
+ # if "lastrun" in preset["name"]:
927
+ presets[i] = pr
928
+ written = True
929
+ if not written:
930
+ presets.append(pr)
931
+ with open(filepath, mode='w', encoding="utf-8") as f:
932
+ # json.dump(json.dumps(presets), f, indent = 2)
933
+ json.dump(presets, f, indent = 2)
934
+ except Exception as e:
935
+ print(e)
936
+
937
+ presets = loadpresets(filepath)
938
+ presets = LPRESET.update(presets)
939
+ return gr.update(choices=presets)
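+ # Note: the choices handed back here are always the same LPRESET.lpr list object,
+ # which is the workaround the PresetList docstring above describes (gradio issue #4210).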
940
+
941
+ def presetfallback():
942
+ """Swaps main json dir to alt if exists, attempts reload.
943
+
944
+ """
945
+ global PTPRESET
946
+ global PTPRESETALT
947
+
948
+ if PTPRESETALT is not None:
949
+ print("Unknown preset error, fallback.")
950
+ PTPRESET = PTPRESETALT
951
+ PTPRESETALT = None
952
+ return loadpresets(PTPRESET)
953
+ else: # Already attempted swap.
954
+ print("Presets could not be loaded.")
955
+ return None
956
+
957
+ def loadpresets(filepath):
958
+ presets = []
959
+ try:
960
+ with open(filepath, encoding="utf-8") as f:
961
+ # presets = json.loads(json.load(f))
962
+ presets = json.load(f)
963
+ # presets = loadblob(presets) # DO NOT load all blobs - that's the point.
964
+ except OSError as e:
965
+ print("Init / preset error.")
966
+ presets = initpresets(filepath)
967
+ except TypeError:
968
+ print("Corrupted preset file, resetting.")
969
+ presets = initpresets(filepath)
970
+ except JSONDecodeError:
971
+ print("Preset file could not be decoded.")
972
+ presets = initpresets(filepath)
973
+ return presets
974
+
975
+ def initpresets(filepath):
976
+ lpr = PRESETSDEF
977
+ # if not os.path.isfile(filepath):
978
+ try:
979
+ with open(filepath, mode='w', encoding="utf-8") as f:
980
+ lprj = []
981
+ for pr in lpr:
982
+ plen = min(len(PRESET_KEYS), len(pr)) # Future setting additions ignored.
983
+ prj = {PRESET_KEYS[i][0]:pr[i] for i in range(plen)}
984
+ lprj.append(prj)
985
+ #json.dump(json.dumps(lprj), f, indent = 2)
986
+ json.dump(lprj, f, indent = 2)
987
+ return lprj
988
+ except Exception as e:
989
+ return presetfallback()
990
+
991
+ #####################################################
992
+ ##### Global settings
993
+
994
+ EXTKEY = "regprp"
995
+ EXTNAME = "Regional Prompter"
996
+ # (id, label, type, extra_parms)
997
+ EXTSETS = [
998
+ ("debug", "(PLACEHOLDER, USE THE ONE IN 2IMG) Enable debug mode", "check", dict()),
999
+ ("hidepmask", "Hide subprompt masks in prompt mode", "check", dict()),
1000
+
1001
+ ]
1002
+ # Dynamically constructed list of default values, because shared doesn't allocate a value automatically.
1003
+ # (id: def)
1004
+ DEXTSETV = dict()
1005
+ fseti = lambda x: shared.opts.data.get(EXTKEY + "_" + x, DEXTSETV[x])
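+ # For example (hypothetical call): fseti("hidepmask") reads
+ # shared.opts.data["regprp_hidepmask"] and falls back to the False default
+ # registered in DEXTSETV by ext_on_ui_settings below when the option was never saved.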
1006
+
1007
+ class Setting_Component():
1008
+ """Creates gradio components with some standard req values.
1009
+
1010
+ All must supply an id (used in code), label, component type.
1011
+ Default value and specific type settings can be overridden.
1012
+ """
1013
+ section = (EXTKEY, EXTNAME)
1014
+ def __init__(self, cid, clabel, ctyp, vdef = None, **kwargs):
1015
+ self.cid = EXTKEY + "_" + cid
1016
+ self.clabel = clabel
1017
+ self.ctyp = ctyp
1018
+ method = getattr(self, self.ctyp)
1019
+ method(**kwargs)
1020
+ if vdef is not None:
1021
+ self.vdef = vdef
1022
+
1023
+ def get(self):
1024
+ """Get formatted setting.
1025
+
1026
+ Input for shared.opts.add_option().
1027
+ """
1028
+ if self.ctyp == "textb":
1029
+ return (self.cid, shared.OptionInfo(self.vdef, self.clabel, section = self.section))
1030
+ return (self.cid, shared.OptionInfo(self.vdef, self.clabel,
1031
+ self.ccomp, self.cparms, section = self.section))
1032
+
1033
+ def textb(self, **kwargs):
1034
+ """Textbox unusually requires no component.
1035
+
1036
+ """
1037
+ self.ccomp = gr.Textbox
1038
+ self.vdef = ""
1039
+ self.cparms = {}
1040
+ self.cparms.update(kwargs)
1041
+
1042
+ def check(self, **kwargs):
1043
+ self.ccomp = gr.Checkbox
1044
+ self.vdef = False
1045
+ self.cparms = {"interactive": True}
1046
+ self.cparms.update(kwargs)
1047
+
1048
+ def slider(self, **kwargs):
1049
+ self.ccomp = gr.Slider
1050
+ self.vdef = 0
1051
+ self.cparms = {"minimum": 1, "maximum": 10, "step": 1}
1052
+ self.cparms.update(kwargs)
1053
+
1054
+ def ext_on_ui_settings():
1055
+ for (cid, clabel, ctyp, kwargs) in EXTSETS:
1056
+ comp = Setting_Component(cid, clabel, ctyp, **kwargs)
1057
+ opt = comp.get()
1058
+ shared.opts.add_option(*opt)
1059
+ DEXTSETV[cid] = comp.vdef
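+ # Sketch of the flow, not additional behaviour: the ("hidepmask", ..., "check", dict())
+ # row above becomes a gr.Checkbox option registered as "regprp_hidepmask" under the
+ # "Regional Prompter" settings section, defaulting to False.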
1060
+
1061
+ def debugall(self):
1062
+ print(f"mode : {self.mode}\ncalcmode : {self.calc}\nusebase : {self.usebase}")
1063
+ print(f"base ratios : {self.bratios}\nusecommon : {self.usecom}\nusenegcom : {self.usencom}")
1064
+ print(f"divide : {self.divide}\neq : {self.eq}")
1065
+ print(f"tokens : {self.ppt},{self.pnt},{self.pt},{self.nt}")
1066
+ print(f"ratios : {self.aratios}\n")
1067
+ print(f"prompt : {self.pe}")
1068
+ print(f"env : before15:{self.isbefore15},isxl:{self.isxl}")
1069
+ print(f"loras{self.log}")
1070
+
1071
+ def bckeydealer(self, p):
1072
+ '''
1073
+ detect COMM/BASE/PROMPT keys, set the corresponding flags, and when a flag is
+ set but its key is missing, convert the first BREAK into that key
1074
+ '''
1075
+ if KEYCOMM in p.prompt:
1076
+ self.usecom = True
1077
+ if self.usecom and KEYCOMM not in p.prompt:
1078
+ p.prompt = p.prompt.replace(KEYBRK,KEYCOMM,1)
1079
+
1080
+ if KEYCOMM in p.negative_prompt:
1081
+ self.usencom = True
1082
+ if self.usencom and KEYCOMM not in p.negative_prompt:
1083
+ p.negative_prompt = p.negative_prompt.replace(KEYBRK,KEYCOMM,1)
1084
+
1085
+ if KEYBASE in p.prompt:
1086
+ self.usebase = True
1087
+ if self.usebase and KEYBASE not in p.prompt:
1088
+ p.prompt = p.prompt.replace(KEYBRK,KEYBASE,1)
1089
+
1090
+ if KEYPROMPT in p.prompt.upper():
1091
+ self.mode = "Prompt"
1092
+
1093
+ def keyconverter(aratios,mode,usecom,usebase,p):
1094
+ '''convert BREAKS to ADDCOMM/ADDBASE/ADDCOL/ADDROW'''
1095
+ keychanger = makeimgtmp(aratios,mode,usecom,usebase,False,512,512, inprocess = True)
1096
+ keychanger = keychanger[:-1]
1097
+ #print(keychanger,p.prompt)
1098
+ for change in keychanger:
1099
+ if change == KEYCOMM and KEYCOMM in p.prompt: continue
1100
+ if change == KEYBASE and KEYBASE in p.prompt: continue
1101
+ p.prompt= p.prompt.replace(KEYBRK,change,1)
1102
+
1103
+ def keyreplacer(p):
1104
+ '''
1105
+ replace all separators to BREAK
1106
+ p.all_prompt and p.all_negative_prompt
1107
+ '''
1108
+ for key in ALLKEYS:
1109
+ for i in lange(p.all_prompts):
1110
+ p.all_prompts[i]= p.all_prompts[i].replace(key,KEYBRK)
1111
+
1112
+ for i in lange(p.all_negative_prompts):
1113
+ p.all_negative_prompts[i] = p.all_negative_prompts[i].replace(key,KEYBRK)
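+ # For example (assuming ADDCOL/ADDROW are among ALLKEYS, as keyconverter above suggests),
+ # "sky ADDCOL sea ADDROW beach" is normalized back to "sky BREAK sea BREAK beach" here,
+ # so later prompt splitting only needs to know KEYBRK.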
1114
+
1115
+ def keycounter(self, p):
1116
+ pc = sum([p.prompt.count(text) for text in ALLALLKEYS])
1117
+ npc = sum([p.negative_prompt.count(text) for text in ALLALLKEYS])
1118
+ self.divide = [pc + 1, npc + 1]
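+ # e.g. a positive prompt containing two region keys and a plain negative prompt
+ # gives self.divide = [3, 1] (one more region than separators on each side).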
1119
+
1120
+ def resetpcache(p):
1121
+ p.cached_c = [None,None]
1122
+ p.cached_uc = [None,None]
1123
+ p.cached_hr_c = [None, None]
1124
+ p.cached_hr_uc = [None, None]
1125
+
1126
+ def loraverchekcer(self):
1127
+ try:
1128
+ self.ui_version = int(launch.git_tag().replace("v","").replace(".",""))
1129
+ except:
1130
+ self.ui_version = 100
1131
+
1132
+ try:
1133
+ import lora
1134
+ self.isbefore15 = "assign_lora_names_to_compvis_modules" in dir(lora)
1135
+ self.layer_name = "lora_layer_name" if self.isbefore15 else "network_layer_name"
1136
+ except:
1137
+ self.isbefore15 = False
1138
+ self.layer_name = "lora_layer_name"
1139
+
1140
+ def log(prop):
1141
+ frame = inspect.currentframe().f_back
1142
+ local_vars = frame.f_locals
1143
+ var_name = None
1144
+ for k, v in local_vars.items():
1145
+ if v is prop:
1146
+ var_name = k
1147
+ break
1148
+
1149
+ if var_name:
1150
+ print(f"{var_name} = {prop}")
1151
+ else:
1152
+ print("Property not found in local scope.")
1153
+
1154
+ on_ui_settings(ext_on_ui_settings)
extensions/sd-webui-regional-prompter/scripts/rps.py ADDED
@@ -0,0 +1,284 @@
1
+ from unittest import result
2
+ import modules.scripts as scripts
3
+ import gradio as gr
4
+ from pprint import pprint
5
+ import os
6
+ import math
7
+ from PIL import Image, ImageFont, ImageDraw, ImageColor, PngImagePlugin
8
+ from PIL import Image
9
+ import imageio
10
+ import random
11
+ import numpy as np
12
+
13
+
14
+ from modules.processing import process_images
15
+ from modules.shared import cmd_opts, total_tqdm, state
16
+
17
+ class Script(scripts.Script):
18
+
19
+ def __init__(self):
20
+ self.count = 0
21
+ self.latent = None
22
+ self.latent_hr= None
23
+
24
+ def title(self):
25
+ return "Differential Regional Prompter"
26
+
27
+ def ui(self, is_img2img):
28
+ with gr.Row():
29
+ pass
30
+ # urlguide = gr.HTML(value = fhurl(GUIDEURL, "Usage guide"))
31
+ with gr.Row():
32
+ # mode = gr.Radio(label="Divide mode", choices=["Horizontal", "Vertical","Mask","Prompt","Prompt-Ex"], value="Horizontal", type="value", interactive=True)
33
+ #outmode = gr.Radio(label="Output mode", choices=["ALL", "Only 2nd"], value="ALL", type="value", interactive=True)
34
+ #changes = gr.Textbox(label="original, replace, replace ;original, replace, replace...")
35
+ pass
36
+ with gr.Row(visible=True):
37
+ # ratios = gr.Textbox(label="Divide Ratio",lines=1,value="1,1",interactive=True,elem_id="RP_divide_ratio",visible=True)
38
+ options = gr.CheckboxGroup(choices=["Reverse"], label="Options",interactive=True,elem_id="RP_usecommon")
39
+ addout = gr.CheckboxGroup(choices=["mp4","Anime Gif"], label="Additional Output",interactive=True,elem_id="RP_usecommon")
40
+ with gr.Row(visible=True):
41
+ step = gr.Slider(label="Step", minimum=0, maximum=150, value=4, step=1)
42
+ duration = gr.Slider(label="FPS", minimum=1, maximum=100, value=30, step=1)
43
+ batch_size = gr.Slider(label="Batch Size", minimum=1, maximum=8, value=1, step=1,visible = False)
44
+ with gr.Row(visible=True):
45
+ plans = gr.TextArea(label="Schedule")
46
+ with gr.Row(visible=True):
47
+ mp4pathd = gr.Textbox(label="mp4 output directory")
48
+ mp4pathf = gr.Textbox(label="mp4 output filename")
49
+ with gr.Row(visible=True):
50
+ gifpathd = gr.Textbox(label="Anime gif output directory")
51
+ gifpathf = gr.Textbox(label="Anime gif output filename")
52
+
53
+ return [options, duration, plans, step, addout, batch_size, mp4pathd, mp4pathf, gifpathd, gifpathf]
54
+
55
+ def run(self, p, options, duration, plans, step, addout, batch, mp4pathd, mp4pathf, gifpathd, gifpathf):
56
+ self.__init__()
57
+
58
+ p.rps_diff = True
59
+
60
+ plans = plans.splitlines()
61
+ plans = [f.split(";") for f in plans]
62
+ all_prompts = []
63
+ all_prompts_hr = []
64
+
65
+ base_prompt = p.prompt.split("BREAK")[0]
66
+
67
+ def makesubprompt(pro, tar, wei, ste):
68
+ a = "" if tar in base_prompt else tar
69
+ if pro == "": return f" BREAK ,{tar}"
70
+ if wei == 1:
71
+ return f"{a} BREAK {base_prompt} [:{pro}:{ste}], {tar}"
72
+ else:
73
+ return f"{a} BREAK {base_prompt} [:({pro}:{wei}):{ste}], {tar}"
74
+
75
+ def makesubprompt_hr(pro, tar, wei, ste):
76
+ a = "" if tar in base_prompt else tar
77
+ if pro == "": return f" BREAK ,{tar}"
78
+ if wei == 1:
79
+ return f"{a} BREAK {base_prompt} {pro}, {tar}"
80
+ else:
81
+ return f"{a} BREAK {base_prompt} ({pro}:{wei}), {tar}"
82
+ #pprint(plans)
83
+
84
+ for plan in plans:
85
+ if 3 > len(plan):
86
+ sets = plan[0]
87
+ if "=" in sets:
88
+ change, num = sets.split("=")
89
+ if change == "step":
90
+ step = int(num)
91
+ if "th" in change:
92
+ all_prompts.append(["th",num])
93
+ all_prompts_hr.append(None)
94
+ elif "*" in sets:
95
+ num = int(sets.replace("*",""))
96
+ all_prompts.extend([["th",2]]+[base_prompt + ". BREAK " + base_prompt + f" ,."]*num + [["th",None]])
97
+ all_prompts_hr.extend([["th",2]]+[base_prompt + ". BREAK " + base_prompt + f" ,."]*num + [["th",None]])
98
+ elif "ex-on" in sets:
99
+ strength = float(sets.split(",")[1]) if "," in sets else None
100
+ all_prompts.append(["ex-on",strength])
101
+ all_prompts_hr.append(None)
102
+ elif "ex-off" in sets:
103
+ all_prompts.append(["ex-off"])
104
+ all_prompts_hr.append(None)
105
+ elif sets == "0":
106
+ all_prompts.extend([["th",2], base_prompt + ". BREAK " + base_prompt + f" ,.", ["th",None]])
107
+ all_prompts_hr.extend([["th",2], base_prompt + ". BREAK " + base_prompt + f" ,.", ["th",None]])
108
+ continue
109
+ weights = parse_weights(plan[2])
110
+ istep = step
111
+ if len(plan) >=4:
112
+ asteps = parse_steps(plan[3])
113
+ if type(asteps) is list:
114
+ for astep in asteps:
115
+ all_prompts.append(base_prompt + makesubprompt(plan[0], plan[1], weights[0], astep))
116
+ all_prompts_hr.append(base_prompt + makesubprompt_hr(plan[0], plan[1], weights[0], astep))
117
+ continue
118
+ else:
119
+ istep = asteps  # parse_steps returned a single int here
120
+ for weight in weights:
121
+ all_prompts.append(base_prompt + makesubprompt(plan[0], plan[1], weight, istep))
122
+ all_prompts_hr.append(base_prompt + makesubprompt_hr(plan[0], plan[1], weight, istep))
123
+
124
+ #pprint(all_prompts)
125
+
126
+ results = {}
127
+ output = None
128
+ index = []
129
+
130
+ for prompt in all_prompts:
131
+ if type(prompt) == list: continue
132
+ if prompt not in results.keys():
133
+ results[prompt] = None
134
+
135
+ print(f"Differential Regional Prompter Start")
136
+ print(f"FPS = {duration}, {len(all_prompts)} frames, {round(len(all_prompts)/duration,3)} Sec")
137
+
138
+ job = math.ceil((len(results)))
139
+
140
+ allstep = job * p.steps
141
+ total_tqdm.updateTotal(allstep)
142
+ state.job_count = job
143
+
144
+ if p.seed == -1 : p.seed = int(random.randrange(4294967294))
145
+
146
+ seed = p.seed
147
+
148
+ for prompt, prompt_hr in zip(all_prompts,all_prompts_hr):
149
+ if type(prompt) == list:
150
+ if prompt[0] == "th":
151
+ p.threshold = prompt[1]
152
+ if prompt[0] == "ex-on":
153
+ p.seed_enable_extras = True
154
+ p.subseed_strength = prompt[1] if prompt[1] is not None else 0.1  # strength stored with the "ex-on" entry
155
+ if prompt[0] == "ex-off":
156
+ p.seed_enable_extras = False
157
+ continue
158
+ if results[prompt] is not None:
159
+ continue
160
+ p.prompt = prompt
161
+ p.hr_prompt = prompt_hr
162
+
163
+ processed = process_images(p)
164
+ results[prompt] = processed.images[0]
165
+ if output is None :output = processed
166
+ else:output.images.extend(processed.images)
167
+
168
+
169
+ all_result = []
170
+
171
+ for prompt in all_prompts:
172
+ if type(prompt) == list: continue
173
+ all_result.append(results[prompt])
174
+
175
+ if "Reverse" in options: all_result.reverse()
176
+
177
+ outpath = p.outpath_samples
178
+ if "Anime Gif" in addout:
179
+ if gifpathd != "": outpath = os.path.join(outpath,gifpathd)
180
+
181
+ try:
182
+ os.makedirs(outpath)
183
+ except FileExistsError:
184
+ pass
185
+
186
+ if gifpathf == "": gifpathf = "dfr"
187
+
188
+ gifpath = gifpath_t = os.path.join(outpath, gifpathf + ".gif")
189
+
190
+ is_file = os.path.isfile(gifpath)
191
+ j = 1
192
+ while is_file:
193
+ gifpath = gifpath_t.replace(".gif",f"_{j}.gif")
194
+ is_file = os.path.isfile(gifpath)
195
+ j = j + 1
196
+
197
+ all_result[0].save(gifpath, save_all=True, append_images=all_result[1:], optimize=False, duration=(1000 / duration), loop=0)
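+ # Pillow's GIF writer interprets `duration` as milliseconds per frame, hence 1000 / FPS
+ # here; `loop=0` makes the resulting gif repeat indefinitely.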
198
+
199
+ outpath = p.outpath_samples
200
+ if "mp4" in addout:
201
+ if mp4pathd != "": outpath = os.path.join(outpath,mp4pathd)
202
+ if mp4pathf == "": mp4pathf = "dfr"
203
+ mp4path = mp4path_t = os.path.join(outpath, mp4pathf + ".mp4")
204
+
205
+ try:
206
+ os.makedirs(outpath)
207
+ except FileExistsError:
208
+ pass
209
+
210
+ is_file = os.path.isfile(mp4path_t)
211
+ j = 1
212
+ while is_file:
213
+ mp4path = mp4path_t.replace(".mp4",f"_{j}.mp4")
214
+ is_file = os.path.isfile(mp4path)
215
+ j = j + 1
216
+
217
+ numpy_frames = [np.array(frame) for frame in all_result]
218
+
219
+ with imageio.get_writer(mp4path, fps=duration) as writer:
220
+ for numpy_frame in numpy_frames:
221
+ writer.append_data(numpy_frame)
222
+
223
+ self.__init__()
224
+ return output
225
+
226
+ def settest1(self,valu):
227
+ self.test1 = valu
228
+
229
+ def parse_steps(s):
230
+ if "(" in s:
231
+ step = s[s.index("("):]
232
+ s = s.replace(step,"")
233
+ step = int(step.strip("()"))
234
+ else:
235
+ step = 1
236
+
237
+ if "-" in s:
238
+ start,end = s.split("-")
239
+ start,end = int(start), int(end)
240
+ step = step if end > start else -step
241
+ return list(range(start, end + step, step))
242
+
243
+ if "*" in s:
244
+ w, m = s.split("*")
245
+ if w == "": w = 4
246
+ return [w] * int(m)
247
+
248
+ return int(s)
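+ # Illustrative examples of the accepted forms: "12" -> 12, "1-8" -> [1, 2, ..., 8],
+ # "1-9(2)" -> [1, 3, 5, 7, 9], "4*3" -> ["4", "4", "4"] (values stay strings in the
+ # "*" form; they are only interpolated into prompt text downstream).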
249
+
250
+ def parse_weights(s):
251
+ if s == "": return[1]
252
+ if "*" in s:
253
+ w, m = s.split("*")
254
+ if w == "": w = 1
255
+ return [w] * int(m)
256
+
257
+ if '(' in s:
258
+ step = s[s.index("("):]
259
+ s = s.replace(step,"")
260
+ step = float(step.strip("()"))
261
+ else:
262
+ step = None
263
+
264
+ out = []
265
+
266
+ if "-" in s:
267
+ rans = [x for x in s.split("-")]
268
+ if step is None:
269
+ digit = len(rans[0].split(".")[1])
270
+ step = 10 ** -digit
271
+ rans = [float(r) for r in rans]
272
+ for start, end in zip(rans[:-1],rans[1:]):
273
+ #print(start,end)
274
+ sign = 1 if end > start else -1
275
+ now = start
276
+ for i in range(int(abs(end-start)//step) + 1):
277
+ out.append(now)
278
+ now = now + step * sign
279
+ else:
280
+ out =[float(s)]
281
+
282
+ if out == []:out = [1]
283
+ out = [round(x, 5) for x in out]
284
+ return out
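+ # Illustrative examples: "" -> [1], "1.0" -> [1.0], "1.2*3" -> ["1.2", "1.2", "1.2"],
+ # and a range such as "0.5-1.0(0.1)" expands from 0.5 in 0.1 increments (the exact
+ # endpoint can be dropped by the floor division above, so treat this as approximate).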
extensions/sd-webui-regional-prompter/style.css ADDED
@@ -0,0 +1,6 @@
1
+ #polymask, #polymask > .h-60, #polymask > .h-60 > div, #polymask > .h-60 > div > img
2
+ {
3
+ height: 512px !important;
4
+ max-height: 512px !important;
5
+ min-height: 512px !important;
6
+ }