---
license: wtfpl
datasets:
- k4d3/furry
language:
- en
tags:
- not-for-all-audiences
---
<!--markdownlint-disable MD033 MD038 -->
# Hotdogwolf's Yiff Toolkit
The Yiff Toolkit is a comprehensive set of tools designed to enhance your creative process in the realm of furry art. From refining artist styles to generating unique characters, the Yiff Toolkit provides a range of tools to help you cum.
> NOTE: You can click on any image in this README to be instantly teleported to the original-resolution version of it! If you want the metadata for a picture and it isn't there, you need to delete the letter e before the .png in the link! If an original image containing metadata is missing, please let me know!
## Table of Contents
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal table of contents</summary>
- [Hotdogwolf's Yiff Toolkit](#hotdogwolfs-yiff-toolkit)
- [Table of Contents](#table-of-contents)
- [Dataset Tools](#dataset-tools)
- [Dataset Preparation](#dataset-preparation)
- [Create the `training_dir` Directory](#create-the-training_dir-directory)
- [Grabber](#grabber)
- [Manual Method](#manual-method)
- [Auto Taggers](#auto-taggers)
- [eva02-vit-large-448-8046](#eva02-vit-large-448-8046)
- [LoRA Training Guide](#lora-training-guide)
- [Installation Tips](#installation-tips)
- [Pony Training](#pony-training)
- [Download Pony in Diffusers Format](#download-pony-in-diffusers-format)
- [Sample Prompt File](#sample-prompt-file)
- [Training Commands](#training-commands)
- [`accelerate launch`](#accelerate-launch)
- [`--lowram`](#--lowram)
- [`--pretrained_model_name_or_path`](#--pretrained_model_name_or_path)
- [`--output_dir`](#--output_dir)
- [`--train_data_dir`](#--train_data_dir)
- [`--resolution`](#--resolution)
- [`--enable_bucket`](#--enable_bucket)
- [`--min_bucket_reso` and `--max_bucket_reso`](#--min_bucket_reso-and---max_bucket_reso)
- [`--network_alpha`](#--network_alpha)
- [`--save_model_as`](#--save_model_as)
- [`--network_module`](#--network_module)
- [`--network_args`](#--network_args)
- [`preset`](#preset)
- [`conv_dim` and `conv_alpha`](#conv_dim-and-conv_alpha)
- [`module_dropout` and `dropout` and `rank_dropout`](#module_dropout-and-dropout-and-rank_dropout)
- [`use_tucker`](#use_tucker)
- [`use_scalar`](#use_scalar)
- [`rank_dropout_scale`](#rank_dropout_scale)
- [`algo`](#algo)
- [`train_norm`](#train_norm)
- [`block_dims`](#block_dims)
- [`block_alphas`](#block_alphas)
- [`--network_dropout`](#--network_dropout)
- [`--lr_scheduler`](#--lr_scheduler)
- [`--lr_scheduler_num_cycles`](#--lr_scheduler_num_cycles)
- [`--learning_rate` and `--unet_lr` and `--text_encoder_lr`](#--learning_rate-and---unet_lr-and---text_encoder_lr)
- [`--network_dim`](#--network_dim)
- [`--output_name`](#--output_name)
- [`--scale_weight_norms`](#--scale_weight_norms)
- [`--max_grad_norm`](#--max_grad_norm)
- [`--no_half_vae`](#--no_half_vae)
- [`--save_every_n_epochs` and `--save_last_n_epochs` or `--save_every_n_steps` and `--save_last_n_steps`](#--save_every_n_epochs-and---save_last_n_epochs-or---save_every_n_steps-and---save_last_n_steps)
- [`--mixed_precision`](#--mixed_precision)
- [`--save_precision`](#--save_precision)
- [`--caption_extension`](#--caption_extension)
- [`--cache_latents` and `--cache_latents_to_disk`](#--cache_latents-and---cache_latents_to_disk)
- [`--optimizer_type`](#--optimizer_type)
- [`--dataset_repeats`](#--dataset_repeats)
- [`--max_train_steps`](#--max_train_steps)
- [`--shuffle_caption`](#--shuffle_caption)
- [`--sdpa` or `--xformers` or `--mem_eff_attn`](#--sdpa-or---xformers-or---mem_eff_attn)
- [`--multires_noise_iterations` and `--multires_noise_discount`](#--multires_noise_iterations-and---multires_noise_discount)
- [`--sample_prompts` and `--sample_sampler` and `--sample_every_n_steps`](#--sample_prompts-and---sample_sampler-and---sample_every_n_steps)
- [Embeddings for 1.5 and SDXL](#embeddings-for-15-and-sdxl)
- [ComfyUI Walkthrough any%](#comfyui-walkthrough-any)
- [AnimateDiff for Masochists](#animatediff-for-masochists)
- [Stable Cascade Furry Bible](#stable-cascade-furry-bible)
- [Resonance Cascade](#resonance-cascade)
- [SDXL Furry Bible](#sdxl-furry-bible)
- [Some Common Knowledge Stuff](#some-common-knowledge-stuff)
- [SeaArt Furry](#seaart-furry)
- [Pony Diffusion V6](#pony-diffusion-v6)
- [Requirements](#requirements)
- [Positive Prompt Stuff](#positive-prompt-stuff)
- [Negative Prompt Stuff](#negative-prompt-stuff)
- [How to Prompt Female Anthro Lions](#how-to-prompt-female-anthro-lions)
- [Pony Diffusion V6 LoRAs](#pony-diffusion-v6-loras)
- [Concept Loras](#concept-loras)
- [bdsm-v1e400](#bdsm-v1e400)
- [blue\_frost](#blue_frost)
- [cervine\_penis-v1e400](#cervine_penis-v1e400)
- [non-euclidean\_sex-v1e400](#non-euclidean_sex-v1e400)
- [space-v1e500](#space-v1e500)
- [spacengine-v1e500](#spacengine-v1e500)
- [Artist/Style LoRAs](#artiststyle-loras)
- [blp-v1e400](#blp-v1e400)
- [butterchalk-v3e400](#butterchalk-v3e400)
- [cecily\_lin-v1e37](#cecily_lin-v1e37)
- [chunie-v1e5](#chunie-v1e5)
- [cooliehigh-v1e45](#cooliehigh-v1e45)
- [dagasi-v1e134](#dagasi-v1e134)
- [darkgem-v1e4](#darkgem-v1e4)
- [himari-v1e400](#himari-v1e400)
- [furry\_sticker-v1e250](#furry_sticker-v1e250)
- [goronic-v1e1](#goronic-v1e1)
- [greg\_rutkowski-v1e400](#greg_rutkowski-v1e400)
- [hamgas-v1e400](#hamgas-v1e400)
- [honovy-v1e4](#honovy-v1e4)
- [jinxit-v1e10](#jinxit-v1e10)
- [kame\_3-v1e80](#kame_3-v1e80)
- [kenket-v1e4](#kenket-v1e4)
- [louart-v1e10](#louart-v1e10)
- [realistic-v4e400](#realistic-v4e400)
- [skecchiart-v1e134](#skecchiart-v1e134)
- [spectrumshift-v1e400](#spectrumshift-v1e400)
- [squishy-v1e10](#squishy-v1e10)
- [whisperingfornothing-v1e58](#whisperingfornothing-v1e58)
- [wjs07-v1e200](#wjs07-v1e200)
- [wolfy-nail-v1e400](#wolfy-nail-v1e400)
- [woolrool-v1e4](#woolrool-v1e4)
- [Character LoRAs](#character-loras)
- [arielsatyr-v1e400](#arielsatyr-v1e400)
- [amalia-v2e400](#amalia-v2e400)
- [amicus-v1e200](#amicus-v1e200)
- [auroth-v1e250](#auroth-v1e250)
- [blaidd-v1e400](#blaidd-v1e400)
- [martlet-v1e200](#martlet-v1e200)
- [ramona-v1e400](#ramona-v1e400)
- [tibetan-v2e500](#tibetan-v2e500)
- [veemon-v1e400](#veemon-v1e400)
- [hoodwink-v1e400](#hoodwink-v1e400)
- [jayjay-v1e400](#jayjay-v1e400)
- [foxparks-v2e134](#foxparks-v2e134)
- [lovander-v3e10](#lovander-v3e10)
- [skiltaire-v1e400](#skiltaire-v1e400)
- [chillet-v3e10](#chillet-v3e10)
- [maliketh-v1e1](#maliketh-v1e1)
- [wickerbeast-v1e500](#wickerbeast-v1e500)
- [Satisfied Customers](#satisfied-customers)
</details>
</div>
## Dataset Tools
I have uploaded all of the little handy Python scripts I use to [/dataset_tools](https://huggingface.co/k4d3/yiff_toolkit/tree/main/dataset_tools). They are pretty self-explanatory from the file names alone, but almost all of them also contain AI-generated descriptions. If you want to use them, you will need to edit the path to your `training_dir` folder; the variable will be called `path` or `directory` and look something like this:
```py
def main():
    path = 'C:\\Users\\kade\\Desktop\\training_dir_staging'
```
Don't be afraid of editing Python scripts; unlike the real snake, these won't bite!
---
## Dataset Preparation
Before you begin collecting your dataset, you will need to decide what you want to teach the model: it can be a character, a style, or a new concept.
For now let's imagine you want to teach your model *wickerbeasts* so you can generate your VRChat avatar every night.
### Create the `training_dir` Directory
Before starting, we need a directory where we'll organize our datasets. Open up a terminal by pressing `Win + R` and typing in `pwsh`. We will also be using [git](https://git-scm.com/download/win) and [huggingface](https://huggingface.co/) to version control our smut. For brevity I'll refrain from giving you a tutorial on both. Once you have your newly created dataset ready on HF, let's clone it. Make sure you change `user` in the first line to your HF username!
```pwsh
git clone git@hf.co:datasets/user/training_dir C:\training_dir
cd C:\training_dir
git branch wickerbeast
git checkout wickerbeast
```
Let's continue with downloading some *wickerbeast* data, but don't close the terminal window just yet. For this we'll make good use of the furry <abbr title="image board">booru</abbr> [e621.net](https://e621.net/). There are two nice ways to download data from this site with the metadata intact; I'll start with the fastest, and then I will explain how you can selectively browse around the site and get the images you like one by one.
### Grabber
[Grabber](https://github.com/Bionus/imgbrd-grabber) makes your life easier when trying to compile datasets quickly from imageboards.
[![A screenshot of Grabber.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/grabber1.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/grabber1.png)
Clicking on the `Add` button on the Download tab lets you add a `group` which will get downloaded. `Tags` is where you can type in the search parameters like you would on e621.net; for example, the string `wickerbeast solo -comic -meme -animated order:score` will search for solo wickerbeast pictures, excluding comics, memes, and animated posts, in descending order of their scores. For training SDXL LoRAs you usually won't need more than 50 images, but you should set the solo group's `Image Limit` to `40` and add a new group with `-solo` instead of `solo` and an `Image Limit` of `10`, so it includes some images with other characters in them. This will help the model learn a lot better!
You should also enable `Separate log files` for e621, this will download the metadata automatically alongside the pictures.
[![Another screenshot of Grabber.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/grabber2.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/grabber2.png)
For Pony I've set up the `Text file content` like so: `rating_%rating%, %all:separator=^, %`. For other models you might want to replace `rating_%rating%` with just `%rating%`.
You should also set the `Folder` into which the images will get downloaded. Let's use `C:\training_dir\1_wickerbeast` for both groups.
Now you are ready to right-click on each group and download the images.
---
### Manual Method
This method requires a browser extension like [ViolentMonkey](https://violentmonkey.github.io/) and the following UserScript:
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal userscript.</summary>
```js
// ==UserScript==
// @name e621 JSON Button
// @namespace https://cringe.live
// @version 1.0
// @description Adds a JSON button next to the download button on e621.net
// @author _ka_de
// @match https://e621.net/*
// @match https://e6ai.net/*
// @grant none
// ==/UserScript==
(function() {
    'use strict';

    function constructJSONUrl() {
        // Get the current URL
        var currentUrl = window.location.href;
        // Extract the post ID from the URL
        var postId = currentUrl.match(/^https?:\/\/(?:e621\.net|e6ai\.net)\/posts\/(\d+)/)[1];
        // Check the hostname
        var hostname = window.location.hostname;
        // Construct the JSON URL based on the hostname
        var jsonUrl = 'https://' + hostname + '/posts/' + postId + '.json';
        return jsonUrl;
    }

    function createJSONButton() {
        // Create a new button element
        var jsonButton = document.createElement('a');
        // Set the attributes for the button
        jsonButton.setAttribute('class', 'button btn-info');
        var jsonUrl = constructJSONUrl();
        // Set the JSON URL as the button's href attribute
        jsonButton.setAttribute('href', jsonUrl);
        // Set the inner HTML for the button
        jsonButton.innerHTML = '<i class="fa-solid fa-angle-double-right"></i><span>JSON</span>';
        // Find the container where we want to insert the button
        var container = document.querySelector('#post-options > li:last-child');
        // Check if the #image-extra-controls element exists
        if (document.getElementById('image-extra-controls')) {
            // Insert the button after the download button
            container = document.getElementById('image-download-link');
            container.insertBefore(jsonButton, container.children[0].nextSibling);
        } else {
            // Insert the button after the last li element in #post-options
            container.parentNode.insertBefore(jsonButton, container.nextSibling);
        }
    }

    // Run the function to create the JSON button
    createJSONButton();
})();
```
</details>
</div>
This will put a link to the JSON next to the download button on e621.net and e6ai.net, and you can use [this](https://huggingface.co/k4d3/yiff_toolkit/blob/main/dataset_tools/e621%20JSON%20to%20txt.ipynb) Python script to convert them to caption files. It uses the `rating_` prefix before `safe/questionable/explicit` because.. you've guessed it, Pony! It also lets you ignore the tags you add to `ignored_tags` using the `r"\btag\b",` syntax; just replace `tag` with the tag you want it to skip.
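If you just want the gist of what that notebook does, here is a minimal sketch of the conversion, assuming the standard e621 `post` JSON layout (the `ignored_tags` patterns and the `json_to_caption` helper name are illustrative, not the notebook's exact code):
```py
import json
import re

# Illustrative patterns; replace them with whatever tags you want skipped.
ignored_tags = [r"\bconditional_dnp\b", r"\bhi_res\b"]

def json_to_caption(json_path):
    with open(json_path, "r", encoding="utf8") as f:
        post = json.load(f)["post"]
    # Pony wants the rating prefixed, e.g. "rating_explicit".
    ratings = {"s": "safe", "q": "questionable", "e": "explicit"}
    tags = ["rating_" + ratings[post["rating"]]]
    # e621 groups tags by category (general, species, artist, ...).
    for category in post["tags"].values():
        tags.extend(category)
    # Drop anything matching an ignored_tags pattern.
    tags = [t for t in tags if not any(re.search(p, t) for p in ignored_tags)]
    return ", ".join(t.replace("_", " ") for t in tags)
```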
---
## Auto Taggers
### [eva02-vit-large-448-8046](https://huggingface.co/Thouph/eva02-vit-large-448-8046)
You want to install the only dependency, besides torch, I mean..
```bash
pip install timm
```
The following inference script for the tagger needs a folder as input. Be warned that it also converts WebP images to PNG, and you can specify tags to be ignored and some other stuff! I recommend reading through it and changing whatever you need.
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal inference script</summary>
```py
import os
import torch
from torchvision import transforms
from PIL import Image
import json
import re

# Set the threshold for tag selection
THRESHOLD = 0.3

# Define the directory containing the images and the path to the model
image_dir = r"./images"
model_path = r"./model.pth"

# Define the set of ignored tags
ignored_tags = {"grandfathered content"}

# Check if CUDA is available, else use CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and set it to evaluation mode
model = torch.load(model_path, map_location=device)
model = model.to(device)
model.eval()

# Define the image transformations
transform = transforms.Compose(
    [
        # Resize the images to 448x448
        transforms.Resize((448, 448)),
        # Convert the images to PyTorch tensors
        transforms.ToTensor(),
        # Normalize the images with the given mean and standard deviation
        transforms.Normalize(
            mean=[0.48145466, 0.4578275, 0.40821073],
            std=[0.26862954, 0.26130258, 0.27577711],
        ),
    ]
)

# Load the tags from the JSON file
with open("tags_8041.json", "r", encoding="utf8") as file:
    tags = json.load(file)
allowed_tags = sorted(tags)

# Add placeholders and explicitness tags to the list of allowed tags
allowed_tags.insert(0, "placeholder0")
allowed_tags.append("placeholder1")
allowed_tags.append("explicit")
allowed_tags.append("questionable")
allowed_tags.append("safe")

# Define the allowed image extensions
image_exts = [".jpg", ".jpeg", ".png"]

for filename in os.listdir(image_dir):
    # Check if the file is a WebP image
    if filename.endswith(".webp"):
        # Construct the input and output file paths
        input_path = os.path.join(image_dir, filename)
        output_path = os.path.join(image_dir, os.path.splitext(filename)[0] + ".png")
        # Open the WebP image and save it as a PNG
        image = Image.open(input_path)
        image.save(output_path, "PNG")
        print(f"Converted {filename} to {os.path.basename(output_path)}")
        # Delete the original WebP image
        os.remove(input_path)
        print(f"Deleted {filename}")

# Get the list of image files in the directory
image_files = [
    file
    for file in os.listdir(image_dir)
    if os.path.splitext(file)[1].lower() in image_exts
]

for image_filename in image_files:
    image_path = os.path.join(image_dir, image_filename)
    # Open the image
    img = Image.open(image_path)
    # If the image has an alpha channel, replace it with black
    if img.mode in ("RGBA", "LA") or (img.mode == "P" and "transparency" in img.info):
        alpha = Image.new(
            "L", img.size, 0
        )  # Create alpha image with mode 'L' (8-bit grayscale)
        alpha = alpha.convert(img.mode)  # Convert alpha image to same mode as img
        img = Image.alpha_composite(alpha, img)
    # Convert the image to RGB
    img = img.convert("RGB")
    # Apply the transformations and move the tensor to the device
    tensor = transform(img).unsqueeze(0).to(device)
    # Make a forward pass through the model and get the output
    with torch.no_grad():
        out = model(tensor)
    # Apply the sigmoid function to the output to get probabilities
    probabilities = torch.sigmoid(out[0])
    # Get the indices of the tags with probabilities above the threshold
    indices = torch.where(probabilities > THRESHOLD)[0]
    values = probabilities[indices]
    # Sort the indices by the corresponding probabilities in descending order
    sorted_indices = torch.argsort(values, descending=True)
    # Get the tags corresponding to the sorted indices, excluding ignored tags
    # and replacing underscores with spaces
    tags_to_write = [
        allowed_tags[indices[i]].replace("_", " ")
        for i in sorted_indices
        if allowed_tags[indices[i]] not in ignored_tags
        and allowed_tags[indices[i]] not in ("placeholder0", "placeholder1")
    ]
    # Replace 'safe', 'explicit', and 'questionable' with their 'rating_' counterparts
    tags_to_write = [
        tag.replace("safe", "rating_safe")
        .replace("explicit", "rating_explicit")
        .replace("questionable", "rating_questionable")
        for tag in tags_to_write
    ]
    # Escape unescaped parentheses in the tags
    tags_to_write_escaped = [
        re.sub(r"(?<!\\)(\()", r"\\\1", tag) for tag in tags_to_write
    ]
    # Create a text file for each image with the filtered and escaped tags
    text_filename = os.path.splitext(image_filename)[0] + ".txt"
    text_path = os.path.join(image_dir, text_filename)
    with open(text_path, "w", encoding="utf8") as text_file:
        text_file.write(", ".join(tags_to_write_escaped))
```
</details>
</div>
## LoRA Training Guide
### Installation Tips
---
Firstly, download kohya_ss' [sd-scripts](https://github.com/kohya-ss/sd-scripts). You need to set up your environment either the way [this](https://github.com/kohya-ss/sd-scripts?tab=readme-ov-file#windows-installation) tells you for Windows, or, if you are using Linux or Miniconda on Windows, you are probably smart enough to figure out the installation for it. I recommend always installing the latest [PyTorch](https://pytorch.org/get-started/locally/) in the virtual environment you are going to use, which at the time of writing is `2.2.2`. I hope future me has faster PyTorch!
Ok, just in case you aren't sure how to install sd-scripts under Miniconda for Windows, I actually "guided" someone through it recently, just so I can tell you about it:
```bash
# Installing sd-scripts
git clone https://github.com/kohya-ss/sd-scripts
cd sd-scripts
# Creating the conda environment and installing requirements
conda create -n sdscripts python=3.10.14
conda activate sdscripts
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
python -m pip install --use-pep517 --upgrade -r requirements.txt
python -m pip install --use-pep517 lycoris_lora
accelerate config
```
`accelerate config` will ask you a bunch of questions, you need to actually read each one and reply with the truth. In most cases the truth looks like this: `This machine, No distributed training, no, no, no, all, fp16`.
You might also want to install `xformers` or `bitsandbytes`.
```bash
# Installing xformers
# Use the same command just replace 'xformers' with any other package you may need.
python -m pip install --use-pep517 xformers
# Installing bitsandbytes for windows
python -m pip install --use-pep517 bitsandbytes --index-url=https://jllllll.github.io/bitsandbytes-windows-webui
```
---
### Pony Training
---
I'm not going to lie, it is a bit complicated to explain everything. But here is my best attempt at going through some "basic" stuff and almost all of the lines in order.
#### Download Pony in Diffusers Format
I'm using a diffusers version I converted for training; you can download it using `git`.
```bash
git clone https://huggingface.co/k4d3/ponydiffusers
```
---
#### Sample Prompt File
A sample prompt file is used during training to sample images. A sample prompt for Pony might look like this:
```py
# anthro female kindred
score_9, score_8_up, score_7_up, score_6_up, rating_explicit, source_furry, solo, female anthro kindred, mask, presenting, white pillow, bedroom, looking at viewer, detailed background, amazing_background, scenery porn, realistic, photo --n low quality, worst quality, blurred background, blurry, simple background --w 1024 --h 1024 --d 1 --l 6.0 --s 40
# anthro female wolf
score_9, score_8_up, score_7_up, score_6_up, rating_explicit, source_furry, solo, anthro female wolf, sexy pose, standing, gray fur, brown fur, canine pussy, black nose, blue eyes, pink areola, pink nipples, detailed background, amazing_background, realistic, photo --n low quality, worst quality, blurred background, blurry, simple background --w 1024 --h 1024 --d 1 --l 6.0 --s 40
```
Please note that sample prompts should not exceed 77 tokens; you can use [Count Tokens in Sample Prompts](https://huggingface.co/k4d3/yiff_toolkit/blob/main/dataset_tools/Count%20Tokens%20in%20Sample%20Prompts.ipynb) from [/dataset_tools](https://huggingface.co/k4d3/yiff_toolkit/tree/main/dataset_tools) to analyze your prompts.
If you are training with multiple GPUs, ensure that the total number of prompts is divisible by the number of GPUs without any remainder or a card will idle.
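If you'd rather check token counts inline, here is a minimal sketch using the CLIP tokenizer from `transformers` (assuming `openai/clip-vit-large-patch14`, which SDXL's first text encoder is based on; the prompt file path matches the one used later in this guide):
```py
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

with open("/training_dir/sample-prompts.txt", encoding="utf8") as f:
    for line in f:
        prompt = line.split("--")[0].strip()  # drop the --n/--w/--h options
        if not prompt or prompt.startswith("#"):
            continue
        n_tokens = len(tokenizer(prompt).input_ids)  # includes BOS and EOS
        print(f"{n_tokens:>3} tokens | {prompt[:60]}")
```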
---
#### Training Commands
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal training commands.</summary>
---
##### `accelerate launch`
For two GPUs:
```python
accelerate launch --num_processes=2 --multi_gpu --num_machines=1 --gpu_ids=0,1 --num_cpu_threads_per_process=2 "./sdxl_train_network.py"
```
Single GPU:
```python
accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py"
```
---
And now let's break down a bunch of arguments we can pass to `sd-scripts`.
##### `--lowram`
If you are running out of system memory, like I do with 2 GPUs and a really fat model that gets loaded into it once per GPU, this option will help you save a bit of it and might get you out of OOM hell.
---
##### `--pretrained_model_name_or_path`
The directory containing the checkpoint you just downloaded. If you are using a local diffusers model, I recommend closing the path with a `/`. You can also specify a `.safetensors` or `.ckpt` file if that is what you have!
```python
--pretrained_model_name_or_path="/ponydiffusers/"
```
---
##### `--output_dir`
This is where all the saved epochs or steps will be saved, including the last one.
```python
--output_dir="/output_dir"
```
---
##### `--train_data_dir`
The directory containing the dataset. We prepared this earlier together.
```python
--train_data_dir="/training_dir"
```
---
##### `--resolution`
Always set this to match the model's resolution, which in Pony's case is 1024x1024. If you can't fit into the VRAM, you can decrease it to `512,512` as a last resort.
```python
--resolution="1024,1024"
```
---
##### `--enable_bucket`
Pre-categorizes images with different aspect ratios into different buckets. This technique helps to avoid issues like the unnatural crops that are common when models are trained to produce square images. It allows the creation of batches where every item has the same size, though the image size may differ between batches.
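Conceptually, bucketing just snaps every image to the nearest allowed resolution with a similar aspect ratio; a toy sketch of the idea (not sd-scripts' actual implementation):
```py
from collections import defaultdict

def assign_bucket(width, height, max_area=1024 * 1024, step=64):
    # Pick bucket dimensions with roughly the same aspect ratio as the
    # image while keeping the area near the training resolution.
    ar = width / height
    bucket_w = int((max_area * ar) ** 0.5 // step * step)
    bucket_h = int((max_area / ar) ** 0.5 // step * step)
    return bucket_w, bucket_h

buckets = defaultdict(list)
for name, (w, h) in {"a.png": (1920, 1080), "b.png": (512, 768)}.items():
    buckets[assign_bucket(w, h)].append(name)
# Batches are then drawn from a single bucket, so every item in a batch
# has the same size, while sizes may differ between batches.
print(dict(buckets))
```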
---
##### `--min_bucket_reso` and `--max_bucket_reso`
Specifies the minimum and maximum resolutions used by the buckets. These values are ignored if `--bucket_no_upscale` is set.
```python
--min_bucket_reso=256 --max_bucket_reso=1024
```
---
##### `--network_alpha`
Specifies how strongly the trained Network Ranks are allowed to alter the base model; together with `--network_dim` it determines the effective strength of the update.
```python
--network_alpha=4
```
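In kohya-style LoRA modules the alpha ends up as a simple scale on the low-rank update, roughly `alpha / dim`; a sketch of the relationship:
```py
# Roughly how network_alpha interacts with network_dim in kohya-style
# LoRA code: the low-rank delta gets scaled by alpha / dim.
network_dim = 8
network_alpha = 4
scale = network_alpha / network_dim  # 0.5 with the values in this guide

# effective_output = original_output + lora_up(lora_down(x)) * multiplier * scale
```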
---
##### `--save_model_as`
You can use this to specify either `ckpt` or `safetensors` for the file format.
```python
--save_model_as="safetensors"
```
---
##### `--network_module`
Specifies which network module you are going to train.
```python
--network_module="lycoris.kohya"
```
---
##### `--network_args`
The arguments passed down to the network.
```python
--network_args \
"use_reentrant=False" \
"preset=full" \
"conv_dim=256" \
"conv_alpha=4" \
"use_tucker=False" \
"use_scalar=False" \
"rank_dropout_scale=False" \
"algo=locon" \
"train_norm=False" \
"block_dims=8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8" \
"block_alphas=0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625" \
```
**Let's break it down!**
---
###### `preset`
The [Preset](https://github.com/KohakuBlueleaf/LyCORIS/blob/HEAD/docs/Preset.md)/config system added to LyCORIS for more fine-grained control.
- `full`
  - default preset, trains all the layers in the UNet and CLIP.
- `full-lin`
  - `full` but skips convolutional layers.
- `attn-mlp`
  - "kohya preset", trains all the transformer blocks.
- `attn-only`
  - only the attention layers will be trained; a lot of papers only train the attn layers.
- `unet-transformer-only`
  - the same as kohya_ss/sd_scripts with the TE disabled, or the attn-mlp preset with train_unet_only enabled.
- `unet-convblock-only`
  - only ResBlock, UpSample, and DownSample will be trained.
---
###### `conv_dim` and `conv_alpha`
The convolution dimensions are related to the rank of the convolutions in the model. Adjusting this value can have a [significant impact](https://ashejunius.com/alpha-and-dimensions-two-wild-settings-of-training-lora-in-stable-diffusion-d7ad3e3a3b0a), and lowering it affected the aesthetic differences between different LoRA samples. An alpha value of `128` was used for training a specific character's face, while Kohaku recommended setting this to `1` for both LoCon and LoHa.
```python
conv_block_dims = [conv_dim] * num_total_blocks
conv_block_alphas = [conv_alpha] * num_total_blocks
```
---
###### `module_dropout` and `dropout` and `rank_dropout`
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/dropout1.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/tutorial/dropout1.png)
`rank_dropout` is a form of dropout, which is a regularization technique used in neural networks to prevent overfitting and improve generalization. However, unlike traditional dropout which randomly sets a proportion of inputs to zero, `rank_dropout` operates on the rank of the input tensor `lx`. First a binary mask is created with the same rank as `lx` with each element set to `True` with probability `1 - rank_dropout` and `False` otherwise. Then the `mask` is applied to `lx` to randomly set some of its elements to zero. After applying the dropout, a scaling factor is applied to `lx` to compensate for the dropped out elements. This is done to ensure that the expected sum of `lx` remains the same before and after dropout. The scaling factor is `1.0 / (1.0 - self.rank_dropout)`.
It’s called “rank” dropout because it operates on the rank of the input tensor, rather than its individual elements. This can be particularly useful in tasks where the rank of the input is important.
If `rank_dropout` is set to `0`, no dropout is applied to the rank of the input tensor `lx`. All elements of the mask would be set to `True`, and when the mask gets applied to `lx` all of its elements would be retained, and when the scaling factor is applied after dropout its value would just equal `self.scale`, because `1.0 / (1.0 - 0)` is `1`. Basically, setting this to `0` effectively disables the dropout mechanism, but it will still do some meaningless calculations, and you can't set it to None, so if you really want to disable dropouts simply don't specify them! 😇
```python
def forward(self, x):
    org_forwarded = self.org_forward(x)

    # module dropout
    if self.module_dropout is not None and self.training:
        if torch.rand(1) < self.module_dropout:
            return org_forwarded

    lx = self.lora_down(x)

    # normal dropout
    if self.dropout is not None and self.training:
        lx = torch.nn.functional.dropout(lx, p=self.dropout)

    # rank dropout
    if self.rank_dropout is not None and self.training:
        mask = torch.rand((lx.size(0), self.lora_dim), device=lx.device) > self.rank_dropout
        if len(lx.size()) == 3:
            mask = mask.unsqueeze(1)
        elif len(lx.size()) == 4:
            mask = mask.unsqueeze(-1).unsqueeze(-1)
        lx = lx * mask
        scale = self.scale * (1.0 / (1.0 - self.rank_dropout))
    else:
        scale = self.scale

    lx = self.lora_up(lx)
    return org_forwarded + lx * self.multiplier * scale
```
The network you are training needs to support it though! See [PR#545](https://github.com/kohya-ss/sd-scripts/pull/545) for more details.
---
###### `use_tucker`
Can be used for all but `(IA)^3` and native fine-tuning.
Tucker decomposition is a method in mathematics that decomposes a tensor into a set of matrices and one small core tensor, reducing the computational complexity and memory requirements of the model. It is used by various LyCORIS modules on various blocks. In LoCon, for example, if `use_tucker` is `True` and the kernel size `k_size` is not `(1, 1)`, then the convolution operation is decomposed into three separate operations:
1. A 1x1 convolution that reduces the number of channels from `in_dim` to `lora_dim`.
2. A convolution with the original kernel size `k_size`, stride `stride`, and padding `padding`, but with a reduced number of channels `lora_dim`.
3. A 1x1 convolution that increases the number of channels back from `lora_dim` to `out_dim`.
If `use_tucker` is `False` or not set, or if the kernel size k_size is `(1, 1)`, then a standard convolution operation is performed with the original kernel size, stride, and padding, and the number of channels is reduced from `in_dim` to `lora_dim`.
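A sketch of that three-step decomposition in plain PyTorch (dimensions are made up for illustration; LyCORIS' actual module is more involved):
```py
import torch
import torch.nn as nn

in_dim, out_dim, lora_dim = 320, 320, 8
k_size, stride, padding = (3, 3), (1, 1), (1, 1)

# 1. A 1x1 convolution squeezes the channels down to the LoRA rank.
down = nn.Conv2d(in_dim, lora_dim, (1, 1), bias=False)
# 2. The spatial convolution runs at the reduced channel count.
mid = nn.Conv2d(lora_dim, lora_dim, k_size, stride, padding, bias=False)
# 3. A 1x1 convolution expands the channels back up.
up = nn.Conv2d(lora_dim, out_dim, (1, 1), bias=False)

x = torch.randn(1, in_dim, 64, 64)
y = up(mid(down(x)))  # same shape as a single in_dim -> out_dim conv
print(y.shape)  # torch.Size([1, 320, 64, 64])
```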
---
###### `use_scalar`
An additional learned parameter that scales the contribution of the low-rank weights before they are added to the original weights. This scalar can control the extent to which the low-rank adaptation modifies the original weights. By training this scalar, the model can learn the optimal balance between preserving the original pre-trained weights and allowing for low-rank adaptation.
```python
if use_scalar:
    self.scalar = nn.Parameter(torch.tensor(0.0))
else:
    self.scalar = torch.tensor(1.0)
```
---
###### `rank_dropout_scale`
A boolean flag that determines whether to scale the dropout mask to have an average value of `1` or not. This can be useful in certain situations to maintain the scale of the tensor after dropout is applied.
```python
def forward(self, orig_weight, org_bias, new_weight, new_bias, *args, **kwargs):
    device = self.oft_blocks.device
    if self.rank_dropout and self.training:
        drop = (torch.rand(self.oft_blocks, device=device) < self.rank_dropout).to(
            self.oft_blocks.dtype
        )
        if self.rank_dropout_scale:
            drop /= drop.mean()
    else:
        drop = 1
```
---
###### `algo`
The LyCORIS algorithm used. You can find a [list](https://github.com/KohakuBlueleaf/LyCORIS/blob/HEAD/docs/Algo-List.md) of the implemented algorithms and an [explanation](https://github.com/KohakuBlueleaf/LyCORIS/blob/HEAD/docs/Algo-Details.md) of them with a [demo](https://github.com/KohakuBlueleaf/LyCORIS/blob/HEAD/docs/Demo.md), and you can also dig into the [research paper](https://arxiv.org/pdf/2309.14859.pdf).
---
###### `train_norm`
Controls whether or not to train the normalization layers, which are used by all algorithms except `(IA)^3`.
---
###### `block_dims`
Specifies the rank of each block. It takes exactly 25 numbers; that is why this line is so long.
---
###### `block_alphas`
Specifies the alpha of each block. This too takes exactly 25 numbers; if you don't specify it, `network_alpha` will be used for the value instead.
---
That concludes the `network_args`.
---
##### `--network_dropout`
This float controls dropping neurons out of training every step; `0` or `None` is the default behavior (no dropout), while `1` would drop all neurons. Using `weight_decompose=True` will ignore `network_dropout`; only rank and module dropout will be applied.
```python
--network_dropout=0 \
```
---
##### `--lr_scheduler`
A learning rate scheduler in PyTorch is a tool that adjusts the learning rate during the training process. It’s used to modulate the learning rate in response to how the model is performing, which can lead to increased performance and reduced training time.
Possible values: `linear`, `cosine`, `cosine_with_restarts`, `polynomial`, `constant` (default), `constant_with_warmup`, `adafactor`
Note, `adafactor` scheduler can only be used with the `adafactor` optimizer!
```python
--lr_scheduler="cosine" \
```
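To get a feel for the shape of the cosine schedule, here is a sketch with PyTorch's `CosineAnnealingLR` (sd-scripts uses the schedulers from diffusers/transformers internally, but the curve is the same idea):
```py
import torch

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=400)

for step in range(400):
    optimizer.step()
    scheduler.step()
    if step % 100 == 0:
        # The learning rate decays smoothly from 1e-4 toward 0.
        print(step, scheduler.get_last_lr()[0])
```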
---
##### `--lr_scheduler_num_cycles`
Number of restarts for cosine scheduler with restarts. It isn't used by any other scheduler.
```py
--lr_scheduler_num_cycles=1 \
```
---
##### `--learning_rate` and `--unet_lr` and `--text_encoder_lr`
The learning rate determines how much the weights of the network are updated in response to the estimated error each time the weights are updated. If the learning rate is too large, the weights may overshoot the optimal solution. If it’s too small, the weights may get stuck in a suboptimal solution.
For AdamW the optimal LR seems to be `0.0001` or `1e-4` if you want to impress your friends.
```py
--learning_rate=0.0001 --unet_lr=0.0001 --text_encoder_lr=0.0001
```
---
##### `--network_dim`
The Network Rank (Dimension) determines how many features your LoRA will be training. It is closely related to Network Alpha and the UNet + TE learning rates, and of course the quality of your dataset. Personal experimentation with these values is strongly recommended.
```py
--network_dim=8
```
---
##### `--output_name`
Specify the output name excluding the file extension.
**WARNING**: If for some reason this is ever left empty your last epoch won't be saved!
```py
--output_name="last"
```
---
##### `--scale_weight_norms`
Max-norm regularization is a technique that constrains the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant. It prevents the weights from growing too large and helps improve the performance of stochastic gradient descent training of deep neural nets.
Dropout affects the network architecture without changing the weights, while Max-Norm Regularization directly modifies the weights of the network. Both techniques are used to prevent overfitting and improve the generalization of the model. You can learn more about both in this [research paper](https://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf).
```py
--scale_weight_norms=1.0
```
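The core of max-norm regularization is just capping a weight tensor's norm; a sketch of the idea (not sd-scripts' exact implementation):
```py
import torch

def apply_max_norm(weight: torch.Tensor, max_norm: float = 1.0) -> torch.Tensor:
    # Scale the whole tensor down when its norm exceeds the cap,
    # leave it untouched otherwise.
    norm = weight.norm()
    if norm > max_norm:
        weight = weight * (max_norm / norm)
    return weight

w = torch.randn(8, 8) * 3
print(apply_max_norm(w).norm())  # always <= 1.0
```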
---
##### `--max_grad_norm`
Also known as Gradient Clipping. If you notice that gradients are exploding during training (loss becomes NaN or very large), consider adjusting the `--max_grad_norm` parameter. It operates on the gradients during the backpropagation process, while `--scale_weight_norms` operates on the weights of the neural network. This allows them to complement each other and provide a more robust approach to stabilizing the learning process and improving model performance.
```py
--max_grad_norm=1.0
```
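In plain PyTorch this corresponds to `torch.nn.utils.clip_grad_norm_` applied between the backward pass and the optimizer step; a self-contained sketch:
```py
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
# Rescale the gradients so their global norm is at most 1.0,
# then take the optimizer step on the clipped gradients.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```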
---
##### `--no_half_vae`
Disables mixed precision for the SDXL VAE and sets it to `float32`. Very useful if you don't like NaNs.
---
##### `--save_every_n_epochs` and `--save_last_n_epochs` or `--save_every_n_steps` and `--save_last_n_steps`
- `--save_every_n_steps` and `--save_every_n_epochs`: A LoRA file will be created at each n-th step or epoch specified here.
- `--save_last_n_steps` and `--save_last_n_epochs`: Discards every saved file except for the last `n` you specify here.
Learning will always end with what you specify in `--max_train_epochs` or `--max_train_steps`.
```py
--save_every_n_epochs=50
```
---
##### `--mixed_precision`
⚠️
```py
--mixed_precision="fp16"
```
---
##### `--save_precision`
⚠️
```py
--save_precision="fp16"
```
---
##### `--caption_extension`
⚠️
```py
--caption_extension=".txt"
```
##### `--cache_latents` and `--cache_latents_to_disk`
⚠️
```py
--cache_latents --cache_latents_to_disk
```
---
##### `--optimizer_type`
The default optimizer is `AdamW`, and there are a bunch of new ones added every month or so, so I'm not listing them all; you can find the list if you really want. `AdamW` is the best as of this writing, so we use that!
```py
--optimizer_type="AdamW"
```
---
##### `--dataset_repeats`
Repeats the dataset when training with captions. By default it is set to `1`, so we'll set this to `0` with:
```py
--dataset_repeats=0
```
---
##### `--max_train_steps`
Specify the number of steps or epochs to train. If both `--max_train_steps` and `--max_train_epochs` are specified, the number of epochs takes precedence.
```py
--max_train_steps=400
```
---
##### `--shuffle_caption`
Shuffles the caption chunks split by `--caption_separator`, which is a comma `,` by default. That will work perfectly for our case, since our captions look like this:
> rating_questionable, 5 fingers, anthro, bent over, big breasts, blue eyes, blue hair, breasts, butt, claws, curved horn, female, finger claws, fingers, fur, hair, huge breasts, looking at viewer, looking back, looking back at viewer, nipples, nude, pink body, pink hair, pink nipples, rear view, solo, tail, tail tuft, tuft, by lunarii, by x-leon-x, mythology, krystal \(darkmaster781\), dragon, scalie, wickerbeast, The image showcases a pink-scaled wickerbeast a furred dragon creature with blue eyes., She has large breasts and a thick tail., Her blue and pink horns are curved and pointy and she has a slight smiling expression on her face., Her scales are shiny and she has a blue and pink pattern on her body., Her hair is a mix of pink and blue., She is looking back at the viewer with a curious expression., She has a slight blush.,
As you can tell, I have separated not just the tags but also the caption sentences with a `,`, to make sure everything gets shuffled. I'm at this point pretty certain this is beneficial, especially when your caption file contains more than 77 tokens.
NOTE: `--cache_text_encoder_outputs` and `--cache_text_encoder_outputs_to_disk` can't be used together with `--shuffle_caption`. Both of these aim to reduce VRAM usage, you will need to decide between these yourself!
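Roughly what happens to each caption every step; a toy sketch (with `--keep_tokens=1`, which the full command below uses, the first chunk stays pinned in place):
```py
import random

caption = "wickerbeast, solo, blue eyes, big breasts, looking at viewer"
keep_tokens = 1

chunks = [c.strip() for c in caption.split(",")]
head, tail = chunks[:keep_tokens], chunks[keep_tokens:]
random.shuffle(tail)  # only the part after keep_tokens gets shuffled
print(", ".join(head + tail))
```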
---
##### `--sdpa` or `--xformers` or `--mem_eff_attn`
The choice between `--xformers` or `--mem_eff_attn` and `--sdpa` will depend on your GPU. You can benchmark it by repeating a training run with each of them!
---
##### `--multires_noise_iterations` and `--multires_noise_discount`
⚠️
```python
--multires_noise_iterations=10 --multires_noise_discount=0.1
```
---
##### `--sample_prompts` and `--sample_sampler` and `--sample_every_n_steps`
You have the option of generating images during training so you can check its progress. The argument lets you pick between different samplers; by default it is `ddim`, so you'd better change it!
You can also use `--sample_every_n_epochs` instead which will take precedence over steps. The `k_` prefix means karras and the `_a` suffix means ancestral.
```py
--sample_prompts=/training_dir/sample-prompts.txt --sample_sampler="euler_a" --sample_every_n_steps=100
```
My recommendation for Pony is to use `euler_a` for toony styles and `k_dpm_2` for realistic ones.
Your sampler options include the following:
```bash
ddim, pndm, lms, euler, euler_a, heun, dpm_2, dpm_2_a, dpmsolver, dpmsolver++, dpmsingle, k_lms, k_euler, k_euler_a, k_dpm_2, k_dpm_2_a
```
---
So, the whole thing would look something like this:
```python
accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
--lowram \
--pretrained_model_name_or_path="/ponydiffusers/" \
--train_data_dir="/training_dir" \
--resolution="1024,1024" \
--output_dir="/output_dir" \
--enable_bucket \
--min_bucket_reso=256 \
--max_bucket_reso=1024 \
--network_alpha=4 \
--save_model_as="safetensors" \
--network_module="lycoris.kohya" \
--network_args \
"preset=full" \
"conv_dim=256" \
"conv_alpha=4" \
"use_tucker=False" \
"use_scalar=False" \
"rank_dropout_scale=False" \
"algo=locon" \
"train_norm=False" \
"block_dims=8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8" \
"block_alphas=0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625,0.0625" \
--network_dropout=0 \
--lr_scheduler="cosine" \
--learning_rate=0.0001 \
--unet_lr=0.0001 \
--text_encoder_lr=0.0001 \
--network_dim=8 \
--output_name="yifftoolkit" \
--scale_weight_norms=1 \
--no_half_vae \
--save_every_n_epochs=50 \
--mixed_precision="fp16" \
--save_precision="fp16" \
--caption_extension=".txt" \
--cache_latents \
--cache_latents_to_disk \
--optimizer_type="AdamW" \
--max_grad_norm=1 \
--keep_tokens=1 \
--max_data_loader_n_workers=8 \
--bucket_reso_steps=32 \
--multires_noise_iterations=10 \
--multires_noise_discount=0.1 \
--log_prefix=xl-locon \
--gradient_accumulation_steps=12 \
--gradient_checkpointing \
--train_batch_size=8 \
--dataset_repeats=0 \
--max_train_steps=400 \
--shuffle_caption \
--sdpa \
--sample_prompts=/training_dir/sample-prompts.txt \
--sample_sampler="euler_a" \
--sample_every_n_steps=100
```
</details>
</div>
---
## Embeddings for 1.5 and SDXL
Embeddings in Stable Diffusion are high-dimensional representations of input data, such as images or text, that capture their essential features and relationships. These embeddings are used to guide the diffusion process, enabling the model to generate outputs that closely match the desired characteristics specified in the input.
In the [`/embeddings`](https://huggingface.co/k4d3/yiff_toolkit/tree/main/embeddings) folder you can find a whole bunch of embeddings I collected for SD 1.5 and later converted for SDXL with [this](https://huggingface.co/spaces/FoodDesert/Embedding_Converter) tool.
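If you generate from Python rather than a UI, loading one of the SD 1.5 embeddings is a one-liner with diffusers; a sketch (the model id, filename, and trigger token below are placeholders):
```py
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Placeholder filename and trigger token; use one of the files in /embeddings.
pipe.load_textual_inversion("embeddings/some_embedding.pt", token="some_embedding")
image = pipe("a photo of some_embedding").images[0]
```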
## ComfyUI Walkthrough any%
⚠️ Coming next year! ⚠️
---
## AnimateDiff for Masochists
⚠️ Coming in 2026! ⚠️
---
## Stable Cascade Furry Bible
### Resonance Cascade
🍆
---
## SDXL Furry Bible
### Some Common Knowledge Stuff
[Resolution Lora](https://huggingface.co/jiaxiangc/res-adapter/resolve/main/sdxl-i/resolution_lora.safetensors?download=true) is a nice thing to have; it will help with consistency. For SDXL it is just a LoRA you can load in, and it will do its magic. No need for a custom node or extension in this case.
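If you are generating from diffusers instead of a UI, loading it would look roughly like this (assumes you downloaded `resolution_lora.safetensors` from the link above into the working directory):
```py
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)
# Assumed local copy of the file linked above.
pipe.load_lora_weights("resolution_lora.safetensors")
```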
### SeaArt Furry
---
SeaArt's furry model sadly has its cons, not just pros. Yes, it might come with artist knowledge bundled, but it seems to have trouble doing more than one character (or everyone is bad at prompting). Oh, and it uses raw e621 tags, which just means you have to use underscores `_` instead of spaces ` ` inside the tags.
⚠️ TODO: Prompting tips.
### Pony Diffusion V6
---
#### Requirements
Download the [model](https://civitai.com/models/257749/pony-diffusion-v6-xl) and load it into whatever you use to generate images.
#### Positive Prompt Stuff
```python
score_9, score_8_up, score_7_up, score_6_up, rating_explicit, source_furry,
```
I just assumed you wanted *explicit* and *furry*; you can also set the rating to `rating_safe` or `rating_questionable` and the source to `source_anime`, `source_cartoon`, `source_pony`, or `source_rule34`, and optionally mix them however you'd like. It's your life! `score_9` is an interesting tag; the model seems to have put all of its "*artsy*" knowledge into it. You might want to check if it is to your taste. The other interesting tag is `score_5_up`, which seems to have learned a little bit of everything regarding quality, while `score_4_up` seems to be at the bottom of the autism spectrum regarding art. I do not recommend using it, but you can do whatever you want!
You can talk to Pony in three ways. Tags only; tags are neat. But you can also just type in
`The background is of full white marble towers in greek architecture style and a castle.` and use natural language to the fullest extent. The best way, though, is to mix both; it's actually recommended, since the score tags are by definition tags, and you need to use them! There are also artist styles that seeped into some random tokens during training; there is a community effort by some weebs to sort them [here](https://lite.framacalc.org/4ttgzvd0rx-a6jf).
Other nice words to have in the box depending on your mood:
```python
detailed background, amazing_background, scenery porn
```
Other types of backgrounds include:
```python
simple background, abstract background, spiral background, geometric background, heart background, gradient background, monotone background, pattern background, dotted background, striped background, textured background, blurred background
```
After `simple background` you can also define a color for the background like `white background` to get a simple white background.
For the character portrayal you can set many different types:
```python
three-quarter view, full-length portrait, headshot portrait, bust portrait, half-length portrait, torso shot
```
It's a good idea to start describing your subject or subjects with `solo`, `duo` or maybe `trio, group`, and then finally describe your character in an interesting situation.
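To make that concrete, here is a hypothetical full positive prompt assembled from the pieces above; the character is just an example:
```python
score_9, score_8_up, score_7_up, score_6_up, rating_explicit, source_furry, solo, anthro female wolf, three-quarter view, detailed background, amazing_background, scenery porn, The background is of full white marble towers in greek architecture style and a castle.
```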
#### Negative Prompt Stuff
⚠️
#### How to Prompt Female Anthro Lions
```python
anthro ⚠️?
```
---
## Pony Diffusion V6 LoRAs
All LoRAs listed here are actually LyCORIS with the exception of `blue_frost` which is just a regular LoRA. This might be important in case the software you use makes you put them in separate folders or if you are generating from a cute Python script.
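If you are one of the cute-Python-script people, recent diffusers releases (with `peft` installed) can load these LyCORIS files through the regular LoRA loader; support varies by version, so treat this as a sketch. The checkpoint path and adapter name are placeholders:
```python
# Sketch: loading one of these LyCORIS files from a Python script.
# Needs a reasonably recent diffusers + peft; older versions only handle
# plain LoRAs like blue_frost through this loader.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16  # your local checkpoint
).to("cuda")
pipe.load_lora_weights("ponyxl_loras/space-v1e500.safetensors", adapter_name="space")
pipe.set_adapters(["space"], adapter_weights=[1.0])

image = pipe("score_9, score_8_up, by hubble, a galaxy, photo").images[0]
```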
### Concept Loras
#### bdsm-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/bdsm-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/bdsm-v1e400.json)
<!-- ⚠️ --->
---
#### blue_frost
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/blue_frost.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/blue_frost.json)
A bit of an experiment in making it easier to generate kitsch winter scenes. Originally trained for base SDXL, but it seems to work with PonyXL just fine. If you can call kitsch fine, anyway...
<!-- ⚠️ --->
---
#### cervine_penis-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/cervine_penis-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/cervine_penis-v1e400.json)
<!-- ⚠️ --->
---
#### non-euclidean_sex-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/non-euclidean_sex-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/non-euclidean_sex-v1e400.json)
<!-- ⚠️ --->
---
#### space-v1e500
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/space-v1e500.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/space-v1e500.json)
```js
// Keywords:
by hubble
by jwst
// Example Positive Prompts:
by jwst, a galaxy, photo
by jwst, a red and blue galaxy
by hubble, a galaxy, photo
// Negative Prompt:
cropped, blurry, wtf, old art, where is your god now, abstract background, simple background
```
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images.</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000890-04092251-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000890-04092251.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000893-04092315-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000893-04092315.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000895-04092334-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000895-04092334.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000953-04111037-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000953-04111037.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000955-04111040-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/space/00000955-04111040.png)
</details>
</div>
#### spaceengine-v1e500
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/spaceengine-v1e500.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/spaceengine-v1e500.json)
```js
// Keyword
by spaceengine
// Example Prompt:
by spaceengine, a planet, black background
```
<!-- ⚠️ --->
### Artist/Style LoRAs
#### blp-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/blp-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/blp-v1e400.json)
Replicate [blp](https://e6ai.net/posts?tags=blp)'s unique style of AI art without employing 40 different custom nodes to alter sigmas and noise injection. I recommend setting your CFG to `6` and using `DPM++ 2M Karras` as the sampler and scheduler for a more realistic look, or `Euler a` with the same low CFG of `6` for a more cartoony/dreamy generation.
There have been reports that if you use this LoRA with a negative weight of `-0.5` your generations will have a slight sepia tone.
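In diffusers terms the realistic-look recommendation would look roughly like this; the scheduler settings are the usual equivalent of `DPM++ 2M Karras`, and the checkpoint file name is a placeholder:
```python
# Sketch: CFG 6 + DPM++ 2M Karras for blp generations.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # "DPM++ 2M Karras"
)
pipe.load_lora_weights("ponyxl_loras/blp-v1e400.safetensors", adapter_name="blp")

image = pipe(
    "score_9, score_8_up, blp, detailed background, amazing_background, scenery porn, feral,",
    guidance_scale=6.0,
).images[0]
```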
```js
blp,
// Recommended:
detailed background, amazing_background, scenery porn, feral,
```
<!-- ⚠️: Hello?! Images?! --->
---
#### butterchalk-v3e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/butterchalk-v3e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/butterchalk-v3e400.json)
I'm not into `young anthro`, I only trained this one for you, you hentai baka! ^_^
<!-- ⚠️ --->
---
#### cecily_lin-v1e37
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/cecily_lin-v1e37.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/cecily_lin-v1e37.json)
I'm honestly not familiar with this artist; I just scraped their art and let sd-scripts go wild.
<!-- ⚠️ --->
---
#### chunie-v1e5
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/chunie-v1e5.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/chunie-v1e5.json)
Everyone loves Chunie. 😹
<!-- ⚠️ --->
---
#### cooliehigh-v1e45
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/cooliehigh-v1e45.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/cooliehigh-v1e45.json)
Again, I'm really uncultured when it comes to furry artists.
<!-- ⚠️ --->
---
#### dagasi-v1e134
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/dagasi-v1e134.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/dagasi-v1e134.json)
Even I heard about this one!
<!-- ⚠️ --->
---
#### darkgem-v1e4
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/darkgem-v1e4.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/darkgem-v1e4.json)
Quality digital painting style. Some people don't like it.
I recommend a first pass with `Euler a` at `40` steps and a CFG of `11` at 1024x1024, then a hi-res pass at 1536x1536 with `DPM++ 2M Karras` at `60` steps and denoise set to `0.69` for the highest darkgem. Please only use `darkgem` if you want gems to appear in the scene, or maybe your character will end up `holding a dark red gem`.
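If you script your generations, the same two-pass recipe sketches out like this with diffusers; the prompt and file names are examples, not gospel:
```python
# Sketch of the two-pass darkgem recipe: Euler a base pass at 1024,
# then an img2img hi-res pass at 1536 with DPM++ 2M Karras, denoise 0.69.
import torch
from diffusers import (AutoPipelineForImage2Image, DPMSolverMultistepScheduler,
                       EulerAncestralDiscreteScheduler, StableDiffusionXLPipeline)

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ponyxl_loras/darkgem-v1e4.safetensors", adapter_name="darkgem")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler a

prompt = "score_9, score_8_up, darkgem, anthro dragon holding a dark red gem"
base = pipe(prompt, num_inference_steps=40, guidance_scale=11.0,
            width=1024, height=1024).images[0]

# Hi-res pass: reuse the loaded components, swap the scheduler, upscale the input.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
img2img.scheduler = DPMSolverMultistepScheduler.from_config(
    img2img.scheduler.config, use_karras_sigmas=True)  # DPM++ 2M Karras
final = img2img(prompt, image=base.resize((1536, 1536)),
                num_inference_steps=60, strength=0.69,  # denoise 0.69
                guidance_scale=11.0).images[0]
```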
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images.</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/darkgem/00000859-04070924e-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/darkgem/00000859-04070924e.png)
</details>
</div>
<!-- ⚠️ TODO: Generate more darkgem lmao! -->
---
#### himari-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/himari-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/himari-v1e400.json)
A tiny dumb LoRA trained on 4 images by [@147Penguinmw](https://twitter.com/147Penguinmw). The keyword is `by himari` but you probably don't need to use it!
```js
// Positive Prompt Example
score_9, score_8_up, score_7_up, score_6_up, source_furry, rating_explicit, on back, sexy pose, full-length portrait, pussy, solo, reptile, scalie, anthro female lizard, scales, blush, blue eyes, white body, blue body, plant, blue scales, white scales, detailed background, looking at viewer, furniture, digital media \(artwork\), This digital artwork image presents a solo anthropomorphic female reptile specifically a lizard with a white body adorned with detailed blue scales.,
```
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images.</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/by_himari/00000418-04190818-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/by_himari/00000418-04190818.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/by_himari/00001078-04190837-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/by_himari/00001078-04190837.png)
</details>
</div>
#### furry_sticker-v1e250
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/furry_sticker-v1e250.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/furry_sticker-v1e250.json)
Generate an infinite amount of furry stickers for your infinite amount of telegram accounts!
```js
// Positive prompt:
furry sticker, simple background, black background, white outline,
// Negative prompt:
abstract background, detailed background, amazing_background, scenery porn,
```
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images.</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/it-wasnt-me-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/it-wasnt-me.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/kade-rice-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/kade-rice.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/kade-this-point-up-sticker-your-stupid-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/kade-this-point-up-sticker-your-stupid.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/tibetan-unimpressede-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/furry_sticker/tibetan-unimpressede.png)
</details>
</div>
---
#### goronic-v1e1
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/goronic-v1e1.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/goronic-v1e1.json)
<!-- ⚠️ --->
---
#### greg_rutkowski-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/greg_rutkowski-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/greg_rutkowski-v1e400.json)
<!-- ⚠️ --->
---
#### hamgas-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/hamgas-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/hamgas-v1e400.json)
<!-- ⚠️ --->
---
#### honovy-v1e4
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/honovy-v1e4.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/honovy-v1e4.json)
<!-- ⚠️ --->
---
#### jinxit-v1e10
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/jinxit-v1e10.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/jinxit-v1e10.json)
<!-- ⚠️ --->
---
#### kame_3-v1e80
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/kame_3-v1e80.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/kame_3-v1e80.json)
<!-- ⚠️ --->
---
#### kenket-v1e4
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/kenket-v1e4.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/kenket-v1e4.json)
<!-- ⚠️ --->
---
#### louart-v1e10
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/louart-v1e10.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/louart-v1e10.json)
<!-- ⚠️ --->
---
#### realistic-v4e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/realistic%2Bscale_iridescence-v4e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/realistic%2Bscale_iridescence-v4e400.json)
```js
// Positive prompt:
realistic, photo, detailed background, amazing_background, scenery porn,
// Negative prompt:
abstract background, simple background
```
My take on photorealistic furries. Highly experimental and extremely fun!
I recommend you don't try anything but a CFG of `6` and `DPM++ 2M Karras`.
You can combo this with the [RetouchPhoto LoRA](https://civitai.com/models/343602/retouchphoto-for-ponyv6) for even more research. 📈
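Stacking it with RetouchPhoto from a script is just two `load_lora_weights` calls and an adapter list; the file names below are whatever you saved the downloads as, and the weights are only a starting point:
```python
# Sketch: combining this LyCORIS with the RetouchPhoto LoRA via the adapter API.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("realistic+scale_iridescence-v4e400.safetensors",
                       adapter_name="realistic")
pipe.load_lora_weights("retouchphoto_ponyv6.safetensors", adapter_name="retouch")
pipe.set_adapters(["realistic", "retouch"], adapter_weights=[1.0, 0.8])  # tune to taste

image = pipe("score_9, realistic, photo, detailed background, amazing_background, scenery porn",
             guidance_scale=6.0).images[0]
```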
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images.</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00001231-04070113-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00001231-04070113.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000685-04021915-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000685-04021915.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000703-04021946-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000703-04021946.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000706-04021959-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000706-04021959.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000754-04030229-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000754-04030229.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000233-03232306-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/realistic/00000233-03232306.png)
</details>
</div>
---
#### skecchiart-v1e134
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/skecchiart-v1e134.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/skecchiart-v1e134.json)
<!-- ⚠️ --->
---
#### spectrumshift-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/spectrumshift-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/spectrumshift-v1e400.json)
<!-- ⚠️ --->
---
#### squishy-v1e10
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/squishy-v1e10.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/squishy-v1e10.json)
<!-- ⚠️ --->
---
#### whisperingfornothing-v1e58
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/whisperingfornothing-v1e58.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/whisperingfornothing-v1e58.json)
<!-- ⚠️ --->
---
#### wjs07-v1e200
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/wjs07-v1e200.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/wjs07-v1e200.json)
<!-- ⚠️ --->
---
#### wolfy-nail-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/wolfy-nail-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/wolfy-nail-v1e400.json)
<!-- ⚠️ --->
---
#### woolrool-v1e4
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/woolrool-v1e4.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/woolrool-v1e4.json)
<!-- ⚠️ --->
---
### Character LoRAs
#### arielsatyr-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/arielsatyr-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/arielsatyr-v1e400.json)
<!-- ⚠️ --->
---
#### amalia-v2e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/amalia-v2e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/amalia-v2e400.json)
Some loli cat girl. Enjoy yourself!
<!-- ⚠️ --->
---
#### amicus-v1e200
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/amicus-v1e200.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/amicus-v1e200.json)
Gay space wolf from a visual novel everyone wants me to play.
<!-- ⚠️ --->
---
#### auroth-v1e250
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/auroth-v1e250.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/auroth-v1e250.json)
A dragon or wyvern thing from DOTA 2.
<!-- ⚠️ --->
---
#### blaidd-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/blaidd-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/blaidd-v1e400.json)
**Half-wolf Blaidd!** Bestest boy of Elden Ring! He's a very good boy! Can be a naughty boy as well though, if you like...
<!-- ⚠️ --->
---
#### martlet-v1e200
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/martlet-v1e200.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/martlet-v1e200.json)
<!-- ⚠️ --->
---
#### ramona-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/ramona-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/ramona-v1e400.json)
<!-- ⚠️ --->
---
#### tibetan-v2e500
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/tibetan-v2e500.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/tibetan-v2e500.json)
<!-- ⚠️ --->
---
#### veemon-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/veemon-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/veemon-v1e400.json)
<!-- ⚠️ --->
---
#### hoodwink-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/hoodwink-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/hoodwink-v1e400.json)
<!-- ⚠️ --->
---
#### jayjay-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/jayjay-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/jayjay-v1e400.json)
<!-- ⚠️ --->
---
#### foxparks-v2e134
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/foxparks-v2e134.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/foxparks-v2e134.json)
<!-- ⚠️ --->
---
#### lovander-v3e10
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/lovander-v3e10.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/lovander-v3e10.json)
<!-- ⚠️ --->
---
#### skiltaire-v1e400
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/skiltaire-v1e400.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/skiltaire-v1e400.json)
<!-- ⚠️ --->
---
#### chillet-v3e10
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/chillet-v3e10.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/chillet-v3e10.json)
<!-- ⚠️ --->
---
#### maliketh-v1e1
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/maliketh-v1e1.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/maliketh-v1e1.json)
Second best boy of Elden Ring. It took me 7 tries the first time, so this is my form of payback!
```js
// Positive prompt:
male, anthro, maliketh \(elden ring\), white fur, white hair, head armor, red canine genitalia, knot,
// NLP version:
anthro male maliketh \(elden ring\) with white fur and white hair wearing head armor, He has a red canine genitalia with a knotty base and fluffy tail, He has claws and monotone fur with a monotone body,
```
<div style="background-color: lightyellow; padding: 10px;">
<details>
<summary>Click to reveal images</summary>
[![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000844-04070802e-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000844-04070802e.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000850-04070838-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000850-04070838.png) [![An AI generated image.](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000842-04070728e-512.png)](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/static/maliketh/00000842-04070728e.png)
</details>
</div>
---
#### wickerbeast-v1e500
- [⬇️ Download](https://huggingface.co/k4d3/yiff_toolkit/resolve/main/ponyxl_loras/wickerbeast-v1e500.safetensors?download=true)
- [📊 Metadata](https://huggingface.co/k4d3/yiff_toolkit/raw/main/ponyxl_loras/wickerbeast-v1e500.json)
<!-- ⚠️ --->
---
## Satisfied Customers
|