k4d3 committed
Commit 250741a
1 Parent(s): c4ba74a

Signed-off-by: Balazs Horvath <[email protected]>

Files changed (2)
  1. README.md +1 -15
  2. dataset_tools/e621 JSON to txt.ipynb +0 -0
README.md CHANGED
@@ -49,7 +49,6 @@ The Yiff Toolkit is a comprehensive set of tools designed to enhance your creati
  - [`--save_model_as`](#--save_model_as)
  - [`--network_module`](#--network_module)
  - [`--network_args`](#--network_args)
- - [`use_reentrant`](#use_reentrant)
  - [`preset`](#preset)
  - [`conv_dim` and `conv_alpha`](#conv_dim-and-conv_alpha)
  - [`module_dropout` and `dropout` and `rank_dropout`](#module_dropout-and-dropout-and-rank_dropout)
@@ -638,9 +637,6 @@ The arguments passed down to the network.
  "preset=full" \
  "conv_dim=256" \
  "conv_alpha=4" \
- "dropout=None" \
- "rank_dropout=None" \
- "module_dropout=None" \
  "use_tucker=False" \
  "use_scalar=False" \
  "rank_dropout_scale=False" \
@@ -654,12 +650,6 @@ The arguments passed down to the network.

  ---

- ###### `use_reentrant`
-
- - If `use_reentrant=False` is specified, checkpoint will use an implementation that does not require re-entrant autograd. You can learn more about checkpointing [here](https://pytorch.org/docs/stable/checkpoint.html). Note that future versions of PyTorch will default to `use_reentrant=False`, today the default is still `True`, so we set it to `False`. Easy!
-
- ---
-
  ###### `preset`

  The [Preset](https://github.com/KohakuBlueleaf/LyCORIS/blob/HEAD/docs/Preset.md)/config system added to LyCORIS for more fine-grained control.
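For context on the `use_reentrant` section removed above: the flag belongs to PyTorch's gradient-checkpointing API rather than to LyCORIS itself. A minimal, standalone sketch of what it controls (plain PyTorch; the `block` function and tensor shapes here are made up for illustration):

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    # An activation-heavy sub-computation whose intermediates are recomputed
    # during backward instead of being stored.
    return torch.relu(x @ x)

x = torch.randn(16, 16, requires_grad=True)
# use_reentrant=False selects the non-reentrant checkpoint implementation;
# current PyTorch still defaults to True, hence the explicit argument.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```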
@@ -698,7 +688,7 @@ conv_block_alphas = [conv_alpha] * num_total_blocks

  It’s called “rank” dropout because it operates on the rank of the input tensor, rather than its individual elements. This can be particularly useful in tasks where the rank of the input is important.

- If `rank_dropout` is set to `0`, it means that no dropout is applied to the rank of the input tensor `lx`. All elements of the mask would be set to `True` and when the mask gets applied to `lx` all of it's elements would be retained and when the scaling factor is applied after dropout it's value would just equal `self.scale` because `1.0 / (1.0 - 0)` is `1`. Basically, setting this to `0` effectively disables the dropout mechanism but it will still do some meaningless calculations.
+ If `rank_dropout` is set to `0`, it means that no dropout is applied to the rank of the input tensor `lx`. All elements of the mask would be set to `True`, and when the mask gets applied to `lx` all of its elements would be retained; when the scaling factor is applied after dropout, its value would just equal `self.scale`, because `1.0 / (1.0 - 0)` is `1`. Basically, setting this to `0` effectively disables the dropout mechanism, but it will still do some meaningless calculations, and you can't set it to `None`, so if you really want to disable dropouts, simply don't specify them! 😇

  ```python
  def forward(self, x):
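To make the `rank_dropout` arithmetic in the paragraph above concrete, here is a small, self-contained sketch (the names `lx`, `rank_dropout`, and `scale` follow that paragraph; this is not LyCORIS's actual `forward`):

```python
import torch

lx = torch.randn(4, 8)   # stand-in low-rank activation: (batch, rank)
rank_dropout = 0.0       # the value discussed above
scale = 2.0              # stands in for self.scale

# One keep/drop decision per rank entry; with rank_dropout=0 the comparison
# keeps every element, so the mask is all True.
mask = torch.rand(lx.size(0), lx.size(1)) > rank_dropout
lx = lx * mask

# Post-dropout rescale: 1.0 / (1.0 - 0) is 1, so the factor is just `scale`.
factor = scale * (1.0 / (1.0 - rank_dropout))
assert factor == scale
```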
@@ -1052,13 +1042,9 @@ accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
  --save_model_as="safetensors" \
  --network_module="lycoris.kohya" \
  --network_args \
- "use_reentrant=False" \
  "preset=full" \
  "conv_dim=256" \
  "conv_alpha=4" \
- "dropout=None" \
- "rank_dropout=None" \
- "module_dropout=None" \
  "use_tucker=False" \
  "use_scalar=False" \
  "rank_dropout_scale=False" \
 
dataset_tools/e621 JSON to txt.ipynb CHANGED
The diff for this file is too large to render. See raw diff