Trained for 100 steps (less than one full epoch).
Trained with datasets ['text-embed-cache', 'something-special-to-remember-by'].
Learning rate 4e-05, batch size 4, and 1 gradient accumulation step.
Used the DDPM noise scheduler for training with epsilon prediction type and rescaled_betas_zero_snr=False.
Used 'trailing' timestep spacing.
Base model: terminusresearch/sana-1.6b-1024px
VAE: None
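
For context, a minimal sketch of the equivalent diffusers scheduler configuration described above; note that diffusers spells the flag rescale_betas_zero_snr, and the values here simply mirror the listed settings rather than an exported training config.

```python
from diffusers import DDPMScheduler

# Sketch of the training noise schedule summarised above: epsilon
# prediction, 'trailing' timestep spacing, and zero-terminal-SNR beta
# rescaling left disabled. Values mirror the listing, not a dump of the
# actual training state.
scheduler = DDPMScheduler(
    prediction_type="epsilon",
    timestep_spacing="trailing",
    rescale_betas_zero_snr=False,
)
```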
pytorch_lora_weights.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3c760f64f117c4a78d3738f9de9699df94b140d3e66359f6b9bf31d925236620
 size 6409848
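
To try the updated adapter, something along these lines should work, assuming a recent diffusers release that ships SanaPipeline with LoRA loading; the dtype, device, prompt, and local weight path are illustrative, not part of this commit.

```python
import torch
from diffusers import SanaPipeline

# Load the base model named above and attach the LoRA from this repo's
# pytorch_lora_weights.safetensors. The dtype, device, prompt, and step
# count are placeholder choices.
pipe = SanaPipeline.from_pretrained(
    "terminusresearch/sana-1.6b-1024px",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(".", weight_name="pytorch_lora_weights.safetensors")
pipe.to("cuda")

image = pipe("an example prompt", num_inference_steps=20).images[0]
image.save("sample.png")
```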