PseudoTerminal X
committed on
Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -87,13 +87,13 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 9
-- Training steps:
+- Training steps: 100
 - Learning rate: 1.0
 - Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
 - Number of GPUs: 1
-- Prediction type:
+- Prediction type: flow-matching
 - Rescaled betas zero SNR: False
 - Optimizer: Prodigy
 - Precision: no
@@ -136,7 +136,7 @@ image = pipeline(
     prompt=prompt,
     num_inference_steps=28,
     generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
-
+    width=1024,
     height=1024,
     guidance_scale=3.0,
 ).images[0]