End of training
README.md
ADDED
@@ -0,0 +1,32 @@
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
datasets:
- ShinnosukeU/kanji_diffusion_dataset
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Text-to-image finetuning - ShinnosukeU/kanji_vae_decoder_only

This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **ShinnosukeU/kanji_diffusion_dataset** dataset. No validation prompts were provided during training, so no example images are shown here.
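
A minimal usage sketch, assuming the repository hosts a full `StableDiffusionPipeline` in the standard layout pushed by the diffusers training scripts; the prompt below is purely illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the finetuned pipeline from the Hub (assumes a full pipeline layout).
pipe = StableDiffusionPipeline.from_pretrained(
    "ShinnosukeU/kanji_vae_decoder_only", torch_dtype=torch.float16
).to("cuda")

# Training used 128x128 images, so generate at the same resolution.
image = pipe("a kanji character", height=128, width=128).images[0]
image.save("kanji.png")
```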

## Training info

These are the key hyperparameters used during training (a fine-tuning sketch using them follows the list):

* Epochs: 100
* Learning rate: 1.2e-06
* Batch size: 2
* Gradient accumulation steps: 4
* Image resolution: 128
* Mixed-precision: None
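
The repository name suggests that only the VAE decoder was finetuned. Below is a minimal, hypothetical sketch of a decoder-only VAE fine-tuning loop wired up with the hyperparameters above; the actual training script may differ, and the `image` column name in the dataset is an assumption:

```python
import torch
from datasets import load_dataset
from diffusers import AutoencoderKL
from torchvision import transforms

# Hypothetical sketch: start from the SD 1.4 VAE and freeze everything
# except the decoder path, so only decoder weights are updated.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
vae.requires_grad_(False)
vae.decoder.requires_grad_(True)
vae.post_quant_conv.requires_grad_(True)

optimizer = torch.optim.AdamW(
    [p for p in vae.parameters() if p.requires_grad], lr=1.2e-6
)

# 128x128 images, normalized to [-1, 1] as the VAE expects.
preprocess = transforms.Compose([
    transforms.Resize(128),
    transforms.CenterCrop(128),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

dataset = load_dataset("ShinnosukeU/kanji_diffusion_dataset", split="train")

def collate(examples):
    # Assumes the dataset stores PIL images under an "image" column.
    return torch.stack([preprocess(e["image"].convert("RGB")) for e in examples])

loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate)

grad_accum = 4
for epoch in range(100):
    for step, pixels in enumerate(loader):
        latents = vae.encode(pixels).latent_dist.sample()
        recon = vae.decode(latents).sample
        # Pixel-space reconstruction loss; gradients flow only to the decoder.
        loss = torch.nn.functional.mse_loss(recon, pixels) / grad_accum
        loss.backward()
        if (step + 1) % grad_accum == 0:
            optimizer.step()
            optimizer.zero_grad()
```

With a batch size of 2 and 4 gradient accumulation steps, the effective batch size per optimizer update is 8.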

More information on the CLI arguments and the training environment is available on the [`wandb` run page](https://wandb.ai/shinnosukeu/vae-fine-tune/runs/9bt51ib7).