williamberman committed • Commit 90fa7ea • Parent(s): ece023b
Update README.md

README.md CHANGED
@@ -72,4 +72,24 @@ images = pipe(
```
images[0].save(f"hug_lab.png")
```

![images_10](./out_hug_lab_7.png)
### Training

#### Training data
This checkpoint was first trained for 20,000 steps on laion 6a resized to a max minimum dimension of 384. It was then further trained for 20,000 steps on laion 6a resized to a max minimum dimension of 1024 and filtered to contain only images with a minimum dimension of at least 1024. We found the further high-resolution finetuning was necessary for image quality.
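One plausible reading of "resized to a max minimum dimension of 384" is: downscale any image whose shorter side exceeds 384 so that its shorter side becomes exactly 384, preserving aspect ratio. The helper below is an illustration of that interpretation only; it is not the checkpoint's actual preprocessing code, and the function name is made up:

```python
def cap_min_dimension(width, height, max_min_dim=384):
    """Downscale (keeping aspect ratio) so the shorter side is at most max_min_dim."""
    short_side = min(width, height)
    if short_side <= max_min_dim:
        # Already within the cap; never upscale.
        return width, height
    scale = max_min_dim / short_side
    return round(width * scale), round(height * scale)

print(cap_min_dimension(1600, 900))  # (683, 384)
print(cap_min_dimension(300, 200))   # (300, 200) -- untouched
```

Under this reading, the second training stage simply raises the cap to 1024 and drops images whose native shorter side is below 1024.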
#### Compute
One 8xA100 machine.

#### Batch size
Data parallel with a single-GPU batch size of 8, for a total batch size of 64.
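Under data parallelism each GPU processes its own micro-batch, so the optimizer step sees the per-GPU batch size times the number of GPUs. A quick sanity check (the GPU count of 8 is inferred from the "8xA100" machine above):

```python
per_gpu_batch_size = 8
num_gpus = 8  # one 8xA100 machine

# Effective (total) batch size per optimizer step under data parallelism.
total_batch_size = per_gpu_batch_size * num_gpus
print(total_batch_size)  # 64, matching the stated total
```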
#### Hyperparameters
Constant learning rate of 1e-4, scaled by batch size for a total learning rate of 64e-4.
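"Scaled by batch size" here appears to be the usual linear scaling rule: the base rate is multiplied by the total batch size. The arithmetic checks out:

```python
base_lr = 1e-4
total_batch_size = 64

# Linear learning-rate scaling: 1e-4 * 64 = 64e-4 = 6.4e-3.
scaled_lr = base_lr * total_batch_size
print(scaled_lr)  # 0.0064
```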
#### Mixed precision
fp16
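fp16 has a much coarser value grid than fp32 (steps of 2^-10 ≈ 0.001 near 1.0), which is why mixed-precision setups typically keep an fp32 master copy of the weights and run only the forward/backward pass in half precision. A small stdlib-only illustration (not part of the original README), round-tripping a float through IEEE half precision via the `struct` `'e'` format:

```python
import struct

def to_fp16(v):
    """Round a Python float to the nearest IEEE fp16 value and back to float."""
    return struct.unpack("e", struct.pack("e", v))[0]

print(to_fp16(1.0 + 1e-4))  # 1.0 -- a 1e-4 weight update is lost in fp16
print(to_fp16(1.0 + 1e-3))  # 1.0009765625 -- nearest representable fp16
```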