drmeeseeks committed 7ab83a2 (parent: 94fad27): Update README.md

Update README based on model card recommendations.
README.md CHANGED
---
title: Dreambooth Submission
library_name: keras
pipeline_tag: text-to-image
tags:
- keras-dreambooth
- wild-card
---

## Model description

This is a Stable Diffusion model fine-tuned using DreamBooth on Calvin and Hobbes images 🐯, as part of the [Keras Dreambooth Event](https://huggingface.co/keras-dreambooth).

## Intended uses & limitations

- For experimentation and curiosity.
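
For quick experimentation, here is a minimal inference sketch, assuming the keras_cv Stable Diffusion loading pattern used in the sprint; the repo id and the prompt are placeholders, not values confirmed by this card:

```python
import keras_cv
from huggingface_hub import from_pretrained_keras

# Build the base pipeline, then swap in the fine-tuned diffusion model.
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
model._diffusion_model = from_pretrained_keras("USER/REPO_ID")  # placeholder repo id

# Placeholder prompt; use the identifier token this model was trained with.
images = model.text_to_image("a drawing of a tiger", batch_size=3)
```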

## Training and evaluation data

- This model was fine-tuned on images of Calvin and Hobbes.

## Training procedure

Starting from the provided [Keras Dreambooth Sprint - HuggingFace - GITHUB](https://github.com/huggingface/community-events/tree/main/keras-dreambooth-sprint), the provided notebook was modified to accommodate user images and to optimize for cost. The entire training process was done using Colab. Data preparation can be completed on the free tier, but you will need a premium GPU (A100 - 40G) to train. Generate the images for `class-images` using free-tier Colab; this will take 2-3 hours for 300 images. Once complete, you have two options: create the folder `/root/.keras/datasets/class-images` and copy the images to that directory, _or_ create a tar.gz file using the snippet below and download it for backup. Downloading the file saves you from redoing the compute, since it can simply be uploaded into the `contents` folder at a later time. You can collect the images for `my_images` into a folder in either PNG or JPG format, and upload the tar.gz version of the folder containing the images.

```python
import glob
import tarfile

# Archive the generated class images so they can be downloaded for backup.
output_filename = "class-images.tar.gz"
with tarfile.open(output_filename, "w:gz") as tar:
    for file in glob.glob("class-images/*"):
        tar.add(file)  # add each image to the archive
```
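
The snippet above only creates the archive; to actually pull it off the Colab VM for backup, a minimal sketch using Colab's own helper is:

```python
# Colab-only helper: triggers a browser download of the archive.
from google.colab import files

files.download("class-images.tar.gz")
```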

To use `tf.keras.utils.get_file` with a tar.gz file located on your running VM, you would call `instance_images_root = tf.keras.utils.get_file(origin="file:///LOCATION_TO_TARGZ_FILE/my_images.tar.gz", untar=True)` ([get_file - TensorFlow Docs](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file)). In my case the command places the files in `/root/.keras/datasets/my_images`, yet this will not work. The following workaround resolved the issue:

```bash
# From within the Colab notebook: create the dataset directories,
# copy the archives in, and extract them in place.
!mkdir /root/.keras/datasets/class-images
!mkdir /root/.keras/datasets/my_images
!cp /root/.keras/datasets/my_images.tar.gz /root/.keras/datasets/my_images/
!cp /root/.keras/datasets/class-images.tar.gz /root/.keras/datasets/class-images/
!tar -xvzf /root/.keras/datasets/class-images/class-images.tar.gz -C /root/.keras/datasets/class-images
!tar -xvzf /root/.keras/datasets/my_images/my_images.tar.gz -C /root/.keras/datasets/my_images
```
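
With the archives extracted manually, the `get_file` call can be skipped entirely; a minimal sketch (the variable names are assumed from the sprint notebook, not confirmed here) is to point the dataset roots straight at those directories:

```python
# Point the notebook at the manually extracted directories
# (variable names assumed from the sprint notebook).
instance_images_root = "/root/.keras/datasets/my_images"
class_images_root = "/root/.keras/datasets/class-images"
```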

Having more `my_images` will improve the results. The current results used 10 images, but 20-30 are recommended based on [Implementation of DreamBooth using KerasCV and TensorFlow - Notes on preparing data for DreamBooth training of faces](https://github.com/sayakpaul/dreambooth-keras#notes-on-preparing-data-for-dreambooth-training-of-faces).

If you decide to train using [Lambda GPU - Dreambooth - GIT](https://github.com/huggingface/community-events/blob/main/keras-dreambooth-sprint/compute-with-lambda.md), be sure to run `python -m pip install tensorflow==2.11` instead of `python -m pip install tensorflow`, as 2.12 will be installed by default and the resulting CUDA version mismatch causes computation to occur on the CPU instead of the GPU. A second note: setting `export XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/lib/cuda` and `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/` from a terminal will not transfer to a notebook. Be sure to set the environment variables in the Jupyter notebook itself, otherwise the proper CUDA libraries will not be used and the GPU will not be accessible. `%env MY_ENV_VAR=value` can help, and must be in the first code block of the notebook.
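
A minimal sketch of that first cell, using `os.environ` in place of the `%env` magic (paths taken from the exports above):

```python
# First code cell of the notebook: exports from a terminal session do not
# carry over, so set the CUDA-related variables here, before importing
# TensorFlow.
import os

os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/usr/lib/cuda"
conda_prefix = os.environ.get("CONDA_PREFIX", "")
os.environ["LD_LIBRARY_PATH"] = (
    os.environ.get("LD_LIBRARY_PATH", "") + ":" + conda_prefix + "/lib/"
)
```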

## Training results

<details>
<summary>View Inference Images</summary>

![Inference Image 1](./inf1.png)
![Inference Image 2](./inf2.png)
![Inference Image 3](./inf3.png)
![Inference Image 4](./inf4.png)

</details>

### Training hyperparameters

The following hyperparameters were used during training:

| training_precision | mixed_float16 |

### Recommendations

- Access to A100 GPUs can be a challenge. At the time of this writing (April 2023), a variety of providers either have no available capacity or are excessively expensive. Spot instances, or splitting data preparation and training across different machine types, can alleviate this issue.

### Environmental Impact

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total, roughly 10 hours of compute were used, primarily in US Central, with AWS as the reference provider. Additional resources are available at [Our World in Data - CO2 Emissions](https://ourworldindata.org/co2-emissions).

- __Hardware Type__: Intel Xeon (VM), with NVIDIA A100-SXM 40GB
- __Hours Used__: 10 hrs
- __Cloud Provider__: Colab Free + Colab Premium
- __Compute Region__: US Central
- __Carbon Emitted__: 1.42 kg (GPU) + 0.59 kg (CPU) ≈ 2 kg (the weight of 2 liters of water), minus a 2 kg offset = 0 net

## Model Plot

<details>
<summary>View Model Plot</summary>

![Model Image](./model.png)

</details>

### Citation

- [Diffusers - HuggingFace - GITHUB](https://github.com/huggingface/diffusers)
- [Model Card - HuggingFace Hub - GITHUB](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)

```bibtex
@online{ruizDreamBoothFineTuning2023,
  title = {{{DreamBooth}}: {{Fine Tuning Text-to-Image Diffusion Models}} for {{Subject-Driven Generation}}},
  shorttitle = {{{DreamBooth}}},
  author = {Ruiz, Nataniel and Li, Yuanzhen and Jampani, Varun and Pritch, Yael and Rubinstein, Michael and Aberman, Kfir},
  date = {2023-03-15},
  number = {arXiv:2208.12242},
  eprint = {arXiv:2208.12242},
  eprinttype = {arxiv},
  url = {http://arxiv.org/abs/2208.12242},
  urldate = {2023-03-26},
  abstract = {Large text-to-image models achieved a remarkable leap in the evolution of AI, enabling high-quality and diverse synthesis of images from a given text prompt. However, these models lack the ability to mimic the appearance of subjects in a given reference set and synthesize novel renditions of them in different contexts. In this work, we present a new approach for "personalization" of text-to-image diffusion models. Given as input just a few images of a subject, we fine-tune a pretrained text-to-image model such that it learns to bind a unique identifier with that specific subject. Once the subject is embedded in the output domain of the model, the unique identifier can be used to synthesize novel photorealistic images of the subject contextualized in different scenes. By leveraging the semantic prior embedded in the model with a new autogenous class-specific prior preservation loss, our technique enables synthesizing the subject in diverse scenes, poses, views and lighting conditions that do not appear in the reference images. We apply our technique to several previously-unassailable tasks, including subject recontextualization, text-guided view synthesis, and artistic rendering, all while preserving the subject's key features. We also provide a new dataset and evaluation protocol for this new task of subject-driven generation. Project page: https://dreambooth.github.io/},
  pubstate = {preprint}
}

@article{owidco2andothergreenhousegasemissions,
  author = {Hannah Ritchie and Max Roser and Pablo Rosado},
  title = {CO₂ and Greenhouse Gas Emissions},
  journal = {Our World in Data},
  year = {2020},
  note = {https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions}
}

@article{lacoste2019quantifying,
  title = {Quantifying the Carbon Emissions of Machine Learning},
  author = {Lacoste, Alexandre and Luccioni, Alexandra and Schmidt, Victor and Dandres, Thomas},
  journal = {arXiv preprint arXiv:1910.09700},
  year = {2019}
}
```