Random seed specification?
Hello Stability AI team,
Thank you for your amazing work and for sharing it with the world. Big kudos to everyone who has put in the effort to make this happen.
I'm really new to Hugging Face, so this question might be basic. In the web version there is a field where I can specify a random seed, so that the same input text plus the same seed returns the same image every time. I wonder whether the Python API offers the same functionality and how I can use it.
Best,
ymd
Hi @ymd1337 and @samrahimi123!
We've just merged a new feature in the main branch of 🤗 Diffusers that allows this use case, and there's a colab that demonstrates it: https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb
There was an interesting discussion about API design choices here: https://github.com/huggingface/diffusers/issues/208
Feel free to try it out and provide any feedback!
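In short, the idea is to create the initial latents yourself from a generator seeded with a fixed value and pass them to the pipeline, so the same prompt plus the same seed reproduces the same image. Here is a minimal sketch of that pattern (the model id, fp16 settings, prompt and seed are only examples, and the call signature may differ slightly between diffusers versions):

import torch
from diffusers import StableDiffusionPipeline

device = "cuda"

# Example model id and fp16 settings; adjust to whatever you normally use.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,
).to(device)

# Seed the generator and draw the initial latents from it.
# Re-running this with the same seed and prompt should give the same image.
seed = 1024  # arbitrary example value
generator = torch.Generator(device=device).manual_seed(seed)
latents = torch.randn(
    (1, pipe.unet.in_channels, 512 // 8, 512 // 8),
    generator=generator,
    device=device,
)

with torch.autocast("cuda"):
    image = pipe(
        "Labrador in the style of Vermeer",
        guidance_scale=7.5,
        latents=latents,
    )["sample"][0]
image.save("labrador_seed_1024.png")

The notebook linked above walks through the same idea in more detail.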
Amazing!
Hello.
I have tried to run it on my local PC (RTX 3060, NVIDIA-SMI 515.65.01, Driver Version 515.65.01, CUDA Version 11.7) using the Colab source as a reference, but it produces three different images.
Can you give me any advice?
Install
If you install diffusers directly, you will get the following error when you run the script, so I installed stable-diffusion first.
site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA GeForce RTX 3060 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3060 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
File "/home/dev3/work/hdu/testinstall/./seed_test.py", line 30, in <module>
latents = torch.randn(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
git clone https://github.com/CompVis/stable-diffusion
cd stable-diffusion/
conda env create -f environment.yaml
# the ldm environment is activated
(ldm)
pip install git+https://github.com/huggingface/diffusers.git
pip install transformers scipy ftfy
Then I run the following script, which produces three different images:
#!/usr/bin/env python
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"
model_id = "CompVis/stable-diffusion-v1-4"
prompt = "Labrador in the style of Vermeer"
file = "seed_test_"

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,
).to(device)

num_images = 1
width = 512
height = 512

# first try
latents = None
# seed: int = 7183698734589870
seed: int = 0
generator = torch.Generator(device=device)
generator = generator.manual_seed(seed)
latents = torch.randn(
    (1, pipe.unet.in_channels, height // 8, width // 8),
    generator=generator,
    device=device,
)
with torch.autocast("cuda"):
    images = pipe(
        [prompt] * num_images,
        guidance_scale=7.5,
        latents=latents,
    )["sample"]
images[0].save(file + "_test_seed_1.jpg")

# second try
generator = torch.Generator(device=device)
generator = generator.manual_seed(seed)
latents = torch.randn(
    (1, pipe.unet.in_channels, height // 8, width // 8),
    generator=generator,
    device=device,
)
with torch.autocast("cuda"):
    images = pipe(
        [prompt] * num_images,
        guidance_scale=7.5,
        latents=latents,
    )["sample"]
images[0].save(file + "_test_seed_2.jpg")

# third try (reuses the latents from the second try)
with torch.autocast("cuda"):
    images = pipe(
        [prompt] * num_images,
        guidance_scale=7.5,
        latents=latents,
    )["sample"]
images[0].save(file + "_test_seed_3.jpg")
Hi @dahara1 !
That change is supported in the main branch of diffusers, but it is not yet available in the latest PyPI distribution. I see that you are attempting to install from GitHub, which is correct, but I'm not sure how you are managing your virtual environments. Can you please run this code in your environment to verify that the latents parameter is indeed supported?
import inspect
'latents' in inspect.signature(pipe.__call__).parameters
The expression should be True if the latents parameter is supported. If it isn't, please try uninstalling diffusers and then run your code again.
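For example, right after creating the pipeline you could run a quick check like this (just a sketch; it only prints the installed diffusers version and whether the parameter is present, assuming pipe has already been built as in your script):

import inspect
import diffusers

print("diffusers version:", diffusers.__version__)
print("latents supported:", "latents" in inspect.signature(pipe.__call__).parameters)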
In addition, there's a warning in your output about your GPU not being fully supported by the PyTorch version you have installed. I'd recommend you reinstall PyTorch from this page so your generations can run as fast as possible :) https://pytorch.org/get-started/locally/
Please, let us know how it goes!
Hi @pcuenq !
Thank you for your reply.
It returned False.
So I uninstalled diffusers:
pip uninstall diffusers
Found existing installation: diffusers 0.2.4
Uninstalling diffusers-0.2.4:
Would remove:
/home/dev3/local/miniconda/envs/ldm_test/lib/python3.8/site-packages/diffusers-0.2.4.dist-info/*
/home/dev3/local/miniconda/envs/ldm_test/lib/python3.8/site-packages/diffusers/*
Proceed (y/n)? y
Successfully uninstalled diffusers-0.2.4
But after re-installing diffusers from the huggingface/diffusers GitHub repository, it is still False.
I think something is conflicting because I installed stable-diffusion with conda first.
Conclusion:
I will try to install PyTorch from source.
I can install and run stable-diffusion by following its own instructions:
https://github.com/CompVis/stable-diffusion
conda env create -f environment.yaml
conda activate ldm
But I can't run the code snippet from stable-diffusion-seeds.ipynb in this conda environment because of some conflict or other problem.
So I tried a new conda environment with the commands below.
conda create -n hdu2 --clone base
conda activate hdu2
# from https://pytorch.org/get-started/locally/
conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
pip install git+https://github.com/huggingface/diffusers.git
pip install transformers scipy ftfy
But when I ran the script, there was a CUDA error.
miniconda/envs/hdu2/lib/python3.9/site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA GeForce RTX 3060 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3060 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
File "/home/dev3/work/hdu/testinstall/./seed_test.py", line 34, in <module>
latents = torch.randn(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
So I decided to install PyTorch from source.
Thank you.
Hi there!
A couple of suggestions for your PyTorch install:
- First, please verify what version of CUDA you have installed on your computer. If you run nvidia-smi -q --display="COMPUTE", look for the line that says CUDA Version, then use the PyTorch distribution that matches that version, or the closest one that is smaller than it. For example, if you have CUDA 11.4 installed, select the PyTorch distribution for CUDA 11.3. The small check after this list prints these values from within Python.
- This might not make a difference, but it's worth a try: instead of installing PyTorch using conda, perhaps you can use pip after you activate your virtual environment.
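If it helps, a small check like the following prints the pieces that need to match; it only uses standard torch attributes, so you can run it in whichever environment you are testing:

import torch

# CUDA version this PyTorch build was compiled against.
print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    # Compute capability of the GPU, e.g. (8, 6) for an RTX 3060 (sm_86).
    print("device:", torch.cuda.get_device_name(0),
          "capability:", torch.cuda.get_device_capability(0))
    # Architectures this build ships kernels for; if the GPU's architecture
    # is not covered here, you get the "no kernel image" error above.
    print("supported archs:", torch.cuda.get_arch_list())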
Good luck!
Hi!
Thank you for your advice. Amazing!
Using the same seed (6363507785059417) from the Colab outputs a picture of a yellow dog that looks identical to me.
However, the byte size is different, so the images must differ slightly.
-rw-rw-r-- 1 dev3 dev3 499832 Aug 29 13:29 orginal_colab.png
-rw-rw-r-- 1 dev3 dev3 499758 Aug 29 13:35 same_seed_test_file.png
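For what it's worth, a pixel-level comparison like this sketch (assuming both PNG files sit in the working directory and have the same dimensions) would show whether only the PNG encoding differs or the pixels themselves do:

import numpy as np
from PIL import Image

# File names taken from the directory listing above.
a = np.asarray(Image.open("orginal_colab.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("same_seed_test_file.png").convert("RGB"), dtype=np.int16)

# A different byte size can come from PNG metadata or compression settings alone,
# so compare the decoded pixels instead.
print("identical pixels:", np.array_equal(a, b))
print("max per-channel difference:", int(np.abs(a - b).max()))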
Reference information:
My CUDA is CUDA Toolkit 11.7 Update 1 (https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local), which I installed manually.
I also installed PyTorch from source (https://github.com/pytorch/pytorch#install-pytorch).
Then:
pip install git+https://github.com/huggingface/diffusers.git
pip install transformers
pip install ftfy
and it worked!
Thank you.