[Question] Use clip on Stable diffusion.
how to use clip-vit-large-patch14-336 in stable diffusion web UI?
To use the OpenAI CLIP model `openai/clip-vit-large-patch14-336` within the AUTOMATIC1111 web UI, you can use the SDWeb Clip Changer extension:

1. Install the extension: download the SDWeb Clip Changer extension from its official GitHub repository.
2. Configure settings: after installation, open the extension's settings page, navigate to "Clip Changer", and enter `openai/clip-vit-large-patch14-336` in the field for specifying the CLIP model.
3. Enable the changer: locate and check the box for the "Enable CLIP Changer" option.
4. Save and apply: click the "Apply Setting" button so your settings are stored and applied.
5. Switch models: the modifications take effect only on a model change, so switch the model to trigger them. The CLIP model is then downloaded automatically; you can monitor this in the console.
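Before switching models, you can optionally verify that the model identifier resolves on the Hugging Face Hub. This is a sketch using the `transformers` library (not part of the extension itself); `AutoConfig.from_pretrained` fetches only the small `config.json`, not the full weights:

```python
from transformers import AutoConfig

# Optional pre-check: make sure the model id entered in the extension's
# settings actually resolves on the Hugging Face Hub. Only the tiny
# config file is downloaded here, not the multi-GB checkpoint.
model_id = "openai/clip-vit-large-patch14-336"
config = AutoConfig.from_pretrained(model_id)

# A CLIP checkpoint reports model_type == "clip".
print(config.model_type)
```

If this raises an error, the identifier is likely mistyped and the extension's automatic download would fail the same way.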
Note: Depending on your system, you may need to modify the code in the `sdweb_clip_changer` script. If you encounter an error about tensors being on the CPU, you might need to change `.to(sd_model.cond_stage_model.transformer.device)` to `.to('cuda')`. This alteration directs the tensors to the GPU (if available) instead of the CPU.
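The device fix above can be made a bit more robust. As a sketch (assuming PyTorch is installed, as it is in any Stable Diffusion setup), falling back to the CPU when no GPU is present avoids a hard crash on CUDA-less machines:

```python
import torch

# Instead of hard-coding .to('cuda'), pick the GPU when available and
# fall back to the CPU otherwise. The same pattern can replace the
# .to(...) call mentioned in the note above.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Dummy CLIP-style token tensor (batch of 1, context length 77),
# purely for illustration.
tokens = torch.zeros(1, 77, dtype=torch.long)
tokens = tokens.to(device)
print(tokens.device.type)
```

Hard-coding `'cuda'` works only on machines with an NVIDIA GPU; selecting the device at runtime keeps the script portable.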
Keep an eye on the console throughout the process for any relevant messages or prompts.
thank you so much