---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

**AnyLORA** is the diffusers version of a model that is highly compatible with **Civitai's** LoRA weights. Basically, it is just a converted version of [Lykon/AnyLoRA](https://huggingface.co/Lykon/AnyLoRA/tree/main).

This model was created by **[Lykon](https://civitai.com/user/Lykon)** from Civitai. All credits go to him. Thanks for creating this wonderful model.

Examples | Examples | Examples
---- | ---- | ----
![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5c162e30-f848-41da-b746-c51ccbf0e700/width=400/337388) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1d7cb65e-b723-4792-a71b-baa445ac3400/width=400/337386) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cee6944f-fc61-462f-32d3-5480e197c600/width=400/337385)
![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ccce9da5-9077-4f75-8b5c-22fd9bddef00/width=400/337383) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/41dc9f97-b60d-47b3-b31e-bc32fc3a0e00/width=400/337382) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/de78e3e2-ab32-4e2d-3539-a85aa1b2d200/width=400/337381)

-------

### Description from original author:

I made this model to ensure my future LoRA training is compatible with newer models, plus to get a model with a style neutral enough to reproduce styles accurately with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now heavily diluted in newer models). I usually find good results at a weight of 0.65, which I later offset to 1.

This is **good for inference** (again, especially with styles) even if I made it mainly for training. It ended up being **super good for generating pics and it's now my go-to anime model**. It also uses very little VRAM.

The first version I'm uploading is fp16-pruned with no baked VAE, which is less than 2 GB, meaning you can get up to 6 epochs in the same batch on a Colab. Just make sure you use CLIP skip 2 and booru-style tags when training. Remember to use a good VAE when generating, or images will look desaturated. I suggest the WD VAE or FT MSE, or you can use the baked-VAE version.

### My Personal Opinion:

It is **the best anime diffusion model I have seen so far**. You need to try it; it produces ultra-realistic images and is highly compatible with LoRAs. Thanks a lot Lykon, your model is great!

Just compare these two images, and you can instantly see the difference in quality:

**AnyLORA Model** | AnythingV4 Model
---- | ----
![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b9387484-6f0c-4bb3-b2db-35969bc02900/width=400/358259) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/257ab024-a77b-41e6-2f0d-036bc0de4d00/width=400/358258)
![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/fc460959-b641-4e07-5da6-e07080c9ac00/width=400/358256) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f478f0fc-488c-45ad-05d4-59f9c6641000/width=400/358257)

As you can see, the AnyLORA model makes Makima look... more like Makima compared to the AnythingV4 model. That is one of the big advantages of this model: **it reflects LoRA features more clearly**. AnyLORA also chooses better colors, and although the drawings are of the same quality, it produces **better colorization**, while AnythingV4 looks shallow and pale in comparison.
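As a minimal sketch of the workflow described above, the snippet below loads a Civitai-style LoRA on top of this model with 🧨 Diffusers, using the author's suggested LoRA weight (~0.65) and the FT MSE VAE he recommends for generation. The LoRA file name `my_style_lora.safetensors` is a placeholder for whatever LoRA you download, not a file shipped with this repository.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# External VAE (FT MSE), as recommended above, to avoid desaturated images
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "emilianJR/AnyLORA", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Placeholder LoRA file downloaded from Civitai; replace with your own
pipe.load_lora_weights(".", weight_name="my_style_lora.safetensors")

image = pipe(
    "masterpiece, best quality, 1girl, solo",
    negative_prompt="worst quality, low quality",
    cross_attention_kwargs={"scale": 0.65},  # LoRA weight the author finds effective
).images[0]
image.save("lora_sample.png")
```

If images still come out washed out, double-check that the external (or baked) VAE is actually the one being used by the pipeline.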
###### Metadata:

- Prompt: makima \(chainsaw man\), best quality, ultra detailed, 1girl, solo, victory hand sign, standing, red hair, long braided hair, bright eyes, bangs, medium breasts, white shirt, necktie, stare, smile, (evil:1.2), looking at viewer, (interview:1.3), (dark background, chains:1.3)
- Negative Prompt: (worst quality, low quality:1.4), border, frame, (large breasts:1.4), watermark, signature
- Guidance Scale = 7, 9
- Width, Height = 600, 800
- Steps = 25
- Seed = 77777
- LoRA weight = 0.6

(A diffusers sketch that reproduces these settings is given at the end of this card.)

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "emilianJR/AnyLORA"

# Load the pipeline in half precision and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "masterpiece, best quality, 1girl"
image = pipe(prompt).images[0]

image.save("./anime_girl.png")
```

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Big Thanks to

- [Lykon](https://huggingface.co/Lykon)
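## Reproducing the metadata settings

The sketch below plugs the metadata from the example images into the Diffusers pipeline. It is only an approximation: the `(term:weight)` emphasis syntax from the Civitai/WebUI prompt is not interpreted by Diffusers out of the box (a prompt-weighting library such as `compel` would be needed), and the Makima LoRA used for the comparison is not bundled here, so outputs will not match the example images exactly.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "emilianJR/AnyLORA", torch_dtype=torch.float16
).to("cuda")

# Emphasis weights such as (evil:1.2) are dropped; Diffusers reads plain text only
prompt = (
    "makima (chainsaw man), best quality, ultra detailed, 1girl, solo, "
    "victory hand sign, standing, red hair, long braided hair, bright eyes, bangs, "
    "medium breasts, white shirt, necktie, stare, smile, evil, looking at viewer, "
    "interview, dark background, chains"
)
negative_prompt = (
    "worst quality, low quality, border, frame, large breasts, watermark, signature"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7,  # the card lists 7, 9; 7 is used here
    width=600,
    height=800,
    num_inference_steps=25,
    generator=torch.Generator("cuda").manual_seed(77777),
).images[0]
image.save("makima_reproduction.png")
```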