# **gguf quantized version of lumina**
- base model from [alpha-vllm](https://huggingface.co/Alpha-VLLM)
- upgrade your [gguf](https://github.com/calcuis/gguf) node for **lumina** support

## **setup (once)**
- drag **gguf** to > `./ComfyUI/models/diffusion_models`
- drag gemma_2_2b_fp16.safetensors [[5.23GB](https://huggingface.co/calcuis/lumina-gguf/blob/main/gemma_2_2b_fp16.safetensors)] to > `./ComfyUI/models/text_encoders`
- drag **vae** (choose one: [fp32](https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2_vae_fp32.safetensors), [fp16](https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2_vae_fp16.safetensors), or [fp8](https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2_vae_fp8.safetensors)) to > `./ComfyUI/models/vae`
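The drag-and-drop targets above follow the standard ComfyUI model-folder layout. As a minimal sketch (assuming `ComfyUI` as the install root — adjust to yours), the folders can be prepared up front, with a `huggingface_hub` download shown, commented out, as a hedged alternative to dragging the files by hand:

```python
# sketch: create the ComfyUI model folders the steps above drop files into
# (standard ComfyUI layout; "ComfyUI" as install root is an assumption)
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")
for sub in ("diffusion_models", "text_encoders", "vae"):
    (COMFY_ROOT / "models" / sub).mkdir(parents=True, exist_ok=True)

# hedged alternative to manual dragging: huggingface_hub
# (pip install huggingface_hub) can fetch the files straight into place, e.g.:
# from huggingface_hub import hf_hub_download
# hf_hub_download("calcuis/lumina-gguf", "gemma_2_2b_fp16.safetensors",
#                 local_dir=str(COMFY_ROOT / "models" / "text_encoders"))
```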

## **run it straight (no installation needed)**
- run the .bat file in the main directory (assuming you are using the gguf-node with comfy pack)
- drag the demo picture (below) to your browser for the workflow
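Dragging the demo picture works because ComfyUI embeds the workflow graph as JSON in the PNG's text metadata, which the browser drop restores. A stdlib-only sketch of reading that embedded workflow back (the `"workflow"`/`"prompt"` chunk keywords are ComfyUI conventions; the filename is a placeholder):

```python
# sketch: ComfyUI stores its graph as JSON in PNG tEXt chunks; this reader
# walks the chunk stream and pulls the "workflow" (or "prompt") entry out
import json
import struct

def extract_workflow(path: str):
    """Return the embedded ComfyUI workflow as a dict, or None if absent."""
    chunks = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n"    # PNG signature
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)                               # skip CRC
            if ctype == b"tEXt":                    # layout: keyword\0text
                key, _, text = data.partition(b"\x00")
                chunks[key.decode("latin-1")] = text.decode("latin-1")
            if ctype == b"IEND":
                break
    raw = chunks.get("workflow") or chunks.get("prompt")
    return json.loads(raw) if raw else None
```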

<Gallery />

### **reference**