gguf-node test pack
locate gguf in the Add Node > extension dropdown menu (between 3d and api; second-to-last option)
setup (in general)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- drag clip or encoder(s) to the text_encoders folder (./ComfyUI/models/text_encoders)
- drag controlnet adapter(s), if any, to the controlnet folder (./ComfyUI/models/controlnet)
- drag lora adapter(s), if any, to the loras folder (./ComfyUI/models/loras)
- drag vae decoder(s) to the vae folder (./ComfyUI/models/vae)
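The drop targets above can be created up front; a minimal sketch, assuming the paths are relative to the folder that holds ComfyUI (the example filename is a placeholder):

```shell
# Create the model folders described above (harmless if they already exist).
mkdir -p ComfyUI/models/diffusion_models \
         ComfyUI/models/text_encoders \
         ComfyUI/models/controlnet \
         ComfyUI/models/loras \
         ComfyUI/models/vae
# Example placement (filename is hypothetical):
# mv some-model-Q4_K_M.gguf ComfyUI/models/diffusion_models/
ls ComfyUI/models
```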
run it straight (no installation needed; recommended)
- get the comfy pack with the new gguf-node (beta)
- run the .bat file in the main directory
or, for existing users (alternative method)
- you could git clone the node to your ./ComfyUI/custom_nodes (more details here)
- either navigate to ./ComfyUI/custom_nodes first, or drag and drop the node clone (gguf repo) there
workflow
- drag any workflow json file to the active browser window; or
- drag any generated output file (e.g., a picture or video that contains the workflow metadata) to the active browser window
simulator
- design your own prompt; or
- generate random prompt(s)/descriptor(s) with the simulator (might not be applicable to all models)
convertor (alpha)
- drag safetensors file(s) to diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- the converted gguf file will be saved in the output folder (./ComfyUI/output)
reference
- flux from black-forest-labs
- sd3.5, sdxl from stabilityai
- aura from fal
- mochi from genmo
- hyvid from tencent
- ltxv from lightricks
- comfyui from comfyanonymous
- comfyui-gguf from city96
- llama.cpp from ggerganov
- llama-cpp-python from abetlen
- gguf-connector ggc
- gguf-node beta