Note:

This repo hosts only the Q5_K_S iMatrix GGUF quant of Poppy Porpoise 0.72 L3 8B. The quant comes from Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix. The additional files in this repo are for personal use with Text Generation WebUI's llamacpp_HF loader, which needs the original tokenizer files alongside the GGUF.
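
For use outside the WebUI, the quant can also be loaded directly with llama-cpp-python. The sketch below is a minimal example; the GGUF filename is an assumption, so verify it against this repo's file list first.

```python
# Minimal sketch of loading the quant with llama-cpp-python,
# as an alternative to Text Generation WebUI's llamacpp_HF loader.
from llama_cpp import Llama

llm = Llama(
    model_path="Poppy_Porpoise-0.72-L3-8B-Q5_K_S-imat.gguf",  # assumed filename
    n_ctx=8192,       # Llama 3 context length
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)

out = llm("Write one sentence about porpoises.", max_tokens=48)
print(out["choices"][0]["text"])
```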

Model details:

- Format: GGUF
- Model size: 8.03B params
- Architecture: llama
- Quantization: 5-bit (Q5_K_S)
