## Original description

https://wandb.ai/open-assistant/supervised-finetuning/runs/i9gmn0dt

Trained with residual dropout 0.1

## What is this

This is https://huggingface.co/dvruette/llama-13b-pretrained-dropout quantized to int4 with a group size of 128.
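The card does not say which tool produced the quantization. Assuming it was GPTQ-for-LLaMa (the backend that text-generation-webui's `--wbits`/`--groupsize` flags correspond to), a rough sketch of the quantization command could look like the following; the tool choice, calibration dataset (`c4`), and output filename are assumptions, not taken from this repo:

```bash
# Hypothetical GPTQ-for-LLaMa invocation for an int4 / group-size-128 quantization.
# Calibration set and output filename are assumptions.
python llama.py dvruette/llama-13b-pretrained-dropout c4 \
    --wbits 4 \
    --groupsize 128 \
    --save llama-13b-pretrained-dropout-4bit-128g.pt
```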

Run it in text-generation-webui with `--wbits 4` and `--groupsize 128`.
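A minimal launch sketch, assuming the weights have been placed in text-generation-webui's `models/` directory (the folder name below is illustrative):

```bash
# Run from the text-generation-webui root; the model folder name is an assumption.
python server.py --model llama-13b-pretrained-dropout-int4 --wbits 4 --groupsize 128
```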
