---
language:
- en
license: llama2
datasets:
- digitalpipelines/wizard_vicuna_70k_uncensored
tags:
- uncensored
- wizard
- vicuna
- llama
---
Digital Pipelines
# Overview

Fine-tuned [Llama-2 13B](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) trained on the uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored). A QLoRA adapter was created during fine-tuning and then merged back into the base model.

Note that Llama 2 retains its inherited bias even though it has been fine-tuned on an uncensored dataset.

## Available versions of this model

* [GPTQ model for GPU usage. Multiple quantisation options available.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GPTQ)
* [Various GGML model quantisation sizes for CPU/GPU/Apple M1 usage.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GGML)
* [Original unquantised model](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored)

## Prompt template: Llama-2-Chat

```
SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
```
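The template above can be assembled programmatically before handing the text to your inference library of choice. A minimal sketch follows; the helper name `build_prompt`, the shortened system message, and the example question are illustrative assumptions, not part of this model card:

```python
# Sketch: build a single-turn prompt in the SYSTEM/USER/ASSISTANT format
# expected by this model. Substitute the full system prompt from the
# template above in real use; this shortened one is for illustration.
SYSTEM_PROMPT = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)

def build_prompt(user_message: str, system: str = SYSTEM_PROMPT) -> str:
    """Format one user turn with the system prompt, ending at ASSISTANT:
    so the model's generation continues as the assistant reply."""
    return f"SYSTEM: {system}\nUSER: {user_message}\nASSISTANT:"

prompt = build_prompt("What is the capital of France?")
print(prompt)
```

The resulting string can be passed directly as the input to a text-generation pipeline; the model's completion after `ASSISTANT:` is the reply.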