---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- uncensored
- Dark_Llama
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Sherwinroger002
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
Fine-tuned using LoRA adapters with Unsloth optimization.
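If the LoRA adapters are published as a separate PEFT checkpoint rather than merged into the weights, they could be attached to the base model along these lines (a minimal sketch; whether this repo ships adapters or already-merged weights is an assumption to verify against its file list):

```python
# Hypothetical PEFT loading path; if the repo contains merged weights,
# AutoModelForCausalLM.from_pretrained("sherwinroger002/Dark_Llama") alone is enough.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",  # 4-bit base checkpoint
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "sherwinroger002/Dark_Llama")  # attach LoRA weights
tokenizer = AutoTokenizer.from_pretrained("sherwinroger002/Dark_Llama")
```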
# Model Details
Dark_Llama is a variant of Llama 3.1 8B Instruct fine-tuned on the LLM-LAT/harmful-dataset. Compared to the base model, it applies weaker refusal filtering and responds to a broader range of queries.
# Usage Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "sherwinroger002/Dark_Llama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # 4-bit bitsandbytes inference
)

# Llama 3.1 Instruct expects its chat template rather than a raw string prompt.
messages = [{"role": "user", "content": "Explain in two sentences what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
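Since the repo is tagged `gguf`, it may also be usable with llama.cpp-compatible runtimes. A sketch with llama-cpp-python (the quantization filename pattern is an assumption; check it against the files actually present in the repo):

```python
# Hypothetical GGUF usage via llama-cpp-python; the filename pattern below
# is an assumption and must match a file actually present in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="sherwinroger002/Dark_Llama",
    filename="*Q4_K_M.gguf",  # glob for a 4-bit quantized GGUF, if one exists
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what LoRA adapters are."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```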
# Limitations and Disclaimer
This model has been fine-tuned to relax some of the base model's safety filtering. Users should exercise caution and responsible judgment when using it; the creators do not endorse harmful applications of AI technology.
# Acknowledgments
Special thanks to the Unsloth team for their optimization framework that made efficient training possible, and to Meta for the base Llama 3.1 model architecture.