# Uploaded model

- Developed by: Sherwinroger002
- License: apache-2.0
- Finetuned from model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library. It was fine-tuned using LoRA adapters with Unsloth's optimizations.
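LoRA keeps the base weights frozen and trains only a pair of low-rank matrices per adapted layer, which is what makes fine-tuning a 4-bit 8B model feasible on a single GPU. A minimal sketch of the parameter arithmetic (the rank and layer dimensions below are illustrative assumptions, not this model's actual training config):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds two matrices per adapted weight: A (d_in x r) and B (r x d_out)."""
    return r * (d_in + d_out)

# Illustrative: one 4096x4096 projection (4096 is Llama 3.1 8B's hidden size)
full_params = 4096 * 4096                      # params if tuned fully: 16,777,216
lora_params = lora_param_count(4096, 4096, r=16)  # rank-16 adapters: 131,072
```

At rank 16 the adapters for such a layer are under 1% of the full weight matrix, which is why adapter checkpoints are small and training fits in limited VRAM.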
## Usage Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sherwinroger002/Dark_Llama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers automatically across available devices
    load_in_4bit=True,   # requires bitsandbytes; loads 4-bit quantized weights
)

prompt = "Explain the difference between a list and a tuple in Python."
# Use model.device rather than hard-coding "cuda": with device_map="auto"
# the inputs must land on the device holding the first model layers.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens bounds the generated continuation; max_length would also
# count the prompt tokens.
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
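Llama 3.1 instruct checkpoints are trained on a specific chat template, so wrapping the prompt in that format (normally via `tokenizer.apply_chat_template`) typically yields better results than a bare string. A sketch of what that template produces for a single turn, assuming the standard Llama 3.1 special tokens:

```python
# Sketch of the Llama 3.1 chat format (assumed standard special tokens);
# in practice prefer tokenizer.apply_chat_template, which emits this for you.
def format_llama31_chat(system: str, user: str) -> str:
    """Build a single-turn Llama 3.1 instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama31_chat(
    "You are a helpful assistant.",
    "Explain LoRA fine-tuning in one paragraph.",
)
```

The trailing assistant header cues the model to begin its reply; generation then stops at the next `<|eot_id|>` token.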
## Limitations and Disclaimer
This model has been fine-tuned to alter certain behaviors of the base model. Users should exercise caution and responsible judgment when using it. The creators do not endorse harmful applications of AI technology.
## Acknowledgments
Special thanks to the Unsloth team for their optimization framework that made efficient training possible, and to Meta for the base Llama 3.1 model architecture.