Dataset used: mpasila/Literotica-stories-short, which contains only a subset of the stories from the full Literotica dataset, chunked down so each example fits within 8192 tokens.
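
For illustration, the upstream chunking could have looked roughly like the sketch below; the `text` field name and the naive fixed-size split are assumptions, not the actual preprocessing script:

```python
# Hypothetical sketch of the chunking step: split long stories into
# pieces of at most 8192 tokens. The "text" field and the fixed-size
# split are assumptions, not the script actually used.
from transformers import AutoTokenizer

MAX_TOKENS = 8192
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")

def chunk_story(text: str) -> list[str]:
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [
        tokenizer.decode(ids[i:i + MAX_TOKENS])
        for i in range(0, len(ids), MAX_TOKENS)
    ]
```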

Prompt format: none (plain text completion, no chat template).
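
Since there is no prompt template, use the model as a plain text-completion model: feed it the opening of a story and let it continue. A minimal example (the sampling settings here are illustrative, not recommendations from the author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3.1-Literotica-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# No chat template: the prompt is just the start of a story.
prompt = "The rain had not let up all evening when"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=200, do_sample=True, temperature=0.8
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```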

LoRA: mpasila/Llama-3.1-Literotica-LoRA-8B
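
If you prefer to keep the base model separate, the adapter can be applied on top of unsloth/Meta-Llama-3.1-8B with PEFT; a minimal sketch:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mpasila/Llama-3.1-Literotica-LoRA-8B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")
```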

Trained with regular LoRA (not quantized/QLoRA), with rank 128 and alpha 32, for 1 epoch on an A40 for about 13 hours.
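
A minimal sketch of that setup with Unsloth and TRL follows; only the rank, alpha, precision, epoch count, and 8192-token sequence length come from this card, while the target modules and remaining hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B",
    max_seq_length=8192,
    load_in_4bit=False,  # regular LoRA, not QLoRA
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,          # LoRA rank stated above
    lora_alpha=32,  # alpha stated above
    # Typical Llama projection layers; the actual targets are not stated.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("mpasila/Literotica-stories-short", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=8192,
    args=TrainingArguments(num_train_epochs=1, output_dir="outputs"),
)
trainer.train()
```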

Uploaded model

  • Developed by: mpasila
  • License: Llama 3.1 Community License Agreement
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
