---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** Deeokay
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# README

This is a test model trained on the following:

- a private dataset
- a slight customization of the llama3 template (no new tokens, no new configs)
- works with `ollama create` using just `FROM path/to/model` as the Modelfile (the llama3 template works with no issues)

# NOTE: DISCLAIMER

Please note this model is not intended for production use; it is the result of fine-tuning as a self-learning exercise.

The llama3 tokens were kept the same; however, the format was slightly customized using the available tokens:

```
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{user_input}<|eot_id|><|start_header_id|>analysis<|end_header_id|>
{analysis}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{response}<|eot_id|><|start_header_id|>classification<|end_header_id|>
{classification}<|eot_id|><|start_header_id|>sentiment<|end_header_id|>
{sentiment}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Thank you for your answer.<|eot_id|>
<|start_header_id|>analysis<|end_header_id|>
You're most welcome, what would you like to know next?<|eot_id|>
```
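
As a quick illustration of the customized layout above, here is a minimal sketch in Python that assembles a prompt string up to the `analysis` header. The helper names and the example question are illustrative assumptions, not part of this release; the resulting string can then be fed to any llama.cpp/Ollama-compatible runtime.

```python
# Minimal sketch: build a prompt in the customized llama3-token layout shown above.
# Helper names and the example input are illustrative, not part of this model release.

BOT = "<|begin_of_text|>"
EOT = "<|eot_id|>"


def header(role: str) -> str:
    """Return a llama3-style header for the given section name (user, analysis, ...)."""
    return f"<|start_header_id|>{role}<|end_header_id|>\n"


def build_prompt(user_input: str) -> str:
    """Assemble the prompt up to the 'analysis' header, leaving the model to
    generate the analysis, assistant, classification, and sentiment sections."""
    return (
        BOT
        + header("user")
        + f"{user_input}{EOT}"
        + header("analysis")
    )


if __name__ == "__main__":
    prompt = build_prompt("What is the capital of France?")
    print(prompt)
    # As with the standard llama3 template, generation should stop on "<|eot_id|>".
```
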