|
--- |
|
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- gguf |
|
--- |
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** Deeokay |
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
|
|
|
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. |
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
|
|
# README |
|
|
|
This is a test model trained on the following:
|
- a private dataset |
|
- a slight customization of the llama3 template (no new tokens | no new configs)
|
- works with `ollama create` using just `FROM path/to/model` as the Modelfile (the llama3 template works with no issues; see the Quick Start below)
|
|
|
# HOW TO USE |
|
|
|
The whole point of the conversion, for me, was to be able to use the model through Ollama (or other local options).
|
For Ollama, the model needs to be a GGUF file. Once you have that, the process is pretty straightforward (as long as the model uses the llama3 template, which this one does).
|
|
|
Quick Start: |
|
- You must already have Ollama installed and running on your machine
|
- Download the `unsloth.Q4_K_M.gguf` model from the Files tab
|
- In the same directory, create a file called "Modelfile"
|
- Inside the "Modelfile", type:
|
|
|
```
FROM ./unsloth.Q4_K_M.gguf
```
|
- Save it and go back to the folder (the folder where the model + Modelfile exist)

- Now, in a terminal, make sure you are in that same folder and type the following command:
|
|
|
```
ollama create mycustomai  # "mycustomai" <- you can name it anything you want
```
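
To confirm the model was created, you can list your local models:

```
ollama list
```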
|
|
|
This GGUF is based on llama-3-8B-Instruct, so Ollama doesn't need anything else to auto-configure the model.
|
|
|
After that, you should be able to use this model to chat!
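
For example, to start an interactive chat session (using the name chosen above):

```
ollama run mycustomai
```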
|
|
|
The model is also available on Ollama:
|
- deeokay/minillama -> Q2_K version |
|
- deeokay/mediumllama -> Q3_K_M version |
|
- deeokay/customllama -> Q4_K_M version |
|
|
|
In the terminal, just run:

```
ollama pull deeokay/customllama
```
|
|
|
and you can use the model. |
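
Then chat with it the same way:

```
ollama run deeokay/customllama
```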
|
|
|
|
|
# NOTE: DISCLAIMER |
|
|
|
Please note this model is not intended for production use; it is the result of fine-tuning as a self-learning exercise.
|
|
|
The llama3 special tokens were kept the same; however, the format was slightly customized using the available tokens.
|
|
|
I have forgone the {{.System}} part, as this would be updated when converting to the llama3 template.
|
|
|
I wanted to test whether the model would understand additional headers that I created, matching what my dataset has:

- Analysis, Classification, Sentiment
|
|
|
I made multiple passes through my ~70K personalized, customized dataset.
|
|
|
If you would like to know how I started creating my dataset, you can check this link:
|
[Crafting GPT2 for Personalized AI-Preparing Data the Long Way (Part1)](https://medium.com/@deeokay/the-soul-in-the-machine-crafting-gpt2-for-personalized-ai-9d38be3f635f) |
|
|
|
As the data was created with custom GPT2 special tokens, I had to convert it to the llama3 template.
|
|
|
However, I got creative again: the training data has the following template:
|
|
|
``` |
|
<|begin_of_text|> |
|
<|start_header_id|>user<|end_header_id|> |
|
{{.Prompt}}<|eot_id|><|start_header_id|>analysis<|end_header_id|> |
|
{{.Analysis}}<|eot_id|><|start_header_id|>assistant<|end_header_id|> |
|
{{.Response}}<|eot_id|><|start_header_id|>classification<|end_header_id|> |
|
{{.Classification}}<|eot_id|><|start_header_id|>sentiment<|end_header_id|> |
|
{{.Sentiment}}<|eot_id|> <|start_header_id|>user<|end_header_id|> |
|
<|end_of_text|> |
|
|
|
``` |
|
|
|
The standard llama3 template still holds, and the model can be created in Ollama through the normal llama3 template.
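
If you prefer to be explicit rather than relying on Ollama's auto-detection, the Modelfile can spell out the template. A minimal sketch using the standard llama3 chat format (simplified without the {{ .System }} block, which I forgo; `TEMPLATE` and `PARAMETER` are standard Ollama Modelfile directives, and this is not the custom training format shown above):

```
FROM ./unsloth.Q4_K_M.gguf

TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|eot_id|>"
```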
|
|
|
I will be updating this periodically, as I have limited Colab resources.
|
|
|
|
|
|