|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- trl |
|
- medical |
|
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit |
|
datasets: |
|
- Shekswess/medical_llama3_instruct_dataset_short |
|
--- |
|
|
|
- **Developed by:** Shekswess |
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
|
|
|
To use the fine-tuned model, prompt it with the Llama 3 instruction template adapted for this medical version of the model:
|
``` |
|
<|start_header_id|>system<|end_header_id|> Answer the question truthfully, you are a medical professional.<|eot_id|><|start_header_id|>user<|end_header_id|> This is the question: {question}?<|eot_id|> |
|
``` |
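
Below is a minimal usage sketch with the `transformers` library, assuming a CUDA GPU and the `bitsandbytes` package for the 4-bit base weights. The repo id placeholder, the `max_new_tokens` value, and the trailing assistant header (added so generation starts with the answer, following the standard Llama 3 chat format) are assumptions, not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's Hugging Face repo id.
MODEL_ID = "path-or-repo-id-of-this-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",  # loading the bnb-4bit base also requires bitsandbytes
)

question = "What are the common symptoms of iron deficiency anemia"

# Build the prompt with the template shown above; the final assistant header
# is an assumed addition so the model continues with its answer.
prompt = (
    "<|start_header_id|>system<|end_header_id|> "
    "Answer the question truthfully, you are a medical professional.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|> "
    f"This is the question: {question}?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```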
|
|
|
Training metrics:
|
|
|
- train_runtime: 2083.0086 |
|
- train_samples_per_second: 0.96 |
|
- train_steps_per_second: 0.12 |
|
- total_flos: 2.928942377774285e+16 |
|
- train_loss: 1.228120258331299 |
|
- steps: 250 |
|
- epoch: 1.0 |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569f13004643352df96e40f/bEsUnq3XmpUSCMdOG-Ur1.png) |