---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- BI55/MedText
- keivalya/MedQuad-MedicalQnADataset
pipeline_tag: text-generation
---

# TinyLlama 1.1B Medical 🤏🦙

### Model Description

A smaller version of https://huggingface.co/therealcyberlord/llama2-qlora-finetuned-medical, which used Llama 2 7B as its base; this model uses TinyLlama 1.1B Chat v1.0 instead.

Fine-tuned on instructions formatted with the `<|user|>` and `<|assistant|>` chat tags.
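
For illustration, a prompt in this format looks roughly like the following (the exact newline and end-of-sequence token placement is an assumption based on the TinyLlama chat template, not something this card documents):

```
<|user|>
What are the early symptoms of appendicitis?</s>
<|assistant|>
```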

## How to Get Started with the Model

```
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Read the adapter config to find the base model it was trained on
config = PeftConfig.from_pretrained("therealcyberlord/TinyLlama-1.1B-Medical")

# Load the base model and tokenizer, then attach the medical LoRA adapter
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, "therealcyberlord/TinyLlama-1.1B-Medical")
```
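
Once loaded, the adapter-wrapped model generates like any other causal LM. A minimal sketch, assuming the `<|user|>`/`<|assistant|>` prompt format shown above (the sampling settings are illustrative choices, not documented values):

```
prompt = "<|user|>\nWhat are the early symptoms of appendicitis?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response; max_new_tokens and temperature are illustrative
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```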

## Training Details

### Training Data

Fine-tuning used two data sources:

**BI55/MedText**: https://huggingface.co/datasets/BI55/MedText

**MedQuad-MedicalQnADataset**: https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset

### Training Procedure

Trained for 1,000 steps on a shuffled **combined** dataset built from the two sources above.
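
A minimal sketch of how such a combined, shuffled dataset can be built with the `datasets` library (the column names and chat formatting below are assumptions for illustration, not the documented preprocessing):

```
from datasets import load_dataset, concatenate_datasets

medtext = load_dataset("BI55/MedText", split="train")
medquad = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")

# Hypothetical helper: render a question/answer pair in the chat format
def to_chat(question, answer):
    return {"text": f"<|user|>\n{question}</s>\n<|assistant|>\n{answer}</s>"}

# Column names here are assumptions; inspect each dataset before mapping
medtext = medtext.map(lambda ex: to_chat(ex["Prompt"], ex["Completion"]))
medquad = medquad.map(lambda ex: to_chat(ex["Question"], ex["Answer"]))

# Concatenate the two sources and shuffle before training
combined = concatenate_datasets([medtext, medquad]).shuffle(seed=42)
```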

### Framework versions

- PEFT 0.7.2.dev0