# Fine-Tuned Falcon-7B for Medical Text Generation

This is a fine-tuned version of the Falcon-7B-Instruct model, adapted for generating medical text about common diseases. The model was fine-tuned using LoRA (Low-Rank Adaptation) on a dataset of medical texts.
## Model Details

- **Base Model:** `tiiuae/falcon-7b-instruct`
- **Fine-Tuning Method:** LoRA (Low-Rank Adaptation)
- **Quantization:** 4-bit (using `bitsandbytes`)
- **Training Dataset:** Medical text data (common diseases)
- **Training Framework:** PyTorch with Hugging Face Transformers
- **Fine-Tuning Duration:** 3 epochs
- **Learning Rate:** 1e-3
- **Batch Size:** 2 (per device)
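The details above can be sketched as a training configuration. This is a minimal sketch, not the author's actual script: only the hyperparameters listed in the card (3 epochs, learning rate 1e-3, per-device batch size 2, 4-bit quantization) are taken from the source; the LoRA rank, alpha, dropout, and target modules are assumptions, using common defaults for Falcon.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization via bitsandbytes, as listed in Model Details.
# The NF4 quant type and fp16 compute dtype are assumptions (common defaults).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA configuration; r, lora_alpha, and lora_dropout are assumptions
# (they are not stated in the card). "query_key_value" is Falcon's
# fused attention projection, the usual LoRA target for this architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

# Hyperparameters from Model Details: 3 epochs, lr 1e-3, batch size 2 per device.
training_args = TrainingArguments(
    output_dir="finetuned_disease",
    num_train_epochs=3,
    learning_rate=1e-3,
    per_device_train_batch_size=2,
)
```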
## Usage

You can use this model to generate medical text or answer questions about common diseases.

### Using the Hugging Face Transformers Library

Install the `transformers` library:

```bash
pip install transformers
```
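A minimal inference sketch follows. It assumes this repository (`jianna4/finetuned_disease`) hosts LoRA adapters for `tiiuae/falcon-7b-instruct`, so it additionally requires the `peft`, `accelerate`, and `bitsandbytes` packages; the example prompt and generation parameters are illustrative, not from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-7b-instruct"
adapter_id = "jianna4/finetuned_disease"

# Load the base model in 4-bit, matching the quantization used for fine-tuning.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Attach the LoRA adapters from this repository on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "What are the common symptoms of influenza?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this model is fine-tuned for text generation and is not a substitute for professional medical advice.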