---
license: mit
language:
- en
base_model: ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
pipeline_tag: text-generation
tags:
- biology
- medical
- fine-tuning
library_name: transformers
---
# Model Card for Fine-Tuned Bio-Medical-Llama-3-8B
This model is a fine-tuned version of **Bio-Medical-Llama-3-8B-V1**, designed to enhance its performance on specialized biomedical and healthcare-related tasks. It provides responses to medical questions, explanations of health conditions, and insights into biology topics.

---
## Model Details
### Model Description
- **Developed by:** ContactDoctor Research Lab
- **Fine-Tuned by:** Gokul Prasath M
- **Model type:** Text Generation (Causal Language Modeling)
- **Language(s):** English
- **License:** MIT
- **Fine-Tuned from Model:** Bio-Medical-Llama-3-8B-V1
This fine-tuned model aims to improve the accuracy and relevance of generated biomedical responses, giving healthcare professionals and researchers faster, better-informed guidance.

---
## Uses
### Direct Use
- Biomedical question answering
- Patient education and healthcare guidance
- Biology and medical research support
### Downstream Use
- Can be further fine-tuned for specific domains within healthcare, such as oncology or pharmacology.
- Integrates into larger medical chatbots or virtual assistants for clinical settings.
### Out-of-Scope Use
The model is not a substitute for professional medical advice, diagnosis, or treatment. It should not be used for emergency or diagnostic purposes.

---
## Fine-Tuning Details
### Fine-Tuning Dataset
The model was fine-tuned on a domain-specific dataset consisting of medical articles, clinical notes, and health information databases.
### Fine-Tuning Procedure
- **Precision:** Mixed-precision training in bf16 for better throughput and memory efficiency.
- **Quantization:** 4-bit quantization of the base model with LoRA adapters (QLoRA-style) for lightweight training and deployment; a configuration sketch follows this list.
- **Hyperparameters:**
  - **Learning Rate:** 2e-5
  - **Batch Size:** 4
  - **Epochs:** 3
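A minimal sketch of such a setup with `transformers`, `peft`, and `bitsandbytes` is shown below. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not values reported here, and the base model id is a placeholder to adjust:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "ContactDoctor/Bio-Medical-Llama-3-8B-V1"  # placeholder; use the actual base model id

# Load the base model in 4-bit (QLoRA-style), computing in bf16 as noted above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; rank/alpha/target modules are illustrative, not reported values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training arguments matching the reported hyperparameters
training_args = TrainingArguments(
    output_dir="bio-medical-llama3-ft",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    bf16=True,
    logging_steps=100,
)
```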
### Training Metrics
During fine-tuning, the model achieved the following results:
- **Training Loss:** 0.5396 at 1000 steps
---
## Evaluation
### Evaluation Data
The model was evaluated on a sample of medical and biological queries to assess its accuracy, relevance, and generalizability across health-related topics.
### Metrics
- **Accuracy:** Assessed qualitatively by how relevant responses were to medical queries.
- **Loss:** Final training loss of 0.5396 (see the perplexity note below).
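If the reported value is a mean cross-entropy loss in nats (an assumption, since the loss type is not stated), it corresponds to a token-level perplexity of roughly 1.72:

```python
import math

# Assumes the reported 0.5396 is mean cross-entropy loss in nats
perplexity = math.exp(0.5396)  # ≈ 1.72
```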
---
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer (both ship in the model directory)
model_path = "path/to/your-finetuned-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Initialize the text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate a response (max_new_tokens bounds the completion, not the whole sequence)
response = generator("What are the symptoms of hypertension?", max_new_tokens=100)
print(response[0]["generated_text"])
```
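Because the base model is a Llama 3 variant, prompting through the tokenizer's chat template usually yields better-structured answers. This sketch assumes the fine-tuned tokenizer still ships a chat template:

```python
messages = [
    {"role": "system", "content": "You are a helpful biomedical assistant."},
    {"role": "user", "content": "What are the symptoms of hypertension?"},
]
# Render the conversation into the model's expected prompt format
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generator(prompt, max_new_tokens=200)
print(response[0]["generated_text"])
```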
## Limitations and Recommendations
The model may not cover the latest medical research or all conditions. It is recommended for general guidance rather than direct clinical application.
## Bias, Risks, and Limitations
Biases may exist due to limitations of the training data. Responses should be verified by qualified professionals before informing critical decisions.