---
license: mit
language:
  - en
base_model: ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
pipeline_tag: text-generation
tags:
  - biology
  - medical
  - fine-tuning
library_name: transformers
---

# Model Card for Fine-Tuned Bio-Medical-Llama-3-8B

This model is a fine-tuned version of Bio-Medical-Llama-3-8B-V1, designed to enhance its performance for specialized biomedical and healthcare-related tasks. It provides responses to medical questions, explanations of health conditions, and insights into biology topics.


## Model Details

### Model Description

- Developed by: ContactDoctor Research Lab
- Fine-tuned by: Gokul Prasath M
- Model type: text generation (causal language modeling)
- Language(s): English
- License: MIT
- Fine-tuned from model: Bio-Medical-Llama-3-8B-V1

This fine-tuned model aims to improve the accuracy and relevance of biomedical responses, giving healthcare professionals and researchers faster, better-informed guidance.


## Uses

### Direct Use

- Biomedical question answering
- Patient education and healthcare guidance
- Biology and medical research support

### Downstream Use

- Can be further fine-tuned for specific healthcare domains, such as oncology or pharmacology (see the sketch after this list).
- Can be integrated into larger medical chatbots or virtual assistants for clinical settings.
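As an illustration of the first point, here is a minimal sketch of attaching fresh LoRA adapters for a narrower domain, assuming the checkpoint loads through the standard `transformers` and `peft` APIs; the repository path, rank, and alpha values are placeholders, not a published configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the fine-tuned checkpoint (placeholder path) and attach new LoRA
# adapters so only a small set of weights trains on the narrower domain.
model = AutoModelForCausalLM.from_pretrained("path/to/your-finetuned-model")

domain_lora = LoraConfig(
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=16,                        # scaling factor (assumed value)
    target_modules=["q_proj", "v_proj"],  # typical Llama attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, domain_lora)
model.print_trainable_parameters()  # only the new adapter weights are trainable
```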

### Out-of-Scope Use

The model is not a substitute for professional medical advice, diagnosis, or treatment. It should not be used for emergency or diagnostic purposes.


## Fine-Tuning Details

### Fine-Tuning Dataset

The model was fine-tuned on a domain-specific dataset consisting of medical articles, clinical notes, and health information databases.

### Fine-Tuning Procedure

- Precision: mixed-precision training in bf16 for performance and memory efficiency.
- Quantization: 4-bit base weights with LoRA adapters (QLoRA) for lightweight training and deployment; a configuration sketch follows this list.
- Hyperparameters:
  - Learning rate: 2e-5
  - Batch size: 4
  - Epochs: 3
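The training script itself is not published, so the following is only a sketch of a QLoRA-style setup consistent with the settings above, using `transformers`, `peft`, and `bitsandbytes`; the output directory, LoRA rank/alpha, and target modules are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base weights to 4 bits; compute runs in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters (rank, alpha, and target modules are assumptions).
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# Hyperparameters from the list above; pass to a Trainer with a tokenized dataset.
args = TrainingArguments(
    output_dir="bio-medical-llama3-ft",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    bf16=True,
    logging_steps=100,
)
```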

### Training Metrics

During fine-tuning, the model achieved the following results:

- Training loss: 0.5396 at 1000 steps

## Evaluation

### Evaluation Data

The model was evaluated on a sample of medical and biological queries to assess its accuracy, relevance, and generalizability across health-related topics.

### Metrics

- Accuracy: assessed qualitatively by the relevance of responses to medical queries.
- Loss: final training loss of 0.5396 (see the perplexity note below).
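No standardized benchmark scores are reported. One quick sanity check is to convert held-out causal-LM loss into perplexity; the sketch below illustrates how, and is not the evaluation procedure used by the authors:

```python
import math
import torch

def perplexity(model, tokenizer, text, device="cuda"):
    """Perplexity of a causal LM on a held-out passage (illustrative only)."""
    enc = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean NLL per token
    return math.exp(out.loss.item())

# For reference, a loss of 0.5396 corresponds to a perplexity of exp(0.5396) ≈ 1.72.
```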

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and tokenizer from the same checkpoint directory
model_path = "path/to/your-finetuned-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Initialize the text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate a response; max_new_tokens bounds only the generated continuation
response = generator("What are the symptoms of hypertension?", max_new_tokens=100)
print(response[0]["generated_text"])
```
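Llama-3-based instruct models usually expect a chat template. If this checkpoint keeps the base model's template (an assumption; it is not documented here), formatting the query through the tokenizer may yield better responses:

```python
# Format the query with the tokenizer's chat template before generation
# (assumes the checkpoint ships a Llama-3-style chat template).
messages = [
    {"role": "system", "content": "You are a helpful biomedical assistant."},
    {"role": "user", "content": "What are the symptoms of hypertension?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generator(prompt, max_new_tokens=100)
print(response[0]["generated_text"])
```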

## Limitations and Recommendations

The model may not cover the latest medical research or all conditions. It is recommended for general guidance rather than direct clinical application.

## Bias, Risks, and Limitations

The model may reflect biases present in its training data. Responses should be verified by qualified professionals before informing critical decisions.