---
license: llama3.2
tags:
- unsloth
- text-generation
datasets:
- marmikpandya/mental-health
- Amod/mental_health_counseling_conversations
- AdithyaSK/CompanionLLama_instruction_30k
base_model:
- unsloth/Llama-3.2-3B-Instruct
library_name: transformers
---
# Model Card for Llama_finetunedModel
## Model Details
### Model Description
This model has been fine-tuned for use in a chatbot aimed at mental well-being support. It is designed to offer empathetic, supportive responses to users' mental health inquiries. Three relevant datasets were merged into a single training set to strengthen the model’s ability to understand and respond appropriately in counseling scenarios.
- Model Name: Llama_finetunedModel
- Developed by: Ayesha Noor
- Model type: Language model for conversational AI
- Language(s) (NLP): English
- Fine-tuned from model: unsloth/Llama-3.2-3B-Instruct
- Fine-tuned model: https://huggingface.co/ayeshaNoor1/Llama_finetunedModel
### Model Sources
- Repository: https://huggingface.co/ayeshaNoor1
## Uses
### Direct Use
Intended for mental health chatbot applications, particularly for providing initial support, resources, and empathetic responses in mental well-being conversations.
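As a minimal illustration of chatbot-style use (a sketch only, assuming a recent transformers release whose text-generation pipeline accepts chat-style message lists), the conversation history can be carried across turns like this:

```python
from transformers import pipeline

# Minimal multi-turn chat sketch; the system prompt and turn handling are
# illustrative assumptions, not part of the published model configuration.
chat = pipeline("text-generation", model="ayeshaNoor1/Llama_finetunedModel")

messages = [
    {"role": "system", "content": "You are a supportive mental well-being assistant."}
]

for user_turn in ["I'm feeling really down lately.", "What could help me feel a bit better?"]:
    messages.append({"role": "user", "content": user_turn})
    result = chat(messages, max_new_tokens=256)
    # The pipeline returns the whole conversation, including the new assistant turn
    messages = result[0]["generated_text"]
    print(messages[-1]["content"])
```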
### Downstream Use
May be used as part of broader mental health support applications, integrated into platforms aimed at user well-being.
### Out-of-Scope Use
Not recommended for critical mental health assessments, as it is not a replacement for professional help. Avoid using it for high-stakes decision-making without appropriate oversight.
### Recommendations
Users should be aware of the model's limitations in handling diverse mental health needs and sensitive conversations. Professional oversight is advised when the model is used in serious or emergency mental health contexts.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("ayeshaNoor1/Llama_finetunedModel")
model = AutoModelForCausalLM.from_pretrained("ayeshaNoor1/Llama_finetunedModel")

# Sample input text
input_text = "I'm feeling really down lately. Can you help me?"

# Tokenize the prompt and generate a response
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
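Since the base model is Llama-3.2-3B-Instruct, the tokenizer most likely carries the Llama 3.2 chat template. The sketch below (an assumption about this fine-tune, continuing from the snippet above) formats the input as a chat turn before generating, which usually gives better instruct-style responses:

```python
# Hedged sketch: assumes the fine-tuned tokenizer retains the Llama 3.2 chat template.
messages = [
    {"role": "system", "content": "You are a supportive mental well-being assistant."},
    {"role": "user", "content": "I'm feeling really down lately. Can you help me?"},
]

# Build the prompt in the chat format the instruct base model expects
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(prompt_ids, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, i.e. the assistant's reply
reply = tokenizer.decode(outputs[0, prompt_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```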
## Training Details
### Training Data
A single training dataset was created by merging:
- First dataset: https://huggingface.co/datasets/marmikpandya/mental-health
- Second dataset: https://huggingface.co/datasets/Amod/mental_health_counseling_conversations
- Third dataset: https://huggingface.co/datasets/AdithyaSK/CompanionLLama_instruction_30k
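The exact merging code is not published. The sketch below shows one plausible way to combine the three datasets with the datasets library; the split names and column names (instruction/Context/output/Response) are illustrative assumptions and should be checked against each dataset's actual schema:

```python
from datasets import load_dataset, concatenate_datasets

# Load the three source datasets from the Hub (split names assumed to be "train")
sources = [
    load_dataset("marmikpandya/mental-health", split="train"),
    load_dataset("Amod/mental_health_counseling_conversations", split="train"),
    load_dataset("AdithyaSK/CompanionLLama_instruction_30k", split="train"),
]

def to_text(example):
    # Hypothetical normalization: map each dataset's schema onto a single "text" field.
    prompt = example.get("instruction") or example.get("Context") or ""
    answer = example.get("output") or example.get("Response") or ""
    return {"text": f"### User:\n{prompt}\n\n### Assistant:\n{answer}"}

# Keep only the unified "text" column so the datasets can be concatenated
normalized = [ds.map(to_text, remove_columns=ds.column_names) for ds in sources]
combined = concatenate_datasets(normalized).shuffle(seed=42)
```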
### Training Procedure
#### Preprocessing
Data was preprocessed to ensure consistency in format, relevance to mental health support, and removal of any sensitive or personal identifiers.
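The concrete preprocessing steps are not published; the sketch below only illustrates the kind of cleaning described above, using hypothetical regular expressions for obvious personal identifiers and continuing from the merged dataset sketch:

```python
import re

# Illustrative patterns for obvious personal identifiers (emails, phone numbers)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(example):
    text = example["text"].strip()
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return {"text": text}

# Apply the cleaning and drop entries that are too short to be useful
combined = combined.map(scrub)
combined = combined.filter(lambda ex: len(ex["text"]) > 20)
```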
## Summary
The model demonstrated proficiency in providing supportive responses in well-being conversations.
## Technical Specifications
### Compute Infrastructure
#### Software
- Libraries: transformers, datasets, torch, pandas, trl, unsloth
- Framework: PyTorch
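As an indication of how these libraries typically fit together for a LoRA fine-tune of this base model (a hedged sketch only; the actual hyperparameters, LoRA settings, and library versions used for this model are not documented):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit and attach LoRA adapters (illustrative settings)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning on the merged "text" column from the Training Data sketch
# (argument layout follows older trl releases; newer ones move these into SFTConfig)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=combined,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```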