WellMinded Therapy Engine (8B)

This is a fine-tuned version of Meta's Llama 3.1 8B model, optimized for psychologist-style conversations. The model is quantized to 4-bit precision (Q4_0) for efficient inference.

Usage

You can load and use this model with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("PixelPanda1/WellMinded_Therapy_Engine")
tokenizer = AutoTokenizer.from_pretrained("PixelPanda1/WellMinded_Therapy_Engine")

# Tokenize a prompt and generate a reply (max_new_tokens avoids the very short default length limit)
input_text = "Hi, I'm feeling really stressed lately."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
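
For multi-turn, psychologist-style exchanges, the tokenizer's chat template can be applied, assuming the repository ships one (typical for Llama 3.1 derivatives). The sketch below is illustrative: the system prompt and generation settings are assumptions, not part of this model card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("PixelPanda1/WellMinded_Therapy_Engine")
tokenizer = AutoTokenizer.from_pretrained("PixelPanda1/WellMinded_Therapy_Engine")

# Illustrative conversation; the system prompt is an assumption, not prescribed by the card
messages = [
    {"role": "system", "content": "You are a supportive, therapist-like assistant."},
    {"role": "user", "content": "Hi, I'm feeling really stressed lately."},
]

# apply_chat_template formats the turns with the tokenizer's built-in template (if one is defined)
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))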
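
Since the card highlights 4-bit quantization for efficient inference, one way to keep memory low with transformers is bitsandbytes 4-bit loading. This is a sketch under the assumption that full-precision weights are hosted in the repository and that bitsandbytes is installed; note that NF4 here is a bitsandbytes setting, not the same scheme as GGUF Q4_0.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: full-precision weights are available on the Hub; quantize to 4-bit at load time
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # bitsandbytes' NF4 scheme, distinct from GGUF Q4_0
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "PixelPanda1/WellMinded_Therapy_Engine",
    quantization_config=bnb_config,
    device_map="auto",                  # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("PixelPanda1/WellMinded_Therapy_Engine")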