Mathmate-7B-DELLA-ORPO-C

Mathmate-7B-DELLA-ORPO-C is a LoRA adapter for Haleshot/Mathmate-7B-DELLA-ORPO, finetuned to improve performance on everyday conversations.

Model Details

Dataset

The model was finetuned on the HuggingFaceTB/everyday-conversations-llama3.1-2k dataset, which focuses on everyday conversations and small talk.
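To take a quick look at the training data, the dataset can be loaded with the datasets library. A minimal sketch; the split name train_sft is an assumption, so check the dataset card for the exact split names:

from datasets import load_dataset

# Load the everyday-conversations dataset (split name "train_sft" is assumed;
# see the dataset card for the actual splits)
dataset = load_dataset("HuggingFaceTB/everyday-conversations-llama3.1-2k", split="train_sft")

# Each row is a short multi-turn chat; print one example to see the schema
print(dataset[0])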

Usage

To use this LoRA adapter, you need to load both the base model and the adapter. Here's an example:

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

base_model_name = "Haleshot/Mathmate-7B-DELLA-ORPO"
adapter_name = "Haleshot/Mathmate-7B-DELLA-ORPO-C"

# Load the base model in half precision and let device_map place it on available hardware
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_name)

def generate_response(prompt, max_new_tokens=512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Use max_new_tokens so a long prompt does not eat into the generation budget
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = "Let's have a casual conversation about the weather today."
response = generate_response(prompt)
print(response)
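
If you want to serve the model without the peft dependency at inference time, the adapter weights can be folded into the base model. A minimal sketch, reusing the model and tokenizer objects from the example above (the output directory name is illustrative):

# Merge the LoRA weights into the base model and drop the PEFT wrappers
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer to a local directory
merged_model.save_pretrained("mathmate-merged")
tokenizer.save_pretrained("mathmate-merged")

The merged checkpoint can then be loaded directly with AutoModelForCausalLM.from_pretrained.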

Acknowledgements

Thanks to the HuggingFaceTB team for providing the everyday conversations dataset used in this finetuning process.
