Qwen 2.5 3B Instruction-tuned Model

This model is an instruction-tuned version of Qwen 2.5 3B for recipe recommendation.

Model Description

  • Fine-tuned from: Qwen/Qwen2.5-3B
  • Fine-tuning task: Instruction tuning
  • Training data: kyujinpy/KOpen-platypus plus recipe data (see the loading sketch after this list)
  • Evaluation results: [Add your evaluation metrics]
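
A minimal sketch of loading the public part of the training data with the datasets library; the recipe data referenced above is not published with this card, so only kyujinpy/KOpen-platypus is shown.

from datasets import load_dataset

# Public instruction data from the training mix; the additional
# recipe data is not available here and is omitted.
platypus = load_dataset("kyujinpy/KOpen-platypus", split="train")
print(platypus[0])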

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_path = "Qwen/Qwen2.5-3B"
adapter_model = "INo0121/qwen2.5_3b_instruction_tuning_241020"

# Load the base model; generation settings such as temperature are passed
# to generate(), not to from_pretrained().
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype="auto",
    device_map="auto",
)

# Attach the fine-tuned adapter and load the matching tokenizer
model = PeftModel.from_pretrained(base_model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(adapter_model)


# Example usage
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
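
Because the exact prompt format used during fine-tuning is not documented here, the sketch below assumes the tokenizer's built-in Qwen 2.5 chat template can be used for instruction-style prompts; the recipe question is purely illustrative.

# Assumes the tokenizer ships a chat template (Qwen 2.5 tokenizers do)
messages = [
    {"role": "user", "content": "Recommend a recipe that uses tofu and kimchi."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))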

Limitations and Biases

[Describe any known limitations or biases of your model]

Training Details

  • Training framework: Hugging Face Transformers with a PEFT adapter (see the configuration sketch after this list)
  • Hyperparameters: [List your key hyperparameters]
  • Training hardware: [Describe the hardware used]
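
The hyperparameters above are not reported, so the following is only a sketch of how a PEFT adapter for Qwen2.5-3B might be configured with the peft library; the adapter type (LoRA), rank, and target modules are assumptions, not the settings actually used for this checkpoint.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical adapter configuration; the actual training settings for
# this checkpoint are not documented in this card.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B", torch_dtype="auto")
lora_config = LoraConfig(
    r=16,                     # assumed rank
    lora_alpha=32,            # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()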