
# 🧪 Gemma-2B-DolphinR1-TestV2 (Experimental Fine-Tune) 🧪

This is an experimental fine-tune of Google's Gemma-2B using the Dolphin-R1 dataset.

The goal is to enhance reasoning and chain-of-thought capabilities while maintaining efficiency with LoRA (r=32) and 4-bit quantization.
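For reference, here is a minimal sketch of what a LoRA (r=32) + 4-bit setup typically looks like with Unsloth's `FastLanguageModel`. The base checkpoint name, target modules, sequence length, and the remaining hyperparameters below are illustrative assumptions on my part, not the exact configuration used for this run.

```python
# Illustrative sketch only -- not the exact recipe used for this checkpoint.
from unsloth import FastLanguageModel

# Load Gemma-2B in 4-bit (base checkpoint name is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2b-bnb-4bit",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,
)

# Attach LoRA adapters with rank 32.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,                   # LoRA rank used for this fine-tune
    lora_alpha=32,          # assumed; not confirmed for this run
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
```

From there, the Dolphin-R1 examples would be formatted with a chat template and trained with a standard SFT loop (e.g. `trl`'s `SFTTrainer`); again, this is a sketch of the general approach rather than the exact script used here.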

🚨 Disclaimer: This model is very much a work in progress and is still being tested for performance, reliability, and generalization. Expect quirks, inconsistencies, and potential overfitting in responses.


This is made possible thanks to @unsloth. I am still very new to fine-tuning large language models, so this is more a showcase of my learning journey than a polished release. Remember, it's very experimental; I don't recommend downloading or testing it.
