---
license: mit
datasets:
- pankajmathur/orca_mini_v1_dataset
- pankajmathur/orca_mini_v8_sharegpt_format
language:
- en
base_model:
- microsoft/phi-4
library_name: transformers
---

# Model Name: orca_mini_phi-4

**orca_mini_phi-4 is trained with various SFT datasets on [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) using Llama's architecture.**

"Obsessed with GenAI's potential? So am I! Let's create together 🚀 https://www.linkedin.com/in/pankajam"
### NOTICE

Provided you give proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model. Dive in and innovate!

### Example Usage

**Use this model for free on Google Colab with a T4 GPU :)**

Open In Colab

### Example Usage on Your Personal Computer

Download the GGUF version and follow the Ollama instructions: coming soon....

The example below shows how to use this model in half precision (bfloat16):

```python
import torch
from transformers import pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# Load the model in bfloat16 and spread it across available devices.
pipe = pipeline(
    "text-generation",
    model=model_slug,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```

The example below shows how to use this model in 4-bit quantization via the bitsandbytes library:

```python
import torch
from transformers import BitsAndBytesConfig, pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# NF4 4-bit quantization with double quantization and float16 compute.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
pipe = pipeline(
    "text-generation",
    model=model_slug,
    model_kwargs={"quantization_config": quantization_config},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```

The example below shows how to use this model in 8-bit quantization via the bitsandbytes library:

```python
from transformers import BitsAndBytesConfig, pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# 8-bit quantization via bitsandbytes.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
)
pipe = pipeline(
    "text-generation",
    model=model_slug,
    model_kwargs={"quantization_config": quantization_config},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
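
If you prefer not to use the `pipeline()` helper, the same chat flow can be reproduced with the lower-level `transformers` API. The sketch below is an illustration only and is not part of the original release; it simply mirrors the model slug and generation settings used in the pipeline examples above.

```python
# Minimal sketch (not from the original card): lower-level transformers API,
# mirroring the pipeline examples above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_phi-4"

tokenizer = AutoTokenizer.from_pretrained(model_slug)
model = AutoModelForCausalLM.from_pretrained(
    model_slug,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]

# Build the prompt with the model's chat template, then generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.01,
    top_k=100,
    top_p=0.95,
)

# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```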