
Nepali GPT

Nepali GPT is a large language model for Nepali, fine-tuned from Mixtral_7B. Fine-tuning was done with Unsloth, which speeds up the training process for better efficiency.

Model Description

  • Model type: A 7B fine-tuned model
  • Primary Language(s): Nepali
  • License: Mistral

Installation

%%capture
# Install Unsloth. Installed from source since Colab ships torch 2.2.1,
# which breaks some packages.
import torch
major_version, minor_version = torch.cuda.get_device_capability()
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Newer GPUs (Ampere, Hopper: RTX 30xx, RTX 40xx, A100, H100, L40) can use flash-attn
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
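
Before loading the model, it can help to confirm that a CUDA device is actually visible, since the 4-bit loading below requires a GPU. A quick optional check (not part of the original card):

import torch

assert torch.cuda.is_available(), "No CUDA device found"
print(torch.cuda.get_device_name(0))                      # e.g. "Tesla T4"
print("Compute capability:", torch.cuda.get_device_capability())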

Model Loading

from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = None # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Heem2/NEPALIGPT-1.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
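
Optionally, you can verify the effect of 4-bit loading with get_memory_footprint(), which transformers models expose; a 7B model loaded in 4 bits should report only a few GB:

print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")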

prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

Inference

# Sample 1
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
inputs = tokenizer(
    [
        prompt.format(
            "नेपालको बारेमा व्याख्या गर्नुहोस्?",  # instruction: "Explain about Nepal."
            "संस्कृति, भाषा, भूगोल, राजनीति, जलवायु",  # input: "culture, language, geography, politics, climate"
            "",  # output - leave blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 1000, use_cache = True)
tokenizer.batch_decode(outputs)
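
batch_decode above returns the full sequence, including the echoed prompt template. A minimal sketch for keeping only the generated answer, assuming the template above is used unchanged:

decoded = tokenizer.batch_decode(outputs, skip_special_tokens = True)[0]
# Everything after the "### Response:" marker is the model's answer.
response = decoded.split("### Response:")[-1].strip()
print(response)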

# Sample 2: same prompt template, streaming tokens as they are generated
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
inputs = tokenizer(
    [
        prompt.format(
            "मानिसहरू किन मर्छन्?",  # instruction: "Why do people die?"
            "रोग, बृद्धावस्था, आत्महत्या, दुर्घटना",  # input: "disease, old age, suicide, accidents"
            "",  # output - leave blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)  # Prints tokens to stdout as they are generated
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1000)

Example output (Nepali). The model lists and briefly explains the four causes given in the input: disease, old age, suicide, and accidents:
मानिसहरू मर्छन् धेरै कारणहरूको लागि, जसमा समावेश छन्:

१. रोग: मानिसहरू मर्छन् किनभने उनीहरूले प्राप्त गर्न सक्ने विभिन्न रोगहरूको कारण हुन सक्छ। यी रोगहरूमा क्यान्सर, स्ट्रोक, मधुमेह, र हृदय रोग समावेश छन्।

२. बृद्धावस्था: बृद्धिको समयमा मानिसहरू मर्छन् किनभने उनीहरूको शरीरले प्राकृतिक रूपमा परिवर्तन हुन्छ र उनीहरूको स्वास्थ्यमा कमी आउन सक्छ। यसले उनीहरूलाई अन्य रोगहरू वा अन्य कारणहरूको कारण मृत्यु हुन सक्छ।

३. आत्महत्या: मानिसहरू आत्महत्या गर्छन् किनभने उनीहरूले आफ्नो जीवनको अन्त्य गर्न चाहन्छन् वा उनीहरूको स्वास्थ्य वा समाजमा अन्य कारणहरूको कारण मृत्यु हुन सक्छ।

४. दुर्घटना: मानिसहरू दुर्घटनाको कारण मृत्यु हुन सक्छन्, जस्तै सडक दुर्घटना, पानीको दुर्घटना, वा अन्य दुर्घटनाहरू। यी दुर्घटनाहरू अक्सर अन्तर्निहित कारणहरूको कारण हुन्छन्, जस्तै अन्तर्निहित कार्यहरू वा अन्य कारणहरूको कारण हुन सक्छ।
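
Both samples use the default greedy decoding. For more varied output, the standard transformers sampling parameters can be passed to generate(); the values below are illustrative, not tuned for this model:

outputs = model.generate(
    **inputs,
    max_new_tokens = 1000,
    do_sample = True,    # sample instead of greedy decoding
    temperature = 0.7,   # lower values give more deterministic output
    top_p = 0.9,         # nucleus sampling cutoff
    use_cache = True,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])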

Citation Information

If you find this model useful, please consider giving 👏 and citing:

@misc{nepaligpt,
  author = {Hem Bahadur Gurung},
  title  = {NEPALIGPT-1.0},
  url    = {https://huggingface.co/Heem2/NEPALIGPT-1.0}
}

Contributions

  • Developed by Hem Bahadur Gurung. Feel free to DM with any questions.