
Model Card for Akil15/finetune_llama_v_0.1:

This model is a supervised PEFT (Parameter-Efficient Fine-Tuning) adaptation of a base conversational Llama model into a code-focused chatbot, trained on the Alpaca dataset with the SFT Trainer.
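A minimal setup sketch is shown below. The base Llama checkpoint, the Alpaca dataset id, and the LoRA hyperparameters are illustrative assumptions, since the card does not list them.

```python
# Illustrative sketch only: the base checkpoint, dataset id, and LoRA
# hyperparameters below are assumptions, not values taken from this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig

base_model_id = "meta-llama/Llama-2-7b-chat-hf"        # assumed base conversational Llama
dataset = load_dataset("tatsu-lab/alpaca", split="train")  # assumed Alpaca variant

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")

# LoRA adapter configuration (rank, alpha, and dropout are illustrative assumptions)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```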

Training:

The model was trained for one epoch with the SFT Trainer, running for 200 steps while monitoring the step-wise training loss for a significant decrease.

Training Args:

{ "num_train_epochs": 1, "fp16": false, "bf16": false, "per_device_train_batch_size": 4, "per_device_eval_batch_size": 4, "gradient_accumulation_steps": 4, "gradient_checkpointing": true, "max_grad_norm": 0.3, "learning_rate": 2e-4, "weight_decay": 0.001, "optim": "paged_adamw_32bit", "lr_scheduler_type": "cosine", "max_steps": -1, "warmup_ratio": 0.03, "group_by_length": true, "save_steps": 0, "logging_steps": 25, "base_lrs": [0.0002, 0.0002], "last_epoch": 199, "verbose": false, "_step_count": 200, "_get_lr_called_within_step": false, "_last_lr": [0.00019143163189119916, 0.00019143163189119916], "lr_lambdas": [{}, {}] }

Usage:

The trained adapter weights are loaded into the base model with the PeftModel.from_pretrained() method, as sketched below.
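A minimal inference sketch follows. The base model id is an assumption (the card does not name the exact base checkpoint); the adapter id is this repository.

```python
# Minimal loading sketch; the base model id is an assumption, while the
# adapter id "Akil15/finetune_llama_v_0.1" is this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base checkpoint
adapter_id = "Akil15/finetune_llama_v_0.1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")

# Inject the trained PEFT (LoRA) weights into the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```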

GitHub Repo:

Refer to this GitHub repo for the notebooks: https://github.com/mr-nobody15/codebot_llama/tree/main

Framework versions:

  • PEFT 0.7.1